
Using Oracle8


Chapter 23

Diagnosing and Correcting Problems


Common Problems in Oracle Systems

People encounter four common types of failures or problems in an Oracle system:

Data Block Corruption

An Oracle data block is written in an internal binary format and is considered to be corrupt if it no longer conforms to this format. Oracle checks a block for possible corruption before using it in any way. This checking is performed at the cache layer and higher layers of the Oracle code. The cache layer information includes the block type, version, DBA, incarnation, sequence, and block checksum.

If an inconsistency is found, the block is considered to be media corrupt. If a block isn't media corrupt but corruption is identified at the higher layer of the code, the block is considered to be software corrupt.

When a block is found to be corrupt, you normally would see an ORA-1578 or ORA-600 [3374] error. You also can use the following methods to detect and identify corrupt or damaged blocks:

ANALYZE TABLE table_name VALIDATE STRUCTURE CASCADE;

This command checks every table block at the cache and higher levels. It also checks the associated index blocks and verifies the one-to-one correspondence between data and index rows.

Using the DB_VERIFY Utility

The DB_VERIFY utility lets you perform integrity verification of database files. It's a command-line utility that performs a physical data structure integrity check on an offline database; you can find it in the rdbms/admin directory. (DB_VERIFY checks are limited to cache-managed blocks.) Its syntax is as follows:

DB_VERIFY parameters
Validate your backups
You also can use DB_VERIFY to validate your backups and make sure that you can use them for database recovery.

In the syntax, DB_VERIFY is an operating system-specific executable: dbv on Sun/Sequent systems and dbverif80 on Windows NT systems. parameters stands for one or more entries from the following table:

Parameter     Description                                        Default
---------     -----------                                        -------
FILE          Name of the file to verify                         None
START         Starting block address                             First block of the file
END           Ending block address                               Last block of the file
BLOCKSIZE     Logical block size                                 2048
LOGFILE       File that logs DB_VERIFY's output                  STANDARD ERROR
FEEDBACK n    A dot is displayed for every n pages verified
              (0 indicates off)
HELP          Type Y to display online help
PARFILE       Name of the parameter file                         None

Specify a BLOCKSIZE
If you don't specify BLOCKSIZE for non-2KB files, you'll get error DBV-00103.

You can obtain the BLOCK_SIZE by executing the following at the Server Manager prompt:

SVRMGR>SHOW PARAMETER BLOCK_SIZE

To use the DB_VERIFY utility in Windows NT, enter the following at a system prompt:

dbverif80 file=datafile1.ora logfile=dbvlog.out feedback=0

DB_VERIFY verifies the file datafile1.ora from the first Oracle block to the last, using a block size of 2048, and puts the results in the file dbvlog.out. With feedback=0, it doesn't send any dots to the screen for verified pages.

Shut down the database before using DB_VERIFY
You must shut down the database before using DB_VERIFY against its data files, to prevent the database from getting corrupted.

To use the DB_VERIFY utility on a Sun machine, enter the following at a system prompt:

dbv file=datafile2.dbf feedback=10

Validating archive logs
You can verify inactive archive log files with DB_VERIFY, even if the database isn't offline.

You'll get the following output:

DBVERIFY: Release x.x.x.x.x - date
Copyright......

DBVERIFY - Verification starting: FILE = datafile2.ora

DBVERIFY - Verification complete

Total Pages Examined...........: 9216
Total Pages Processed (Data)...: 2044
Total Pages Failing (Data).....: 0
Total Pages Processed (Index)..: 921
Total Pages Failing (Index)....: 0
Total Pages Empty..............: 5442
Total Pages Marked Corrupt.....: 0
Total Pages Influx.............: 0

On other UNIX platforms such as HP and DEC-UX, results should be very similar.

Verifying a Data File Created on a Raw Device

A data file on a raw device occupies only part of the device: the first portion holds data, followed by the space left in the data file, and the remainder of the device is reserved for non-data file usage.

When you use DB_VERIFY to verify a data file on a raw device, you should use the START and END parameters. Otherwise, it will mark the non-database blocks as corrupt:

$ dbv file=testfile.dbf

DBVERIFY: Release x.x.x.x.x - date

Copyright........

DBVERIFY - Verification starting: FILE =  testfile.dbf
Page 23548 is marked software corrupt
Page 23600 is marked software corrupt
Page 23601 is marked software corrupt
Page 23602 is marked software corrupt
Page 23603 is marked software corrupt
Page 23604 is marked software corrupt
Page 23605 is marked software corrupt
Page 23606 is marked software corrupt
Page 23607 is marked software corrupt
Page 23608 is marked software corrupt
Page 23609 is marked software corrupt
Page 23610 is marked software corrupt
Page 23611 is marked software corrupt
Page 23612 is marked software corrupt
Page 23613 is marked software corrupt
Page 23614 is marked software corrupt
Page 23615 is marked software corrupt
Page 23616 is marked software corrupt
Page 23617 is marked software corrupt
Page 23618 is marked software corrupt
Page 23723 is marked software corrupt

DBVERIFY - Verification Complete

Total Pages Examined...........: 12075
Total Pages Processed (Data)...: 0
Total Pages Failing (Data).....: 0
Total Pages Processed (Index)..: 462
Total Pages Failing (Index)....: 0
Total Pages Empty..............: 11482
Total Pages Marked Corrupt.....: 20
Total Pages Influx.............: 0

Ensure that blocks aren't falsely reported corrupt when using DB_VERIFY

  1. At the Server Manager prompt, type the following:
       SVRMGR> SELECT bytes
               FROM v$datafile
               WHERE name = 'datafile_name';
     By using the bytes amount from this query, you can determine the number of database blocks in the data file with the following equation:
       Number of blocks = datafile bytes / BLOCKSIZE

  2. Run DB_VERIFY:
       $ dbv file=datafile end=number_of_blocks
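The block-count equation above can be sketched as a small helper. This is an illustration only; the function name and the 24MB sample size are hypothetical, not part of any Oracle tool.

```python
# Compute the END parameter for DB_VERIFY from a data file's size,
# per the equation: number of blocks = datafile bytes / BLOCKSIZE.

def dbv_end_block(datafile_bytes, block_size=2048):
    """Return the block count to pass as END= to dbv."""
    if datafile_bytes % block_size != 0:
        raise ValueError("data file size is not a multiple of the block size")
    return datafile_bytes // block_size

# A 24MB data file with the default 2KB block size:
print(dbv_end_block(24 * 1024 * 1024))  # 12288
```

Passing this value as END keeps DB_VERIFY from scanning the non-database portion of a raw device.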

Using the Checksum Facilities

The checksum facilities provided by Oracle can be used to identify corrupt blocks.

Setting DB_BLOCK_CHECKSUM to TRUE causes checksums to be calculated for all data blocks on their next update. A block will always use checksums as soon as it's generated, even if the parameter is removed. The checksums are calculated by the DBWR and the direct loader and stored in the cache header of every data block when writing it to disk.

Setting LOG_BLOCK_CHECKSUM to TRUE causes checksums to be calculated for all redo log blocks.

Parameters used to check for corrupt blocks can affect performance
Setting DB_BLOCK_CHECKSUM or LOG_BLOCK_CHECKSUM to TRUE can result in CPU time being spent in validating blocks, a process that's normally not performed. The performance hit can be from 5% to 20%, depending on the type of data.

Setting _DB_BLOCK_CACHE_PROTECT to TRUE in the init.ora file protects the cache layer from becoming corrupted. It will prevent certain corruption from getting to disk, although it may crash the foreground of the database instance. It will help catch stray writes in the cache and will fail with a stack violation when a process tries to write past the buffer size in the SGA.

The checksum utility also provides various events that you can use to check various blocks for corruption. These events "soft corrupt" a block if they find any kind of mismatch during the checking process. To use these events, type the following at the Server Manager prompt:

alter session set events '102xx trace name context forever, level 10';

The following events are available:

Event     Purpose
-----     -------
10210     Checks data blocks
10211     Checks index blocks
10212     Checks cluster blocks
10225     Checks information in fet$/uset$ for any corruption; can be used in case segment creation is hanging

Recovering from Damaged Data Blocks

There are several causes for a data block getting damaged:

To recover from damaged data blocks, you need to figure out the extent of the damage by looking at the full error message(s), the alert log, and all the trace files generated. If there are multiple errors or you have operating system-level errors, you should first resolve those errors before even attempting to recover the damaged file. After the operating system and hardware errors are fixed, you can use the following steps to recover.

The error is in file# (F) and block# (B)

  1. Determine the corrupted file by typing the following at the Server Manager prompt:
           SELECT * FROM v$datafile WHERE file# = F;
    
  2. Determine the damaged object by executing the following at the Server Manager prompt:
       SELECT owner, segment_name, segment_type
       FROM dba_extents
       WHERE file_id = F
         AND B BETWEEN block_id AND block_id + blocks - 1;

SEE ALSO
If the segment returned from this query is a rollback segment, see the method described for recovering a damaged rollback segment.

  3. If the segment is an index, determine the table it belongs to by executing the following:
           SELECT table_owner, table_name
           FROM dba_indexes
           WHERE index_name = 'segment_name';
    
    If the segment is a cluster, determine the table it belongs to by executing the following:
           SELECT owner, table_name
           FROM dba_tables
           WHERE cluster_name = 'segment_name';
    
    If the segment is a table, note the table name and the owner.
SEE ALSO
If the table is a data dictionary table, see the method described for recovering from damage to a data dictionary table.

  4. Run the ANALYZE command at least twice to ensure that the problem isn't intermittent.
    For an index or a table, at the Server Manager prompt type the following:
           ANALYZE TABLE owner.tablename VALIDATE STRUCTURE CASCADE;
    
    For a cluster, type the following at the Server Manager prompt:
           ANALYZE CLUSTER owner.clustername VALIDATE STRUCTURE CASCADE;
    

    If you encounter potential hardware errors on a particular disk or controller, first relocate the files to a good disk.

    Recover from a hardware problem in ARCHIVELOG mode

    1. Take the affected data file offline.
    2. Restore its last backup on a good disk.
    3. Rename the data file to the new location.
    4. Recover the data file.
    5. Put the file back online and start using it.

    Recover from a hardware problem in NOARCHIVELOG mode

    1. Take the affected data file offline.
    2. Restore its last backup on a good disk.
    3. Rename the data file to the new location.
    4. Put the file back online and start using it.

    If you have to rename the file, run the ANALYZE command at least twice to verify that the corruption still exists. If ANALYZE still returns an error and you've already fixed the hardware problems, you need to salvage the data.

    You can salvage data from damaged data blocks in several ways:

    If you know the file and block number of the corrupted blocks, you can extract the good data by selecting around the corruption. Before attempting to salvage the data this way, check the following:

    If you have neither, you can extract data around the corrupt block by using the following command at the Server Manager prompt:

    (a) CREATE TABLE salvage AS
    SELECT * FROM corrupt_table WHERE 1 = 2;
    (b) INSERT INTO salvage
    SELECT /*+ ROWID(corrupt_table) */ * FROM corrupt_table
    WHERE rowid <= 'low_rowid_of_corrupt_block';
    (c) INSERT INTO salvage
    SELECT /*+ ROWID(corrupt_table) */ * FROM corrupt_table
    WHERE rowid >= 'high_rowid_of_corrupt_block';
    

    You also can set event 10231 in the initialization file and select around the corruption. This event causes Oracle to skip software and media-corrupted blocks when performing full table scans and allows you to extract the good blocks and recreate the database object:

    Usage: event="10231 trace name context forever, level 10"
    

    You can set event 10233 in the initialization file and select around the corruption. This event is similar to event 10231, except it works with index range scans. Note, however, that data in the corrupted blocks will be lost when event 10233 is set.

    Finally, you can contact Oracle Support Services, which has access to several tools that you can use to extract data from corrupt database objects. See the later section "Working with Oracle Support" for more information.

    Working with the Alert Log and Trace Files

    In an Oracle environment, you'll encounter many kinds of logs and trace files, including the alert log, process trace files, core files, application logs, system logs, and network logs.

    Oracle Alert Log

    The Oracle background and foreground processes log information to a file called the alert log. The primary use of the alert log is to understand the progress of the instance, because it captures the following:

    Summary errors

    SEE ALSO
    More information on the contents and usage of the alert log,

    Oracle Trace Files

    All the Oracle server processes have a file called a trace file to which they record traces, dumps, and error messages. This file isn't written to by any other process.

    Locating trace files
    You should find the trace file in the directory indicated by BACKGROUND_DUMP_DEST, USER_DUMP_DEST, or CORE_DUMP_DEST, depending on the exact error and its cause.

    The trace file is linked to the standard output of the process, allowing the capture of encountered operating system messages. All trace files have a header that contains

    Application Trace Files

    Client-side progress and failure information can be collected in application trace files; for example, spool.lst from SQL*Plus. These files contain useful information, such as the following:

    Extracting the Stack Trace Out of the Core Dump

    When a process aborts, it creates a core file in the current directory. This core file contains a dump of process memory. To dump your process state, use the following command:

    Alter session set events 'immediate trace name processstate level 10';
    

    You can extract a stack trace from the core file, which can indicate where the process failed. To obtain a stack trace when a certain error XXXX occurs, use the following command:

    Alter session set events 'XXXX trace name errorstack forever, level 10';
    

    UNIX versus Windows NT core dump
    These steps are UNIX specific. If you use Windows NT, you can find the core dump in an "access violation" file.

    Get the stack trace from the core file

    1. Log in as ORACLE and change to the $ORACLE_HOME/bin directory.
    2. Type the following, where program is the program that aborted:
           file program

    3. Add read permissions to the program. At the operating system prompt, type
           $ chmod +r program

    4. Log out and then log in as the user who encountered the error.
    5. The next step varies, depending on the version of UNIX you're using. One of the following debuggers should exist on your machine:

       Debugger   Exit Command/Keystroke
       --------   ----------------------
       dbx        quit
       xdb        quit
       sdb        q
       adb        Ctrl+D
       gdb        Ctrl+D

    6. Change to the directory where the core dump is located. In the Bourne or Korn shell, type the following:
           dbx $ORACLE_HOME/bin/program core 2>&1 | tee /tmp/stacktrace

       In the C shell, type the following:
           dbx $ORACLE_HOME/bin/program core | tee /tmp/stacktrace

    7. The stack trace should be produced in the file /tmp/stacktrace. Exit the debug tool.

    Other Useful Files

    Several other useful files can help you diagnose system problems:

    Analyze Your Database with UTLBSTAT/UTLESTAT

    Oracle provides you with two scripts, utlbstat.sql and utlestat.sql, that you can use to tune a database. The utlbstat component gathers the initial performance statistics and places them in temporary tables; the utlestat component gathers the performance statistics at the end of the observation period and places them in another set of temporary tables. When utlestat is finished, it compares the information in the two sets of temporary tables and places the result of the comparison in a third set of temporary tables. A report is generated from the results and placed in an ASCII report.txt file in the current directory.

    UTLBSTAT and UTLESTAT don't affect performance
    Using UTLBSTAT and UTLESTAT doesn't result in performance degradation, except for a few minutes while these scripts run, because the scripts simply take a snapshot of the system at the beginning and another snapshot at the end of the period.

    Use the utlbstat and utlestat scripts

    1. Determine the time period for analysis. You should choose the time period to represent your normal workload. You shouldn't run utlbstat right after the database is started, for example, because it won't represent the normal working condition of the database. Let the database run for some time so that the system caches can be loaded appropriately and represent a stable running environment.
    2. Set initialization parameters. In the init.ora file, set TIMED_STATISTICS to TRUE and then restart the database. Alternatively, if you don't want to restart the database, connect internally and type the following at the Server Manager prompt:
           ALTER SYSTEM SET TIMED_STATISTICS = TRUE;
    
    3. Run utlbstat.sql from the Server Manager prompt at the appropriate time:
           SVRMGR> @%RDBMS80%\admin\utlbstat

    4. Run utlestat.sql from the Server Manager prompt at the end of the time period:
           SVRMGR> @%RDBMS80%\admin\utlestat
    
    Keep the database running while getting reports
    Don't shut down the database between the running of UTLBSTAT and UTLESTAT; otherwise, the results will be useless.

    The resulting report.txt file provides the following information:

    Let's analyze each section in more detail and determine how to tune the system based on the report's information.

    Library Cache

    The SQL AREA, TABLE/PROCEDURE, BODY, and TRIGGER rows in the following output show library cache activity for SQL statements and PL/SQL blocks. The other rows indicate library cache activity for object definitions used by Oracle for dependency management. The report's annotations for this section are as follows:

    - GETS shows how many times the object was looked up in memory (equivalent to the number of parses); GETHITRATIO should be greater than 0.9 for all rows.
    - PINS shows how many times the object was executed.
    - RELOADS counts object definitions aged out for lack of space; it should ideally be 0 and not more than 1 percent of PINS.

    Your aim is to reduce parsing, enable sharing of statements, reduce aging out, and provide enough space to large objects.

    If you have a lot of reloads or the GETHITRATIO is less than 90%, you should increase the SHARED_POOL_SIZE parameter in the initialization file.
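These two rules of thumb can be sketched as a small check. The function name and the sample row values are hypothetical; only the 0.9 and 1 percent thresholds come from the text above.

```python
# Check a library cache row against the two thresholds above:
# GETHITRATIO above 0.9, and RELOADS below 1 percent of PINS.

def library_cache_ok(gets, gethits, pins, reloads):
    gethitratio = gethits / gets
    return gethitratio > 0.9 and reloads < 0.01 * pins

# Hypothetical SQL AREA row from a report.txt:
print(library_cache_ok(gets=1000, gethits=950, pins=2000, reloads=5))  # True
```

A False result suggests increasing SHARED_POOL_SIZE, per the guideline above.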

    System Statistics

    The following file also provides several comments (not shown here) that you can read for further information:

    Improving database writer performance
    From these system statistics, the DBWR buffers scanned and DBWR checkpoints rows should give you a good idea of the amount of load on the DBWR process. On operating systems such as UNIX that allow you to have multiple DBWR processes, you should increase the DB_WRITERS parameter in the initialization file to two per database file. Also, increase DB_BLOCK_WRITE_BATCH to reduce the number of times the DBWR is signaled to perform a write operation.
    Statistic                  Total Per Transa Per Logon Per Sec
    -------------------------- ----- ---------- --------- -------
    CPU used by this session    9904       8.94     70.24   27.66
    CPU used when call started  9904       8.94     70.24   27.66
    CR blocks created              9        .01       .06     .03
    DBWR buffers scanned         907        .82      6.43    2.53
    DBWR checkpoints               6        .01       .04     .02
    DBWR free buffers found      763        .69      5.41    2.13
    DBWR lru scans                89        .08       .63     .25
    DBWR make free requests       76        .07       .54     .21
    DBWR summed scan depth       907        .82      6.43    2.53
    DBWR timeouts                 91        .08       .65     .25
    OS System time used       284700     256.95   2019.15  795.25
    OS User time used         838900     757.13   5949.65  2343.3
    SQL*Net roundtrips to/from  8899       8.03     63.11   24.86
    background checkpoints comp    7        .01       .05     .02
    background checkpoints star    6        .01       .04     .02
    background timeouts          214        .19      1.52      .6
    bytes received via SQL*Net 1167468  1053.67  8279.91 3261.08
    bytes sent via SQL*Net to c 343632   310.14  2437.11  959.87
    calls to get snapshot scn:  5622       5.07     39.87    15.7
    calls to kcmgas             1130       1.02      8.01    3.16
    calls to kcmgcs              101        .09       .72     .28
    calls to kcmgrs             9064       8.18     64.28   25.32
    change write time            529        .48      3.75    1.48
    cleanouts only - consistent    1          0       .01       0
    cluster key scan block gets  334         .3      2.37     .93
    cluster key scans            303        .27      2.15     .85
    commit cleanout failures: b    1          0       .01       0
    commit cleanout number succ 1329        1.2      9.43    3.71
    consistent changes             9        .01       .06     .03
    consistent gets             5784       5.22     41.02   16.16
    cursor authentications      3744       3.38     26.55   10.46
    data blocks consistent read    9        .01       .06     .03
    db block changes           18659      16.84    132.33   52.12
    db block gets              15638      14.11    110.91   43.68
    deferred (CURRENT) block cl 1260       1.14      8.94    3.52
    enqueue conversions          139        .13       .99     .39
    enqueue releases            2472       2.23     17.53    6.91
    enqueue requests            2466       2.23     17.49    6.89
    execute count              10859        9.8     77.01   30.33
    free buffer requested        447         .4      3.17    1.25
    immediate (CR) block cleano    1          0       .01       0
    logons cumulative            141        .13         1     .39
    logons current                 1          0       .01       0
    messages received            684        .62      4.85    1.91
    messages sent                684        .62      4.85    1.91
    no work - consistent read g 3291       2.97     23.34    9.19
    opened cursors cumulative   3655        3.3     25.92   10.21
    opened cursors current         3          0       .02     .01
    parse count                 5728       5.17     40.62      16
    parse time cpu              1602       1.45     11.36    4.47
    parse time elapsed          1799       1.62     12.76    5.03
    physical reads                29        .03       .21     .08
    physical writes             1021        .92      7.24    2.85
    recursive calls            25096      22.65    177.99    70.1
    recursive cpu usage         5052       4.56     35.83   14.11
    redo blocks written         1420       1.28     10.07    3.97
    redo buffer allocation retr   11        .01       .08     .03
    redo entries                9339       8.43     66.23   26.09
    redo log space requests       13        .01       .09     .04
    redo log space wait time     856        .77      6.07    2.39
    redo size                1796924    1621.77  12744.14 5019.34
    redo small copies           1359       1.23      9.64     3.8
    redo synch time             5011       4.52     35.54      14
    redo synch writes            565        .51      4.01    1.58
    redo wastage             1076955     971.98   7637.98 3008.25
    redo write time             5529       4.99     39.21   15.44
    redo writer latching time      7        .01       .05     .02
    redo writes                  994         .9      7.05    2.78
    rollback changes - undo rec  278        .25      1.97     .78
    rollbacks only - consistent    9        .01       .06     .03
    session logical reads      21135      19.07    149.89   59.04
    session pga memory      20645272   18632.92 146420.37 57668.36
    session pga memory max  20645272   18632.92 146420.37 57668.36
    session uga memory        232400     209.75   1648.23   649.16
    session uga memory max   5826432    5258.51  41322.21 16274.95
    sorts (disk)
    sorts (memory)               282        .25         2     .79
    sorts (rows)                3414       3.08     24.21    9.54
    table fetch by rowid         554         .5      3.93    1.55
    table fetch continued row           (indicates row chaining)
    table scan blocks gotten     571        .52      4.05    1.59
    table scan rows gotten      3207       2.89     22.74    8.96
    table scans (long tables)      1          0       .01       0
    table scans (short tables)   833        .75      5.91    2.33
    total number commit cleanou 1330        1.2      9.43    3.72
    user calls                  7271       6.56     51.57   20.31
    user commits                1108          1      7.86    3.09
    user rollbacks              (# of rollback calls issued by users; a high
          ratio of user rollbacks to user commits indicates a problem)
    write requests               109         .1       .77      .3
    

    Use the following formulas to calculate the data cache hit ratio:

    LOGICAL READS = CONSISTENT GETS + DB BLOCK GETS
    
    HIT RATIO = (LOGICAL READS - PHYSICAL READS) / LOGICAL READS
    

    By using these calculations for the preceding output, we get

    LOGICAL READS =  5784 + 15638 = 21422
    
    CACHE HIT RATIO = (21422 - 29) / 21422 = 99.86%
    

    If the cache hit ratio as calculated here is less than 80%, you should increase the DB_BLOCK_BUFFERS parameter in the initialization file.
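The same calculation can be written out as a small script. This is an illustration only; the helper name is hypothetical, and the values come from the sample report above.

```python
# Data cache hit ratio, per the formulas above:
# LOGICAL READS = CONSISTENT GETS + DB BLOCK GETS
# HIT RATIO = (LOGICAL READS - PHYSICAL READS) / LOGICAL READS

def cache_hit_ratio(consistent_gets, db_block_gets, physical_reads):
    logical_reads = consistent_gets + db_block_gets
    return (logical_reads - physical_reads) / logical_reads

ratio = cache_hit_ratio(consistent_gets=5784, db_block_gets=15638,
                        physical_reads=29)
print(round(ratio * 100, 2))  # 99.86
print(ratio < 0.80)           # False: no need to raise DB_BLOCK_BUFFERS
```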

    Use the following equation to check the effectiveness of your application to see if indexes are used properly. The result should be close to zero.

    Non-index lookups ratio = table scans (long tables) /
        (table scans (long tables) + table scans (short tables))
                    = 1 / (1 + 833)
                    = 0.0012, close to zero
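The same check as a one-line helper (a sketch with a hypothetical function name; the counts are from the sample report):

```python
# Non-index lookups ratio: long table scans as a fraction of all table scans.
# A result close to zero means indexes are being used properly.

def non_index_lookup_ratio(long_scans, short_scans):
    return long_scans / (long_scans + short_scans)

print(round(non_index_lookup_ratio(1, 833), 4))  # 0.0012
```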
    

    Wait Events

    Event Name                    Count Total Time Avg Time
    ----------------------------- ----- ---------- --------
    SQL*Net message from client    9385      41677     4.44
    log file sync                   563       5196     9.23
    write complete waits             68       1624    23.88
    log file switch completion       13        856    65.85
    log buffer space                  7        301       43
    SQL*Net message to client      9385         54      .01
    buffer busy waits                10         25      2.5
    db file sequential read          36         18       .5
    SQL*Net more data from client   139         12      .09
    latch free                        4          2       .5
    control file sequential read     22          0        0
    db file scattered read            1          0        0
    

    Your goal is to eliminate all waits for resources:

    Buffer Busy Waits ratio = Buffer busy waits / Logical reads
                            = 10 / 21422
                            = close to zero
    
    Minimize the waits for buffers
    A ratio of buffer busy waits greater than 4 percent indicates that you need to tune the DB_BLOCK_BUFFERS parameter.
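As a sketch of this check (the helper is hypothetical; the counts come from the sample wait-event and statistics output above):

```python
# Buffer busy waits ratio against the 4 percent threshold noted above.

def buffer_busy_ratio(buffer_busy_waits, logical_reads):
    return buffer_busy_waits / logical_reads

ratio = buffer_busy_ratio(10, 21422)
print(ratio > 0.04)  # False: no DB_BLOCK_BUFFERS tuning indicated
```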

    For example,

    Latch Statistics

    LATCH_NAME           GETS MISSES HIT_RATIO  SLEEPS SLEEPS/MISS
    ------------------ ------ ------ --------- ------ -----------
    cache buffers chai  70230      2         1      0           0
    cache buffers lru    1540      0         1      0           0
    dml lock allocatio   2545      0         1      0           0
    enqueue hash chain   5051      0         1      0           0
    enqueues             6574      0         1      0           0
    ktm global data         1      0         1      0           0
    latch wait list         6      0         1      0           0
    library cache      136074     19         1      3        .158
    library cache load      2      0         1      0           0
    list of block allo   2234      0         1      0           0
    messages             4122      2         1      0           0
    modify parameter v    140      0         1      0           0
    multiblock read ob      6      0         1      0           0
    process allocation    139      0         1      0           0
    redo allocation     12711      4         1      0           0
    row cache objects   15913      0         1      0           0
    sequence cache        431      0         1      0           0
    session allocation   3361      0         1      0           0
    session idle bit    15861      0         1      0           0
    session switching       6      0         1      0           0
    shared pool         10198      0         1      0           0
    sort extent pool        1      0         1      0           0
    system commit numb  16081      0         1      0           0
    transaction alloca   3356      0         1      0           0
    undo global data     3440      0         1      0           0
    user lock             556      0         1      0           0
    26 rows selected.
    
    A HIT_RATIO of less than 0.98 indicates a potential problem.
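The latch check can be sketched as follows. The helper name and the dict input shape are hypothetical; the hit ratio is computed as (GETS - MISSES) / GETS, matching the HIT_RATIO column above.

```python
# Flag latches whose hit ratio falls below the 0.98 threshold noted above.

def problem_latches(latches, threshold=0.98):
    """latches maps latch name to a (gets, misses) tuple."""
    return [name for name, (gets, misses) in latches.items()
            if (gets - misses) / gets < threshold]

# Two rows from the sample latch statistics:
sample = {"library cache": (136074, 19), "redo allocation": (12711, 4)}
print(problem_latches(sample))  # []
```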

    SEE ALSO
    More information on reducing latch contention,

    Rollback Segment Contention

    
    
    
    The rollback segment section of the report is annotated as follows:

    - The gets and waits columns are used in determining the waits-to-gets ratio; if this ratio is greater than 5 percent, consider adding rollback segments.
    - The number of shrinks shouldn't be high; set OPTIMAL accordingly.


    Shared Pool Size

    The following file also includes columns for COUNT and CUR_USAG (not shown here):

    NAME           GET_REQS GET_MISS SCAN_REQ SCAN_MIS MOD_REQS
    -------------- -------- -------- -------- -------- --------
    dc_tablespaces      283        0        0        0        0
    dc_free_extent       15        3        3        0        9
    dc_segments           3        0        0        0        3
    dc_rollback_se       12        0        0        0        0
    dc_used_extents       3        3        0        0        3
    dc_tablespace_q       3        0        0        0        3
    dc_users           1272        0        0        0        0
    dc_user_grants      849        0        0        0        0
    dc_objects          166        0        0        0        0
    dc_tables          2724        0        0        0        0
    dc_columns          596        0       56        0        0
    dc_table_grants    1260        0        0        0        0
    dc_indexes            2        0       16        0        0
    dc_constraint_d       1        0       13        0        0
    dc_constraint_d       0        0        1        0        0
    dc_usernames        282        0        0        0        0
    dc_sequences          7        0        0        0        7
    dc_tablespaces        3        0        0        0        3
    18 rows selected.
    

    If the GET_MISS to GET_REQS ratio is greater than 15%, consider increasing SHARED_POOL_SIZE.
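A sketch of this check over the report rows (the helper is hypothetical; the sample pairs are the GET_REQS/GET_MISS values of the first few rows above):

```python
# Overall dictionary cache miss percentage from (GET_REQS, GET_MISS) pairs,
# compared against the 15 percent threshold above.

def dict_cache_miss_pct(rows):
    total_reqs = sum(reqs for reqs, _ in rows)
    total_miss = sum(miss for _, miss in rows)
    return 100.0 * total_miss / total_reqs

rows = [(283, 0), (15, 3), (3, 0), (3, 3), (1272, 0)]
print(dict_cache_miss_pct(rows) > 15)  # False: SHARED_POOL_SIZE is adequate
```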

    I/O Statistics

    The following file also includes READ_TIME and MEGABYTES columns (not shown). The MEGABYTES column shows the size of the tablespaces.

    
    
    
    Try to balance the I/O load evenly across all the tablespaces.

    The following file also includes WRITE_TIME, READ_TIME, and MEGABYTES columns (not shown).

    TABLE_SPACE   FILE_NAME READS BLKS_READ WRITES BLKS_WRT
    ----------- ----------- ----- --------- ------ --------
    RBS           rbs01.dbf     1         1    663      663
    SYSTEM       system01.d    38        46    372      372
    TEMP         temp01.dbf     0         0      0        0
    TOOLS       tools01.dbf     0         0      0        0
    USERS       users01.dbf     0         0      0        0
    5 rows selected.
    

    The following actions can be taken to balance out the load:

    Working with Oracle Support

    You can do several things to obtain quality service from Oracle Support. These things are very simple but important because they tell Oracle Support the exact nature and urgency of your problem, so they can provide you with the best possible resolution:


    © Copyright, Macmillan Computer Publishing. All rights reserved.