
Using Oracle8


Chapter 20

Tuning Your Memory Structures and File Access


Why Oracle Must Be Tuned

One of Oracle8's strongest assets is its rich tuning facility. Oracle8 offers a staggering array of parameters that you can tune to achieve maximum performance for any database system. Because every database has different data structures and different transaction loads, you can maximize the return on your hardware investment only by tuning Oracle. This will minimize the total cost of ownership for your database system and provide users the maximum performance Oracle can deliver.

Although there may be no need initially to dive into the full array of tuning parameters for acceptable performance, the need will become more acute as a database becomes larger and services more transactions. Oracle, out of the box, is tuned to run acceptably well on systems with low memory and CPU resources; most databases of any significant size should be tuned before being considered production quality.

Database Buffer Cache

Disk I/O operations are hideously expensive when compared to I/O operations performed in memory. In fact, reading information from memory is often thousands of times faster than reading from disk. Oracle capitalizes on this by caching the most recently used database blocks in a memory segment that resides in the SGA. This memory area is known as the database buffer cache, or sometimes as just the buffer cache.

Beware of swapping
When allocating a larger database buffer cache, pay attention to whether the operating system is swapping real memory pages to disk to alleviate a memory shortage. Swapping is terribly expensive and will, by far, outweigh any benefit you gain from an oversized buffer cache.

When tuning the database buffer cache, bigger is usually better. The more database blocks that Oracle can keep in memory, the faster it will perform. When allocating more memory to the database buffer cache, be aware that as the buffer hit ratio approaches 100 percent, it will take increasingly larger amounts of additional memory for the buffer cache's hit ratio to increase noticeably.

Most active database systems have a discernible working set of very frequently used database blocks. When tuning a database, the goal is to keep at least the working set of database buffers in memory to maintain acceptable performance. Not surprisingly, as a database grows larger, it will need a larger buffer cache to maintain the same hit ratio.
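Before resizing the cache, it helps to confirm the current settings. This is a minimal check, assuming a DBA login; both parameter names are standard in Oracle8:

```sql
-- Current buffer cache size (in blocks) and the database block size;
-- multiply the two for the cache size in bytes
SELECT name, value
FROM v$parameter
WHERE name IN ('db_block_buffers', 'db_block_size');
```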

Examining Performance

We're interested in knowing the hit ratio of the buffer cache for a database. To determine this, we must know the number of consistent gets, db block gets (logical reads), and physical reads.

Consistent gets
Consistent gets are logical block reads associated with Oracle's read consistency system. When rollback segment blocks must be read to provide read consistency, they're not included in the normal logical block reads statistic.

The hit ratio is computed as follows:

(consistent gets + logical reads - physical reads)
   / (consistent gets + logical reads)

The V$SYSSTAT table contains all the needed information since the database was last started. This table is easily accessed through SQL*Plus; the following example uses SQL*Plus to find the database buffer cache hit ratio:



SQL> SELECT ((SUM(DECODE(NAME,'consistent gets',VALUE,0)) +
  2  SUM(DECODE(NAME,'db block gets',VALUE,0)) -
  3  SUM(DECODE(NAME,'physical reads',VALUE,0))) /
  4  (SUM(DECODE(NAME,'consistent gets',VALUE,0)) +
  5  SUM(DECODE(NAME,'db block gets',VALUE,0))) * 100)
  6  "Hit Ratio"
  7  FROM V$SYSSTAT;
Hit Ratio
----------
77.0755025
1 row selected.
SQL>

The DECODE calls read the value associated with each named statistic, and the SUM aggregates across the rows of the V$SYSSTAT table.

Ideally, a database running OLTP-type transactions should see a hit ratio of 90 percent or more. During long-running batch jobs, the hit ratio may fall into the 70-80 percent range. Anything lower than 70 percent may indicate that a larger database buffer cache is needed.

Many DBAs who are familiar with other database systems on the market will find queries against V$ views to gather performance statistics crude and time-consuming. Fortunately, Oracle has companion packs for its Enterprise Manager system that will show, graphically, a database's performance health. These tools can also go a long way toward helping you tune your database(s) for optimum performance. Third-party vendors, including BMC and Platinum, offer similar tools that also work quite well for non-Oracle databases; these tools are particularly well suited for DBAs supporting a heterogeneous database environment.

Testing New Buffer Cache Settings

Allocating a larger database buffer cache often also means adding memory to the database server. Before writing a purchase order, it's often useful, if not necessary, to know the benefit more memory will bring to the database.

Impact of DB_BLOCK_LRU_STATISTICS
Be aware that DB_BLOCK_LRU_EXTENDED_STATISTICS has a noticeable impact on Oracle's internal overhead. Don't run DB_BLOCK_LRU_EXTENDED_STATISTICS any longer than you must to gather the needed statistics.

Oracle8 has a utility that lets you see the effect on the buffer cache hit ratio after a hypothetical change in the size of the buffer cache. By enabling the DB_BLOCK_LRU_EXTENDED_STATISTICS parameter in the INIT.ORA file, Oracle8 will record the hypothetical statistics in the X$KCBRBH table, which contains the following columns:

INDX     Identifies each new phantom block for which you're collecting statistics
COUNT    Indicates how many additional cache hits would occur if the block number INDX were added to the database buffer cache

Enable DB_BLOCK_LRU_EXTENDED_STATISTICS
  1. Edit the appropriate INIT.ORA file and add the following line:
DB_BLOCK_LRU_EXTENDED_STATISTICS = additional buffers
  2. Shut down the database.
  3. Restart the database.

Disable DB_BLOCK_LRU_EXTENDED_STATISTICS

  1. Edit the appropriate INIT.ORA file and remove the following line:
DB_BLOCK_LRU_EXTENDED_STATISTICS = number
  2. Shut down the database.
  3. Restart the database.

Calculating a New Hit Ratio from Additional Buffer Cache

The database is now running an OLTP application that achieves only an 81 percent database buffer cache hit ratio. Let's test the effect on the hit ratio by adding 2,048 blocks to the buffer cache.

First, you set the number of hypothetical new buffers. Use an operating system editor to edit the INIT.ORA file to include the following line:

DB_BLOCK_LRU_EXTENDED_STATISTICS = 2048

Next, shut down and restart the database. Server Manager is run from the operating system command line. The shutdown and startup session is as follows:

SVRMGR> connect internal;

Connected.

SVRMGR> shutdown immediate;

Database closed.
Database dismounted.
ORACLE instance shut down.

SVRMGR> startup

ORACLE instance started.
Database mounted.
Database opened.

SVRMGR>

At this time, either normal OLTP database operations should resume or test scripts should be run to simulate normal database load. In either case, make sure that enough realistic activity occurs for the database buffer cache hit ratio to stabilize.

Now calculate the number of additional hypothetical cache hits for all 2,048 phantom database buffers, as shown in this Server Manager session:

SVRMGR> SELECT SUM(COUNT) FROM X$KCBRBH;
SUM(COUNT)
----------
     43712
1 row selected.
SVRMGR>
Using X$ tables
X$ tables are owned by the SYS user and usually don't have public synonyms established. These examples use Server Manager because connect internal always connects as the SYS user. You can run these same queries from SQL*Plus, as long as you log in as the SYS user.

We now know that if the buffer cache were enlarged by 2,048 blocks, 43,712 reads that went to disk would instead have been cache hits.
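Rather than summing all the phantom buffers at once, you can also break them into ranges to see where the additional hits taper off. This sketch groups the 2,048 phantom buffers into intervals of 256; the interval width is an arbitrary choice:

```sql
-- Additional cache hits per range of 256 phantom buffers; if the later
-- intervals contribute little, a smaller increase may be sufficient
SELECT TRUNC(indx/256) "Interval", SUM(count) "Extra Hits"
FROM x$kcbrbh
GROUP BY TRUNC(indx/256)
ORDER BY 1;
```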

Finally, you need to calculate the hypothetical cache hit ratio. Let's use the same formula used in the last section to compute the cache hit ratio, but this time subtract 43,712 phantom cache hits from the actual physical reads as shown:

SVRMGR> SELECT ((SUM(DECODE(NAME, 'consistent gets',
     2> VALUE, 0))+
     3> SUM(DECODE(NAME,'db block gets',VALUE, 0)) -
     4> (SUM(DECODE(NAME,'physical reads',VALUE,0))) -
     5> 43712) / (SUM(DECODE(NAME, 'consistent gets',
     6> VALUE, 0))+SUM(DECODE(NAME,'db block gets',VALUE,
     7> 0))) * 100) "Hit Ratio" FROM V$SYSSTAT;
Hit Ratio
----------
92.2935720
1 row selected.
SVRMGR>

You can see here that adding 2,048 blocks to the database buffer cache would cause the cache hit ratio to increase from 81 to 92 percent.

What's the next step?
Knowing that there would be a very noticeable rise in the database buffer cache hit ratio, you would want to add 2,048 to the DB_BLOCK_BUFFERS parameter in the INIT.ORA file. Don't forget to turn off DB_BLOCK_LRU_EXTENDED_STATISTICS!

Change the size of the database buffer cache

  1. Use any standard operating system text file editor to change the DB_BLOCK_BUFFERS parameter in the INIT.ORA file to reflect the size of the buffer cache in database blocks.
  2. Stop the database.
  3. Restart the database.

Example of Enlarging the Buffer Cache by 2,048 Blocks

The current INIT.ORA file has the following parameter line:

DB_BLOCK_BUFFERS = 8192

In the example from the preceding section, it was determined that adding 2,048 blocks would help improve database response time. Now edit the INIT.ORA file to contain these lines:

Comment INIT.ORA file changes
When you're diagnosing sudden changes in the database's performance or reliability, it's often invaluable to know what crucial parameters were changed recently. Any time parameters are changed in INIT.ORA, you should add at least a one-line description of the change, who made it, and when it was done.

#
# This setting was changed on 2 FEB 98 by TMG x3707. Buffer
# Cache hit ratio rose from 81% to 92% by increasing it
# 2048 blocks
# DB_BLOCK_BUFFERS = 8192
#
DB_BLOCK_BUFFERS = 10240

Whenever the DB_BLOCK_BUFFERS parameter is changed, the database will have to be shut down and restarted for the changes to take effect.

Tuning the Shared Pool

Oracle8's shared pool contains the library cache, which holds parsed SQL statements and their execution plans, and the data dictionary cache.

Recall that every SQL statement executed by the database engine must be parsed, evaluated, and executed. This process is relatively time-consuming, because Oracle8 has very sophisticated methods for determining the optimum execution plan. By caching execution plans and the data dictionary, Oracle8 can quickly execute similar SQL statements by retrieving preprocessed execution plans and data dictionary information from the shared pool.

The goal of tuning the shared pool is to achieve a hit ratio of 95 percent or better on the library cache and a 95-100 percent hit ratio on the data dictionary cache during all operations.

Examining Library Cache Performance

The V$LIBRARYCACHE view contains information on the library cache, including cache hit statistics. The cache hit ratio is determined by issuing the following SQL statement:

SELECT SUM(PINS)/(SUM(PINS)+SUM(RELOADS))*100
   "Cache Hit Ratio"
FROM V$LIBRARYCACHE;

SQL*Plus will reply with a single row and column that will be the library cache hit ratio.

Example of Calculating the Library Cache Hit Ratio

The following command is issued after you log on to SQL*Plus as a DBA user (the SYSTEM user in this case):

SQL> SELECT SUM(PINS)/(SUM(PINS)+SUM(RELOADS))*100
  2  "Cache Hit Ratio"
  3  FROM V$LIBRARYCACHE;

Cache Hit Ratio
---------------
     99.8039216

SQL>

Here we can see the library cache hit ratio is almost 100 percent, which is ideal. If the library cache hit ratio were lower than 95 percent, you would probably want to add memory to the shared pool.

Examining Data Dictionary Cache Performance

Data dictionary cache statistics are kept in the V$ROWCACHE view, broken down by category. Most DBAs, however, will benefit more from an aggregate hit/miss percentage, obtained with the following SQL statement:

SELECT (1-(SUM(GETMISSES)/SUM(COUNT)))*100
   "Dictionary Cache Hit Ratio"
FROM V$ROWCACHE;

This statement will return the data dictionary cache hit ratio since the database was last started.

Example of Calculating the Data Dictionary Cache Hit Ratio

The following command is issued after you log on to SQL*Plus as a DBA user (the SYSTEM user in this case):

SQL> SELECT (1-(SUM(GETMISSES)/SUM(COUNT)))*100
  2  "Dictionary Cache Hit Ratio"
  3  FROM V$ROWCACHE;

Dictionary Cache Hit Ratio
--------------------------
                63.1687243

SQL>
Considerations with low hit ratios
When a low hit ratio is encountered, always consider how long the database has been up and running. If a database was just started or started a while ago with little or no subsequent application activity, it's very likely that Oracle simply hasn't had the opportunity to cache the information it will need. Oracle caches information only when it's needed; all information contained in the cache was preceded by at least one cache miss.

Notice that the dictionary cache hit ratio is only 63 percent, far below the ideal ratio of 99 percent and the acceptable threshold of 95 percent. In this case, the shared pool size almost certainly must be increased.

Setting New Shared Pool Parameter Values

Although we examine the library cache and the data dictionary cache separately, adding memory to either is accomplished by enlarging the shared pool (in bytes) with the SHARED_POOL_SIZE parameter in the applicable INIT.ORA file. The following line would allocate 3,500,000 bytes (~3.5MB) to an instance's shared pool; this is the default setting found in an INIT.ORA file:

Library versus data dictionary cache allocation
Oracle8 automatically proportions the shared pool into data dictionary and library cache segments. You can't adjust their respective sizes manually. If either cache hit ratio falls below par, you must increase the size of the shared pool.

shared_pool_size = 3500000

Choosing new sizes for the shared pool is something more akin to art than science. Based on the library and data dictionary cache hit ratios, you may need to increase the shared pool perhaps just 25 percent, or to many times the default size.

When choosing new shared pool sizes, be aware that it will usually take a much larger increase in memory to increase a cache hit ratio from 90 to 95 percent than from 70 to 80 percent.
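One input to that judgment is how much of the current shared pool is sitting unused. A quick check against the V$SGASTAT view (its column layout may vary slightly by release):

```sql
-- A consistently large free area suggests the shared pool is already big
-- enough; a tiny one supports the case for enlarging it
SELECT name, bytes
FROM v$sgastat
WHERE name = 'free memory';
```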

Change the size of the shared pool

  1. Modify the SHARED_POOL_SIZE parameter in the appropriate INIT.ORA file. The following sets the shared pool to 16MB:
#
# Original setting increased to 16MB. Library cache hit
# ratio was 61%.
# TMG: 21 Jan 98
# shared_pool_size = 3500000
SHARED_POOL_SIZE = 16777216

  2. Stop the database.
  3. Restart the database.

Redo Log Buffer

The redo log subsystem is probably one of the most heavily accessed components of Oracle; every change to the database system must go through it. Spooling all changes directly to disk would impair transaction throughput, so Oracle8 buffers redo log information in the SGA to better utilize precious I/O resources.

Considerations for SMP systems
In addition to the redo log buffer size, SMP database DBAs must also consider the number of latches needed to prevent latch contention when accessing the redo log buffer area. Consult Oracle's documentation for information on the V$LATCH view.

It's important to size the redo log buffer appropriately so that user database processes don't have to wait for space in the buffer before continuing their operations. During periods of high update activity, the log writer process may not be able to keep up with several user processes all submitting redo log information that must be written. The redo log buffer holds that redo log information so that user processes can continue; as the log writer process catches up, it purges the written information from the redo log buffer.

Examining Performance

When tuning the redo log buffer, focus on two statistics in the V$SYSSTAT view: redo buffer allocation retries and redo log space requests.

Use the following SQL command to gather redo log statistics:

SELECT NAME,VALUE FROM V$SYSSTAT WHERE NAME IN
('redo buffer allocation retries','redo log space requests');

The result of this query will be two rows listing the value of each statistic.

Example of Redo Log Performance Evaluation

The preceding command is issued and the result is shown in the following sample SQL*Plus session for a DBA user (SYSTEM in this case):

SQL> SELECT NAME,VALUE FROM V$SYSSTAT WHERE NAME IN
  2  ('redo buffer allocation retries',
  3  'redo log space requests');

NAME                                                  VALUE
------------------------------------------------ ----------
redo buffer allocation retries                          835
redo log space requests                                 508

SQL>

Because both statistics are quite high in this case, it is advisable to add memory to the redo log buffers.
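A useful yardstick, beyond the raw counts, is the ratio of allocation retries to total redo entries. A common rule of thumb (an assumption here, not from this chapter) is that retries should stay well under 1 percent:

```sql
-- Percentage of redo entries that had to retry a buffer allocation
SELECT r.value / e.value * 100 "Retry %"
FROM v$sysstat r, v$sysstat e
WHERE r.name = 'redo buffer allocation retries'
  AND e.name = 'redo entries';
```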

Set new redo log buffer parameters

  1. Change the LOG_BUFFER parameter in the appropriate INIT.ORA file to the new size of the redo log buffer, in bytes. Any text file editor may be used for this purpose.
  2. Stop the database.
  3. Restart the database.

Guidelines for LOG_BUFFER values
Oracle's default will depend on your platform and database version. Typically, however, the default is too small. 64KB to 128KB will usually work out well for smaller databases. 512KB to 1MB (or even more) may be needed on larger database systems.

Example of Setting LOG_BUFFER in INIT.ORA

The following INIT.ORA file section was changed in an operating system-supplied text editor to increase the size of the LOG_BUFFER parameter from 16KB to 256KB:

#
# 16k Proved to be too small. Increased the size to 256k
# TMG: 03 Mar 98
# LOG_BUFFER = 16384
#
LOG_BUFFER = 262144

Process Global Area

Whenever a connection is made to an Oracle database, a unique section of memory is allocated to store session-level information. The area of memory allocated for each session is known as the Process Global Area (PGA).

Tuning the PGA is largely done to allow Oracle to scale well given the hardware constraints and the needs and use patterns of the predominant applications. As more and more users begin to utilize a particular database, it becomes more important to ensure that Oracle is tuned to scale as well as possible on a given hardware configuration.

Comparing Dedicated Versus Shared Servers

Oracle8 offers two different options for managing the server processes that service each session connected to the database: dedicated server processes and the Multi-Threaded Server (MTS).

Use multithreaded only when necessary
Many DBAs mistakenly believe that Oracle's multithreaded server (MTS) will perform better than the dedicated model. This is usually the result of contemporary lingo that almost always associates "multithreaded" with faster. In reality, MTS will slow a database's operation overall. MTS can drastically increase the number of concurrent database sessions-that is the only advantage it offers. Don't use MTS unless scalability requirements dictate its use.

You can achieve the best performance by using the dedicated server process. Because no sharing of the server processes exists, when a database session needs to access the database, a ready server process is guaranteed to service its needs.

In environments where there could be many hundreds or even thousands of simultaneous connections to the database, Multi-Threaded Server can offer better scalability. In OLTP environments, sessions typically issue short and infrequent queries. In the dedicated server model, each session (which may go for hours between database activity) has its own server process waiting for SQL commands to execute. Each server process uses a quantity of system memory and a slot in the operating system's process table, so there's a real limit to the number of server processes that may be running at any given time. By using a fixed pool of shared server processes, you can continue to add concurrent users without consuming any more system memory.

Multi-Threaded Server's downside stems from the same design that provides its scalability. Each server process can service only one database connection at a time (despite the deceiving "multithreaded" adjective); if more connections need servers than there are servers available, the additional connections must wait until a server process becomes free. Obviously, perceived performance can suffer noticeably when a client must wait for a server process to become available.
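If MTS is in use, you can estimate how long connections are waiting for a shared server by examining the request queue. This sketch uses the V$QUEUE view; the wait figure is in hundredths of a second per request:

```sql
-- Average wait per request in the common (shared server) request queue;
-- a steadily growing value suggests more shared servers are needed
SELECT DECODE(totalq, 0, 'no requests', wait/totalq) "Avg Wait"
FROM v$queue
WHERE type = 'COMMON';
```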

Setting up MTS
Installing and configuring Multi-threaded Server can be a rather complex task that's outside the scope of this book. Refer to Oracle's product documentation for installation and configuration instructions.

Managing Sort Space

Sorting will occur if an SQL statement includes ORDER BY or GROUP BY. It can, however, also occur when joining tables and various other Oracle internal operations. Although much can be done to improve sorting performance, the best way to increase performance is to eliminate sorts in the first place when possible.

When Oracle must sort a set of data, it uses a temporary workspace to create the sorted list. Oracle can use two locations for this workspace: an in-memory sort area or temporary segments on disk.

As you can imagine, sorting in memory is much faster than sorting on disk. The SORT_AREA_SIZE parameter in the INIT.ORA file describes, in bytes, the amount of space available to each server process for performing sorts in memory.

The V$SYSSTAT view keeps statistics on the number of sorts performed in memory and on disk. Use the following SQL statement to retrieve sorting statistics:

SELECT NAME,VALUE FROM V$SYSSTAT WHERE
NAME LIKE 'sort%';
When to read statistics
As with all other statistics, be sure to allow Oracle and the predominant applications to run for a fair amount of time before acting on any statistics gathered. If Oracle and the associated application(s) haven't run long enough (a full business day is typical), the values retrieved may not truly reflect the workings of the database.

OLTP applications should perform almost no sorts on disk (user response is paramount). DSS systems may see larger sorts running on disk but, if possible, these too should be run in memory for the best possible performance.

Tune the database for sorting

  1. Query the V$SYSSTAT view to determine the number of sorts performed in memory and on disk.
  2. If fewer sorts on disk are desired, increase the SORT_AREA_SIZE parameter in the appropriate INIT.ORA file.
  3. Stop and restart the database.
  4. Allow the database to run at least a full business day for accurate sorting statistics to be gathered.
  5. Repeat this procedure until the number of sorts performed on disk is acceptable.

Example of Tuning the Sort Space

The following SQL*Plus session shows the V$SYSSTAT view being checked by the SYSTEM user:

SQL> SELECT NAME,VALUE FROM V$SYSSTAT WHERE
  2  NAME LIKE 'sort%';

NAME                                VALUE
------------------------------ ----------
sorts (memory)                       2922
sorts (disk)                           97
sorts (rows)                        32693

SQL>

Because the primary application is OLTP in nature, the number of sorts performed on disk (97) is a bit high. The following line appears in the relevant INIT.ORA file:

sort_area_size = 65536

The first test to reduce disk sorts will increase the sort_area_size parameter to 128KB. The new section will be as follows:

#
# 97 sorts were being performed on disk. Parameter
# increased to 128k
# TMG: 11 Apr 98
# sort_area_size = 65536
sort_area_size = 131072

After the database was stopped and restarted (to activate the new parameter), one full business day elapsed and the V$SYSSTAT view was queried again. These are the results:

NAME                                VALUE
------------------------------ ----------
sorts (memory)                       3661
sorts (disk)                            3
sorts (rows)                        34014

By increasing the sort_area_size parameter to 128KB, the number of disk sorts has been lowered to an acceptable level.

Managing Data File Contention

Databases often exhibit the characteristic that most data that must be accessed is physically close together. Even though a database may be many gigabytes in size, only a few dozen megabytes or so may be accessed over an hour. If this information is physically close, it's likely that only a handful of database data files are actively being used.

Reducing hot spots by design
When designing a database, try to identify which segments will receive the highest volume of data accesses. Heavily accessed segments ideally should be placed in their own tablespaces with composing data files distributed across different disk drives and controllers.

Data files accessed at much higher rate than most others over a particular time period are known as hot spots. From a tuning perspective, hot spots should be minimized if at all possible. Most database servers are designed with data files distributed over many disk drives to maximize the potential for parallel I/O. Hot spots result in just a few disk drives at 100 percent utilization, whereas most others sit idle. When a disk drive (or controller) is fully utilized, any further requests have to wait in a queue to be serviced.

Fortunately, Oracle keeps statistics on each data file's I/O rates to help you identify hot spots. The V$FILESTAT contains the information needed to identify and reduce data file hot spots.

Locating Data File Hot Spots

The V$FILESTAT view will be queried for physical reads and physical writes on each data file. By looking at the number of physical reads and writes in each of a database's data files, you can discern which data files are hot spots. Be aware that the values reported for physical reads and writes are counted from the time the database was last started. To identify hot spots during a particular period of time, it's necessary to compare the contents of the V$FILESTAT view at the beginning and end of the time period of interest.
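One way to capture such a time window is to snapshot V$FILESTAT into a scratch table at the start of the period and subtract it at the end. The scratch table name here is illustrative:

```sql
-- At the start of the period of interest:
CREATE TABLE filestat_snap AS
SELECT file#, phyrds, phywrts FROM v$filestat;

-- At the end of the period, report only the activity in between:
SELECT d.name,
       f.phyrds  - s.phyrds  delta_reads,
       f.phywrts - s.phywrts delta_writes
FROM v$filestat f, filestat_snap s, v$datafile d
WHERE f.file# = s.file#
  AND f.file# = d.file#
ORDER BY (f.phyrds - s.phyrds) + (f.phywrts - s.phywrts) DESC;
```

Drop the scratch table once the comparison is done.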

The following SQL statement shows the physical read and write values for each data file sorted by the total number of physical accesses:

select name,phyrds,phywrts,(phyrds+phywrts) Total
from v$filestat s, v$datafile d
where s.file#=d.file#
order by total desc;
V$FILESTAT and V$DATAFILE views
This SQL code uses the V$FILESTAT and V$DATAFILE views. This is necessary because the statistics in the V$FILESTAT view are identified by file number only. By inner joining the V$FILESTAT and V$DATAFILE views, file statistics can be mapped to filenames.

By looking at data files at the top of the result from this SQL statement, you can identify which physical devices are being accessed the most. If any of the top data files are on the same physical disk drives or controllers, you should consider moving them to different drives or controllers.

Example of Identifying Hot Spots

The V$FILESTAT view is queried for hot spot information by the SYSTEM user in SQL*Plus. The resulting session is as follows:

SQL> select name,phyrds,phywrts,(phyrds+phywrts) Total
  2  from v$filestat s, v$datafile d
  3  where s.file#=d.file#
  4  order by total desc;
NAME                             PHYRDS    PHYWRTS      TOTAL
----------------------------- --------- ---------- ----------
/oracle/sapdata4/stabi.data2     322805       6373     329178
/oracle/sapdata1/stabd.data1      57173       2871      60044
...
/oracle/sapdata5/user1i.data1        11          0         11
/oracle/sapdata2/docud.data1          6          3          9
/oracle/sapdata2/loadd.data1          6          1          7
/oracle/sapdata3/loadi.data1          4          0          4
/oracle/sapdata2/protd.data1          0          0          0
These two data files reside on the same physical device (mounted on /oracle/sapdata2)

Because btabd.data1 and protd.data2 would be considered hot spots and both reside on the same physical device, it's possible they should be moved to different physical devices. As with all statistics, confirm that they are a representative sample before using them to guide any changes to the database.
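Relocating a data file follows a standard pattern: take the tablespace offline, copy the file to the new device with an operating system utility, rename it within Oracle, and bring the tablespace back online. The tablespace name and both paths below are hypothetical:

```sql
-- Hypothetical tablespace and file names
ALTER TABLESPACE btabd OFFLINE;
-- (copy the file to the new device with an operating system utility here)
ALTER TABLESPACE btabd RENAME DATAFILE
  '/oracle/sapdata2/btabd.data1' TO '/oracle/sapdata6/btabd.data1';
ALTER TABLESPACE btabd ONLINE;
```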

Using Striping Strategies

Data-access patterns often unavoidably produce data file hot spots no matter how well the database is designed. You can use two key strategies to minimize data file contention caused by these hot spots: recreating tablespaces with more, smaller data files, and hardware striping (RAID).

The simplest and (often) cheapest remedy for contention is to recreate a tablespace identified as housing hot spots with more data files that are smaller in size. The key here is to keep data files smaller than they were in the original; this will force Oracle to spread extents out over more data files (and, therefore, physical devices) and minimize data file contention. This solution is most ideal on servers that have many individual disk drives not being heavily used.
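As a sketch, a tablespace that had been one large data file on a single drive might be recreated with several smaller files spread across drives; the names and sizes here are hypothetical:

```sql
-- Four 250MB files on four different drives instead of one 1GB file;
-- Oracle will spread extents across all four as the tablespace fills
CREATE TABLESPACE stab_ts DATAFILE
  '/disk1/stab01.dbf' SIZE 250M,
  '/disk2/stab02.dbf' SIZE 250M,
  '/disk3/stab03.dbf' SIZE 250M,
  '/disk4/stab04.dbf' SIZE 250M;
```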

In recent years, RAID technology has been rapidly adopted in many database servers to maximize I/O throughput with little administrative work. Because the hardware automatically stripes physical (not database) storage blocks for each data file across many disk drives, no changes to the database are necessary. RAID technology can help reduce data file contention without causing you much (if any) additional workload.

RAID technology isn't a magic cure
Inexperienced DBAs sometimes mistakenly believe that RAID technology eliminates the need for them to properly design a database's physical storage layout (such as isolating tables and indexes on different tablespaces and sizing segments). Unfortunately, although RAID can offer perhaps two or even five times the throughput of a single disk, proper database design can offer overall performance increases of several magnitudes. RAID technology will offer the most benefit to properly designed databases.

By implementing one or both of these strategies, the performance degradation caused by data file contention can be reduced to an acceptable level.

Maximizing Redo Log Performance

Oracle must keep a log of all its write activity in order to facilitate lossless recovery from instance and media failure. The redo logs contain exceedingly detailed information about most every change Oracle makes to a database's physical file components.

Because so much information is written to the active redo log, it can easily become a bottleneck that can significantly limit the performance of an otherwise well-tuned database. When you're tuning the redo logs for performance, your objective will be to minimize the amount of time the database must spend writing redo information to disk.

Sizing Redo Logs to Allow Checkpoint Completion

Checkpoints, while necessary for system integrity, also can drastically affect system performance depending on their frequency and duration. The system should ideally be tuned such that a checkpoint occurs only during a log switch.

Use the checkpoint process
Most sites should use the optional checkpoint process to take care of updating data file headers during checkpoints. By using the checkpoint process, the log writer process won't be interrupted from writing redo log information during a checkpoint, thereby further reducing performance problems caused by checkpoint activities.

When sizing redo logs, it's absolutely essential that they be sufficiently sized so that checkpoints can finish well before a log switch is necessary. Oracle's V$SYSSTAT view contains the statistical values for background checkpoints started and background checkpoints completed. When the values for these statistics vary by more than 1, at least one checkpoint didn't finish before a log switch was necessary and log sizes must be increased.

Use the following SQL statement to query the V$SYSSTAT view for the background checkpoint statistics:

select name,value from v$sysstat
where name like 'background checkpoint%';

Two rows will be returned: one with the count of background checkpoints started and another with the count of background checkpoints completed.

Example of Checking for Checkpoint Completion

The V$SYSSTAT view is queried for the background checkpoint statistics by the SYSTEM user in SQL*Plus. The example session follows:

SQL> select name,value from v$sysstat
  2  where name like 'background checkpoint%';

NAME                                                  VALUE
-------------------------------------------------- --------
background checkpoints started                         4521
background checkpoints completed                       4520

SQL>
Identifying the problem
The two background checkpoint statistics vary by only 1. This indicates that checkpoints are finishing before a log switch is forced. If the background checkpoints completed were 3,788, we would have to increase the size of the redo log files until the two statistics varied by only 1.

Sizing Redo to Allow Archive Completion

When a database is running in ARCHIVELOG mode and a log file switch occurs, the database will immediately begin filling a new redo log group as the ARCH process begins copying one of the completed redo logs to the archive log destination. It's important to make sure that the ARCH process finishes its task well before another redo log group is filled. If the ARCH process can't keep up with redo log generation, Oracle will have to suspend database activity when all redo log groups have filled. Although database activity will resume as soon as ARCH can catch up and release a redo log group, users and batch jobs will most definitely notice the lack of system responsiveness.

Tape ARCHIVELOG storage can be problematic
Thankfully, most databases don't suffer from this affliction unless redo log devices are substantially faster than the devices storing archived redo logs. In particular, archiving redo logs directly to tape can result in the ARCH process not being able to keep up with redo log generation. Always archive redo logs to disk storage. (Disk is cheap!)

You can check the time the ARCH process is using for each filled redo log. Do this by looking at the time of a log switch in the relevant ALERT.LOG file, and then comparing it to the timestamp of the resulting archived redo log file. If the time required by the ARCH process is close to the time between redo log switches, you should consider remedies such as adding redo log groups, increasing the size of the redo logs, or moving the archive log destination to faster (disk) storage.
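To gauge the time between redo log switches, you can also count switches per hour from V$LOG_HISTORY. Depending on your Oracle release, the timestamp column may be named TIME rather than FIRST_TIME:

```sql
-- Log switches per hour; very frequent switches leave ARCH little time
SELECT TO_CHAR(first_time, 'DD-MON-YY HH24') "Hour", COUNT(*) "Switches"
FROM v$log_history
GROUP BY TO_CHAR(first_time, 'DD-MON-YY HH24')
ORDER BY 1;
```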

Designing Redo to Avoid LGWR and ARCH Contention

When a log switch occurs in Oracle, the LGWR process will immediately begin writing to the next redo log group in a circular fashion. Meanwhile, the ARCH process will begin reading the last redo log to make a copy in the archive log destination. Because these operations happen concurrently, each should be working with two distinct physical disk drives.

Recall that when redo log group n is filled up, Oracle will always switch to the n+1 group (unless n is the last redo log group, in which case Oracle will start with group 1 again). Therefore, when redo log group n is full, a log switch occurs: log group n is read by the ARCH process, while LGWR moves on to writing in group n+1.

Remember to isolate redo logs
Because of the high I/O rates on redo log data files, remember to dedicate disk drives exclusively for redo log use. In an ideal system, at least four disk drives (two data files per group on different drives and two groups on completely different devices) will be dedicated for redo log use.

To ensure that ARCH and LGWR are always working from different physical drives, odd-numbered redo log data files should always reside on completely different disk drives (and, ideally, different controllers) than even-numbered redo log data files.
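Following that rule, here is a sketch of adding two mirrored groups across four drives; the device paths and sizes are hypothetical:

```sql
-- Group 3 on drives 1 and 3; group 4 on drives 2 and 4. After a log
-- switch, LGWR and ARCH therefore never contend for the same spindles.
ALTER DATABASE ADD LOGFILE GROUP 3
  ('/disk1/redo03a.log', '/disk3/redo03b.log') SIZE 1M;
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/disk2/redo04a.log', '/disk4/redo04b.log') SIZE 1M;
```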


© Copyright, Macmillan Computer Publishing. All rights reserved.