
Using Oracle8


Chapter 6

Managing Redo Logs, Rollback Segments, and Temporary Segments


Database Management Structures

To most database users, tables are the important elements, with indexes being a second component they might consider useful. In an Oracle environment, however, some structures are essential to the efficiency and integrity of the database. You need to know how to build and manage these components so that your database runs smoothly and your users can work with their tables in an orderly manner.

Oracle8 can be forgiving, but don't count on it
Although you can run your database without paying attention to your redo logs and temporary segments, you'll almost certainly pay a performance penalty by ignoring them. In most cases, these penalties will be severe and may cause your database to fail to meet your users' requirements. Rollback segments are a little more intrusive, at least if you pay any attention to Chapter 5's recommendations to use multiple tablespaces in your database, because you'll find that the default database structures won't support DBA or user-created segments in new tablespaces.

The three structures you look at in this chapter are redo log files, rollback segments, and temporary segments. If you've already read Chapter 5, "Managing Your Database Space," you'll already be aware of some characteristics of the last two. You have been exposed to the concept of redo log files if you've read Chapter 2, "Creating a Database." Although the purpose of these structures is touched on in this chapter, the emphasis in the following sections is to help you design, build, and manage them to the benefit of your database.

Managing Redo Log Files

One of the slowest operations performed in a relational database is the process of transferring data between disk storage (where it's maintained for long-term safety) and memory (where it needs to reside to be accessed and modified by database users). Besides providing you with a number of parameters to tune memory and help avoid unnecessary disk reads, Oracle has its own mechanism, known as deferred writes, to reduce the overhead of unnecessary disk writes. Basically, it means that Oracle blocks that have been modified by user processes while sitting in memory will be written back to disk when they can be written most efficiently. This approach takes into account how recently the last change was made, how many blocks can be written with a single write operation, and how much of the space occupied by these blocks is needed for further disk reads.

One key issue to understand about deferred writes is that blocks aren't moved back to disk just because the changes on them have been committed by a user, and that blocks containing uncommitted changes are just as likely to be written to disk as blocks with committed changes. In other words, the act of issuing a COMMIT doesn't override the deferred write considerations. This, in turn, leads to a situation where neither the contents of memory nor the contents of the data files represent a coherent picture of the database transactions. With such a design, it's essential that Oracle provide a mechanism to restore the database to a consistent state should the contents of memory be lost. Of course, disk loss also needs to be protected against, but you can use a number of techniques, discussed in the chapters in Part V, "Backing Up Your Oracle Database," to help ensure that such a loss can be recovered. There's no method to "back up" the contents of memory in case it should suffer a failure.

The importance of a COMMIT
When users issue the COMMIT command and receive an acknowledgment that the command has been processed, the changes made by the transaction are considered "permanent." This doesn't mean that they can't be changed in the future, but that the assigned values should be available to all other users of the database until they're changed again. This requires that the DBMS guarantee that committed data won't be lost by any fault of the database itself (database vendors can't control the quality of the media on which their databases are stored). Oracle has to employ a method, redo logs, to ensure that the contents of unwritten blocks containing committed data can be restored should unexpected memory loss occur.

Understand that Oracle can lose its memory area for many reasons, not just the loss or corruption of memory due to a chip failure, but also due to an unexpected operating system failure, loss of a key Oracle process that prevents further processing (and, hence, further access to memory), or a sudden shutdown of the Oracle instance without normal shutdown activity (a SHUTDOWN ABORT). Anytime Oracle stops operating without being able to complete a requested shutdown command, it's said to have suffered "instance failure." In any such case, there's no opportunity to copy block images from memory down to disk nor to reload blocks that contain uncommitted changes. Instance failure will require that, when the instance is restarted, an instance recovery be performed to reapply all changes associated with committed transactions and to remove all uncommitted changes.

Implied SHUTDOWN ABORT
The Server Manager command STARTUP FORCE is used to cycle an instance; that is, it stops the current instance and starts a new one. However, it performs an implicit SHUTDOWN ABORT command to stop the running instance. Consequently, this command will cause an apparent instance failure, requiring an instance recovery as part of the requested startup process. You shouldn't use this command merely to restart an instance quickly, but only when you would otherwise need to use a SHUTDOWN ABORT.

The redo log files contain the information needed to complete an instance recovery. They will allow the recovery operation to re-execute every command that produced part, or all, of a committed database change, whether the affected database blocks were copied back to disk before the memory failure or not. Similarly, they contain enough information to roll back any block changes that were written to disk but not committed before the memory loss.

Without delving into the details of what really goes into a redo log and how the recovery process works (information that's explained fully in the Oracle8 Server Administrator's Guide and Oracle8 Server Concepts manuals provided as part of the Oracle documentation), the following sections explain what you need to consider when building your database and preparing it for your users and applications.

Sizing Your Redo Logs

If you make a mistake in the arithmetic in your check register, you can go back to the last time you balanced your account with a bank statement and redo the steps to get a valid current balance. But if you don't know when you last had a valid balance, you can't. Similarly, for Oracle to apply redo information following an instance failure, it must be able to find a starting point at which the status of all data and transactions was known before the failure. This starting point is known as a checkpoint.

Very simply, whenever a checkpoint occurs, Oracle forces every changed block now in memory to be written to disk to ensure that the redo records associated with those changes are stored in the redo log. When these steps are complete, a special checkpoint marker record is placed into the redo log. If the instance fails before the next checkpoint completes, the recovery operation knows that the log file and the data files were synchronized at the previous checkpoint. Any changes made to a data block since that checkpoint was completed are recorded in the redo log entries written after its checkpoint marker. Therefore, recovery can begin at the most recent checkpoint marker record.

With this checkpoint mechanism in place, entries older than the most recent checkpoint aren't needed. To conserve disk space, Oracle reuses the redo logs. When all the logs are filled up, the LGWR process simply starts to write over the first redo log file again. Figure 6.1 shows how this circular use of the redo logs occurs. For this to be successful, Oracle has to ensure that a checkpoint has been taken before reusing a redo log, and this is achieved as part of the processing that occurs at a log switch. At this time, LGWR will automatically begin writing new redo information into the next available log file and will also initiate a checkpoint. Therefore, your redo log size has a direct bearing on the frequency of these default checkpoints.

Figure 6.1 : LGWR reuses the redo log files in a circular fashion, overwriting old records as it goes.

You need to concern yourself with this because the amount of data written to a log file between checkpoints affects the length of the instance recovery process (the act of reapplying changes following an instance failure). If a log file holds a day's worth of transactions, it could easily take a day to reapply them after an instance failure. Assuming that it may take a few minutes to a few hours to research and fix whatever problem caused the instance failure, the total recovery time in such a case could be more than a day. Most businesses can't afford to have a key database unavailable for that long, unless it's a planned period of downtime during which other plans have been made to carry on with essential processing. Instance failure is, by definition, an unplanned event.

Of course, if your database is being used primarily for queries (which, by their nature, don't change the contents of the blocks being read), your redo log could be quite small and still not fill up in a 24-hour period. Only when new data is added, or old data is changed or removed, will redo entries be written to the log, and this is very infrequent in read-intensive databases.

Time to apply redo during instance recovery
If you're running Oracle8 on a multi-CPU platform, it's likely that your users are creating transactions and the related redo log records in parallel. When your database needs instance recovery, only one process, the background process SMON, will be reading the redo logs and reapplying the changes made by your users. This serial processing can take substantially longer, in elapsed time, than the transactions took when being processed by multiple users concurrently.

Another factor to consider is that you can set parameters to cause additional checkpoints to be performed automatically as a redo log fills up. The LOG_CHECKPOINT_INTERVAL parameter sets the number of blocks that will be written to a redo log file before a checkpoint is forced to occur, whether or not the file is full. If a redo log file is 1,000 blocks in size, setting LOG_CHECKPOINT_INTERVAL to 250 will cause a checkpoint when it's one-quarter, one-half, and three-quarters full. Although such a setting will reduce your instance recovery time for this redo log, it may be inappropriate should you decide to reduce your redo log size, for example, to expedite archiving (as discussed a little later in this section), and then forget to reset this parameter.
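If you want to experiment with this behavior, the relevant initialization parameters might look like the following sketch; the values are hypothetical and assume the 1,000-block log file described above:

     # init.ora fragment (hypothetical values)
     LOG_CHECKPOINT_INTERVAL = 250   # blocks written to a log before a forced checkpoint
     LOG_CHECKPOINT_TIMEOUT = 0      # 0 disables time-based checkpoints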

There are a number of considerations when setting checkpoint frequency, as the following examples demonstrate:

Ideally, you don't want checkpoints to occur more often than once every 15 to 20 minutes, and much less frequently if the database is mainly processing queries. The problem with starting checkpoints too frequently is that a number of very active blocks will still be in use between checkpoints, yet they will have to be written out at each checkpoint anyway. These redundant writes waste disk I/O bandwidth. You may want to experiment with log file sizes after you finish reading about other considerations, such as archive logging (discussed in a little bit), to come close to the ideal size.

Parallel server processes might speed up instance recovery
You can set the RECOVERY_PARALLELISM parameter in your initialization file to an integer value higher than 1 to allow SMON to enlist that number of parallel server processes for recovery. You must also start this number of processes by using the PARALLEL_MIN_SERVERS parameter. These processes will apply the redo in parallel during instance recovery. You may not see significant improvement in recovery time, however, because the parallel server processes must still apply the redo in sequential order, so they're likely to be contending for disk read access to the redo logs as well as contending for space in the database buffer cache.
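As a sketch, with hypothetical values, the two parameters work together like this:

     # init.ora fragment (hypothetical values)
     RECOVERY_PARALLELISM = 4    # SMON may enlist four parallel server processes for recovery
     PARALLEL_MIN_SERVERS = 4    # start four parallel server processes with the instance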

Before leaving checkpoints, you should be aware of one other factor: To perform instance recovery, a checkpoint marker must be available to indicate the start point. If you have two log files and, for whatever reason, the checkpoint following a log switch doesn't complete until the second log fills up, the only checkpoint marker is in the first log file. If Oracle began to write redo records into this first log file again, there would be no guarantee that this remaining checkpoint marker wouldn't be overwritten, leaving no starting point for an instance recovery.

Consequently, Oracle will stop writing further redo entries until the checkpoint process completes and the new marker record can be written. If no redo log entries can be written, Oracle can't preserve the integrity of database changes because block images in memory can't be guaranteed to be recoverable. So, rather than let unrecoverable changes occur, Oracle stops any further transaction processing. To the users, the database will appear completely frozen. Of course, after the checkpoint completes, work will continue as normal. You may have to size your redo logs large enough to avoid this problem because a database that freezes out user activity isn't going to meet performance standards.

A second mechanism that may affect your redo log file size decision is whether you're going to archive your log files. Although this is another topic that belongs in the backup and recovery discussions in Chapter 12, "Understanding Oracle8 Backup Options," we'll take a quick look at it here.

Normally, when a redo log is filled and another one is being written to, the contents of the first log are of no use following the completion of the checkpoint started at the log switch. When the other log file fills up, Oracle can safely begin writing over the contents in the first one. Similarly, because a new checkpoint will be under way, the data in the second log file will soon become unnecessary for instance recovery, so Oracle can switch back to it when the current log fills.

Now consider data file backups. They can't be made continuously, so the restoration of a backed-up data file will almost certainly cause old block images to be placed back into the database. Transactions completed since the backup was made won't be represented. However, if you could keep every redo entry made since the data file backup was made, restoring the blocks in that data file would require nothing different than restoring them following an instance failure.

Oracle offers the capability to "archive" your online redo log files so that they can be preserved for this very purpose. As each redo log fills up, Oracle still switches to the next one and starts a checkpoint, but also marks the completed redo log file for archiving. Either you or a special background process, ARCH, will copy the redo log to a special location where it can be saved for as long as needed.

Users and log file switches when archiving
Be sure to understand the pros and cons of archiving before deciding whether to use it, and then be prepared to monitor your system for problems with postponed log switches until archived copies are made. Not only does processing for current users come to a halt during such times, but new users attempting to connect to the database will also be prevented from doing so. This can make the problem very visible to your user community. Even if your users don't let you know, the alert log for your database will show you when you have log-switching problems due to tardy archiving.

When you place your database into the mode that requires completed log files to be saved to an archive location, Oracle becomes very adamant that this work be performed. In fact, it won't let the redo activity switch back into a log file until that file has been safely archived. So, if your files are very big and take too long to archive (particularly if they're being copied to a slow disk drive) or so small that they fill up faster than they can be copied, you can run into problems. If the logs can't be switched because the archiving isn't done, no more log records can be written. Only when the archive is complete can work continue again.

During the time that Oracle is waiting for the archive to finish, your users experience the same situation as when checkpoint completion was delayed. To them, the database is stuck and they can't get any work done. You may therefore have to adjust your log file size to ensure that the archiving process completes before the next log switch needs the file.

Determining the Number of Redo Log Groups

Besides changing the size of your redo log files to avoid problems with checkpoints and archives not completing in time, you can increase the amount of redo that's written before a log file is needed again by adding more log files.

If it takes half an hour to fill each log file, and you have two such files, it will take an hour before you'll fill both logs and need to reuse the space in one of them (refer to Figure 6.1). If this isn't always enough time to complete your checkpoints and archives, you have a serious performance problem and should turn to Chapter 20 to find out about redo log tuning. However, until you can resolve the problem, you could increase the length of time before a log file is reused by adding one more file.

Generally, databases with smaller log files and high DML activity will tend to have peak periods during which their log files fill faster than checkpoints or archives complete. By adding more log files, you increase the time taken to fill up the entire set of logs. This allows time for the checkpoint and archive work to catch up from the overloads during the peak periods.

A suggested option for sizing online redo logs
One approach to redo I've seen work successfully is to keep as much online redo log as can be filled between database backups. This way, should you have to restore a data file from a backup, the redo needed to recover it up to the point of failure will be in the online redo log. The recovery operations can access the online redo logs more directly than they can archived logs, so the recovery time will be reduced.

You need to find a good balance between the size of your log files, which affects default checkpoint intervals (and hence instance recovery times), and the number of your log files, which with the size determines how long each checkpoint and archive has to complete. In general, you're better off having too much online (as opposed to archived, or offline) redo log rather than too little. Too little will likely cause at least an occasional pause while a checkpoint or an archive completes; too much will simply waste disk space.

Oracle requires you to have a minimum of two redo log files. You can have up to 255, unless your operating system sets a lower maximum number. When you create a database, you can set the limit on the number of redo log files it will ever be allowed to have with the optional MAXLOGFILES parameter of the CREATE DATABASE command. You can also set an initialization parameter, LOG_FILES, to limit the number of log files that an instance can access.
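MAXLOGFILES is fixed in the control file when the database is created, but LOG_FILES is an ordinary initialization parameter. A sketch with a hypothetical value:

     # init.ora fragment (hypothetical value; it can't usefully exceed MAXLOGFILES)
     LOG_FILES = 16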

Determining the Number of Redo Log Members

Each log file is given a log group number, either by you as you add them or automatically by Oracle. We refer to log files with a group number because a group can contain more than one file. Each member of a group will be maintained by Oracle to ensure that it contains the same redo entries. This is done to avoid making a redo log a single point of failure.

When your log groups contain only one member, you risk having the database become unusable if you lose a redo file. Recall from the earlier section, "Sizing Your Redo Logs," that at least one checkpoint completion marker must be available somewhere in your redo logs. If only one such marker happened to be in a set of logs and the file containing the marker was on a disk that crashed, you would no longer have a way of performing instance recovery. This jeopardizes your database, so Oracle, on detecting a missing log file, will stop processing any more transactions and perform a shutdown.

Oracle mirrors versus operating-system mirrors for redo logs
There has been much discussion within Oracle and with Oracle's business partners about the pros and cons of using Oracle's multiplexing versus using a mirrored disk controlled by the operating system. The biggest benefit to Oracle mirrors is that they work on any operating system and on any disks; the biggest disadvantage is that Oracle insists that each available copy is written to before it considers a flush of the redo buffer complete. This synchronous write process can be slower than operating-system mirrors. However, as disk subsystems become faster and add intelligent buffering capability, the latter difference becomes less of an issue. My best advice at this time is to use Oracle mirroring if you have no other option, and to experiment with Oracle and operating-system mirroring if you can.

If each log file is paired with a copy of itself and that copy is on a different disk, a single disk failure won't reduce the database to an unusable state. Even if the only checkpoint record was in the file on a crashed disk, its copy would still contain a valid version of it. Oracle will know to avoid the bad disk for future writes and for any further archiving activity. The term Oracle uses for copied sets of redo logs is multiplexing.

You're strongly encouraged, therefore, to multiplex every log group with at least two members. Depending on the criticality of your systems, you may want even more. Rarely do you need to go beyond three members per group; in fact, with more than that, you're likely to experience performance problems due to the time it takes to write out all the copies of each redo block. If you can mirror your log files at the operating-system level, you can also use mirroring to guard against a single disk loss. If you rely on operating-system mirroring alone, you still run the risk of having Oracle shut itself down if you lose a disk. System mirrors aren't visible to Oracle, so it may think it has lost its only copy of a log file if the primary disk crashes. System mirroring is a good way to create three- or four-way mirroring, however. Create each Oracle log group with two members, and then mirror either one or both members.

Adding Redo to Your Database

When you created your database, you created at least two redo log groups, the minimum number required to start an Oracle database. You may have created more than that, and you could have created each group with one or more members. This section looks at the commands used to add more log groups or more log members to an existing group. You'll see that the syntax is very similar to the log definition portion of the CREATE DATABASE command.

One option you can use when creating a redo log group allows you to identify a thread number. The thread number is useful only in a parallel server (multi-instance) database, so we won't examine its usage here. For further details on this option, see the Oracle8 SQL Reference Manual and the Oracle8 Parallel Server Administration manual.

Syntax conventions used in this book
Throughout this book, square brackets in command syntax indicate optional clauses and an ellipsis ([...]) indicates a clause that can repeat. Another convention used is the | character, which indicates that you choose between one item or the other, not both (for example, choose either K or M). When you actually use commands, don't include the brackets, ellipses, or | character.

The general syntax for creating a log group with a single member is as follows:

ALTER DATABASE [database_name]
     ADD LOGFILE [GROUP [group_number]]
     filename [SIZE size_integer [K|M]] [REUSE]

This code is for a multimember group:

ALTER DATABASE [database_name]
     ADD LOGFILE [GROUP [group_number]]
     (filename, filename [,...])
     [SIZE size_integer [K|M]] [REUSE]

The database name is optional if it's included in the parameter file (as the DB_NAME parameter) for the instance. Otherwise, you need to identify the name with which the database was created and which is stored in the control file. If you omit the group clause (the keyword GROUP and the group number), Oracle will assign the next available group number for you. Every group must have a unique number to identify it.

The filename can be a file system filename (which should be fully qualified with a path name), a raw partition name, or a link, and each filename is given as a quoted string. In the multimember case, you should put the filenames inside a pair of parentheses and separate the names with commas.

You must include a SIZE or a REUSE clause. You can include both for file system files, as long as any existing file is the same size as the specification. For file system files, you must provide a size if the file doesn't already exist, and you must include the REUSE keyword if the file does exist; the command will fail if either condition is violated. For raw partitions, the REUSE keyword is meaningless because the new contents will always be written over the contents of the partition; therefore, it makes no difference whether you include it. You must include the file size, however, to avoid using the whole partition (two blocks of space must be reserved in each partition for operating-system information) or possibly writing beyond the partition boundaries. The K and M represent kilobytes and megabytes, respectively. Without either, the size_integer represents bytes.

The SIZE and REUSE options in database redo log groups
If you're creating a log group with multiple members, include the SIZE or REUSE keyword only once for all members of the group. They must all be the same size because they'll all contain the same data. This means, unless you're using raw devices, that if one file exists, they must all exist so that the REUSE option is valid for each named file. If some exist and some don't, you'll have to create the group with only those that exist (or only those that don't) and add the others as additional members. I show you how to do this a little later. No matter how you create them, all the files in a redo log group will have to be the same size.

Listing 6.1 shows a script file with three commands, each creating a new redo log group.


Listing 6.1  Create new redo log groups

     01:     ALTER DATABASE ADD LOGFILE
     02:     'D:\ORANT\DATABASE\log10.ora' SIZE 100K
     03:     /
     04:     ALTER DATABASE ADD LOGFILE GROUP 6
     05:     ('E:\DATABASE\log6a.ora', 'F:\DATABASE\log6b.ora') SIZE 10M
     06:     /
     07:     ALTER DATABASE ADD LOGFILE GROUP 5
     08:     ('E:\DATABASE\log5a.log', 'F:\DATABASE\log5b.log') REUSE
     09:     /

Numbering of code lines
Line numberings were included in Listing 6.1 and other code listings to make discussion about this code easier to reference. The numbers should not be included with any command-line commands, as part of any Oracle scripts, or within SQL statements.

On line 1 of Listing 6.1, the first redo log group will be created with a single member in the group, and the group's number will be assigned by Oracle. Group 6 will have two members, and the group is assigned its group number in the command on line 4. In these first two commands, Oracle will create all new files. Redo log group 5, as created by the command on line 7, will contain two members, both of which will replace existing files.

Adding one or more new members to an existing group can be done by identifying the group number (the simplest syntax) or by identifying the group with a list containing the full path names of all the current members. The syntax for the former when adding just one more member is

ALTER DATABASE [database_name]
     ADD LOGFILE MEMBER
     filename [REUSE]
     TO GROUP group_number
Different numbers of members per log group
Oracle doesn't require that you use the same number of log file members in each group. In fact, because you can add a new member or members to only one group at a time with the ALTER DATABASE command, you couldn't start mirroring your log files by adding a new member to each group unless they could exist with different numbers of members, at least temporarily. However, even though you could run your database with two members in one redo log, three in another, just one in a third, and so on, I don't recommend this practice. After you decide how many mirrored copies make sense for your requirements, you should use that number in all groups. This way, you won't experience periods of different performance or have to worry, should you lose a disk drive, whether you've lost a single-copy redo log or just one of a mirrored set.

The database name is optional, as when adding a new group. The group number must refer to an existing group. The filename must be a fully qualified file system name, a raw partition, or a link. The REUSE keyword is needed only if you're using a file system file that already exists, in which case it must be the same size as other files in the group. A SIZE clause isn't needed because every member of the group must be the same size as the existing member(s).
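For example, the following command (with a hypothetical filename and group number) adds a second member to group 4:

     ALTER DATABASE
          ADD LOGFILE MEMBER 'F:\DATABASE\log4b.ora'
          TO GROUP 4
     /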

The syntax for using the existing filename(s) to add a single member is as follows:

ALTER DATABASE [database_name]
     ADD LOGFILE MEMBER
     filename [REUSE]
     TO filename | (filename, filename [,...])

Everything is as described earlier except that for a group with a single member, the filename alone is used in place of the GROUP clause, whereas a comma-separated list of the existing member's filenames (enclosed in parentheses) is required if the group already has more than one member. In either case, the filenames must be fully specified.

To add multiple members to a group within the same command, you simply change the filename clause to read as follows in either version of the statement:

(filename, filename [,...]) [REUSE]

The use of REUSE is, as before, required if the files already exist.
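As a sketch, and assuming group 5 from Listing 6.1 still has its two original members, this command adds a third member while identifying the group by its existing filenames:

     ALTER DATABASE
          ADD LOGFILE MEMBER 'G:\DATABASE\log5c.log'
          TO ('E:\DATABASE\log5a.log', 'F:\DATABASE\log5b.log')
     /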

Dropping Redo Logs and Handling Problem Logs

The most likely reason you would want to drop a redo log file is because you want to replace it with one of a different size. Once in a while you may drop a log file because you've determined you have more online redo than you need. This is rarely beneficial, however, unless it's the only log file on the disk, because you generally shouldn't share disks where you're writing online redo logs with other file types. You may also need to drop a redo log member when you're experiencing performance problems due to too many multiplexed copies, or because you want to replace one or more members with operating-system mirrored copies.

To drop an entire log group, the following must be true:

- The database must be left with at least two log groups after the drop, the minimum Oracle requires.
- The group can't be the one LGWR is currently writing; if it is, force a log switch first.
- If the database is archiving its redo logs, the group must already have been archived.

If these conditions are met, you can drop the entire log group with an ALTER DATABASE command that identifies the group. As with the ADD LOGFILE MEMBER option of this command (discussed in the preceding section), you identify the group with its group number, its member's filename, or with a list of the filenames of its current members:

ALTER DATABASE [database_name]
     DROP LOGFILE
     GROUP group_number | filename |
     (filename, filename [,...])

The database name is needed only if the parameter file used to start the instance doesn't include the DB_NAME parameter; the ellipsis ([,...]) shows a repeatable field.

You can drop one or more members from an existing log group with the DROP LOGFILE MEMBER variant of this command. You can't drop all members with this command, however; you must use the preceding command to drop the group as a whole. The syntax for dropping a group member is

ALTER DATABASE [database_name]
     DROP LOGFILE MEMBER
     filename

where the database name has the same requirements as previously discussed, and the filename must be fully qualified, as with all files discussed in these sections.
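For example, with hypothetical group numbers and filenames, the first command below drops an entire group and the second drops a single member from another group:

     ALTER DATABASE DROP LOGFILE GROUP 6
     /
     ALTER DATABASE DROP LOGFILE MEMBER 'F:\DATABASE\log5b.log'
     /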

Once in a while, a redo log group may become damaged to the point where the database can't continue to function and you need to replace the redo group with a clean file or set of members. If the damaged log group isn't yet archived or the log group is one of only two log groups in the database, however, you aren't allowed to drop it. Creating a third log might not help because Oracle will continue to attempt to use the damaged log before moving on to the new one. In such cases, you need to simulate dropping and recreating the log with the CLEAR LOGFILE option of the ALTER DATABASE command. After you do this, you may need to perform a brand new backup of your database because there may be a break in the continuity of your archived logs, and you may have removed the only checkpoint record in the online redo.

If you do have to perform an emergency replacement of an online redo log, use the following command:

ALTER DATABASE [database_name]
     CLEAR [UNARCHIVED] LOGFILE group_identifier
     [UNRECOVERABLE DATAFILE]

where database_name and group_identifier follow the same rules as described earlier for the DROP LOGFILE option. The UNARCHIVED clause is needed if the group was awaiting archiving before being cleared, and the UNRECOVERABLE DATAFILE option is required if the log would have been needed to recover an offline data file.
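A minimal sketch, assuming the damaged group is group 2 and hadn't yet been archived:

     ALTER DATABASE
          CLEAR UNARCHIVED LOGFILE GROUP 2
     /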

To find out about the current status of your redo logs, you can query various dynamic performance views. The V$LOGFILE view shows the names of the members of each redo log group and their status. In this view, NULL is a normal status, INVALID indicates that the file is unavailable, DELETED shows that the file has been dropped, and STALE is used when a file is a new member of a group or doesn't contain a complete set of records for some reason. The V$LOG and V$THREAD views provide more detailed status information and include records of the archive and system change numbers related to the redo files. The V$LOG_HISTORY view is used mainly by parallel server databases for recovery operations.
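For example, this query (using the V$LOGFILE columns just described) lists each group's members and their status:

     SELECT group#, status, member
       FROM v$logfile
      ORDER BY group#
     /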

SEE ALSO
How to set up redo log archiving for your database
Learn about tuning your redo logs for checkpoint and archive processing
More about the alert log and the types of messages it can provide, such as log file switches delayed by checkpoints or archiving

Managing Rollback Segments

Rollback segments perform two basic functions, both somewhat related:

- They allow a transaction's changes to be undone, whether because the user issues a ROLLBACK or because the transaction fails before it can commit.
- They provide read consistency, preserving the before images that queries need to see blocks as they looked when the query began.

To perform these functions, a rollback segment stores a before image of a column, row, or other block element before a change is applied to the block. By using the address of this changed data, also stored with the before image, a rollback or read-consistency operation can overlay the changed information with this record of what it looked like before the change.

Dirty reads
A "dirty read" is a query that returns value from a row that's part of an as-yet uncommitted transaction. If the transaction is subsequently rolled back, the query has returned a value that's never really been stored in the database. An inconsistent read occurs when a query reads some blocks before a transaction changes them and other blocks after the same transaction changes those.

During a rollback operation, the before image data is applied directly to the data block image where the transaction had made its changes. Rollbacks can occur for a number of reasons, including, but not limited to, the following:

- A user or application issues the ROLLBACK command.
- A statement fails partway through its work, and Oracle performs a statement-level rollback.
- A user process fails or disconnects abnormally, and the PMON background process rolls back its uncommitted work.
- The instance fails, and uncommitted changes are rolled back as part of instance recovery.

In some cases, particularly the latter, the blocks that need rollback information applied may be stored only on disk rather than in memory.

When a read-consistent block image is needed, Oracle first copies the block into a different memory location inside the database buffer cache. The original block image can continue to be manipulated by any active transactions that need to modify it. Oracle then applies the rollback information to the copy of the block, called the "undo block." In some cases, a long-running query may encounter a block that has been changed by multiple transactions subsequent to the start of the query. In such a case, the undo block will be further modified by applying the before images from each transaction until the undo block resembles how the original block looked when the query began. The query will then read the undo block as opposed to the "real" block.

Rather than allowing rollback segments to grow indefinitely, Oracle reuses the blocks that contain before images of completed transactions. Over time, the entire rollback segment is recycled many, many times as new transactions find space for their rollback entries. This reuse of space is controlled rather than haphazard, however. For the read-consistent feature to work, the before images needed by a query need to be available for the whole duration of the query. If a new transaction simply reused any available rollback block, it could be the one needed by an executing query. To help avoid this, the space is used in a circular fashion. The oldest before images are overwritten first. To simplify the code to support this activity, a couple of rules are applied to rollback segments:

- The extents of a rollback segment are used in order, with entries moving into the next extent of the ring only when the current extent fills up.
- Writing can never wrap into an extent that still contains entries for an active transaction; instead, Oracle adds a brand-new extent to the segment and continues there.

This behavior is shown in Figure 6.2.

Figure 6.2 : Oracle uses rollback segment extents in a circular fashion unless they're all busy, in which case it builds a new one.

By cycling through the extents or building new ones when necessary, a block in, say, extent 1 won't be overwritten until all the blocks in all the other extents have been reused. This allows before images to remain available for the longest time possible, given the current size of the rollback segment. Preserving the before images for queries is important because, if a query needs a before image that's not available, the query can't continue. Without the before image, the query can't reconstruct the block in question to look as it did at the query start time and it terminates with an error message: ORA-1555 - Snapshot too old.

The message ORA-1555 - Snapshot too old is usually a warning that at least one of your rollback segments is too small to hold enough records to provide read consistency. If it occurs very infrequently, however, it may simply indicate that a report, or other query-intensive program, ran into a busy period of transaction processing that it usually avoids. If rerunning the problem program succeeds, you may not need to change your rollback segment sizes for this infrequent occurrence.

The ORA-1555 error message
One cause of the ORA-1555 problem needs to be solved by the application developer rather than by a change in the rollback segment sizing. The error occurs if a program is making changes to many rows in a table by using an explicit cursor (either in PL/SQL or a 3GL language with Oracle precompiled code) to read through the data and additional cursors to make changes to the required rows. If these individual row changes are committed, the query cursor needs to build read-consistent images of the affected blocks. While this may not involve much rollback information itself, it does require the query to find the transaction entry information in the header blocks of the rollback segments involved. It's the sheer number of transactions, not their size, that causes ORA-1555 errors in this type of program.

The following sections discuss characteristics of transaction rollback and read consistency that you need to consider when determining the size and number of your database's rollback segments.

Determining the Number of Rollback Segments

A rollback segment uses the first block in its first extent to build a list of information about the transactions assigned to it. This list can contain only a fixed number of entries because the block size is itself fixed. Therefore, if your database needs to support lots of concurrent transactions, you should consider adding more rollback segments. Although Oracle will try to balance the workload between the rollback segments as new transactions begin, if every rollback segment is now supporting its maximum number of transactions, new transactions will be forced to wait until others complete.

The SYSTEM rollback segment
When a database is created, the default rollback segment is created in the SYSTEM tablespace, and it takes the default storage parameters of the tablespace. You cannot drop this rollback segment. It is used by Oracle for recursive SQL.

For performance reasons, you really shouldn't allow each rollback segment to support its maximum number of transactions. If you do, you'll overwork the header blocks of the rollback segments. Not only do new transactions have to place their entries on these blocks, but the status of ongoing transactions has to be recorded, including information about the extents they're actively using. Finally, when a transaction completes, the status of the transaction, including a system change number for committed transactions, has to be recorded in the transaction's slot in the header block.

There's no absolute rule as to the average number of transactions you should strive for per rollback segment. In most cases, the best number is between 4 and 10. For longer-running transactions that consume more rollback space, you should use a lower number in this range. Similarly, for shorter transactions, a higher number is a better target.

You can get an idea about what your transaction mix looks like by examining dynamic performance views such as V$TRANSACTION, which contains one row for each active transaction. When you know the total number of concurrent transactions, divide this number by the appropriate value between 4 and 10, based on transaction size, to find a good number of rollback segments to try. You can always adjust this number later.
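For example, sampling V$TRANSACTION at a busy time gives you the concurrent transaction count; dividing a peak count of 60 by a target of 6 transactions per segment would suggest trying 10 rollback segments:

     SELECT COUNT(*) AS active_transactions
       FROM v$transaction
     /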

SEE ALSO
Determining if you have too many or too few rollback segments by examining contention statistics
Additional information on rollback segment performance

Sizing Your Rollback Segments

After you determine how many rollback segments you need, you need to figure out how large they should be. For most databases, there is a perfect rollback segment size for the normal workload. At this size, rollback segments will contain sufficient space to hold the rollback data needed for all the concurrently active transactions at any point in time. However, they achieve this without being oversized, which would waste space. Once in a while you may have a special job or program that needs an extra-large rollback segment. If so, you can build one such rollback segment and leave it available at all times, or else leave it offline until needed.

Assigning transactions to specific rollback segments
To ensure that a transaction uses a specific rollback segment, you can take other rollback segments offline, or you can explicitly assign the rollback segment with a SET TRANSACTION USE ROLLBACK SEGMENT rollback_segment_name command. If you have concurrent transactions or the segment is needed by an application that runs outside your control, explicit assignment is better. The SET TRANSACTION command must be executed before every transaction if the same rollback segment is needed for each one.
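A minimal sketch, assuming a large rollback segment named rbs_big (a hypothetical name, as is the table) has been created and brought online; SET TRANSACTION must be the first statement of the transaction:

     SET TRANSACTION USE ROLLBACK SEGMENT rbs_big
     /
     -- everything up to the next COMMIT now uses rbs_big
     UPDATE big_table SET status = 'ARCHIVED'
     /
     COMMIT
     /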

You generally don't have to worry about making your rollback segments too small because, like other segments, they can grow automatically as more space is needed. This growth does depend on whether the segment has reached the maximum number of extents you've defined for it and on the amount of room remaining in the tablespace where it's stored. See the following section, "Adding Rollback Segments," for details on how to set the extent maximums and tablespace allocation.

I don't recommend letting Oracle take care of rollback segment growth for you for a couple of reasons:

- Allocating a new extent takes additional space-management work at the very moment a transaction needs the room, so your users pay the overhead during their busiest processing.
- If OPTIMAL is set, the segment will later shrink back, discarding before images that long-running queries may still need and repeating the allocation work the next time the load peaks.

Another problem with automatic growth is that, once in a while, something will occur that will make it grow far larger than is typically necessary. One example I have encountered was a program that, following a minor change, got itself into a processing loop that caused it repeatedly to update the same few records without committing the changes. As a result, the rollback segment handling the transaction kept growing until its tablespace ran completely out of space. At that point, the transaction failed. When that happened, the space in the rollback segment taken up by the runaway transaction entries was freed up for use by subsequent transactions-but the rollback segment was now almost the size of the tablespace. When a different transaction, assigned to another rollback segment, needed more space for its entries, it failed because its rollback segment had insufficient room to grow.

By using the OPTIMAL entry in the STORAGE clause of the CREATE or ALTER ROLLBACK SEGMENT command, you can home in on the best size for your rollback segments. The OPTIMAL value causes the rollback segment to perform a special check when it fills up its current active extent. If the sum of the sizes of the current extents is greater than OPTIMAL, rather than just look to see if the next extent is available to be the active extent, the server checks the one after that, too. If this one is also available, the server will drop the next extent rather than make it current. Now, if the total rollback segment is at its optimal size, the current extent becomes the one following the dropped extent. But if the total size is still greater than OPTIMAL, the extent following this one is checked for availability and the same process is repeated. Eventually, the rollback segment will be reduced to optimal size by the deletion of extents, and the next remaining extent will become the current extent.

Rollback segment extents are dropped in a specific order
The extents are dropped in the same order that they would have been reused. This activity results in the oldest rollback entries being dropped, preserving the most recent ones for use by ongoing queries.

You can query the dynamic performance table V$ROLLSTAT to determine how many times a rollback segment has grown through the addition of new extents (the EXTENDS column value), and how many times it has shrunk (the SHRINKS column value). If these numbers are low, or zero, the rollback segment is either sized correctly or it may still be larger than needed. You can adjust the value of OPTIMAL downward and check the statistics again later. If they're still low, or zero, the segment may still be oversized. However, if they have started increasing, it means the rollback segment needs to be larger. If the grow-and-shrink counts are high when you first look at the table, the rollback segment has always been too small.
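A query sketch that joins V$ROLLNAME to V$ROLLSTAT to show each online segment's size and growth history:

     SELECT n.name, s.extents, s.rssize, s.optsize, s.extends, s.shrinks
       FROM v$rollname n, v$rollstat s
      WHERE n.usn = s.usn
     /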

Problems to look for when decreasing rollback segment size
When reducing the size of a rollback segment, you need to monitor your users' queries to ensure that the number of Snapshot too old messages doesn't increase. Remember that queries may need rollback information long after a transaction is finished. You can't just size your rollback segments to make them as small as the transaction load requires if this will interfere with standard query processing. Even a report program that runs once a month may require you to maintain larger rollback segments than your records of rollback segment growth and shrinkage would indicate are needed. If the program fails only once or twice a year, the cost of rerunning it may not be as expensive as the cost of the extra disk space needed to support larger rollback segments. But if it fails almost every month, you may need to increase your rollback segment sizes.

Due to the automatic nature of the extent additions and deletions, you don't have to re-create a rollback segment that's the wrong size; you can control it with the OPTIMAL value once you find its stable size. As mentioned earlier, however, a rollback segment performs optimally when it has between 10 and 20 extents. This number provides the best balance between the need for transactions to find available space and the availability of required rollback entries for queries needing read-consistent data. Of course, based on the discussion of space management in Chapter 5, we're talking about rollback segments where all the extents are the same size. If your rollback segment's ideal size corresponds to this preferred number of extents, you can leave it as now defined. If the number of extents is below 10 or much above 20, however, you should consider dropping it and re-creating it with around 15 equal-sized extents, such that its total space remains the same.

Adding Rollback Segments

If you've just created a new database, there's only one rollback segment, SYSTEM, and it won't support transactions on segments outside the SYSTEM tablespace. From the discussions in Chapter 5, you should be building all your application tables and indexes in alternative tablespaces, so you'll need additional rollback segments to support transactions against them.

If you have a running database, based on statistical results from your monitoring and tuning activity, you may determine that you need to add one more rollback segment to the existing set.

In either case, you follow exactly the same steps.

Add rollback segments

  1. Create the rollback segment.
  2. Bring the rollback segment online.
  3. Alter the parameter file to bring it online automatically whenever the instance starts.
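Here's a minimal sketch of the first two steps; the segment name, tablespace, and storage values are hypothetical, and the syntax for each command is explained in the sections that follow:

     CREATE ROLLBACK SEGMENT rbs01
          TABLESPACE rbs
          STORAGE (INITIAL 512K NEXT 512K
                   MINEXTENTS 15 MAXEXTENTS 100
                   OPTIMAL 8M)
     /
     ALTER ROLLBACK SEGMENT rbs01 ONLINE
     /

Step 3 is then a line in the initialization file, such as ROLLBACK_SEGMENTS = (rbs01).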

Creating a Rollback Segment

Listing 6.2 shows the syntax of the command to create a rollback segment.


Listing 6.2  Create a rollback segment

     01:     CREATE [PUBLIC] ROLLBACK SEGMENT segment_name
     02:          [TABLESPACE tablespace_name]
     03:               [STORAGE (   [INITIAL integer [K|M]]
     04:                    [NEXT integer [K|M]]
     05:                    [MINEXTENTS integer]
     06:                    [MAXEXTENTS integer]
     07:                    [OPTIMAL  NULL|integer[K|M]] ) ]

On line 1, PUBLIC causes the rollback segment to be public rather than private. (This distinction is discussed in the next section, "PUBLIC versus PRIVATE Rollback Segments.") segment_name is a valid Oracle name.

On line 3, INITIAL is the size of the first extent, in bytes (the default), kilobytes (K), or megabytes (M).

NEXT on line 4 is the size of the second and subsequent extents, using the same units.

Line 5 shows MINEXTENTS, which is the number of extents (minimum two) included in the rollback segment at creation time and the number of extents that must always belong to the segment.

MAXEXTENTS on line 6 is the largest number of extents the segment can acquire. Although MAXEXTENTS can be set to the value UNLIMITED, this isn't recommended for rollback segments. If you've sized your rollback segments correctly, they shouldn't need to grow much larger than this; unlimited growth would result from erroneous processing. Such processing could fill up the available space, restricting the growth of other rollback segments performing valid work, and would take as long to roll back, when it finally ran out of space, as it did to build all the rollback entries in the first place. Until the rollback of this transaction is completed-which could conceivably take many, many hours, if not days-the space consumed by the rollback entries can't be freed.

No PCTINCREASE option for rollback segments
Every extent, other than the first, must be the same size. With a non-NULL value for OPTIMAL, any extent with no active transactions assigned to it can be dropped if, by so doing, the total segment size will still be greater than the OPTIMAL size. The initial extent is never dropped, however, because it maintains the transaction table in its header block. Also, only the extents that have been inactive the longest are dropped. If there are four inactive extents but an active one between the third and fourth of these, only the first three will be dropped. This is to avoid removing records that might be needed for read-consistent queries.

On line 7 is OPTIMAL, which determines how the rollback segment can shrink. A value of NULL prevents the rollback segment from shrinking automatically; a size (in bytes, kilobytes, or megabytes) causes the rollback segment to shrink automatically by dropping inactive extents.

OPTIMAL must be set to a value no smaller than the space allocated to the initial MINEXTENTS extents. This can be computed from the formula

OPTIMAL >= INITIAL + (NEXT * (MINEXTENTS - 1))

For example, with INITIAL and NEXT both set to 512K and MINEXTENTS set to 15, OPTIMAL can be no smaller than 512K + (512K * 14), or 7,680K.

PUBLIC Versus PRIVATE Rollback Segments

The main reason Oracle supports public rollback segments is to help DBAs who manage multiple Oracle instances running against a database with Oracle Parallel Server. The management of rollback segments on some hardware platforms running Parallel Server is almost impossible without them being public, mainly because you don't have to name the rollback segment in a parameter file to make it come online when an instance starts up. Instead, each instance takes one or more segments from the pool of public rollback segments as its own.

Although you can use public rollback segments in a non-parallel database, you're encouraged to use private rollback segments. By naming the rollback segments you want to make active in an instance in your parameter file, you have full control over which ones are active. While you can name public rollback segments in the parameter file, two other parameters, TRANSACTIONS and TRANSACTIONS_PER_ROLLBACK_SEGMENT, can also bring additional ones online, if available. By using private rollback segments, you're guaranteed that only those named in the parameter file will be brought online automatically at instance startup.

How Oracle activates public rollback segments for an instance
Oracle evaluates the quotient of TRANSACTIONS and TRANSACTIONS_PER_ROLLBACK_SEGMENT when it starts up. If these parameters were set to 210 and 25, respectively, the result of this calculation would be 8.4. Oracle rounds up this value to the next integer and attempts to acquire this many rollback segments for the instance. First, it counts the number assigned by the ROLLBACK_SEGMENTS parameter and then calculates how many more are needed. To continue the example, suppose that five rollback segments were named in ROLLBACK_SEGMENTS; 9 minus 5, or 4 more, would be needed. If at least four public rollback segments aren't yet assigned to an instance, the current instance will take four of these and bring them online for itself. If there are fewer than four, it will bring as many online as are available. If no public rollback segments are available, the instance will continue to run with just the five named in the parameter file.
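The sidebar's example corresponds to an initialization file like this sketch (the segment names are hypothetical):

     # init.ora fragment
     TRANSACTIONS = 210
     TRANSACTIONS_PER_ROLLBACK_SEGMENT = 25
     ROLLBACK_SEGMENTS = (rbs01, rbs02, rbs03, rbs04, rbs05)
     # CEIL(210/25) = 9 segments wanted; 5 are named above, so the
     # instance tries to acquire 4 public rollback segments at startup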

You can take private and public rollback segments offline and return them to online status while the instance is running, as discussed in the following section.

Altering Rollback Segments

Before working with a rollback segment, you may need to determine its current status. This can be a little confusing due to the number of different data dictionary tables you might need to examine to get the full picture. Table 6.1 shows the various types of rollback segment status and characteristics, along with the data dictionary table and column that contain this information.

Table 6.1  Identifying the status of a rollback segment

Status            Data Dictionary Table     Table Column
ONLINE            DBA_ROLLBACK_SEGS         STATUS
                  V$ROLLSTAT                STATUS
OFFLINE           DBA_ROLLBACK_SEGS         STATUS
                  V$ROLLSTAT                STATUS
PENDING OFFLINE   V$ROLLSTAT                STATUS
DEFERRED          DBA_SEGMENTS              SEGMENT_TYPE
PRIVATE           DBA_ROLLBACK_SEGS         OWNER (= SYS)
PUBLIC            DBA_ROLLBACK_SEGS         OWNER (= PUBLIC)

In an online state, the rollback segment is available for use and may have active transactions running against it. In an offline state, the rollback segment is idle and has no active transactions. A pending offline state is a transition state between being online and being offline. When you alter an online rollback segment to be offline, it won't accept any more transactions, but will continue to process any current transactions until they complete. Until these are all committed or rolled back, the rollback segment remains in the pending offline state.

A deferred rollback segment holds rollback information for transactions that can't complete because the tablespace to which they need to write has gone offline. These transactions will have failed due to the loss of the tablespace, but they can't be rolled back because the blocks in the offline tablespace can't be read or written. To be able to complete the necessary rollbacks when the tablespace comes back online, the associated rollback entries are stored in the SYSTEM tablespace in deferred rollback segments.

Although not truly statuses, Table 6.1 also includes entries for the PRIVATE and PUBLIC rollback segment descriptions so that you know how to identify which is which. As you can see, this is shown indirectly in the OWNER column of the DBA_ROLLBACK_SEGS table, where an entry of SYS indicates a private rollback segment and an entry of PUBLIC indicates a public one.
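
To see this information for every rollback segment in the database, you can run a query such as the following against DBA_ROLLBACK_SEGS:

SELECT segment_name, tablespace_name, owner, status
FROM   dba_rollback_segs;

An OWNER value of SYS identifies a private rollback segment; a value of PUBLIC identifies a public one.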

The ALTER ROLLBACK SEGMENT command lets you change the status of a rollback segment manually. The full syntax for this command is

ALTER ROLLBACK SEGMENT segment_name
     [ONLINE|OFFLINE]
     [SHRINK [TO integer [K|M]]]
     [STORAGE (storage_clause)]

The keywords ONLINE and OFFLINE simply move the rollback segment between the two basic states. As discussed earlier, a rollback segment may not go completely offline immediately; it may have to wait until pending transactions complete. If you're taking a rollback segment offline in preparation for dropping it, you may need to wait until it's completely offline, as shown in V$ROLLSTAT. You're not allowed to take the SYSTEM rollback segment offline for any reason.
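
For example, the following statements take a rollback segment offline and then check whether active transactions are holding it in the PENDING OFFLINE state (the segment name rbs01 is hypothetical):

ALTER ROLLBACK SEGMENT rbs01 OFFLINE;

SELECT n.name, s.status
FROM   v$rollname n, v$rollstat s
WHERE  n.usn = s.usn
AND    n.name = 'RBS01';

While transactions assigned to the segment are still completing, the STATUS column shows PENDING OFFLINE.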

The SHRINK keyword causes the rollback segment to shrink to its optimal size, or to the size provided when you execute the ALTER ROLLBACK SEGMENT command. As with automatic shrinkage, you can't reduce the size to less than the space taken by MINEXTENTS.

The storage clause of the ALTER ROLLBACK SEGMENT command is identical to its counterpart in the CREATE ROLLBACK SEGMENT statement, except that values such as INITIAL, which govern extents that have already been allocated, can't be changed.

Dropping and Shrinking Rollback Segments

At times, you may want to change the physical structure of a rollback segment, such as altering its basic extent size or moving it to another tablespace. You may even want to remove a rollback segment because you no longer need it. And if a rollback segment has grown much larger than its optimal size and you want it reduced to that size as soon as possible, you can shrink it yourself.

To remove a rollback segment, you must first take it offline, as discussed in the previous section. Recall that even if the command to take it offline succeeds, you may not be able to remove the segment immediately. Until all the transactions currently assigned to it complete, its status won't change permanently to offline, so you won't be able to drop it. Of course, after you change its status, no further transactions will be assigned to the rollback segment.

As soon as a rollback segment is completely offline-meaning that the status in DBA_ROLLBACK_SEGS and V$ROLLSTAT is OFFLINE-you can remove it. To do this, issue the command

DROP ROLLBACK SEGMENT segment_name
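
For example, to move a rollback segment to a different tablespace, you drop it and re-create it there; this sketch uses a hypothetical segment name, tablespace, and storage values:

DROP ROLLBACK SEGMENT rbs03;

CREATE ROLLBACK SEGMENT rbs03
   TABLESPACE rbs_new
   STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 20 OPTIMAL 20M);

ALTER ROLLBACK SEGMENT rbs03 ONLINE;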

If you need to reduce a rollback segment to its optimal size, you can just wait until this occurs automatically. However, if the segment is taking up space that might be needed by other segments, you can manually cause the segment to shrink by executing the command

ALTER ROLLBACK SEGMENT segment_name SHRINK [TO integer[K|M]]

The rollback segment will shrink as soon as it can. If it doesn't shrink to the size specified in the command, or to OPTIMAL if you didn't specify a size, you may need to reissue the command later. Some extents may still contain active transactions and so can't be dropped. There's also a chance that your command and the SMON background process were both trying to shrink the rollback segment concurrently; both operations must store some rollback information in the segment itself, so each may interfere with the extents the other is trying to drop.
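
To confirm that a shrink has taken effect, you can compare the segment's current size with its optimal size in V$ROLLSTAT; for example (again using the hypothetical rbs01):

ALTER ROLLBACK SEGMENT rbs01 SHRINK;

SELECT n.name, s.rssize, s.optsize, s.shrinks
FROM   v$rollname n, v$rollstat s
WHERE  n.usn = s.usn
AND    n.name = 'RBS01';

RSSIZE shows the current size in bytes, OPTSIZE the OPTIMAL setting, and SHRINKS the number of shrink operations performed so far.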

Working with Temporary Segments

A temporary segment is a database object that stores information for a server process that can't fit all the data it needs into memory during a sort, a hash join, or other related activities. If it resides in a regular tablespace-that is, one of type PERMANENT-the segment will be available to the server for as long as it's needed. At the end of this time, the segment will be dropped and the blocks it was occupying will be returned to the tablespace for use by another segment.

Any number of server processes can be using temporary segments at one time, each managing its own segment and space allocation. If a temporary segment exists in a tablespace defined as a TEMPORARY type, it's not dropped when the initial server process is done with it. Instead, a list of its extents is maintained in the data dictionary, and any other process needing temporary space is allocated one or more of its extents. As with other segments, if there's insufficient space to meet the demand at any given time, more extents are added automatically. The new extents are added to the list of available extents when the processes using them are done.
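
You can watch this pool of extents through the V$SORT_SEGMENT dynamic performance view; a query such as the following (a sketch) shows how much of each sort segment is in use at the moment:

SELECT tablespace_name, extent_size, total_extents,
       used_extents, free_extents
FROM   v$sort_segment;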

In both PERMANENT and TEMPORARY tablespaces, temporary segments obtain their storage information from the default storage defined for the tablespace. The user has no opportunity to set extent sizes, maximum extent counts, and so on. Chapter 5 covers the details of setting up tablespaces with default storage sizes.

Sizing Your Temporary Tablespaces

Temporary tablespaces are difficult to plan for when you first build a database. You don't know how many SQL statements needing temporary space are likely to be executing concurrently, and you probably won't know how much temporary space any one of them will need. A rule of thumb you might consider is to make the tablespace about half as big as your largest table. This should allow work against several of the smaller tables to fit into the tablespace concurrently. However, it may not provide sufficient space for work on the largest table to complete, particularly if other work is using the space simultaneously.

As your database is used, you can examine performance statistics to see just how often your applications need temporary segments. Look at the V$SYSSTAT view for the row where the NAME column value is sorts (disk). The number in the VALUE column is the number of sorts since instance startup that required space in a temporary segment. If the frequency is high enough for multiple such sorts to be occurring simultaneously, you may want to increase your tablespace size.

Disk sorts versus memory sorts
If you query the V$SYSSTAT view with the statement SELECT * FROM v$sysstat WHERE name LIKE '%sorts%';, you see three rows of data: one for sorts done entirely in memory, one for sorts requiring use of disks (temporary segments), and one showing the total number of rows involved in both types of sorts. If the number of disk sorts is relatively high in comparison to the memory sorts, you may need to tune the memory area provided for sorting, as discussed in Chapter 18.

The number of rows sorted shown in V$SYSSTAT may help you determine whether a few relatively large sorts are requiring the temporary segment space, or if a lot of smaller sorts are just spilling over from the sort memory area. Because the row count is accumulated across memory and disk sorts, however, it can be difficult to tell how many rows are associated with each type. Also, a sort involving lots of rows may not be as memory intensive as a sort of far fewer, but much longer, rows. To take advantage of this statistic, you may have to monitor V$SYSSTAT on a sort-by-sort basis, with some knowledge of the nature of each sort being recorded in this dynamic performance view.
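
If you'd rather see all three counters side by side, you can pivot them into a single row with a query like the following; the statistic names are those reported by V$SYSSTAT:

SELECT MAX(DECODE(name, 'sorts (memory)', value)) memory_sorts,
       MAX(DECODE(name, 'sorts (disk)', value)) disk_sorts,
       MAX(DECODE(name, 'sorts (rows)', value)) rows_sorted
FROM   v$sysstat
WHERE  name LIKE 'sorts%';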

Of course, you may hear from your users if they run out of temporary space, because their applications will fail with an error if they can't acquire enough temporary space to complete their work. Be careful, however, not to confuse other space-related errors with a lack of space in the temporary tablespace.

Setting Storage Options for Your Temporary Tablespaces

As discussed earlier, temporary segments always build their extents based on the default storage values associated with their tablespace definitions. It's therefore critical that you build your temporary tablespaces with appropriate values in the default storage clause.

To understand what storage values are appropriate for temporary segments, you should think about what's being placed into these segments. It's the data that's being swapped out of the space in memory reserved for the type of activity that might need temporary space-either sorts or hash joins. Generally, the space for sorts is the smaller of these two, and the hash join space should be an integer multiple of the sort space size. Therefore, it makes the most sense to build extents that can hold at least the full amount of data likely to be flushed from memory during a sort.

If the extent is exactly the same size as the sort space, each set of data flushed to disk will need just one extent. If the extent is larger than the sort space but not a multiple of it, every other write or so will probably span two extents, which isn't as efficient as writing to a single extent. The problem recurs after the sort, when the data has to be read back from disk: some reads will need to skip from one extent to another. If the extent is an integer multiple of the sort size, each set of data flushed from memory will fit into some part of a single extent. However, the last set of data flushed may not fill the extent being used, and the extra space will be wasted until the subsequent processing completes and the entire extent is released.

It's almost never worth making each extent in a temporary segment the same size as the SORT_AREA_SIZE initialization parameter. If the sort requires more space than this parameter allocates, it will need to write at least two sets of data into its temporary segment-the first set that initially fills the sort space, and the balance of the sorted data. (If no balance were left over, the temporary segment wouldn't have been needed at all.) If the extent size had been double the sort space (2 * SORT_AREA_SIZE), only one extent would have been needed.

Use large extent sizes for temporary segments
For sorts that almost fit into memory, such as one that uses less than twice the current sort area size, you may well find yourself tuning memory to hold the sort data completely, leaving only the very large sorts in need of temporary space. Such large sorts may need to write a number of sort runs to disk, perhaps in the tens or twenties. It makes sense, therefore, to make each extent large enough to hold these multiple runs, thus avoiding the overhead of having to find the required extent space more than once. Your temporary extent sizes may well be 10 to 100 times as large as your SORT_AREA_SIZE, depending on your situation and database use.

You may also want to set MAXEXTENTS based on the type of tablespace you're using. If it's of type PERMANENT, you should ensure that each concurrent disk sort can grow its temporary segment to a size that will let the other segments reach the same size. For example, in a 200MB tablespace where you expect five concurrent disk sorts, each temporary segment should be allowed to grow to 40MB. If your extent size is 2MB, MAXEXTENTS would need to be 20.
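
Continuing this example, you could create the 200MB tablespace with default storage that supports five concurrent temporary segments of twenty 2MB extents each (the tablespace name and file path are hypothetical):

CREATE TABLESPACE temp_work
   DATAFILE '/u03/oradata/prod/temp_work01.dbf' SIZE 200M
   DEFAULT STORAGE (INITIAL 2M NEXT 2M PCTINCREASE 0 MAXEXTENTS 20);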

If you're using a TEMPORARY type tablespace, there will probably be only one temporary segment. You can ensure this by building the segment yourself right after you create your temporary tablespace, using the method described in Chapter 5. You may not want to do this if you're using Oracle Parallel Server, but we'll ignore that case here. If you build the temporary segment yourself, you'll know exactly how many extents the tablespace can contain, so set the tablespace's default MAXEXTENTS to that number. Of course, if your tablespace is built with files that can autoextend, you should make the MAXEXTENTS value larger.

SEE ALSO
Building tablespaces specifically for temporary segments,

Managing Your Temporary Segments

Due to their nature, you don't have much control over-and hence little management responsibility for-your temporary segments. Only a couple of issues might affect you: you may run out of space in your temporary tablespace, and you may decide to change the value of your SORT_AREA_SIZE parameter.

As with any other tablespace, if you run out of space in your temporary tablespace, you can add a new data file or extend an existing data file, assuming that you have the necessary disk space. After you do this, you may want to look at the MAXEXTENTS value for the tablespace to see whether it can be set higher to take advantage of the additional space.
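
For example, either of the following statements adds space to a temporary tablespace (the names, paths, and sizes are hypothetical):

ALTER TABLESPACE temp_work
   ADD DATAFILE '/u03/oradata/prod/temp_work02.dbf' SIZE 100M;

ALTER DATABASE
   DATAFILE '/u03/oradata/prod/temp_work01.dbf' RESIZE 300M;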

Should you change the value of the sort space size by altering the parameter SORT_AREA_SIZE, your temporary extents may no longer align with the I/O to and from the sort space. If you're using a PERMANENT type of tablespace, just changing the default storage values for INITIAL and NEXT to the new value will be sufficient to ensure that the next temporary segments will be appropriately sized. If you have a TEMPORARY type tablespace, however, you'll need to drop the segment from it to free up the space to build one with new extent sizes.

To drop the segment(s) from a TEMPORARY tablespace, you need to alter it to be a PERMANENT tablespace. This causes its contents to be treated like any other temporary segment, so they'll be dropped automatically. Before the replacement segment is built, make sure you change the tablespace's default storage settings to reflect the new extent size you require, just as you would for temporary segments in a PERMANENT tablespace; do this before altering the tablespace back to TEMPORARY status. As when creating a new tablespace of type TEMPORARY, you may want to prebuild the temporary segment to its maximum size as soon as the status is converted. The command to alter the tablespace type is

Converting a tablespace to TEMPORARY requires that all other segments be removed from it
Anytime a tablespace is of type PERMANENT, other types of segments in addition to temporary segments can be created in it. To switch it to a TEMPORARY type, all such segments must be removed. This is true whether the tablespace was originally created to hold different types of segments or if it was made PERMANENT for only a short time while the storage values were being changed.

ALTER TABLESPACE tablespace_name PERMANENT|TEMPORARY
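
Putting the whole sequence together for a hypothetical tablespace named temp_work, with a new extent size of 4MB:

ALTER TABLESPACE temp_work PERMANENT;

ALTER TABLESPACE temp_work
   DEFAULT STORAGE (INITIAL 4M NEXT 4M PCTINCREASE 0);

ALTER TABLESPACE temp_work TEMPORARY;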

SEE ALSO
More information about setting up temporary tablespaces and their storage options,
Initialization parameters associated with sorts,
