Teach Yourself Oracle 8 In 21 Days



- Day 8 -
Administering Redo Logs, Control Files, and Rollback Segments

In the last few days you saw how to create a database and how to add datafiles and tablespaces to customize that database for your own use. Remember, the instance is made up of the files used by Oracle, the processes or threads, and the memory used by Oracle. The files used by the database are made up of the following:

  • The datafiles. These are the actual files that are used to store tablespaces.

  • The redo log files (sometimes just known as the redo log). This is where all redo information is kept. Without the redo log files you would not be able to recover from a system failure.

  • The control files. These files contain information vital to the Oracle database. The information that is used to tell the instance where the datafiles and log files reside is stored in the control file.

  • The parameter file. This file contains tuning information that is used by Oracle at startup. This is commonly referred to as the init.ora file.

There are also the RDBMS binaries and other files such as backup files, archive log files, and so on, but they are not really part of the database itself, even though they are essential to the RDBMS.

As you saw on Day 7, "Administering Tablespaces," a tablespace can hold four different types of segments:

  • Data segment--Used to hold tables and clusters

  • Index segment--Used to hold indexes

  • Rollback segment--Special types of segments that are used to store undo information

  • Temporary segment--Used for storing temporary data

Today you will look at the rollback segments. Data segments, index segments, and temporary segments are covered on Days 12-15. The rollback segment is not only important to the basic function of the Oracle RDBMS, but it has performance implications as well.

Redo Log Files

The redo log files are used to store redo information. Each time data is changed in the database, a log record is written describing the change(s). With this information the database can be recovered in the event of a system failure.

If a catastrophic system failure occurs, such as a power failure, component failure, or disk failure, the Oracle instance is aborted immediately. When this happens, all changed data in the buffer cache is lost; only changes that have already been written out to disk are saved.

New Term: When Oracle is restarted, the information in the redo log file will be used to reproduce changes that have been made to the database, thus saving as much work as possible. All previously committed transactions will be recovered; this is also known as being rolled forward. All transactions that had modified data but had not been committed will be backed out; this is known as rolling back.

The redo log file is necessary for proper recovery. If this file is lost due to a disk failure, you will not be able to recover in the event of a system failure; therefore, you must protect the redo log file against this kind of failure. I recommend you use disk mirroring or RAID-1 on all redo log files.

Because the redo log files are so critical to the recoverability of the system, it is recommended that you do not use a caching disk controller with write-caching unless that cache is backed up with a battery. In the event of a power failure, you must make sure that no redo information is lost. It is often recommended that write-caching not be used at all on the redo log, but I feel that if you have a battery back-up your risk is reduced.


WARNING: If you use a write-caching on the controller that has the redo log files and it is not backed up with a battery, you are in danger of losing data. In the event of a power failure, you will lose redo information and might not be able to recover.

How Does the Redo Log Work?

Each change to the database is logged into the redo log. Because of this, in the event of a failure all changes made since the last backup can be recovered with the use of these redo log files. If the instance fails because of a power failure or other system failure, the redo log files can be used to recover all changes made since the last checkpoint.


NOTE: A checkpoint causes all in-cache data blocks that have not been written out to disk to be written out to disk. These unwritten, changed buffers are called dirty buffers. These dirty buffers are what cause the system to need to be recovered. If there are no dirty buffers when the system fails, recovery time will be instantaneous.

When a COMMIT operation is performed, the redo information is written into the redo log buffers. The LGWR process writes the redo log files with the information in the redo log buffer. The COMMIT operation is not completed until the redo log has been written. After that has occurred, that transaction is irrevocable and will be recovered in the event of a system failure. You can see how important the redo log file really is.

The redo log is made up of two or more redo log files or log file groups. A log file group is a set of one or more redo log files (members) that Oracle automatically mirrors; in this manner the redo log is protected against disk failure. If you are using disk mirroring to protect the redo log, it is not necessary to use log file groups; because the disk is protected, single log files are sufficient.

New Term: The redo log has two or more log files or log file groups that are used in an alternating fashion. When the first log file has filled up, logging moves to the next redo log file in the chain. If archiving is enabled, when a log file fills up and the log switch occurs, the contents of that file are copied to an archive log file. These archive log files are very important for the recoverability of the system in the event of a catastrophic failure.

Operations on the redo log files are done with the ALTER DATABASE command that was described on Day 6, "Administering Databases and Datafiles." Using the ALTER DATABASE command you can add redo log groups, add redo log files, rename redo log files, and so on.
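
If you want to see how the redo log is currently configured, you can query the V$LOG and V$LOGFILE dynamic performance views. The following queries are simply a quick way to check the group numbers, sizes, member file names, and archive status; the output will of course depend on your own database.

SQL> SELECT group#, members, bytes, archived, status
  2  FROM v$log;

SQL> SELECT group#, member
  2  FROM v$logfile;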

Log Switches and Checkpoints

New Term: Each time a redo log file or log file group fills up, it switches to the next redo log file in the sequence. This switch, called the log switch, causes several automatic events to occur:

  • Checkpointing--A log switch always causes a checkpoint to occur. The checkpoint flushes all dirty buffers from the Oracle buffer cache. This reduces the amount of time a recovery will take, if needed.

  • Archiving--If archiving is turned on (and it should be), the log switch causes the redo log file that was just active to copy its contents to an archive log file. This archive log file is used in recovery if needed.

  • Log Sequence Number--Each time a redo log file is reused, it is given a log sequence number. This log sequence number is also given to the associated archive log file. By having this number, the RDBMS can keep track of which log file and archive log files have been used.

Archiving and checkpointing are covered on Days 16 and 17, "Understanding Effective Backup Techniques" and "Recovering the Database," where backup and recovery are covered in detail.

Log Switch and Checkpoint Intervals

You can use the LOG_CHECKPOINT_INTERVAL and LOG_CHECKPOINT_TIMEOUT initialization parameters to control the checkpoint interval.

LOG_CHECKPOINT_INTERVAL

The LOG_CHECKPOINT_INTERVAL parameter is set by the administrator to a number of operating system blocks that are written to the redo log before a checkpoint occurs. For most operating systems the operating system block size is 512 bytes, so this parameter defines the number of 512-byte blocks that are used in the redo log before a checkpoint occurs.

If your redo log files are 10MB in size and you want the checkpoint interval to be one tenth of the redo log file or 1MB, use the following formula to determine the value of LOG_CHECKPOINT_INTERVAL:

LOG_CHECKPOINT_INTERVAL = 1MB / 512 (bytes/block) = 2,048 blocks

To accomplish this, set LOG_CHECKPOINT_INTERVAL = 2048 in the parameter file. To have the checkpoint occur only at log switches, set the value of LOG_CHECKPOINT_INTERVAL to be larger than the size of your redo log files.

LOG_CHECKPOINT_TIMEOUT

The LOG_CHECKPOINT_TIMEOUT parameter specifies a time interval, in seconds, at which a checkpoint will occur. This automatically runs the checkpoint process at that interval. To have a checkpoint occur every 10 minutes, for example, set LOG_CHECKPOINT_TIMEOUT = 600. By setting the checkpoint interval on a timer, you can be assured that checkpoints happen regularly, even if there is not much activity at the time.
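
Putting these two settings together, the relevant lines in the parameter file (init.ora) might look like the following. The values shown are only examples based on the 10MB redo log discussed above; choose values that match your own log file size and recovery requirements.

# Checkpoint every 2,048 redo blocks (about 1MB) or every 10 minutes,
# whichever comes first
LOG_CHECKPOINT_INTERVAL = 2048
LOG_CHECKPOINT_TIMEOUT = 600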

Forcing a Checkpoint

A checkpoint can be forced by hand. If you want to force a checkpoint, you can do it with the following command:

ALTER SYSTEM CHECKPOINT;

You might want to do this if you think that your system is at risk of some sort of failure, such as from a thunderstorm or other phenomenon that might cause a power outage or similar situation.

Forcing a Log Switch

As with the checkpoint, a log switch can be forced by hand. If you want to force a log switch, you can do it with the following command:

ALTER SYSTEM SWITCH LOGFILE;

It is only in rare circumstances that you will need to switch log files manually. You might do this to force an archive before system maintenance or some other event that puts the system at risk, or before your regular backup of the archive log files.

Sizing the Redo Log Files

Typically the size of the redo log file is based on the capability of the medium that will contain the archive log files. If the archive log files will be written out to cartridge tape that can hold 525MB, you should consider making the redo log files 520MB. This will allow you to copy one archive log file to tape and have a little space left over for a margin of error.

If you do not have a particular medium in mind for archiving, or if the space is unlimited, you should make the redo log file a manageable size. A very large redo log file, say 2GB in size, might be a little unmanageable. Copying such a file can take quite some time.

There is no rule of thumb for the size of the redo log files. Your own preference should help you decide on it. Remember, if you make them too big you could potentially go all day without performing a checkpoint. This can be dangerous because the longer you go without checkpointing, the longer the recovery interval would be in the event of a system failure.

Archiving the Redo Logs

When a log switch occurs, the log records in the filled redo log file are copied to an archive log file if archiving is enabled. This archiving is usually done automatically. Because the redo log file cannot be reused until the archive process has completed, you should make sure that you will not try to reuse that log file before the operation is complete. There are several ways to make sure that the archiving process happens quickly:

  • Archive to disk. You can archive to disk and then copy those archive log files to tape later. This will prevent the archiving process from waiting for a tape drive or other, slower medium to complete.

  • Use multiple log files. By having more than two redo log files, you can archive one or more filled log files while another is being used for logging.

Archiving is very important to maintain recoverability in the database. Archiving and backups are covered in more detail on Day 16.
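
As a preview of Day 16, enabling automatic archiving involves a few parameter file settings plus putting the database into ARCHIVELOG mode while it is mounted but not open. The destination directory and file name format shown here are only example values; Day 16 covers the details.

# init.ora: start the ARCH process automatically and tell it where to write
# (example destination and format)
LOG_ARCHIVE_START = TRUE
LOG_ARCHIVE_DEST = /u02/oradata/arch
LOG_ARCHIVE_FORMAT = arch%s.log

With the database mounted but not open, switch it into ARCHIVELOG mode:

ALTER DATABASE ARCHIVELOG;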

Adding Redo Log Files and Groups

As with many of the functions that you have seen in this book, there are several ways to add to the redo log. Most of these utilities have the option of using either a graphical or a command-line utility.

Using the Enterprise Manager

If you choose, you can add log files to the redo log via the Enterprise Manager. To do so, drill down into the database that you want to modify using the Navigator pane and right-click the Redo Log Groups icon. You will see the Create option. Select the Create option and you will see the Create Redo Log Group screen (see Figure 8.1). From here you fill in the size of the new redo log file that you want to create, specify a name under the New Members label, and click Add to add the redo log member. After you have clicked Add, that member will appear in the Current Members list, as shown in Figure 8.2.


NOTE: In this example I have closed all panes except for the Navigator pane.

After you have entered all the filenames for the redo log group members, click OK to create the log group. To add a member to an already existing group, right-click on the Redo Log Group icon. This will bring up the Create Redo Log Member screen, as shown in Figure 8.3.

Figure 8.1.

The Create Redo Log Group screen.

Figure 8.2.

The name of the new member of the redo log group appears in the Current Members list after you add it.

Figure 8.3.

The Create Redo Log Member screen.

Here you type the name of the redo log group member and click OK to create that member. To create a redo log group member with the same characteristics as an existing redo log group member, right-click the redo log group member that you want to duplicate and select the Create Like option. You will then see the Create Log Group Member screen, as shown in Figure 8.4. Simply type the new log group member's name and click the OK button to create the redo log group member.

Figure 8.4.

You can base a new log group member on an existing member after clicking Create Like.

The options for redo log group creation are somewhat limited in the Enterprise Manager. It is not possible to add redo log groups or group members from the Storage Manager.

Using the ALTER DATABASE Command

Log files or log file groups can be added or modified with the ALTER DATABASE command, as shown on Day 6. I prefer the command-line utilities because they can be scripted, and as such are a permanent record that can be used over and over again. An example of how to use the ALTER DATABASE command to add a log file is shown here:

ALTER DATABASE database
ADD LOGFILE ('log3a', 'log3b') SIZE 10M;

To add a new log file to an already existing group, you can use this command:

ALTER DATABASE database
ADD LOGFILE MEMBER 'log3c' TO GROUP 3;

If you don't know the group number, you can use the same command and specify the other members of the log file group, as in

ALTER DATABASE database
ADD LOGFILE MEMBER 'log3c' TO ('log3a', 'log3b');

As I have said before, by using a SQL script and the ALTER DATABASE command, you can preserve a permanent record of the change and then use the file as a template for other, similar operations.

Modifying Redo Log Files and Groups

As with many functions you have seen in this book, there are several ways to modify the redo log. Most of these tasks can be done with either a graphical or a command-line utility.

Using the Enterprise Manager

To modify a redo log group member, right-click that member via the Enterprise Manager's Navigator pane and choose the Quick Edit option. From here you will see the Quick Edit Redo Log Member screen, as shown in Figure 8.5. In this screen you can change the name of the redo log group member, thus causing the ALTER DATABASE RENAME FILE command to be run. As you can see here, the options for modifying the redo log via the Enterprise Manager are very limited. I prefer to use the ALTER DATABASE command because of its flexibility and options.

Figure 8.5.

The Quick Edit Redo Log Member screen.

Using the ALTER DATABASE Command

You can modify log files or log file groups with the ALTER DATABASE command, as shown earlier today and on Day 6. Again, I really prefer the command-line utilities because they can be scripted, and as such are a permanent record that can be used over and over again. For example, a redo log file can be renamed with the command

ALTER DATABASE database
RENAME FILE 'log1' TO 'log2';

Or you can delete a redo log file with this command:

ALTER DATABASE database
DROP LOGFILE 'log1';

You can drop an entire log file group with the following command:

ALTER DATABASE database
DROP LOGFILE GROUP 3;

If you don't know the number of the log file group, you can drop it by specifying the names of the log file group members:

ALTER DATABASE database
DROP LOGFILE ('log3a', 'log3b');

If necessary, you can drop just a log file group member with this syntax:

ALTER DATABASE database
DROP LOGFILE MEMBER 'log3b';

Characteristics of the Log Files

The redo log files are one of the few files in the Oracle database that are always written to in a sequential manner. Because redo records are only read during recovery, they are write-only files during normal operations.

Because of the sequential nature of the redo log files, by isolating these files onto separate disk volumes you can take advantage of the fact that sequential I/O is much faster than random I/O. Keep in mind that the archival operation reads from the redo log file, so if you have two redo log files on the same disk volume, the archive process in conjunction with the redo log operation will cause random I/O.


NOTE: I use the term disk volume to refer to either a disk drive or set of disk drives in a RAID array.

In most cases, the performance of the redo log operation is not a problem. If you are running in a high transaction rate environment, you might need to put each redo log file on its own disk volume.


TIP: The redo log files should be protected, either by using log file groups or with a RAID array. When using a RAID array (either hardware or software), use RAID-1 for the redo log files. RAID-1 offers the most protection and the fastest write performance.

The performance of the archive log volume is not as important as that of the redo log volume, but it is still fairly important. It is necessary that the archival operation be completed before you need to reuse the redo log file.

In many cases, archival information can be kept on another system and restored when necessary. If you are doing this, or are keeping your archive log files on tape, you might want to archive to disk first and then copy to tape or to the network so you can restore the data more quickly. If you are copying your data to a backup system, you can use RAID-5, which is slower but less costly. In any case, by archiving to a temporary area first, you free up the redo log file in the fastest possible time. This is covered in more detail on Day 16.

Control Files

Control files are used to keep information critical to the operation of the RDBMS. The control file (or files) resides on the operating system file system. These files are used in the startup of the instance to identify where the datafiles and redo log files are in the system. The loss of a control file can be devastating to the operation of the RDBMS. It is always a good idea to have multiple control files on different disk volumes so that a failure does not cause the loss of all the control files. You can add an additional control file after the database has been created by following these steps:

1. Shut down the Oracle instance.

2. Copy the control file to another location on another disk volume.

3. Edit the parameter file to include the new file name in the CONTROL_FILES parameter.

4. Restart the Oracle instance.
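
Step 3 simply means listing every copy of the control file in the CONTROL_FILES parameter. The entry in the parameter file might look like the following; the paths and file names are only examples, and each copy should ideally reside on a different disk volume.

# example paths; keep each copy on a separate disk volume
CONTROL_FILES = ("/u01/oradata/db01/control01.ctl",
                 "/u02/oradata/db01/control02.ctl",
                 "/u03/oradata/db01/control03.ctl")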

The control file can also be created using the CREATE CONTROLFILE command. This creation of the control file should be done only in extreme situations, such as when you need to rename a database or reproduce a control file because all control files are damaged and you don't have backups.

The best way to save and protect your control files is to use the ALTER DATABASE database BACKUP CONTROLFILE command. The options to the ALTER DATABASE database BACKUP CONTROLFILE command are as follows:

  • TO 'filename'--Copies the current control file to the file specified as filename. If the file already exists, the optional REUSE qualifier must be used.

  • TO TRACE--This optional parameter writes SQL to a trace file that can be used to re-create the control files. Optionally you can specify the qualifiers RESETLOGS or NORESETLOGS, which will add additional SQL to open the database with these options. The SQL statements are complete enough to start the database, re-create the control files, and recover and open the database appropriately.
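
For example, using database as a placeholder for your database name (as in the earlier examples) and an example path for the backup copy, you might run either of the following:

ALTER DATABASE database BACKUP CONTROLFILE TO '/backup/control.bkp' REUSE;

ALTER DATABASE database BACKUP CONTROLFILE TO TRACE;

The trace file produced by the second form is written to the directory specified by the USER_DUMP_DEST parameter.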


TIP: Any time you make changes to the structure of the database by adding datafiles, redo log files, and so on, run the command ALTER DATABASE database BACKUP CONTROLFILE TO TRACE. By doing this you will have a method of re-creating the control files if necessary. This will save you a lot of work if you have to recover the entire system.

The control files are an important part of your system, although they rarely require any maintenance. As long as you do a backup after you make any major changes, you should be in pretty good shape.

Rollback Segments

New Term: Rollback segments record transactional information that will be used in the event that the transaction is rolled back. They also provide read consistency and are used for database recovery. Rollback segments keep the undo information that is used in rollback operations.

Remember, a transaction can be finished by issuing either a COMMIT or a ROLLBACK statement. These statements perform completely opposite operations. A commit operation finishes a transaction by finalizing all the changes that have been made. When the commit operation has finished, the changes cannot be undone. In the event of a system failure, all changes made in this transaction will be recovered. A rollback operation causes all the changes made during the transaction to be undone. When the rollback operation has finished, you must resubmit the transaction to reproduce the changes that were made. After a rollback, it is as if the transaction never occurred.

New Term: Read consistency allows a long-running transaction to always obtain the same data within the query. During the transaction, the data is consistent to a single point in time and does not change. Even though the data might have been changed by another user and the DBWR might even have written it out, other transactions do not see those changes until a COMMIT has occurred. In fact, only transactions that start after this transaction has been committed see those changes.

Rollback segments can be either public or private. A private rollback segment can be used only by the instance that opened the database, and a public rollback segment can be used by any instance. If you are not running Oracle Parallel server, the private and public rollback segments are identical. Rollback segments must be carefully watched and can be tuned in several ways. It is important not only to size the rollback segments correctly but also to create the proper number of rollback segments and properly distribute them according to the number of user processes that require them.

Understanding How Rollback Segments Work

As a transaction is being processed, information relating to changes made to the datafiles by that transaction is constantly being written to the rollback segments. It is important that this information be saved because a rollback would require that all data be restored to its original condition.

The information written by the transaction to the rollback segments is held in rollback entries. Depending on the length of the transaction and the number of changes to data, there might be more than one rollback entry for each transaction. These entries are linked together so that they can easily be used in the event of a rollback.

The information stored in the rollback segments includes information about which blocks have been modified and the data as it was before the change occurred. Remember that the redo log also records information about changes in the database. The redo log, along with the rollback segments, can restore your data up to the point of failure.

Rollback segments are used concurrently by one or more transactions. You can tune the rollback segments to provide for optimal efficiency and space usage. Having more transactions sharing rollback segments causes more contention and uses space more efficiently. Having fewer transactions per rollback segment causes less contention and wastes more space.

New Term: Oracle maintains what is called a transaction table for each rollback segment. The transaction table stores information about what transactions use that rollback segment and the rollback entries for each change done by those transactions.

Each time a new transaction begins, it is assigned to a rollback segment. This can happen in one of two ways:

  • Automatically--Oracle automatically assigns the transaction a rollback segment. The assignment takes place when the first DDL or DML statement is issued. Queries are never assigned rollback segments.

  • Manually--The application can manually specify a rollback segment by using the SET TRANSACTION command with the USE ROLLBACK SEGMENT parameter. This allows the developer to choose the correct size of rollback segment for a particular task. The rollback segment is assigned for the duration of the transaction.
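
For example, a batch job that knows it will generate a large amount of undo might assign itself to a suitably sized rollback segment, as shown here. The segment and table names are only examples, and the assignment lasts until the transaction ends with a COMMIT or ROLLBACK.

-- rb_large and order_history are example names
SET TRANSACTION USE ROLLBACK SEGMENT rb_large;
UPDATE order_history SET status = 'ARCHIVED'
WHERE order_date < '01-JAN-97';
COMMIT;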

At the end of each transaction, when the commit operation has occurred, the rollback information is released from the rollback segment but is not deleted so as to maintain read-consistent views for other queries that started before the transaction was committed. To retain this information as long as possible, the rollback segments are written as a circular buffer.

You can think of rollback segments as a sort of circular buffer: A rollback segment must have at least two extents (usually more). When a transaction fills up one extent, it starts using the next extent in sequence. When it gets to the last extent, the transaction continues with extent 1 again if it is available, as shown in Figure 8.6.

Figure 8.6.

A logical representation of a rollback segment.

If the transaction uses the last extent in the segment, it looks to see whether the first extent is available. If it is not, another extent is created, as shown in Figures 8.7 and 8.8. The number of extents used for rollback segments is determined in the definitions of the rollback segments when you create them.

Figure 8.7.

A rollback segment with all extents used.

Figure 8.8.

A rollback segment showing dynamic growth.

Creating Rollback Segments

Rollback segments are created graphically with Enterprise Manager or Storage Manager or on the command line with the CREATE ROLLBACK SEGMENT command. Although all three of these are functional, some have more functionality than others, as you will learn in the following sections.

Using the Enterprise Manager

To create a rollback segment with Enterprise Manager, drill down through the database that you will be operating on until you get to the Rollback Segment entry. When you right-click this icon you are given a list of options.


NOTE: For this example I have chosen to display only the Navigator pane.

After you have selected Create from the options you will see the Create Rollback Segment screen. Type a name for the rollback segment and choose a tablespace in which you want to create the rollback segment.


TIP: I have clicked the Show SQL button to display the SQL used to create the rollback segment.

If you click the Online button, the SQL statement will be altered to put the rollback segment online after it is created, as shown in Figure 8.9. As you can see, there are not a whole lot of options available when creating rollback segments with the Enterprise Manager. It is also possible to create rollback segments with the Storage Manager, which gives you more flexibility and options.
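
The statements generated by the Show SQL button are essentially what you would type yourself. For a rollback segment named rb1 created in a tablespace named rbs (both names are only examples), they would look something like this:

-- rb1 and rbs are example names
CREATE ROLLBACK SEGMENT rb1
TABLESPACE rbs;

ALTER ROLLBACK SEGMENT rb1 ONLINE;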

Using the Storage Manager

The left side of the Storage Manager is similar to the Enterprise Manager, as shown in Figure 8.10.

Click on the Rollback Segment icon, and the tree in the left side of the Storage Manager will expand to show the existing rollback segments. The right side of the Storage Manager shows the name, tablespace name, status, size, and high water mark of the rollback segments. (The high water mark indicates the largest size the rollback segment has ever reached.)

Figure 8.9.

Clicking the Online button puts the rollback segment online after you create it.

Figure 8.10.

The Storage Manager.

By right-clicking on the Rollback Segment icon you will see a menu that includes an option to create a rollback segment. By clicking the Create button you will see a screen that is identical to the Create Rollback Segment screen used by the Enterprise Manager. This is shown in Figure 8.11.

Figure 8.11.

The Create Rollback Segment dialog box in the Storage Manager.

As you can see, with both the Enterprise Manager and the Storage Manager, the options available when creating rollback segments are very limited. To create rollback segments with more options, use the CREATE ROLLBACK SEGMENT command.

Using the CREATE ROLLBACK SEGMENT Command

If you create a rollback segment with the CREATE ROLLBACK SEGMENT command, you have two advantages: you can create the rollback segment with more options, and you can use cut and paste in your editor to create multiple rollback segments. If you are adding 50 or 100 rollback segments, using the Enterprise Manager or Storage Manager can become quite tedious. By using cut and paste in your editor and just changing the rollback segment name, you can more easily create a large number of rollback segments. The general syntax of the CREATE ROLLBACK SEGMENT command is shown here:

CREATE [ PUBLIC ] ROLLBACK SEGMENT rsname
TABLESPACE tsname
STORAGE (
    INITIAL number K or M
    NEXT number K or M
    OPTIMAL number K or M
    MINEXTENTS number
    MAXEXTENTS number
);

The parameters of CREATE PUBLIC ROLLBACK SEGMENT are as follows:

  • rsname--The name of the rollback segment you are creating.

  • TABLESPACE tsname--Specifies the name of the tablespace where that rollback segment will be created.

  • INITIAL number K or M--The initial extent size in K (kilobytes) or M (megabytes).

  • NEXT number K or M--The size of the second extent in K (kilobytes) or M (megabytes). With rollback segments, it is always a good idea to make all extents the same size because there is no distinction between different extents.

  • OPTIMAL number K or M--Specifies the size, in K (kilobytes) or M (megabytes), that you would like the rollback segment to try to stay. When extents are no longer needed, they are eliminated until this size is reached.

  • MINEXTENTS number--The minimum number of extents. This is also the number allocated when the segment is created.

  • MAXEXTENTS number--The maximum number of extents that can be dynamically allocated.

Initially, there are MINEXTENTS number of extents in the rollback segment. As extents fill up, they are used in a circular fashion, returning to the first extent when all others are filled. If a rollback segment has used all the space in all the extents and MAXEXTENTS has not been reached, another extent is created. If the size of the rollback segment is larger than OPTIMAL and there are unused extents, the unused extents are dropped from the rollback segment.
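
Here is a concrete example of the command; the segment name, tablespace name, and storage values are purely illustrative:

-- example names and sizes
CREATE PUBLIC ROLLBACK SEGMENT rb01
TABLESPACE rbs
STORAGE (
    INITIAL 100K
    NEXT 100K
    OPTIMAL 2M
    MINEXTENTS 10
    MAXEXTENTS 100
);

A newly created rollback segment is offline. Before transactions can use it, bring it online with ALTER ROLLBACK SEGMENT rb01 ONLINE, or list it in the ROLLBACK_SEGMENTS initialization parameter so that it is brought online automatically at instance startup.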

Both the creation and destruction of rollback segment extents cause overhead in the system. In addition to the overhead created by the addition of extents to a rollback segment, the transaction that needs to write into that rollback segment must wait for the extent to be created before it can continue. The following sections explain how to tune your rollback segments.

Tuning Rollback Segments

To properly configure a system's rollback segments, you must create enough rollback segments, and they must be of a sufficient size. That seems fairly simple, but it is not. You can observe how the rollback segments are being used, and from that determine what needs to be done.

Determining the Number of Rollback Segments

The number of rollback segments should be determined by the number of concurrent transactions in the database. Remember--the fewer transactions per rollback segment, the less contention. A good rule of thumb is to create about one rollback segment for every four concurrent transactions.

Rollback contention occurs when too many transactions try to use the same rollback segment at the same time, and some of them have to wait. You can tell whether you are seeing contention on rollback segments by looking at the dynamic performance table, V$WAITSTAT. Following is the data contained by V$WAITSTAT that is related to rollback segments:

  • UNDO HEADER--The number of waits for buffers containing rollback header blocks.

  • UNDO BLOCK--The number of waits for buffers containing rollback blocks other than header blocks.

  • SYSTEM UNDO HEADER--Same as UNDO HEADER for the SYSTEM rollback segment.

  • SYSTEM UNDO BLOCK--Same as UNDO BLOCK for the SYSTEM rollback segment.

The system rollback segment is the original rollback segment that was created when the database was created. This rollback segment is used primarily for special system functions but is sometimes used when no other rollback segment is available. Typically the SYSTEM rollback segment is not used, and you do not need to be concerned about it.

You can view these values with the SQL statement shown in Listing 8.1.

INPUT :

Listing 8.1. Rollback segment waits.

SQL> SELECT class, count
  2  FROM V$WAITSTAT
  3  WHERE class IN
  4  ('undo header', 'undo block', 'system undo header', 'system undo block');

OUTPUT:
CLASS                  COUNT
------------------ --------
system undo header         0
system undo block          0
undo header                0
undo block                 0

Compare these values with the total number of requests for data. Remember that the number of requests for data is equal to the sum of DB BLOCK GETS and CONSISTENT GETS from V$SYSSTAT. Also remember that you can extract that information with the query shown in Listing 8.2.

INPUT :

Listing 8.2. Total number of rollback requests.

SQL> SELECT SUM(value) "Data Requests"
  2  FROM v$sysstat
  3  WHERE name IN ('db block gets', 'consistent gets');

OUTPUT:

Data Requests
-------------
         5105

ANALYSIS:

If the number of waits for any of the rollback segment blocks or headers exceeds 1% of the total number of requests, you should reduce the contention by adding more rollback segments.

In this example the total number of requests is 5,105, as shown in Listing 8.2, whereas the numbers of waits were all 0, as shown in Listing 8.1. This indicates that there was no contention.
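
If you want to compute the ratio directly, you can combine the two queries. The following statement is only a sketch of one way to do it; it reports each undo-related wait class as a percentage of the total number of data requests:

SQL> SELECT w.class, w.count,
  2         ROUND(100 * w.count / t.gets, 2) "% of Requests"
  3  FROM v$waitstat w,
  4       (SELECT SUM(value) gets FROM v$sysstat
  5        WHERE name IN ('db block gets', 'consistent gets')) t
  6  WHERE w.class IN ('undo header', 'undo block',
  7                    'system undo header', 'system undo block');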

Determining the Size of Rollback Segments

The advantage of small rollback segments is that they tend to remain cached, but rollback segments that are too small will grow and shrink unnecessarily. Long-running transactions tend to like larger rollback segments because they usually generate more rollback information.

It is possible to create several different sizes of rollback segments. Each type of rollback segment should be used by the application developer based on the type and length of the transaction (see the following).

  • OLTP--OLTP transactions are characterized by many concurrent transactions, each modifying perhaps only a small amount of data. These types of transactions benefit from a reduction of contention and quick access from cached rollback segments. Try to create many small rollback segments of perhaps 10KB to 20KB in size, each with 2 to 4 extents (optimally with a rollback segment available for each transaction).

    The small size of the rollback segments provides a better chance of their being cached in the SGA. There is probably very little dynamic growth of the extents.

  • Long Queries--For long queries where read consistency calls for quite a bit of rollback information to be accessed, use a larger rollback segment. A good rule of thumb is to create rollback segments approximately 10% the size of the largest table (most SQL statements affect only about 10% of the data in a table).

  • Large Updates--For transactions that update large amounts of data, you should also use a larger rollback segment. As is the case with the long queries, it is appropriate to create rollback segments approximately 10% the size of the largest table.

Determining the Size and Number of Extents

In general, the best rollback I/O performance is obtained when there are approximately 10 to 20 extents of equal size per rollback segment. To determine the size and number of extents, use the following formulas:

Rollback segment size = Rsize = Size of largest table / 10

Number of extents = NE = 10

Size of extents = Esize = Rsize / NE

When creating the rollback segments, use the value of Esize for INITIAL and NEXT; use the value of NE for MINEXTENTS. Even when using these rules, you might not achieve the most effective size for your rollback segments. If dynamic growth is occurring, you might be losing performance.
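
For example, if the largest table in your database is about 200MB (a figure chosen only for illustration), the formulas give Rsize = 200MB / 10 = 20MB, NE = 10, and Esize = 20MB / 10 = 2MB. The resulting rollback segment might be created like this; the segment and tablespace names are only examples:

-- example: largest table assumed to be about 200MB
CREATE PUBLIC ROLLBACK SEGMENT rb_batch
TABLESPACE rbs
STORAGE (
    INITIAL 2M
    NEXT 2M
    OPTIMAL 20M
    MINEXTENTS 10
    MAXEXTENTS 100
);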

Avoiding Dynamic Growth

As stated earlier, you want to avoid the dynamic space management that causes additional overhead and transactional delays. To determine whether rollback segments are a problem, look in the dynamic performance table, V$ROLLSTAT. The following columns are of particular interest:

  • EXTENTS--Number of rollback extents.

  • RSSIZE--The size (in bytes) of the rollback segment.

  • OPTSIZE--The size to which OPTIMAL was set.

  • AVEACTIVE--The current average size of active extents. Active extents are defined as extents with uncommitted transaction data.

  • AVESHRINK--The total size of freed extents divided by the number of shrinks (see SHRINKS, two items following).

  • EXTENDS--The number of times the rollback segment added an extent.

  • SHRINKS--The number of times the rollback segment shrank. Each shrink may be one or more extents at a time.

  • HWMSIZE--The high water mark of rollback segment size. This is the largest that the segment size ever grew to be.

You can look at these statistics by using a SQL statement like the one shown in Listing 8.3.

INPUT: Listing 8.3. Check statistics.


SQL> SELECT substr(name,1,40), extents, rssize, aveactive,
  2         aveshrink, extends, shrinks
  3  FROM v$rollname rn, v$rollstat rs
  4  WHERE rn.usn = rs.usn;

OUTPUT :

SUBSTR(NAME,1,40)      EXTENTS  RSSIZE AVEACTIVE AVESHRINK   EXTENDS  SHRINKS
---------------------- ------- ------- --------- --------- ---------  -------
SYSTEM                       4  202752         0         0         0        0
RB_TEMP                     53  540672     23929         0         0        0
RB1                          2  202752         0         0         0        0
RB2                          2  202752     55193         0         0        0

If the average size is close to the size set for OPTIMAL, OPTIMAL is set correctly. If either extends or shrinks is high, you must increase the value for OPTIMAL.

Summary

You have now seen all the basic files that are used by Oracle. The Oracle instance is made up of the Oracle files, the processes, and the memory used by Oracle. The Oracle files are made up of the datafiles, the redo log files, the control files, and the parameter file. There are more files used in your day-to-day operations such as archive log files, export files, and so on, but they are not critical to the operation of the Oracle instance.

This chapter covers the redo log files, control files, and rollback segments. The redo log files are used to store system redo information, which is used to recover the database in the event of a catastrophic system failure. The redo log, in conjunction with the archive log files, enables Oracle to recover transactions that were committed before the failure.

The control file is used to keep internal Oracle control information. This file is used to tell Oracle where the data and redo log files are in the operating system. The control files are critical to the operation of the instance.

Finally, you learned about rollback segments. Rollback segments keep undo information that is used to roll back transactions and for read consistency. The rollback segments have a lot of tuning options, which you learned about in this chapter.

What's Next?

In tomorrow's lesson, "Managing Data," you will learn how to use several of the data-manipulation tools that come with your server: Export, Import, and SQL*Loader. Export and Import are used to move data in and out of the database. The SQL*Loader utility is used only for loading data into the database.

Q&A

Q What files are used in the Oracle instance?

A There are several different files used in the Oracle instance. There are one or more datafiles, two or more redo log files, the control files, and the parameter file.

Q What Oracle object is used for recovery operations?

A The redo log files and the archive log files are used for database recovery.

Q What Oracle object is used for read consistency?

A The rollback segments are used for read consistency.

Q What is read consistency?

A Read consistency allows a long-running transaction to always obtain the same data within the query.

Workshop

The workshop provides quiz questions to help you solidify your understanding of the material covered and exercises to provide you with experience in using what you've learned. For answers to quiz questions, see Appendix A, "Answers."

Quiz

1. How many redo log files do you need?

2. What is a log group?

3. What is a control file used for?

4. Can you have more than one control file?

5. Can the instance be started without a control file?

6. What can you do to help re-create the control file?

7. Does a log switch force a checkpoint?

8. Does a checkpoint force a log switch?

9. What is a rollback segment used for?

10. How big is a rollback segment?

Exercises

1. Back up your control file to trace.

2. Use the Storage Manager to determine how much space your rollback segments use.

3. Add another control file to your system.

