Friday, September 11, 2009

Restore spfile or controlfile from autobackup beyond 366 days

The restore of spfile and control file from autobackup using maxdays option will fail if the maxdays specified is beyond 366 days:
RMAN> restore controlfile from autobackup maxseq 200 maxdays 500;
Starting restore at 21-oct-08 14:58
using channel ORA_DISK_1

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 10/21/2008 14:58:42
RMAN-06494: MAXDAYS = 500 is out of range (1-366)
So how do you restore the spfile and control file from a backup taken more than 366 days ago? I found that when RMAN restores from an autobackup file, it does not check the timestamp of the autobackup file itself but the timestamp encoded in the autobackup file name. By changing the autobackup file name so that the timestamp falls within the accepted range, RMAN can find and read the autobackup file and restore the controlfile / spfile successfully.
Without the maxdays option, RMAN will only look for an autobackup file with a timestamp within the last 7 days. So if today is 22 Oct 08 and your autobackup file name in $ORACLE_HOME/dbs is "c-1154185322-20081012-01", the "restore controlfile from autobackup" command will not be able to find or read the autobackup file.
Change it to "c-1154185322-20081021-01" and RMAN will be able to restore the control file from autobackup with the "restore controlfile from autobackup" command. I don't have a controlfile autobackup created more than 366 days ago, but I think this trick can also get around the 366-day maxdays limitation.
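As a sketch of the rename trick, assuming the autobackup sits in a dbs directory (the DBID 1154185322 comes from the post; the /tmp/dbs path and empty file are stand-ins, not a real backup piece):

```shell
# Stand-in for an autobackup directory; a real one would be $ORACLE_HOME/dbs.
mkdir -p /tmp/dbs
cd /tmp/dbs
# Simulate an autobackup piece whose %F-encoded date (2008-10-12) is too old.
touch c-1154185322-20081012-01
# Rename it so the embedded date falls inside RMAN's 7-day search window
# (pretending today is 2008-10-21). The piece's contents are untouched;
# only the name RMAN parses for the timestamp changes.
mv c-1154185322-20081012-01 c-1154185322-20081021-01
ls c-1154185322-*
```

After the rename, "restore controlfile from autobackup" should be able to locate the piece by its new name.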

RMAN Restore and Recover of a Database When the Repository and spfile/init.ora Files are Lost

Oracle Database RMAN Restore/Recovery without spfile/init.ora Files
You have for whatever reason lost your spfile and your initORACLESID.ora files and you find yourself in a situation where you need to restore and recover your database.
This doesn't happen very often and hopefully it never happens to you. The situation can arise during a clone (duplicating a database using RMAN), but whatever the reason may be for the loss of your initialization files (media problems, etc.), here's an outline of steps that may help to get your database back up and running.
If you only have the physical backuppieces on disk or tape and have lost your database and initialization files, this is basically a disaster scenario. The same applies when you need to restore and recover a database from an old backup that has long since been deleted or aged out of the RMAN repository (for example, a disk backup that was copied to tape prior to its deletion from RMAN).
Oracle 8i: there are no spfiles, and the disk backuppieces need to be restored to or located in their original backup location. If there are backuppieces unknown to the restored control file, it isn't possible to catalog them in this release, but you can manually extract the information from the backuppieces using PL/SQL.
Oracle 9i: you can restore the spfile as well as the controlfile from an autobackup. You can't catalog backuppieces unknown to the restored controlfile (same as 8i), but you can use PL/SQL to extract the information from them. The disk backuppieces must be located in their original backup locations.
Oracle 10g and 11g: you can restore the spfile and controlfile from autobackups, you don't have to place the original disk backuppieces in the exact original backup location, you can catalog backuppieces that are missing from the restored controlfile, and you can use PL/SQL to extract any missing information that you may need.
The goal is to restore the spfile and controlfile either from an autobackup controlfile or via a manual extraction process from an rman backuppiece so that the instance can be mounted. The restored controlfile then becomes the RMAN repository (no catalog), allowing a ‘normal’ catalog-free restore and recovery of the database to be executed.
The restored controlfile should therefore also contain the metadata for the database and archivelog backups which will need to be restored and recovered.
The following constraints have to be acknowledged and understood:
It is not possible to restore an init.ora file from a backuppiece as init.ora files are never included in a backup for any Oracle release.
It is your responsibility to identify the backuppieces required for the restore. This is where RMAN reports/listings come into play: if you generate these after each backup and store them somewhere, you can refer back to them later.
The DBID of the target database has to be known.
Unless you are using 10g+, all disk backuppieces must reside in the original backup location; if you need to restore to a new host, an identical backup directory must be created on that host.
If you have to restore the spfile, controlfiles and datafiles by extracting this information from the backuppieces you will need to contact Oracle Support Services to work with them in order to accomplish these tasks.
Keep the backup logs for at least one complete backup cycle so that the backuppieces and the target DBID can be identified by reviewing these files.
Pick a format for the backuppiece that makes it easy to identify these files by name should the log files no longer be available.
8i Only – backup your init.ora file using any means possible. I have a crontab script that executes nightly that creates copies of my spfiles (10g and 11g) as well as the listener.ora, tnsnames.ora and other files that aid in the recovery process should I find myself in a situation that requires access to these files. I’ll include the script I use at the end of this post.
8i Only – make sure you backup the controlfile after every backup or archivelog backup. This is just good practice, and you'll see why if you find yourself in a situation where you need this file. You should always have a controlfile backup that contains the latest backup metadata.
9i+ always use an spfile because RMAN can backup this file automatically whenever an operation occurs (within RMAN) that warrants a backup.
9i+ always turn on CONTROLFILE AUTOBACKUP ON
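The two 9i+ recommendations above can be made persistent from the RMAN prompt; the disk format and path below are illustrative, not required values:

```
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u01/backup/%F';
RMAN> SHOW CONTROLFILE AUTOBACKUP;
```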
Become as familiar as possible with your RMAN configuration: monitor where the backups are written, review the logs daily, review your backup frequency, and most importantly know where the backup of the controlfile is located.
Restoring When Everything is Gone
Restore the init.ora file – applies for All Releases. RMAN has to connect to a running instance (a database started with an init.ora file in nomount mode). If an init.ora file is used rather than a spfile this needs to be restored from an OS backup or manually recreated in the proper format.
Restoring the SPFILE and CONTROLFILE when CONTROLFILE AUTOBACKUP is NOT USED – relevant to Oracle 8i (where controlfile AUTOBACKUP didn’t exist), Oracle 9i+ where the persistent configuration CONTROLFILE AUTOBACKUP was set to OFF (not the default).
Figure out the latest controlfile backuppiece – if at all possible this should contain the latest metadata for the database and archivelog backups that are required for the restore/recovery. Use the past RMAN logs, check available backup directories on disk, or query the media manager for your tape catalog to identify this information.
Extract the controlfile – in Oracle 9i+ you can restore the control file from an explicit backuppiece. Start the database in nomount mode via SQL*Plus and issue the command RMAN> restore controlfile from 'backuppiece';. If you are having problems identifying which backuppiece may contain a controlfile, you will have to try various backuppieces until you locate one. If you cannot locate one you'll have to call Oracle Support for help, and you'll need to provide them with the following information: the current schema of the database, the files that were backed up, what time they were backed up, and the names of the backuppieces containing the database's files.
Mount the Instance – once the controlfile has been located and extracted you can continue and the next step is to mount the instance so jump down to Step 3.
Restoring the SPFILE and CONTROLFILE when CONTROLFILE AUTOBACKUP is ON – Oracle 9i+ where the parameter CONTROLFILE AUTOBACKUP is set to ON (default). The autobackup controlfile feature is there to help ease the pain of recovery when the RMAN repository is gone. Controlfile autobackup formats have to include %F, which makes the files easy to find and identify: %F expands to c-IIIIIIIIII-YYYYMMDD-QQ, where IIIIIIIIII is the DBID, YYYYMMDD is the day the backup was generated, and QQ is a sequence number. If the controlfile backups are written to disk, the default location is $ORACLE_HOME/dbs (unix) and $ORACLE_HOME/database (windows), so look for the autobackup files there first. If you don't find them, they were written to a non-default directory; if you don't know where that directory is, you'll have to do a 'find' in unix to locate them. Once you find the file, you can use 'SET CONTROLFILE AUTOBACKUP FORMAT' in the script below to force RMAN to use the correct location for the autobackup file. For autobackups written to tape, no further action is required other than to allocate a channel – check the rman logs to find out what media manager environment variables, if any, were specified for the channel, and set up a similar if not identical variable configuration.
Startup the Instance in NOMOUNT mode as follows (you can't use SQL*Plus for this so don't try):

$ rman target /

RMAN> startup nomount;
RMAN> set dbid=dbid#;
RMAN> run {
  set controlfile autobackup format for device type disk to 'path/%F';
  restore spfile from autobackup;
  restore controlfile from autobackup;
  startup force mount;
}
Notes: You get the DBID from the name of the controlfile autobackup file or from the rman backup logs. If you can't find or identify the DBID you're stuck. The 'set controlfile autobackup format' is only necessary if the controlfile autobackup is in a non-default location – set 'path' accordingly. If you are using tapes for the restore: set controlfile autobackup format for device type sbt to '%F';. 'startup FORCE mount' is required because the instance was originally started without an spfile; if you try to mount without restarting, it will fail.
3. Query the Backup History in the Restored Controlfile – Oracle 8i, 9i, 10g and 11g – after you've restored the controlfile and mounted the instance, you need to confirm that the controlfile actually contains the metadata for the database and archivelog backups that will be required to restore and recover the database. You can use commands similar to the following for this purpose:
RMAN> list backup;
RMAN> list backup of database completed after 'date';
RMAN> list backup of database completed between 'start date' and 'end date';
The date must be in the appropriate NLS format (NLS_DATE_FORMAT).
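For example, to get full timestamps (not just dates) accepted by the LIST commands, you can set NLS_DATE_FORMAT in the shell before starting RMAN; the format mask and date below are just illustrative choices:

```
$ export NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS'
$ rman target /

RMAN> list backup of database completed after '01-OCT-2008 00:00:00';
```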
Once you've found the database backup(s) you need, get the checkpoint SCN of the backup (you'll need this for the restore). You can see the SCN in the output of the above list commands. Then check that the archivelogs needed for the recovery have been backed up and recorded:
RMAN> list backup of archivelog from scn=xxxx;
Now, if all the metadata required for the restore and recovery is present, you can continue with the restore and recovery steps. You need to pick an archive log sequence number to specify for the restore and recovery process.
RMAN> run {
  set until sequence xx thread 1;
  restore database;
  recover database;
  alter database open resetlogs;
}
3.1 Restored CONTROLFILE does NOT CONTAIN THE DATABASE BACKUP METADATA – Oracle Releases 8i and 9i – locate a later (newer) controlfile and try again. Otherwise, phone Oracle Support Services for assistance.
Once the datafiles and archivelogs have been extracted, the recovery can be started via RMAN to recover up to (and including) log sequence xx.
RMAN> run {
  set until sequence xx thread 1;
  recover database;
  alter database open resetlogs;
}
Oracle Release 10g+
Catalog the required backuppiece(s) into the restored controlfile.
For Disk Backup Pieces:
RMAN> catalog backuppiece 'path/X';
Where 'X' is the name of the backuppiece.
All of the information regarding the backuppiece content is in the backuppiece header. Once you catalog the backuppiece you can query its contents (list backupset xx;) and proceed with the restore and recovery of the database as normal.
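As a hedged sketch (the directory and piece name below are hypothetical), 10g+ also lets you catalog a whole directory of pieces rather than naming each one individually:

```
RMAN> catalog backuppiece '/u01/backup/ora_df_s100_p1.bkp';
RMAN> catalog start with '/u01/backup/';
RMAN> list backupset;
```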

Oracle9i/10g RMAN Duplicate Database Errors

Recovering from RMAN Errors
The options for manual completion of an RMAN duplication task vary depending on which phase of the duplication failed. The following examples cover Oracle 9i and 10g. Manual completion steps are outlined per phase; most likely you'll only need to execute a couple of the steps, depending on where the failure occurred.
Restore Failure
If you receive an error during the RMAN restore of the database, you need to determine what caused the problem and fix it. If very few files have been restored, it may be easier to just start over and rerun the task from the beginning. If the duplication process failed after running for a long period of time and you would rather not start from the beginning (especially if it takes hours), then you can try to recover manually and attempt to complete the process.
A likely cause if you are going from one server to another server is missing files or the rare outside possibility of a bad block in a file required for restore. You need to address whatever problem caused the file(s) to be missing from the restore location.
Remember that in order to successfully duplicate a database using RMAN's DUPLICATE feature, ALL files required to restore the database must be present on the remote server in the same exact location, unless you catalog the files because they are intentionally located in another directory. RMAN will not even start the restore process if it can't find the backupsets in the expected location. A reason RMAN may be interrupted in this phase is that during the copy operation you ran out of disk space in the filesystem where you were depositing the backupsets, the backup of the current controlfile, and the backup spfile, and RMAN can't locate one of the backupsets or archivelog files.
$ export ORACLE_SID=name_of_database_cloning_to
$ rman target /

RMAN> run {
  set until scn xxxxxxxx;
  restore controlfile from 'restore directory';
  alter database mount;
  set newname for datafile 1 to '';
  set newname for datafile 2 to '';
  ...
  restore datafile 1,2,...;
}
You need to identify the SCN from the output from the failed RMAN duplicate log and you must use the ‘SET NEWNAME’ for each datafile that remains to be restored as DB_FILE_NAME_CONVERT will not work with a normal restore.
For Oracle 10g+ it really is best to start over if RMAN failed during the first phase or restore step. Any files that have already been restored will be skipped and the duplicate process can be restarted without manual intervention.
Phase 2 Failure
This is the controlfile creation or the switch of the datafile names after the datafiles have been restored. Review the log files, identify what the problem is, and make a list of all the datafiles that have not been switched over. You can then attempt to complete this step manually by renaming each datafile if the auxiliary instance uses a different file structure or a different ASM Disk Group than the target's directory structure or ASM Disk Group.
After the rename (switch) of all the datafiles that need to be renamed:
CREATE CONTROLFILE REUSE SET DATABASE RESETLOGS ARCHIVELOG…
SQL> alter database backup controlfile to trace;
Phase 3 Failure
Failure during the recovery of the restored datafiles. This is the next phase where each datafile is recovered to either a point in time or SCN. Determine the cause from the log file and then to continue after fixing the problem:
$ rman target / auxiliary sys/@

RMAN> run {
  set until scn xxxxxxx;
  recover clone database;
  alter clone database open resetlogs;
}
Get the 'UNTIL SCN' value from the duplicate logfile and connect to the target. At 10g, archivelogs will be restored automatically, into the Flash Recovery Area if one is defined. After completing recovery, change the Database Identifier (DBID) using the DBNEWID (nid) utility:

$ nid target=sys/oracle

DBNEWID: ...
Change database ID of database AUX? (Y/N) => Y
The manual duplication process should be complete and you can jump down to Step 6 – Final Actions.
Phase 4 Failure
This phase is the controlfile recreation phase. Check the rman duplicate log and identify the reason the recovery didn’t complete – look for:
media recovery complete
Finished recover at ...
Figure out what the problem is, fix the cause, and then execute the following:
CREATE CONTROLFILE REUSE SET DATABASE ‘AUX’ RESETLOGS ARCHIVELOG…
Make sure ALL the files have been restored in the DATAFILE section of the RMAN duplicate log.
Phase 5 Failure
This failure would be in the phase that opens the database with resetlogs. Check the log file again and fix the problem. Look for: Thread x closed at log sequence y
If the resetlogs was completed, determine what caused the error, fix the problem, and restart the auxiliary instance. If the resetlogs wasn't completed successfully, determine what caused the problem and then open the clone database with resetlogs using RMAN (you can't use SQL*Plus for this step); connect to the target database first:
$ rman target / auxiliary sys/oracle@

RMAN> alter clone database open resetlogs;
If the duplication process failed only in step 5 then you are done; no further action is required, and the DBID will have already been changed. Otherwise, execute the NID command to change the DBID (Windows).
Final Steps
$ rman target / auxiliary sys/oracle@

RMAN> alter clone database open resetlogs;
Add any missing temp files to the new cloned auxiliary database. Files that were manually restored to the auxiliary instance will be cataloged as datafile copies. Connect to the original target and execute:
RMAN> list copy of database;
RMAN> crosscheck copy of datafile ;
RMAN> delete expired copy of datafile ;

RMAN Duplicate Database Feature in 11G

RMAN Duplicate Database Feature in 11G
You can create a duplicate database using the RMAN DUPLICATE command. The duplicate database will have a different DBID from the source database and functions entirely independently once the duplication has taken place and the duplicated database is open for use.
Starting with Oracle 11g there are now two ways to duplicate a database:
1. Active database duplication
2. Backup-based duplication
Active database duplication copies the live running database over the network to the auxiliary destination and creates the duplicate database there. Backup-based duplication requires making the backupset(s) available to the destination database, for example by copying them over NFS. The key difference between the two is that active duplication does not need pre-existing RMAN backups and copies (archivelogs). The duplication work is performed by the auxiliary channel; this channel corresponds to a server session on the auxiliary instance on the auxiliary (destination) host.
The active database duplication is slower because it is using the network to transport the data blocks instead of accessing existing RMAN backupsets. RMAN carries out the following steps as part of the duplication process:
1. Creates a control file for the duplicate database
2. Restarts the auxiliary instance and mounts the duplicate control file
3. Creates the duplicate datafiles and recovers them with incremental backups and archived redo logs
4. Opens the duplicate database with the RESETLOGS option
In the case of active database duplication, RMAN copies the target datafiles over the network to the auxiliary (destination) instance.
A RAC target database can be duplicated as well; this feature is not restricted to non-RAC databases. The procedure is the same as what is outlined below. If the auxiliary database needs to be a RAC database, then you duplicate to a single instance first and convert the auxiliary to RAC after the duplicate process has succeeded.
The next section is devoted to describing the process for Active Database Duplication.
Active Database Duplication
1. Prepare the auxiliary database instance
2. Create the initialization parameter file for the Auxiliary instance
If you are using SPFILE then according to Oracle the only parameter required for the duplicate database is the DB_NAME parameter. The rest of the parameters can be set in the DUPLICATE command. If you are not using the SPFILE technique for your Oracle initialization parameter file then you will need to create an init.ora file and set the initialization parameters. Again according to Oracle the only required parameters are:
DB_NAME
CONTROL_FILES
DB_BLOCK_SIZE
DB_FILE_NAME_CONVERT
LOG_FILE_NAME_CONVERT
DB_RECOVERY_FILE_DEST
However, I have found that using a full parameter file – a copy of one of my running database's parameter files – is easier than specifying the parameters in the DUPLICATE command itself. That is a personal preference and you should go with whatever works best for you. I use SPFILEs for all my Oracle databases, and whenever I need to duplicate one of them I generate (create) an init.ora initialization parameter file for use with RMAN. I then use my favorite editor to search and replace the database-specific parameters with my new database's parameters, such as DB_NAME, DB_RECOVERY_FILE_DEST, etc.
3. Create the Oracle Password file for the Auxiliary Instance – this is a requirement for RMAN: in order to duplicate a database, you need to connect directly to the auxiliary instance using a password file with the same SYSDBA password as the target database's. The passwords have to match exactly in order for this to work. Alternatively, you can specify the PASSWORD FILE option on the DUPLICATE command, in which case RMAN will copy the source database password file to the destination host, overwriting any existing password file with the same name as the auxiliary instance's.
4. Make the necessary changes to the listener.ora and tnsnames.ora files in order to establish SQL*Net connectivity before starting the RMAN duplicate session. You have to be able to connect through Oracle Net to both the target and the auxiliary instance in order to use Active Database duplication.
5. Start the Auxiliary instance from SQL*Plus – start the database in nomount mode, but first take care of the following steps:
I always create a link in $ORACLE_HOME/dbs to the initialization file, and I simply rename this link when I'm finished with RMAN to switch over to using an spfile instead of the pfile method. This way you do not have to type in a lot of characters whenever you stop and start the auxiliary instance, because more than likely you will have to execute the process more than once unless you get it right the very first time! I set up the ADR – in Oracle 11g, the new Automatic Diagnostic Repository – directories prior to bringing up the auxiliary instance in nomount mode (if you aren't using ADR, set up the $ORACLE_BASE/admin/$ORACLE_SID/bdump/... directories instead). You will need to create the supporting directories prior to starting the auxiliary instance.
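A minimal sketch of pre-creating the supporting directories for a pre-ADR layout, assuming an auxiliary SID of NEW and a scratch ORACLE_BASE of /tmp/oracle (both stand-ins for your real values):

```shell
# Stand-in base; a real install would use the actual ORACLE_BASE.
ORACLE_BASE=/tmp/oracle
ORACLE_SID=NEW
# Pre-ADR dump destinations the instance expects at startup.
for d in bdump cdump udump adump pfile; do
  mkdir -p "$ORACLE_BASE/admin/$ORACLE_SID/$d"
done
ls "$ORACLE_BASE/admin/$ORACLE_SID"
```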
Windows:
Create the parameter file (using Oracle’s example):
initNEW.ora:

DB_NAME=NEW
diagnostic_dest='E:\opt\oracle'
DB_FILE_NAME_CONVERT=('I:\oradata\OLD','E:\oradata\NEW')
LOG_FILE_NAME_CONVERT=('I:\oradata\onlinelog\OLD','E:\oradata\onlinelog\NEW')
SGA_TARGET=26214400
CONTROL_FILES='E:\oradata\controlfile\NEW\control01.dbf'
COMPATIBLE=11.1.0.0.0
Create the Database Service (Windows Only) and password file:
% set ORACLE_SID=NEW
% set ORACLE_HOME=E:\opt\app\oracle\product\11.1.0\db_1
% oradim -NEW -SID NEW
% orapwd FILE=E:\opt\app\oracle\product\11.1.0\db_1\database\PWDNEW.ora PASSWORD=sys
% sqlplus "/ as sysdba"
SQL> startup nomount;
6. Create the necessary Oracle Net connectivity in the listener.ora and tnsnames.ora files – make sure you specify SERVER = DEDICATED in the tnsnames.ora entry for your auxiliary instance. If you are using Oracle on HP-UX 11i v3 Itanium, there is a bug that requires you to add the string '(UR=A)' to the tnsnames.ora entry (put it after the SERVICE_NAME entry). Confirm connectivity to the target and auxiliary, and you may want to confirm connectivity to your RMAN catalog even though you will not be using the catalog for this type of database duplication.
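A hypothetical tnsnames.ora entry for the auxiliary instance showing the dedicated-server setting and the (UR=A) placement (the host, port, and service name are examples only):

```
NEW =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = hostb)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = NEW)
      (UR = A)
    )
  )
```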
7. Start RMAN and connect to the source database by specifying the source as the target database. The duplicate database instance will be specified in the AUXILIARY connection. You can invoke the RMAN client on any host so long as that host has connectivity and you can connect to all of the required database instances. If the auxiliary instance requires a text-based initialization parameter file (pfile) then this file must exist and it must reside on the same host that runs the RMAN client application.
% rman
RMAN> connect target sys/sys@old;
RMAN> connect auxiliary sys/sys;
8. Next you will issue the DUPLICATE database command to start the duplicate process. The simplest case is when you duplicate the target database to a different host and use a different directory structure. This example assumes the target database is on hosta and contains four datafiles; you duplicate the target to database NEW on a different host (hostb), which has a different directory structure. The tablespace USERS is a read-only tablespace for the purposes of this example. Execute the duplicate database command from the auxiliary site:
RMAN> duplicate target database to 'NEW' from active database db_file_name_convert 'i:\oradata\OLD','e:\oradata\NEW';
A dedicated listener configuration for RMAN is required: instance registration requires that the database be mounted in order to register with the listener, and RMAN also requires SYSDBA access to the nomount (auxiliary) instance. The control files will be created using the location and names specified in the Oracle initialization file. The use of log_file_name_convert and db_file_name_convert instructs RMAN to generate the "set newname" commands for you. You can create your own set of commands instead of using these two parameters, and you can use the 'logfile' clause to specify where the log files will be created and what size they will be.

Tuesday, August 18, 2009

Oracle 11g AWR

Automatic Performance Statistics
This chapter discusses the gathering of performance statistics. This chapter contains the following topics:
· Overview of Data Gathering
· Overview of the Automatic Workload Repository
· Managing the Automatic Workload Repository
5.1 Overview of Data Gathering
To effectively diagnose performance problems, statistics must be available. Oracle generates many types of cumulative statistics for the system, sessions, and individual SQL statements. Oracle also tracks cumulative statistics on segments and services. When analyzing a performance problem in any of these scopes, you typically look at the change in statistics (delta value) over the period of time you are interested in. Specifically, you look at the difference between the cumulative value of a statistic at the start of the period and the cumulative value at the end.
Cumulative values for statistics are generally available through dynamic performance views, such as the V$SESSTAT and V$SYSSTAT views. Note that the cumulative values in dynamic views are reset when the database instance is shut down. The Automatic Workload Repository (AWR) automatically persists the cumulative and delta values for most of the statistics at all levels except the session level. This process is repeated on a regular time period and the result is called an AWR snapshot. The delta values captured by the snapshot represent the changes for each statistic over the time period. See "Overview of the Automatic Workload Repository".
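As an illustrative sketch of what a snapshot delta looks like, this query subtracts one snapshot's persisted cumulative value from the next snapshot's for a single statistic; the snapshot IDs 100 and 101 and the statistic name are examples only:

```sql
-- Delta of one cumulative statistic between two consecutive AWR snapshots
SELECT e.stat_name,
       e.value - b.value AS delta_value
FROM   dba_hist_sysstat b,
       dba_hist_sysstat e
WHERE  b.snap_id = 100            -- example begin snapshot
AND    e.snap_id = 101            -- example end snapshot
AND    b.dbid = e.dbid
AND    b.instance_number = e.instance_number
AND    b.stat_id = e.stat_id
AND    e.stat_name = 'user calls';
```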
Another type of statistic collected by Oracle is called a metric. A metric is defined as the rate of change in some cumulative statistic. That rate can be measured against a variety of units, including time, transactions, or database calls. For example, the number of database calls per second is a metric. Metric values are exposed in some V$ views, where the values are the average over a fairly small time interval, typically 60 seconds. A history of recent metric values is available through V$ views, and some of the data is also persisted by AWR snapshots.
A third type of statistical data collected by Oracle is sampled data. This sampling is performed by the active session history (ASH) sampler. ASH samples the current state of all active sessions. This data is collected into memory and can be accessed by a V$ view. It is also written out to persistent store by the AWR snapshot processing. See "Active Session History".
A powerful tool for diagnosing performance problems is the use of statistical baselines. A statistical baseline is a collection of statistic rates, usually taken over a time period where the system is performing well at peak load. Comparing statistics captured during a period of bad performance to a baseline helps discover specific statistics that have increased significantly and could be the cause of the problem.
AWR supports the capture of baseline data by enabling you to specify and preserve a pair or range of AWR snapshots as a baseline. Carefully consider the time period you choose as a baseline; the baseline should be a good representation of the peak load on the system. In the future, you can compare these baselines with snapshots captured during periods of poor performance.
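Baselines can be created with the DBMS_WORKLOAD_REPOSITORY package; the snapshot IDs and baseline name below are placeholders, not values from a real system:

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE(
    start_snap_id => 100,              -- example begin snapshot
    end_snap_id   => 105,              -- example end snapshot
    baseline_name => 'peak_load_baseline');
END;
/
```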
Oracle Enterprise Manager is the recommended tool for viewing both real time data in the dynamic performance views and historical data from the AWR history tables. Enterprise Manager can also be used to capture operating system and network statistical data that can be correlated with AWR data. For more information, see Oracle Database 2 Day + Performance Tuning Guide.
This section covers the following topics:
· Database Statistics
· Operating System Statistics
· Interpreting Statistics
5.1.1 Database Statistics
Database statistics provide information on the type of load on the database and the internal and external resources used by the database. This section describes some of the more important statistics:
· Wait Events
· Time Model Statistics
· Active Session History
· System and Session Statistics
5.1.1.1 Wait Events
Wait events are statistics that are incremented by a server process/thread to indicate that it had to wait for an event to complete before being able to continue processing. Wait event data reveals various symptoms of problems that might be impacting performance, such as latch contention, buffer contention, and I/O contention.
To enable easier high-level analysis of the wait events, the events are grouped into classes. The wait event classes include: Administrative, Application, Cluster, Commit, Concurrency, Configuration, Idle, Network, Other, Scheduler, System I/O, and User I/O.
The wait classes are based on a common solution that usually applies to fixing a problem with the wait event. For example, exclusive TX locks are generally an application level issue and HW locks are generally a configuration issue.
The following list includes common examples of the waits in some of the classes:
· Application: lock waits caused by row level locking or explicit lock commands
· Commit: waits for redo log write confirmation after a commit
· Idle: wait events that signify the session is inactive, such as SQL*Net message from client
· Network: waits for data to be sent over the network
· User I/O: wait for blocks to be read off a disk
Wait event statistics for an instance include statistics for both background and foreground processes. Because you will typically focus your tuning efforts on foreground activities, overall instance activity is broken down into foreground and background statistics in the relevant V$ views to facilitate tuning.
The V$SYSTEM_EVENT view provides wait event statistics for the foreground activities of an instance and the wait event statistics for the instance as a whole. The V$SYSTEM_WAIT_CLASS view provides these wait event statistics (foreground and instance statistics) after aggregating to wait classes.
The V$SESSION_EVENT and V$SESSION_WAIT_CLASS views provide wait event and wait class statistics at the session level.
See Also:
Oracle Database Reference for more information about Oracle wait events
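As an illustration, instance-wide wait time can be summarized by class using the V$SYSTEM_WAIT_CLASS view mentioned above (a sketch; TIME_WAITED is reported in hundredths of a second, and column details should be verified against Oracle Database Reference for your release):

```sql
-- Summarize instance-wide wait time by wait class, excluding Idle waits.
-- TIME_WAITED is in centiseconds, so divide by 100 to get seconds.
SELECT wait_class,
       total_waits,
       ROUND(time_waited / 100, 2) AS time_waited_secs
FROM   v$system_wait_class
WHERE  wait_class <> 'Idle'
ORDER  BY time_waited DESC;
```

Ordering by time waited rather than wait count avoids over-weighting frequent but inexpensive waits.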
5.1.1.2 Time Model Statistics
When tuning an Oracle system, each component has its own set of statistics. To look at the system as a whole, it is necessary to have a common scale for comparisons. Because of this, most Oracle advisories and reports describe statistics in terms of time. In addition, the V$SESS_TIME_MODEL and V$SYS_TIME_MODEL views provide time model statistics. Using the common time instrumentation helps to identify quantitative effects on the database operations.
The most important of the time model statistics is DB time. This statistic represents the total time spent in database calls and is an indicator of the total instance workload. It is calculated by aggregating the CPU and wait times of all sessions not waiting on idle wait events (non-idle user sessions).
DB time is measured cumulatively from the time that the instance was started. Because DB time is calculated by combining the times from all non-idle user sessions, it is possible for DB time to exceed the actual time elapsed since the instance started up. For example, an instance that has been running for 30 minutes could have four active user sessions whose cumulative DB time is approximately 120 minutes.
The objective for tuning an Oracle system could be stated as reducing the time that users spend in performing some action on the database, or simply reducing DB time. Other time model statistics provide quantitative effects (in time) on specific actions, such as logon operations and hard and soft parses.
See Also:
Oracle Database Reference for information about the V$SESS_TIME_MODEL and V$SYS_TIME_MODEL views
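For example, the time model statistics for the instance can be listed with a query such as the following (a sketch; VALUE in V$SYS_TIME_MODEL is reported in microseconds):

```sql
-- List time model statistics, largest time consumer first.
-- On a busy system, DB time and DB CPU typically dominate.
SELECT stat_name,
       ROUND(value / 1000000, 2) AS seconds
FROM   v$sys_time_model
ORDER  BY value DESC;
```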
5.1.1.3 Active Session History
The V$ACTIVE_SESSION_HISTORY view provides sampled session activity in the instance. Active sessions are sampled every second and are stored in a circular buffer in SGA. Any session that is connected to the database and is waiting for an event that does not belong to the Idle wait class is considered as an active session. This includes any session that was on the CPU at the time of sampling.
Each session sample is a set of rows and the V$ACTIVE_SESSION_HISTORY view returns one row for each active session per sample, returning the latest session sample rows first. Because the active session samples are stored in a circular buffer in SGA, the greater the system activity, the smaller the number of seconds of session activity that can be stored in the circular buffer. This means that the duration for which a session sample appears in the V$ view, or the number of seconds of session activity that is displayed in the V$ view, is completely dependent on the database activity.
As part of the Automatic Workload Repository (AWR) snapshots, the content of V$ACTIVE_SESSION_HISTORY is also flushed to disk. Because the content of this V$ view can get quite large during heavy system activity, only a portion of the session samples is written to disk.
By capturing only active sessions, a manageable set of data is represented with the size being directly related to the work being performed rather than the number of sessions allowed on the system. Using the Active Session History enables you to examine and perform detailed analysis on both current data in the V$ACTIVE_SESSION_HISTORY view and historical data in the DBA_HIST_ACTIVE_SESS_HISTORY view, often avoiding the need to replay the workload to gather additional performance tracing information. Active Session History also contains execution plan information for each SQL statement that is captured. This information can be used to identify which part of the SQL execution contributed most significantly to the SQL elapsed time. The data present in ASH can be rolled up on various dimensions that it captures, including the following:
· SQL identifier of SQL statement
· SQL plan identifier and hash value of the SQL plan used to execute the SQL statement
· SQL execution plan information
· Object number, file number, and block number
· Wait event identifier and parameters
· Session identifier and session serial number
· Module and action name
· Client identifier of the session
· Service hash identifier
· Consumer group identifier
See Also:
Oracle Database Reference for more information about the V$ACTIVE_SESSION_HISTORY view
Active Session History information over a specified duration can be gathered into a report. For more information, see "Generating Active Session History Reports".
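For example, ASH data can be rolled up on the SQL identifier dimension with a query along these lines (the 10-minute window is an arbitrary illustration):

```sql
-- Approximate DB time per SQL statement over the last 10 minutes:
-- each ASH sample represents roughly one second of active session time.
SELECT sql_id,
       COUNT(*) AS active_samples
FROM   v$active_session_history
WHERE  sample_time > SYSTIMESTAMP - INTERVAL '10' MINUTE
  AND  sql_id IS NOT NULL
GROUP  BY sql_id
ORDER  BY active_samples DESC;
```

Because ASH samples once per second, the sample count for a SQL ID is a reasonable estimate of the DB time that statement consumed in the window.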
5.1.1.4 System and Session Statistics
A large number of cumulative database statistics are available on a system and session level through the V$SYSSTAT and V$SESSTAT views.
See Also:
Oracle Database Reference for information about the V$SYSSTAT and V$SESSTAT views
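As a sketch, a few commonly examined cumulative statistics can be read directly from V$SYSSTAT (the statistic names shown are standard, but see Oracle Database Reference for the full list):

```sql
-- Read selected cumulative system statistics since instance startup.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('user commits', 'physical reads', 'parse count (hard)');
```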
5.1.2 Operating System Statistics
Operating system statistics provide information on the usage and performance of the main hardware components of the system, and the performance of the operating system itself. This information is crucial for detecting potential resource exhaustion, such as CPU cycles and physical memory, and for detecting bad performance of peripherals, such as disk drives.
Operating system statistics are only an indication of how the hardware and operating system are working. Many system performance analysts react to a hardware resource shortage by installing more hardware. This is a reactionary response to a series of symptoms shown in the operating system statistics. It is always best to consider operating system statistics as a diagnostic tool, similar to the way many doctors use body temperature, pulse rate, and patient pain when making a diagnosis. To help identify bottlenecks, gather operating system statistics for all servers in the system under performance analysis.
Operating system statistics include the following:
· CPU Statistics
· Virtual Memory Statistics
· Disk I/O Statistics
· Network Statistics
For information on tools for gathering operating statistics, see "Operating System Data Gathering Tools".
5.1.2.1 CPU Statistics
CPU utilization is the most important operating system statistic in the tuning process. Get CPU utilization for the entire system and for each individual CPU on multi-processor environments. Utilization for each CPU can detect single-threading and scalability issues.
Most operating systems report CPU usage as time spent in user space or mode and time spent in kernel space or mode. These additional statistics allow better analysis of what is actually being executed on the CPU.
On an Oracle data server system, where there is generally only one application running, the server runs database activity in user space. Activities required to service database requests (such as scheduling, synchronization, I/O, memory management, and process/thread creation and tear down) run in kernel mode. In a system where all CPU is fully utilized, a healthy Oracle system runs between 65% and 95% in user space.
The V$OSSTAT view captures machine-level information in the database, making it easier for you to determine if there are hardware-level resource issues. The V$SYSMETRIC_HISTORY view shows a one-hour history of the Host CPU Utilization metric, a representation of percentage of CPU usage at each one-minute interval. The V$SYS_TIME_MODEL view supplies statistics on the CPU usage by the Oracle database. Using both sets of statistics enables you to determine whether the Oracle database or other system activity is the cause of the CPU problems.
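For example, the host-level CPU counters can be read from V$OSSTAT (a sketch; the available statistic names can vary by platform and release):

```sql
-- Host CPU counters as seen by the database (times in centiseconds).
SELECT stat_name, value
FROM   v$osstat
WHERE  stat_name IN ('NUM_CPUS', 'BUSY_TIME', 'IDLE_TIME',
                     'USER_TIME', 'SYS_TIME');
```

Comparing BUSY_TIME here with DB CPU from V$SYS_TIME_MODEL indicates how much of the host CPU consumption is attributable to the database.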
5.1.2.2 Virtual Memory Statistics
Virtual memory statistics should mainly be used as a check to validate that there is very little paging or swapping activity on the system. System performance degrades rapidly and unpredictably when paging or swapping occurs.
Individual process memory statistics can detect memory leaks due to a programming failure to deallocate memory taken from the process heap. These statistics should be used to validate that memory usage does not increase after the system has reached a steady state after startup. This problem is particularly acute on shared server applications on middle tier machines where session state may persist across user interactions and, on completion, state information is not fully deallocated.
5.1.2.3 Disk I/O Statistics
Because the database resides on a set of disks, the performance of the I/O subsystem is very important to the performance of the database. Most operating systems provide extensive statistics on disk performance. The most important disk statistics are the current response time and the length of the disk queues. These statistics show if the disk is performing optimally or if the disk is being overworked.
Measure the normal performance of the I/O system; typical values for a single block read range from 5 to 20 milliseconds, depending on the hardware used. If the hardware shows response times much higher than the normal performance value, then it is performing badly or is overworked. This is your bottleneck. If disk queues start to exceed two, then the disk is a potential bottleneck of the system.
Oracle Database also maintains a consistent set of I/O statistics for the I/O calls it issues. These statistics are captured for both single and multi block read and write operations in the following dimensions:
· Consumer group
When Oracle Database Resource Manager is enabled, I/O statistics for all consumer groups that are part of the currently enabled resource plan are captured in the V$IOSTAT_CONSUMER_GROUP view. These cumulative statistics will be sampled every hour and stored as historical statistics in the AWR.
· Database file
I/O statistics of database files that are or have been accessed are captured in the V$IOSTAT_FILE view.
· Database function
I/O statistics for database functions (such as the LGWR and DBWR) are captured in the V$IOSTAT_FUNCTION view.
For information about using views in Oracle Database to identify I/O problems, see "Identifying I/O Problems Using V$ Views".
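For example, the per-function I/O statistics can be summarized with a query such as the following (a sketch; the column names are from V$IOSTAT_FUNCTION and should be verified against your release):

```sql
-- I/O volume broken down by database function (LGWR, DBWR, and so on).
SELECT function_name,
       small_read_megabytes  + large_read_megabytes  AS read_mb,
       small_write_megabytes + large_write_megabytes AS write_mb
FROM   v$iostat_function
ORDER  BY read_mb + write_mb DESC;
```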
5.1.2.4 Network Statistics
Network statistics can be used in much the same way as disk statistics to determine if a network or network interface is overloaded or not performing optimally. In today's networked applications, network latency can be a large portion of the actual user response time. For this reason, these statistics are a crucial debugging tool.
Oracle Database maintains a set of network I/O statistics in the V$IOSTAT_NETWORK view. For information about using the V$IOSTAT_NETWORK view in Oracle Database to identify network issues, see "Identifying Network Issues".
5.1.2.5 Operating System Data Gathering Tools
Table 5-1 shows the various tools for gathering operating statistics on UNIX. For Windows, use the Performance Monitor tool.
Table 5-1 UNIX Tools for Operating Statistics
Component    UNIX Tool
---------    ---------------------------
CPU          sar, vmstat, mpstat, iostat
Memory       sar, vmstat
Disk         sar, iostat
Network      netstat

5.1.3 Interpreting Statistics
When initially examining performance data, you can formulate potential theories by examining your statistics. One way to ensure that your interpretation of the statistics is correct is to perform cross-checks with other data. This establishes whether a statistic or event is really of interest. Also, because foreground activities are tunable, it is better to first analyze the statistics from foreground activities before analyzing the statistics from background activities.
Some pitfalls are discussed in the following sections:
· Hit ratios
When tuning, it is common to compute a ratio that helps determine whether there is a problem. Such ratios include the buffer cache hit ratio, the soft-parse ratio, and the latch hit ratio. These ratios should not be used as 'hard and fast' identifiers of whether or not there is a performance bottleneck. Rather, they should be used as indicators. In order to identify whether there is a bottleneck, other related evidence should be examined. See "Calculating the Buffer Cache Hit Ratio".
· Wait events with timed statistics
Setting TIMED_STATISTICS to true at the instance level directs the Oracle server to gather wait time for events, in addition to wait counts already available. This data is useful for comparing the total wait time for an event to the total elapsed time between the performance data collections. For example, if the wait event accounts for only 30 seconds out of a two hour period, then there is probably little to be gained by investigating this event, even though it may be the highest ranked wait event when ordered by time waited. However, if the event accounts for 30 minutes of a 45 minute period, then the event is worth investigating. See "Wait Events".
Note:
Timed statistics are automatically collected for the database if the initialization parameter STATISTICS_LEVEL is set to TYPICAL or ALL. If STATISTICS_LEVEL is set to BASIC, then you must set TIMED_STATISTICS to TRUE to enable collection of timed statistics. Note that setting STATISTICS_LEVEL to BASIC disables many automatic features and is not recommended.
If you explicitly set DB_CACHE_ADVICE, TIMED_STATISTICS, or TIMED_OS_STATISTICS, either in the initialization parameter file or by using ALTER SYSTEM or ALTER SESSION, the explicitly set value overrides the value derived from STATISTICS_LEVEL.
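For example, you can check the current settings, and enable timed statistics explicitly when STATISTICS_LEVEL is BASIC (a sketch; changing parameters requires the appropriate privileges):

```sql
-- Check the statistics collection parameters.
SELECT name, value
FROM   v$parameter
WHERE  name IN ('statistics_level', 'timed_statistics');

-- Enable timed statistics explicitly (only needed when
-- STATISTICS_LEVEL = BASIC, which is not recommended).
ALTER SYSTEM SET timed_statistics = TRUE;
```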
· Comparing Oracle statistics with other factors
When looking at statistics, it is important to consider other factors that influence whether the statistic is of value. Such factors include the user load and the hardware capability. Even an event that had a wait of 30 minutes in a 45 minute snapshot might not be indicative of a problem if you discover that there were 2000 users on the system, and the host hardware was a 64 node machine.
· Wait events without timed statistics
If TIMED_STATISTICS is false, then the amount of time waited for an event is not available. Therefore, it is only possible to order wait events by the number of times each event was waited for. Although the events with the largest number of waits might indicate the potential bottleneck, they might not be the main bottleneck. This can happen when an event is waited for a large number of times, but the total time waited for that event is small. The converse is also true: an event with fewer waits might be a problem if the wait time is a significant proportion of the total wait time. Without having the wait times to use for comparison, it is difficult to determine whether a wait event is really of interest.
· Idle wait events
Oracle uses some wait events to indicate if the Oracle server process is idle. Typically, these events are of no value when investigating performance problems, and they should be ignored when examining the wait events. See "Idle Wait Events".
· Computed statistics
When interpreting computed statistics (such as rates, statistics normalized over transactions, or ratios), it is important to cross-verify the computed statistic with the actual statistic counts. This confirms whether the derived rates are really of interest: small statistic counts usually can discount an unusual ratio. For example, on initial examination, a soft-parse ratio of 50% generally indicates a potential tuning area. If, however, there was only one hard parse and one soft parse during the data collection interval, then the soft-parse ratio would be 50%, even though the statistic counts show this is not an area of concern. In this case, the ratio is not of interest due to the low raw statistic counts.
See Also:
o "Setting the Level of Statistics Collection" for information about STATISTICS_LEVEL settings
o Oracle Database Reference for information on the STATISTICS_LEVEL initialization parameter
5.2 Overview of the Automatic Workload Repository
The Automatic Workload Repository (AWR) collects, processes, and maintains performance statistics for problem detection and self-tuning purposes. This data is both in memory and stored in the database. The gathered data can be displayed in both reports and views.
The statistics collected and processed by AWR include:
· Object statistics that determine both access and usage statistics of database segments
· Time model statistics based on time usage for activities, displayed in the V$SYS_TIME_MODEL and V$SESS_TIME_MODEL views
· Some of the system and session statistics collected in the V$SYSSTAT and V$SESSTAT views
· SQL statements that are producing the highest load on the system, based on criteria such as elapsed time and CPU time
· Active Session History (ASH) statistics, representing the history of recent session activity
Gathering database statistics using the AWR is enabled by default and is controlled by the STATISTICS_LEVEL initialization parameter. The STATISTICS_LEVEL initialization parameter should be set to TYPICAL or ALL to enable statistics gathering by the AWR. The default setting is TYPICAL. Setting the STATISTICS_LEVEL parameter to BASIC disables many Oracle Database features, including the AWR, and is not recommended. If the STATISTICS_LEVEL parameter is set to BASIC, you can still manually capture AWR statistics using the DBMS_WORKLOAD_REPOSITORY package. However, because in-memory collection of many system statistics—such as segments statistics and memory advisor information—will be disabled, the statistics captured in these snapshots may not be complete. For information about the STATISTICS_LEVEL initialization parameter, see Oracle Database Reference.
This section describes the Automatic Workload Repository and contains the following topics:
· Snapshots
· Baselines
· Adaptive Thresholds
· Space Consumption
5.2.1 Snapshots
Snapshots are sets of historical data for specific time periods that are used for performance comparisons by ADDM. By default, Oracle Database automatically generates snapshots of the performance data once every hour and retains the statistics in the workload repository for 8 days. You can also manually create snapshots, but this is usually not necessary. The data in the snapshot interval is then analyzed by the Automatic Database Diagnostic Monitor (ADDM). For information about ADDM, see "Overview of the Automatic Database Diagnostic Monitor".
AWR compares the difference between snapshots to determine which SQL statements to capture based on the effect on the system load. This reduces the number of SQL statements that must be captured over time.
For information about managing snapshots, see "Managing Snapshots".
5.2.2 Baselines
A baseline contains performance data from a specific time period that is preserved for comparison with other similar workload periods when performance problems occur. The snapshots contained in a baseline are excluded from the automatic AWR purging process and are retained indefinitely.
There are several types of available baselines in Oracle Database:
· Fixed Baselines
· Moving Window Baseline
· Baseline Templates
5.2.2.1 Fixed Baselines
A fixed baseline corresponds to a fixed, contiguous time period in the past that you specify. Before creating a fixed baseline, carefully consider the time period you choose as a baseline, because the baseline should represent the system operating at an optimal level. In the future, you can compare the baseline with other baselines or snapshots captured during periods of poor performance to analyze performance degradation over time.
For information about managing fixed baselines, see "Managing Baselines".
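For example, a fixed baseline can be created from an existing pair of snapshots with the DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE procedure (the snapshot IDs and baseline name below are hypothetical):

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE (
    start_snap_id => 270,               -- hypothetical snapshot ID
    end_snap_id   => 280,               -- hypothetical snapshot ID
    baseline_name => 'peak_baseline');
END;
/
```

The snapshots in the 270-280 range are then excluded from automatic AWR purging.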
5.2.2.2 Moving Window Baseline
A moving window baseline corresponds to all AWR data that exists within the AWR retention period. This is useful when using adaptive thresholds because the AWR data in the entire AWR retention period can be used to compute metric threshold values.
Oracle Database automatically maintains a system-defined moving window baseline. The default window size for the system-defined moving window baseline is the current AWR retention period, which by default is 8 days. If you are planning to use adaptive thresholds, consider using a larger moving window—such as 30 days—to accurately compute threshold values. You can resize the moving window baseline by changing the number of days in the moving window to a value that is equal to or less than the number of days in the AWR retention period. Therefore, to increase the size of a moving window, you must first increase the AWR retention period accordingly.
For information about resizing the moving window baseline, see "Modifying the Window Size of the Default Moving Window Baseline".
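For example, to use a 30-day moving window, first extend the AWR retention period and then resize the window (a sketch; the values are illustrative):

```sql
BEGIN
  -- Retention is specified in minutes: 43200 minutes = 30 days.
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS (retention => 43200);
  -- The moving window size is specified in days and must not
  -- exceed the AWR retention period.
  DBMS_WORKLOAD_REPOSITORY.MODIFY_BASELINE_WINDOW_SIZE (window_size => 30);
END;
/
```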
5.2.2.3 Baseline Templates
You can also create baselines for a contiguous time period in the future using baseline templates. There are two types of baseline templates: single and repeating.
A single baseline template can be used to create a baseline for a single contiguous time period in the future. This is useful if you know beforehand of a time period that you want to capture in the future. For example, you may want to capture the AWR data during a system test that is scheduled for the upcoming weekend. In this case, you can create a single baseline template to automatically capture the time period when the test will take place.
A repeating baseline template can be used to create and drop baselines based on a repeating time schedule. This is useful if you want Oracle Database to automatically capture a contiguous time period on an ongoing basis. For example, you may want to capture the AWR data during every Monday morning for a month. In this case, you can create a repeating baseline template to automatically create baselines on a repeating schedule for every Monday, and automatically remove older baselines after a specified expiration interval such as one month.
For information about managing baseline templates, see "Managing Baseline Templates".
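For example, a single baseline template for a scheduled weekend test could be created as follows (the dates, names, and expiration value are hypothetical):

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE_TEMPLATE (
    start_time    => TO_DATE('2009-09-12 09:00', 'YYYY-MM-DD HH24:MI'),
    end_time      => TO_DATE('2009-09-13 17:00', 'YYYY-MM-DD HH24:MI'),
    baseline_name => 'test_weekend_bl',
    template_name => 'test_weekend_tpl',
    expiration    => 30);  -- drop the baseline after 30 days
END;
/
```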
5.2.3 Adaptive Thresholds
Adaptive thresholds enable you to monitor and detect performance issues while minimizing administrative overhead. Adaptive thresholds can automatically set warning and critical alert thresholds for some system metrics using statistics derived from metric values captured in the moving window baseline. The statistics for these thresholds are recomputed weekly and might result in new thresholds as system performance evolves over time. In addition to recalculating thresholds weekly, adaptive thresholds might compute different threshold values for different times of the day or week based on periodic workload patterns.
For example, many databases support an online transaction processing (OLTP) workload during the day and batch processing at night. The performance metric for response time per transaction can be useful for detecting degradation in OLTP performance during the day. However, a useful OLTP threshold value is almost certainly too low for batch workloads, where long-running transactions might be common. As a result, threshold values appropriate to OLTP might trigger frequent false performance alerts during batch processing. Adaptive thresholds can detect such a workload pattern and automatically set different threshold values for the daytime and nighttime.
Note:
In Oracle Database 11g Release 1 (11.1), Oracle Database automatically determines the appropriate time groupings for a database. However, before Oracle Database 11g Release 1 (11.1), time groupings were specified manually by the database administrator.
There are two types of adaptive thresholds:
· Percentage of maximum: The threshold value is computed as a percentage multiple of the maximum value observed for the data in the moving window baseline.
· Significance level: The threshold value is set to a statistical percentile that represents how unusual it is to observe values above the threshold value based on the data in the moving window baseline. Specify one of the following percentiles:
o High (.95): Only 5 in 100 observations are expected to exceed this value.
o Very High (.99): Only 1 in 100 observations are expected to exceed this value.
o Severe (.999): Only 1 in 1,000 observations are expected to exceed this value.
o Extreme (.9999): Only 1 in 10,000 observations are expected to exceed this value.
Note:
When you specify Severe (.999) or Extreme (.9999), Oracle Database performs an internal calculation to set the threshold value. In some cases, Oracle Database cannot establish the threshold value at these levels using the data in the baseline, and the significance level threshold is not set.
If you are not receiving alerts as expected, and you specified a Severe (.999) or Extreme (.9999) significance level threshold, then you can try setting the significance level threshold to a lower value, such as Very High (.99) or High (.95). Alternatively, you can set a percentage of maximum threshold instead of a significance level threshold. If you change the threshold and find that you are receiving too many alerts, then you can try increasing the number of occurrences to cause an alert.
Percentage of maximum thresholds are most useful when a system is sized for peak workloads, and you want to be alerted when the current workload volume is approaching or exceeding previous high values. Metrics that have an unknown but definite limiting value are good candidates for these settings. For example, the redo generated per second metric is typically a good candidate for a percentage of maximum threshold.
Significance level thresholds are most useful for metrics that should exhibit statistically stable behavior when the system is operating normally, but might vary over a wide range when the system is performing poorly. For example, the response time per transaction metric should be stable for a well-tuned OLTP system, but may fluctuate widely when performance issues arise. Significance level thresholds are meant to generate alerts when conditions produce both unusual metric values and unusual system performance.
Note:
The primary interface for managing baseline metrics is Oracle Enterprise Manager. To create an adaptive threshold for a baseline metric, use Oracle Enterprise Manager, as described in Oracle Database 2 Day + Performance Tuning Guide.
See Also:
"Moving Window Baseline"
5.2.4 Space Consumption
The space consumed by the Automatic Workload Repository is determined by several factors:
· Number of active sessions in the system at any given time
· Snapshot interval
The snapshot interval determines the frequency at which snapshots are captured. A smaller snapshot interval increases the frequency, which increases the volume of data collected by the Automatic Workload Repository.
· Historical data retention period
The retention period determines how long this data is retained before being purged. A longer retention period increases the space consumed by the Automatic Workload Repository.
By default, snapshots are captured once every hour and are retained in the database for 8 days. With these default settings, a typical system with an average of 10 concurrent active sessions can require approximately 200 to 300 MB of space for its AWR data. It is possible to change the default values for both snapshot interval and retention period. For more information, see "Modifying Snapshot Settings".
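For example, the snapshot interval and retention period currently in effect can be checked in the DBA_HIST_WR_CONTROL view:

```sql
-- Current AWR snapshot interval and retention period.
SELECT snap_interval, retention
FROM   dba_hist_wr_control;
```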
The Automatic Workload Repository space consumption can be reduced by increasing the snapshot interval and reducing the retention period. When reducing the retention period, note that several Oracle self-managing features depend on AWR data for proper functioning. Not having enough data can affect the validity and accuracy of these components and features, including:
· Automatic Database Diagnostic Monitor
· SQL Tuning Advisor
· Undo Advisor
· Segment Advisor
If possible, Oracle recommends that you set the AWR retention period large enough to capture at least one complete workload cycle. If your system experiences weekly workload cycles, such as OLTP workload during weekdays and batch jobs during the weekend, you do not need to change the default AWR retention period of 8 days. However, if your system is subjected to a monthly peak load during month-end book closing, you may have to set the retention period to one month.
Under exceptional circumstances, the automatic snapshot collection can be completely turned off by setting the snapshot interval to 0. Under this condition, the automatic collection of the workload and statistical data is stopped and much of the Oracle self-management functionality is not operational. In addition, you will not be able to manually create snapshots. For this reason, Oracle strongly recommends that you do not turn off the automatic snapshot collection.
5.3 Managing the Automatic Workload Repository
This section describes how to manage the Automatic Workload Repository and contains the following topics:
· Managing Snapshots
· Managing Baselines
· Managing Baseline Templates
· Transporting Automatic Workload Repository Data
· Using Automatic Workload Repository Views
· Generating Automatic Workload Repository Reports
· Generating Automatic Workload Repository Compare Periods Reports
· Generating Active Session History Reports
· Using Active Session History Reports
For a description of the Automatic Workload Repository, see "Overview of the Automatic Workload Repository".
5.3.1 Managing Snapshots
By default, Oracle Database generates snapshots once every hour, and retains the statistics in the workload repository for 8 days. When necessary, you can use DBMS_WORKLOAD_REPOSITORY procedures to manually create, drop, and modify the snapshots. To invoke these procedures, a user must be granted the DBA role. For more information about snapshots, see "Snapshots".
The primary interface for managing snapshots is Oracle Enterprise Manager. Whenever possible, you should manage snapshots using Oracle Enterprise Manager, as described in Oracle Database 2 Day + Performance Tuning Guide. If Oracle Enterprise Manager is unavailable, you can manage snapshots using the DBMS_WORKLOAD_REPOSITORY package, as described in the following sections:
· Creating Snapshots
· Dropping Snapshots
· Modifying Snapshot Settings
See Also:
Oracle Database PL/SQL Packages and Types Reference for detailed information on the DBMS_WORKLOAD_REPOSITORY package
5.3.1.1 Creating Snapshots
You can manually create snapshots with the CREATE_SNAPSHOT procedure if you want to capture statistics at times different than those of the automatically generated snapshots. For example:

BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ();
END;
/
In this example, a snapshot for the instance is created immediately with the flush level set to the default flush level of TYPICAL. You can view this snapshot in the DBA_HIST_SNAPSHOT view.
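For example, recent snapshots and their IDs can be listed from the DBA_HIST_SNAPSHOT view as follows:

```sql
-- List the most recent snapshots with their IDs and interval times.
SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id DESC;
```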
5.3.1.2 Dropping Snapshots
You can drop a range of snapshots using the DROP_SNAPSHOT_RANGE procedure. To view a list of the snapshot IDs along with database IDs, check the DBA_HIST_SNAPSHOT view. For example, you can drop the following range of snapshots:

BEGIN
  DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE (low_snap_id => 22,
                           high_snap_id => 32, dbid => 3310949047);
END;
/
In the example, the range of snapshot IDs to drop is specified from 22 to 32. The optional database identifier is 3310949047. If you do not specify a value for dbid, the local database identifier is used as the default value.
Active Session History (ASH) data that belongs to the time period specified by the snapshot range is also purged when the DROP_SNAPSHOT_RANGE procedure is called.
5.3.1.3 Modifying Snapshot Settings
You can adjust the interval, retention, and captured Top SQL of snapshot generation for a specified database ID, but note that this can affect the precision of the Oracle diagnostic tools.
The INTERVAL setting affects how often (in minutes) snapshots are automatically generated. The RETENTION setting affects how long (in minutes) snapshots are stored in the workload repository. The TOPNSQL setting affects the number of Top SQL to flush for each SQL criteria (Elapsed Time, CPU Time, Parse Calls, Sharable Memory, and Version Count). The value for this setting is not affected by the statistics/flush level and will override the system default behavior for the AWR SQL collection. You can set this value to MAXIMUM to capture the complete set of SQL in the cursor cache, but doing so (or setting the value to a very high number) may lead to space and performance issues because there will be more data to collect and store. To adjust the settings, use the MODIFY_SNAPSHOT_SETTINGS procedure. For example:

BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS( retention => 43200,
                 interval => 30, topnsql => 100, dbid => 3310949047);
END;
/
In this example, the retention period is specified as 43200 minutes (30 days), the interval between each snapshot is specified as 30 minutes, and the number of Top SQL to flush for each SQL criteria as 100. If NULL is specified, the existing value is preserved. The optional database identifier is 3310949047. If you do not specify a value for dbid, the local database identifier is used as the default value. You can check the current settings for your database instance with the DBA_HIST_WR_CONTROL view.
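The settings in effect can be inspected before and after the change. A small sketch against DBA_HIST_WR_CONTROL (SNAP_INTERVAL and RETENTION are standard columns of that view):

```sql
-- Show the snapshot interval and retention currently in effect.
-- Both columns are INTERVAL DAY TO SECOND values.
SELECT snap_interval, retention
  FROM dba_hist_wr_control;
```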
5.3.2 Managing Baselines
This section describes how to manage baselines. For more information about baselines, see "Baselines".
The primary interface for managing baselines is Oracle Enterprise Manager. Whenever possible, you should manage baselines using Oracle Enterprise Manager, as described in Oracle Database 2 Day + Performance Tuning Guide. If Oracle Enterprise Manager is unavailable, you can manage baselines using the DBMS_WORKLOAD_REPOSITORY package, as described in the following sections:
· Creating a Baseline
· Dropping a Baseline
· Renaming a Baseline
· Displaying Baseline Metrics
· Modifying the Window Size of the Default Moving Window Baseline
See Also:
Oracle Database PL/SQL Packages and Types Reference for detailed information on the DBMS_WORKLOAD_REPOSITORY package
5.3.2.1 Creating a Baseline
This section describes how to create a baseline using an existing range of snapshots.
To create a baseline:
1. Review the existing snapshots in the DBA_HIST_SNAPSHOT view to determine the range of snapshots that you want to use.
2. Use the CREATE_BASELINE procedure to create a baseline using the desired range of snapshots:

   BEGIN
     DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE (start_snap_id => 270,
                    end_snap_id => 280, baseline_name => 'peak baseline',
                    dbid => 3310949047, expiration => 30);
   END;
   /
In this example, 270 is the start snapshot sequence number and 280 is the end snapshot sequence number. The name of the baseline is peak baseline. The optional database identifier is 3310949047. If you do not specify a value for dbid, the local database identifier is used as the default value. The optional expiration parameter is set to 30, so the baseline will expire and be dropped automatically after 30 days. If you do not specify a value for expiration, the baseline will never expire.
The system automatically assigns a unique baseline ID to the new baseline when the baseline is created. The baseline ID and database identifier are displayed in the DBA_HIST_BASELINE view.
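A sketch of looking up the assigned baseline ID in DBA_HIST_BASELINE (BASELINE_ID, BASELINE_NAME, START_SNAP_ID, and END_SNAP_ID are standard columns of this view; the baseline name matches the earlier example):

```sql
-- Find the ID and snapshot range of a baseline by name.
SELECT baseline_id, baseline_name, start_snap_id, end_snap_id
  FROM dba_hist_baseline
 WHERE baseline_name = 'peak baseline';
```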
5.3.2.2 Dropping a Baseline
This section describes how to drop an existing baseline. Periodically, you may want to drop a baseline that is no longer used to conserve disk space. The snapshots associated with a baseline are retained indefinitely until you explicitly drop the baseline or the baseline has expired.
To drop a baseline:
1. Review the existing baselines in the DBA_HIST_BASELINE view to determine the baseline that you want to drop.
2. Use the DROP_BASELINE procedure to drop the desired baseline:

   BEGIN
     DBMS_WORKLOAD_REPOSITORY.DROP_BASELINE (baseline_name => 'peak baseline',
                    cascade => FALSE, dbid => 3310949047);
   END;
   /
In the example, the name of the baseline is peak baseline. The cascade parameter is set to FALSE, which specifies that only the baseline is dropped. Setting this parameter to TRUE specifies that the drop operation will also remove the snapshots associated with the baseline. The optional dbid parameter specifies the database identifier, which in this example is 3310949047. If you do not specify a value for dbid, the local database identifier is used as the default value.
5.3.2.3 Renaming a Baseline
This section describes how to rename a baseline.
To rename a baseline:
1. Review the existing baselines in the DBA_HIST_BASELINE view to determine the baseline that you want to rename.
2. Use the RENAME_BASELINE procedure to rename the desired baseline:

   BEGIN
     DBMS_WORKLOAD_REPOSITORY.RENAME_BASELINE (
                    old_baseline_name => 'peak baseline',
                    new_baseline_name => 'peak mondays',
                    dbid => 3310949047);
   END;
   /
In this example, the name of the baseline is renamed from peak baseline, as specified by the old_baseline_name parameter, to peak mondays, as specified by the new_baseline_name parameter. The optional dbid parameter specifies the database identifier, which in this example is 3310949047. If you do not specify a value for dbid, the local database identifier is used as the default value.
5.3.2.4 Displaying Baseline Metrics
This section describes how to display metric threshold settings during the time period captured in a baseline. When used with adaptive thresholds, a baseline contains AWR data that can be used to compute metric threshold values. The SELECT_BASELINE_METRICS function enables you to display the summary statistics for metric values in a baseline period.
To display metric information in a baseline:
1. Review the existing baselines in the DBA_HIST_BASELINE view to determine the baseline for which you want to display metric information.
2. Use the SELECT_BASELINE_METRICS function to display the metric information for the desired baseline:

   BEGIN
     DBMS_WORKLOAD_REPOSITORY.SELECT_BASELINE_METRICS (
                    baseline_name => 'peak baseline',
                    dbid => 3310949047,
                    instance_num => '1');
   END;
   /
In this example, the name of the baseline is peak baseline. The optional dbid parameter specifies the database identifier, which in this example is 3310949047. If you do not specify a value for dbid, the local database identifier is used as the default value. The optional instance_num parameter specifies the instance number, which in this example is 1. If you do not specify a value for instance_num, the local instance is used as the default value.
5.3.2.5 Modifying the Window Size of the Default Moving Window Baseline
This section describes how to modify the window size of the default moving window baseline. For information about the default moving window baseline, see "Moving Window Baseline".
To resize the default moving window baseline, use the MODIFY_BASELINE_WINDOW_SIZE procedure:

BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_BASELINE_WINDOW_SIZE (
                   window_size => 30, dbid => 3310949047);
END;
/
The window_size parameter is used to specify the new window size, in number of days, for the default moving window size. In this example, the window_size parameter is set to 30. The window size must be set to a value that is equal to or less than the value of the AWR retention setting. To set a window size that is greater than the current AWR retention period, you must first increase the value of the retention parameter, as described in "Modifying Snapshot Settings".
In this example, the optional dbid parameter specifies the database identifier, which is 3310949047. If you do not specify a value for dbid, the local database identifier is used as the default value.
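Because the window size cannot exceed the AWR retention setting, enlarging the window beyond the current retention is a two-step change using the two procedures described above. A sketch with illustrative values (a 90-day retention expressed in minutes, then a 60-day window):

```sql
BEGIN
  -- Step 1: raise AWR retention first (129600 minutes = 90 days).
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS (retention => 129600);
  -- Step 2: widen the default moving window baseline to 60 days,
  -- which now fits within the retention period.
  DBMS_WORKLOAD_REPOSITORY.MODIFY_BASELINE_WINDOW_SIZE (window_size => 60);
END;
/
```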
5.3.3 Managing Baseline Templates
This section describes how to manage baseline templates. You can automatically create baselines to capture specified time periods in the future using baseline templates. For information about baseline templates, see "Baseline Templates".
The primary interface for managing baseline templates is Oracle Enterprise Manager. Whenever possible, you should manage baseline templates using Oracle Enterprise Manager, as described in Oracle Database 2 Day + Performance Tuning Guide. If Oracle Enterprise Manager is unavailable, you can manage baseline templates using the DBMS_WORKLOAD_REPOSITORY package, as described in the following sections:
This section contains the following topics:
· Creating a Single Baseline Template
· Creating a Repeating Baseline Template
· Dropping a Baseline Template
See Also:
Oracle Database PL/SQL Packages and Types Reference for detailed information on the DBMS_WORKLOAD_REPOSITORY package
5.3.3.1 Creating a Single Baseline Template
This section describes how to create a single baseline template. A single baseline template can be used to create a baseline during a single, fixed time interval in the future. For example, you can create a single baseline template to generate a baseline that is captured on April 2, 2007 from 5:00 p.m. to 8:00 p.m.
To create a single baseline template, use the CREATE_BASELINE_TEMPLATE procedure:

BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE_TEMPLATE (
                   start_time => '2007-04-02 17:00:00 PST',
                   end_time => '2007-04-02 20:00:00 PST',
                   baseline_name => 'baseline_070402',
                   template_name => 'template_070402',
                   expiration => 30, dbid => 3310949047);
END;
/
The start_time parameter specifies the start time for the baseline to be created. The end_time parameter specifies the end time for the baseline to be created. The baseline_name parameter specifies the name of the baseline to be created. The template_name parameter specifies the name of the baseline template. The optional expiration parameter specifies the expiration, in number of days, for the baseline. If unspecified, the baseline will never expire. The optional dbid parameter specifies the database identifier. If unspecified, the local database identifier is used as the default value.
In this example, a baseline template named template_070402 is created that will generate a baseline named baseline_070402 for the time period from 5:00 p.m. to 8:00 p.m. on April 2, 2007 on the database with a database ID of 3310949047. The baseline will expire after 30 days.
5.3.3.2 Creating a Repeating Baseline Template
This section describes how to create a repeating baseline template. A repeating baseline template can be used to automatically create baselines that repeat during a particular time interval over a specific period in the future. For example, you can create a repeating baseline template to generate a baseline that repeats every Monday from 5:00 p.m. to 8:00 p.m. for the year 2007.
To create a repeating baseline template, use the CREATE_BASELINE_TEMPLATE procedure:

BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE_TEMPLATE (
                   day_of_week => 'monday', hour_in_day => 17,
                   duration => 3, expiration => 30,
                   start_time => '2007-04-02 17:00:00 PST',
                   end_time => '2007-12-31 20:00:00 PST',
                   baseline_name_prefix => 'baseline_2007_mondays_',
                   template_name => 'template_2007_mondays',
                   dbid => 3310949047);
END;
/
The day_of_week parameter specifies the day of the week on which the baseline will repeat. The hour_in_day parameter specifies the hour in the day when the baseline will start. The duration parameter specifies the duration, in number of hours, that the baseline will last. The expiration parameter specifies the number of days to retain each created baseline. If set to NULL, the baselines will never expire. The start_time parameter specifies the start time for the baseline to be created. The end_time parameter specifies the end time for the baseline to be created. The baseline_name_prefix parameter specifies the baseline name prefix that will be appended to the date information when the baseline is created. The template_name parameter specifies the name of the baseline template. The optional dbid parameter specifies the database identifier. If unspecified, the local database identifier is used as the default value.
In this example, a baseline template named template_2007_mondays is created that will generate a baseline on every Monday from 5:00 p.m. to 8:00 p.m. beginning on April 2, 2007 at 5:00 p.m. and ending on December 31, 2007 at 8:00 p.m. on the database with a database ID of 3310949047. Each of the baselines will be created with a baseline name with the prefix baseline_2007_mondays_ and will expire after 30 days.
5.3.3.3 Dropping a Baseline Template
This section describes how to drop an existing baseline template. Periodically, you may want to remove baseline templates that are no longer used to conserve disk space.
To drop a baseline template:
1. Review the existing baselines in the DBA_HIST_BASELINE_TEMPLATE view to determine the baseline template you want to drop.
2. Use the DROP_BASELINE_TEMPLATE procedure to drop the desired baseline template:

   BEGIN
     DBMS_WORKLOAD_REPOSITORY.DROP_BASELINE_TEMPLATE (
                    template_name => 'template_2007_mondays',
                    dbid => 3310949047);
   END;
   /
The template_name parameter specifies the name of the baseline template that will be dropped. In the example, the name of the baseline template that will be dropped is template_2007_mondays. The optional dbid parameter specifies the database identifier, which in this example is 3310949047. If you do not specify a value for dbid, the local database identifier is used as the default value.
5.3.4 Transporting Automatic Workload Repository Data
Oracle Database enables you to transport AWR data between systems. This is useful in cases where you want to use a separate system to perform analysis of the AWR data. To transport AWR data, you must first extract the AWR snapshot data from the database on the source system, then load the data into the database on the target system, as described in the following sections:
· Extracting AWR Data
· Loading AWR Data
5.3.4.1 Extracting AWR Data
The awrextr.sql script extracts the AWR data for a range of snapshots from the database into a Data Pump export file. Once created, this dump file can be transported to another system where the extracted data can be loaded. To run the awrextr.sql script, you must be connected to the database as the SYS user.
To extract AWR data:
1. At the SQL prompt, enter:

   @$ORACLE_HOME/rdbms/admin/awrextr.sql

   A list of the databases in the AWR schema is displayed.

2. Specify the database from which the AWR data will be extracted:

   Enter value for db_id: 1377863381

   In this example, the database with the database identifier of 1377863381 is selected.

3. Specify the number of days for which you want to list snapshot Ids:

   Enter value for num_days: 2

   A list of existing snapshots for the specified time range is displayed. In this example, snapshots captured in the last 2 days are displayed.

4. Define the range of snapshots for which AWR data will be extracted by specifying a beginning and ending snapshot ID:

   Enter value for begin_snap: 30
   Enter value for end_snap: 40

   In this example, the snapshot with a snapshot ID of 30 is selected as the beginning snapshot, and the snapshot with a snapshot ID of 40 is selected as the ending snapshot.

5. A list of directory objects is displayed. Specify the directory object pointing to the directory where the export dump file will be stored:

   Enter value for directory_name: DATA_PUMP_DIR

   In this example, the directory object DATA_PUMP_DIR is selected.

6. Specify the prefix for the name of the export dump file (the .dmp suffix will be automatically appended):

   Enter value for file_name: awrdata_30_40

   In this example, an export dump file named awrdata_30_40 will be created in the directory corresponding to the directory object you specified:

   Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
   C:\ORACLE\PRODUCT\11.1.0.5\DB_1\RDBMS\LOG\AWRDATA_30_40.DMP
   Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at 08:58:20
Depending on the amount of AWR data that needs to be extracted, the AWR extract operation may take a while to complete. Once the dump file is created, you can use Data Pump to transport the file to another system.
See Also:
Oracle Database Utilities for information about using Data Pump
5.3.4.2 Loading AWR Data
Once the export dump file is transported to the target system, you can load the extracted AWR data using the awrload.sql script. The awrload.sql script will first create a staging schema where the snapshot data is transferred from the Data Pump file into the database. The data is then transferred from the staging schema into the appropriate AWR tables. To run the awrload.sql script, you must be connected to the database as the SYS user.
To load AWR data:
1. At the SQL prompt, enter:

   @$ORACLE_HOME/rdbms/admin/awrload.sql

   A list of directory objects is displayed.

2. Specify the directory object pointing to the directory where the export dump file is located:

   Enter value for directory_name: DATA_PUMP_DIR

   In this example, the directory object DATA_PUMP_DIR is selected.

3. Specify the prefix for the name of the export dump file (the .dmp suffix will be automatically appended):

   Enter value for file_name: awrdata_30_40

   In this example, the export dump file named awrdata_30_40 is selected.

4. Specify the name of the staging schema where the AWR data will be loaded:

   Enter value for schema_name: AWR_STAGE

   In this example, a staging schema named AWR_STAGE will be created where the AWR data will be loaded.

5. Specify the default tablespace for the staging schema:

   Enter value for default_tablespace: SYSAUX

   In this example, the SYSAUX tablespace is selected.

6. Specify the temporary tablespace for the staging schema:

   Enter value for temporary_tablespace: TEMP

   In this example, the TEMP tablespace is selected.

7. A staging schema named AWR_STAGE will be created where the AWR data will be loaded. After the AWR data is loaded into the AWR_STAGE schema, the data will be transferred into the AWR tables in the SYS schema:

   Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
   Completed 113 CONSTRAINT objects in 11 seconds
   Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
   Completed 1 REF_CONSTRAINT objects in 1 seconds
   Job "SYS"."SYS_IMPORT_FULL_03" successfully completed at 09:29:30
   ... Dropping AWR_STAGE user
   End of AWR Load
Depending on the amount of AWR data that needs to be loaded, the AWR load operation may take a while to complete. After the AWR data is loaded, the staging schema will be dropped automatically.
5.3.5 Using Automatic Workload Repository Views
Typically, you would view the AWR data through Oracle Enterprise Manager or AWR reports. However, you can also view the statistics using the following views:
· V$ACTIVE_SESSION_HISTORY
This view displays active database session activity, sampled once every second. See "Active Session History".
· V$ metric views provide metric data to track the performance of the system
The metric views are organized into various groups, such as event, event class, system, session, service, file, and tablespace metrics. These groups are identified in the V$METRICGROUP view.
· DBA_HIST views
The DBA_HIST views display historical data stored in the database. This group of views includes:
o DBA_HIST_ACTIVE_SESS_HISTORY displays the history of the contents of the in-memory active session history for recent system activity
o DBA_HIST_BASELINE displays information about the baselines captured on the system, such as the time range of each baseline and the baseline type
o DBA_HIST_BASELINE_DETAILS displays details about a specific baseline
o DBA_HIST_BASELINE_TEMPLATE displays information about the baseline templates used by the system to generate baselines
o DBA_HIST_DATABASE_INSTANCE displays information about the database environment
o DBA_HIST_SNAPSHOT displays information on snapshots in the system
o DBA_HIST_SQL_PLAN displays the SQL execution plans
o DBA_HIST_WR_CONTROL displays the settings for controlling AWR
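As a quick illustration of these views, the sketch below counts recent Active Session History samples by wait event. V$ACTIVE_SESSION_HISTORY and its SAMPLE_TIME and EVENT columns are standard; the five-minute window is an arbitrary choice for the example:

```sql
-- Top wait events over the last five minutes of ASH samples
-- (EVENT is NULL for samples taken while the session was on CPU).
SELECT event, COUNT(*) AS samples
  FROM v$active_session_history
 WHERE sample_time > SYSTIMESTAMP - INTERVAL '5' MINUTE
 GROUP BY event
 ORDER BY samples DESC;
```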
See Also:
Oracle Database Reference for information on dynamic and static data dictionary views
5.3.6 Generating Automatic Workload Repository Reports
An AWR report shows data captured between two snapshots (or two points in time). The AWR reports are divided into multiple sections. The HTML report includes links that can be used to navigate quickly between sections. The content of the report contains the workload profile of the system for the selected range of snapshots.
The primary interface for generating AWR reports is Oracle Enterprise Manager. Whenever possible, you should generate AWR reports using Oracle Enterprise Manager, as described in Oracle Database 2 Day + Performance Tuning Guide. If Oracle Enterprise Manager is unavailable, you can generate AWR reports by running SQL scripts, as described in the following sections:
· Generating an AWR Report for a Snapshot Range
· Generating an AWR Report for a Snapshot Range on a Specified Database Instance
· Generating an AWR Report for a SQL Statement
· Generating an AWR Report for a SQL Statement on a Specified Database Instance
To run these scripts, you must be granted the DBA role.
Note:
If you run a report on a database that does not have any workload activity during the specified range of snapshots, calculated percentages for some report statistics can be less than 0 or greater than 100. This result simply means that there is no meaningful value for the statistic.
5.3.6.1 Generating an AWR Report for a Snapshot Range
The awrrpt.sql SQL script generates an HTML or text report that displays statistics for a range of snapshot Ids.
To generate an AWR report for a range of snapshots:
1. At the SQL prompt, enter:

   @$ORACLE_HOME/rdbms/admin/awrrpt.sql

2. Specify whether you want an HTML or a text report:

   Enter value for report_type: text

   In this example, a text report is chosen.

3. Specify the number of days for which you want to list snapshot Ids:

   Enter value for num_days: 2

   A list of existing snapshots for the specified time range is displayed. In this example, snapshots captured in the last 2 days are displayed.

4. Specify a beginning and ending snapshot ID for the workload repository report:

   Enter value for begin_snap: 150
   Enter value for end_snap: 160

   In this example, the snapshot with a snapshot ID of 150 is selected as the beginning snapshot, and the snapshot with a snapshot ID of 160 is selected as the ending snapshot.

5. Enter a report name, or accept the default report name:

   Enter value for report_name:
   Using the report name awrrpt_1_150_160

   In this example, the default name is accepted and an AWR report named awrrpt_1_150_160 is generated.
5.3.6.2 Generating an AWR Report for a Snapshot Range on a Specified Database Instance
The awrrpti.sql SQL script generates an HTML or text report that displays statistics for a range of snapshot Ids on a specified database and instance. This script enables you to specify a database and instance before entering a range of snapshot Ids.
To generate an AWR report for a range of snapshots on a specific database instance:
1. At the SQL prompt, enter:

   @$ORACLE_HOME/rdbms/admin/awrrpti.sql

2. Specify whether you want an HTML or a text report:

   Enter value for report_type: text

   In this example, a text report is chosen.

   A list of available database identifiers and instance numbers is displayed:

   Instances in this Workload Repository schema
   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      DB Id    Inst Num    DB Name    Instance    Host
   ----------- -------- ------------ ------------ ------------
    3309173529        1 MAIN         main         examp1690
    3309173529        1 TINT251      tint251      samp251

3. Enter the values for the database identifier (dbid) and instance number (inst_num):

   Enter value for dbid: 3309173529
   Using 3309173529 for database Id
   Enter value for inst_num: 1

4. Specify the number of days for which you want to list snapshot Ids:

   Enter value for num_days: 2

   A list of existing snapshots for the specified time range is displayed. In this example, snapshots captured in the last 2 days are displayed.

5. Specify a beginning and ending snapshot ID for the workload repository report:

   Enter value for begin_snap: 150
   Enter value for end_snap: 160

   In this example, the snapshot with a snapshot ID of 150 is selected as the beginning snapshot, and the snapshot with a snapshot ID of 160 is selected as the ending snapshot.

6. Enter a report name, or accept the default report name:

   Enter value for report_name:
   Using the report name awrrpt_1_150_160

   In this example, the default name is accepted and an AWR report named awrrpt_1_150_160 is generated on the database instance with a database ID value of 3309173529.
5.3.6.3 Generating an AWR Report for a SQL Statement
The awrsqrpt.sql SQL script generates an HTML or text report that displays statistics of a particular SQL statement for a range of snapshot Ids. Run this report to inspect or debug the performance of a SQL statement.
To generate an AWR report for a particular SQL statement:
1. At the SQL prompt, enter:

   @$ORACLE_HOME/rdbms/admin/awrsqrpt.sql

2. Specify whether you want an HTML or a text report:

   Enter value for report_type: html

   In this example, an HTML report is chosen.

3. Specify the number of days for which you want to list snapshot Ids:

   Enter value for num_days: 1

   A list of existing snapshots for the specified time range is displayed. In this example, snapshots captured in the previous day are displayed.

4. Specify a beginning and ending snapshot ID for the workload repository report:

   Enter value for begin_snap: 146
   Enter value for end_snap: 147

   In this example, the snapshot with a snapshot ID of 146 is selected as the beginning snapshot, and the snapshot with a snapshot ID of 147 is selected as the ending snapshot.

5. Specify the SQL ID of a particular SQL statement to display statistics:

   Enter value for sql_id: 2b064ybzkwf1y

   In this example, the SQL statement with a SQL ID of 2b064ybzkwf1y is selected.

6. Enter a report name, or accept the default report name:

   Enter value for report_name:
   Using the report name awrrpt_1_146_147.html

   In this example, the default name is accepted and an AWR report named awrrpt_1_146_147 is generated.
5.3.6.4 Generating an AWR Report for a SQL Statement on a Specified Database Instance
The awrsqrpi.sql SQL script generates an HTML or text report that displays statistics of a particular SQL statement for a range of snapshot Ids on a specified database and instance. This script enables you to specify a database and instance before selecting a SQL statement. Run this report to inspect or debug the performance of a SQL statement on a specific database and instance.
To generate an AWR report for a particular SQL statement on a specified database instance:
1. At the SQL prompt, enter:

   @$ORACLE_HOME/rdbms/admin/awrsqrpi.sql

2. Specify whether you want an HTML or a text report:

   Enter value for report_type: html

   In this example, an HTML report is chosen.

   A list of available database identifiers and instance numbers is displayed:

   Instances in this Workload Repository schema
   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      DB Id    Inst Num    DB Name    Instance    Host
   ----------- -------- ------------ ------------ ------------
    3309173529        1 MAIN         main         examp1690
    3309173529        1 TINT251      tint251      samp251

3. Enter the values for the database identifier (dbid) and instance number (inst_num):

   Enter value for dbid: 3309173529
   Using 3309173529 for database Id
   Enter value for inst_num: 1
   Using 1 for instance number

4. Specify the number of days for which you want to list snapshot Ids:

   Enter value for num_days: 1

   A list of existing snapshots for the specified time range is displayed. In this example, snapshots captured in the previous day are displayed.

5. Specify a beginning and ending snapshot ID for the workload repository report:

   Enter value for begin_snap: 146
   Enter value for end_snap: 147

   In this example, the snapshot with a snapshot ID of 146 is selected as the beginning snapshot, and the snapshot with a snapshot ID of 147 is selected as the ending snapshot.

6. Specify the SQL ID of a particular SQL statement to display statistics:

   Enter value for sql_id: 2b064ybzkwf1y

   In this example, the SQL statement with a SQL ID of 2b064ybzkwf1y is selected.

7. Enter a report name, or accept the default report name:

   Enter value for report_name:
   Using the report name awrrpt_1_146_147.html

   In this example, the default name is accepted and an AWR report named awrrpt_1_146_147 is generated on the database instance with a database ID value of 3309173529.
5.3.7 Generating Automatic Workload Repository Compare Periods Reports
While an AWR report shows AWR data between two snapshots (or two points in time), the AWR Compare Periods report shows the difference between two periods (or two AWR reports, which equates to four snapshots). Using the AWR Compare Periods report helps you to identify detailed performance attributes and configuration settings that differ between two time periods.

For example, if the application workload is known to be stable between 10:00 p.m. and midnight every night, but the performance on a particular Thursday was poor between 10:00 p.m. and 11:00 p.m., generating an AWR Compare Periods report for Thursday from 10:00 p.m. to 11:00 p.m. and Wednesday from 10:00 p.m. to 11:00 p.m. should identify configuration settings, workload profile, and statistics that were different in these two time periods. Based on the differences identified, the cause of the performance degradation can be more easily diagnosed.

The two time periods selected for the AWR Compare Periods report can be of different durations, because the report normalizes the statistics by the amount of time spent on the database for each time period, and presents statistical data ordered by the largest difference between the periods.
The AWR Compare Periods reports are divided into multiple sections. The HTML report includes links that can be used to navigate quickly between sections. The content of the report contains the workload profile of the system for the selected range of snapshots.
The primary interface for generating AWR Compare Periods reports is Oracle Enterprise Manager. Whenever possible, you should generate AWR Compare Periods reports using Oracle Enterprise Manager, as described in Oracle Database 2 Day + Performance Tuning Guide. If Oracle Enterprise Manager is unavailable, you can generate AWR Compare Periods reports by running SQL scripts, as described in the following sections:
· Generating an AWR Compare Periods Report
· Generating an AWR Compare Periods Report on a Specified Database Instance
To run these scripts, you must be granted the DBA role.
5.3.7.1 Generating an AWR Compare Periods Report
The awrddrpt.sql SQL script generates an HTML or text report that compares detailed performance attributes and configuration settings between two selected time periods.
To generate an AWR Compare Periods report:
1. At the SQL prompt, enter:

   @$ORACLE_HOME/rdbms/admin/awrddrpt.sql

2. Specify whether you want an HTML or a text report:

   Enter value for report_type: html

   In this example, an HTML report is chosen.

3. Specify the number of days for which you want to list snapshot IDs in the first time period.

   Enter value for num_days: 2

   A list of existing snapshots for the specified time range is displayed. In this example, snapshots captured in the last 2 days are displayed.

4. Specify a beginning and ending snapshot ID for the first time period:

   Enter value for begin_snap: 102
   Enter value for end_snap: 103

   In this example, the snapshot with a snapshot ID of 102 is selected as the beginning snapshot, and the snapshot with a snapshot ID of 103 is selected as the ending snapshot for the first time period.

5. Specify the number of days for which you want to list snapshot IDs in the second time period.

   Enter value for num_days2: 1

   A list of existing snapshots for the specified time range is displayed. In this example, snapshots captured in the previous day are displayed.

6. Specify a beginning and ending snapshot ID for the second time period:

   Enter value for begin_snap2: 126
   Enter value for end_snap2: 127

   In this example, the snapshot with a snapshot ID of 126 is selected as the beginning snapshot, and the snapshot with a snapshot ID of 127 is selected as the ending snapshot for the second time period.

7. Enter a report name, or accept the default report name:

   Enter value for report_name:
   Using the report name awrdiff_1_102_1_126.txt

   In this example, the default name is accepted and an AWR Compare Periods report named awrdiff_1_102_1_126.txt is generated.
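The default report name in the example above encodes the instance number and beginning snapshot ID of each period. The naming pattern below is inferred from the example output, not from the script source, so treat it as an illustration:

```python
def default_report_name(inst1, begin_snap1, inst2, begin_snap2, html=False):
    # Pattern inferred from the example output:
    # awrdiff_<inst1>_<begin_snap1>_<inst2>_<begin_snap2>.<ext>
    ext = "html" if html else "txt"
    return f"awrdiff_{inst1}_{begin_snap1}_{inst2}_{begin_snap2}.{ext}"

print(default_report_name(1, 102, 1, 126))  # awrdiff_1_102_1_126.txt
```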
5.3.7.2 Generating an AWR Compare Periods Report on a Specified Database Instance
The awrddrpi.sql SQL script generates an HTML or text report that compares detailed performance attributes and configuration settings between two selected time periods on a specific database and instance. This script enables you to specify a database and instance before selecting time periods to compare.
To generate an AWR Compare Periods report on a specified database instance:
1. At the SQL prompt, enter:

   @$ORACLE_HOME/rdbms/admin/awrddrpi.sql

2. Specify whether you want an HTML or a text report:

   Enter value for report_type: text

   In this example, a text report is chosen.

3. A list of available database identifiers and instance numbers is displayed:

   Instances in this Workload Repository schema
   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      DB Id    Inst Num DB Name      Instance     Host
   ----------- -------- ------------ ------------ ------------
    3309173529        1 MAIN         main         examp1690
    3309173529        1 TINT251      tint251      samp251

   Enter the values for the database identifier (dbid) and instance number (inst_num) for the first time period:

   Enter value for dbid: 3309173529
   Using 3309173529 for Database Id for the first pair of snapshots
   Enter value for inst_num: 1
   Using 1 for Instance Number for the first pair of snapshots

4. Specify the number of days for which you want to list snapshot IDs in the first time period.

   Enter value for num_days: 2

   A list of existing snapshots for the specified time range is displayed. In this example, snapshots captured in the last 2 days are displayed.

5. Specify a beginning and ending snapshot ID for the first time period:

   Enter value for begin_snap: 102
   Enter value for end_snap: 103

   In this example, the snapshot with a snapshot ID of 102 is selected as the beginning snapshot, and the snapshot with a snapshot ID of 103 is selected as the ending snapshot for the first time period.

6. Enter the values for the database identifier (dbid) and instance number (inst_num) for the second time period:

   Enter value for dbid2: 3309173529
   Using 3309173529 for Database Id for the second pair of snapshots
   Enter value for inst_num2: 1
   Using 1 for Instance Number for the second pair of snapshots

7. Specify the number of days for which you want to list snapshot IDs in the second time period.

   Enter value for num_days2: 1

   A list of existing snapshots for the specified time range is displayed. In this example, snapshots captured in the previous day are displayed.

8. Specify a beginning and ending snapshot ID for the second time period:

   Enter value for begin_snap2: 126
   Enter value for end_snap2: 127

   In this example, the snapshot with a snapshot ID of 126 is selected as the beginning snapshot, and the snapshot with a snapshot ID of 127 is selected as the ending snapshot for the second time period.

9. Enter a report name, or accept the default report name:

   Enter value for report_name:
   Using the report name awrdiff_1_102_1_126.txt

   In this example, the default name is accepted and an AWR Compare Periods report named awrdiff_1_102_1_126.txt is generated on the database instance with a database ID value of 3309173529.
5.3.8 Generating Active Session History Reports
Use Active Session History (ASH) reports to perform analysis of:
· Transient performance problems that typically last for a few minutes
· Scoped or targeted performance analysis by various dimensions or their combinations, such as time, session, module, action, or SQL_ID
Transient performance problems are short-lived and do not appear in the Automatic Database Diagnostic Monitor (ADDM) analysis. ADDM tries to report the most significant performance problems during an analysis period in terms of their impact on DB time. If a particular problem lasts for a very short duration, its severity might be averaged out or minimized by other performance problems in the entire analysis period; therefore, the problem may not appear in the ADDM findings. Whether or not a performance problem is captured by ADDM depends on its duration compared to the interval between the Automatic Workload Repository (AWR) snapshots.
If a performance problem lasts for a significant portion of the time between snapshots, it will be captured by ADDM. For example, if the snapshot interval is set to one hour, a performance problem that lasts for 30 minutes should not be considered as a transient performance problem because its duration represents a significant portion of the snapshot interval and will likely be captured by ADDM.
On the other hand, a performance problem that lasts for only 2 minutes could be a transient performance problem because its duration represents a small portion of the snapshot interval and will likely not show up in the ADDM findings. For example, if the user notifies you that the system was slow between 10:00 p.m. and 10:10 p.m., but the ADDM analysis for the time period between 10:00 p.m. and 11:00 p.m. does not show a performance problem, it is likely that a transient performance problem occurred that lasted for only a few minutes of the 10-minute interval reported by the user.
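The distinction above can be reduced to a simple rule of thumb: a problem is likely transient when it occupies only a small fraction of the snapshot interval. The 25% threshold below is an illustrative assumption, not an Oracle-defined cutoff:

```python
def is_transient(problem_minutes, snapshot_interval_minutes, threshold=0.25):
    """Treat a problem as transient if it covers only a small fraction of
    the snapshot interval and is therefore likely to be averaged out by
    ADDM. The 0.25 threshold is an assumption for illustration."""
    return problem_minutes / snapshot_interval_minutes < threshold

print(is_transient(30, 60))  # False: 50% of the interval; ADDM should see it
print(is_transient(2, 60))   # True: ~3%; better analyzed with an ASH report
```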
The ASH reports are divided into multiple sections. The HTML report includes links that can be used to navigate quickly between sections. The content of the report contains ASH information used to identify blocker and waiter identities and their associated transaction identifiers and SQL for a specified duration. For more information on ASH, see "Active Session History".
The primary interface for generating ASH reports is Oracle Enterprise Manager. Whenever possible, you should generate ASH reports using Oracle Enterprise Manager, as described in Oracle Database 2 Day + Performance Tuning Guide. If Oracle Enterprise Manager is unavailable, you can generate ASH reports by running SQL scripts, as described in the following sections:
· Generating an ASH Report
· Generating an ASH Report on a Specified Database Instance
5.3.8.1 Generating an ASH Report
The ashrpt.sql SQL script generates an HTML or text report that displays ASH information for a specified duration.
To generate an ASH report:
1. At the SQL prompt, enter:

   @$ORACLE_HOME/rdbms/admin/ashrpt.sql

2. Specify whether you want an HTML or a text report:

   Enter value for report_type: text

   In this example, a text report is chosen.

3. Specify the begin time in minutes before the system date:

   Enter value for begin_time: -10

   In this example, 10 minutes before the current time is selected.

4. Specify the duration in minutes for which you want to capture ASH information, starting from the begin time:

   Enter value for duration:

   In this example, the default duration of system date minus begin time is accepted.

5. Enter a report name, or accept the default report name:

   Enter value for report_name:
   Using the report name ashrpt_1_0310_0131.txt

   In this example, the default name is accepted and an ASH report named ashrpt_1_0310_0131.txt is generated. The report gathers ASH information beginning 10 minutes before the current time and ending at the current time.
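The begin_time and duration prompts together define the report window relative to the current time. The arithmetic can be sketched as follows (a negative begin_time means minutes before "now"; an empty duration defaults to "now minus begin time", so the window ends at the current time):

```python
from datetime import datetime, timedelta

def ash_window(now, begin_time_minutes, duration_minutes=None):
    """Compute the (begin, end) window for an ASH report.

    begin_time_minutes: negative offset in minutes from 'now'
    duration_minutes: window length; None means 'end at now' (the default)
    """
    begin = now + timedelta(minutes=begin_time_minutes)
    end = now if duration_minutes is None else begin + timedelta(minutes=duration_minutes)
    return begin, end

# Matching the example: begin_time is -10 and the duration default is taken.
now = datetime(2009, 3, 10, 1, 41)
begin, end = ash_window(now, -10)
print(begin, end)  # the 10 minutes ending at 'now'
```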
5.3.8.2 Generating an ASH Report on a Specified Database Instance
The ashrpti.sql SQL script generates an HTML or text report that displays ASH information for a specified duration for a specified database and instance. This script enables you to specify a database and instance before setting the time frame to collect ASH information.
To generate an ASH report on a specified database instance:
1. At the SQL prompt, enter:

   @$ORACLE_HOME/rdbms/admin/ashrpti.sql

2. Specify whether you want an HTML or a text report:

   Enter value for report_type: html

   In this example, an HTML report is chosen.

3. A list of available database IDs and instance numbers is displayed:

   Instances in this Workload Repository schema
   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      DB Id    Inst Num DB Name      Instance     Host
   ----------- -------- ------------ ------------ ------------
    3309173529        1 MAIN         main         examp1690
    3309173529        1 TINT251      tint251      samp251

   Enter the values for the database identifier (dbid) and instance number (inst_num):

   Enter value for dbid: 3309173529
   Using 3309173529 for database id
   Enter value for inst_num: 1

4. Specify the begin time in minutes before the system date:

   Enter value for begin_time: -10

   In this example, 10 minutes before the current time is selected.

5. Specify the duration in minutes for which you want to capture ASH information, starting from the begin time:

   Enter value for duration:

   In this example, the default duration of system date minus begin time is accepted.

6. Enter a report name, or accept the default report name:

   Enter value for report_name:
   Using the report name ashrpt_1_0310_0131.txt

   In this example, the default name is accepted and an ASH report named ashrpt_1_0310_0131.txt is generated on the database instance with a database ID value of 3309173529. The report gathers ASH information beginning 10 minutes before the current time and ending at the current time.
5.3.9 Using Active Session History Reports
After generating an ASH report, you can review the contents to identify transient performance problems.
The contents of the ASH report are divided into the following sections:
· Top Events
· Load Profile
· Top SQL
· Top PL/SQL
· Top Java
· Top Sessions
· Top Objects/Files/Latches
· Activity Over Time
See Also:
Oracle Real Application Clusters Administration and Deployment Guide for information about sections in the ASH report that are specific to Oracle Real Application Clusters (RAC)
5.3.9.1 Top Events
The Top Events section describes the top wait events of the sampled session activity categorized by user, background, and priority. Use the information in this section to identify the wait events that may be the cause of the transient performance problem.
The Top Events section contains the following subsections:
· Top User Events
· Top Background Events
· Top Event P1/P2/P3 Values
5.3.9.1.1 Top User Events
The Top User Events subsection lists the top wait events from user processes that accounted for the highest percentages of sampled session activity.
5.3.9.1.2 Top Background Events
The Top Background Events subsection lists the top wait events from background processes that accounted for the highest percentages of sampled session activity.
5.3.9.1.3 Top Event P1/P2/P3 Values
The Top Event P1/P2/P3 subsection lists the wait event parameter values of the top wait events that accounted for the highest percentages of sampled session activity, ordered by the percentage of total wait time (% Event). For each wait event, values in the P1 Value, P2 Value, P3 Value column correspond to wait event parameters displayed in the Parameter 1, Parameter 2, and Parameter 3 columns.
5.3.9.2 Load Profile
The Load Profile section describes the load analyzed in the sampled session activity. Use the information in this section to identify the service, client, or SQL command type that may be the cause of the transient performance problem.
The Load Profile section contains the following subsections:
· Top Service/Module
· Top Client IDs
· Top SQL Command Types
· Top Phases of Execution
5.3.9.2.1 Top Service/Module
The Top Service/Module subsection lists the services and modules that accounted for the highest percentages of sampled session activity.
5.3.9.2.2 Top Client IDs
The Top Client IDs subsection lists the clients that accounted for the highest percentages of sampled session activity based on their client ID, which is the application-specific identifier of the database session.
5.3.9.2.3 Top SQL Command Types
The Top SQL Command Types subsection lists the SQL command types, such as SELECT, UPDATE, INSERT, and DELETE, that accounted for the highest percentages of sampled session activity.
5.3.9.2.4 Top Phases of Execution
The Top Phases of Execution subsection lists the phases of execution, such as SQL, PL/SQL, and Java compilation and execution, that accounted for the highest percentages of sampled session activity.
5.3.9.3 Top SQL
The Top SQL section describes the top SQL statements of the sampled session activity. Use this information to identify high-load SQL statements that may be the cause of the transient performance problem.
The Top SQL section contains the following subsections:
· Top SQL with Top Events
· Top SQL with Top Row Sources
· Top SQL Using Literals
· Top Parsing Module/Action
· Complete List of SQL Text
5.3.9.3.1 Top SQL with Top Events
The Top SQL with Top Events subsection lists the SQL statements that accounted for the highest percentages of sampled session activity and the top wait events that were encountered by these SQL statements. The Sampled # of Executions column shows how many distinct executions of a particular SQL statement were sampled.
5.3.9.3.2 Top SQL with Top Row Sources
The Top SQL with Top Row Sources subsection lists the SQL statements that accounted for the highest percentages of sampled session activity and their detailed execution plan information. You can use this information to identify which part of the SQL execution contributed significantly to the SQL elapsed time.
5.3.9.3.3 Top SQL Using Literals
The Top SQL Using Literals subsection lists the SQL statements using literals that accounted for the highest percentages of sampled session activity. You should review the statements listed in this report to determine whether the literals can be replaced with bind variables.
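One way to see why literals matter: statements that differ only in their literal values parse as distinct cursors, but collapse to a single signature once the literals are replaced with a bind placeholder. The helper below is a hypothetical sketch using a simple regex, not Oracle's cursor-sharing logic:

```python
import re
from collections import Counter

def normalize_sql(sql):
    """Replace string and numeric literals with a bind placeholder so that
    statements differing only in literal values collapse to one signature."""
    sql = re.sub(r"'[^']*'", ":b", sql)   # string literals
    sql = re.sub(r"\b\d+\b", ":b", sql)   # numeric literals
    return sql

statements = [
    "SELECT * FROM orders WHERE id = 101",
    "SELECT * FROM orders WHERE id = 202",
    "SELECT * FROM orders WHERE id = 303",
]
signatures = Counter(normalize_sql(s) for s in statements)
print(signatures)  # all three collapse to one bind-variable candidate
```

A group with a high count is a candidate for rewriting with bind variables.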
5.3.9.3.4 Top Parsing Module/Action
The Top Parsing Module/Action subsection lists the module and action that accounted for the highest percentages of sampled session activity while parsing the SQL statement.
5.3.9.3.5 Complete List of SQL Text
The Complete List of SQL Text subsection displays the entire text of the Top SQL statements shown in this section.
5.3.9.4 Top PL/SQL
The Top PL/SQL section lists the PL/SQL procedures that accounted for the highest percentages of sampled session activity. The PL/SQL Entry Subprogram column lists the application's top-level entry point into PL/SQL. The PL/SQL Current Subprogram column lists the PL/SQL subprogram being executed at the point of sampling. If the value of this column is SQL, then the % Current column shows the percentage of time spent executing SQL for this subprogram.
5.3.9.5 Top Java
The Top Java section describes the top Java programs in the sampled session activity.
5.3.9.6 Top Sessions
The Top Sessions section describes the sessions that were waiting for a particular wait event. Use this information to identify the sessions that accounted for the highest percentages of sampled session activity, which may be the cause of the transient performance problem.
The Top Sessions section contains the following subsections:
· Top Sessions
· Top Blocking Sessions
· Top Sessions Running PQs
5.3.9.6.1 Top Sessions
The Top Sessions subsection lists the sessions that were waiting for a particular wait event that accounted for the highest percentages of sampled session activity.
5.3.9.6.2 Top Blocking Sessions
The Top Blocking Sessions subsection lists the blocking sessions that accounted for the highest percentages of sampled session activity.
5.3.9.6.3 Top Sessions Running PQs
The Top Sessions Running PQs subsection lists the sessions running parallel queries (PQs) that were waiting for a particular wait event, which accounted for the highest percentages of sampled session activity.
5.3.9.7 Top Objects/Files/Latches
The Top Objects/Files/Latches section provides additional information about the most commonly used database resources and contains the following subsections:
· Top DB Objects
· Top DB Files
· Top Latches
5.3.9.7.1 Top DB Objects
The Top DB Objects subsection lists the database objects (such as tables and indexes) that accounted for the highest percentages of sampled session activity.
5.3.9.7.2 Top DB Files
The Top DB Files subsection lists the database files that accounted for the highest percentages of sampled session activity.
5.3.9.7.3 Top Latches
The Top Latches subsection lists the latches that accounted for the highest percentages of sampled session activity.
Latches are simple, low-level serialization mechanisms to protect shared data structures in the System Global Area (SGA). For example, latches protect the list of users currently accessing the database and the data structures describing the blocks in the buffer cache. A server or background process acquires a latch for a very short time while manipulating or looking at one of these structures. The implementation of latches is operating system-dependent, particularly in regard to whether and how long a process will wait for a latch.
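As a rough analogy only (not Oracle's actual implementation, which is operating system-dependent), a latch behaves like a spinlock: a process tries to grab it, spins briefly if it is held, and releases it as soon as the protected structure has been read or modified:

```python
import threading

class Latch:
    """Toy latch: non-blocking try-acquire with a bounded spin, analogous
    in spirit (not in implementation) to an SGA latch."""

    def __init__(self):
        self._lock = threading.Lock()

    def acquire(self, max_spins=1000):
        for _ in range(max_spins):
            if self._lock.acquire(blocking=False):
                return True   # got the latch; hold it only very briefly
        return False          # caller must retry or wait

    def release(self):
        self._lock.release()

latch = Latch()
assert latch.acquire()
# ... read or modify the shared structure here ...
latch.release()
```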
5.3.9.8 Activity Over Time
The Activity Over Time section is one of the most informative sections of the ASH report. This section is particularly useful for longer time periods because it provides in-depth details about activities and workload profiles during the analysis period. The Activity Over Time section is divided into 10 time slots. The size of each time slot varies based on the duration of the analysis period. The first and last slots are usually odd-sized. All inner slots are equally sized and can be compared to each other. For example, if the analysis period lasts for 10 minutes, then all time slots will be 1 minute each. On the other hand, if the analysis period lasts for 9 minutes and 30 seconds, then the outer slots may be 15 seconds each and the inner slots will be 1 minute each.
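The slot layout described above can be sketched as follows. The boundary-alignment rule here is an inference from the 9-minute-30-second example in the text, not Oracle's documented algorithm, and the exact slot count it produces may differ slightly from the report's:

```python
def time_slots(begin_s, end_s, inner_slot_s=60):
    """Divide [begin_s, end_s] (in seconds) into slots: equally sized inner
    slots aligned to inner_slot_s boundaries, with odd-sized first and last
    slots absorbing the unaligned remainder at each end."""
    slots = []
    first_boundary = -(-begin_s // inner_slot_s) * inner_slot_s  # round up
    last_boundary = (end_s // inner_slot_s) * inner_slot_s       # round down
    if begin_s < first_boundary:
        slots.append((begin_s, first_boundary))                  # odd first slot
    t = first_boundary
    while t < last_boundary:
        slots.append((t, t + inner_slot_s))                      # inner slots
        t += inner_slot_s
    if last_boundary < end_s:
        slots.append((last_boundary, end_s))                     # odd last slot
    return slots

# A 9.5-minute period starting 45 seconds past the minute: 15-second outer
# slots and 1-minute inner slots, as in the example in the text.
slots = time_slots(45, 615)
print(slots[0], slots[-1])  # (45, 60) and (600, 615)
```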
Each of the time slots contains information regarding that particular time slot, as described in Table 5-2.
Table 5-2 Activity Over Time

Column                  Description
----------------------  -----------------------------------------------------
Slot Time (Duration)    Duration of the slot
Slot Count              Number of sampled sessions in the slot
Event                   Top three wait events in the slot
Event Count             Number of ASH samples waiting for the wait event
% Event                 Percentage of ASH samples waiting for wait events in
                        the entire analysis period
When comparing the inner slots, perform a skew analysis by identifying spikes in the Event Count and Slot Count columns. A spike in the Event Count column indicates an increase in the number of sampled sessions waiting for a particular event. A spike in the Slot Count column indicates an increase in active sessions, and therefore a relative increase in database workload, because ASH data is sampled from active sessions only. Typically, when both the number of active session samples and the number of sessions associated with a wait event increase, the slot is a likely candidate for the cause of the transient performance problem.
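The skew analysis described above can be sketched as a simple spike detector over the inner slots. The threshold of 1.5 times the mean is an illustrative assumption, not a documented rule:

```python
def find_spikes(slot_counts, factor=1.5):
    """Flag inner slots whose sampled-session count exceeds the mean
    across all inner slots by the given factor. The first and last
    (odd-sized) slots are excluded from the comparison."""
    inner = slot_counts[1:-1]
    mean = sum(inner) / len(inner)
    return [i + 1 for i, count in enumerate(inner) if count > factor * mean]

# Hypothetical Slot Count values for a 10-slot analysis period.
counts = [3, 10, 11, 9, 42, 10, 12, 11, 10, 2]
print(find_spikes(counts))  # the slot holding 42 samples stands out
```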
To generate the ASH report with a user-defined slot size, run the ashrpti.sql script, as described in "Generating an ASH Report on a Specified Database Instance".