Here is an illustrated Nutshell FAQ on Oracle, explained in depth using a hands-on approach and constructivist sequence. It crystallizes stacks of books, several classes, thousands of user group entries, and years of hard-won experience.
Oracle's Documentation home page (tahiti.oracle.com)
The Oracle Database Management System (DBMS) consists of several components (loosely similar to UNIX's implementation of the multitasking operating system):
Oracle Installation Products
The Oracle 10g Universal Installer is installed in C:\Program Files\Oracle\Inventory.
When Oracle Personal Express Edition (XE) is installed, it creates by default
folder C:\oraclexe with the folders in the table.
Different installers use other drive/folder paths.
Environment variables enable the root path to be referenced by
$ORACLE_BASE within Unix (on Windows the path is kept in the registry, as noted below).
"xe" is the database Instance Identifier (SID) for Express Edition.
The path to executables (exe and dll files) within the bin folder containing Oracle-supplied command-line utilities for an Oracle version is stored within OS environment variable $ORACLE_HOME in Unix.
On Windows machines, rather than setting a %ORACLE_HOME% environment variable, Oracle uses the path in Windows Registry key HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE, which can be viewed and edited with the Regedit utility.
Oracle 11g only requires user specification of ORACLE_BASE, since it creates ORACLE_HOME from it.
Change OS Environment variable PATH
to replace the path with variable
To change from using Oracle 10g to Oracle 11g on the same machine, change the value of ORACLE_HOME.
To connect to a network different from the one active while Oracle was installed, edit ...\db_1\NETWORK\ADMIN\listener.ora and change the default port from 1521 to 1525.
Express Edition Files Installed
Enterprise Edition Files Installed
The default SAP installation names Oracle datafiles with a prefix which matches its tablespace name. Rather than the dbf filename suffix, SAP requires datafiles to have names such as "psappooli.data3" for the third datafile for the SAP POOL data indexes.
Programs > Settings > Control Panel > Administrative Tools > Services (services.msc).
Oracle 10g takes VM Size of 651,540K.
A different service name is assigned to process each instance. Service names contain "XE" for Express Edition, or "ORCL" or "OraDb10g_home1" automatically assigned by the Enterprise edition installer.
A Startup type of "Automatic" means that the service starts every time Windows boots up.
Executables for running services and the dll files they use (oraclient10.dll, etc. also in the same bin folder) cannot be deleted (to uninstall Oracle) until their service is stopped.
Advanced Database architectures include: Oracle Managed Files (OMF) to automate creation and dropping of datafiles and management of redo log and control files; Partitioning of tables by range, list, hash, composition; Replication (not suited to failover and backup); Standby (Failover) Databases, Grid Computing, which consists of OEM Grid Control of Oracle RAC (Real Application Clusters) and Oracle 10g platform independent Cluster Ready Services (CRS) to handle failover of services to surviving nodes
ASM, Transportable Tablespaces, Streams, Scheduler.
Oracle features for a grid computing architecture: –Oracle Real Application Clusters (RAC) –Automated Storage Management (ASM) –Oracle Transportable Tablespaces –Oracle Streams –Oracle Scheduler –Oracle Enterprise Manager Grid Control
If Oracle Windows Express Edition (XE) has been installed
Run StartDB.bat within folder ORACLE_HOME\bin, which starts the OracleService... and ...TNSListener services established during Oracle installation.
If the Oracle Enterprise Edition was installed it is assumed that the database just runs.
Since Oracle reads the pfile only when it starts, changes to it (using Notepad or another text editor) require the database to be restarted.
Oracle 10g added a way to change init parameter values dynamically (while the database is running):
ALTER SESSION set parameter = value ; ALTER SYSTEM set parameter = value SCOPE = SPFILE;
SCOPE = MEMORY; applies to the current instance only, and will not be retained on restart.
SCOPE = BOTH; applies to the current instance and when the database starts again.
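A sketch of the three SCOPE variants side by side (open_cursors is just an illustrative dynamic parameter):

```sql
-- Applies immediately to the running instance only; lost on restart.
ALTER SYSTEM SET open_cursors = 300 SCOPE = MEMORY;

-- Recorded in the spfile only; takes effect at the next startup.
ALTER SYSTEM SET open_cursors = 300 SCOPE = SPFILE;

-- Applies now and is also recorded in the spfile for future startups.
ALTER SYSTEM SET open_cursors = 300 SCOPE = BOTH;
```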
Oracle 10g introduced the spfile (server parameter file) named spfileSID.ora to retain parms so that they can be applied the next time the database starts.
Dynamic changes to parameters can also be made by Oracle's autotune intelligence.
Since the spfile is open for update as the database runs, it can't be edited like the pfile can. But the spfile can be backed-up to a pfile with a command like:
create pfile=/path/to/backup.ora from spfile;
Oracle 10g by default looks for a spfile. But if a spfile can't be found, Oracle looks for the pfile (as before).
When the database is restarted using spfile, the DBA must login again with proper privileges.
#, the pound sign, precedes a comment in the static parameter file.
The V$PARAMETER view of the internal x$ksppi table lists all initialization parameters for a database.
processes=150 is the default number of processes and sessions.
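To see how close a running instance is to those limits, one option (assuming access to the V$RESOURCE_LIMIT view) is:

```sql
-- Current vs. maximum utilization of the processes and sessions limits.
SELECT resource_name, current_utilization, max_utilization, limit_value
FROM   v$resource_limit
WHERE  resource_name IN ('processes', 'sessions');
```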
Upon start-up, if Oracle can't use the Control File for the instance, the instance can still be started NOMOUNT (background processes and memory only, without mounting the control file or datafiles).
There are several ways to get into the Oracle database:
Oracle's Database Control is for ...
tnsping.exe tests connectivity to a TNS listener.
Oracle's foreground communications utilities are known as SQL*Net (pre-Oracle8, and still supported by Oracle8) or Net8. See the Net8 Administrator's Guide.
"SQL*Net message to/from client" and other idle events are listed in the stats$idle_event table.
The maximum number of connections through Oracle Net Services is specified by the ???
A good test to check that everything is working is to run the application SQL Worksheet from the Enterprise Manager folder. When the SQL Worksheet logon screen is displayed, enter a valid username and password, but leave the service name field blank so that it will use the default database. Don't change the contents of the Connect As box.
Each connection to Oracle takes 2-4 MB of memory.
Programs > Oracle > Run SQL Command Line runs sqlplus.exe using ORACLE_HOME\sqlplus\mesg\*.msb files.
Within ORACLE_HOME\sqlplus\admin are several sql files:
Run pupbld.bat to install product user profile tables to restrict commands per user.
http://2600client:5560/isqlplus is Oracle's web based browser utility used to query the database over the Internet via Apache web server.
In iSQL *Plus the query section is separate from the result section, which can include graphical output in HTML format. http://www.oracle.com/technology/tech/sql_plus/index.html
Oracle metrics include these views of session activity in V$SESSTAT:
Kill a session (as a SYS user) by finding the SID and SERIAL# from V$Session and use this statement:
ALTER SYSTEM KILL SESSION '12,345';
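A sketch of the full sequence (SCOTT and the SID/SERIAL# values are placeholders):

```sql
-- Find the target session's identifiers.
SELECT sid, serial#, username, status
FROM   v$session
WHERE  username = 'SCOTT';

-- Kill it, quoting SID and SERIAL# together as one string.
ALTER SYSTEM KILL SESSION '12,345';
```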
It's more problematic to stop Oracle because we want to avoid losing data if it is in the middle of saving data to permanent storage.
With Windows Express Edition:
Within each process are threads. On Windows, the many Oracle background tasks run within a single multi-threaded process, oracle.exe.
On UNIX, Oracle's processes are listed with command "ps -ef | grep ora".
Run... Command-line scripts within Oracle bin folder:
Where OraHomeXX is the $HOME_NAME environment variable. (ORANT in versions prior to 8i).
DB Visualizer Personal, at $173 (+$73/year) is a bargain. It supports an extensive collection of databases and the majority of operating systems.
TOAD from Quest Software integrates its full-function SQL Editor into a multi-tabbed Procedure Editor and Debugger that works on multiple files in parallel with SCC-compliant version control. Its Schema Browser manages database objects. Its SQL Modeler builds complex queries, integrating server and session statistics to provide tuning hints and quick views of the server environment. Its DBA Module utility compares and synchronizes database schemas to manage space, administer security, and import/export data. Its free version lasts 30 days.
Spotlight from Quest Software provides real-time diagnostics.
ER Studio is a professional tool for generating intricate physical DDL (Data Definition Language) from complex logical maps of databases.
DB Tools for Oracle from SoftTree Technologies has a $670/user suite for developing applications which use Oracle databases and administering Oracle databases, including a Test Data Generator, File Loader, LOB Viewer and Loader, DB Difference
Its DB Monitor Expert graphically presents performance diagnostics.
DBHawk (formerly DBConsole) from DataSparc is a free, platform independent tool that provides administration and development of Oracle databases through a web browser.
dbXpert for Oracle from Vecal is a Windows-based client program to assist in the development and administration of Oracle databases. It provides object and session information; syntax highlighted automatically; user-friendly data browser; and a script library for frequently executed SQL statements.
Oracle provides sample sql files within %ORACLE_HOME%\demo\schema\human_resources
During installation, the Installer creates Local Group ORA_DBA described as "Members can connect to the Oracle database as a DBA without a password."
In most enterprise environments (e.g., a business with an IT dept.), databases are created with DDL scripts rather than interactively. An SCM dept enforces a process that involves checking in those scripts into a CVS (source repository) along with other assets of the company. SCM uses software such as Kantana to coordinate versions of code that matches specific versions of database creation scripts. Process standards like CMM, ISO 9000, and ITIL require this repeatable process. Using the Enterprise Manager GUI is not considered repeatable because it's done by "human hands".
Here are the DBA actions needed before writing PL/SQL programs:
The ALL_USERS view lists each USERNAME and its CREATED date. The USER_USERS view shows only the currently connected account.
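For example, a sketch of querying both views:

```sql
-- All accounts visible to the current user, and when they were created.
SELECT username, created FROM all_users ORDER BY created;

-- USER_USERS returns only the row for the currently connected account.
SELECT username, account_status, default_tablespace FROM user_users;
```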
Oracle Server Manager has both a line mode and GUI interface. It replaces the SQL*DBA utility used in Oracle version 6 and Oracle7 releases prior to 7.3. After Oracle is installed:
To change the password to account xxx in SQL:
ALTER USER xxx IDENTIFIED BY password;
To change the password to account xxx in SQL*Plus, use the PASSWORD command, which prompts:
Changing password for xxx
New password:
Retype new password:
Part of Cary Millsap's 1991 industry-standard OFA (implemented after release 8.1.3) is a consistent set of folders under the \oracle\admin folder for each instance (such as \ORCL):
Note: \Core is only used for UNIX systems, which use mount points instead of drive letters. For OFA on UNIX, install to u01/app/oracle/product/9.2.0/... installed from the u01/app/oracle/admin/cdrom/ automount holding the entire install CDROM
Additionally, the LOG_ARCHIVE_DEST points to folder /oracle/<sapsid>/saparch/SAPSID_
If the OFA file naming standard seems like files are diced and sliced a lot ... that's the idea! The OFA structure enables speedier performance than having all files jumbled together, because INIT.ORA can point to folders placed on separate (parallel) devices so that I/O is spread around. Fragmentation within a file type can be better controlled when file types are distributed. Got that?
Oracle stores and maintains user data using several types of data structures:
When processing DML (Data Manipulation Language) to insert, update, or delete data, Oracle first writes 1) after images to the Online Redo Log Files, then 2) before images to an Undo segment, then 3) the changed blocks to the data file. A SQL COMMIT statement marks undo entries for the current session as no longer needed for rollback.
Oracle in a Nutshell: A Desktop Quick Reference By Rick Greenwald, David C. Kreines
Conceptual relationships between Logical and Physical schemas:
The DBA_TABLESPACES view lists all tablespaces created. A tablespace can span several data files; however, each datafile can belong to only one tablespace. To display the mapping between database files and tablespaces, query DBA_DATA_FILES:
SELECT tablespace_name, file_name, bytes FROM dba_data_files ORDER BY tablespace_name, file_name;
The “nn" above is a placeholder for a zero-padded count of the physical files multiplexed (mirrored) over several drives (for better performance and redundancy).
User tablespaces are separate from the system tablespace. Ideally, tablespaces are created in raw (unformatted) disk partitions.
SAP requires index file names to be eight characters long, beginning with the table name, then a variable number (between one and three) of underscore (_) characters, and ending with a number.
Fragmentation
Indexes should be stored in a different physical disk location than their associated tables.
With 10g's extent management feature, fragmentation does not occur because ...
Index segments must be sized appropriately or fragmentation will slow database performance.
Use this command to rebuild an index:
alter index <index-name> rebuild tablespace <index-tablespace>;
Applications can continue to use the table while the index is rebuilt if the ONLINE keyword is added (alter index <index-name> rebuild online;).
To see how much space an index currently occupies:
select sum(bytes) from dba_extents where segment_name='<index-name>';
This validation command does not allow application access while it runs:
analyze index <index_name> validate structure;
The DBA_FREE_SPACE_COALESCED view gives information on tablespaces with extents which can be coalesced.
Tables are rebuilt this way:
Rollback segments store the "before" image of data prior to transactions being applied to the database so that they can be used to restore specific transactions to "back out" incomplete transaction sets as if they never occurred.
The "user rollbacks" statistic in V$SYSSTAT reports how many times users manually issue a ROLLBACK statement or when an error occurs during user transactions.
The number of rollback segments available may need to be increased if there is Rollback Segment Contention (a process had to wait to get access to a rollback segment), indicated when this query reports more than 1%:
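The referenced query is not shown above; one commonly used form (a sketch) compares waits to gets in V$ROLLSTAT:

```sql
-- Percentage of rollback segment header requests that had to wait.
SELECT ROUND(SUM(waits) / SUM(gets) * 100, 2) AS pct_waits
FROM   v$rollstat;
```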
Brian Cryer recommends that rollback segments be isolated in their own tablespace, preferably on a separate disk from other active tablespaces.
After they have dynamically extended, rollback segments shrink back to the size specified in the OPTIMAL parameter. OPTIMAL should be close to the “Avg.Active” from this query:
Select segment_name, shrinks, aveshrink, aveactive "Avg.Active" from v$rollstat v, dba_rollback_segs d where v.usn = d.segment_id;
sqlplus /nolog
CONNECT /@ORACLASS AS SYSDBA
This is easier than using Oracle's own authentication to connect, which involves supplying an encrypted Password file created by the SYSDBA or SYSOPER. On Windows, use orapwd to create a PWD<%ORACLE_SID%>.ORA file in %ORACLE_HOME%\database. On Unix, create orapw<ORACLE_SID>.ora in $ORACLE_HOME/dbs.
REMOTE_LOGIN_PASSWORDFILE = EXCLUSIVE
SYSOPER has system privileges to start up, shut down, and back up the database; this role can also modify database components.
An extent is a unit of storage made up of one or more contiguous Oracle blocks obtained in a single allocation. When a segment can no longer fit new data into its currently allocated extents, another extent is allocated.
When a database is created, the extents allocated are specified by the STORAGE clause of the CREATE command. The initial storage value cannot be changed. Each extent in a database segment can be of the same or different size. The size of the next extent allocated is dependent upon several factors.
The maximum number of extents a single database object can have depends on the Oracle block size.
The DBA_EXTENTS and DBA_SEGMENTS views display information about extents. This query lists database objects with over 10 extents allocated:
select * from dba_segments where extents > 10;
Oracle bumps the high-water mark for a table in increments of five blocks.
select * from dictionary where table_name like 'DBA_TABLES%';
The DBA_SYNONYMS view lists information on all synonyms.
When using these views you should keep timed_statistics set to true (TIMED_STATISTICS=TRUE) in your init.ora file because doing so has minimal effect on performance.
For more information on enqueues:
Datafiles are created using SQL commands CREATE TABLESPACE or ALTER TABLESPACE
Each process needs to open a number of files (control file, log file, database file)
to work against the database.
Among V$SYSSTAT and LoadRunner statistics:
"physical writes" counts the number of I/O requests to the operating system to write a database block to the disk subsystem. Writes are usually performed by background processes DBWR or LGWR.
Although ASM enables DBAs to avoid the bureaucracy associated with arrays usually owned by network storage administrators or having to learn how to deal with RAID drives, some DBAs may not welcome the additional workload, such as setting up and maintaining on each server a whole special Oracle Instance needed to map file extents to physical disk blocks. This architecture enables hard disks to be added without drama. But will this force DBAs to handle disk drives as well now?
Oracle provides a view to list disks in diskgroups; issue:
select * from v$asm_disk;
In addition to the "Normal" ASM setting where data is mirrored on two separate disks, ASM offers a "High Redundancy" option where data is mirrored on three separate disks for three-way mirroring. DBAs set up failgroups to specify fail-over.
ASM allows for variable stripe width. The default stripe size of 1 MB for datafiles can be changed to 128K for the fast transfer needed with redo logs.
These advantages should be weighed against the fact that RAID drives in a separate machine reduce CPU cycles on the server and provide the additional solid-state caching that usually comes with large arrays.
Oracle blocks are physically stored on devices in operating system (OS) blocks.
The table header structure maintains one or more free lists: lists of blocks which have free space for insertion.
Free list contention occurs when more processes are attempting to insert data into a given table than there are free lists; some will have to wait for access to a free list.
Free list waits as a percentage of logical reads (from buffer) should be a very low 1% or less, as reported with this SQL query:
select round((sum(decode(w.class,'free list',count,0)) / (sum(decode(name,'db block gets',value,0)) + sum(decode(name,'consistent gets',value,0))))*100,2) from v$waitstat w, v$sysstat
A "db block get" is physical I/O.
The "consistent gets" statistic in V$SYSSTAT counts the number of times a consistent read (CR)
was requested for a block.
The "consistent changes" statistic in V$SYSSTAT counts the number of times a database block has rollback entries applied to perform a consistent read (CR) on the block. This consumes a great deal of resources.
Table Clusters
To reduce I/O, tables that are often used together in queries can be physically grouped into the same data blocks. These table cluster objects hold the values of the columns in the clustered tables. The cluster key defines the column used to join the tables together.
The "Cluster key scans" statistic reports the number of cluster scans that have been started.
Dictionary (SYSTEM tablespace) files are created with the default block size, DB_BLOCK_SIZE.
With Oracle 8i the default is 8K bytes.
OLTP (On-Line Transaction Processing) apps that generally read (get) and write small amounts of data at a time
usually benefit from a smaller block size (4KB).
Block sizes should be larger than the average_row_len obtained from the dba_tables view in order to avoid row chaining, which is when a row must be spread over two or more blocks because the row is so large or has so many fields that it cannot fit into a single data block.
select round((cont.value/(scn.value+rid.value))*100,2) from v$sysstat cont, v$sysstat scn, v$sysstat rid where cont.name= 'table fetch continued row' and scn.name= 'table scan rows gotten' and rid.name= 'table fetch by rowid'
Row Migration occurs when an update to a row cannot fit within the current block, so data for the row must be migrated to a new block, leaving a pointer to the new location in the original block. This additional processing time and I/O occurs when the STORAGE parameter PCTUSED is high and PCTFREE is low. Exporting and re-importing may eliminate this problem.
When Oracle starts, it refers to a single control file, but several control files exist for a database because multiple copies are created and updated for redundancy. Oracle recommends specifying at least 3 (up to 8) duplicates on separate disk drives. These files are read based on the CONTROL_FILES= parameter specified in init.ora or initORACLASS.ora. (A different kind of control file entirely: starting the SQL*Loader utility requires specification of a loader control file:)
SQLLDR \'SYS/password AS SYSDBA\' control=foo.ctl
As the database runs, the control file holds the log sequence number linking redo log files with datafiles.
That's why the instance needs to be bounced when redo logs or datafiles are added or moved. Restart DB (mounted mode) with PFILE parameter.
When creating the control file for a production database, MAXINSTANCES, MAXDATAFILES, MAXLOGFILES, MAXLOGMEMBERS, and MAXLOGHISTORY should be set and watched to avoid an emergency to extend them. Setting these requires the creation of a new control file.
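One way to watch how full those record sections are before they become an emergency (a sketch, assuming a release where V$CONTROLFILE_RECORD_SECTION is available):

```sql
-- How full each record section of the control file currently is.
SELECT type, records_used, records_total
FROM   v$controlfile_record_section
ORDER  BY type;
```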
Since this file is stored in binary format, create a text-formatted copy in a user trace file with: ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
To restore an entire database, a backup is restored, then rolled forward using archived and online redo log files to make changes to a point-in-time before the error occurred.
A "redo log" refers to a set of redo log files. Oracle switches to another log group when one is full.
Oracle redo files are "circular" because when it's done writing to the last file in the series, it begins overwriting the first online redo log file. Several Redo Logs may reside inside an .arc file recognized by the operating system. Oracle can multiplex (mirror) Redo logs on several different disks that are updated simultaneously. Each member of a log group is an exact mirror of the others, and redo log entries are written to each member in parallel. This is instead of using RAID-1 or RAID-5.
NOTE: By contrast, Flashback files (new to 10g) capture changes to specific blocks on physical disk drives to return to the state of the database at specified points in time. Flashbacks operate in addition to redo logs. NOTE: Within ORACLE_HOME\sqlplus\admin, movelogs.sql moves online logs to a new flash recovery area, which is specified in INIT.ORA parameter db_recovery_file_dest.
The "redo entries" statistic viewed with V$SYSSTAT
increments each time redo entries are copied into the redo log buffer.
[_] To reduce use of computer power and disk space, datafiles can be set to be updated without archived redo logs by running the database in NOARCHIVELOG mode.
The V$LOGFILE view returns information about member .rdo file path for each GROUP#.
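For example, a sketch of listing the members:

```sql
-- One row per redo log member, with its group number and status.
SELECT group#, member, status
FROM   v$logfile
ORDER  BY group#;
```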
If there isn't enough space in the redo buffer, a redo space wait occurs. An indicator of this is when the rate of requests ("req") is higher than the ("wrt") rate required of the log writer:
select round((req.value/wrt.value)*100,2) from v$sysstat req, v$sysstat wrt where req.name='redo log space requests' and wrt.name='redo writes'
The "redo log space requests" statistic in V$SYSSTAT and LoadRunner tracks the number of times a log file switch has occured -- when the active log file is full and Oracle must wait for disk space to be allocated to hold more redo log entries.
The "Log File Switch" wait event counter is incremented when commit requests wait for "logfile switch (archiving needed)" or "logfile switch (chkpt. Incomplete)". This web page recommendes: "Ensure that the archive disk is not full or slow. DBWR may be too slow because of I/O. You may need to add more or larger redo logs, and you may potentially need to add database writers if the DBWR is the problem. "
"Small Log files in relation to the size of the SGA or the commit rate of the workload can cause problems. When the log switch occurs, Oracle must ensure that all committed dirty buffers are written to disk before switching to a new log file. If you have a large SGA full of dirty buffers and small redo log files, a log switch must wait for DBWR to write dirty buffers to disk before continuing. "
The "log file space/switch" wait event in V$SESSION_WAIT...
Before performing maintenance on the active log group, manually trigger a log switch from the Enterprise Manager console (OEM), SQL*Plus Worksheet, connect as the SYS user and execute:
ALTER SYSTEM SWITCH LOGFILE;
The system should respond with "System altered"
The alert log's location is specified by the BACKGROUND_DUMP_DEST and USER_DUMP_DEST parameters before 11g and DIAGNOSTIC_DEST in 11g. By default, that's the $ORACLE_HOME/admin/$ORACLE_SID/bdump directory before 11g and $ORACLE_BASE/diag/$ORACLE_SID in 11g.
The format of alert log is in plain text. Oracle 11g writes an additional alert log named log.xml in XML format.
Previously, direct server access was needed to read the log, but 11g provides a SQL view:
select name, value from v$diag_info;
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
The instance needs to be bounced after this change.
Trace (.trc) files can be used to track down problems not causing critical failures.
The services necessary to run the Oracle instance are created using the oradim utility. If it's run without arguments, a GUI interface appears.
oradim -new -sid TEST -startmode manual -pfile "E:\oracle\admin\test\pfile\init.ora"
The .sql file contains SQL statements such as this:
CREATE TABLESPACE DATA DATAFILE 'h:\...\...data\sid\DATA01.dbf'
Create an instance database:
v$instance contains the startup_time field displaying the instance start time.
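For example, a sketch of checking when the instance came up:

```sql
-- Instance name, when it started, and its current state.
SELECT instance_name, startup_time, status
FROM   v$instance;
```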
Resizing a database:
ALTER DATABASE DATAFILE 'E:\....DBF' RESIZE 300K;
SELECT TABLESPACE_NAME, BYTES FROM DBA_DATA_FILES;
Its contents are not visible to other processes unless (MTS) Oracle MultiThreaded Server is used. Kinda like a private room, baby ;)
In a MTS environment, the UGA (User Global Area) is used instead of the PGA.
The number of concurrent connections is limited by room in the PGA/UGA. The V$SESSTAT view used by LoadRunner Oracle statistics report:
Prior to 9i, the PGA/UGA total size was set by a combination of parameters OPEN_LINKS, DB_FILES, LOG_FILES, and HASH_AREA_SIZE.
When the 9i PGA_AGGREGATE_TARGET size is set, Oracle ignores the older HASH_AREA_SIZE, SORT_AREA_SIZE, and SORT_AREA_RETAINED_SIZE parms because it can manage the PGA in aggregate (total) when WORKAREA_SIZE_POLICY is set to AUTOMATIC (rather than MANUAL) mode, using v$pga_target_advice.
The PGA area is too small when "multi-pass exec" is high.
Monitor the proportion of sorts in memory vs. from disk with this query:
select round((mem.value/(mem.value+dsk.value))*100,2) from v$sysstat mem, v$sysstat dsk where mem.name='sorts (memory)'and dsk.name='sorts (disk)'
"SORTS (DISK)" displays the number of sorts that needed temporary segments created.
"SORTS (ROWS)" displays the total number of rows sorted by all sorts performed.
Oracle STATSPACK reports (available since Oracle8i) include the number of "Rows per Sort" calculation.
The SGA belongs to the Oracle Instance. Its contents are visible/accessible to several user applications and system processes. Kinda like a hotel lobby
Oracle provides a SQL interface to view memory in the SGA with x$ tables,
whose names can be queried with
Data can be accessed from RAM 2,000 times faster than with physical I/O from a hard disk.
The LOCK_SGA parameter locks the SGA into physical memory and prevents it from being paged out.
To clear the SGA of data, use the SQL command:
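The command itself is not given in the text; these flush statements are a sketch (FLUSH BUFFER_CACHE requires 10g or later):

```sql
-- Discard cached SQL/PL/SQL and object definitions from the shared pool.
ALTER SYSTEM FLUSH SHARED_POOL;

-- 10g and later: discard cached data blocks from the buffer cache.
ALTER SYSTEM FLUSH BUFFER_CACHE;
```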
The v$sga view or "SHOW SGA" SQL statement displays the size (in bytes) of each major component of the SGA:
The parameters to control the size of these areas are described in SGA Management.
System Global Area's Shared Pool contains:
Shared SQL areas and private SQL areas created for each transaction initiated.
View v$db_object_cache displays objects in the library cache (tables, indexes, clusters, synonym definitions, PL/SQL procedures/packages, triggers).
The library cache hit ratio reported in STATSPACK reports is calculated with this:
Select sum(pinhits) / sum(pins) "Hit Ratio", sum(reloads) / sum(pins) "Reload percent" From v$librarycache Where namespace in ('SQL AREA', 'TABLE/PROCEDURE', 'BODY', 'TRIGGER');
Brian Cryer feels that if this hit ratio is not at least 85%, the SHARED_POOL_SIZE may not be large enough. Although less likely, the init.ora parameter OPEN_CURSORS (default 100) may also need to be increased.
Oracle user processes save in the Shared SQL pool intermediate results from SQL commands previously optimized. So before performing a hard disk operation, Oracle can service identical requests simply by retrieving them from its cached memory for reuse.
When a SQL statement is issued by a user process, a work heap is allocated, and a parse tree and execution plan are built in the shared pool.
The shared pool also contains headers of PL/SQL packages and procedures that have been executed, the library cache, data dictionary cache, and performance information for each type of object in the library cache.
To get the current size of the shared pool:
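The query is not shown in the text; a sketch using V$PARAMETER and V$SGASTAT:

```sql
-- Configured size of the shared pool.
SELECT value FROM v$parameter WHERE name = 'shared_pool_size';

-- Actual allocated bytes, summed from the SGA statistics view.
SELECT SUM(bytes) FROM v$sgastat WHERE pool = 'shared pool';
```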
Oracle Metalink document 263809.1 says the minimum shared_pool_size setting for 64-bit Oracle 10g is 144 MB.
To get the amount of free memory in the shared pool:
To get the percentage of free memory in the shared pool:
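Neither of these queries appears in the text; sketches against V$SGASTAT:

```sql
-- Free bytes currently in the shared pool.
SELECT bytes FROM v$sgastat
WHERE  pool = 'shared pool' AND name = 'free memory';

-- Free memory as a percentage of the whole shared pool.
SELECT ROUND(free.bytes / total.bytes * 100, 2) AS pct_free
FROM   (SELECT bytes FROM v$sgastat
        WHERE pool = 'shared pool' AND name = 'free memory') free,
       (SELECT SUM(bytes) AS bytes FROM v$sgastat
        WHERE pool = 'shared pool') total;
```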
If this percentage is above 35% even under intense load, it is too high. Low free values may be a cause for other issues with components of the SGA.
Each private SQL area holds bind variables, environment and session parameters, runtime stacks, and buffers. Each private area is deallocated after the session's cursor is closed.
The number of private SQL areas a user session can have open at one time is limited by the value of the OPEN_CURSORS init.ora parameter.
The v$librarycache view lists the get hit ratio for each namespace (SQL AREA).
This is not specific to SQL and PL/SQL blocks.
A shared pool reload occurs when Oracle needs to reparse a SQL or PL/SQL block before it can execute it.
select round(sum(reloads)/sum(pins)*100,2) from v$librarycache where namespace in ('SQL AREA','TABLE/PROCEDURE','BODY','TRIGGER')
To identify SQL statements that have had to be re-parsed, use this query provided by Brian Cryer:
select executions, t.sql_text from v$sqlarea a, v$sqltext t where parse_calls >1 and parse_calls = executions and a.address=t.address and executions > 10000 order by executions desc;
If this returns a large number of rows, increase the number of 'executions' specified in the query.
The internal information includes extent locations, datafiles, and segments associated with the tables and columns referenced by the SQL.
The Data Dictionary Object Alert reports excessive requests or contention here.
Calculate the Dictionary Cache Hit Ratio to monitor the proportion of requests served from the data dictionary cache rather than from the data dictionary tables on disk:
select sum(gets-getmisses)*100/sum(gets) from v$rowcache;
Cache misses should be expected after instance startup since the data dictionary cache is empty at startup.
cache misses should decrease as more data is read into the cache until a "steady state" is reached, in which the most frequently used dictionary data is in the cache.
Brian Cryer feels that this miss ratio should be less than 15%.
It's counter-intuitive, but Oracle is faster with a smaller pool if a lot of non-reusable SQL are cycling in and out of the pool. This is because the larger the space, the more time Oracle needs to manage that space.
The effectiveness of the buffer cache is usually measured by the buffer cache hit ratio, which is the proportion of requests for data satisfied by a logical read of data already in the buffer cache rather than by a physical read from disk. This ratio requires a bit of calculation because the number of logical reads is the total of the "consistent gets" and "db block gets" statistics maintained by V$SYSSTAT and LoadRunner:
select round((1-(pr.value/(bg.value+cg.value)))*100,2) from v$sysstat pr, v$sysstat bg, v$sysstat cg where pr.name='physical reads' and bg.name='db block gets' and cg.name='consistent gets'
Action is needed if this falls below 90%, according to Brian Cryer.
While higher percentages are better, since access to data in memory is faster than an I/O operation to disk, there is a point of diminishing returns: increasing the size of the database buffer cache takes space needed by other parts of the SGA, to the point where SGA paging seriously degrades performance.
[_] In 8i, DB_BLOCK_BUFFERS was used to adjust the size of the Buffer Cache. This is deprecated in 9i, which specifies the buffer cache size in DB_CACHE_SIZE bytes, which can be changed without shutting down the database. To help in selecting the optimum size, Oracle 9i provides the v$db_cache_advice view as part of overall SGA memory management.
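As a sketch, the advisor can be queried like this (the DB_CACHE_ADVICE initialization parameter must be ON for the view to be populated):

```sql
-- Predicted physical reads at candidate cache sizes for the default buffer pool
select size_for_estimate, estd_physical_read_factor, estd_physical_reads
  from v$db_cache_advice
 where name = 'DEFAULT'
   and advice_status = 'ON'
 order by size_for_estimate;
```

Pick the smallest size beyond which ESTD_PHYSICAL_READS stops dropping appreciably.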
Since 8i, the Buffer Cache is made up of sections:
Each of these pools is split into "working data sets" with their own LRU latches.
Because of the wide mix of single-point random reads, large sequential reads, and very large direct reads,
Oracle places blocks read by full table scans at the low/cold end of the Least Recently Used (LRU) list.
(Technically, the table's buffers are set with a "forget bit" flag).
[_] Whether a table is long or short can be defined by Optimizer hints coming down into the row source access layer of Oracle, but it can also be set with the initialization parameter SMALL_TABLE_THRESHOLD. In Parallel Server environments supporting parallel query servers across instances, a table is considered small for scanning purposes if its cache partition fits in the current SGA. Oracle can scan short tables that completely fit in memory quicker than using an index.
If Oracle needs to do I/O, full table scans of long tables are generally bad for overall performance.
The proportion of table scans occurring on short tables is calculated with this query:
select round((shrt.value/(shrt.value+lng.value))*100,2) from v$sysstat shrt, v$sysstat lng where shrt.name='table scans (short tables)' and lng.name='table scans (long tables)'
The "table scans (long tables)" statistic in V$SYSSTAT counts the full table scan operations performed on long tables.
The "table scans (short tables)" statistic in V$SYSSTAT counts the full table scans on tables that fit in the SGA.
A low proportion of short scans may indicate a lack of indexes on large tables, or poorly written SQL which fails to use existing indexes or which requires a large percentage of the table to be returned.
The "table scan blocks gotten" statistic in V$SYSSTAT tracks the number of blocks retrieved sequentially by Oracle during scan operations.
The v$bh view displays the status of each database cache buffer:
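For a quick summary, the buffer states can be grouped like this (a sketch; note that querying v$bh can itself be slow on a very large cache):

```sql
-- Count of buffers in each state: xcur = current, cr = consistent read, free, etc.
select status, count(*)
  from v$bh
 group by status
 order by count(*) desc;
```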
In single-instance mode, concurrent I/O on a database block increments the "buffer busy waits" event statistic in the V$SESSION_WAIT view.
To decrease buffer waits, consider:
This web page recommends looking into V$WAITSTAT for clues:
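A minimal sketch of such a query against V$WAITSTAT:

```sql
-- Which block classes (data block, segment header, undo header, ...) accumulate buffer busy waits
select class, count, time
  from v$waitstat
 where count > 0
 order by count desc;
```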
The average wait time for sequential versus scattered reads:
prompt "AVERAGE WAIT TIME FOR READ REQUESTS"
select a.average_wait "SEQ READ", b.average_wait "SCAT READ" from sys.v_$system_event a, sys.v_$system_event b where a.event = 'db file sequential read' and b.event = 'db file scattered read';
High counts here may indicate missing or suppressed indexes.
High counts are more acceptable if the database resides on solid-state hard disk devices which are six times faster than traditional disk devices (100,000 I/Os per second).
In general, this event is indicative of disk contention on index reads since single block reads are usually the result of using indexes and short tables (usually a single-block read with p3=1, but can be multiple blocks). This event is also used for rebuilding the controlfile and reading datafile headers (P2=1).
It is usually good for this number to be large for a high-transaction OLTP system, since there is less overhead reading single rather than multiple blocks into the cache.
However, index scans can also be high if SQL is not coded efficiently, such as with unselective indexing
or poor joining order for multiple table joins.
Use the Statspack report to help identify the queries in question.
[_] Consider adjusting the "PGA AGGREGATE TARGET" parameter to reduce waits on sequential reads.
Waits for sequential read can also be high if the DB_CACHE_SIZE is too small.
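A sketch of checking the PGA advisor before adjusting the target (the 200M value is only an illustrative assumption, not a recommendation):

```sql
-- Estimated cache hit percentage and over-allocations at candidate PGA sizes
select pga_target_for_estimate, estd_pga_cache_hit_percentage, estd_overalloc_count
  from v$pga_target_advice
 order by pga_target_for_estimate;

alter system set pga_aggregate_target = 200M;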
WISE (Workload Interface Statistical Engine) is called "intelligent" because it graphs STATSPACK and Oracle 10g Automated Workload Repository (AWR), ADDM, and ASH time-series data as hourly and daily trend graphs, compared against historical baselines in a repository. This visualization allows senior DBAs to proactively spot the subtle signatures causing repeating performance problems.
Its size is initialized according to the LOG_BUFFER (bytes) parameter.
The "change write time" statistic in v$sysstat reports the elapsed time of redo writes for changes made to CURRENT blocks, in 10s of milliseconds.
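The statistic can be read directly, for example:

```sql
select name, value from v$sysstat where name = 'change write time';
```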
Entries in the redo log buffer are necessarily sequential for sequencing redo operations. To enforce "mutual exclusion", user processes must use a simple low-level serialization mechanism called latches. Like a baton being passed among runners, Oracle provides only one allocation latch to ensure that only one Oracle user process at a time may allocate and update a particular space in the redo buffer.
[_] The number of redo copy latches should be set to the number of CPUs via the parameter LOG_SIMULTANEOUS_COPIES. Multiple redo copy latches can exist on machines with multiple CPUs, so that each CPU can copy entries to the redo log buffer concurrently.
[_] If a redo entry fits into the threshold size defined by the initialization parameter LOG_SMALL_ENTRY_MAX_SIZE, it can be copied on the redo allocation latch. The number of times this happens is calculated thus:
select round(greatest((sum(decode(ln.name,'redo allocation',misses,0)) /greatest(sum(decode(ln.name,'redo allocation',gets,0)),1)), (sum(decode(ln.name,'redo allocation',immediate_misses,0)) /greatest(sum(decode(ln.name,'redo allocation',immediate_gets,0)) +sum(decode(ln.name,'redo allocation',immediate_misses,0)),1)))*100,2) from v$latch l,v$latchname ln where l.latch#=ln.latch#
If a redo entry does not fit, the user process must obtain a redo copy latch before copying the entry into the buffer.
While holding a redo copy latch, the user process obtains the allocation latch, performs allocation, and releases the allocation latch.
Next the process performs the "copy under the copy latch" into its allocated space in the buffer and then releases the redo copy latch. The number of times this happens is calculated thus:
select round(greatest((sum(decode(ln.name,'redo copy',misses,0)) /greatest(sum(decode(ln.name,'redo copy',gets,0)),1)), (sum(decode(ln.name,'redo copy',immediate_misses,0)) /greatest(sum(decode(ln.name,'redo copy',immediate_gets,0)) +sum(decode(ln.name,'redo copy',immediate_misses,0)),1)) )*100,2) from v$latch l,v$latchname ln where l.latch#=ln.latch#
Because the user process does not obtain the copy latch while holding the allocation latch, it holds the allocation latch for a very short period of time. That's why if a user process is not willing to wait, it issues a timeout.
After allocating space for a redo entry, the user process copies the entry into the buffer.
The frequency of waits occurring is calculated thus:
select round(((sum(gets) - sum(misses)) / sum(gets))*100,2) from v$latch
The success rate of latch requests made in immediate (not-willing-to-wait) mode is calculated thus:
select round(((sum(immediate_gets) - sum(immediate_misses)) / sum(immediate_gets))*100,2) from v$latch
The "Latch Free" wait event counter increments when a latch free miss occurs.
It is also used by the Recovery Manager (RMAN) backup process, introduced with Oracle 8 to replace Oracle 7's EBU (Enterprise Backup Utility).
3rd party backup solutions that interface with Oracle's Media Management Layer (MML) include:
Since a bug was fixed in 8.1.7, get the high water mark (HWM) of connected sessions since startup with v$resource_limit.
The V$SGA_DYNAMIC_COMPONENTS view displays the size of the "granules", the units of memory in which Oracle adjusts memory:
4 MB when the total size is under 128MB.
This view also displays the current_size of components in the SGA:
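For example:

```sql
-- Current size and granule size of each dynamically managed SGA component
select component, current_size, granule_size
  from v$sga_dynamic_components;
```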
On 32-bit machines, the maximum SGA cannot exceed about 1.7GB of the 4GB address space on many Unix platforms.
With 9i, the components consuming memory in the SGA had to be (continually) adjusted manually (by changing the init.ora) to avoid ORA-04031 out-of-memory allocation faults.
With 10g, DBAs set a single new SGA_TARGET initialization parameter, a space that Oracle Automatic Shared Memory Management (ASMM) automatically distributes. The SGA_MAX_SIZE cannot be dynamically resized.
Query the V$SGA_TARGET_ADVICE view and set the SGA_TARGET to the sga_size value that corresponds to the least estimated physical reads ("ESTD_PHYSICAL_READS"). Use command:
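A sketch of the sequence (the 600M figure is an illustrative assumption, not a recommendation):

```sql
-- Estimated physical reads at candidate SGA sizes
select sga_size, estd_physical_reads
  from v$sga_target_advice
 order by sga_size;

-- Then set the target to the sga_size with the least ESTD_PHYSICAL_READS
alter system set sga_target = 600M;
```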
The SGA_TARGET value includes space previously specified for the shared_pool_size and startup overhead of other pools.
The SGA_TARGET value also includes "Fixed Size" space that Oracle sets based on the operating system.
The paradigm shift in thinking is that values to be maximized in 9i become "overhead" values to be minimized in 10g. So in 10g do this:
SQL> alter system set db_cache_size=0;
Query the v$sgainfo dynamic view for the "Startup overhead in shared pool" value.
Manual tuning in 10g involves setting the minimum amount of space that should be allocated at start-up, so that Oracle has the memory already dedicated when it's needed.
Explicit settings are still needed for components that 10g does not adjust automatically:
The keep buffer cache, the recycle buffer cache, and nonstandard block-size buffer caches.
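These can be sized explicitly, for example (the sizes below are illustrative assumptions):

```sql
alter system set db_keep_cache_size    = 32M;
alter system set db_recycle_cache_size = 16M;
-- A nonstandard block-size cache (only valid if 16K is not the database's standard block size)
alter system set db_16k_cache_size     = 64M;
```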
The PROCESSES parameter in init.ora specifies the maximum number of processes accessing the database.
select * from v$session where type ='BACKGROUND';
Set the DB_WRITERS parameter to define as many DBWR processes as there are physical disks used to store data files.
The "write requests" statistic in V$SYSSTAT and LoadRunner stores the number of times DBWR takes a batch of dirty buffers and writes them to disk.
This activity is counted by the "redo sync writes" statistic in V$SYSSTAT when the redo log buffer must be written out to disk due to a commit.
The "Log File Sync" wait event tracks how often the redo log file sync process must wait to complete.
Other background system processes ...
plus operating system Process ID for each running process.
The location of trace files is specified in parameter BACKGROUND_DUMP_DEST. By default, that's folder /admin/INSTANCE_NAME/bdump under the ORACLE_BASE folder.
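The current location can be confirmed from SQL*Plus:

```sql
show parameter background_dump_dest
```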
Parameter job_queue_processes (default 10) defines the maximum number of scheduled jobs allowed to execute simultaneously.
The "n" can be as large as the number specified in parameter JOB_QUEUE_PROCESSES for the instance.
With Shared Server, client connections are ultimately sent to a Dispatcher, which binds the client to an already established virtual circuit.
As a general rule, on start-up a dispatcher should be created for every 1000 connections (rounded up to the next integer), as specified by the DISPATCHERS parameter. But monitor views V$QUEUE, V$DISPATCHER, and V$DISPATCHER_RATE. Parameter MAX_SHARED_SERVERS limits the number of shared server processes running simultaneously.
Parameter SHARED_SERVER_SESSIONS specifies the total number of shared server user sessions to allow, used to reserve user sessions for dedicated servers.
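A sketch of gauging dispatcher load from V$DISPATCHER (busy and idle are accumulated in hundredths of seconds):

```sql
-- nullif avoids divide-by-zero right after startup, when both counters are 0
select name, status, round(busy / nullif(busy + idle, 0) * 100, 2) "% BUSY"
  from v$dispatcher;
```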
A shared dispatcher process is configured using init.ora parameter:
dispatchers = (PROTOCOL=TCP)(SERVICE=orclXDB)
Within a shared Multi-Threaded Server (MTS), a dispatcher process is configured using init.ora parameter:
MTS_DISPATCHERS = ("tcp, 4", "spx, 2")
Implement Shared Server in tnsnames.ora:
(DESCRIPTION= (ADDRESS=(PROTOCOL=tcp) (HOST=sales-server)(PORT=1521)) (CONNECT_DATA= (SERVICE_NAME=sales.us.acme.com) (SERVER=shared) ))
SAP stores the master tnsnames.ora and sqlnet.ora files in folder /usr/sap/trans along with the common transport directory sapconf.
Oracle dynamically adjusts the number of shared server processes based on the length of the request queue.
The "nnn" can be as large as the number specified in parameter ???.
The "nnn" can range between PARALLEL_MIN_SERVERS and PARALLEL_MAX_SERVERS.
For cross-instance consistent read requests, it creates a consistent read version of the block for the requesting instance,
The "x" varies depending on the amount of messaging traffic among nodes in the cluster, but can be as large as 10 Global Cache Service Processes provided by RAC software.
The "n" can be as large as the number specified in parameter GC_LCK_PROCS.
Applications and utilities access the RDBMS through a user process which connects to a server process,
which can be dedicated to one user process or shared by many user processes.
The "Parse time CPU" V$SYSSTAT and LoadRunner Oracle monitor statistic reports the centiseconds (10s of milliseconds) spent in parsing.
The total "Parse time elapsed" is the total time spent waiting on parse resources.
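These parse statistics can be pulled together in one query:

```sql
select name, value
  from v$sysstat
 where name in ('parse time cpu', 'parse time elapsed',
                'parse count (total)', 'parse count (hard)');
```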
A "hard parse" is when a SQL statement is not found in the shared pool library cache, and therefore needs more work to optimize.
The "parse count" statistic in V$SYSSTAT and LoadRunner counts the number of times the user issues parse calls from either the OCI or the Oracle Precompilers, independent of any cache benefits in the library cache. This statistic is used as a denominator to determine the V$LIBRARYCACHE view's library cache hit ratio against the shared SQL area.
select round((prs.value/(prs.value+exe.value))*100,2) from v$sysstat prs, v$sysstat exe where prs.name like 'parse count (hard)' and exe.name= 'execute count'
Roger Schrag of dbspecialists uses this 10g SQL to identify recursive activity:
SELECT A.username, COUNT(*), SUM (B.sharable_mem) sharable_mem, SUM (B.persistent_mem) persistent_mem, SUM (B.runtime_mem) runtime_mem, SUM (B.sharable_mem + B.persistent_mem + B.runtime_mem) total_mem FROM dba_users A, v$sql B WHERE A.username IN ('DBSNMP', 'SYS', 'SYSTEM', 'SYSMAN') AND B.parsing_user_id = A.user_id GROUP BY A.username;
Oracle allocates resources (Call State Objects) to keep track of relevant user call data structures every time there is a login, parse, or execute.
The "user calls" and "recursive calls" statistics in V$SYSSTAT and LoadRunner Oracle monitors track the total number of calls since inception.
A high recursive call ratio of ("rcv") recursive calls vs. ("usr") "user calls" gives an indication of how much internal work activity is occurring:
select round((rcv.value/(rcv.value+usr.value))*100,2) from v$sysstat rcv, v$sysstat usr where rcv.name='recursive calls' and usr.name='user calls'
Because session time is the total of time spent on user and recursive operations,
Spool output to the output.txt file:
The default extension for spool files is .lst or .lis.
Turn it off!
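In SQL*Plus the pair of commands looks like this (the query between them is just an illustrative placeholder):

```sql
spool output.txt
select * from v$version;
spool off
```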
View DBA_OBJECTS contains information about every database object (including those owned by SYSTEM and SYS) of all types of objects (such as tables, indexes, sequences, and so on). The DBA_SOURCE view lists:
The "Session stored procedure space" among LoadRunner oracle metrics reports the amount of memory that each session is using for stored procedures.
Table user_col_comments stores comments on each column for each table in the schema.
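For example (the EMP table name is only an illustrative assumption):

```sql
select column_name, comments
  from user_col_comments
 where table_name = 'EMP';
```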
From Oracle Corporation:
Oracle 8 Tutorial by Michael Gertz at UC Davis offers a good introduction.
Portals to information and links:
Sources of PL/SQL scripts: