Database Performance Tuning

Here are my notes on tuning Oracle and MS-SQL for the Oracle and MSDBA exams.

Basically, a database is fastest when:

  • SQL queries are parsed using the Data Dictionary Cache rather than by reading the Data Dictionary tables;
  • data is fetched from the memory cache rather than with disk I/O;
  • for short tables, data is retrieved with a full table scan of every row;
  • for long tables (with 5 or more db blocks), data is retrieved through an index rather than with a full table scan of every row;
  • sorting is performed in PGA memory rather than using disk I/O to temporary datafiles.

To keep a database fast (proactively rather than reactively):

  • Pre-production, under a representative continuous load:
    • Ensure that SQL*Net SDU and TDU settings are optimal
    • obtain baseline metrics and settings for later comparison.
    • Make sure there are enough cursors and memory defined for the connections used.
    • Identify the sessions and SQL requests consuming the most resources.
    • Analyze the most significant SQL requests to detect causes of slowness.
    • Use Oracle advice views to specify the optimal settings.
    • Investigate the impact of different settings (block sizes, etc.)
    • Set alerts to highlight troubling conditions
    • Perform experiments using different configuration settings, platform environments, etc.
  • Monitor key activity levels and ratios in production, such as Recursive calls.
  • Compare metrics on new releases vs. baselines to gauge impact.



    Create Tablespace Extent Management Local, Not Dictionary

      When a tablespace is created, several settings impact performance.

      When a tablespace is created with EXTENT MANAGEMENT DICTIONARY, DML commands run slower because the locations of free and used blocks are stored in the data dictionary tables in the SYSTEM tablespace. Since the SYSTEM tablespace is shared, contention to access these tables may occur. Different extent sizes make fragmentation within the free list more pronounced. Extra work by SMON is needed to coalesce fragmented deallocated space (combining multiple adjacent free extents into a single contiguous free extent). Coalescing can also be triggered manually with an ALTER TABLESPACE request.
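      A sketch of that manual coalesce request, plus a query to gauge fragmentation (USERS is a placeholder tablespace name):

```sql
-- Manually coalesce adjacent free extents in a dictionary-managed tablespace:
ALTER TABLESPACE users COALESCE;

-- Gauge fragmentation by counting free extents per tablespace:
SELECT tablespace_name, COUNT(*) AS free_extents, MAX(bytes) AS largest_free
FROM   dba_free_space
GROUP  BY tablespace_name;
```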


      When a tablespace is created with EXTENT MANAGEMENT LOCAL, DML commands run faster because the locations of free and used blocks are tracked in a bitmap stored in the header of each datafile of that tablespace. This allows all extents to have the same size, so no fragmentation occurs. Thus, LOCAL is the default starting with 10g, and BIGFILE tablespaces must use LOCAL. In fact, according to Powell & McCullough-Dieter (2007), Oracle plans to eliminate DICTIONARY management in future releases. Coalescence occurs automatically for locally-managed tablespaces.

      One more thing: every change to a system table causes undo and redo records to be created, which adds more operations and processing time. Changes to locally-managed bitmaps do not generate rollback information.
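      A minimal sketch of creating a locally managed tablespace (the name, path, and sizes are illustrative):

```sql
-- Locally managed tablespace with uniform 1 MB extents:
-- every extent is the same size, so free space cannot fragment.
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/orcl/app_data01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- Alternatively, let Oracle choose extent sizes automatically:
-- EXTENT MANAGEMENT LOCAL AUTOALLOCATE
```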


      "Locally vs. Dictionary Managed Tablespaces" (2003). Underground Oracle FAQs. Retrieved November 10, 2007.

      Powell, G., & McCullough-Dieter, C. (2007). Oracle 10g Database Administrator: Implementation and Administration (1st ed.). Boston, MA: Course Technology/Thomson Learning.


    Activity Stats For Capacity Planning

      Here are measures of activity to use in a db capacity model:

      1. The "user commits" statistic in V$SYSSTAT is the closest thing to a user transaction rate because it reflects the number of changes made to database blocks written to disk.

      2. "cursor authentications" is the number of privilege checks conducted during execution of an operation.

      3. "execute count" reports the total number of calls (user and recursive) that execute SQL statements.

      4. "opened cursors cumulative" reports the total number of cursors opened since the instance started (in V$SYSSTAT). In V$SESSTAT, this statistic shows the total number of cursors opened since the start of the session.

      5. "parse count (total)" is the sum of parse calls (hard and soft). A soft parse is a check to make sure that the permissions on the underlying object have not changed.

      6. "recursive calls"

      7. "sorts (rows)" displays the total number of rows sorted by all sorts performed.

      8. "CPU used by this session"
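      The counters above can be pulled in one query from V$SYSSTAT; a sketch:

```sql
-- Instance-wide cumulative counters since startup:
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('user commits', 'cursor authentications', 'execute count',
                'opened cursors cumulative', 'parse count (total)',
                'recursive calls', 'sorts (rows)', 'CPU used by this session')
ORDER  BY name;
```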


    Instantaneous Stats

      These statistics reflect transitory conditions that change from moment to moment:

      "Opened cursors current" - The total number of current open cursors (user sessions).


    Constraints

      Integrity constraints are useful to enforce business rules within tables, such as ensuring that each row in a table can be uniquely identified.

      PRIMARY and UNIQUE constraints can be specified for Oracle to enforce when tables are defined. Both types of constraints can be referenced by a foreign key.

      However, the complexities of a business may require more sophisticated constraints than the database can provide.

      For example, an individual's social security number is supposed to be a unique key. However, in the real world, people "steal" others' social security numbers. And it's not inconceivable that someone could have different social security numbers during their life, due to mistakes or as a way to clean up a credit record.

      This is the reason why some states are requiring the use of employee numbers, which are usually stored as primary keys in a database. Oracle does not allow null values in primary keys.

      One organization I know defines databases with integrity checks, then removes those checks before production, after the application has proven that it does not generate database constraint errors.

      This way, constraints are not checked in both the app and the database. Removing constraint-checking workload from the database enables it to handle more load.
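      Disabling a constraint after the application has proven itself does not require redefining the table; a sketch (the table and constraint names are hypothetical):

```sql
-- Keep the constraint definition but stop enforcing it:
ALTER TABLE employees DISABLE CONSTRAINT emp_dept_fk;

-- Or enforce new rows only, without validating existing ones:
ALTER TABLE employees ENABLE NOVALIDATE CONSTRAINT emp_dept_fk;
```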


    Bottleneck Wait Stats


    CPU Hogs

      CPU time (introduced with 9i) is not a wait event, but the sum of the CPU used by a session (or the amount of CPU time between snapshots). In a heavily loaded system, if the CPU time event is the biggest event, that could point to some CPU-intensive processing (for example, forcing the use of an index when a full scan should have been used), which could be the cause of the bottleneck.

      • session connect time
      • concurrency wait time
      • user I/O wait time
      • application wait time
      • OS System call CPU time
      • OS User lock wait sleep time
      • OS Wait-cpu (latency) time
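      In 10g and later, comparable CPU and wait timings are exposed through the time model views; a sketch:

```sql
-- Instance-wide time model statistics (values in microseconds):
SELECT stat_name, value
FROM   v$sys_time_model
ORDER  BY value DESC;

-- The per-session equivalent is v$sess_time_model (keyed by SID).
```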


    Efficiency Stats


    OEM Advisor Central

      OEM Advisor Central provides several options:

      • ADDM
      • Segment Advisor
      • Undo Management and Undo Advisor
      • Memory Advisor
      • MTTR
      • SQL Tuning Advisor
      • SQL Access Advisor

      Response time is the total of service time and time spent waiting for finite resources.

      A database used for OLTP spends most of its time waiting on specific resources (enqueue, latch free, log file sync, db file sequential read, db file scattered read, etc.).

      A data warehouse (DWH) typically has mostly parallel query waits (PX Deq: Exec Reply, PX Deq: Execution Msg, etc.).
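      Which resources an instance waits on most can be listed from V$SYSTEM_EVENT; a sketch:

```sql
-- Top 10 wait events since instance startup:
SELECT * FROM (
  SELECT event, total_waits, time_waited
  FROM   v$system_event
  ORDER  BY time_waited DESC)
WHERE  ROWNUM <= 10;
```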


    Statspack Scripts

      From Burleson's StatsPack Analyzer:

      • spcreate.sql - Installs the STATSPACK user, tables, and package on a database (run as SYS)
      • spauto.sql - Schedules a dbms_job to automate the collection of STATSPACK statistics
      • sppurge.sql - Purges a range of Snapshot Ids between the specified begin and end Snap Ids
      • spreport.sql - Reports on differences between values recorded in two snapshots
      • sptrunc.sql - Truncates all data in Statspack tables
      • spdrop.sql - Deinstalls STATSPACK from a database (run as SYS)
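      Once spcreate.sql has installed the PERFSTAT schema, snapshots can be taken manually (spreport.sql then compares any two snapshot IDs); a sketch:

```sql
-- Take a Statspack snapshot (connect as PERFSTAT):
EXEC statspack.snap;

-- A higher snapshot level collects more SQL detail:
EXEC statspack.snap(i_snap_level => 6);
```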


    Cache I/O

      A "current" read retrieves a block in current mode, as it exists at this instant (counted as "db block gets"), typically for DML; a "consistent" read may instead reconstruct an older version of the block from undo.


    Statistics

      To identify the best possible execution path for SQL statements, the cost-based optimizer (CBO, introduced in Oracle 7) supports features such as bitmap indexes, function-based indexes, hash joins, and index-organized tables, and uses statistics to select the best access plan based on the cost of the alternatives:

      • size of tables and indexes (disk space consumed)
      • number of rows in the tables
      • number of distinct keys in an index
      • number of levels in a B* index
      • average number of blocks for a value
      • average number of leaf blocks in an index

      For each query, the CBO can use a SQL profile, which uses sampling to collect additional information; partial execution techniques to verify and adjust cost estimates; and execution history for the SQL statement to modify parameter settings.

      While the Oracle9i CBO used an I/O cost model primarily based on single block reads, Oracle Database 10g added system statistics that include system CPU for the Oracle workload (especially important for operations such as bitmap merge, sort, or hash joins that may not require any I/O).

      • Object statistics record metadata about the database objects, such as tables and indexes.
      • System statistics consist of wait events; these and other metadata about events in the database are summed cumulatively.

      Statistics can get old and dated, so schedule regular re-analysis of tables and indexes.

      Statistics can be gathered manually several ways:

      • For non-optimizer statistics against a table or index, use the simple ANALYZE command (or the OEM Console):


          ANALYZE { TABLE | INDEX } <object>
            COMPUTE [ SYSTEM ] STATISTICS [ FOR ... object specifics ... ]
            | ESTIMATE [ SYSTEM ] STATISTICS [ FOR ... object specifics ... ]
                [ SAMPLE n { ROWS | PERCENT } ]
            | DELETE [ SYSTEM ] STATISTICS;
      • To gather stats at all object layers of a DB (or to copy statistics between databases), use the 10g DBMS_STATS Oracle-provided PL/SQL package so such work can run in parallel mode on several CPUs. Examples:

          EXEC DBMS_STATS.GATHER_TABLE_STATS('<owner>', '<table>');
          EXEC DBMS_STATS.GATHER_INDEX_STATS('<owner>', '<index>');
      • DBMS_STATS is recommended over snapshot comparisons using STATSPACK or OEM Database Control.

        Statistics gathered by the DBMS_STATS utility improve the execution speed of SQL because they provide the up-to-date, high-quality statistics the cost-based optimizer (CBO) needs to decide on the best and fastest path for a SQL query.

      To see how fragmented an index is, validate its structure and then query the INDEX_STATS view.
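      A sketch (the index name is a placeholder); VALIDATE STRUCTURE populates the single-row INDEX_STATS view for the current session:

```sql
ANALYZE INDEX emp_name_ix VALIDATE STRUCTURE;

-- A high proportion of deleted leaf rows suggests fragmentation
-- and a candidate for ALTER INDEX ... REBUILD or COALESCE:
SELECT name, height, lf_rows, del_lf_rows,
       ROUND(del_lf_rows / lf_rows * 100, 1) AS pct_deleted
FROM   index_stats;
```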


      Watch how the CBO decides on an execution plan by setting diagnostic event 10053, which is triggered by a hard parse (not by a soft parse):

        alter session set events '10053 trace name context forever, level 1'

      "A Look Under the Hood of CBO: The 10053 Event" by Wolfgang Breitling. Dated April 27, 2004. Referenced November 11, 2007.


    Tools

      The Oracle database software comes standard with profiling tools: SQL_TRACE outputs the SQL that the application executed, and TKPROF reports how that SQL performed.

      Orca (from Blair Zajac of Marina del Rey, CA) creates graph images from data collected by Oracle's Statspack (since Oracle 8.1.6), using RRDtool for graphing and the open-source Perl DBI and DBD::Oracle modules.
      More Orca examples are available from Duncan Lawie in the UK, who also provides SQL to retrieve data from Statspack.

    Unix Kernel Installation Settings

      Boot Sun Solaris with these settings (in /etc/system) in effect for database creation to succeed:

      set shmsys:shminfo_shmmax=4294967295
      set shmsys:shminfo_shmmin=1
      set shmsys:shminfo_shmseg=10
      set shmsys:shminfo_shmmni=100
      set semsys:seminfo_semmns=1000
      set semsys:seminfo_semmni=100
      set semsys:seminfo_semmsl=250

    Enough File System Inodes

      Oracle 9i includes more than 100,000 files.

      Copy the contents of the CDROM to a hard drive using this example for Oracle 9i:

      $ cd /cdrom/orcl901_1
      $ mkdir -p /u01/app/oracle/admin/cdrom/9.0.1/Disk1
      $ cp -pr . /u01/app/oracle/admin/cdrom/9.0.1/Disk1

    Database Caching Effects

      There are several parts to the response time of a database query:

      • Time to calculate the retrieval plan
      • Time to perform disk I/O
      • Time to format the data for presentation.

      The response time of a SQL query that has been cached in a rather quiescent system is the "best case" performance metric, because the database server does not have to create the retrieval plan nor perform any disk I/O (the results are fetched from memory in the database server).

      To obtain these metrics, run the same request twice in rapid succession, immediately after flushing database buffers.
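      In 10g, the caches can be flushed explicitly before the first, cold run (FLUSH BUFFER_CACHE is intended for test systems, not production):

```sql
-- Clear cached data blocks (10g and later; test environments only):
ALTER SYSTEM FLUSH BUFFER_CACHE;

-- Clear parsed SQL and execution plans from the shared pool:
ALTER SYSTEM FLUSH SHARED_POOL;
```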

    Efficient SQL

      Analytic functions can run more efficiently within Oracle than pulling raw data and performing the work at a web server.
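      For example, a ranking computed inside the database in one pass (the emp table and its columns are illustrative):

```sql
-- Rank employees by salary within each department:
SELECT deptno, ename, sal,
       RANK() OVER (PARTITION BY deptno ORDER BY sal DESC) AS sal_rank
FROM   emp;
```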



    How to Analyze Problem SQL Requests That Take Too Long

      The v$sqlarea view displays the text of SQL statements in the shared SQL area, plus the number of users accessing the statements, disk blocks and memory blocks accessed while executing the statement, and other information.
      In 10g, V$SQLSTATS takes the place of V$SQL when fetching statistics for SQL cursors, since it's faster, more scalable, and has greater data retention.
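      A sketch of pulling the heaviest SQL from V$SQLSTATS (times are in microseconds):

```sql
-- Top 10 statements by total elapsed time:
SELECT * FROM (
  SELECT sql_id, executions, elapsed_time, cpu_time, buffer_gets, disk_reads
  FROM   v$sqlstats
  ORDER  BY elapsed_time DESC)
WHERE  ROWNUM <= 10;
```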

      For 9i and 10g, set the 10046 event:

    1. In init.ora, set initialization parameters for trace file management:

        STATISTICS_LEVEL = TYPICAL or ALL automatically collects dump files.
        STATISTICS_LEVEL = BASIC means that dumps are not being collected.

        TIMED_STATISTICS = TRUE enables the collection of CPU and elapsed times and other stats into dynamic session performance tables. Set it explicitly, since its default value is FALSE.

        USER_DUMP_DEST specifies the destination for the session trace file. The default value is the default destination for system dumps, which contains a lot of other generated files. This value can be modified with ALTER SYSTEM SET USER_DUMP_DEST= newdir.

        MAX_DUMP_FILE_SIZE = 500 (operating system blocks) is the default. "DUMP" refers to the SQL Trace facility text file; writing to it stops when the file reaches this size.

    2. On the DB server, identify the instances defined in /etc/oratab, then get into the SQL prompt:

      cat /etc/oratab
    3. Verify settings:

      show parameters statistics

    4. Enable tracing for the session, using either the DBMS_SESSION.SET_SQL_TRACE procedure or the command ALTER SESSION SET SQL_TRACE = TRUE.

      This creates a separate trace file for each process within the session. To tie a trace file to the process that created it, include in programs a statement like SELECT program_name FROM DUAL.

      The SQL Trace facility is automatically disabled for the session when the application disconnects from Oracle.

    5. Consolidate trace (.trc) files from several sessions using the trcsess utility:

        trcsess output=output_report_file.prf
        session=... clientid=.. service=... action=... module=... trace_file1.trc trace_file2.trc
    6. Format the trace file by running a command such as:

      	tkprof input_trace_file.trc output_report_file.prf

      call   count      cpu    elapsed     disk    query current    rows
      ---- -------  -------  --------- -------- -------- -------  ------
      Parse     11     0.08      0.18        0       0       0         0
      Execute   11     0.23      0.66        0       3       6         0
      Fetch     35     6.70      6.83      100   12326       2       824 
      total     57     7.01      7.67      100   12329       8       826
      Misses in library cache during parse: 0 
      Misses in library cache during execute: 1
      Optimizer goal: CHOOSE
      Parsing user id: 02   (USER02)

      Statistics for RECURSIVE and NON-RECURSIVE queries are presented in separate reports.

      Here are the field names:

      • call - the statement phase (Parse, Execute, Fetch)
      • count - number of times the phase occurred
      • cpu - CPU seconds consumed
      • elapsed - clock seconds elapsed
      • disk - physical disk reads
      • query - consistent-mode block reads
      • current - current-mode block reads
      • rows - rows processed
      • library cache misses

    7. Interpret the output files:

      "Parse" (parsing) is the verification of SQL code syntax and interpretive compilation into a form understandable by the Oracle Database engine. The Optimizer attempts to automatically pick the fastest method of execution for an SQL code statement.

    8. "Execute" is when SQL code statements are executed in various forms and by various routes, decided by the Optimizer.
      "Fetch" is the return of results passed back to the calling process.

      A large difference between "cpu" and "elapsed" time may indicate (a) a lock contending with another transaction or (b) interference from a trigger or constraint.

      The cache hit ratio is calculated from the number of physical reads vs. logical reads (query + current).

      • "Consistent" reads are logical reads accessed in buffer cache for normal queries (SELECTs without for update clause).
      • "Physical" reads may not really involve a hard disk drive access on disk arrays with their own cache.

      • Large "disk" values may indicate a full table scan from inefficient index schema.
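      The cache hit ratio mentioned above can also be computed instance-wide from V$SYSSTAT counters; a sketch:

```sql
-- Hit ratio = 1 - physical reads / logical reads,
-- where logical reads = consistent gets + db block gets:
SELECT ROUND(1 - phy.value / (cons.value + cur.value), 4) AS hit_ratio
FROM   v$sysstat phy, v$sysstat cons, v$sysstat cur
WHERE  phy.name  = 'physical reads'
AND    cons.name = 'consistent gets'
AND    cur.name  = 'db block gets';
```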

    9. Collect times after running a query several times. A time value of "0.01" is one hundredth of a second (10 milliseconds), the smallest increment reported.
    10. Store the statistics in your own TKPROF_TABLE, with a row of statistics for each traced SQL statement, by running tkprof with its insert option, which generates a script that creates and populates the table.
    11. Conduct iterative tuning experiments ( Oracle Performance Improvement Methods) by finding the "next biggest" bottleneck by comparing against baseline stats:

      Consider automatic segment space management for tables and indexes with high INSERT/UPDATE/DELETE rates.
      Consider automatic undo management to avoid contention of rollback segments.
      Applications should use cursors with bind variables, opening each cursor once and executing it many times. Dynamically generated SQL that does not use bind variables causes repeated parses of all SQL statements.
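      Bind-variable reuse can be sketched in PL/SQL (emp is a hypothetical table); the first form shares one cursor across all values, while the commented-out form hard-parses a new statement per literal:

```sql
DECLARE
  v_count NUMBER;
BEGIN
  -- Good: one shared cursor, parsed once, reused for every value
  EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM emp WHERE deptno = :d'
    INTO v_count USING 20;

  -- Bad: a distinct statement (and a hard parse) per literal value
  -- EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM emp WHERE deptno = 20' INTO v_count;
END;
/
```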

    12. More on this at: oracle-base


    Resolving Locks


    Doing Experiments

      Several features of Oracle are not active by default.

      Parallel Execution

      For SQL queries which consume a lot of CPU, try parallel execution on multiple CPUs. This feature can be specified two ways. One way is to specify a SQL hint clause in the statement.
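      A sketch of the hint form (the table, alias, and degree are illustrative):

```sql
-- Ask the optimizer to scan emp with a degree of parallelism of 4:
SELECT /*+ PARALLEL(e, 4) */ COUNT(*)
FROM   emp e;
```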


      A table can also be set up to always process queries in parallel mode:

        { CREATE | ALTER } TABLE ... [ NOPARALLEL | PARALLEL [n] ]; 

      Real Application Testing

      Oracle 11g's "Real Application Testing" option features two levels of detail:

      • Database Replay records all real database operations (beneath the level of SQL, in a binary format) into capture files that can be replayed in the sequence they occurred by the Workload Replay Client (wrc);
      • SQL Performance Analyzer assesses the impact of a change by re-executing a set of SQL statements and comparing before-and-after performance.

      These features, when combined with an 11g flashback of the entire database to a specific point in time, provide the "external stimuli" needed to conduct precise experiments on test machines.

      The optimal configuration setting becomes apparent from the variance observed in AWR reports before and after replay runs under various conditions (different platform environments, software versions, or configuration settings). Note, however, that the initial 11g release does not provide a way to capture workload within 10g.

      Experiments on different Oracle configuration settings include:

      • impact of SGA memory allocation settings (covered on another page on this site).
      • impact of different db_file_multiblock_read_count on the optimizer.
      • impact of converting an index from b-tree to bitmap on INSERT statements.
      Babcock, C. (2007). "Oracle Launches 11g Database With Nearly 500 New Features." InformationWeek. Retrieved October 27, 2007.


    Resources


    Portions ©Copyright 1996-2014 Wilson Mar. All rights reserved.
