
    Reading Statspack Report

PURPOSE

This article is a reference for understanding the output generated by the STATSPACK utility. Since performance tuning is a very broad area, this document provides tuning advice only in very specific areas.

    Introduction

Statspack was created in response to a need for more relevant and more extensive statistical reporting beyond what was available via UTLBSTAT/UTLESTAT reports. Further, this information can be stored permanently in the database instance itself so that historical data is always available for comparison and diagnosis.

Statspack has been available since version 8.1.6, but can be installed on 8.0.6 and above. Snapshots created using older versions of Statspack can usually be read using newer versions of Statspack, although the newer features will not be available.

Timed_statistics must be set to true prior to the creation of a snapshot. If it is not, the data within Statspack will not be relevant. You can tell whether timed_statistics was set by looking at the total times columns in the report: if these are zero, then timed_statistics was not set.

Snapshots during which the instance was recycled will not contain accurate information and should not be included in a Statspack report.

In general, we suggest that snapshot intervals be 15 minutes in length. This allows fine-grained reporting when hangs are suspected or detected. The snapshots can also be combined into hourly reports for general performance tuning.
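As a minimal sketch of this workflow (assuming Statspack has been installed under the usual PERFSTAT schema with spcreate.sql; names, passwords and intervals here are illustrative), snapshots can be taken and a report generated roughly as follows:

-- Enable timed statistics so wait and CPU times are populated
ALTER SYSTEM SET timed_statistics = TRUE;

-- Take a snapshot now (repeat at the chosen interval, e.g. every 15 minutes)
CONNECT perfstat/&perfstat_password
EXECUTE statspack.snap;

-- Later, generate a report between two snapshot IDs
@?/rdbms/admin/spreport.sql

The spauto.sql script shipped with Statspack can be used to schedule snapshots automatically via DBMS_JOB.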

When a value is too large for the Statspack field it will be represented by a series of pound signs, such as #######. Should this occur and you need to see the value in the field, you will need to decrease the number of snapshots in the report until the field can be read. Should there be only one snapshot in the report, then you will need to decrease the snapshot interval.

Profiles created using Statspack information are quite helpful in determining long-term trends such as load increases, usage trends, resource consumption, latch activity, etc. It is especially important that a DBA know these things and be able to demonstrate changes in them that necessitate hardware improvements and load-balancing policies. This document describes the main sections of a Statspack report, which will help you understand what information is available to diagnose and resolve performance tuning problems. Some sections of the Statspack report may contain different information based on the Statspack release that was used to generate the report. This document also indicates these changes for the different sections.

    Summary Information

The summary information begins with the identification of the database on which the Statspack report was run, along with the time interval of the report. Here is the 8i instance information:

    STATSPACK report for


- Instance cache information:

    In the 8i report the buffer cache size can be determined by multiplying the db_block_buffers by the

    db_block_size.

Cache Sizes
~~~~~~~~~~~
     db_block_buffers:       6400          log_buffer:  104857600
        db_block_size:      32768    shared_pool_size:  150000000

In 9i this has been done for you. Std Block Size indicates the primary block size of the instance.

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
         Buffer Cache:       704M      Std Block Size:         8K
     Shared Pool Size:       256M          Log Buffer:     1,024K

Note that the buffer cache size is that of the standard buffer cache. If you have multiple buffer caches, you will need to calculate the others separately.

    - Load profile Information:

The load profile information is next; the load profile is measured in bytes. It is identical in both 8i and 9i.

Load Profile
~~~~~~~~~~~~                            Per Second       Per Transaction
                                   ---------------       ---------------
                  Redo size:            351,530.67              7,007.37
              Logical reads:              5,449.81                108.64
              Block changes:              1,042.08                 20.77
             Physical reads:                 37.71                  0.75
            Physical writes:                134.68                  2.68
                 User calls:              1,254.72                 25.01
                     Parses:                  4.92                  0.10
                Hard parses:                  0.02                  0.00
                      Sorts:                 15.73                  0.31
                     Logons:                 -0.01                  0.00
                   Executes:                473.73                  9.44
               Transactions:                 50.17

  % Blocks changed per Read:   19.12    Recursive Call %:     4.71
 Rollback per transaction %:    2.24       Rows per Sort:    20.91

Where:

    . Redo size: This is the amount of redo generated during this report.

    . Logical Reads: This is calculated as Consistent Gets + DB Block Gets = Logical Reads (see the query sketch after this list).

    . Block changes: The number of blocks modified during the sample interval

    . Physical Reads: The number of requests for a block that caused a physical I/O.

  • 8/9/2019 Reading Statspack Report

    4/24

    . Physical Writes: The number of physical writes issued.

    . User Calls: The number of calls made by user processes, such as logins, parses, fetches and executes.

    . Parses: Total of all parses: both hard and soft

    . Hard Parses: Those parses requiring a completely new parse of the SQL statement. These consume both latches and shared pool area.

    . Soft Parses: Not listed but derived by subtracting the hard parses from parses. A soft parse

    reuses a previous hard parse and hence consumes far fewer resources.

    . Sorts, Logons, Executes and Transactions are all self-explanatory.
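The derived figures above come from V$SYSSTAT counters. As a rough illustration (not the exact Statspack code, and cumulative since instance startup rather than a snapshot delta), logical reads and soft parses can be computed like this:

-- Logical reads = consistent gets + db block gets
SELECT SUM(value) AS logical_reads
FROM   v$sysstat
WHERE  name IN ('consistent gets', 'db block gets');

-- Soft parses = total parses - hard parses
SELECT SUM(DECODE(name, 'parse count (total)', value, 0)) -
       SUM(DECODE(name, 'parse count (hard)',  value, 0)) AS soft_parses
FROM   v$sysstat
WHERE  name IN ('parse count (total)', 'parse count (hard)');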

    - Instance Efficiency Ratios:

Hit ratios are calculations that may provide information regarding different structures and operations in the Oracle instance. Database tuning must never be driven by hit ratios alone. They only provide additional information to help you understand how the instance is operating. For example, in a DSS system a low buffer cache hit ratio may be acceptable due to the amount of recycling needed because of the large volume of data accessed. So if you increase the size of the buffer cache based on this number alone, the corrective action may not take effect and you may be wasting memory resources.

    This section is identical in 8i and 9i.

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:   99.99       Redo NoWait %:  100.00
            Buffer  Hit   %:  -45.57    In-memory Sort %:   97.55
            Library Hit   %:   99.89        Soft Parse %:   99.72
         Execute to Parse %:   -1.75         Latch Hit %:   99.11
Parse CPU to Parse Elapsd %:   52.66     % Non-Parse CPU:   99.99

 Shared Pool Statistics        Begin   End
                               ------  ------
             Memory Usage %:   42.07   43.53
    % SQL with executions>1:   73.79   75.08
  % Memory for SQL w/exec>1:   76.93   77.64

It is possible for both the 'buffer hit ratio' and the 'execute to parse' ratio to be negative. In the case of the buffer hit ratio, the buffer cache is too small and the data is being aged out before it can be used, so it must be retrieved again. This is a form of thrashing which degrades performance immensely.

    The execute to parse ratio can be negative when the number of parses is larger than the number of

    executions. The Execute to Parse ratio is determined by the following formula:

    100 * (1 - Parses/Executions) = Execute to Parse

    Here this becomes:

100 * (1 - 42,757 / 42,023) = 100 * (1 - 1.0175) = 100 * -0.0175 = -1.75
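A hedged sketch of the same calculation taken directly from the cumulative V$SYSSTAT counters (Statspack itself uses the delta between two snapshots, so the figures will differ):

SELECT ROUND(100 * (1 - p.value / e.value), 2) AS execute_to_parse_pct
FROM   (SELECT value FROM v$sysstat WHERE name = 'parse count (total)') p,
       (SELECT value FROM v$sysstat WHERE name = 'execute count') e;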


This can be caused by the snapshot boundary occurring during a period of high parsing, so that the executions have not occurred before the end of the snapshot. Check the next snapshot to see if there are enough executes to account for the parses in this report.

    Another cause for a negative execute to parse ratio is if the shared pool is too small and queries

    are aging out of the shared pool and need to be reparsed. This is another form of thrashing which

    also degrades performance tremendously.

    - Top 5 Events section:

    This section shows the Top 5 timed events that must be considered to focus the tuning efforts.

Before Oracle 9.2 this section was called "Top 5 Wait Events". It was renamed in Oracle 9.2 to "Top 5 Timed Events" to include "CPU Time", based on the 'CPU used by this session' statistic. This information will allow you to identify SQL tuning problems.

For further details, see the Statspack readme file $ORACLE_HOME/rdbms/admin/spdoc.txt. These events are particularly useful in determining which sections to view next. For instance, if there are fairly high waits on latch free or one of the other latches, you might want to examine the latch sections first. On the other hand, if the db file read wait events seem abnormally high, you might want to look at the file IO section first.

Top 5 Wait Events
~~~~~~~~~~~~~~~~~                                            Wait     % Total
Event                                        Waits      Time (cs)     Wt Time
-------------------------------------------- ------------ ------------ -------
db file sequential read                        12,131,221      173,910   58.04
db file scattered read                             93,310       86,884   29.00
log file sync                                      18,629        9,033    3.01
log file parallel write                            18,559        8,449    2.82
buffer busy waits                                 304,461        7,958    2.66

Notice that in Oracle 9.2 references are made to "Elapsed Time" rather than to "Wait Time". Also, "CPU Time" is included as part of the Top 5 section.

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~                                                   % Total
Event                                        Waits      Time (s)    Ela Time
-------------------------------------------- ------------ ----------- --------
log file sync                                   3,223,927      32,481    64.05
CPU time                                                        7,121    14.04
global cache open x                               517,153       3,130     6.17
log file parallel write                           985,732       2,613     5.15
KJC: Wait for msg sends to complete               568,061       1,794     3.54

    -------------------------------------------------------------

Note that db file scattered read and db file sequential read are generally the top wait events when the instance is tuned well and is not OPS/RAC.
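To see similar information outside of a report, a sketch query against V$SYSTEM_EVENT can be used (cumulative since startup; idle events are crudely excluded here by name, which is an assumption you may need to extend):

SELECT *
FROM  (SELECT event, total_waits, time_waited   -- time_waited is in centiseconds
       FROM   v$system_event
       WHERE  event NOT IN ('SQL*Net message from client', 'rdbms ipc message',
                            'pmon timer', 'smon timer')
       ORDER  BY time_waited DESC)
WHERE  ROWNUM <= 5;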

    Cluster Statistics

In Oracle 9i, with the introduction of Real Application Clusters, several sections were added to the Statspack report to show information related to the cluster database environment. The following sections are now available in Statspack to monitor RAC environments and are only displayed when a cluster is detected.

    Oracle 9.0 and 9.1 Cluster Statistics :

    Global Lock Statistics

----------------------
Ave global lock get time (ms):                       0.3
Ave global lock convert time (ms):                   0.0
Ratio of global lock gets vs global lock releases:   1.0

Global cache statistics
-----------------------
Global cache hit %:                                  0.3
Ave global cache get time (ms):                      1.7
Ave global cache convert time (ms):                  3.1

Cache fusion statistics
-----------------------
Ave time to process CR block request (ms):           0.2
Ave receive time for CR block (ms):                  1.6
Ave build time for CR block (ms):                    0.1
Ave flush time for CR block (ms):                    0.0
Ave send time for CR block (ms):                     0.1
Ave time to process current block request (ms):      0.2
Ave receive time for current block (ms):             2.5
Ave pin time for current block (ms):                 0.0
Ave flush time for current block (ms):               0.0
Ave send time for current block (ms):                0.1

GCS and GES statistics
----------------------
Ave GCS message process time (ms):                   0.1
Ave GES message process time (ms):                   0.1
% of direct sent messages:                          59.5
% of indirect sent messages:                        40.3
% of flow controlled messages:                       0.1
% of GCS messages received by LMD:                  96.4
% of GES messages received by LMD:                   3.6
% of blocked converts:                              10.3
Ave number of logical side channel messages:        33.8
Ave number of logical recovery claim messages:

    Oracle 9.2 Cluster Statistics :

Global Cache Service - Workload Characteristics
-----------------------------------------------
Ave global cache get time (ms):                      4.6
Ave global cache convert time (ms):                 20.2

Ave build time for CR block (ms):                    0.0
Ave flush time for CR block (ms):                    0.6
Ave send time for CR block (ms):                     0.1
Ave time to process CR block request (ms):           0.7
Ave receive time for CR block (ms):                  0.9


Ave pin time for current block (ms):                 2.9
Ave flush time for current block (ms):               0.1
Ave send time for current block (ms):                0.1
Ave time to process current block request (ms):      3.1
Ave receive time for current block (ms):             7.2

Global cache hit ratio:                              0.8
Ratio of current block defers:                       0.0
% of messages sent for buffer gets:                  0.5
% of remote buffer gets:                             0.4
Ratio of I/O for coherence:                         12.3
Ratio of local vs remote work:                       1.2
Ratio of fusion vs physical writes:                  0.0

Global Enqueue Service Statistics
---------------------------------
Ave global lock get time (ms):                       0.2
Ave global lock convert time (ms):                   2.3
Ratio of global lock gets vs global lock releases:   1.0

GCS and GES Messaging statistics
--------------------------------
Ave message sent queue time (ms):                    0.1
Ave message sent queue time on ksxp (ms):           12.3
Ave message received queue time (ms):                0.0
Ave GCS message process time (ms):                   0.1
Ave GES message process time (ms):                   0.0
% of direct sent messages:                          81.2
% of indirect sent messages:                        13.1
% of flow controlled messages:                       5.7

In all the Oracle9i releases, a separate section shows the actual values of all the cluster statistics:

GES Statistics for DB: FUSION  Instance: ecfsc2  Snaps: 161 -162

Statistic                                      Total   per Second    per Trans
--------------------------------- ---------------- ------------ ------------
dynamically allocated gcs resourc                0          0.0          0.0
dynamically allocated gcs shadows                0          0.0          0.0
flow control messages received                   0          0.0          0.0
flow control messages sent                      10          0.0          0.0
gcs ast xid                                     30          0.0          0.0
gcs blocked converts                       531,572        147.7          0.2
gcs blocked cr converts                     55,739         15.5          0.0
gcs compatible basts                            45          0.0          0.0
gcs compatible cr basts (global)             6,183          1.7          0.0
....

For further reference on tuning RAC clustered instances, please refer to the documentation manual called Oracle9i Real Application Clusters Deployment and Performance.

    Wait Events Information

The following sections describe in detail most of the information provided in a Statspack report.


    - Foreground Wait Events:

    Foreground wait events are those associated with a session or client process waiting for a resource.

    The 8i version looks like this:

    Wait Events for DB: PHS2 Instance: phs2 Snaps: 100 -104

-> cs - centisecond -  100th of a second
-> ms - millisecond - 1000th of a second
-> ordered by wait time desc, waits desc (idle events last)

                                                                   Avg
                                                     Total Wait   wait    Waits
Event                               Waits   Timeouts  Time (cs)   (ms)     /txn
---------------------------- ------------ ---------- ----------- ------ ------
PX Deq: Execution Msg              15,287      6,927   1,457,570    953   694.9
enqueue                            30,367     28,591     737,906    243  ######
direct path read                   45,484          0     352,127     77  ######
PX Deq: Table Q Normal              7,185        811     241,532    336   326.6
PX Deq: Execute Reply              13,925        712     194,202    139   633.0
....

The 9.2 version is much the same but has different time intervals in the header.

Wait Events for DB: FUSION  Instance: ecfsc2  Snaps: 161 -162
-> s  - second
-> cs - centisecond -     100th of a second
-> ms - millisecond -    1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)

                                                                   Avg
                                                     Total Wait   wait    Waits
Event                               Waits   Timeouts   Time (s)   (ms)     /txn
---------------------------- ------------ ---------- ---------- ------ --------
log file sync                   3,223,927          1     32,481     10      1.0
global cache open x               517,153        777      3,130      6      0.2
log file parallel write           985,732          0      2,613      3      0.3
KJC: Wait for msg sends to c      568,061     34,529      1,794      3      0.2

    - Background Wait Events:

    Background wait events are those not associated with a client process. They indicate waits

encountered by system and non-system processes. The output is the same for all the Oracle releases.

Background Wait Events for DB: PHS2  Instance: phs2  Snaps: 100 -104
-> ordered by wait time desc, waits desc (idle events last)

                                                                   Avg
                                                     Total Wait   wait    Waits
Event                               Waits   Timeouts  Time (cs)   (ms)     /txn
---------------------------- ------------ ---------- ----------- ------ ------
latch free                         88,578     32,522      18,341      2  ######
enqueue                               319        230       5,932    186    14.5
row cache lock                      4,941          0       2,307      5   224.6
control file parallel write         1,172          0         332      3    53.3
db file parallel write                176          0          67      4     8.0
log file parallel write               315          0          65      2    14.3


db file scattered read                137          0          62      5     6.2
LGWR wait for redo copy                66         10          47      7     3.0

    Examples of background system processes are LGWR and DBWR. An example of a non-system

    background process would be a parallel query slave.

Note that it is possible for a wait event to appear in both the foreground and background wait events statistics. Examples of this are the enqueue and latch free events.

The idle wait events appear at the bottom of both sections and can generally safely be ignored. Typically these types of events keep a record of the time while the client is connected to the database but no requests are being made to the server.

- Notes Regarding Wait Events:

    - The idle wait events associated with pipes are often a major source of concern for some DBAs.

Pipe gets and waits are entirely application dependent. To tune these events you must tune the application generating them. High pipe gets and waits can affect library cache latch performance. Rule out all other possible causes of library cache contention prior to focusing on pipe waits, as it is very expensive for the client to tune their application.

A list of most wait events used by the RDBMS kernel can be found in Appendix A of the Oracle Reference manual for the version being used.

Some wait events to watch:

- global cache cr request: (OPS) This wait event shows the amount of time that an instance has waited for a requested data block for a consistent read when the transferred block has not yet arrived at the requesting instance. See Note 157766.1 'Sessions Wait Forever for 'global cache cr request' Wait Event in OPS or RAC'. In some cases the 'global cache cr request' wait event may be perfectly normal if large buffer caches are used and the same data is being accessed concurrently on multiple instances. In a perfectly tuned, non-OPS/RAC database, I/O wait events would be the top wait events, but since we are avoiding I/Os with RAC and OPS, the 'global cache cr request' wait event often takes the place of I/O wait events.

- Buffer busy waits, write complete waits, db file parallel writes and enqueue waits: If all of these are in the top wait events the client may be experiencing disk saturation. See Note 155971.1 'Resolving Intense and "Random" Buffer Busy Wait Performance Problems' for troubleshooting tips.

- log file switch, log file sync or log switch/archive: If the waits on these events appear excessive, check for checkpoint tuning issues. See Note 147468.1 'Checkpoint Tuning and Troubleshooting Guide'.

- write complete waits, free buffer waits or buffer busy waits: If any of these wait events is high, the buffer cache may need tuning. See Note 62172.1 'Understanding and Tuning Buffer Cache and DBWR in Oracle7, Oracle8, and Oracle8i'.

- latch free: If high, the latch free wait event indicates that there was contention on one or more of the primary latches used by the instance. Look at the latch sections to diagnose and resolve this problem.

    SQL Information

The SQL that is stored in the shared pool SQL area (library cache) is reported to the user via three different formats in 8i. Each has its own usefulness.


    . SQL ordered by Buffer Gets

    . SQL ordered by Physical Reads

    . SQL ordered by Executions

    9i has an additional section:

    . SQL ordered by Parse Calls

    - SQL ordered by Gets:

SQL ordered by Gets for DB: PHS2  Instance: phs2  Snaps: 100 -104
-> End Buffer Gets Threshold: 10000
-> Note that resources reported for PL/SQL includes the resources used by
   all SQL statements called within the PL/SQL code. As individual SQL
   statements are also reported, it is possible and valid for the summed
   total % to exceed 100

  Buffer Gets   Executions  Gets per Exec  % Total  Hash Value
--------------- ------------ -------------- ------- ------------
        198,924       37,944            5.2    41.7   2913840444
select length from fet$ where file#=:1 and block#=:2 and ts#=:3

        111,384            7       15,912.0    23.4   1714733582
select f.file#, f.block#, f.ts#, f.length from fet$ f, ts$ t where t.ts#=f.ts# and t.dflextpct!=0 and t.bitmapped=0

        105,365           16        6,585.3    22.1   4111567099
CREATE TABLE "PHASE".:Q3236003("PID","CAMPAIGN","SCPOS1","SCPOS2","SCPOS3","SCPOS4","SCPOS5","SCPOS6","SCPOS7","SCPOS8","SCPOS9","SCPOS10","SCPOS11","SCPOS12","SCPOS13","SCPOS14","SCPOS15","SCPOS16","SCPOS17","MCELL","MAILID","RSPPROD","STATTAG","RSPREF","RSPCRED","MAILDATE","RSPTDATE","BDATE","STATE","ZIP","INCOME","R

    ....

    This section reports the contents of the SQL area ordered by the number of buffer gets and can be

    used to identify CPU Heavy SQL.

- Many DBAs feel that if the data is already contained within the buffer cache the query should be efficient. This could not be further from the truth. Retrieving more data than needed, even from the buffer cache, requires CPU cycles and interprocess IO. Generally speaking, physical IO is not 10,000 times more expensive than a buffer get; it is actually in the neighborhood of 67 times, and almost zero if the data is stored in the UNIX buffer cache.

- The statements of interest are those with a large number of gets per execution, especially if the number of executions is high (a query sketch for finding them follows this list).

    - High buffer gets generally correlates with heavy CPU usage.
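A sketch of how to find such statements directly, using V$SQLAREA (column names as in 8i/9i; the threshold is arbitrary):

SELECT buffer_gets, executions,
       ROUND(buffer_gets / GREATEST(executions, 1), 1) AS gets_per_exec,
       hash_value, sql_text
FROM   v$sqlarea
WHERE  buffer_gets > 10000
ORDER  BY buffer_gets DESC;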

    - SQL ordered by Physical Reads:

SQL ordered by Reads for DB: PHS2  Instance: phs2  Snaps: 100 -104
-> End Disk Reads Threshold: 1000


 Physical Reads  Executions  Reads per Exec % Total  Hash Value
--------------- ------------ -------------- ------- ------------
         98,401           16        6,150.1    14.2   3004232054
SELECT C0 C0 FROM (SELECT C0 C0 FROM (SELECT /*+ NO_EXPAND ROWID(A1) */ A1."PID" C0 FROM "PHASE"."P0201F00_PLAT_MCOP_TB" PX_GRANULE(0, BLOCK_RANGE, DYNAMIC) A1) UNION ALL SELECT C0 C0 FROM (SELECT /*+ NO_EXPAND ROWID(A2) */ A2."PID" C0 FROM "PHASE"."P0201F00_UCS_MCOP_TB" PX_GRANULE(1, BLOCK_RANGE, DYNAMIC) A2) UNION

         50,836           32        1,588.6     7.3    943504307
SELECT /*+ Q3263000 NO_EXPAND ROWID(A1) */ A1."PID" C0 FROM "PHASE"."P9999F00_NEW_RESP_HIST_TB" PX_GRANULE(0, BLOCK_RANGE, DYNAMIC) A1 WHERE A1."CAMPAIGN"='200109M' AND A1."RSPPROD"='B'

         50,836           32        1,588.6     7.3   3571039650
SELECT /*+ Q3261000 NO_EXPAND ROWID(A1) */ A1."PID" C0 FROM "PHASE"."P9999F00_NEW_RESP_HIST_TB" PX_GRANULE(0, BLOCK_RANGE, DYNAMIC) A1 WHERE A1."CAMPAIGN"='200109M' AND A1."RSPPROD"='P'
....

    This section reports the contents of the SQL area ordered by the number of reads from the data

files and can be used to identify SQL causing IO bottlenecks which consume the following resources:

    - CPU time needed to fetch unnecessary data.

    - File IO resources to fetch unnecessary data.

    - Buffer resources to hold unnecessary data.

    - Additional CPU time to process the query once the data is retrieved into the buffer.

    - SQL ordered by Executions:

    SQL ordered by Executions for DB: PHS2 Instance: phs2 Snaps: 100 -104

    -> End Executions Threshold: 100

 Executions   Rows Processed   Rows per Exec   Hash Value
------------ ---------------- ---------------- ------------
      37,944           16,700              0.4   2913840444
select length from fet$ where file#=:1 and block#=:2 and ts#=:3

         304            1,219              4.0    904892542
select file#,block#,length from fet$ where length>=:1 and ts#=:2 and file#=:3

         295                0              0.0    313510536
select job from sys.job$ where next_date < sysdate and (field1 = :1 or (field1 = 0 and 'Y' = :2)) order by next_date, job

         273              273              1.0   3313905788
insert into col$(obj#,name,intcol#,segcol#,type#,length,precision#,scale,null$,offset,fixedstorage,segcollength,deflength,default$,col#,property,charsetid,charsetform,spare1,spare2)values(:1,:2,:3,:4,:5,:6,decode(:7,0,null,:7),decode(:5,2,decode(:8,-127/*M


    AXSB1MINAL*/,null,:8),178,:8,179,:8,180,:8,181,:8,182,:8,183,:8,....

This section reports the contents of the SQL area ordered by the number of query executions. It is primarily useful in identifying the most frequently used SQL within the database so that it can be monitored for efficiency. Generally speaking, a small performance increase on a frequently used query provides greater gains than a moderate performance increase on an infrequently used query.

    - SQL ordered by Parse Calls (9i Only):

    SQL ordered by Parse Calls for DB: S901 Instance: S901 Snaps: 2 -3

-> End Parse Calls Threshold: 1000

                           % Total
 Parse Calls  Executions   Parses  Hash Value
------------ ------------ -------- ----------
         295          295     0.48 1705880752
select file# from file$ where ts#=:1

          60           60     0.10 3759542639
BEGIN DBMS_APPLICATION_INFO.SET_MODULE(:1,NULL); END;

          33        2,222     0.05 3615375148
COMMIT

           1      200,000     0.00  119792462
INSERT into free.freelist_test values (:b2||'J'||:b1,'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA')

    ....

This section shows the number of times a statement was parsed as compared to the number of times it was executed. A one-to-one parse/execution ratio may indicate that (see the query sketch after this list):

    - Bind variables are not being used.

- On RDBMS version 8.1.7.2 and higher, the init.ora parameter session_cached_cursors was not set in the init.ora (100 is usually the suggested starting value). See enhancement bug 1589185 for an explanation of the change that shifts some of the load from the library cache to the user session cache.

- The shared pool may be too small and the parse is not being retained long enough for multiple executions.

- cursor_sharing is set to exact (this should NOT be changed without considerable testing on the part of the client).
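One way to check whether the session cursor cache is helping is a sketch query like this (statistic names as found in V$SYSSTAT; cumulative since startup):

SELECT name, value
FROM   v$sysstat
WHERE  name IN ('session cursor cache hits',
                'session cursor cache count',
                'parse count (total)');

If 'session cursor cache hits' is low relative to 'parse count (total)', raising session_cached_cursors (per the note above) may reduce soft parse overhead.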

    Statistics

    The statistics section shows the overall database statistics. These are the statistics that the

summary information is derived from. A list of the statistics maintained by the RDBMS kernel can be found in Appendix C of the Oracle Reference manual for the version being utilized. The format

    is identical from 8i to 9i.


    Instance Activity Stats for DB: PHS2 Instance: phs2 Snaps: 100 -104

Statistic                                      Total   per Second    per Trans
--------------------------------- ---------------- ------------ ------------
CPU used by this session                      84,161         23.4      3,825.5
CPU used when call started                   196,346         54.5      8,924.8
CR blocks created                                709          0.2         32.2
DBWR buffers scanned                               0          0.0          0.0
DBWR checkpoint buffers written                  245          0.1         11.1
DBWR checkpoints                                  33          0.0          1.5
DBWR cross instance writes                        93          0.0          4.2
DBWR free buffers found                            0          0.0          0.0

    ....

    Of particular interest are the following statistics.

- CPU USED BY THIS SESSION, PARSE TIME CPU or RECURSIVE CPU USAGE: These numbers are useful to diagnose CPU saturation on the system (usually a query tuning issue). The formula to calculate the CPU usage breakdown is:

Service (CPU) Time = other CPU + parse time CPU
Other CPU = "CPU used by this session" - parse time CPU

Some releases do not correctly store this data and can show huge numbers. The rule to decide if you can use these metrics is:

Trustworthy if:
(db version >= 8.1.7.2 and < 9.0.1)
OR (db version >= 9.0.1.1)
OR (db version >= 8.0.6.0 AND not using job_queue_processes AND CPU_PER_CALL = default)
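A sketch of this CPU breakdown taken straight from the cumulative V$SYSSTAT counters (values are in centiseconds; Statspack works with the delta between two snapshots):

SELECT SUM(DECODE(name, 'CPU used by this session', value, 0)) -
       SUM(DECODE(name, 'parse time cpu',           value, 0)) AS other_cpu_cs,
       SUM(DECODE(name, 'parse time cpu',           value, 0)) AS parse_cpu_cs,
       SUM(DECODE(name, 'recursive cpu usage',      value, 0)) AS recursive_cpu_cs
FROM   v$sysstat
WHERE  name IN ('CPU used by this session', 'parse time cpu', 'recursive cpu usage');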

- DBWR BUFFERS SCANNED: the number of buffers looked at when scanning the LRU portion of the buffer cache for dirty buffers to make clean. Divide by "dbwr lru scans" to find the average number of buffers scanned. This count includes both dirty and clean buffers. The average buffers scanned may be different from the average scan depth due to write batches filling up before a scan is complete. Note that this includes scans for reasons other than make free buffer requests.

- DBWR CHECKPOINTS: the number of checkpoint messages that were sent to DBWR, not necessarily the total number of actual checkpoints that took place. During a checkpoint there is a slight decrease in performance since data blocks are being written to disk and that causes I/O. If the number of checkpoints is reduced, the performance of normal database operations improves but recovery after instance failure is slower.

- DBWR TIMEOUTS: the number of timeouts when DBWR had been idle since the last timeout. These are the times that DBWR looked for buffers to idle write.

- DIRTY BUFFERS INSPECTED: the number of times a foreground encountered a dirty buffer which had aged out through the LRU queue, when the foreground was looking for a buffer to reuse. This should be zero if DBWR is keeping up with foregrounds.

- FREE BUFFER INSPECTED: the number of buffers skipped over from the end of the LRU queue in order to find a free buffer. The difference between this and "dirty buffers inspected" is the number of buffers that could not be used because they were busy or needed to be written after rapid aging out. They may have a user, a waiter, or be in the middle of being read/written.

- RECURSIVE CALLS: Recursive calls occur because of cache misses and segment extension. In general, if recursive calls are greater than 30 per process, the data dictionary cache should be optimized and segments should be rebuilt with storage clauses that have a few large extents. Segments include tables, indexes, rollback segments, and temporary segments.


NOTE: PL/SQL can generate extra recursive calls which may be unavoidable.

- REDO BUFFER ALLOCATION RETRIES: the total number of retries necessary to allocate space in the redo buffer. Retries are needed because either the redo writer has gotten behind, or because an event (such as a log switch) is occurring.

- REDO LOG SPACE REQUESTS: indicates how many times a user process waited for space in the redo log buffer. Try increasing the init.ora parameter LOG_BUFFER so that zero redo log space requests are made.

- REDO WASTAGE: the number of bytes "wasted" because redo blocks needed to be written before they are completely full. Early writing may be needed to commit transactions, to be able to write a database buffer, or to switch logs.

- SUMMED DIRTY QUEUE LENGTH: the sum of the lruw queue length after every write request completes. (Divide by write requests to get the average queue length after write completion.)

- TABLE FETCH BY ROWID: the number of rows that were accessed by a rowid. This includes rows that were accessed using an index and rows that were accessed using the statement where rowid = 'xxxxxxxx.xxxx.xxxx'.

- TABLE FETCH BY CONTINUED ROW: indicates the number of rows that are chained to another block. In some cases (i.e. tables with long columns) this is unavoidable, but the ANALYZE TABLE command should be used to further investigate the chaining and, where possible, it should be eliminated by rebuilding the table. A sketch of such an analysis follows this list.

- Table Scans (long tables): the total number of full table scans performed on tables with more than 5 database blocks. If the number of full table scans is high, the application should be tuned to effectively use Oracle indexes. Indexes, if they exist, should be used on long tables if less than 10-20% (depending on parameter settings and CPU count) of the rows from the table are returned. If this is not the case, check the db_file_multiblock_read_count parameter setting; it may be too high. You may also need to tweak optimizer_index_caching and optimizer_index_cost_adj.

- Table Scans (short tables): the number of full table scans performed on tables with less than 5 database blocks. It is optimal to perform full table scans on short tables rather than using indexes.
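A sketch of the chained-row investigation mentioned for TABLE FETCH BY CONTINUED ROW (the CHAINED_ROWS table is created by the standard utlchain.sql script; the table analyzed here is only an illustrative name taken from the samples above):

-- Create the destination table once
@?/rdbms/admin/utlchain.sql

-- List the chained/migrated rows of a suspect table
ANALYZE TABLE phase.p9999f00_new_resp_hist_tb LIST CHAINED ROWS INTO chained_rows;

SELECT COUNT(*)
FROM   chained_rows
WHERE  table_name = 'P9999F00_NEW_RESP_HIST_TB';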

    IO Activity

Input/Output (IO) statistics for the instance are listed in the following sections/formats:

- Tablespace IO Stats for DB: Ordered by total IO per tablespace.
- File IO Stats for DB: Ordered alphabetically by tablespace, filename.

In Oracle 8.1.7 many other columns were included, as follows:

- Avg. Read / Second
- Avg. Blocks / Read
- Avg. Writes / Second
- Buffer Waits
- Avg. Buffer Waits / Millisecond

    - Tablespace IO Stats

Tablespace IO Stats for DB: PHS2  Instance: phs2  Snaps: 100 -104
->ordered by IOs (Reads + Writes) desc

Tablespace
------------------------------
                 Av      Av     Av                       Av     Buffer Av Buf
         Reads Reads/s Rd(ms) Blks/Rd       Writes Writes/s      Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
PHASE_WORK_TS
       138,361      38    0.0     3.9        6,859        2          0    0.0


OFFER_HISTORY_TS
        24,714       7    0.0     4.0            0        0          0    0.0
ATTR1_TS
         7,823       2    0.0     4.0            0        0          0    0.0
TEMP
           886       0    0.0    20.1        1,147        0          0    0.0
SYSTEM
           184       0    3.9     2.8           56        0         18    3.3

- File IO Stats

File IO Stats for DB: PHS2  Instance: phs2  Snaps: 100 -104
->ordered by Tablespace, File

Tablespace               Filename
------------------------ ----------------------------------------------------
                 Av      Av     Av                       Av     Buffer Av Buf
         Reads Reads/s Rd(ms) Blks/Rd       Writes Writes/s      Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
ATTR1_TS                 /oradata/phs2/hsz16/attr1_01.dbf
           398       0    0.0     3.9            0        0          0
                         /oradata/phs2/hsz17/attr1_02.dbf
           400       0    0.0     4.0            0        0          0
                         /oradata/phs2/hsz18/attr1_03.dbf
           398       0    0.0     4.0            0        0          0
                         /oradata/phs2/hsz19/attr1_04.dbf
           480       0    0.0     4.0            0        0          0

    ....

Note that Oracle considers average read times of greater than 20 ms unacceptable (a query sketch for checking this follows the list below). If a datafile consistently has average read times of 20 ms or greater, then:

- The queries against the contents of the owning tablespace should be examined and tuned so that less data is retrieved.

- If the tablespace contains indexes, another option is to compress the indexes so that they require less space and hence, less IO.

- The contents of that datafile should be redistributed across several disks/logical volumes to more easily accommodate the load.

- If the disk layout seems optimal, check the disk controller layout. It may be that the datafiles need to be distributed across more disk sets.
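A sketch of checking current average read times per datafile directly from the dictionary (the times in V$FILESTAT are in centiseconds when timed_statistics is set, so the result is multiplied by 10 to get milliseconds):

SELECT df.name,
       fs.phyrds,
       ROUND(fs.readtim * 10 / GREATEST(fs.phyrds, 1), 1) AS avg_read_ms
FROM   v$filestat fs, v$datafile df
WHERE  fs.file# = df.file#
ORDER  BY avg_read_ms DESC;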

    Buffer cache Activity Information

    The buffer statistics are comprised of two sections:

    - Buffer Pool Statistics:

This section can have multiple entries if multiple buffer pools are allocated. This section is in both 8i and 9i and is identical in both.

Buffer Pool Statistics for DB: PHS2  Instance: phs2  Snaps: 100 -104
-> Pools   D: default pool,  K: keep pool,  R: recycle pool

                                              Free    Write     Buffer
       Buffer   Consistent    Physical   Physical   Buffer   Complete     Busy
P        Gets         Gets       Reads     Writes    Waits      Waits    Waits


- ----------- ------------- ----------- ---------- ------- -------- ----------
D       4,167       362,492       3,091        413       0        0         60

    A baseline of the database's buffer pool statistics should be available to compare with the current

statspack buffer pool statistics. A change in that pattern unaccounted for by a change in workload should be a cause for concern.

    - Buffer Wait Statistics:

    This section shows a breakdown of each type of object waited for. This section follows the Instance

    Recovery Stats for DB in 9i and is identical to that in 8i.

Buffer wait Statistics for DB: PHS2  Instance: phs2  Snaps: 100 -104
-> ordered by wait time desc, waits desc

                                 Tot Wait    Avg
Class                    Waits   Time (cs)  Time (cs)
------------------ ----------- ---------- ---------
undo header                 42         21          1
data block                  18          6          0

The above shows no real contention. Typically, when there is buffer contention, it is due to data block contention with large average wait times, like the example below:

Buffer wait Statistics for DB: GLOVP  Instance: glovp  Snaps: 454 - 455

                                 Tot Wait    Avg
Class                    Waits   Time (cs)  Time (cs)
------------------ ----------- ---------- ---------
data block               9,698     17,097          2
undo block                 210      1,225          6
segment header             259        367          1
undo header                259        366          1
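The same class-level breakdown is available at any time from V$WAITSTAT; a sketch (counts and times are cumulative since startup, times in centiseconds):

SELECT class, count, time
FROM   v$waitstat
WHERE  count > 0
ORDER  BY time DESC;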

    Instance Recovery Statistics

    This section was added in 9i and is useful for monitoring the recovery and redo information.

Instance Recovery Stats for DB: S901  Instance: S901  Snaps: 2 -3
-> B: Begin snapshot, E: End snapshot

  Targt Estd                                    Log File   Log Ckpt   Log Ckpt
  MTTR  MTTR   Recovery    Actual     Target      Size     Timeout    Interval
   (s)   (s)   Estd IOs  Redo Blks  Redo Blks  Redo Blks  Redo Blks  Redo Blks
- ----- ----- ---------- ---------- ---------- ---------- ---------- ----------
B    15     8       8024      21033      20691      92160      20691 ##########
E    15    11       8024      77248      92160      92160     285818 ##########

    PGA Memory Statistics

This section was added in 9i and helps when using the new model to allocate PGA memory in Oracle9i using PGA_AGGREGATE_TARGET.


PGA Memory Stats for DB: S901  Instance: S901  Snaps: 2 -3
-> WorkArea (W/A) memory is used for: sort, bitmap merge, and hash join ops

Statistic                                 Begin (M)          End (M)    % Diff
----------------------------------- ---------------- ---------------- ----------
maximum PGA allocated                         10.405           10.405        .00
total PGA allocated                            7.201            7.285       1.17
total PGA inuse                                6.681            6.684        .04

    This section is particularly useful when monitoring session memory usage on Windows servers.
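When PGA_AGGREGATE_TARGET is in use, similar figures (and more) can be watched live in the 9i view V$PGASTAT; a sketch:

SELECT name, value, unit
FROM   v$pgastat
ORDER  BY name;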

    Enqueue Activity

    An enqueue is simply a locking mechanism. This section is very useful and must be used when the

    wait event "enqueue" is listed in the "Top 5 timed events".

    In 8i the section looks like this.

Enqueue activity for DB: PHS2  Instance: phs2  Snaps: 100 -104
-> ordered by waits desc, gets desc

Enqueue            Gets      Waits
---------- ------------ ----------
PS                2,632        716
ST                  192        185
TM                  973        184
TC                   66         57
US                   80         53
TS                   68         46
TT                  349         36
PI                   56         32
HW                   10          5
CF                  275          3
DV                    4          3
TX                  499          1

    In 9i the section looks like this.

Enqueue activity for DB: S901  Instance: S901  Snaps: 2 -3
-> Enqueue stats gathered prior to 9i should not be compared with 9i data
-> ordered by waits desc, requests desc

                                                        Avg Wt         Wait
Eq     Requests    Succ Gets Failed Gets       Waits   Time (ms)     Time (s)
-- ------------ ------------ ----------- ----------- ----------- ------------
HW          656          656           0         139        2.04            0
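In 9i the underlying data can also be queried directly from V$ENQUEUE_STAT; a sketch ordered by cumulative wait time:

SELECT eq_type, total_req#, succ_req#, failed_req#, total_wait#, cum_wait_time
FROM   v$enqueue_stat
WHERE  total_wait# > 0
ORDER  BY cum_wait_time DESC;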

    The action to take depends on the lock type that is causing the most problems. The most common

    lock waits are generally for:

- TX - Transaction Lock: Generally due to application concurrency mechanisms, or table setup issues.


- TM - DML enqueue: Generally due to application issues, particularly if foreign key constraints have not been indexed.

- ST - Space management enqueue: Usually caused by too much space management occurring. For example: create table as select on large tables on busy instances, small extent sizes, lots of sorting, etc.

    Undo (Rollback) Information

Undo (Rollback) information is provided in two sections. They are identical in both 8i and 9i and are self-explanatory.

- Rollback Segment Stats
- Rollback Segment Storage

In 9i the following two sections are added to provide similar information on the System Managed Undo (SMU) tablespace. Both are self-explanatory.

- Undo Segment Summary for DB
- Undo Segment Stats for DB

The examples below show typical performance problems related to Undo (rollback) segments:

    - Rollback Segment Stats for DB

Rollback Segment Stats for DB: PHS2  Instance: phs2  Snaps: 100 -104
->A high value for "Pct Waits" suggests more rollback segments may be required

        Trans Table      Pct      Undo Bytes
RBS No         Gets    Waits         Written    Wraps  Shrinks  Extends
------ ------------ -------- --------------- -------- -------- --------
     0          9.0     0.00               0        0        0        0
     4      6,838.0     0.18         554,206        0        0        0
     5      2,174.0     0.55         292,474        0        0        0
     6      4,309.0     0.23         471,992        0        0        0

    ....

In this case, the Pct Waits on three of the rollback segments indicates that there is some minor contention on the rollbacks and that either another rollback segment or more space should be added.

    - Rollback Segment Storage for DB

    Rollback Segment Storage for DB: PHS2 Instance: phs2 Snaps: 100 -104

    ->Optimal Size should be larger than Avg Active

RBS No    Segment Size      Avg Active    Optimal Size    Maximum Size
------ --------------- --------------- --------------- ---------------
     0         753,664               0                         753,664
     4   2,520,743,936               0                   2,520,743,936
     5   2,109,702,144               0                   2,109,702,144
     6     528,449,536               0                     528,449,536

    In this case, the client does not have optimal set.


Rollback Segment Storage for DB: RW1PRD  Instance: rw1prd  Snaps: 10489 - 1
->The value of Optimal should be larger than Avg Active

RBS No    Segment Size  Avg Active     Optimal Size    Maximum Size
------ --------------- ----------- --------------- ---------------
     0       5,087,232           0                       5,087,232
     1      52,420,608 ###########      52,428,800     335,536,128
     2      52,420,608  10,551,688      52,428,800     283,107,328
     3      52,420,608  10,621,742      52,428,800     283,107,328
     4      52,420,608  10,736,056      52,428,800     283,107,328
     5      52,420,608  17,861,266      52,428,800     325,050,368
     6      52,420,608  19,579,373      52,428,800     335,536,128
     7      52,420,608  11,571,513      52,428,800     283,107,328
     8      52,420,608  44,140,215      52,428,800     335,536,128
     9      52,420,608  65,045,643      52,428,800     325,050,368

    In this instance optimal is set and we can see an overflow for average active for RBS 1 and

that RBS 9 was also larger than optimal. If this is a consistent problem it may be that the optimal value should be raised.

    - Undo Segment Summary for DB

Undo Segment Summary for DB: S901  Instance: S901  Snaps: 2 -3
-> Undo segment block stats:
-> uS - unexpired Stolen,   uR - unexpired Released,   uU - unexpired reUsed
-> eS - expired   Stolen,   eR - expired   Released,   eU - expired   reUsed

Undo           Undo        Num  Max Qry     Max Tx  Snapshot  Out of  uS/uR/uU/
 TS#         Blocks      Trans  Len (s)   Concurcy   Too Old   Space  eS/eR/eU
---- -------------- ---------- -------- ---------- -------- ------- -------------------
   1         20,284      1,964        8         12        0       0  0/0/0/0/0/0

The description of the view V$UNDOSTAT in the Oracle9i Database Reference guide provides some insight into the column definitions. Should the client encounter SMU problems, monitoring this view every few minutes would provide more useful information.

    - Undo Segment Stats for DB

Undo Segment Stats for DB: S901  Instance: S901  Snaps: 2 -3
-> ordered by Time desc

                     Undo        Num  Max Qry   Max Tx   Snap  Out of  uS/uR/uU/
End Time           Blocks      Trans  Len (s)    Concy Too Old  Space  eS/eR/eU
------------ ------------ -------- ------- -------- ------- -------------------
12-Mar 16:11       18,723      1,756       8       12      0       0  0/0/0/0/0/0
12-Mar 16:01        1,561        208       3       12      0       0  0/0/0/0/0/0

This section provides a more detailed look at the statistics in the previous section by listing the information as it appears in each snapshot.

It should be noted that 9i introduces an optional init.ora parameter called UNDO_RETENTION, which allows the DBA to specify how long the system will attempt to retain undo information for a committed transaction without it being overwritten or


recaptured. This parameter, based in units of wall-clock seconds, is defined universally for all undo segments.

Use of UNDO_RETENTION can potentially increase the size of the undo segment for a given period of time, so the retention period should not be arbitrarily set too high. The UNDO tablespace still must be sized appropriately. The following calculation can be used to determine how much space a given undo segment will consume given a set value of UNDO_RETENTION.

Undo Segment Space Required = (undo_retention_time * undo_blocks_per_second * db_block_size)

As an example, an UNDO_RETENTION of 5 minutes (the default) with 50 undo blocks/second (8K block size) will generate:

Undo Segment Space Required = (300 seconds * 50 blocks/second * 8K/block) = 120 MB

    The retention information (transaction commit time) is stored in every transaction table

    block and each extent map block. When the retention period has expired, SMON will be

signaled to perform undo reclaims, done by scanning each transaction table for undo timestamps and deleting the information from the undo segment extent map. Only during extreme space constraint issues will the retention period not be obeyed.
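A sketch of the same sizing arithmetic driven by observed activity in V$UNDOSTAT (peak undo blocks per second times the retention period times the block size; this mirrors the example above and assumes automatic undo management is in use):

SELECT (SELECT value FROM v$parameter WHERE name = 'undo_retention') *
       (SELECT MAX(undoblks / ((end_time - begin_time) * 86400)) FROM v$undostat) *
       (SELECT value FROM v$parameter WHERE name = 'db_block_size')
       AS undo_space_bytes
FROM   dual;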

    Latch Information

    Latch information is provided in the following three sections.

    . Latch Activity

    . Latch Sleep breakdown

    . Latch Miss Sources

This information should be checked whenever the "latch free" wait event or other latch wait events experience long waits.

    - Latch Activity

Latch Activity for DB: PHS2  Instance: phs2  Snaps: 100 -104
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
  willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0

                                                 Pct    Avg                 Pct
                                        Get      Get   Slps       NoWait NoWait
Latch Name                         Requests     Miss  /Miss     Requests   Miss
----------------------------- -------------- ------ ------ ------------ ------
KCL freelist latch                      9,382    0.0                   0
KCL lock element parent latch          15,500    0.0    0.0            0
KCL name table latch                    3,340    0.0                   0
Token Manager                          12,474    0.0    0.0            0
active checkpoint queue latch           2,504    0.0                   0
batching SCNs                         114,141    0.0    0.0            0
begin backup scn array                  6,697    0.0                   0


cache buffer handles                        1    0.0                   0
cache buffers chains                1,056,119    0.1    0.2        6,303    0.0
cache buffers lru chain               104,996    0.0               4,078    0.0

    This section is identical in both 8i and 9i.

This section is particularly useful for determining latch contention on an instance. Latch contention generally indicates resource contention and supports indications of it in other sections.

Latch contention is indicated by a Pct Miss of greater than 1.0% or a relatively high value in Avg Slps/Miss.

While each latch can indicate contention on some resource, the more common latches to watch are:

- cache buffers chains: Contention on this latch confirms a hot block issue. See Note 62172.1 'Understanding and Tuning Buffer Cache and DBWR in Oracle7, Oracle8, and Oracle8i' for a discussion of this phenomenon.

- shared pool: Contention on this latch in conjunction with reloads in the SQL Area of the library cache section indicates that the shared pool is too small. Contention on this latch indicates that one of the following is happening:

. The library cache, and hence the shared pool, is too small.

. Literal SQL is being used. See Note 62143.1 'Understanding and Tuning the Shared Pool' for an excellent discussion of this topic.

. On versions 8.1.7.2 and higher, session_cached_cursors might need to be set. See enhancement bug 1589185 for details.

See Note 62143.1 'Understanding and Tuning the Shared Pool in Oracle7, Oracle8, and Oracle8i' for a good discussion on literal SQL and its impact on the shared pool and library cache.
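To check current latch contention outside of a report, a sketch against V$LATCH (cumulative since instance startup) can be used:

SELECT name, gets, misses, sleeps,
       ROUND(100 * misses / GREATEST(gets, 1), 2) AS pct_miss
FROM   v$latch
WHERE  misses > 0
ORDER  BY sleeps DESC, misses DESC;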

    - Latch Sleep breakdown

Latch Sleep breakdown for DB: PHS2  Instance: phs2  Snaps: 100 -104
-> ordered by misses desc

                                      Get                             Spin &
Latch Name                       Requests      Misses      Sleeps Sleeps 1->4
-------------------------- -------------- ----------- ----------- ------------
row cache objects               1,908,536      70,584      16,976 54656/14893/1022/13/0
dlm resource hash list            624,455      15,931      71,868 118/959/14835/19/0
parallel query alloc buffe         37,000       4,850         362 4502/335/12/1/0
shared pool                       176,560       3,238         773 2649/431/134/24/0
library cache                     871,408       1,572         935 925/433/151/63/0
cache buffers chains            1,056,119         872         209 670/195/7/0/0


    ....

This section provides additional supporting information to the previous section. It is identical in 8i and 9i.

- Latch Miss Sources

Latch Miss Sources for DB: PHS2  Instance: phs2  Snaps: 100 -104
-> only latches with sleeps are shown
-> ordered by name, sleeps desc

                                                     NoWait              Waiter
Latch Name               Where                       Misses     Sleeps   Sleeps
------------------------ -------------------------- ------- ---------- -------
batching SCNs            kcsl01                            0          1       1
cache buffers chains     kcbgtcr: kslbegin                 0        114      39
cache buffers chains     kcbgcur: kslbegin                 0         62      62
cache buffers chains     kcbrls: kslbegin                  0         29     104
cache buffers chains     kcbchg: kslbegin: bufs not        0          1       1
dlm group lock table lat kjgalk: move a lock from p        0          1       0
dlm lock table freelist  kjlalc: lock allocation           0         10       6
dlm lock table freelist  kjgdlk: move lock to paren        0          1       2
dlm lock table freelist  kjlfr: remove lock from pa        0          1       3
dlm resource hash list   kjucvl: open cr lock reque        0     36,732     562
dlm resource hash list   kjxcvr: handle convert req        0     29,189  39,519
dlm resource hash list   kjskchcv: convert on shado        0      3,907      25
dlm resource hash list   kjrrmas1: lookup master no        0      1,603      18
dlm resource hash list   kjcvscn: remove from scan         0        383       0
dlm resource hash list   kjrlck: lock resource             0         26   1,965

This section provides a detailed breakdown of which latches are missing and sleeping. It is particularly useful in identifying library cache bugs as it provides latch child information not available in the previous two sections. Search on the latch child name experiencing high misses or sleeps and you can often find the bug responsible. It is identical in 8i and 9i.

    Dictionary Cache Statistics

This is an interesting section to monitor, but one about which you can do very little: the only way to change the size of the dictionary cache is to change the shared pool size, as the dictionary cache is allocated as a percentage of the shared pool. It is identical in 8i and 9i.

Dictionary Cache Stats for DB: PHS2  Instance: phs2  Snaps: 100 -104
->"Pct Misses" should be very low (< 2% in most cases)
->"Cache Usage" is the number of cache entries being used
->"Pct SGA" is the ratio of usage to allocated size for that cache

                                Get    Pct     Scan   Pct      Mod  Final  Pct
Cache                      Requests   Miss Requests  Miss      Req  Usage  SGA
---------------------- ------------ ------ -------- ----- -------- ------ ----
dc_constraints                    0             0              0        0    0
dc_database_links                 0             0              0        0    0


dc_files                          0             0              0      161   98
dc_free_extents             226,432   16.8      304   0.0      288 ######   99

    ...
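The current (cumulative) dictionary cache figures come from V$ROWCACHE; a sketch of the miss percentage per cache:

SELECT parameter, gets, getmisses,
       ROUND(100 * getmisses / GREATEST(gets, 1), 2) AS pct_miss
FROM   v$rowcache
WHERE  gets > 0
ORDER  BY pct_miss DESC;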

    Library Cache Statistics

This section of the report shows information about the activity in the different sub-areas of the library cache.

    The 8i version looks like this.

Library Cache Activity for DB: PHS2  Instance: phs2  Snaps: 100 -104
->"Pct Misses" should be very low

                         Get  Pct            Pin  Pct             Invali-
Namespace           Requests  Miss      Requests  Miss   Reloads  dations
--------------- ------------ ------ -------------- ------ ---------- --------
BODY                      48    0.0             48    0.0          0        0
CLUSTER                    7    0.0              8    0.0          0        0
INDEX                      0                     0                 0        0
OBJECT                     0                     0                 0        0
PIPE                       0                     0                 0        0
SQL AREA              42,640    0.2        193,249    0.1         23       17
TABLE/PROCEDURE          287    3.8          1,701    2.6          6        0
TRIGGER                    0                     0                 0        0

The 9i version looks like this.

Library Cache Activity for DB: S901  Instance: S901  Snaps: 2 -3
->"Pct Misses" should be very low

                         Get  Pct            Pin  Pct             Invali-
Namespace           Requests  Miss      Requests  Miss   Reloads  dations
--------------- ------------ ------ -------------- ------ ---------- --------
BODY                      29    0.0             29    0.0          0        0
SQL AREA                 579    5.7      2,203,964    0.0          0        0
TABLE/PROCEDURE          292    0.0            496    0.0          0        0
TRIGGER                   12    0.0             12    0.0          0        0

Values in Pct Misses or Reloads in the SQL Area, Table/Procedure or Trigger rows indicate that the shared pool may be too small. To confirm this, consistent values (not sporadic) in Pct Misses or Reloads in the Index row indicate that the buffer cache is too small (the Index row is no longer available in 9i).

    Values in Invalidations in the SQL Area indicate that a table definition changed while a query was

    being run against it or a PL/SQL package being used was recompiled.
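The same namespace-level figures can be pulled at any time from V$LIBRARYCACHE; a sketch:

SELECT namespace, gets, pins, reloads, invalidations,
       ROUND(100 * (1 - gethitratio), 2) AS get_pct_miss
FROM   v$librarycache
ORDER  BY reloads DESC;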

    SGA Memory Summary

    This section provides a breakdown of how the SGA memory is used at the time of the report. It is

    useful to be able to track this over time. This section is identical in 8i and 9i.

SGA regions                      Size in Bytes
------------------------------ ----------------
Database Buffers                     209,715,200
Fixed Size                               103,396
Redo Buffers                         104,873,984


Variable Size                        423,956,480
                                ----------------
sum                                  738,649,060
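The same summary is available at any time from V$SGA, and the detail shown in the next section from V$SGASTAT; a sketch:

SELECT name, value FROM v$sga;          -- region summary

SELECT pool, name, bytes                -- detailed breakdown
FROM   v$sgastat
ORDER  BY pool, bytes DESC;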

    SGA Memory Detail

This section shows a detailed breakdown of memory usage by the SGA at the beginning and ending of the reporting period. It allows the DBA to track memory usage throughout the business cycle.

    It is identical in 8i and 9i.

    SGA breakdown difference for DB: PHS2 Instance: phs2 Snaps: 100 -104

Pool        Name                          Begin value      End value  Difference
----------- ------------------------ -------------- -------------- -----------
java pool   free memory                   20,000,768     20,000,768           0
large pool  PX msg pool                  230,386,744    230,386,744           0
large pool  free memory                      299,976        299,976           0
shared pool Checkpoint queue                 189,280        189,280           0
shared pool KGFF heap                        252,128        252,128           0
shared pool KGK heap                          31,000         31,000           0
shared pool KQLS heap                      2,221,552      2,246,640      25,088
shared pool PL/SQL DIANA                     436,240        436,240           0
shared pool PL/SQL MPCODE                    138,688        138,688           0

    Init.ora Parameters Summary

    The final section shows the current init.ora parameter settings. It displays those that are more

commonly used, including some hidden parameters. It is identical in 8i and 9i.

    init.ora Parameters for DB: PHS2 Instance: phs2 Snaps: 100 -104

                                                                End value
Parameter Name                Begin value                       (if different)
----------------------------- --------------------------------- --------------
_PX_use_large_pool            TRUE
always_anti_join              HASH
audit_trail                   TRUE
background_dump_dest          /u01/app/oracle/admin/phs2/bdump
bitmap_merge_area_size        10485760
compatible                    8.1.7
control_files                 /oradata/phs2/hsz16/control_01.db
core_dump_dest                /u01/app/oracle/admin/phs2/cdump
cursor_space_for_time         TRUE

    ....
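Outside of Statspack, the explicitly set (non-default) parameters can be listed with a sketch like this (hidden parameters appear only if they have been explicitly set):

SELECT name, value
FROM   v$parameter
WHERE  isdefault = 'FALSE'
ORDER  BY name;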