microsoft powerpoint - statenc.ppt [read-only]

State of North Carolina DB2 for z/OS Utilities Topics, August 27th, 2008. Craig Friske, IBM, [email protected]


TRANSCRIPT

Page 1

Page 2

© 2008 IBM Corporation

Agenda
– Usability with LISTDEF and TEMPLATE
– COPY/RECOVER/QUIESCE/MODIFY
– REORG
– RTS
– RUNSTATS
– LOAD
– Parallel index build

We will cover best practices, including recommended keyword specifications, for the utilities that are most significant within the utilities suite because they are run often or can consume large amounts of resources.

For several of these utilities, building indexes is an important part of processing, so we will cover some of the details of that process to see how it may affect space needs and performance.

Sorting is a critical part of building indexes, and it is used at various times in many other phases of utilities as well. As of DB2 V8, DFSORT is always used, so we will look at some of its characteristics and recommendations. This involves using dynamic allocation of sort work data sets and why you should let DB2 manage those data sets when possible.

Along the way there will be a few samples to better understand the concepts.

Page 3

Wildcarding/Allocation Overview

Improved utility processing for groups of objects
– Avoid job stream changes when the application changes
– Supports table spaces, index spaces, or both

Generate object lists
– LISTDEF statement to generate a list of objects
  • Wildcarding (e.g. dbname*)
  • Explicit lists
  • INCLUDE or EXCLUDE objects

Dynamically allocate data sets
– TEMPLATE statement to define data sets
  • Intelligent data set sizing
  • Variables (job, utility, object, date, time, etc.)

Control statements
– OPTIONS PREVIEW, EVENT (halt, skip, RC)

Page 4

Wildcarding using LISTDEF

List example: process objects X, Y, and Z in database DBA.

With V6:

  QUIESCE TABLESPACE DBA.X TABLESPACE DBA.Y TABLESPACE DBA.Z
  COPY TABLESPACE DBA.X COPYDDN(C1,C2) RECOVERYDDN(C3,C4)
       TABLESPACE DBA.Y COPYDDN(..,..) RECOVERYDDN(..,..)
       TABLESPACE DBA.Z COPYDDN(..,..) RECOVERYDDN(..,..)

With V7:

  LISTDEF X INCLUDE TABLESPACE DBA.*
            EXCLUDE TABLESPACE DBA.A
  TEMPLATE T1 DSNAME(&DB..&TS..&LOCREM.&PRIBAC..D&JDATE.)
  QUIESCE LIST X
  COPY LIST X COPYDDN(T1,T1) RECOVERYDDN(T1,T1)

Page 5

Dynamic Allocation Templates

Template example: allocate image copy data sets (2 local, 2 remote).

V6 – 4 DD statements for each object:

  //C1 DD DSN=DBA.X.LP.D20020409,UNIT=..,VOL=SER=..,DISP=..
  //C2 DD DSN=DBA.X.LB.D20020409,UNIT=..,VOL=SER=..,DISP=..
  //C3 DD DSN=DBA.X.RP.D20020409,UNIT=..,VOL=SER=..,DISP=..
  //C4 DD DSN=DBA.X.RB.D20020409,UNIT=..,VOL=SER=..,DISP=..
  //SYSIN DD *

V7 – a single template for local and remote:

  //SYSIN DD *
  TEMPLATE A DSNAME(&DB..&TS..&LOCREM.&PRIBAC..D&DATE.)

Generated data sets:

  DSN=DBA.X.LP.D2002099   (April 9th, Local Primary for DBA.X)
  DSN=DBA.X.LB.D2002099
  DSN=DBA.X.RP.D2002099
  DSN=DBA.X.RB.D2002099

Page 6

Utilities supported

Utility            LISTDEF  TEMPLATE
CHECK DATA         No       Yes
CHECK INDEX        Yes      Yes
CHECK LOB          No       Yes
COPY               Yes      Yes
COPYTOCOPY         Yes      Yes
LOAD               No       Yes
MERGECOPY          Yes      Yes
MODIFY RECOVERY    Yes      n/a
MODIFY STATISTICS  Yes      n/a
QUIESCE            Yes      n/a
REBUILD INDEX      Yes      Yes
RECOVER            Yes      n/a
REORG INDEX        Yes      Yes
REORG TABLESPACE   Yes      Yes
REPORT             Yes      n/a
RUNSTATS           Yes      n/a
UNLOAD             Yes      Yes

Page 7

Automatic Space Management

Sliding secondary allocation quantity size
– Applies to DB2-managed pagesets only
– Tries to avoid VSAM maximum extent limit errors; can reach maximum data set size before running out of extents
– Uses cylinder allocation
  • Default PRIQTY: 1 cylinder for non-LOB table spaces and indexes, 10 cylinders for LOB table spaces
  • Improved SQL SELECT and INSERT performance: 2x improvement relative to track allocation
– Can be used for:
  • New pagesets: no need for PRIQTY/SECQTY values
  • Existing pagesets: SQL ALTER the PRIQTY/SECQTY values to -1, plus schedule a REORG
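For an existing pageset, the ALTER-and-REORG path above can be sketched like this; the object name DBA.TS1 is hypothetical:

```sql
-- Switch an existing DB2-managed table space to sliding-scale
-- (cylinder) allocation by setting both quantities to -1:
ALTER TABLESPACE DBA.TS1 PRIQTY -1 SECQTY -1;
-- Then schedule a REORG so the new allocation takes effect.
```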

Page 8

COPY Best Practices

COPY
– PARALLEL keyword provides parallelism for lists of objects (including partitions)
– CHECKPAGE YES (default in DB2 9; no longer sets copy pending – look for RC=8!)
– Maximize other utilities' access to objects while copying a list with SHRLEVEL CHANGE and OPTIONS EVENT(ITEMERROR,SKIP)
  • Keeps objects in the list in UTRW state *only* as each object is being copied, instead of for the duration of the COPY utility
  • UTRW – utility allows read/write access by applications, but no access for exclusive utilities
– Incremental copy rule of thumb: consider using incremental image copy if
  • <5% of pages are randomly updated (typically means less than 1% of rows updated)
  • <80% of pages are sequentially updated
  • Incremental image copies use list prefetch, so monitor for RID list pool full conditions
– Copy indexes on your most critical tables to speed up recovery

MERGECOPY – consider using it

PARALLEL must be specified with a list to back up objects in parallel. It can also be used to limit the degree of parallelism if desired; otherwise the COPY utility determines the optimal number of parallel tasks itself.

CHECKPAGE YES is recommended to ensure consistency of the data. After all, one doesn't want to RECOVER with a bad copy of the data! In addition, since COPY is I/O bound and CHECKPAGE adds CPU, the cost in elapsed time is negligible. CHECKPAGE YES is the default in DB2 9, which no longer sets the object into copy pending to indicate that problems were found; instead it remembers this in a new SYSCOPY record and sets RC=8. Look for message DSNU441I for space map errors or DSNU518I for other errors, and check the return codes.

Specifying OPTIONS EVENT(ITEMERROR,SKIP) in the same job step as a COPY SHRLEVEL CHANGE prompts the COPY utility to set the UTRW utility-in-progress state on each object in the list only as it is being copied. This increases the availability of the objects in the list, since utilities that require exclusive use of an object are not allowed while the object is in UTRW status. Two examples of utilities that may require exclusive use of objects are LOAD and REORG.
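Putting these keywords together, a COPY job step for a list might look like the following sketch; the list pattern and template DSNAME are hypothetical:

```sql
OPTIONS EVENT(ITEMERROR,SKIP)
LISTDEF CLIST INCLUDE TABLESPACE DBA.*
TEMPLATE T1 DSNAME(&DB..&TS..&LOCREM.&PRIBAC..D&JDATE.)
COPY LIST CLIST COPYDDN(T1,T1) RECOVERYDDN(T1,T1)
     SHRLEVEL CHANGE CHECKPAGE YES PARALLEL
-- Each object is in UTRW only while it is being copied;
-- afterwards look for DSNU441I/DSNU518I messages and RC=8.
```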

Page 9

RECOVER/QUIESCE Best Practices

RECOVER
– PARALLEL keyword provides parallelism for lists of objects (including partitions)
– Compressed pagesets result in a faster RESTORE phase
– Enable Fast Log Apply (which can use dual-copy logs) and PAV
– <=10 jobs/member with LOGAPSTG=100MB, 20-30 objects per RECOVER
– For recovery to a prior point in time:
  • Always recover related sets of objects together (same RECOVER utility statement)
  • Always run the REPORT RECOVERY utility on each object and determine the recovery base before running the RECOVER utility
– DB2 9 for z/OS: recover to a point in time with consistency
  • Backs out uncommitted changes for the objects specified on the RECOVER utility statement
  • Significantly reduces the need to run QUIESCE, which can be disruptive to applications

QUIESCE
– WRITE NO is less disruptive (no quiescing of COPY NO indexes)
– Use TABLESPACESET

PARALLEL for RECOVER works the same as for COPY. In addition, one should enable the ZPARM LOGAPSTG for fast log apply with a specified amount of space. This fast log apply option is also exploited during DB2 restart processing.

DB2 9 now provides backout capability for point-in-time recovery. All uncommitted changes for the specified objects will be backed out, so for full application consistency you need to specify all related objects on the RECOVER statement (use REPORT TABLESPACESET to find them). This should significantly reduce the need to run QUIESCE, which can be disruptive to applications.

We believe that our BUFNO for COPY and RECOVER is adequate, but certain customers have claimed that increasing BUFNO helped performance. At SVL, we have tested different values for BUFNO without seeing any improvement in performance, but we would certainly appreciate your feedback if you can prove otherwise!

If you are using RI, then you should really be specifying TABLESPACESET to get a consistent point for all related tables. This will make recoveries to a specific point in time easier.
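A point-in-time recovery of a related set, per the notes above, might be sketched as follows; the object names and log point are hypothetical:

```sql
-- Identify the related objects and the recovery base first:
REPORT TABLESPACESET TABLESPACE DBA.TS1
REPORT RECOVERY TABLESPACE DBA.TS1
-- Then recover all related objects in ONE statement:
RECOVER TABLESPACE DBA.TS1
        TABLESPACE DBA.TS2
        TOLOGPOINT X'00000551BE7D'
```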

WRITE NO is also recommended. WRITE YES not only quiesces the table space with a drain, but also drains all indexes for the table space and writes out the pages for all objects. The writing has the potential to encounter deadlocks with applications, although the application of PQ96628(?) with data-first claiming should help avoid these deadlocks.

The main advantage of WRITE YES applies only if you are using an offline process like DSN1COPY that requires all pages to be forced out.

If DB2 crashes, restart has to go back further in the log, which hurts restart times; but the pages are externalized at every checkpoint, so the exposure isn't great.

Page 10

MODIFY RECOVERY Best Practices

Run MODIFY RECOVERY regularly to clean up old records in SYSCOPY and SYSLGRNX
DB2 9 has RETAIN LAST n or GDGLIMIT
Also resets the "ALTER_ADD_COLUMN" flag in the OBD when deleting image copies with previous row versions
– MODIFY RECOVERY DELETE AGE/DATE to delete everything before the REORG that follows the ALTER
– Will make the next REORG more efficient if no more old row versions exist
Remember that MODIFY RECOVERY works on day boundaries

MODIFY RECOVERY is usually run to clean up old records in SYSLGRNX and SYSCOPY. It not only cleans up those tables and prevents them from growing indefinitely; it also resets a flag kept in the OBD about ALTER-added columns. This flag is needed for point-in-time recovery to work correctly: as long as there are image copies that can be recovered, DB2 must keep the record format information for the row versions that were in use when those image copies were taken. Even if the data sets are already gone, DB2 still tracks those image copies in SYSCOPY. So make sure MODIFY RECOVERY is run on a regular basis to get rid of all image copy entries taken after a new column was added to a table and before the REORG that materialized the new column.

If the ALTER_ADD_COLUMN flag is not reset, REORG doesn't know whether the column has been populated, so it must fully convert the rows to external format to populate the added column. Once the flag is reset, REORG doesn't have to populate any columns, so it can optimize the format of the data that is unloaded and reloaded without fully converting it.

DB2 9 for z/OS has a new option RETAIN that can be used to retain a number of image copies, or even to retrieve the number of image copies to retain from the GDG limit of the generation data group into which the image copies were written. This makes it even easier to run MODIFY RECOVERY.
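A sketch of the options discussed; the object name, age, and retention count are hypothetical:

```sql
-- Delete SYSCOPY/SYSLGRNX records older than 30 days:
MODIFY RECOVERY TABLESPACE DBA.TS1 DELETE AGE(30)
-- DB2 9: keep only the last 3 image copies ...
MODIFY RECOVERY TABLESPACE DBA.TS1 RETAIN LAST(3)
-- ... or as many as the GDG limit of the copy data sets:
MODIFY RECOVERY TABLESPACE DBA.TS1 RETAIN GDGLIMIT
```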

Page 11

Why do we need to REORG?

1) Reclaim fragmented space, optimize space utilization
2) Improve access performance by organizing the data rows in clustering order
3) Apply online schema changes (ALTER)
4) Functions (REBALANCE, DISCARD, UNLOAD)

[Diagram: a clustering index (keys K1-K4) over a table space before REORG (rows R1-R4 out of clustering order, with deleted rows leaving gaps) and after REORG (space reclaimed and the rows in clustering order)]

Page 12

Online REORG in a Nutshell

[Diagram: the original TS/IXs are reorganized into shadow data sets while REORG takes a full image copy, applies log records, and finally drains all access to switch]

Uses 3 new elements:
1. Shadow data sets
2. Log records
3. Mapping index

Page 13

Use of Log Records in OLR

Step 1: Record the current log position; unload, sort, and reload into the shadow copy; record the new log position. Users continue to read and write the original data.

Steps 2 through n: Read the log between the old and new LRSN, apply it to the shadow copy, and record the new LRSN. Users still write to the original data, except during the last log iteration.

Step n+1: Switch users to access the newly reorged copy of the data.

[Diagram: original data, shadow copy, DB2 log, and image copies across these steps]

Page 14

Reorg Shrlevel Change

[Diagram: DBET states for the table space and indexes across the UNLOAD, RELOAD, SORT, BUILD, LOG, and SWITCH phases – UTRW with claim-read for the early phases, UTRO with writers drained late in the LOG phase, plus an additional mass delete lock]

Serialization for the SHRLEVEL CHANGE option is quite a bit different. For the first four phases the lock has been completely removed: REORG sets UTRW as the DBET state to show utility interest, but it now claims as a reader instead of draining. Notice that SHRLEVEL CHANGE has a SWITCH phase just like SHRLEVEL REFERENCE; however, it also has a LOG phase, which updates shadow table space and index pages from log records. For most of the LOG phase, REORG acts as a claim writer; however, when it gets to a point where only a small number of log records remain to be applied, it drains writers and sets the DBET to UTRO. There is also an additional mass delete lock.

Page 15

V9 REORG Partition Parallelism

REORG TABLESPACE TS – three partitions, one PI, one NPSI

[Diagram: UNLOAD tasks for TS parts 1-3 feed per-partition SORT and RELOAD tasks, while SORTBLD tasks sort and build the PI and the NPSI]

– Multiple concurrent jobs no longer needed
– No longer allowed with PART specification and NPSIs

This diagram illustrates the improvements in elapsed time during the UNLOAD, SORT, and RELOAD phases of REORG. In prior releases, partition-level parallelism was not supported, so there was one task to unload, sort, and reload data (one task handling all three phases). Now, when there is enough memory to start a separate set of tasks for each partition, the partitions can be processed in parallel in one job. This parallelism could be achieved in prior releases with separate jobs, but those jobs had potential availability problems and built NPIs very inefficiently.

The performance benefit has no downside when reorganizing entire table spaces, or for a part-range REORG when no NPIs exist. However, when there are NPIs and SHRLEVEL REFERENCE/CHANGE PART is specified, more work may be done to rebuild the entire NPI (it depends on how much work BUILD2 did previously; the worst case is a reorg of a single partition, the best case a reorg of x-1 parts, depending on the number and size of keys). Note that with NPIs, multiple concurrent REORG PART jobs for the same table space are not allowed.

Page 16

REORG Best Practices

REORG
– Use SHRLEVEL REFERENCE or SHRLEVEL CHANGE
– Inline COPY & inline STATISTICS
– KEEPDICTIONARY (track dictionary effectiveness with history statistics PAGESAVE) – large performance impact
– 254-partition limit for compressed table spaces in V8
  • PK51853 shipped new ZPARM MAX_UTIL_PARTS (watch virtual storage)
  • DB2 9 for z/OS no longer has this limit and uses virtual storage more effectively
– Index parallelism (SORTKEYS is the default and ignored in V8)
  • Remove SORTWKxx / UTPRINxx, and turn on UTSORTAL=YES
  • Run REORG against as many partitions as possible in the same job, or against the whole table space

REORG LOG NO and KEEPDICTIONARY should be used for the same reasons as for LOAD. KEEPDICTIONARY prevents uncompressing and re-compressing every single row, saving CPU and also disk space for the sort work data sets.

The 254-partition limit for compressed table spaces can be lifted by the DBA using ZPARM MAX_UTIL_PARTS on DB2 for z/OS V8 (provided by PK51853). DB2 9 for z/OS has removed that limit altogether, as virtual storage consumption has been improved.

The default for REORG is SORTDATA YES: unload the data and then sort it into clustering order. If the data is already in or near perfect clustering order and disk space for sorting is low, you can specify SORTDATA NO to unload the data in clustering key order and skip the sort. Otherwise we recommend letting REORG use the sort. SORTDATA NO is not available with SHRLEVEL CHANGE.

NOSYSREC with SHRLEVEL NONE also avoids using the SYSREC data set for the unload, speeding up the process by passing the rows directly into the sort. If this option is specified, the REORG should be preceded by a COPY, as the table space will be left in RECP if the REORG fails. For SHRLEVEL CHANGE this option is always used.
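A REORG statement combining these recommendations might look like the following sketch; the object name and template are hypothetical:

```sql
TEMPLATE TCOPY DSNAME(&DB..&TS..COPY.D&JDATE.)
REORG TABLESPACE DBA.TS1
      SHRLEVEL REFERENCE
      LOG NO KEEPDICTIONARY
      COPYDDN(TCOPY)                    -- inline image copy
      STATISTICS TABLE(ALL) INDEX(ALL)  -- inline RUNSTATS
```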

Page 17

REORG Best Practices

REORG
– Partition parallelism in DB2 9 and NPI processing
  • Parallel REORG jobs for the same table space but different partitions are no longer supported if NPIs are defined
  • After REORG PART with no BUILD2 phase, no need for a REORG of the NPI
  • Watch out for LISTDEFs at the partition level with NPIs – a full REORG might be more efficient
– SHRLEVEL NONE if constrained for disk space
  • LOG NO reduces log volume; requires an image copy (inline is a good choice)
  • NOSYSREC to avoid I/O (forced for SHRLEVEL CHANGE)
– Take a full image copy before REORG SHRLEVEL NONE
  • Use REUSE to logically reset and reuse DB2-managed data sets without deleting and redefining them (affects elapsed time)


Page 18

REORG Best Practices continued

REORG
– SORTDATA NO only if the data is already in or near perfect clustering order and disk space is an issue
– Set appropriate PRIQTY/SECQTY to minimize extend processing
  • PK60956 improves SORTBLD elapsed time by up to 20x for indexes with a small SECQTY!
  • Affects all utilities that (re-)build indexes
– Run MODIFY RECOVERY some time after ALTER TABLE … ADD COLUMN

When ALTER TABLE … ADD COLUMN is used, REORG needs to materialize the new column; it will always decompress the rows, even with KEEPDICTIONARY, and will unload all rows in external format. This increases the sort work data set requirement, in some cases significantly. This happens as long as there are still image copies from before the first REORG after the ALTER. Use MODIFY RECOVERY to clean up old image copies with the previous record format, according to your backup retention policies.

When building indexes, the performance of extend processing is improved by APAR PK60956. Even so, it is beneficial to choose primary and secondary quantity definitions for indexes so that the number of extents is kept to a minimum. Specifying PRIQTY -1 and SECQTY -1 lets DB2 pick appropriate values.

Page 19

REORG SHRLEVEL CHANGE BPs

REORG SHRLEVEL CHANGE (sometimes called online REORG)
• TIMEOUT TERM frees up the objects if timeouts occur while getting drains
• DRAIN ALL (better chance of entering the SWITCH phase)
• MAXRO = IRLMRWT minus 5-10 seconds (to prevent timeouts)
• DRAIN_WAIT = IRLMRWT minus 5-10 seconds (to prevent timeouts)
• RETRY = utility lock timeout multiplier (6 by default)
• RETRY_DELAY = DRAIN_WAIT * RETRY
• Enable detection of long-running readers (ZPARM) and activate IFCID 0313 (it's included in STATS CLASS(3))
  – This will report readers that may block commands and utilities from draining
  – It includes "well-behaved" WITH HOLD cursors, which a drain cannot break in on
• "More Joys of Commitment" by Bonnie Baker
  – http://www.db2mag.com/db_area/archives/2003/q1/programmers.shtml

TIMEOUT TERM terminates the utility if it times out trying to acquire drains, and thus avoids leaving the objects in a UTxx state, which could result in unavailability.

DRAIN ALL gets a drain on all objects at once instead of doing it in two steps (draining writers followed by draining readers). If the drain fails, the data is still available in a UTRW state. The two steps may have an advantage if activity is such that a DRAIN ALL rarely succeeds; the first step may then quiesce enough activity for the second drain to be more likely to succeed.

MAXRO is the estimated time to finish the log apply as a UTRO utility before switching data sets. We suggest using something less than the SQL lock timeout value (SPRM???), such as that value divided by two.

RETRY is the number of times to retry getting drains if unsuccessful. We suggest using the utility multiplier SPRMUTO, which has a default value of 6.

RETRY_DELAY is the amount of time to wait after a failure before trying to drain again. If your workload has spikes of activity, you may want to set the delay considering the average period of those spikes. If activity is fairly consistent, you may want to retry again immediately. In the absence of any specific workload knowledge, we suggest a value equal to the drain wait time times the retry value, or a default of 3 minutes.

In short: MAXRO – about half the lock timeout value; DRAIN_WAIT – less than the transaction timeout (half?), to prevent transactions from timing out; RETRY – 6; RETRY_DELAY – DRAIN_WAIT x RETRY.
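With IRLMRWT at, say, 60 seconds, the guidance above might translate into the following sketch; the object, mapping table, and values are illustrative:

```sql
REORG TABLESPACE DBA.TS1
      SHRLEVEL CHANGE
      MAPPINGTABLE MAP1   -- mapping table required for SHRLEVEL CHANGE
      TIMEOUT TERM
      DRAIN ALL
      MAXRO 50            -- IRLMRWT (60s) minus 10
      DRAIN_WAIT 50       -- IRLMRWT (60s) minus 10
      RETRY 6             -- utility lock timeout multiplier
      RETRY_DELAY 300     -- DRAIN_WAIT * RETRY
```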

Page 20

Automatically display CLAIMERS when REORG receives resource unavailable

After:

  DSNUGUTC - REORG TABLESPACE DB1.TS1 COPYDDN(SYSCOPY) SHRLEVEL NONE STATISTICS
  ***********************************
  * DISPLAY DATABASE SUMMARY
  *   GLOBAL CLAIMERS
  ***********************************
  ...
  NAME     TYPE PART  STATUS            CONNID   CORRID       CLAIMINFO
  -------- ---- ----- ----------------- -------- ------------ ---------
  TS1      TS         RW,UTRO           BATCH    CAPP         (WR,C)
     - AGENT TOKEN 10
     - MEMBER NAME V91A
  TS1      TS         RW,UTRO           BATCH    CAPP         (RR,C)
     - AGENT TOKEN 10
     - MEMBER NAME V91A
  ******* DISPLAY OF DATABASE DB1 ENDED **********************
  DSNTDDIS 'DISPLAY DATABASE' NORMAL COMPLETION
  DSNUGBAC - RESOURCE UNAVAILABLE
             REASON 00C200EA
             TYPE   00000200
             NAME   DB1.TS1

The two claimers shown are the culprits.

Page 21

REORG SHRLEVEL CHANGE contd.

REORG SHRLEVEL CHANGE
– Consider scheduling the SWITCH phase in a maintenance window to avoid concurrent workloads that may prevent the utility from breaking in:
  • MAXRO DEFER and LONGLOG CONTINUE will let REORG do its job except for the last log iteration and the switch
  • REORG will continue applying log until MAXRO is changed with the ALTER UTILITY command
  • Many log iterations might reduce the "perfect" organization of the table space, so keep the time until MAXRO is changed to allow final processing to a minimum

When it is still very difficult for the REORG utility to break into a heavy concurrent workload, it may be possible to use a dedicated maintenance window to execute the SWITCH phase. For that, submit the REORG SHRLEVEL CHANGE job with MAXRO DEFER and LONGLOG CONTINUE. This lets the REORG utility perform the reorganization and then loop in the LOG phase. Submit the REORG so that this status is reached before the scheduled maintenance window. In that window, the utility can be allowed to move forward with the final LOG phase iteration and the SWITCH phase by using the ALTER UTILITY command to set MAXRO to a defined value (after making sure that the workload has been reduced so that the utility has a chance to break in).

Since the LOG phase updates the reorganized shadow table space, this can affect the "quality" of the REORG when many updates and deletes happen to the existing data. The longer the LOG phase runs, the more disorganization of the data you will see, so the time before switching MAXRO should be kept to a minimum.
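A sketch of the deferred-switch approach; the object, mapping table, utility ID, and MAXRO value are hypothetical:

```sql
-- Submit ahead of the maintenance window;
-- REORG reorganizes, then loops in the LOG phase:
REORG TABLESPACE DBA.TS1
      SHRLEVEL CHANGE MAPPINGTABLE MAP1
      MAXRO DEFER LONGLOG CONTINUE
-- In the window, allow the final log iteration and SWITCH
-- via the DB2 command:
--   -ALTER UTILITY(REORGJOB) REORG MAXRO 300
```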

Page 22

Real-time Statistics

Introduced in V7
Contain "space" and some "access path" statistics in user-defined tables:
– SYSIBM.TABLESPACESTATS (one row per partition)
– SYSIBM.INDEXSPACESTATS (one row per partition)
– In DB2 9, these are moved into the DB2 catalog (DSNDB06.SYSRTSTS) as:
  • SYSIBM.SYSTABLESPACESTATS
  • SYSIBM.SYSINDEXSPACESTATS

Intended to eliminate running RUNSTATS merely to decide when to run utilities; instead, utilities can be run by exception
Access path selection doesn't use RTS in V7, V8, or V9
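In DB2 9 the catalog form of these tables can be queried directly; a sketch (the extent threshold is arbitrary):

```sql
-- Table space partitions approaching the extent limit:
SELECT DBNAME, NAME, PARTITION, EXTENTS, SPACE
FROM SYSIBM.SYSTABLESPACESTATS
WHERE EXTENTS > 200
ORDER BY EXTENTS DESC;
```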

Page 23

RTS

SYSTABLESPACESTATS
– Global: NACTIVE, NPAGES, EXTENTS, SPACE, TOTALROWS, DATASIZE, UNCOMPRESSEDDATASIZE, UPDATESTATSTIME
– Incremental:
  • REORG statistics: LASTTIME, INSERTS, UPDATES, DELETES, DISORGLOB, UNCLUSTINS, MASSDELETE, NEARINDREF, FARINDREF
  • COPY statistics: LASTTIME, UPDATEDPAGES, CHANGES, UPDATELRSN, UPDATETIME
  • RUNSTATS statistics: LASTTIME, INSERTS, UPDATES, DELETES, MASSDELETE

SYSINDEXSPACESTATS
– Global: NACTIVE, NLEVELS, NPAGES, NLEAF, EXTENTS, SPACE, TOTALENTRIES, LASTUSED, UPDATESTATSTIME
– Incremental:
  • REORG statistics: REBUILDLASTTIME, LASTTIME, INSERTS, UPDATES, DELETES, APPENDINSERT, PSEUDODELETES, MASSDELETE, LEAFNEAR, LEAFFAR, NUMLEVELS
  • COPY statistics: LASTTIME, UPDATEDPAGES, CHANGES, UPDATELRSN, UPDATETIME
  • RUNSTATS statistics: LASTTIME, INSERTS, DELETES, MASSDELETE

Page 24

Utilities On Demand

Run utilities only when necessary, not on fixed schedules
Information on the current status of all objects is contained in the Real-Time Statistics (RTS) tables
The stored procedure DSNACCOR applies our suggested thresholds and formulas against a list of objects and recommends utility actions
DB2 9 NFM adds the stored procedure DSNACCOX (PK44133), which uses additional real-time statistics and gives improved recommendations

Real-time statistics also contain valuable information that can be used to schedule utility operations on demand, as objects require actions such as REORG, COPY, or RUNSTATS. DB2 provides the stored procedure DSNACCOR, which applies our suggested thresholds and formulas to the information in the RTS tables to generate recommended utility actions.

DB2 9 NFM has the stored procedure DSNACCOX that uses further real-time statistics for improved recommendations.

Page 25

Reorg table space V9 recommendations

Consider running REORG TABLESPACE in the following situations:
– Real-time statistics (TABLESPACESTATS)
  • REORGUNCLUSTINS (number of records inserted since the last REORG that are not well-clustered) / TOTALROWS > 10%
    – Irrelevant if predominantly random access
    – REORGUNCLUSTINS is only an indication of insert behavior and is correlated to the cluster ratio only if there are no updates or deletes. To prevent DSNACCOR/X from triggering on these, identify such objects and put them in an exception list
  • (REORGNEARINDREF + REORGFARINDREF (number of overflow rows since the last REORG)) / TOTALROWS > 5% in data sharing, > 10% in non-data sharing
  • REORGINSERTS (number of records inserted since the last REORG) / TOTALROWS > 25%
  • REORGDELETES (number of records deleted since the last REORG) / TOTALROWS > 25%
  • EXTENTS (number of extents) > 254
  • REORGDISORGLOB (number of LOBs inserted since the last REORG that are not perfectly chunked) / TOTALROWS > 50%
  • SPACE > 2 * (DATASIZE / 1024) (when free space is more than used space)
  • REORGMASSDELETE > 0 (mass deletes on segmented table spaces and DROP on multi-table table spaces)
– RUNSTATS
  • PERCDROP > 10%
  • SYSIBM.SYSLOBSTATS.ORGRATIO < 50% (changed to a value 0-100 by PQ96460 on V7/V8)
  • (NEARINDREF + FARINDREF) / CARDF > 10% non-data sharing, > 5% if data sharing
  • FAROFFPOSF / CARDF > 10%
    – Or, if the index is a clustering index, CLUSTERRATIOF < 90% (irrelevant if predominantly random access)
– Other
  • Table space is in REORG-pending (REORP) or advisory REORG-pending (AREO*) status as the result of an ALTER TABLE statement
  • An index on the table space is in advisory REBUILD-pending (ARBDP) status as the result of an ALTER statement
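Several of these triggers can be approximated directly against the RTS tables; a simplified sketch of the idea (DSNACCOR/DSNACCOX implement the complete formulas):

```sql
SELECT DBNAME, NAME, PARTITION
FROM SYSIBM.SYSTABLESPACESTATS
WHERE TOTALROWS > 0
  AND ( REORGUNCLUSTINS * 100 > TOTALROWS * 10            -- unclustered inserts
     OR (REORGNEARINDREF + REORGFARINDREF) * 100
          > TOTALROWS * 10                                -- overflow rows
     OR REORGINSERTS * 100 > TOTALROWS * 25
     OR REORGDELETES * 100 > TOTALROWS * 25
     OR EXTENTS > 254
     OR REORGMASSDELETE > 0 );
```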

Page 26

This slide repeats the recommendations on the previous page and adds one point:

Don't use RUNSTATS statistics as a trigger to consider running REORG

Reorg table space V9 recommendations

Page 27

Consider running REORG TABLESPACE in the following situations:
– Real-time statistics (TABLESPACESTATS)
• REORGUNCLUSTINS (number of records inserted since the last Reorg that are not well-clustered)/TOTALROWS > 10%
  – Irrelevant if predominantly random access
  – REORGUNCLUSTINS is only an indication of the insert behavior and is correlated to the cluster ratio only if there are no updates or deletes. To prevent DSNACCOR/X from triggering on these, identify such objects and put them in an exception list
• (REORGNEARINDREF + REORGFARINDREF (number of overflow rows since the last Reorg))/TOTALROWS > 5% in data sharing, > 10% in non-data sharing
• REORGINSERTS (number of records inserted since the last Reorg)/TOTALROWS > 25%
• REORGDELETES (number of records deleted since the last Reorg)/TOTALROWS > 25%
• EXTENTS (number of extents) > 254
• REORGDISORGLOB (number of LOBs inserted since the last Reorg that are not perfectly chunked)/TOTALROWS > 50%
• SPACE > 2 * (DATASIZE / 1024) (when free space is more than used space)
• REORGMASSDELETE > 0 (mass deletes on segmented table spaces and DROP on multi-table table spaces)
– Other
• Table space is in REORP or advisory REORG-pending (AREO*) status as the result of an ALTER TABLE statement
• An index on the table space is in advisory REBUILD-pending (ARBDP) status as the result of an ALTER statement

Reorg table space V9 recommendations
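These real-time-statistics thresholds can be evaluated directly against the RTS tables. The following is only a sketch of one of the triggers above (the 25% REORGINSERTS rule); the exact threshold expression is an assumption built from the bullets, not part of the original deck:

```sql
-- Flag table spaces whose insert activity since the last REORG
-- exceeds the REORGINSERTS/TOTALROWS > 25% trigger
SELECT DBNAME, NAME, PARTITION, REORGINSERTS, TOTALROWS
FROM   SYSIBM.SYSTABLESPACESTATS     -- DB2 9 RTS catalog table
WHERE  TOTALROWS > 0
  AND  REORGINSERTS * 100 > TOTALROWS * 25
```

DSNACCOR/DSNACCOX apply essentially this kind of comparison for you, so a query like this is mainly useful for ad hoc checking or for building an exception list.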

Page 28

Consider running REORG INDEX in the following cases:
– Real-time statistics (SYSINDEXSPACESTATS)
• REORGPSEUDODELETES (number of index entries pseudo-deleted since the last Reorg)/TOTALENTRIES > 10% in non-data sharing, 5% in data sharing, as a pseudo-deleted entry can cause S-lock/unlock in Insert for a unique index
• REORGLEAFFAR (number of index leaf page splits since the last Reorg where the new leaf page is far from the original leaf page)/NACTIVE > 10%
• REORGINSERTS (number of index entries inserted since the last Reorg)/TOTALENTRIES > 25%
• REORGDELETES (number of index entries deleted since the last Reorg)/TOTALENTRIES > 25%
• REORGAPPENDINSERT / TOTALENTRIES > 20%
• EXTENTS (number of extents) > 254
– RUNSTATS
• LEAFFAR / NLEAF > 10% (NLEAF is a column in SYSIBM.SYSINDEXES and SYSIBM.SYSINDEXPART)
• PSEUDO_DEL_ENTRIES / CARDF > 10% for non-data sharing and > 5% for data sharing
– Other
• The index is in advisory REORG-pending status (AREO*) or advisory REBUILD-pending status (ARBDP) as the result of an ALTER statement

Reorg Index recommendations

Page 29

RUNSTATS Best Practices

RUNSTATS
– SHRLEVEL CHANGE for availability
– Collect only column stats on columns used in SQL predicates
• Use the Statistics Advisor to detect which stats to collect
• SAMPLE reduces CPU time when gathering column stats
– KEYCARD provides valuable info for little processing cost (see next slide)

If not gathered with inline stats, RUNSTATS should be run with SHRLEVEL CHANGE for availability. For large amounts of data, RUNSTATS TABLESPACE with column statistics is very CPU intensive. By specifying the SAMPLE keyword, CPU can be reduced while still gathering enough stats for the optimizer to choose a good access path.
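A control statement combining these recommendations might look like the following sketch. The database, table space, table, index, and column names are illustrative only, not from the deck:

```sql
-- Hypothetical objects: database DBNC01, table space TSORDERS,
-- table NC.ORDERS, index NC.XORDERS1
RUNSTATS TABLESPACE DBNC01.TSORDERS
  TABLE (NC.ORDERS)
    COLUMN (ORDER_DATE, CUST_ID)   -- only columns used in SQL predicates
    SAMPLE 25                      -- sample 25% of rows to reduce CPU
  INDEX (NC.XORDERS1 KEYCARD)      -- cardinalities for the 1..n key column prefixes
  SHRLEVEL CHANGE                  -- allow concurrent read/write access
```

Check the clause ordering against your DB2 level's syntax diagrams; the intent here is only to show the keywords working together.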

Page 30

RUNSTATS KEYCARD
Collects all of the distinct values in all of the 1 to n key column combinations for the specified indexes, where n is the number of columns in the index. For example, suppose that you have an index defined on three columns: A, B, and C. If you specify KEYCARD, RUNSTATS collects cardinality statistics for column A, column set A and B, and column set A, B, and C. So these are cardinality statistics across column sets. If we had a 3-column index that had these values:

Col1 Col2 Col3
A    B    C
A    B    D
A    B    E
A    B    E
A    C    A
A    C    A
A    D    A
B    B    B

then these stats would be collected:
– Col1 cardinality = 2
– Col1 and Col2 cardinality = 4
– Col1, Col2, and Col3 cardinality = 6

Page 31

When the data changes sufficiently to warrant new statistics
– REORG of table space or index (use inline stats!)
– LOAD REPLACE of table space (use inline stats!)
– After "significant" application changes for the table space or index
• Periodically (weekly, monthly) except for read-only data?
• Application tracks updates with activity tables?
• After a percentage of pages changed since last RUNSTATS (RTS)?

Understand implications for access paths!

When is RUNSTATS needed?

Page 32

LOAD Best Practices
LOAD
– LOG NO reduces log volume; if REPLACE, then take an inline copy
– KEEPDICTIONARY (track dictionary effectiveness with history statistics PAGESAVE) - small performance impact if loading lots of data
– 254-partition limit for compressed table spaces can be lifted by the DBA
• PK51853 shipped new ZPARM MAX_UTIL_PARTS (watch virtual storage)
– LOAD partition parallelism (V7)
• Not individual LOAD part-level jobs
• Enable Parallel Access Volume (PAV)
– Index parallelism (SORTKEYS)
• Provide a value for SORTKEYS when input is tape/PDS member or variable length
• SORTKEYS is the sum for ALL indexes (and foreign keys) on the table
• Remove SORTWKxx / UTPRINxx, and turn on UTSORTAL=YES

LOG NO and KEEPDICTIONARY are the best performance options. Dictionaries should not change much; their effectiveness can be tracked via the history statistic PAGESAVE.

When loading data into compressed table spaces there is a limit of 254 partitions that can be processed. This limit was introduced because of the amount of virtual storage being used while building compression dictionaries for many partitions in parallel. With PK51853 (UK31488/UK31489) you can now set your own limit in ZPARM MAX_UTIL_PARTS. The ZPARM only changes the limit for the number of partitions; storage consumption was not changed with this PTF, so keep an eye on your virtual storage when loading table spaces with many compressed partitions.

Prior to V7, jobs were separated by partition to improve elapsed time when loading large partitioned tables. When NPIs existed, these would be dropped and rebuilt in a separate step. Now, however, partitioned objects can be loaded in parallel by partition. If the table has NPIs, the keys are merged into a common sort so that KLOAD processing can be used on a single member, thus improving LOAD performance.

With multiple indexes, SORTKEYS allows the indexes to be built in parallel, improving elapsed time with some increase in CPU. Usually LOAD is able to estimate the number of records contained in the input data set. For input on tape, in PDS members, or when the input data set has variable-length records with great variation of the record length, this estimate is not possible. In those cases specify a SORTKEYS estimate, which is required to estimate the needed sort work space. Remember that SORTKEYS is the sum for all keys/foreign keys on the table(s), not the number of input records. Foreign keys are only counted if they don't match an existing index, or if there is more than one foreign key even if they match an existing index. So if you're loading into a table space with 1 table and 3 indexes, then SORTKEYS needs to be set to three times the number of input records.

For the SORT, although you can control the data sets used by specifying SORTWKxx, avoid doing that. By specifying SORTDEVT, DFSORT or DB2 will dynamically allocate what is needed. DFSORT can't span volumes with sort data sets, so if the amount of data is large, you may want to specify SORTNUM to ensure there are enough data sets available, or turn on UTSORTAL=YES.
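Putting the SORTKEYS arithmetic together: for a hypothetical table with 1,000,000 input records on tape and three indexes (and no additional foreign keys), the estimate is 3 × 1,000,000. A hedged sketch, with illustrative names:

```sql
-- Hypothetical: 1,000,000 input records x 3 indexes = 3,000,000 keys to sort
LOAD DATA INDDN SYSREC LOG NO REPLACE
  COPYDDN(SYSCOPY)            -- inline image copy, since LOG NO with REPLACE
  STATISTICS                  -- inline statistics in the same pass
  SORTKEYS 3000000            -- estimate required because input is on tape
  SORTDEVT SYSDA              -- let DFSORT/DB2 allocate sort work data sets
  INTO TABLE NC.ORDERS
```

With UTSORTAL=YES, the SORTDEVT/SORTNUM tuning largely falls away, as the later slide on DB2-allocated sort work data sets explains.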

Page 33

LOAD Best Practices continued
LOAD
– Inline COPY & inline STATISTICS
– Use REUSE to logically reset and reuse DB2-managed data sets without deleting and redefining them (affects elapsed time)
– When using DISCARD, try to avoid having the input on tape
• Input is re-read to discard the errant records
– Avoid data conversion, use internal representation if possible
– Sort data in clustering order (unless data is randomly accessed via SQL)
– LOAD RESUME SHRLEVEL CHANGE instead of batch inserts
– "LOAD REPLACE SHRLEVEL CHANGE" can be achieved by loading into a clone table and then exchanging the tables on DB2 9
– LOAD via BatchPipes to load data that is transferred via FTP from clients

A single pass of the data which creates an inline copy and gathers inline statistics while loading is better for elapsed time than running separate COPY and RUNSTATS utilities after the LOAD.

Be aware that DISCARD records input block and record numbers and later returns to the input data set to position on each discarded record. This can be really slow if the input data set is on tape!

Sorting data in clustering key order before running LOAD improves performance, and can be done automatically by DB2 Utilities Enhancement Tool for z/OS Version 2.1.

DB2 9 with its clone table support allows a function that comes close to a LOAD REPLACE SHRLEVEL CHANGE. You can load data into the (named) clone table in the background, and as soon as the LOAD has completed you can exchange the clone and the base table and make the new data active. The old data can be read by applications while the new data is being loaded; only for the EXCHANGE do all readers have to be drained.

LOAD via BatchPipes can be used to load data immediately while it is FTPed from the client, without storing it in data sets first. This helps to reduce the elapsed time as well as the total amount of DASD needed for the transfer. An article in the IDUG Solutions Journal Volume 14 Number 2 (Summer 2007) describes the process in detail.
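The clone-table technique described above can be sketched as follows. The table and clone names are hypothetical, and clone support requires a universal table space on DB2 9:

```sql
-- One-time setup: define a clone of the base table (DB2 9, UTS only)
ALTER TABLE NC.ORDERS ADD CLONE NC.ORDERS_CLONE;

-- Load the new data into the clone while applications keep reading the base:
--   LOAD DATA INDDN SYSREC LOG NO REPLACE INTO TABLE NC.ORDERS_CLONE

-- Switch: readers are drained only for the duration of this statement
EXCHANGE DATA BETWEEN TABLE NC.ORDERS AND NC.ORDERS_CLONE;
```

After the EXCHANGE, the roles of the two tables are swapped, much like the SWITCH phase of online REORG.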

Page 34

REBUILD INDEX Best Practices
REBUILD INDEX
– Indexes are built in parallel
– Remove SORTWKxx / UTPRINxx and use SORTDEVT/SORTNUM or UTSORTAL=YES
– Inline STATISTICS
– Use REORG INDEX SHRLEVEL CHANGE to move index data sets to different volumes
– CREATE INDEX DEFER followed by REBUILD INDEX
• As of V8, dynamic SQL will not select the index until it is built
– DB2 9 allows SHRLEVEL CHANGE
• Unique indexes are put in RBDP because uniqueness cannot be checked during the rebuild process, so no INSERT/UPDATE/DELETE is allowed that affects the unique index
• No parallel jobs on different indexes of the same table space -> use a single job with multiple indexes specified

REBUILD INDEX recommendations follow LOAD and REORG.

For availability reasons you should not use REBUILD INDEX to move the index data sets to different volumes. REORG INDEX SHRLEVEL CHANGE is much more efficient and provides better availability.

Indexes can be created with DEFER YES and then materialized by REBUILD INDEX. Dynamic SQL will not select the index until it is built.

In DB2 9 this is even more effective, as the index can be rebuilt with SHRLEVEL CHANGE, allowing concurrent updates. However, this does not allow concurrent updates of data that affects unique indexes, as they will still be put in RBDP because uniqueness cannot be guaranteed otherwise. REBUILD INDEX SHRLEVEL CHANGE cannot be invoked for different indexes of the same table space in multiple jobs concurrently; instead, specify those indexes in a single invocation of the REBUILD INDEX utility.

Page 35

More online utilities
REBUILD INDEX SHRLEVEL CHANGE
– Great for building new non-unique indexes or when an index is in RBDP
– Index is built in place with no shadow
• To move indexes to different volumes with availability, still use REORG INDEX SHRLEVEL CHANGE
– Table space must be in LOGGED state (new log phase)

A new enhancement of REBUILD SHRLEVEL CHANGE allows for improved availability when creating a new index, and it’s especially great for creating a new non-unique index on a populated table.

Let’s suppose you have a non-unique index with column A, and you have done performance analysis to know that if you also included column B in that same index you would get performance benefits from your applications. The way to accomplish this without a period of unavailability follows:

1) CREATE the new non-unique index on A/B specifying DEFER YES. This places the index in RBDP without building it.

2) Run REBUILD SHRLEVEL CHANGE to build the new index. The table space is kept in RW mode so that changes can be made. The index being built is in RBDP, and inserts, updates, and deletes bypass the index while it is in RBDP. After building the index, there is a log apply to catch all the changes that were made after the table space scan. This log phase is much like that for REORG. Finally, the index is completely built and RBDP is reset.

3) BIND is run to select the superior access path with the new index.

4) The old index is dropped.

This type of SHRLEVEL CHANGE utility is different from REORG because the index is built in place rather than using a separate shadow data set. Therefore, there is no SWITCH phase for REBUILD. Shadows aren't as useful as in the case of REORG because the index is usually only built when it is newly created or broken, so the original is typically unavailable before the utility starts anyway. Because there are no shadows, don't try to use REBUILD to move indexes to a new volume like you can do with REORG INDEX.

Of course, this utility depends on applying log records, so you can’t specify SHRLEVEL CHANGE for not logged tables.
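The four steps above can be sketched as follows. The index, table, column, and package names are hypothetical:

```sql
-- Step 1: define the wider index without building it (index goes into RBDP)
CREATE INDEX NC.XORDERS2 ON NC.ORDERS (COL_A, COL_B) DEFER YES;

-- Step 2: build it while allowing concurrent read/write access (DB2 9):
--   REBUILD INDEX (NC.XORDERS2) SHRLEVEL CHANGE

-- Step 3: rebind so the optimizer can choose the new index:
--   REBIND PACKAGE (COLLID.PKGNAME)

-- Step 4: drop the old single-column index
DROP INDEX NC.XORDERS1;
```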

Page 36

DB2 Allocated Sort Work Data Sets
PTFs shipped 02/2008 to enable DB2 to dynamically allocate sort work data sets in utilities:
– DB2 for z/OS V8: PK45916 / UK33692
– DB2 9 for z/OS: PK41899 / UK33636
– Enable with UTSORTAL=YES
– Used for all sorts in utilities: LOAD, REORG, CHECK INDEX, REBUILD INDEX, CHECK DATA, RUNSTATS
– Message "DSNU3340I - UTILITY PERFORMS DYNAMIC ALLOCATION OF SORT DISK SPACE" indicates use
– New behavior ignored if hard-coded DD cards are found
No more need to specify SORTNUM. Existing SORTNUM specifications can be honored or ignored (IGNSORTN=YES)
Data sets for largest sorts are allocated first
Attempts to allocate data sets as large as possible (starting with 2 data sets per sort task, more data sets allocated if necessary)

In February 2008 we shipped two PTFs that enable DB2 to allocate the sort work data sets itself instead of having DFSORT allocate them. If DB2 allocates the data sets beforehand it knows exactly how many data sets were allocated and can optimize the degree of parallelism for utilities that use parallelism. Also it takes the burden off the administrator to find a reasonable SORTNUM value that matches many different utilities with different sort work requirements (one size doesn’t fit all). We often refer to that enhancement as “SORTNUM elimination”.

To make migration easier this new function needs to be enabled by a new ZPARM UTSORTAL=YES. If only this parameter is set then existing SORTNUM specifications will be honored to make sure existing jobs still run exactly as before. The new behavior will only be used for utilities without SORTNUM specified. If IGNSORTN is also set to YES then all utilities will use the new behavior. SORTNUM values specified will be ignored.

When data set allocation starts the largest sort tasks will be handled first as long as the largest chunks are still available on the sort pool disks. DB2 will try to allocate the necessary sort work disk space within 2 data sets first. If the current disk conditions don’t allow data sets of this size it will retry allocating smaller data sets and continue allocating data sets until the required size has been allocated. Don’t worry if you see messages like “IGD17272I VOLUME SELECTION HAS FAILED FOR INSUFFICIENT SPACE FOR DATA SET…”.

Page 37

Checking all indexes in parallel

[Diagram: after UTILINIT, fast replication (FlashCopy/Snapshot) creates shadows of table space parts TS Part 1-3 and index parts. UNLOAD tasks read TS Part 1-3 and pipe keys to SORT tasks, which feed SORTCHK tasks that check IX1 Part 1-3 and the NPIs IX2 and IX3.]

In this example of processing all the indexes on a partitioned table space, three unload subtasks pipe index keys to the appropriate one of the three sort subtasks. The three sort subtasks pipe sorted keys to three companion index subtasks.

If inline statistics are being collected, there will be three statistic subtasks as well.

Page 38

CHECK SHRLEVEL CHANGE design
SHRLEVEL REFERENCE CHECK INDEX causes data and indexes to be unavailable for update for the duration of the utility
Claim as reader for target data and indexes
Create shadow data sets
– Same data set naming convention as REORG SHRLEVEL CHANGE
Drain writers for target data and indexes
Copy data sets with DFSMSdss ADRDSSU with FCNOCOPY to shadows
– Uses data set-level FlashCopy2 if available
– Else, traditional media copy - still smaller r/o outage than SHR REF
After logical complete for data sets,
– Dedrain target data and indexes
– Run CHECK on shadow data and indexes
At UTILTERM, delete shadow data sets when DB2-managed

Performing a physical copy of the data uses subsystem resources and can impact the performance of other I/O operations that are issued to the ESS. Using the FCNOCOPY keyword on a DFSMSdss™ COPY command prevents the ESS subsystem from performing a physical copy of the data.

In general, if you want a temporary copy of the data, specify FCNOCOPY, and then withdraw the FlashCopy relationship when you no longer need the copy. If you want a permanent copy, but want to delay background copy until a convenient time, specify FCNOCOPY to get a point-in-time copy and then perform FCNOCOPYTOCOPY later to start background copy. If you want a permanent copy and do not want to delay background copy, do not specify FCNOCOPY. Allow the ESS subsystem to perform the physical copy and release the subsystem resources that are used to maintain the FlashCopy relationship.

Page 39

DB2 9 Utilities Highlights
More online utilities
– REBUILD INDEX SHRLEVEL CHANGE
– REORG LOB now supports SHRLEVEL REFERENCE (space reclamation)
– CHECK DATA, CHECK LOB and REPAIR LOCATE … SHRLEVEL CHANGE
– CHECK INDEX SHRLEVEL REFERENCE supports parallel for > 1 index
– Clones for "online LOAD REPLACE"
Online REORG BUILD2 phase elimination
REORG parallelism for UNLOAD, RELOAD, LOG phases
Utility TEMPLATE switching
UNLOAD SKIP LOCKED DATA option

If you look at all the utility offerings, we have been adding more SHRLEVEL CHANGE and REFERENCE utilities to improve availability in every release. Now with DB2 9, the following utilities have SHRLEVEL CHANGE: RUNSTATS, COPY, REORG TABLESPACE, REORG INDEX, LOAD RESUME, REBUILD, UNLOAD, CHECK INDEX, CHECK DATA, CHECK LOB, and REPAIR. Cloned tables effectively function as LOAD REPLACE SHRLEVEL CHANGE. We have also improved availability for REORG TABLESPACE of a part/part range by removing the BUILD2 phase.

From a performance perspective, REORG of partitioned table spaces now has partition parallelism much like what already existed in REBUILD INDEX, CHECK INDEX, COPY, RECOVER, and LOAD.

Utility template switching allows flexibility in which template applies to datasets based on attributes of the object processed.

UNLOAD SKIP LOCKED DATA gives another option besides just UR processing or CS processing. UR will not get locks, and therefore can unload “dirty” data. CS acquires locks and can therefore be slowed with lock contention. SKIP LOCKED is a CS option that will skip those rows/pages that are locked and give consistent data (but not all rows) without being slowed much by contention.

Page 40

DB2 9 Utilities Highlights…
MODIFY RECOVERY enhancements
– "RETAIN" keyword added to improve management of copies
• LAST(n), LOGLIMIT, GDGLIMIT
Volume-based COPY/RECOVER (BACKUP SYSTEM/RESTORE SYSTEM)
– RECOVER modified to enable object-level recovery from volume FlashCopy
– Full integration of tape into BACKUP/RESTORE SYSTEM utilities
– Incremental FlashCopy, APAR PK41001
Truncate log based on timestamp
RECOVER to any point in time with consistency
RECOVER RESTOREBEFORE to use an earlier image copy
Display progress of RECOVER during log apply
COPY CHECKPAGE option always active
– "Copy Pending" avoided if broken page encountered
COPY SCOPE PENDING to copy only objects in "Copy Pending"

Prior to DB2 9 the user specified the deletion criteria either as before a specific date or by greater than a given age in days. DB2 9 has an alternative, by which instead of deletion criteria, retention criteria can be specified.

There are a number of improvements and extensions to the volume base utilities, or the BACKUP/RESTORE SYSTEM utilities.

If you need to do a conditional restart, DSNJU003 now allows you to specify a timestamp instead of an RBA/LRSN. The option is also available as SYSPITRT for easier preparation to truncate the log before running RESTORE SYSTEM.

The COPY and RECOVER utilities each had several enhancements in DB2 9.
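The new RETAIN keyword replaces age- or date-based deletion with retention-based cleanup. A sketch, with a hypothetical table space name:

```sql
-- Keep the two most recent primary full image copies and clean up
-- the SYSCOPY and SYSLGRNX records older than those copies
MODIFY RECOVERY TABLESPACE DBNC01.TSORDERS RETAIN LAST(2)
```

LOGLIMIT and GDGLIMIT work the same way but derive the retention point from the BSDS archive log inventory or the image copy GDG limit, respectively.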

Page 41

More online utilities (cont…)
CHECK DATA SHRLEVEL CHANGE
CHECK LOB SHRLEVEL CHANGE
REPAIR LOCATE … SHRLEVEL CHANGE
REORG LOB now supports SHRLEVEL REFERENCE
Clones effectively provide LOAD REPLACE SHRLEVEL CHANGE
– UTS only
UNLOAD with ISO(CS) supports skipping rows that are locked for transaction updates

More improvements to availability are listed here. CHECK DATA and CHECK LOB now have SHRLEVEL CHANGE support which operates the same way described for CHECK INDEX. The best availability is when FlashCopy devices are available, but the utilities will still work on other devices. The difference is that a slow copy will be done to make a snapshot before running the CHECK utility. Availability is still improved over running SHRLEVEL REFERENCE, but the superior availability of FlashCopy devices is the way to go.

When running CHECK with SHRLEVEL CHANGE on a snapshot, errors cannot be corrected in place, but where possible, REPAIR statements will be generated to correct the problems. These REPAIR statements also run with SHRLEVEL CHANGE for maximum availability.

REORG LOB SHRLEVEL REFERENCE gives better function along with better availability. It turns out that reorganization typically benefits space usage and performance; however, for LOBs the need to reclaim fragmented space is usually much more important, and any performance improvement from better LOB organization is typically negligible. SHRLEVEL REFERENCE reorganizes to a shadow and reclaims space where SHRLEVEL NONE did not, therefore it is much better and is recommended as the best practice. In a future release SHRLEVEL NONE will be deprecated.

We don't have support for LOAD REPLACE SHRLEVEL CHANGE, but clone tables provide the same capability in a more flexible manner. Two separate but identical tables are maintained under one logical table space. Both tables can be populated or accessed using utilities or SQL. At the point a switchover is needed, it can be done with an ALTER statement to switch from the active table to the other. The "switch" is much like the SWITCH phase for online REORG. Note that clones are only supported for Universal Table Spaces (UTS).

Another variation of UNLOAD allows a method of access that is somewhat in between UR and CS. With UR, one can unload "dirty" data that hasn't been committed, but the advantage is that there is no contention with applications and it will proceed quickly. With CS, all data is clean and committed, but if locked data is encountered and SKIP LOCKED DATA is specified, UNLOAD will skip around it without unloading it. This means rows or pages may be missed during the unload.
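The skip-locked variation described above can be sketched as a control statement. The table space and table names are hypothetical:

```sql
-- Unload only committed rows, skipping rows/pages currently locked
-- by in-flight transactions instead of waiting on them
UNLOAD TABLESPACE DBNC01.TSORDERS
  FROM TABLE NC.ORDERS
  ISOLATION CS SKIP LOCKED DATA
```

The trade-off is consistency of what is unloaded (every row is committed) against completeness (locked rows are silently omitted).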

Page 42

DB2 9 Utilities
Support for all new functions in DB2 9 for z/OS
– Universal Table Spaces (UTS)
• Partition By Growth (PBG)
• Partition By Range (PBR)
– XML table spaces (PBG or PBR)
– Not logged tables/table spaces
– Clone tables
– Index on expression
– New data types (BIGINT, VARBINARY, DECFLOAT, XML)

Utility support is extensive for new features such as UTSs, XML, and not logged.

For example, one can gather stats, reorg, unload, or rebuild indexes on a PBR or PBG table space. For PBGs, this means that inserts which extend the partitions can be run concurrently with SHRLEVEL CHANGE utilities. For XML on a segmented table space, it resides in a PBG format.

Cloned tables are somewhat like online LOAD REPLACE. There is the live table, and then there is the shadow or clone. One can load the clone, and then when the clone is ready to become live, a "switch" can be triggered with DDL.

Index on expression is a key that is generated from an SQL or RDS expression.

We also must support all the new data types so utilities like LOAD and UNLOAD can exploit them.

Page 43

Copy Utility Changes
Always perform CHECKPAGE on the COPY utility
The COPY utility includes SCOPE PENDING support to improve usability
COPY utility buffer pool usage uses MRU management of those pages read by the COPY utility
Template switching (e.g., copy to tape if large; to disk if small):

TEMPLATE LRG DSN &DB..&TS..D&DA..T&TI. UNIT=TAPE
TEMPLATE SML DSN &DB..&TS..D&DA..T&TI. UNIT=SYSALLDA LIMIT(20 CYL, LRG)
COPY TABLESPACE SMALL.TS COPYDDN(SML)
COPY TABLESPACE LARGE.TS COPYDDN(SML)

CHECKPAGE was an option that is now always performed. The value of knowing copies are good outweighs the extra cost of less than 5% CPU, and it's still a bit less than before because of other improvements in our internal logic.

CHECKPAGE for an index is not the default because the overhead is much greater (more than 2X CPU, as high as 700%). Index copies are not as important as the data.

COPY Pending is no longer set when a broken page is found, because that was prohibitive. However, a SYSCOPY record is cut so that an incremental copy can't be taken using that base. The RC=8 still applies, but the entire copy is checked instead of stopping early.

COPY now has SCOPE PENDING, like REORG and REBUILD, to simplify usability.

Page 44

Recovery to a point in time with consistency (NFM mode)
– Uncommitted changes are backed out
– Significantly reduces (eliminates?) the need to run QUIESCE
– Does not apply to RECOVER TOCOPY, TOLASTCOPY and TOLASTFULLCOPY using a SHRLEVEL CHANGE copy (consistency is not ensured - use an RBA/LRSN after the COPY point)
– Include all relevant objects in the same RECOVER to ensure data consistency from the application point of view

Recover Utility Changes

[Timeline diagram: two units of work, UW1 and UW2, with their updates U11, U12, U21, and U22 still in flight at recovery point A, before Current. Legend: UWn = Unit of Work number n; Unm = Update number n in Unit of Work m.]

The RECOVER utility is changed to improve recovering to a point in time with consistency. If RECOVER is specified for a particular RBA/LRSN, the utility will also ensure that the table space and indexes are at a level of consistency. Here's how it works:

1) As before DB2 9, the RESTORE phase is followed by a fast log apply technique to get to the specified point, in our example point A.

2) The utility does a Current Status Rebuild (CSR), just like is done for DB2 restart/recovery, to determine if there were any active units of work at the recovery point A.

3) For anything inflight, inabort, etc. that has not been committed, the log will be applied to roll back those units of work. In this case, the UW2 and UW1 changes will be rolled back to their respective starting points.

This function could greatly improve your procedures today if you use QUIESCE to get recovery points so that you can get a consistent recovery to a point in time. In today's environment with heavier workloads, some customers were finding it very difficult to get a quiesce point, and there was a requirement to put in timeout/retry logic for that utility. The RECOVER solution we provide in DB2 9 is a superior solution!

An exception to consistency occurs for RECOVER to an image copy if that image copy was taken with SHRLEVEL CHANGE. In that case there is no log apply. Specify an RBA/LRSN after the COPY point to ensure there is log processing that gets you to a point of consistency.

Remember, if there are multiple objects that are logically related (e.g. RI, LOB spaces, XML spaces), they should be included in the same RECOVER utility specification.
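Recovering logically related objects together might be sketched as follows. The table space names and the log point are hypothetical:

```sql
-- Recover related objects to the same log point in one RECOVER;
-- uncommitted work at that point is backed out automatically (DB2 9 NFM)
RECOVER TABLESPACE DBNC01.TSORDERS
        TABLESPACE DBNC01.TSORDLOB    -- associated LOB table space
        TOLOGPOINT X'00000551BE7D'    -- illustrative RBA/LRSN value
```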

Page 45

Recover to a Point In Time (cont…)
Recovery to a point in time with consistency (contd)
– Two new phases in RECOVER: LOGCSR and LOGUNDO (each will be executed by member)
– RECOVER can be restarted in any LOG phase; LOGCSR will restart at the beginning, the others at the last commit point
– Fast log apply is not used for the LOGUNDO phase
– Progress of the LOGUNDO phase can be monitored by the regular status message DSNU1555I showing current and target log RBA (e.g. to detect backout of long-running URs)
– This function can greatly reduce your need to run QUIESCE
• If you want to capture a log point non-disruptively:
– -DISPLAY LOG, or
– QUIESCE against SYSEBCDC (or some other dummy table space), or
– SELECT CURRENT_TIMESTAMP from SYSDUMMY1 and then convert it to an LRSN

Here are some more details on how RECOVER to a point of consistency works. The status of RECOVER and the LOGUNDO phase can be tracked by regular messages which show the status of the backout of long-running URs. This backout will not be as fast as forward log apply.

If you run QUIESCE today to capture log points, there are several less disruptive alternatives.

Page 46

BACKUP/RESTORE SYSTEM
What's new in DB2 9 for BACKUP and RESTORE SYSTEM?
– The ability to recover at the object level using system-level backups
• RECOVER to a point in time or to current
• COPY YES indexes can be included in this recovery
• Will restore from the previous system-level backup or image copy
• Requires z/OS V1R8
– Tape support
– Support for incremental FlashCopy
• PK41001 & PK42014
– RTS COPY columns also updated for BACKUP SYSTEM

BACKUP SYSTEM was introduced in V8, primarily for ERP vendors. In those systems it doesn't make sense to recover a single object, because everything is related and there is little knowledge of the contents of individual objects; an SAP system, for example, consists of thousands of objects.

The backup info is in the BSDS.

For RECOVER CURRENT, they need Jack's DCR. You can't specify a PITR of 'FFFFFFFF'.

We can now perform an individual recovery of a single object, exploiting a DFSMS enhancement.

Potentially, no standard DB2 image copies are needed.

If the data set has moved since the FlashCopy, this would not work. z/OS is fixing this in 1.11, which is scheduled for Sept 2009. Any time a data set moves it must be handled; so whenever a REORG runs, take another system-level backup.

The ZPARM setting means extra overhead to look at the BSDS. How long does RESTORE SYSTEM take? Stacking 99 volumes: 7 hours for 99 volumes.

The APARs are for V8 or DB2 9.

DASD RESTORE is fast: around 500 volumes in 2 minutes.

Page 47

Future enhancements

Simplification, automation, availability, performance, flexibility
Web-based Data Studio Admin Console
DB2 Utilities Enhancement Tool (UET)
High-priority requirements
– Autonomic stats
– Autonomic compression
– Recover to a different table space in a different DB2 with consistency
– Reduce RUNSTATS resource consumption
– Data set-level FlashCopy support
– Online consistent image copies
– Policy-based recovery
– UTSERIAL elimination
– REORG enhancements (LOBs, etc.)
– …
Manual invocation of:
• RUNSTATS
• COPY/BACKUP SYSTEM
• QUIESCE
• MODIFY RECOVERY
• REORG

We will continue to enhancement utilities for the major areas of need, especially in usability. The goal is that periodic utilities of RUNSTATS, COPY/BACKUP, QUIESCE, MODIFY RECOVERY, and REORG will be submitted automatically when needed without manual intervention.

The Data Studio Admin Console comes from the Data Servers group. It is designed for alerts and managing systems at a global level.

The Utilities Enhancement Tool works closely with utilities to do things like.Sort LOAD input before doing a LOADCreate mapping tables for online reorg jobs.Allow for constant’s in the LOAD input.

Auto-stats: Run RUNSTATS when needed, or update statistics from RTS (real-time statistics) when possible, and determine which options are needed to collect the appropriate statistics according to the access paths taken by applications. In the future, profiles with the options needed for each object will be kept in the DB2 catalog, and there will be better usability between the statistics advisor and the running of RUNSTATS with the suggested options.
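As an illustration, the kind of manual invocation that auto-stats would make unnecessary looks roughly like this today (the object names are placeholders, and the exact option set depends on the access paths in use, which is precisely what the proposed profiles would capture):

```
  RUNSTATS TABLESPACE DB01.TS01
    TABLE(ALL) SAMPLE 25
    INDEX(ALL)
    SHRLEVEL CHANGE
    UPDATE ALL REPORT YES
```

SHRLEVEL CHANGE allows concurrent updates while statistics are gathered, and SAMPLE 25 limits non-indexed column statistics collection to a 25 percent sample of the rows.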

Auto-compression: Eliminate the need to administer compression of DB2 table spaces. The DBMS will automatically use compression when it is beneficial, disable it when it is not, and not require a utility to implement or disable compression. This becomes the default behavior for all data; administrators can still configure and limit the scope of the DBMS’s automatic compression.

Page 48


Future enhancements


Recover to an alternative table space with consistency: this is targeted for delivery with Recovery Expert.

Data set-level FlashCopy will:
1. Exploit data set-level FlashCopy to produce VSAM image copies that the RECOVER utility can use to instantaneously restore an image copy. The UNLOAD and COPYTOCOPY utilities will support these new VSAM image copy formats.
2. Add a new keyword on all utilities that create image copies to specify whether log records should be used to create a consistent image copy. FlashCopy Version 2-capable devices are a prerequisite for this function.
3. Provide a utility option to specify whether the recovery should be performed by processing the log backwards rather than by the traditional restore and forward log-apply process.
4. Provide a DB2 subsystem/data sharing group-level option to disable all tracking of modified pages.

UTSERIAL elimination removes the majority of timeouts on the global UTSERIAL lock resource. Existing concurrent workloads can be executed as they were previously, with no design changes in the jobs containing IBM DB2 utilities. Throughput of multiple utilities may be increased, and more concurrent utilities can be run.

Policy-based recovery is targeted for delivery with Recovery Expert.

REORG enhancements will allow multiple partition ranges to be specified (n:m, i:j), and will allow a REORG of a universal table space (UTS), or a subset of its partitions, to move rows with LOB columns between partitions when required by an altered partition key range or when the table space is partition-by-growth (PBG). They will also reduce the time spent in the SWITCH phase and provide improved messages and diagnostics so that customers can better estimate the length of the REORG phases and the time to completion.
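For reference, REORG already accepts a single contiguous partition range; the enhancement described above would extend this to multiple ranges. A sketch with placeholder object names (the mapping table is required for SHRLEVEL CHANGE):

```
  REORG TABLESPACE DB01.TS01 PART 3:5
    SHRLEVEL CHANGE
    MAPPINGTABLE ADMIN.MAP01
```

Here partitions 3 through 5 are reorganized online while the rest of the table space remains untouched.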

Page 49


References

DB2 for z/OS home page:
http://www-306.ibm.com/software/data/db2/zos/

DB2 9 for z/OS Technical Overview:
http://www.redbooks.ibm.com/abstracts/sg247330.html?Open

DB2 9 for z/OS Performance Topics:
http://www.redbooks.ibm.com/abstracts/sg247473.html?Open

DB2 for z/OS and OS/390 Version 7 Using the Utilities Suite:
http://www.redbooks.ibm.com/abstracts/sg246289.html?Open

Recommendations for Tuning Large DFSORT Tasks:
http://www.ibm.com/servers/storage/support/software/sort/mvs/tuning/index.html

DFSMSrmm SMS ACS Support:
http://www.redbooks.ibm.com/abstracts/TIPS0530.html?Open

IDUG Solutions Journal:
http://www.idug.org

Page 50


DB2 for z/OS Information Resources

Information Management Software for z/OS Solutions Information Center:
http://publib.boulder.ibm.com/infocenter/dzichelp/index.jsp

DB2 for z/OS Information Roadmap:
http://ibm.com/software/db2zos/roadmap.html

DB2 for z/OS library page:
http://ibm.com/software/db2zos/library.html

DB2 for z/OS Exchange (examples trading post):
http://ibm.com/software/db2zos/exHome.html

DB2 for z/OS support:
http://ibm.com/software/db2zos/support.html

Official Introduction to DB2 for z/OS:
http://ibm.com/software/data/education/bookstore

Page 51


Thank you!

Craig Friske – [email protected]
DB2 for z/OS Utilities Development

IBM Silicon Valley Lab

Page 52


Disclaimer

© Copyright IBM Corporation 2008. All rights reserved. U.S. Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

THE INFORMATION CONTAINED IN THIS PRESENTATION IS PROVIDED FOR INFORMATIONAL PURPOSES ONLY. WHILE EFFORTS WERE MADE TO VERIFY THE COMPLETENESS AND ACCURACY OF THE INFORMATION CONTAINED IN THIS PRESENTATION, IT IS PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. IN ADDITION, THIS INFORMATION IS BASED ON IBM’S CURRENT PRODUCT PLANS AND STRATEGY, WHICH ARE SUBJECT TO CHANGE BY IBM WITHOUT NOTICE. IBM SHALL NOT BE RESPONSIBLE FOR ANY DAMAGES ARISING OUT OF THE USE OF, OR OTHERWISE RELATED TO, THIS PRESENTATION OR ANY OTHER DOCUMENTATION. NOTHING CONTAINED IN THIS PRESENTATION IS INTENDED TO, NOR SHALL HAVE THE EFFECT OF, CREATING ANY WARRANTIES OR REPRESENTATIONS FROM IBM (OR ITS SUPPLIERS OR LICENSORS), OR ALTERING THE TERMS AND CONDITIONS OF ANY AGREEMENT OR LICENSE GOVERNING THE USE OF IBM PRODUCTS AND/OR SOFTWARE.

IBM, the IBM logo, ibm.com, z/OS, DB2, DFSMS, and DFSORT are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at “Copyright and trademark information” at www.ibm.com/legal/copytrade.shtml