Microsoft PowerPoint - STATENC.ppt [Read-Only]


Post on 27-Jan-2015








1. State of North Carolina - DB2 for z/OS Utilities Topics
   August 27th, 2008 - Craig Friske, IBM

2. Agenda
- Usability with LISTDEF and TEMPLATE
- COPY/RECOVER/QUIESCE/MODIFY
- REORG
- RTS
- RUNSTATS
- LOAD
- Parallel index build
© 2008 IBM Corporation

We will cover best practices, including the keyword specifications recommended for the utilities that are most significant within the utilities suite because they are used often or may consume large amounts of resources. For several of these utilities, building indexes is an important part of processing, so we will cover some of the details of that process to see how it may affect space needs and performance. Sorting is a critical part of building indexes, and it's used at various times in many other utility phases as well. DFSORT is always used as of DB2 V8, so we will look at some of its characteristics and recommendations. This involves using dynamic allocation of sort work data sets and why you should let DB2 manage those data sets when possible. Along the way there will be a few samples to better understand the concepts.

3. Wildcarding/Allocation Overview
- Improved utility processing for groups of objects: avoid job stream changes when the application changes; supports table spaces, index spaces, or both
- Generate object lists: the LISTDEF statement generates a list of objects via wildcarding (e.g. dbname*), explicit lists, and INCLUDE or EXCLUDE of objects
- Dynamically allocate data sets: the TEMPLATE statement defines data sets with intelligent data set sizing and variables (Job, Utility, Object, Date, Time, etc.)
- Control statements: OPTIONS PREVIEW, EVENT (halt, skip, RC)

4. Wildcarding using LISTDEF
List example: process objects X, Y, and Z in database DBA.
With V6:
  QUIESCE TABLESPACE DBA.X TABLESPACE DBA.Y TABLESPACE DBA.Z
  COPY TABLESPACE DBA.X COPYDDN(C1,C2) RECOVERYDDN(C3,C4)
       TABLESPACE DBA.Y COPYDDN(..,..) RECOVERYDDN(..,..)
       TABLESPACE DBA.Z COPYDDN(..,..) RECOVERYDDN(..,..)
With V7:
  LISTDEF X INCLUDE TABLESPACE DBA.* EXCLUDE TABLESPACE DBA.A
  TEMPLATE T1 DSNAME(&DB..&TS..&LOCREM.&PRIBAC..D&JDATE.)
  QUIESCE LIST X
  COPY LIST X COPYDDN(T1,T1) RECOVERYDDN(T1,T1)

5. Dynamic Allocation Templates
Template example: allocate image copy data sets (2 local, 2 remote).
V6 - 4 DD statements for each object:
  //C1 DD DSN=DBA.X.LP.D20020409.,UNIT=..,VOL=SER=..,DISP=
  //C2 DD DSN=DBA.X.LB.D20020409.,UNIT=..,VOL=SER=..,DISP=
  //C3 DD DSN=DBA.X.RP.D20020409.,UNIT=..,VOL=SER=..,DISP=
  //C4 DD DSN=DBA.X.RB.D20020409.,UNIT=..,VOL=SER=..,DISP=
  //SYSIN DD *
V7 - a single template for local and remote:
  //SYSIN DD *
  TEMPLATE A DSNAME(&DB..&TS..&LOCREM.&PRIBAC..D&DATE.)
Generated data sets:
  DSN=DBA.X.LP.D2002099   (April 8th, Local Primary for DBA.X)
  DSN=DBA.X.LB.D2002099
  DSN=DBA.X.RP.D2002099
  DSN=DBA.X.RB.D2002099

6. Utilities supported
  Utility            LISTDEF  TEMPLATE
  CHECK DATA         No       Yes
  CHECK INDEX        Yes      Yes
  CHECK LOB          No       Yes
  COPY               Yes      Yes
  COPYTOCOPY         Yes      Yes
  LOAD               No       Yes
  MERGECOPY          Yes      Yes
  MODIFY RECOVERY    Yes      n/a
  MODIFY STATISTICS  Yes      n/a
  QUIESCE            Yes      n/a
  REBUILD INDEX      Yes      Yes
  RECOVER            Yes      n/a
  REORG INDEX        Yes      Yes
  REORG TABLESPACE   Yes      Yes
  REPORT             Yes      n/a
  RUNSTATS           Yes      n/a
  UNLOAD             Yes      Yes

7. Automatic Space Management
- Sliding secondary allocation quantity size
- Applies to DB2-managed page sets only
- Tries to avoid VSAM maximum extent limit errors; can reach maximum data set size before running out of extents
- Uses cylinder allocation; default PRIQTY is 1 cylinder for non-LOB table spaces and indexes, 10 cylinders for LOB table spaces
- Improved SQL SELECT and INSERT performance: 2x improvement relative to track allocation
- Can be used for new page sets (no need for PRIQTY/SECQTY values) and existing page sets (SQL ALTER PRIQTY/SECQTY values to -1, plus schedule a REORG)

8. COPY Best Practices
- COPY PARALLEL keyword provides parallelism for lists of objects (including partitions)
- CHECKPAGE YES (default in DB2 9, no longer sets copy pending, look for RC=8!)
- Maximize other utilities' access to objects while copying a list with SHRLEVEL CHANGE and OPTIONS EVENT(ITEMERROR,SKIP)
  - Keeps objects in the list in UTRW state *only* while each object is being copied, instead of for the duration of the COPY utility
  - UTRW allows read/write access by applications, but no access for exclusive utilities
- Incremental copy rule of thumb: consider using incremental image copy if 5% in data sharing, >10% in non-data sharing

26. Reorg table space V9 recommendations
Consider running REORG TABLESPACE in the following situations:
- Real-time statistics (TABLESPACESTATS):
  - REORGUNCLUSTINS (number of records inserted since the last REORG that are not well-clustered) / TOTALROWS > 10%
    - Irrelevant if access is predominantly random. REORGUNCLUSTINS is only an indication of insert behavior and is correlated to the cluster ratio only if there are no updates or deletes. To prevent DSNACCOR/X from triggering on these, identify such objects and put them in an exception list.
  - (REORGNEARINDREF + REORGFARINDREF (number of overflow rows since the last REORG)) / TOTALROWS > 5% in data sharing, >10% in non-data sharing
  - REORGINSERTS (number of records inserted since the last REORG) / TOTALROWS > 25%
  - REORGDELETES (number of records deleted since the last REORG) / TOTALROWS > 25%
  - EXTENTS (number of extents) > 254
  - REORGDISORGLOB (number of LOBs inserted since the last REORG that are not perfectly chunked) / TOTALROWS > 50%
  - SPACE > 2 * (DATASIZE / 1024) (when free space is more than used space)
  - REORGMASSDELETE > 0 (mass deletes on segmented table spaces and DROP on multi-table table spaces)
- RUNSTATS - don't use RUNSTATS statistics as a trigger to consider running REORG; the traditional thresholds were:
  - PERCDROP > 10%
  - SYSIBM.SYSLOBSTATS.ORGRATIO < 50% (changed to a value 0-100 in PQ96460 on V7/V8)
  - (NEARINDREF + FARINDREF) / CARDF > 10% non-data sharing, > 5% if data sharing
  - FAROFFPOSF / CARDF > 10%, or, if the index is a clustering index, CLUSTERRATIOF < 90% (irrelevant if predominantly random access)
- Other:
  - Table space is in REORG-pending (REORP) or advisory REORG-pending (AREO*) status as the result of an ALTER TABLE statement
  - An index on the table space is in advisory REBUILD-pending (ARBDP) status as the result of an ALTER statement

28. Reorg Index recommendations
Consider running REORG INDEX in the following cases:
- Real-time statistics (SYSINDEXSPACESTATS):
  - REORGPSEUDODELETES (number of index entries pseudo-deleted since the last REORG) / TOTALENTRIES > 10% in non-data sharing, 5% in data sharing, as a pseudo-deleted entry can cause S-lock/unlock in INSERT for a unique index
  - REORGLEAFFAR (number of index leaf page splits since the last REORG where the new leaf page is far from the original leaf page) / NACTIVE > 10%
  - REORGINSERTS (number of index entries inserted since the last REORG) / TOTALENTRIES > 25%
  - REORGDELETES (number of index entries deleted since the last REORG) / TOTALENTRIES > 25%
  - REORGAPPENDINSERT / TOTALENTRIES > 20%
  - EXTENTS (number of extents) > 254
- RUNSTATS:
  - LEAFFAR / NLEAF > 10% (NLEAF is a column in SYSIBM.SYSINDEXES and SYSIBM.SYSINDEXPART)
  - PSEUDO_DEL_ENTRIES / CARDF > 10% for non-data sharing and > 5% for data sharing
- Other:
  - The index is in advisory REORG-pending status (AREO*) or advisory REBUILD-pending status (ARBDP) as the result of an ALTER statement
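The real-time-statistics trigger arithmetic above can be sketched in a few lines. This is a hypothetical helper, not IBM code: the column names follow the slides (SYSTABLESPACESTATS), but the dataclass and function are illustrative only and cover just a subset of the listed triggers.

```python
# Illustrative sketch of evaluating a few RTS-based REORG triggers.
# Column names follow the slides; the wrapper itself is hypothetical.
from dataclasses import dataclass

@dataclass
class TableSpaceStats:
    totalrows: int
    reorginserts: int      # records inserted since the last REORG
    reorgdeletes: int      # records deleted since the last REORG
    reorgunclustins: int   # inserts that were not well-clustered
    extents: int
    reorgmassdelete: int

def reorg_reasons(s: TableSpaceStats, random_access: bool = False) -> list:
    """Return the trigger names (from the slide) that fire for these stats."""
    reasons = []
    if s.totalrows > 0:
        # Clustering trigger is irrelevant for predominantly random access.
        if not random_access and s.reorgunclustins / s.totalrows > 0.10:
            reasons.append("REORGUNCLUSTINS > 10%")
        if s.reorginserts / s.totalrows > 0.25:
            reasons.append("REORGINSERTS > 25%")
        if s.reorgdeletes / s.totalrows > 0.25:
            reasons.append("REORGDELETES > 25%")
    if s.extents > 254:
        reasons.append("EXTENTS > 254")
    if s.reorgmassdelete > 0:
        reasons.append("REORGMASSDELETE > 0")
    return reasons

stats = TableSpaceStats(totalrows=1_000_000, reorginserts=300_000,
                        reorgdeletes=50_000, reorgunclustins=150_000,
                        extents=12, reorgmassdelete=0)
print(reorg_reasons(stats))  # the unclustered-insert and insert triggers fire
```

In practice DSNACCOR/DSNACCOX (or your monitoring tooling) evaluates these thresholds against the catalog RTS tables; a sketch like this is only useful for reasoning about which counters matter.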
29. RUNSTATS Best Practices
- RUNSTATS SHRLEVEL CHANGE for availability
- Collect column stats only on columns used in SQL predicates; use the Statistics Advisor to detect which stats to collect
- SAMPLE reduces CPU time when gathering column stats
- KEYCARD provides valuable info for little processing cost (see next slide)

If not gathered with inline stats, RUNSTATS should be run with SHRLEVEL CHANGE for availability. For large amounts of data, RUNSTATS TABLESPACE with column statistics is very CPU intensive. By specifying the SAMPLE keyword, CPU can be reduced while still gathering enough stats for the optimizer to choose a good access path.

30. RUNSTATS KEYCARD
Collects all of the distinct values in all of the 1 to n key column combinations for the specified indexes, where n is the number of columns in the index. For example, suppose that you have an index defined on three columns: A, B, and C. If you specify KEYCARD, RUNSTATS collects cardinality statistics for column A, column set A and B, and column set A, B, and C.

So these are cardinality statistics across column sets. If we had a 3-column index with these values:

  Col1 Col2 Col3
  A    B    C
  A    B    D
  A    B    E
  A    B    E
  A    C    A
  A    C    A
  A    D    A
  B    B    B

then these stats would be collected:
  Col1 cardinality = 2
  Col1 and Col2 cardinality = 4
  Col1, Col2, and Col3 cardinality = 6

31. When is RUNSTATS needed?
When the data changes sufficiently to warrant new statistics:
- REORG of table space or index (use inline stats!)
- LOAD REPLACE of table space (use inline stats!)
- After "significant" application changes for the table space or index
- Periodically (weekly, monthly), except for read-only data?
- Application tracks updates with activity tables?
- After a percentage of pages changed since the last RUNSTATS (RTS)?
- Understand the implications for access paths!
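The KEYCARD example on slide 30 can be verified with a short sketch. RUNSTATS does this inside DB2; the code below just reproduces the prefix-cardinality arithmetic on the slide's sample rows.

```python
# Illustrative sketch of what KEYCARD computes for a 3-column index,
# using the sample values from the slide.
rows = [
    ("A", "B", "C"), ("A", "B", "D"), ("A", "B", "E"), ("A", "B", "E"),
    ("A", "C", "A"), ("A", "C", "A"), ("A", "D", "A"), ("B", "B", "B"),
]

def keycard(rows):
    """Cardinality of each leading-column prefix (1..n) of the index key."""
    n = len(rows[0])
    return [len({row[:k] for row in rows}) for k in range(1, n + 1)]

print(keycard(rows))  # [2, 4, 6] - matches the slide
```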
32. LOAD Best Practices
- LOAD LOG NO reduces log volume; if REPLACE, then take an inline copy
- KEEPDICTIONARY (track dictionary effectiveness with history statistic PAGESAVE) - small performance impact if loading lots of data
- The 254-partition limit for compressed table spaces can be lifted: PK51853 shipped new ZPARM MAX_UTIL_PARTS (watch virtual storage)
- Load partition parallelism (V7): not individual LOAD part-level jobs; enable Parallel Access Volume (PAV)
- Index parallelism (SORTKEYS): provide a value for SORTKEYS when input is tape/PDS member or variable length; SORTKEYS is the sum for ALL indexes (and foreign keys) on the table
- Remove SORTWKxx / UTPRINxx, and turn on UTSORTAL=YES

LOG NO is the best performance option, as is KEEPDICTIONARY. Dictionaries should not change much; their effectiveness can be tracked via the history statistic PAGESAVE. When loading data into compressed table spaces there is a limit of 254 partitions that can be processed. This limit was introduced because of the amount of virtual storage used while building compression dictionaries for many partitions in parallel. With PK51853 (UK31488/UK31489) you can now set your own limit in ZPARM MAX_UTIL_PARTS. The ZPARM only changes the limit on the number of partitions; storage consumption was not changed with this PTF, so keep an eye on your virtual storage when loading table spaces with many compressed partitions. Prior to V7, jobs were separated by partition to improve elapsed time when loading large partitioned tables. When NPIs existed, these would be dropped and rebuilt in a separate step. Now, however, partitioned objects can be loaded in parallel by partition. If the table has NPIs, the keys are merged into a common sort so that KLOAD processing can be used on a single member, thus improving LOAD performance. With multiple indexes, SORTKEYS allows the indexes to be built in parallel, improving elapsed time with some increase in CPU.
Usually LOAD is able to estimate the number of records contained in the input data set. For input on tape, in PDS members, or when the input data set has variable-length records with great variation in record length, this estimate is not possible. In those cases, specify a SORTKEYS estimate, which is required to estimate the required sort work space. Remember that SORTKEYS is the sum for all keys/foreign keys on the table(s), not the number of input records. Foreign keys are only counted if they don't match an existing index, or if there is more than one foreign key even if they match an existing index. So if you're loading into a table space with 1 table and 3 indexes, then SORTKEYS needs to be set to three times the number of input records. For the sort, although you can control the data sets used by specifying SORTWKxx, avoid doing that. By specifying SORTDEVT, DFSORT or DB2 will dynamically allocate what is needed. DFSORT can't span volumes with sort data sets, so if the amount of data is large, you may want to specify SORTNUM to ensure there are enough data sets available, or turn on UTSORTAL=YES.
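The SORTKEYS arithmetic in the notes above can be sketched as follows. The helper is my own illustration (not part of the LOAD utility); it simply multiplies input records by the number of index/foreign-key builds, as the notes describe.

```python
# Hypothetical helper illustrating the SORTKEYS estimate from the notes:
# total keys to sort = input records x (indexes + qualifying foreign keys).
def sortkeys_estimate(input_records: int, num_indexes: int,
                      extra_foreign_keys: int = 0) -> int:
    """Estimated total number of keys LOAD must sort across all builds."""
    return input_records * (num_indexes + extra_foreign_keys)

# The notes' example: 1 table with 3 indexes, so SORTKEYS = 3 x input records.
print(sortkeys_estimate(1_000_000, 3))  # 3000000
```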
33. LOAD Best Practices continued
- LOAD inline COPY & inline STATISTICS
- Use REUSE to logically reset and reuse DB2-managed data sets without deleting and redefining them (affects elapsed time)
- When using DISCARD, try to avoid having the input on tape; input is re-read to discard the errant records
- Avoid data conversion; use internal representation if possible
- Sort data in clustering order (unless data is randomly accessed via SQL)
- LOAD RESUME SHRLEVEL CHANGE instead of batch inserts
- LOAD REPLACE SHRLEVEL CHANGE can be achieved by loading into a clone table and then exchanging the tables on DB2 9
- LOAD via BatchPipes to load data that is transferred via FTP from clients

A single pass of the data that creates an inline copy and gathers inline statistics while loading is better for elapsed time than running separate COPY and RUNSTATS utilities after the LOAD. Be aware that DISCARD records input block and record numbers and later returns to the input data set to position on each discarded record; this can be very slow if the input data set is on tape! Sorting data in clustering-key order before running LOAD improves performance, and can be done automatically by DB2 Utilities Enhancement Tool for z/OS Version 2.1. DB2 9 with its clone table support allows a function that comes close to a LOAD REPLACE SHRLEVEL REFERENCE. You can load data into the (named) clone table in the background, and as soon as the LOAD has completed you can exchange the clone and the base table and make the new data active. The old data can be read by applications while the new data is being loaded; only for the EXCHANGE do all readers have to be drained. LOAD via BatchPipes can be used to load data immediately while it is FTPed from the client, without storing it in data sets first. This helps reduce elapsed time as well as the total amount of DASD needed for the transfer.
An article in the IDUG Solutions Journal Volume 14 Number 2 (Summer 2007) describes the process in detail.

34. REBUILD INDEX Best Practices
- Indexes are built in parallel
- Remove SORTWKxx / UTPRINxx and use SORTDEVT/SORTNUM or UTSORTAL=YES
- Inline STATISTICS
- Use REORG INDEX SHRLEVEL CHANGE to move index data sets to different volumes
- CREATE INDEX DEFER followed by REBUILD INDEX
  - As of V8, dynamic SQL will not select the index until it is built
  - DB2 9 allows SHRLEVEL CHANGE
  - Unique indexes are put in RBDP because uniqueness cannot be checked during the rebuild process, so no INSERT/UPDATE/DELETE is allowed that affects the unique index
  - No parallel jobs on different indexes of the same table space -> use a single job with multiple indexes specified

REBUILD INDEX recommendations follow LOAD and REORG. For availability reasons you should not use REBUILD INDEX to move the index data sets to different volumes; REORG INDEX SHRLEVEL CHANGE is much more efficient and provides better availability. Indexes can be created with DEFER YES and then materialized by REBUILD INDEX. Dynamic SQL will not select the index until it is built. In DB2 9 this is even more effective, as the index can be rebuilt with SHRLEVEL CHANGE, allowing concurrent updates. However, this does not allow concurrent updates of data that affects unique indexes, as they will still be put in RBDP because uniqueness cannot be guaranteed otherwise. REBUILD INDEX SHRLEVEL CHANGE cannot be invoked for different indexes of the same table space in multiple concurrent jobs; instead, specify those indexes in a single invocation of the REBUILD INDEX utility.
35. More online utilities
REBUILD INDEX SHRLEVEL CHANGE
- Great for building new non-unique indexes or when an index is in RBDP
- Index is built in place with no shadow
- To move indexes to different volumes with availability, still use REORG INDEX SHRLEVEL CHANGE
- Table space must be in LOGGED state (new log phase)

A new enhancement, REBUILD SHRLEVEL CHANGE, allows for improved availability when creating a new index, and it's especially useful for creating a new non-unique index on a populated table. Suppose you have a non-unique index on column A, and performance analysis shows that also including column B in that same index would give your applications performance benefits. The way to accomplish this without a period of unavailability:
1) CREATE the new non-unique index on A/B specifying DEFER YES. This places the index in RBDP without building it.
2) Run REBUILD SHRLEVEL CHANGE to build the new index. The table space is kept in RW mode so that changes can be made. The index being built is in RBDP; inserts, updates, and deletes are avoided for the index while it is in RBDP. After building the index, there is a log apply to catch all the changes made after the table space scan; this log phase is much like that of REORG. Finally, the index is completely built and RBDP is reset.
3) BIND is run to select the superior access path with the new index.
4) The old index is dropped.
This type of SHRLEVEL CHANGE utility differs from REORG because the index is built in place rather than in a separate shadow data set; therefore there is no SWITCH phase for REBUILD. Shadows aren't as useful as in the case of REORG, because the index is usually only built when it is newly created or broken, so the original is typically unavailable before the utility starts anyway.
Because there are no shadows, don't try to use REBUILD to move indexes to a new volume like you can with REORG INDEX. Of course, this utility depends on applying log records, so you can't specify SHRLEVEL CHANGE for not-logged tables.

36. DB2 Allocated Sort Work Data Sets
- PTFs shipped 02/2008 to enable DB2 to dynamically allocate sort work data sets in utilities:
  - DB2 for z/OS V8: PK45916 / UK33692
  - DB2 9 for z/OS: PK41899 / UK33636
- Enable with UTSORTAL=YES
- Used for all sorts in utilities: LOAD, REORG, CHECK INDEX, REBUILD INDEX, CHECK DATA, RUNSTATS
- Message DSNU3340I - UTILITY PERFORMS DYNAMIC ALLOCATION OF SORT DISK SPACE - indicates use
- New behavior ignored if hard-coded DD cards are found
- No more need to specify SORTNUM; existing SORTNUM specifications can be honored or ignored (IGNSORTN=YES)
- Data sets for the largest sorts are allocated first
- Attempts to allocate data sets as large as possible (starting with 2 data sets per sort task; more data sets allocated if necessary)

In February 2008 we shipped two PTFs that enable DB2 to allocate the sort work data sets itself instead of having DFSORT allocate them. If DB2 allocates the data sets beforehand, it knows exactly how many data sets were allocated and can optimize the degree of parallelism for utilities that use parallelism. It also takes the burden off the administrator of finding a reasonable SORTNUM value that matches many different utilities with different sort work requirements (one size doesn't fit all). We often refer to this enhancement as SORTNUM elimination. To make migration easier, this new function must be enabled by a new ZPARM, UTSORTAL=YES. If only this parameter is set, then existing SORTNUM specifications will be honored to make sure existing jobs still run exactly as before; the new behavior will only be used for utilities without SORTNUM specified. If IGNSORTN is also set to YES, then all utilities will use the new behavior.
SORTNUM values specified will be ignored. When data set allocation starts, the largest sort tasks are handled first, while the largest chunks are still available on the sort pool disks. DB2 will try to allocate the necessary sort work disk space within 2 data sets first. If current disk conditions don't allow data sets of this size, it will retry with smaller data sets and continue allocating data sets until the required size has been allocated. Don't worry if you see messages like IGD17272I VOLUME SELECTION HAS FAILED FOR INSUFFICIENT SPACE FOR DATA SET.

37. Checking all indexes in parallel
[Diagram: a partitioned table space (Parts 1-3) with partitioned index IX1 and NPIs IX2 and IX3; fast replication (FlashCopy/Snapshot) during UTILINIT creates shadows, then UNLOAD, SORT, and CHECK subtasks run in parallel against the shadows.]

In this example of rebuilding all the indexes on a partitioned table space, three unload subtasks pipe index keys to the appropriate one of the three sort subtasks. The three sort subtasks pipe sorted keys to three companion index build subtasks. If inline statistics are being collected, there will be three statistics subtasks as well.
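The allocation strategy described for slide 36 (largest sorts first, try 2 data sets per sort task, fall back to more, smaller data sets) can be modeled in a few lines. This is an illustrative model of the described behavior only, not DB2 internals; the function name and sizes are hypothetical.

```python
# Illustrative model (not DB2 internals) of the sort work allocation
# strategy the notes describe: largest sort tasks first, start with 2
# data sets per task, and retry with more, smaller data sets when a
# single data set would exceed the assumed per-volume maximum.
def allocate_sort_work(task_sizes_mb, max_dataset_mb):
    """Return {task_index: list of data set sizes in MB}, largest task first."""
    plan = {}
    # Largest sorts are allocated first, while big chunks are still free.
    for idx in sorted(range(len(task_sizes_mb)),
                      key=lambda i: task_sizes_mb[i], reverse=True):
        need = task_sizes_mb[idx]
        n = 2                        # start with 2 data sets per sort task
        per_ds = need / n
        while per_ds > max_dataset_mb:
            n += 1                   # retry with smaller data sets
            per_ds = need / n
        plan[idx] = [per_ds] * n
    return plan

plan = allocate_sort_work([900, 3000], max_dataset_mb=1000)
print(plan)  # task 1 needs 3 data sets of 1000 MB; task 0 fits in 2 of 450 MB
```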
38. CHECK SHRLEVEL CHANGE design
- SHRLEVEL REFERENCE CHECK INDEX causes data and indexes to be unavailable for update for the duration of the utility
- With SHRLEVEL CHANGE:
  - Claim as reader for target data and indexes
  - Create shadow data sets (same data set naming convention as REORG SHRLEVEL CHANGE)
  - Drain writers for target data and indexes
  - Copy data sets to shadows with DFSMSdss ADRDSSU with FCNOCOPY; uses data-set-level FlashCopy2 if available, else a traditional media copy (still a smaller read-only outage than SHRLEVEL REFERENCE)
  - After logical completion of the data sets, dedrain target data and indexes
  - Run CHECK on the shadow data and indexes
  - At UTILTERM, delete shadow data sets when DB2-managed

Performing a physical copy of the data uses subsystem resources and can impact the performance of other I/O operations issued to the ESS. Using the FCNOCOPY keyword on a DFSMSdss COPY command prevents the ESS subsystem from performing a physical copy of the data. In general, if you want a temporary copy of the data, specify FCNOCOPY and then withdraw the FlashCopy relationship when you no longer need the copy. If you want a permanent copy but want to delay background copy until a convenient time, specify FCNOCOPY to get a point-in-time copy and then perform FCNOCOPYTOCOPY later to start the background copy. If you want a permanent copy and do not want to delay background copy, do not specify FCNOCOPY; allow the ESS subsystem to perform the physical copy and release the subsystem resources used to maintain the FlashCopy relationship.
39. DB2 9 Utilities Highlights
More online utilities:
- REBUILD INDEX SHRLEVEL CHANGE
- REORG LOB now supports SHRLEVEL REFERENCE (space reclamation)
- CHECK DATA, CHECK LOB, and REPAIR LOCATE SHRLEVEL CHANGE
- CHECK INDEX SHRLEVEL REFERENCE supports parallelism for > 1 index
- Clones for online LOAD REPLACE
- Online REORG: BUILD2 phase elimination
- REORG parallelism for UNLOAD, RELOAD, LOG phases
- Utility TEMPLATE switching
- UNLOAD SKIP LOCKED DATA option

If you look at all the utility offerings, we have been adding more SHRLEVEL CHANGE and REFERENCE utilities to improve availability in every release. With DB2 9, the following utilities have SHRLEVEL CHANGE: RUNSTATS, COPY, REORG TABLESPACE, REORG INDEX, LOAD RESUME, REBUILD, UNLOAD, CHECK INDEX, CHECK DATA, CHECK LOB, and REPAIR. Cloned tables effectively function as LOAD REPLACE SHRLEVEL CHANGE. We have also improved availability for REORG TABLESPACE of a partition or partition range by removing the BUILD2 phase. From a performance perspective, REORG of partitioned table spaces now has partition parallelism much like that which already existed in REBUILD INDEX, CHECK INDEX, COPY, RECOVER, and LOAD. Utility template switching allows flexibility in which template applies to data sets based on attributes of the object processed. UNLOAD SKIP LOCKED DATA gives another option besides just UR or CS processing. UR will not get locks, and therefore can unload dirty data; CS acquires locks and can therefore be slowed by lock contention. SKIP LOCKED DATA is a CS option that skips rows/pages that are locked and gives consistent data (but not all rows) without being slowed much by contention.
40. DB2 9 Utilities Highlights (continued)
- MODIFY RECOVERY enhancements: RETAIN keyword added to improve management of copies - LAST(n), LOGLIMIT, GDGLIMIT
- Volume-based COPY/RECOVER (BACKUP SYSTEM/RESTORE SYSTEM):
  - RECOVER modified to enable object-level recovery from a volume FlashCopy
  - Full integration of tape into the BACKUP/RESTORE SYSTEM utilities
  - Incremental FlashCopy, APAR PK41001
- Truncate log based on timestamp
- RECOVER to any point in time with consistency
- RECOVER RESTOREBEFORE to use an earlier image copy
- Display progress of RECOVER during log apply
- COPY CHECKPAGE option always active; Copy Pending avoided if a broken page is encountered
- COPY SCOPE PENDING to copy only objects in Copy Pending

Prior to DB2 9, the user specified the deletion criteria either as before a specific date or as greater than a given age in days. DB2 9 has an alternative, by which retention criteria can be specified instead of deletion criteria. There are a number of improvements and extensions to the volume-based utilities, BACKUP SYSTEM and RESTORE SYSTEM. If you need to do a conditional restart, DSNJU003 now allows you to specify a timestamp instead of an RBA/LRSN; the option is also available for SYSPITRT for easier preparation to truncate the log before running RESTORE SYSTEM. The COPY and RECOVER utilities each had several enhancements in DB2 9.

41. More online utilities (cont)
- CHECK DATA SHRLEVEL CHANGE
- CHECK LOB SHRLEVEL CHANGE
- REPAIR LOCATE SHRLEVEL CHANGE
- REORG LOB now supports SHRLEVEL REFERENCE
- Clones effectively provide LOAD REPLACE SHRLEVEL CHANGE (UTS only)
- UNLOAD with ISO(CS) supports skipping rows that are locked for transaction updates

More improvements to availability are listed here. CHECK DATA and CHECK LOB now have SHRLEVEL CHANGE support, which operates the same way as described for CHECK INDEX. The best availability is when FlashCopy devices are available, but the utilities will still work on other devices.
The difference is that a SLOW copy will be done to make a snapshot before running the CHECK utility. Availability is still improved over running SHRLEVEL REFERENCE, but the superior availability when using FlashCopy devices is the way to go. When running CHECK with SHRLEVEL CHANGE on a snapshot, any error cannot be corrected directly, but where possible, REPAIR statements will be generated to correct the problems; these REPAIR statements also run with SHRLEVEL CHANGE for maximum availability. REORG LOB SHRLEVEL REFERENCE gives better function along with better availability. It turns out that reorganization typically benefits space usage and performance; for LOBs, however, the need to reclaim fragmented space is usually much more important, and any performance improvement from better LOB organization is typically negligible. SHRLEVEL REFERENCE reorganizes to a shadow and reclaims space where SHRLEVEL NONE did not; therefore it is much better and is recommended as the best practice. In a future release SHRLEVEL NONE will be deprecated. We don't have support for LOAD REPLACE SHRLEVEL CHANGE, but clone tables provide the same capability in a more flexible manner. Two separate but identical tables are maintained under one logical table space; both tables can be populated or accessed using utilities or SQL. At the point a switchover is needed, it can be done with an ALTER statement to switch from the active table to the other; the switch is much like the SWITCH phase of online REORG. Note that clones are only supported for Universal Table Spaces (UTS). Another variation of UNLOAD allows a method of access somewhere between UR and CS. With UR, one can unload dirty data that hasn't been committed, but the advantage is that there is no contention with applications and it will proceed quickly. With CS, all data is clean and committed, but if locked data is encountered, UNLOAD will skip around it without unloading it when SKIP LOCKED DATA is specified.
This means rows or pages may be missed during the unload.

42. DB2 9 Utilities
Support for all new functions in DB2 9 for z/OS:
- Universal Table Spaces (UTS): Partition By Growth (PBG), Partition By Range (PBR)
- XML table spaces (PBG or PBR)
- Not-logged tables/table spaces
- Clone tables
- Index on expression
- New data types (BIGINT, VARBINARY, DECFLOAT, XML)

Utility support is extensive for new features such as UTS, XML, and not-logged objects. For example, one can gather stats, reorg, unload, or rebuild indexes on a PBR or PBG table space. For PBGs, this means that inserts which extend the partitions can run concurrently with SHRLEVEL CHANGE utilities. XML on a segmented table space resides in PBG format. Cloned tables are somewhat like an online LOAD REPLACE: there is the live table, and then there is the shadow or clone. One can load the clone, and when ready for the clone to become live, a switch can be triggered with DDL. Index on expression is a key that is generated from an SQL or RDS expression. We also must support all the new data types so that utilities like LOAD and UNLOAD can exploit them.

43. COPY Utility Changes
- Always performs CHECKPAGE on the COPY utility
- The COPY utility includes SCOPE PENDING support to improve usability
- COPY utility buffer pool usage uses MRU management of the pages read by the COPY utility
- Template switching (e.g., copy to tape if large; to disk if small):
  TEMPLATE LRG DSN &DB..&TS..D&DA..T&TI. UNIT=TAPE
  TEMPLATE SML DSN &DB..&TS..D&DA..T&TI. UNIT=SYSALLDA LIMIT(20 CYL, LRG)
  COPY TABLESPACE SMALL.TS COPYDDN(SML)
  COPY TABLESPACE LARGE.TS COPYDDN(SML)

CHECKPAGE was an option that is now always performed. The value of knowing copies are good outweighs the extra cost of less than 5% CPU; it's still a bit less than before because of other improvements in our internal logic. CHECK for an index is not the default because the overhead is greater than 2x CPU (700%).
These indexes are not as important as the data. COPY-pending is no longer set, because that would be prohibitive; however, a SYSCOPY record is cut so that an incremental copy cannot be taken using that base. The RC=8 still applies, but an entire check is done instead of stopping early. COPY now has SCOPE PENDING, like REORG and REBUILD, to simplify usability.

44. Recover Utility Changes
- Recovery to a point in time with consistency (NFM mode)
- Uncommitted changes are backed out
- Significantly reduces (eliminates?) the need to run QUIESCE
- Does not apply to RECOVER TOCOPY, TOLASTCOPY, and TOLASTFULLCOPY using a SHRLEVEL CHANGE copy (consistency is not ensured; use an RBA/LRSN after the COPY point)
- Include all relevant objects in the same RECOVER to ensure data consistency from the application point of view

[Diagram: timeline of units of work UW1 through UWn relative to recovery point A and the current point; Unm denotes update number n in unit of work m (e.g. U11, U12, U21, U22).]

The RECOVER utility is changed to improve recovering to a point in time with consistency. If RECOVER is specified for a particular RBA/LRSN, the utility will also ensure that the table space and indexes are at a level of consistency. Here's how it works:
1) As before DB2 9, the RESTORE phase is followed by a fast log-apply technique to get to the specified point, in our example point A.
2) The utility does a Current Status Rebuild (CSR), just as is done for DB2 restart/recovery, to determine whether there were any active units of work at recovery point A.
3) For anything in-flight, in-abort, etc. that has not been committed, the log is applied to roll back those units of work. In this case, the UW2 and UW1 changes are rolled back to their respective starting points.

This function could greatly improve your procedures today if you use QUIESCE to get recovery points so that you can get a consistent recovery to a point in time.
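As an illustration of including logically related objects in one point-in-time RECOVER, as recommended above (hypothetical object names and log point; a sketch of the syntax, not a tested job):

    RECOVER TABLESPACE DBA.X
            TABLESPACE DBA.XLOB
            INDEX DBA.XIX
            TOLOGPOINT X'00C1A2B3D4E5'

All listed objects are recovered to the same RBA/LRSN, and in DB2 9 any uncommitted work at that point is backed out for consistency.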
In today's environment with heavier workloads, some customers were finding it very difficult to get a quiesce point, and there was a requirement to put timeout/retry logic into that utility. The RECOVER solution we provide in DB2 9 is a superior solution! An exception to consistency occurs for RECOVER to an image copy if that image copy was taken with SHRLEVEL CHANGE; in that case there is no log apply. Specify an RBA/LRSN after the COPY point to ensure there is log processing that gets you to a point of consistency. Remember, if there are multiple objects that are logically related (e.g. RI, LOB spaces, XML spaces), they should be included in the same RECOVER utility specification.

45. Recover to a Point in Time (cont'd)
Recovery to a point in time with consistency (cont'd):
- Two new phases in RECOVER: LOGCSR and LOGUNDO (each will be executed by member)
- RECOVER can be restarted in any LOG phase; LOGCSR will restart at the beginning, the others at the last commit point
- Fast log apply is not used for the LOGUNDO phase
- Progress of the LOGUNDO phase can be monitored via the regular status message DSNU1555I, which shows the current and target log RBA (e.g. to detect backout of long-running URs)
- This function can greatly reduce your need to run QUIESCE
- To capture a log point non-disruptively: -DISPLAY LOG, or QUIESCE against SYSEBCDC (or some other dummy table space), or SELECT CURRENT_TIMESTAMP FROM SYSDUMMY1 and then convert it to an LRSN

Here are some more details on how RECOVER to a point of consistency works. The status of RECOVER and the LOGUNDO phase can be tracked by a regular message which shows the status of the backout of long-running URs. This backout will not be as fast as forward log apply. If you run QUIESCE to capture a log point, there are several ways to do it in a more non-disruptive way.

46. BACKUP/RESTORE SYSTEM
What's new in DB2 9 for BACKUP and RESTORE SYSTEM?
- The ability to recover at the object level using system-level backups
  - RECOVER to a point in time or to current
  - COPY YES indexes can be included in this recovery
  - Will restore from the previous system-level backup or image copy
  - Requires z/OS V1R8
- Tape support
- Support for incremental FlashCopy (PK41001 & PK42014)
- RTS COPY columns also updated for BACKUP SYSTEM

BACKUP SYSTEM was introduced in V8, primarily for ERP vendors. In that environment it doesn't make sense to recover a single object, because everything is related and there is little knowledge of the contents of individual objects; an SAP system, for example, consists of thousands of objects. The backup information is in the BSDS. For RECOVER CURRENT, they need Jack's DCR; you can't specify a PITR of FFFFFFFF. We can now do an individual recovery of a single object, exploiting a DFSMS enhancement, potentially with no standard DB2 image copies at all. If the data set has moved since the FlashCopy, this would not work; z/OS is fixing that in 1.11, which is scheduled for September 2009. Any time a data set is reset (so whenever a REORG runs), take another system-level backup. The ZPARM setting means extra overhead to look at the BSDS. How long does RESTORE SYSTEM take? Stacking 99 volumes took about 7 hours; a DASD restore is more like 500 volumes in 2 minutes. The APARs are for V8 or DB2 9.

47. Future Enhancements
Simplification, automation, availability, performance, flexibility:
- Web-based Data Studio Admin Console
- DB2 Utilities Enhancement Tool (UET)
- High-priority requirements:
  - Autonomic stats
  - Autonomic compression
  - Recover to a different table space in a different DB2 with consistency
  - Reduce RUNSTATS resource consumption
  - Data set-level FlashCopy support
  - Online consistent image copies
  - Policy-based recovery
  - UTSERIAL elimination
  - REORG enhancements (LOBs, etc.)
- Eliminate manual invocation of: RUNSTATS, COPY/BACKUP SYSTEM, QUIESCE, MODIFY RECOVERY, REORG

We will continue to enhance the utilities in the major areas of need, especially usability.
The goal is that the periodic utilities RUNSTATS, COPY/BACKUP, QUIESCE, MODIFY RECOVERY, and REORG will be submitted automatically when needed, without manual intervention. The Data Studio Admin Console comes from the Data Servers group; it is designed for alerts and managing systems at a global level. The Utilities Enhancement Tool works closely with the utilities to do things like:
- sort LOAD input before doing a LOAD,
- create mapping tables for online REORG jobs,
- allow constants in the LOAD input.

Auto-stats: run RUNSTATS when needed, or update with RTS if possible, and determine what options are needed to collect the appropriate stats according to the access paths taken by applications. In the future, profiles with the options needed for each object will be kept in the DB2 catalog, and there will be better usability between the stats advisor and the running of RUNSTATS with the suggested options.

Auto-compression: eliminate the need to administer compression of DB2 table spaces. The DBMS will automatically use compression when it is beneficial, disable compression when it is not, and not require a utility to implement or disable compression. This becomes the default behavior for all data; administrators can still configure and limit the scope of the DBMS's automatic compression.

48. Future Enhancements (cont'd)
(This slide repeats the bullet list from slide 47; the notes continue.)

Recover to an alternative table space with consistency: this is targeted for Recovery Expert.
Data set-level FlashCopy support will:
1. Exploit data set-level FlashCopy, resulting in VSAM image copies that the RECOVER utility can use to instantaneously restore an image copy. The UNLOAD and COPYTOCOPY utilities will support these new VSAM image copy formats.
2. Add a new keyword on all utilities that create image copies, specifying whether log records should be used to create a consistent image copy. FlashCopy Version 2-capable devices are a prerequisite for this function.
3. Provide a utility option that specifies whether the recovery should be performed by processing the log backward, versus the traditional restore and forward log-apply process.
4. Provide a DB2 subsystem/data sharing group level option that disables all tracking of modified pages.

UTSERIAL elimination removes the majority of timeouts on the global UTSERIAL lock resource. Existing concurrent workloads can be executed as they were previously, with no design changes in the jobs containing IBM DB2 utilities. Throughput for multiple utilities may be increased, and more concurrent utilities can be run.

Policy-based recovery is targeted for delivery with Recovery Expert.

The REORG enhancements allow multiple part ranges to be specified (n:m, i:j), and allow a REORG of a UTS table space, or a subset of its partitions, to move rows with LOB columns between partitions when required by an altered partition key range or when the table space is PBG. They also reduce the time spent in the SWITCH phase and provide improved messages and diagnostics so customers can better estimate the length of REORG phases and the time to completion.

49. References
- DB2 for z/OS home page
- DB2 9 for z/OS Technical Overview
- DB2 9 for z/OS Performance Topics
- DB2 for z/OS and OS/390 Version 7: Using the Utilities Suite
- Recommendations for Tuning Large DFSORT Tasks
- DFSMSrmm SMS ACS Support
- IDUG Solutions Journal
50. DB2 for z/OS Information Resources
- Information Management Software for z/OS Solutions Information Center
- DB2 for z/OS Information Roadmap
- DB2 for z/OS library page
- DB2 for z/OS Exchange (examples trading post)
- DB2 for z/OS support
- The Official Introduction to DB2 for z/OS

51. Thank You!
Craig Friske
DB2 for z/OS Utilities Development
IBM Silicon Valley Lab

52. Disclaimer
Copyright IBM Corporation 2008. All rights reserved. U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

THE INFORMATION CONTAINED IN THIS PRESENTATION IS PROVIDED FOR INFORMATIONAL PURPOSES ONLY. WHILE EFFORTS WERE MADE TO VERIFY THE COMPLETENESS AND ACCURACY OF THE INFORMATION CONTAINED IN THIS PRESENTATION, IT IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. IN ADDITION, THIS INFORMATION IS BASED ON IBM'S CURRENT PRODUCT PLANS AND STRATEGY, WHICH ARE SUBJECT TO CHANGE BY IBM WITHOUT NOTICE. IBM SHALL NOT BE RESPONSIBLE FOR ANY DAMAGES ARISING OUT OF THE USE OF, OR OTHERWISE RELATED TO, THIS PRESENTATION OR ANY OTHER DOCUMENTATION. NOTHING CONTAINED IN THIS PRESENTATION IS INTENDED TO, NOR SHALL HAVE THE EFFECT OF, CREATING ANY WARRANTIES OR REPRESENTATIONS FROM IBM (OR ITS SUPPLIERS OR LICENSORS), OR ALTERING THE TERMS AND CONDITIONS OF ANY AGREEMENT OR LICENSE GOVERNING THE USE OF IBM PRODUCTS AND/OR SOFTWARE.

IBM, the IBM logo, z/OS, DB2, DFSMS, and DFSORT are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol, these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries.
A current list of IBM trademarks is available on the Web under "Copyright and trademark information".