
© 2013 IBM Advanced Technical Support Techdocs - Washington Systems Center

z/OS Best Practices: Large Stand-Alone Dump Handling - Version 4

Update: 07/28/2014

Nobody ever wants to take a Stand-alone Dump (SADMP) of a z/OS system. Nevertheless, when your z/OS system is not responding due to an error in a critical system component, or it enters a wait state, a Stand-alone Dump is your only means of capturing sufficient diagnostic data to allow IBM Service to diagnose why your system entered the condition and recommend a fix to prevent the issue from happening again. Therefore, one needs to plan for the dump to be taken and processed as quickly as possible.

Several z/OS releases have made improvements in taking and processing large stand-alone dumps. The system allows you to define a multiple-volume dump data set that simulates “striping,” writing blocks of data in I/O priority order to each of the volumes defined in the data set. This greatly improved the elapsed time to capture a stand-alone dump. Stand-alone dump captures the page-frame table space and uses it as a pre-built dump index that relates to each absolute storage page, allowing IPCS to map the data ranges in the dump and improving the time to handle typical IPCS dump analysis requests for virtual storage access. Other enhancements allow stand-alone dump to be initiated from the operator’s console with a VARY command.

z/OS functions also improve the handling of large stand-alone dumps, including the ability to subset a dump using the IPCS COPYDUMP command, allowing you to send a dump of the core system components to IBM while the rest of the dump is being processed and transferred. Most recently, in z/OS V1R13, the Problem Documentation Upload Utility allows transmission of multi-gigabyte files much more quickly, and encrypts the data as part of the same process.

This paper describes a set of “best practices” for ensuring the stand-alone dump is successful at capturing the necessary information for use by IBM Service, optimizing stand-alone dump data capture, and optimizing problem analysis time. In particular, the following areas are described:

- Stand-alone dump data set definition and placement
- IPCS performance considerations
- Preparing documentation to be analyzed
- Sending documentation to IBM support
- Testing your stand-alone dump setup

This paper replaces the previous stand-alone dump best practices information, published as “z/OS Best Practices: Large Stand-Alone Dump Handling Version 2”. The URL remains the same: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD103286


This paper also incorporates information from “Every picture tells a story: Best practices for stand-alone dump”, published in the February 2007 issue of the z/OS Hot Topics Newsletter. You can access all issues of the Newsletter at: www.ibm.com/servers/eserver/zseries/zos/bkserv/hot_topics.html

This paper assumes the installation is operating at least at the z/OS V1R13 level. Related enhancements available on lower z/OS levels are noted where appropriate. The planning steps are highlighted, followed by background information.

1. Plan a multi-volume stand-alone dump data set, being sure to place each volume on a separate DASD control unit.

The best dump performance is realized when the dump is taken to a multiple-volume DASD stand-alone dump data set. Stand-alone dump exploits multiple, independent DASD volume paths to accelerate data recording. The dump data set is actually spread across all of the specified volumes, not written to each volume in succession. Stand-alone dump processing does not treat a multi-volume DASD dump data set as multiple single data sets. (The creation of the multi-volume data set is covered in step 2.)

A key factor in the performance of stand-alone dump is the rate at which the data is written to DASD. Modern DASD uses cache in the control unit to improve the performance of write operations. Spreading the multi-volume stand-alone dump data set across Logical Subsystems (LSSs) is strongly recommended to avoid contention in the I/O subsystem (channels, director ports, cache, or control unit back store), thereby sustaining good DASD throughput for the duration of the stand-alone dump process.

WSC Flash 10143 demonstrated the significant performance improvements observed when writing the data to a multi-volume DASD stand-alone dump data set, or to specific types of DASD. For more information, review the IBM performance analysis reports at: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10143

Therefore, when defining the placement of a multi-volume DASD stand-alone dump data set, the following best practices apply:

1. Configure each volume on a separate Logical Subsystem (LSS) to ensure maximum parallel operation. The best performance of stand-alone dump can be attained when the multi-volume DASD data sets have the most separation, that is, separate physical control units and separate channel paths.

2. Configure, if possible, the control units to minimize other activity at the time of the stand-alone dump. For example, DB2 database recovery writing to a local database volume, from an LPAR that is still running, against the same control unit as the stand-alone dump volume will result in slower dump speed and may affect the elapsed time needed to restart an alternative DB2 on an LPAR that is still running.


3. Use FICON-attached DASD volumes where possible; they typically yield the best data transfer rates. FICON channels deliver much better performance than ESCON channels.

4. For the best overall performance, dedicate a minimum of 4-5 DASD volumes to stand-alone dump, plus any additional capacity needed to contain the dump size, up to a maximum of 32 volumes. See the performance analysis report cited above for more information.

   IBM test results on z/OS V1R11 demonstrated that there is benefit to using up to 16 volumes spread over multiple channels. The test environment wrote a stand-alone dump at a rate of 1.2 to 1.5 Gb/second. However, you can configure up to 32 volumes in the multiple-volume data set to handle larger capacity if needed.

5. Do not define your stand-alone dump volumes as targets of HyperSwap. You could lose a completed stand-alone dump, or a dump could be interrupted in the middle of the dump process.

6. Starting with the RSM Enablement Offering for z/OS V1R13, you can use storage class memory (SCM), available on IBM zEnterprise EC12 servers, for paging. Doing so provides greatly improved paging performance compared to DASD, and provides substantial improvements in SVC dump and stand-alone dump capture time. The dump-capture improvement comes from capturing paged-out data from SCM faster than from DASD page data sets.

While not recommended by IBM, stand-alone dump can also write to a fast tape subsystem. When directing the stand-alone dump to a tape drive, the dump will use only a single device and will not prompt for another device, so you cannot switch back to using a DASD device for that stand-alone dump.

2. Create the multi-volume stand-alone dump data set

Define a stand-alone dump data set using the AMDSADDD utility.¹ Specify a volume list (VOLLIST) in AMDSADDD to designate a list of VOLSERs corresponding to each DASD volume making up the data set. A multi-volume data set will be allocated using the specified list of volumes. The utility uses the device number of the first volume to specify the data set to stand-alone dump. To achieve parallelism when writing the dump, each volume should be on a different LSS. See “IBM System Test Example,” at the end of this paper, for a sample job using AMDSADDD to generate the stand-alone dump data set.

¹ See MVS Diagnosis: Tools and Service Aids (GA22-7589), Chapter 4, for information on the AMDSADDD utility and any other stand-alone dump functions discussed in this paper.


Be sure to catalog your stand-alone dump data set to prevent the possibility of accessing the “wrong version” of the data set when using IPCS COPYDUMP. (Step 5 covers information about IPCS COPYDUMP.)

Alternatively, IPCS offers a “SADMP Dump Data Set Utility,” available from the IPCS Utility menu. From the data set utility panel, specify whether to define, clear, or reallocate the stand-alone dump data set, along with its name and the volume serial numbers for the stand-alone dump data striping. This panel invokes the stand-alone dump allocation program to define the requested data set, and confirms the volume names, device type, and allocated space.

3. Define a dump directory with the right attributes to facilitate post-processing of large stand-alone dumps and large SVC dumps

Use IPCS to consolidate and/or extract ASIDs from the dump, as well as to format and analyze the dump. IPCS uses a dump directory to maintain information about the layout and content of the dump. The dump directory is a VSAM data set and can be tuned to optimize performance. You can improve IPCS performance by reducing the number of Control Interval (CI) splits during initialization and analysis of dumps; do this by specifying the RECORDSIZE parameter of the BLSCDDIR CLIST (shipped in SYS1.SBLSCLI0).

The RECORDSIZE parameter in the BLSCDDIR shipped with z/OS V1R7 was changed to RECORDSIZE(2560 3072), which has proven to yield well-performing CI sizes for the data portion of the data set.² IBM recommends this CI size specification prior to z/OS V1R10.³ It is also recommended to periodically delete old dump references (especially stand-alone dumps) from the directory to allow IPCS to be more efficient in its processing; the IPCS CLIST BLSCDROP may aid in doing that efficiently.

² When IBM System Test uses BLSCDDIR, they specify a CI size of 24,576 and a BUFSPACE of X'100000'.
³ If desired, you can tune the RECORDSIZE parameter by observing the number of CI splits, using standard VSAM data set analysis techniques such as the LISTCAT command.
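For illustration, a dump directory with these attributes might be defined from a TSO session as follows. This is a minimal sketch: the directory data set name is a hypothetical placeholder, RECORDS should be sized for the dumps you expect to process, and it assumes your level of BLSCDDIR accepts the RECORDSIZE keyword as described above.

 EX 'SYS1.SBLSCLI0(BLSCDDIR)' 'DSN(IPCSU1.SADMP.DDIR) +
 RECORDS(90000) RECORDSIZE(2560 3072)'

You can then monitor CI splits in the directory over time with standard VSAM analysis, for example LISTCAT ENT('IPCSU1.SADMP.DDIR') ALL.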

4. Taking the stand-alone dump

Chapter 4 of MVS Diagnosis: Tools and Service Aids contains instructions for IPLing stand-alone dump and minimizing operator interaction to ensure that the dump runs quickly.

The z/OS AutoIPL function, introduced in z/OS V1R10, automates taking a stand-alone dump, re-IPLing z/OS, or both, in lieu of entering a disabled wait state. Enable AutoIPL by coding an AUTOIPL statement in a DIAGxx parmlib member, specifying the device on which the stand-alone dump program resides and the load parameter that z/OS will use to load stand-alone dump, as well as the device and load parameter that stand-alone dump or z/OS will use to re-IPL z/OS. Then point to that DIAGxx member so that the system can read it during IPL, or have the system read it immediately by issuing a SET DIAG=xx command. You still need to generate the SADMP properly, ensure that there is enough output space in a multiple-volume data set, and so on. AutoIPL will automatically trigger a stand-alone dump and re-IPL for all but a few non-restartable wait state codes.
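For illustration, a DIAGxx member that enables AutoIPL might contain a single statement like the following sketch; the SADMP device number and load parameter shown are hypothetical placeholders for your own values, and MVS(LAST) requests that z/OS be re-IPLed with the same parameters as the current IPL.

AUTOIPL SADMP(0E30,SADMPLP1) MVS(LAST)

Issuing SET DIAG=xx against that member makes the specification take effect immediately, as noted above.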

In addition, a stand-alone dump and/or re-IPL can be initiated using the VARY XCF,OFFLINE command. This makes it possible to initiate a dump and/or re-IPL without having to place the system into a non-restartable wait state. The syntax is:

VARY XCF,sysname,OFFLINE,SADMP,REIPL

Do not use AutoIPL in a GDPS environment, since GDPS could cause the system to be reset while a SADMP or re-IPL is in process.

Additional information on these z/OS V1R10 improvements can be found in z/OS Hot Topics Newsletter, Issue 19, August 2008 (“Don’t keep me down!”): http://publibz.boulder.ibm.com/epubs/pdf/e0z2n190.pdf

5. Process the SADMP

Prepare the dump for further processing using IPCS COPYDUMP to enable the SADMP to be accessed more efficiently.

Use IPCS COPYDUMP, not IEBGENER, to produce a merged dump data set from the multi-volume stand-alone dump, and a subset of the original stand-alone dump (ASIDs 1-20). A sample batch job is included at the end of this paper.

a) Ensure the output data set specified to COPYDUMP is DFSMS-striped (with at least eight stripes).

b) Catalog the output dump data set to allow IPCS to access it properly.

c) Send the subset dump to IBM initially (not the full merged dump) using standard FTP or the Problem Documentation Upload Utility.

After the stand-alone dump writes to a multi-volume data set, it needs to be post-processed using the IPCS COPYDUMP utility before IPCS or other tooling can view it efficiently. COPYDUMP reads and processes the multi-volume SADMP faster than IEBGENER by processing the dump volumes in parallel, and produces a merged dump ordered by ASID.

In addition, as of z/OS V2R1, you can specify the INITAPPEND option on the IPCS COPYDUMP command to generate the initialization dump directory file, which is then appended to the specified target dump data set. Because this option requires processing the entire SADMP, avoid using it for subset dumps, since that would increase the processing time required to generate the dump directory entries when the dump is very large; for the same reason, consider not using INITAPPEND when specifying the EASYCOPY option. Using INITAPPEND allows the “directory” to be used when the dump is subsequently opened for analysis in IPCS, shortening the process by a significant amount of time for very large dumps. IPCS automatically detects the pre-built dump directory and uses it, rather than re-initializing the dump. IBM’s internal system test identified a savings of 21 minutes when opening a 44 GB dump, and approximately 1 hour when opening a 278 GB dump. Note that INITAPPEND is not supported prior to z/OS V2R1.

IBM recommends using two COPYDUMP jobs in parallel to produce:

(1) a subset dump, to send to IBM
(2) a full merged dump, saved in case it is needed later by IBM Service (see the sketch below)
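As a sketch of the second of these jobs, the full merge might use a COPYDUMP subcommand like the following, with placeholder data set names; note that INITAPPEND (described above) belongs on this full copy rather than on the subset:

COPYDUMP ODS('FULL MERGED DATASET NAME')
   IDS('INPUT DATASET NAME') NOCONFIRM INITAPPEND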

The subset dump will contain ASIDs 1-20, with the primary system components of the operating system. Additionally, you can now specify a JOBLIST parameter on the COPYDUMP command, which allows you to capture important address spaces that have been restarted into different ASIDs (beyond the first 20) since the system was IPLed. Send the subset dump to IBM using FTP or the Problem Documentation Upload Utility; it will be a smaller data set requiring less time to send. In most cases, the subset dump is sufficient for problem analysis. Keep the merged dump for later use by IBM Service, if necessary. The upload utility is the better choice for sending data set types it supports, since it includes encryption support.

IPCS performance is improved when the dump being processed (the COPYDUMP output) is DFSMS-striped. Placing the dump into a data set with at least eight stripes has shown marked improvement in IPCS response (when IBM is analyzing the problem).

Using COPYDUMP, a subset of ASIDs can be extracted from the full stand-alone dump into a separate data set and sent to IBM. This has been shown to reduce the data transferred by roughly 30% to 40% compared to transmitting a full stand-alone dump. Prior to z/OS V1R10, IBM established that the first 20 ASIDs in the system should be included in the COPYDUMP, as well as the CATALOG and JES ASIDs, if known. This allows the base system address spaces to be extracted from the dump.

Also include in the COPYDUMP any other ASIDs known to be involved in the problem, or requested by IBM Service. Example syntax of the COPYDUMP command for the subset dump is:

COPYDUMP ODS('OUTPUT DATASET NAME')
   EASYCOPY
   JOBLIST(JES2,CATALOG,PROBLEM)
   IDS('INPUT DATASET NAME') NOCONFIRM


Using IPCS COPYDUMP with the EASYCOPY keyword extracts all of the recommended minimum areas from the stand-alone dump, including the suggested list of named system address spaces.

In addition, IPCS provides a panel that allows you to run COPYDUMP against a dump, with the default set of address spaces, more easily. When selecting option 2 in the IPCS Utility Menu, you can specify the following:

- Input dump data set names
- Job names to include in the subset dump
- Names of output subset and full dump data sets

Again, ensure the specified output data set name supports DFSMS striping.

6. Compress and transmit the dump to IBM

Prior to z/OS V1R13, the recommended approach for compressing and sending the stand-alone dump used a pair of utilities available with z/OS: AMATERSE for compression and PUTDOC to transmit the data to IBM.

a. AMATERSE compresses the dump, providing about a 2-5x reduction in the size of the resulting data set sent via FTP. Some z/OS customers require the data sent to IBM be encrypted; the compacted and encrypted data is then sent to IBM. Customers typically place the necessary decryption information in the PMR for IBM Service to use. For more information on the use of IBM’s Encryption Facility, for those customers not able to use secure FTP, see http://www5.ibm.com/de/support/ecurep/mvs_encryption.html.
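For reference, a minimal AMATERSE job might look like the following sketch; the data set names and space values are placeholders. AMATERSE reads the dump from SYSUT1 and writes the tersed copy to SYSUT2 (the receiving side unpacks it with PARM=UNPACK).

//TERSE    EXEC PGM=AMATERSE,PARM=PACK
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DISP=SHR,DSN=SOURCE.OF.DUMP
//SYSUT2   DD DISP=(NEW,CATLG),DSN=OUTPUT.DUMP.TRSD,
//            SPACE=(CYL,(PPPPP,SSSS),RLSE)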

b. PUTDOC transmits the data to IBM, allowing the dump to be split into multiple files and then sent by FTP. By sending a large dump in smaller segments, recovery of the data in the event of a network failure is limited to the segment in process. For complete instructions on the use of the PUTDOC facility, see: http://techsupport.services.ibm.com/server/nav/zSeries/putdoc/putdoc.html

The Problem Documentation Upload Utility (AMAPDUPL)

Starting with z/OS V1R13, IBM highly recommends use of the z/OS Problem Documentation Upload Utility (PDUU) to handle the compression, encryption, and transfer of large dump data sets. The function, invoked as AMAPDUPL, splits the stand-alone dump data set into multiple parts and FTPs them in parallel to any of the IBM Service FTP destinations. Configure the utility to specify parameters such as the target system, number of parallel FTP sessions, PMR number, encryption key, and information for traversing firewall or proxy servers. Additional information on planning to use the upload utility can be found in Chapter 18 of MVS Diagnosis: Tools and Service Aids.

Prior to V1R13, PDUU is available as an optional download. For more information, see: ftp://ftp.software.ibm.com/s390/mvs/tools/mtftp/e0zk1ftpt.pdf


7. Set up remote access for AOS to allow remote viewing of your dump by IBM in time-critical situations.

Remote access products allow customers to permit IBM Service personnel to immediately log into an IPCS session and view available documentation with no initial data transfer. Choices include:

- OnTop, for EMEA customers
- Assist On-site (AOS), available worldwide (http://www.ibm.com/support/assistonsite)

Remote viewing should always be the first option in a time-critical situation. Rapid viewing of the documentation has the advantage of allowing IBM Service to itemize or customize any additional documentation they may want sent to IBM for the given situation. If documentation is still required to be sent, the analysis of the current documentation can continue while the requested documentation is in transit. In many cases, the complete stand-alone dump is not required for diagnosis of a given problem; therefore, sending a subset of the stand-alone dump to IBM may prove sufficient for problem resolution.

8. Test stand-alone dump setup

It is critical for the operations staff to train and practice taking a stand-alone dump on occasion, to become familiar with the procedure and to ensure all data sets are set up properly before you run into a critical situation. This includes the setup and process for:

- taking a stand-alone dump as part of the standard scheduled shutdown of an LPAR
- using COPYDUMP to obtain the merged dump and the subset dump
- sending the resulting dump data set via the upload utility

If the SADMP resides in a DASD dump data set, IBM recommends copying the dump to another data set for IPCS processing, and then clearing (reinitializing) the dump data set using the AMDSADDD or IPCS stand-alone dump data set utilities. For more information, see “Using the AMDSADDD utility” in topic 4.2.3 of MVS Diagnosis: Tools and Service Aids, and the Utility option on the IPCS Dialog in z/OS MVS IPCS User’s Guide (SA22-7596).
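As an example, clearing the data set can be run as a batch TSO step patterned on the DEFINE example in the appendix. This is a sketch only, assuming the appendix volume list and data set name; AMDSADDD operands vary by function and release, and the utility can also be run interactively under TSO, prompting for each value, so confirm the exact syntax in MVS Diagnosis: Tools and Service Aids.

//CLEAR    EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 EXEC 'SYS1.SBLSCLI0(AMDSADDD)' 'CLEAR (SAD041,SAD042,SAD043,SAD044,SA+
D045,SAD046,SAD047,SAD048,SAD049,SAD050,SAD051,SAD052)(PETDUMP.J80L12.+
SADUMP.DSS0.STRIPE) YES'
/*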

The best practice is to rehearse taking a stand-alone dump during scheduled Disaster Recovery rehearsals. Other opportunities include migrating to a new z/OS release or moving to a new server. Another option is using a test LPAR for training the operations staff; one part of that training should be to take a stand-alone dump following local procedures and to be prepared to react to stand-alone dump messages.

Also test the use of the Problem Documentation Upload Utility. Since the upload utility does not have access to DD statements like SYSFTPD and SYSTCPD, which may appear in a normal FTP batch job, it makes sense to test and troubleshoot local network or firewall configurations prior to the need to send a stand-alone dump to IBM.


Summary

The intent of this paper is to describe current best practices in setting up for a stand-alone dump, taking the dump, and post-processing the result. Hopefully, this results in one less impediment to getting a quick resolution to your reported problem.

The following is a summary of the steps described in this paper:

1. Define the stand-alone dump data set as a multi-volume DASD dump group, with each volume on a separate control unit.

2. Use a modern control unit for the stand-alone dump volumes, such as the IBM Enterprise Storage Subsystem, as opposed to non-ESS DASD or dumping to tape.

3. Optimize performance of the IPCS dump directory. Begin with the parameter RECORDSIZE(2560 3072) in BLSCDDIR when defining the directory.

4. Ensure that the stand-alone dump data set is cataloged.

5. Use the IPCS COPYDUMP command to extract a subset of data to transmit to IBM (ASIDs 1-20 and named address spaces).

6. Ensure that the full merged output SADMP data set from COPYDUMP is a striped data set with at least eight stripes. Specify the INITAPPEND option to save the dump directory in the dump data set, enabling the stand-alone dump to be opened more efficiently.

7. Use the Problem Documentation Upload Utility to compress, encrypt, and transmit (in parallel files) large dump data sets to IBM. Prior to z/OS V1R13, use the downloadable PDUU tool.

8. Continue to maintain remote access capabilities such as AOS. The fastest way to get the documentation to IBM Service is for IBM Service to analyze the documentation remotely.

9. Test SADMP procedures to ensure all settings remain accurate, and train operations staff on taking the dump. Consider establishing AUTOIPL to generate a SADMP.


Appendix

IBM System Test Example

In a recent set of tests performed by IBM’s zSeries Product Evaluation Test, a 12-volume configuration was set up to support a stand-alone dump of a system with 152 GB of real memory:

Three ESS (SHARK) subsystems were used:
o ESS 2105 F20 - 2 FICON 2 Gb CHPs, 8 GB cache
o ESS 2105 mod. 800 - 8 FICON 2 Gb CHPs, 8 GB cache
o DS6000 1750 mod. 511 - 6 FICON 2 Gb CHPs, 1.3 GB cache

Four volumes per box:
o Each volume is defined on a unique LSS
o Each volume is defined as 14902 cylinders
o New DSTYPE=LARGE attribute used

Here is an example of the AMDSADDD JCL used to exploit the above DASD configuration:

//STEP1    EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 EXEC 'SYS1.SBLSCLI0(AMDSADDD)' 'DEFINE (SAD041,SAD042,SAD043,SAD044,SA+
D045,SAD046,SAD047,SAD048,SAD049,SAD050,SAD051,SAD052)(PETDUMP.J80L12.+
SADUMP.DSS0.STRIPE) 3390 14902 YES LARGE'
/*

Automating the process

This sample JCL can help automate several of the best practice processes described earlier. The result is two steps to run as background jobs:

1. Invoke IPCS COPYDUMP to merge the data and produce a single data set to send to IBM. Because the JCL requires invocation of IPCS in a background TSO environment, it is not possible to obtain condition code information from the COPYDUMP “step” to determine whether the preparation step should be invoked. The results of the COPYDUMP step must therefore be examined manually.

2. Invoke the “Preparation” job, which terses the output data set produced by COPYDUMP, encrypts the tersed version, and sends the final result to IBM using PUTDOC.

Sample JCL to drive the entire post-processing

Post-processing of a stand-alone dump needs to occur in two steps:


- Run IPCS COPYDUMP to merge the data and produce a single data set to send to IBM. Examine the output from the run step to ensure that the COPYDUMP ran correctly. This JCL is identified below as “=== IPCS COPYDUMP ====”.
- Send the dump using the Problem Documentation Upload Utility (AMAPDUPL). This JCL is identified below as “=== PDUU ====”.
- Alternatively (prior to z/OS V1R13), run the JCL that terses the resulting (striped) dump data set, encrypts the tersed version, and FTPs it to IBM using PUTDOC. This JCL is identified below as “=== TERSE, ENCRYPT and FTP ====”.

Tailor the following JCL to process the data sets for transmission to the FTP server.

Note: Be sure to TURN OFF line numbering in the following jobs.

1) === IPCS COPYDUMP ====

//IPCSCPYD JOB MSGLEVEL=(2,1),....
// CLASS=V,NOTIFY=&SYSUID.,MSGCLASS=H
//*********************************************************************
//* IN DD IS USED TO POINT TO THE SOURCE OF INPUT WHICH WOULD BE
//* THE SYS1.SADMP... DATASET
//* OUT DD IS USED TO POINT TO THE OUTPUT OF THE COPYDUMP
//* WHERE PPPPP SHOULD BE THE NUMBER OF CYLINDERS FOR PRIMARY
//* SSSS SHOULD BE THE NUMBER OF CYLINDERS FOR SECONDARY
//* &DATACLAS SHOULD BE THE DATACLAS
//* &MGMTCLAS SHOULD BE THE MGMTCLAS
//* &STORCLAS SHOULD BE THE STORCLAS
//* IPCSDDIR DD DEFINING &SYSUID..COPYDUMP.DDIR WITH NON-COMPRESS
//* DATACLAS
//* COPYDUMP SUBCOMMAND TO REQUEST FIRST 20 ADDRESS SPACES
//* IF JES OR CATALOG WERE NOT AMONG THE FIRST 20 ADDRESS SPACES
//* XXX AND YYY SHOULD BE USED FOR THESE TWO SUBSYSTEM ASIDS
//*********************************************************************
//RUN EXEC PGM=IKJEFT01,REGION=200096K,DYNAMNBR=50
//IPCSPRNT DD SYSOUT=H
//IPCSTOC DD SYSOUT=H
//IPCSPARM DD DISP=SHR,DSN=SYS1.PARMLIB
//SYSTSPRT DD SYSOUT=H
//IN DD DISP=SHR,DSN=SYS1.SADMP.....
//OUT DD DISP=(NEW,CATLG),DSN=OUTPUT.DATASET.NAME,
// SPACE=(CYL,(PPPPP,SSSS),RLSE),DATACLAS=&DATACLAS.,
// MGMTCLAS=&MGMTCLAS.,STORCLAS=&STORCLAS
//SYSTSIN DD *
 EX 'SYS1.SBLSCLI0(BLSCDDIR)' 'DSN(&SYSUID..COPYDUMP.DDIR) +
 RECORDS(90000) DATACLAS(NOCOMP) MGMTCLAS(DMGDEBUG)'
 IPCS NOPARM
 COPYDUMP IFILE(IN) OFILE(OUT) ASIDLIST(1:20,XXX,YYY) NOCONFIRM +
 INITAPPEND
 END
/*

2) === PDUU ====

Chapter 18 of MVS Diagnosis: Tools and Service Aids contains a number of examples of using the z/OS PDUU to FTP the dump data set through a variety of proxy server configurations. The following is example JCL to send a large dump data set to IBM using a simple FTP connection:

//FTP EXEC PGM=AMAPDUPL
//SYSUDUMP DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DISP=SHR,DSN=H44IPCS.WESSAMP.TRKS055K
//SYSIN DD *
USERID=anonymous
PASSWORD=anonymous
TARGET_SYS=testcase.boulder.ibm.com
TARGET_DSN=wessamp.bigfile
WORK_DSN=wes.ftpout
CC_FTP=03
WORK_DSN_SIZE=500
DIRECTORY=/toibm/mvs/
PMR=12345.123.123
//

3) ==== TERSE, ENCRYPT and FTP ====

//TRENCFTP JOB CLASS=I,......
// NOTIFY=&SYSUID.
//JOBLIB DD DISP=SHR,DSN=PDS_WITH_TERSE_ENCRYP_PGM
//TERSE EXEC PGM=TRSMAIN,PARM=PACK
//SYSPRINT DD SYSOUT=H
//INFILE DD DISP=SHR,DSN=SOURCE_OF_DUMP
//OUTFILE DD DISP=(NEW,CATLG),
// DSN=&SYSUID..PMR....TRSD,
// UNIT=SYSDAL,
// DATACLAS=COMPRESS,
// SPACE=(CYL,(PPPPP,SSSS),RLSE)
//DECRYPT EXEC PGM=FTPENCRD,PARM='PASSCODE',COND=(0,NE)
//SYSOUT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//FIN DD DISP=SHR,DSN=*.TERSE.OUTFILE
//FOUT DD DSN=&SYSUID..PMR.....TRSENCRP,
// DCB=(DSORG=PS,RECFM=FB,LRECL=1024),
// DISP=(NEW,CATLG),UNIT=SYSDAL,
// DATACLAS=COMPRESS,
// SPACE=(CYL,(PPPPP,SSSS),RLSE)
//FTPSTEP EXEC PGM=FTP,REGION=5000K,
// PARM='TESTCASE.BOULDER.IBM.COM (EXIT',COND=(00,NE)
//STEPLIB DD DISP=SHR,DSN=SYS1.TCPIP.SEZALINK
//*SYSMDUMP DD SYSOUT=*
//SYSPRINT DD SYSOUT=H
//OUTPUT DD SYSOUT=H
//INPUT DD *
ANONYMOUS
YOUR_EMAIL@
cd toibm/mvs
bin
PUT PMR......TRSENCRP PMR.......TRS.ENCRP64
quit
/*