
IBM DB2 Information Integrator

Getting Started with Classic Event Publishing

Version 8.2

GC18-9186-02

Before using this information and the product it supports, be sure to read the general information under "Notices" on page 65.

This document contains proprietary information of IBM. It is provided under a license agreement and copyright law protects it. The information contained in this publication does not include any product warranties, and any statements provided in this manual should not be interpreted as such.

You can order IBM publications online or through your local IBM representative:
v To order publications online, go to the IBM Publications Center at www.ibm.com/shop/publications/order
v To find your local IBM representative, go to the IBM Directory of Worldwide Contacts at www.ibm.com/planetwide

When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.

© Copyright International Business Machines Corporation 2003, 2004. All rights reserved.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
© CrossAccess Corporation 1993, 2003

Contents

Chapter 1. Overview of setting up and starting Classic Event Publisher . . . 1
  Introduction to Classic Event Publisher . . . 1
  Overview of configuring to capture information . . . 2
  Overview of configuring to publish information . . . 3
  Overview of starting Classic Event Publisher . . . 4

Chapter 2. Preparing data and configuring change-capture agents for CA-IDMS . . . 5
  Setup procedures for CA-IDMS sources . . . 5
  Enabling CA-IDMS change-capture . . . 5
    Punching the schema and subschema . . . 6
    Mapping the CA-IDMS schema and subschema . . . 6
    Loading the data server metadata catalog . . . 9
  Activating change-capture in a CA-IDMS Central Version . . . 9
    Modifying the Central Version JCL . . . 11
    Modifying automatic journaling . . . 11
    Configuring a named server environment . . . 11
  Relinking the CA-IDMS database I/O module IDMSDBIO . . . 12
  Relinking the presspack support module . . . 12
  Setting up a server to access a CA-IDMS Central Version . . . 13

Chapter 3. Preparing data and configuring change-capture agents for IMS . . . 15
  Supported environments and program types . . . 15
  Enabling IMS change capture . . . 15
    Mapping the sample IMS DBD and copybooks . . . 15
    Loading the metadata catalogs . . . 22
  Installing the IMS active change-capture agent . . . 23
    Advantages and disadvantages of the IMS logger exit installation options . . . 24
  Adding the IMS logger exit to an existing exit . . . 25
  Augmenting a DBD to generate IMS data capture log records . . . 25

Chapter 4. Preparing data and configuring change-capture agents for VSAM . . . 27
  Prerequisites for VSAM monitoring . . . 27
  Setup procedures for CICS monitoring for VSAM changes . . . 27
  Configuring CICS resource definitions . . . 28
    VTAM resource definitions . . . 28
    CICS resource definitions . . . 29
  Mapping VSAM data . . . 30
    Mapping the sample VSAM copybook . . . 30
  Loading the metadata catalogs . . . 34
  Configuring change-capture agents for VSAM . . . 35

Chapter 5. Configuring correlation services, publication services, and publications . . . 37
  Copying the correlation service JCL . . . 37
  Configuring the correlation service and publication service . . . 37
  Configuring the maximum size of messages . . . 40
  Configuring Cross Memory services . . . 41
  Creating publications . . . 41
  Creating the Classic Event Publisher recovery data sets . . . 43

Chapter 6. Starting the processes of capturing and publishing . . . 45
  Starting the process of publishing . . . 45
  Activating change-capture for CA-IDMS . . . 45
    Setting up the IDMSJNL2 exit . . . 45
    Before starting a change-capture agent . . . 46
    Starting an active change-capture agent . . . 46
  Activating change capture for an IMS database/segment . . . 46
  Activating change capture for VSAM . . . 47
  Monitoring correlation services and publication services . . . 47

Chapter 7. Recovering from errors . . . 49
  Introduction to recovery mode . . . 49
  Starting a recovery change-capture agent for CA-IDMS . . . 50
    Parameter example . . . 51
    Execution JCL . . . 52
    Journal files in execution JCL . . . 52
  Preparing for recovery mode when using IMS change-capture agents . . . 52
  Recovering from errors when using IMS change-capture agents . . . 53
  Starting recovery change-capture agents for VSAM . . . 54
  Stopping recovery change-capture agents for VSAM . . . 54

DB2 Information Integrator documentation . . . 55
  Accessing DB2 Information Integrator documentation . . . 55
  Documentation about replication function on z/OS . . . 57
  Documentation about event publishing function for DB2 Universal Database on z/OS . . . 58
  Documentation about event publishing function for IMS and VSAM on z/OS . . . 58
  Documentation about event publishing and replication function on Linux, UNIX, and Windows . . . 59
  Documentation about federated function on z/OS . . . 60
  Documentation about federated function on Linux, UNIX, and Windows . . . 60
  Documentation about enterprise search function on Linux, UNIX, and Windows . . . 62
  Release notes and installation requirements . . . 62

Notices . . . 65
  Trademarks . . . 67

Index . . . 69

Contacting IBM . . . 71
  Product information . . . 71
  Comments on the documentation . . . 71

Chapter 1. Overview of setting up and starting Classic Event Publisher

The following topics briefly introduce Classic Event Publisher and how to configure and start it:
v Introduction to Classic Event Publisher
v Overview of configuring to capture information
v Overview of configuring to publish information
v Overview of starting Classic Event Publisher

Introduction to Classic Event Publisher

Classic Event Publisher captures and publishes committed changes made to CA-IDMS databases, to IMS™ databases, or to VSAM files. Captured committed changed-data items are first reformatted into a consistent relational data format and then translated into a message in an Extensible Markup Language (XML) format. The XML messages are then published to WebSphere® MQ message queues, from which they can be read by an application working with a WebSphere MQ server, a WebSphere MQ client, or WebSphere Business Integration Event Broker.

Each message contains changes from a single type of data source (for example, only CA-IDMS changes, IMS changes, or VSAM changes). Each message can contain an entire transaction or only a row-level change.
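As an illustration of what a row-level change message might carry (table name, operation type, and column values), the following sketch builds and parses a miniature XML message. The element and attribute names here are invented for this example; the actual message schema used by Classic Event Publisher is defined in its reference documentation and differs from this sketch.

```python
# Hypothetical sketch of a row-level change message. The real XML schema
# used by Classic Event Publisher differs; element names here are invented.
import xml.etree.ElementTree as ET

def build_row_change(table, operation, columns):
    """Build a minimal XML document describing one row-level change."""
    msg = ET.Element("rowChange", {"table": table, "operation": operation})
    for name, value in columns.items():
        col = ET.SubElement(msg, "column", {"name": name})
        col.text = value
    return ET.tostring(msg, encoding="unicode")

xml_text = build_row_change(
    "CAC.EMPLOYEE", "UPDATE",
    {"EMP_ID_0415": "1234", "EMP_LAST_NAME_0415": "SMITH"},
)
parsed = ET.fromstring(xml_text)
```

A consuming application on the WebSphere MQ side would parse each message in this general fashion to recover the change details.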

You can control which fields within which CA-IDMS files, IMS segments, or VSAM files will be monitored for changes by using a metadata catalog to identify the specific data items to be captured and published. This metadata catalog also defines how the individual data items are to be reformatted into relational data types. This relational mapping results in "logical" CA-IDMS, IMS, and VSAM tables.

You can use Classic Event Publisher to push data changes to a variety of tools. The most common consumers of changed data will be information brokers, data warehousing tools, workflow systems, and enterprise application integration (EAI) solutions. Consider a scenario in which changing prices and inventory are published to potential buyers. For example, a food wholesaler procures perishable food products such as bananas from world markets in bulk and sells them to grocery food retailers and distributors.

The value of bananas decreases the longer that they are in the warehouse. The wholesaler wants to inform its potential buyers of the changing price and inventory data and can set up event publishing to do that. Each time the price changes, an XML message can be sent to potential buyers, informing them of the "price change event."

Each buyer (retailer or distributor) wants to maximize profit. These organizations can determine when to buy the bananas based upon price, age (or time to spoil), and historical knowledge regarding how quickly they can sell a certain quantity. If they purchase too early, they will pay a higher price for the bananas and will not achieve the maximum profit. Buying too late will likely result in spoilage, and again profit will not be maximized. Profit maximization can be achieved by timing the purchase appropriately. Applications can receive the event notification messages and generate purchase orders automatically at the right time to maximize profit.

Overview of configuring to capture information

For each data source, there is a change-capture agent that captures changes made to the source database or file that is being monitored. A change-capture agent runs on the same machine as your data source. The change-capture agent monitors the logs that are associated with your database (or, in the case of VSAM, the CICS® logs), looking for actions that affect the designated data. All log records that are associated with logical INSERT, DELETE, and UPDATE operations are checked against the metadata that defines which CA-IDMS files, IMS segments, and VSAM records are to be monitored. The raw log data for monitored segments and records is then delivered to Classic Event Publisher's correlation service for additional handling.

The correlation service collects information from the change-capture agents and segregates the log data by unit-of-work identifiers. If a ROLLBACK occurs, the correlation service discards all of the data collected for that unit of work. When a COMMIT occurs, the correlation service processes all of the log data for that unit of work.
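This commit-scoped buffering can be sketched in a few lines. The following is a simplified model of the rule just described, not the actual correlation service internals; the record and identifier shapes are invented for illustration.

```python
# Simplified model of unit-of-work handling: buffer log records per UOW id,
# discard the buffer on ROLLBACK, release it for processing on COMMIT.
# Illustration only, not the Classic Event Publisher implementation.
from collections import defaultdict

class UnitOfWorkBuffer:
    def __init__(self):
        self._pending = defaultdict(list)

    def add(self, uow_id, log_record):
        self._pending[uow_id].append(log_record)

    def rollback(self, uow_id):
        # All changes collected for this unit of work are discarded.
        self._pending.pop(uow_id, None)

    def commit(self, uow_id):
        # All changes for this unit of work become eligible for publishing.
        return self._pending.pop(uow_id, [])

buf = UnitOfWorkBuffer()
buf.add("UOW1", {"op": "INSERT", "table": "CAC.EMPLOYEE"})
buf.add("UOW2", {"op": "UPDATE", "table": "CAC.EMPLOYEE"})
buf.rollback("UOW2")
committed = buf.commit("UOW1")
```

Note that rolled-back work never reaches the publishing path: only records released by `commit` continue to the COMMIT-related processing described next.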

The correlation service's COMMIT-related processing reformats the data in the log records into a relational format represented by one or more SQL Descriptor Areas (SQLDAs). This reformatting ensures that all captured data changes are consistently formatted before they are packaged for delivery. The SQLDAs are then passed to the publication service that will handle the transformation into XML and the delivery to WebSphere MQ.

The key to all of this processing is the metadata that is stored in the Classic Event Publisher's metadata catalog. You use Classic Event Publisher's GUI administration tool, Data Mapper, to define the metadata that tells Classic Event Publisher which IMS or VSAM data is to be monitored for changes, as well as how to reformat the IMS segment and VSAM record data into "logical relational table" format (that is, SQLDAs). The metadata defined in the Data Mapper is then exported to the z/OS® platform as USE grammar that is used as input to a metadata utility. The metadata utility creates or updates the metadata stored in the Classic Event Publisher's metadata catalog.

The following steps provide an overview of how to configure Classic Event Publisher to capture changes made to source data:

1. Configure change-capture agents for your source databases and populate the metadata catalog with information about your source data.

   For the steps required to configure change-capture agents for CA-IDMS source databases and to populate the metadata catalog with information about your CA-IDMS logical tables, see the following topics:
   v Setup procedures for CA-IDMS sources
   v Enabling CA-IDMS change-capture
   v Relinking the CA-IDMS database I/O module IDMSDBIO
   v Relinking the presspack support module
   v Setting up a server to access a CA-IDMS Central Version

   For the steps required to configure change-capture agents for IMS source databases and to populate the metadata catalog with information about your IMS logical tables, see the following topics:
   v Enabling IMS change capture
   v Installing the IMS active change-capture agent
   v Adding the IMS logger exit to an existing exit
   v Augmenting a DBD to generate IMS data capture log records

   For the steps required to configure change-capture agents for VSAM source files being updated through CICS and to populate the metadata catalog with information about the VSAM logical tables, see the following topics:
   v Prerequisites for VSAM monitoring
   v Setup procedures for CICS monitoring for VSAM changes
   v Configuring CICS resource definitions
   v Mapping VSAM data
   v Loading the metadata catalogs
   v Configuring change-capture agents for VSAM

2. Configure the correlation service.

   For the steps required to configure the correlation service, see the following topics:
   1. Copying the correlation service JCL
   2. Configuring the correlation service and publication service
   3. Configuring Cross Memory services
   4. Creating the Classic Event Publisher recovery data sets

Overview of configuring to publish information

A service called the publication service receives validated change-capture information from the correlation service. The publication service matches the change-capture information to publications, creates XML messages containing the change-capture information, and writes the messages to WebSphere MQ message queues. A publication tells the publication service the following information:
v The owner and name of a source logical table from which you want to publish changed data
v Which data you want to publish from the source logical table
v The format of the messages that will contain the changes
v Which WebSphere MQ message queue you want to publish the changes to
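The four pieces of information above can be modeled as a simple record. This is an illustrative sketch only; the field names and the sample queue name are invented here and do not match the product's configuration syntax.

```python
# Illustrative model of what a publication definition carries. Field names
# and the sample queue name are invented for this sketch; they are not the
# product's configuration syntax.
from dataclasses import dataclass

@dataclass
class Publication:
    table_owner: str      # owner of the source logical table
    table_name: str       # name of the source logical table
    columns: tuple        # which data to publish from the table
    message_format: str   # format of the messages containing the changes
    target_queue: str     # WebSphere MQ queue to publish the changes to

pub = Publication(
    table_owner="CAC",
    table_name="EMPLOYEE",
    columns=("EMP_ID_0415", "EMP_LAST_NAME_0415"),
    message_format="XML",
    target_queue="CEP.EMPLOYEE.CHANGES",
)
```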

The following steps provide an overview of how to configure DB2® Information Integrator Classic Event Publisher to publish information:

1. Configure WebSphere MQ objects.

   The publication service needs to work with one WebSphere MQ queue manager and one or more persistent message queues. The type and number of message queues that you create depends on the number of target applications and where those applications are running. For example, if you have one target application running on the same system as the publication service, you can use a local queue to send messages to the target application.

   A publication service also needs one persistent local queue that can be used as an in-doubt resolution queue. The publication service populates this queue with messages that detail its progress. If a publication fails for any reason, the publication service looks in the in-doubt resolution queue for information about the last unit of work that was in a message that was committed to a message queue.

   For more information about setting up WebSphere MQ objects, see the DB2 Information Integrator Planning Guide for Classic Event Publishing.

2. Configure the publication service.

   For the steps required to configure the publication service, see Configuring the correlation service and publication service.

3. Configure publications.

   For the steps required to configure publications, see Creating publications.

Overview of starting Classic Event Publisher

When you are ready to start capturing changes from your source logical tables and publishing them to WebSphere MQ, you first start the publishing process and then the change-capture agents. This order ensures that you do not lose any captured data.

The following steps provide an overview of how you start capturing and publishing data with Classic Event Publisher:

1. Be sure that WebSphere MQ is running.
2. Start the correlation service, the publication service, and the publications.

   For the steps required to start publishing, see Starting the process of publishing.

3. Start the change-capture agents.

   For the steps required to start change-capture agents, see the following topics:
   1. Activating change-capture in a CA-IDMS Central Version
   2. Activating change capture for an IMS database/segment
   3. Activating change capture for VSAM

Chapter 2. Preparing data and configuring change-capture agents for CA-IDMS

The following topics explain how to configure change-capture agents for CA-IDMS source databases:
v Setup procedures for CA-IDMS sources
v Enabling CA-IDMS change-capture
v Activating change-capture in a CA-IDMS Central Version
v Relinking the CA-IDMS database I/O module IDMSDBIO
v Relinking the presspack support module
v Setting up a server to access a CA-IDMS Central Version

Setup procedures for CA-IDMS sources

The following pages describe this process using a sample CA-IDMS database called the Employee Demo Database. This database is part of the CA-IDMS installation. The steps outlined in this chapter are the same steps used to enable change-capture for your own CA-IDMS databases.

The process includes the following steps:
v Punch the schema and subschema.
v Map the CA-IDMS schema and subschema using the Data Mapper to create logical tables.
v Load the metadata catalog with the logical tables.

Note: It is assumed that DB2 II Classic Event Publisher has been installed on the mainframe and that the Data Mapper is installed on a workstation.

The process is described in detail in the next section.

Note: For additional information about developing and deploying applications with DB2 II Classic Event Publisher, see the DB2 Information Integrator Operations Guide for Classic Event Publishing.

Enabling CA-IDMS change-capture

Your CA-IDMS installation should have a sample database called the Employee Demo Database. The schema named EMPSCHM and subschema named EMPSS01 identify this demo database. You can use the process described in this section to bring your own CA-IDMS databases online.

Note: In all the jobs that follow, you need to customize the JCL for your site. This customization includes concatenating CA-IDMS-specific libraries provided by the vendor. Templates for these libraries are included in the JCL. You need to uncomment the libraries and provide the appropriate high-level qualifiers.


Punching the schema and subschema

The SCACSAMP data set on the mainframe contains a member called CACIDPCH. This member contains sample JCL that you can use to punch the schema and subschema.

To punch the schema and subschema:
1. Customize the JCL in member CACIDPCH to run in your environment.
2. Submit the JCL.

By default, this job creates two members in the SCACSAMP data set called CACIDSCH and CACIDSUB. These newly-created members contain the schema and subschema for the Employee Demo Database.

Mapping the CA-IDMS schema and subschema

The SCACSAMP data set on the mainframe contains the schema and subschema that are created after you submit the CACIDPCH member and the job completes successfully. This schema and subschema describe the Employee Demo Database. For more detailed information on data mapping, see the IBM DB2 Information Integrator Data Mapper Guide for Classic Federation and Classic Event Publishing.

To map the CA-IDMS schema and subschema:
1. Upload the CACIDSCH and CACIDSUB members from the SCACSAMP data set to the directory of your choice on the workstation where the Data Mapper is installed. As you upload these members, you must rename them with the following file names:
   v cacidsch.sch
   v cacidsub.sub
2. From the Windows server, click Start --> IBM Data Mapper.
3. Select File --> Open Repository, select the Sample.mdb repository in the xadata directory, and click Open.
4. Select Edit --> Create a new Data Catalog.
5. Type the following information in the window:
   v Name: CUSTOMER SAMPLE - IDMS
   v Type: IDMS
   v Select the Capture check box to mark the tables for monitoring.
6. Click OK.
7. Select File --> Load CA-IDMS Schema for Reference. The Load CA-IDMS Schema File window appears.
8. From the Open window, select the schema that you uploaded from the mainframe (cacidsch.sch). You are prompted to load the subschema as well. The Load Complete message window appears. Click Yes, and the Load CA-IDMS Schema File window appears so that you can select the subschema (cacidsub.sub). After the subschema is successfully loaded, the Load Complete message window appears. Click OK to complete the import process.
9. Highlight the IDMS data catalog and select Window --> List Tables. The IDMS Tables for Data Catalog window appears. Because this is a new data catalog, the list of tables is empty.
10. Select Edit --> Create a new Table.


Complete the following steps to create a logical table for the CA-IDMS EMPLOYEE record:
   a. Leave the Name field blank. The logical table is assigned the default name of EMPLOYEE when you select the EMPLOYEE record in the Record Name field.
   b. Select CAC from the Owner field.
   c. Type the database name of the sample employee database (EMPDEMO) or the database of the file you are mapping.
   d. Select EMPLOYEE from the Record Name field.
   e. Click OK.

You are now ready to import the field definitions for the EMPLOYEE record from the currently-loaded schema.

11. Select File --> Import External File. The Import From Schema message window asks whether you would like to import from the existing schema. Click Yes. The Import - CA-IDMS Record Select window appears.
12. Ensure that the EMPLOYEE record appears to the right of the Select Records for Import field and click Continue. The Import Copybook window appears. The CA-IDMS Element definitions contained in the schema for the record appear in the window. These definitions are converted to SQL column definitions.
13. Click Import. The Columns for CA-IDMS Table EMPLOYEE window appears.
14. Close the Columns for CA-IDMS Table EMPLOYEE window.
15. Close the IDMS Tables for Data Catalog CUSTOMER SAMPLE - IDMS window. At this point, you should be back at the list of data catalogs window, entitled Sample.mdb.
16. Ensure that the data catalog CUSTOMER SAMPLE - IDMS is highlighted and select File --> Generate USE Statements.
17. Select a file name for the generated statements to be stored on the workstation (for example, idms.use) and click OK.

After generation is complete, you can view the USE grammar in Windows Notepad or click Yes when the Data Catalog USE Generation Results window appears. Your completed USE grammar will look similar to the following example. The ALTER TABLE statement at the end notifies DB2 II Classic Event Publisher to capture changes for a logical table definition.

DROP TABLE CAC.EMPLOYEE;
USE TABLE CAC.EMPLOYEE DBTYPE IDMS EMPSCHM
SUBSCHEMA IS EMPSS01 VERSION IS 100
DBNAME IS EMPDEMO
PATH IS ( EMPLOYEE )
(
/* COBOL Name EMP-ID-0415 */
EMP_ID_0415 SOURCE DEFINITION ENTRY
EMPLOYEE EMP-ID-0415
USE AS CHAR(4),
/* COBOL Name EMP-FIRST-NAME-0415 */
EMP_FIRST_NAME_0415 SOURCE DEFINITION ENTRY
EMPLOYEE EMP-FIRST-NAME-0415
USE AS CHAR(10),
/* COBOL Name EMP-LAST-NAME-0415 */
EMP_LAST_NAME_0415 SOURCE DEFINITION ENTRY
EMPLOYEE EMP-LAST-NAME-0415
USE AS CHAR(15),
/* COBOL Name EMP-STREET-0415 */
EMP_STREET_0415 SOURCE DEFINITION ENTRY
EMPLOYEE EMP-STREET-0415
USE AS CHAR(20),
/* COBOL Name EMP-CITY-0415 */
EMP_CITY_0415 SOURCE DEFINITION ENTRY
EMPLOYEE EMP-CITY-0415
USE AS CHAR(15),
/* COBOL Name EMP-STATE-0415 */
EMP_STATE_0415 SOURCE DEFINITION ENTRY
EMPLOYEE EMP-STATE-0415
USE AS CHAR(2),
/* COBOL Name EMP-ZIP-FIRST-FIVE-0415 */
EMP_ZIP_FIRST_FIVE_0415 SOURCE DEFINITION ENTRY
EMPLOYEE EMP-ZIP-FIRST-FIVE-0415
USE AS CHAR(5),
/* COBOL Name EMP-ZIP-LAST-FOUR-0415 */
EMP_ZIP_LAST_FOUR_0415 SOURCE DEFINITION ENTRY
EMPLOYEE EMP-ZIP-LAST-FOUR-0415
USE AS CHAR(4),
/* COBOL Name EMP-PHONE-0415 */
EMP_PHONE_0415 SOURCE DEFINITION ENTRY
EMPLOYEE EMP-PHONE-0415
USE AS CHAR(10),
/* COBOL Name STATUS-0415 */
STATUS_0415 SOURCE DEFINITION ENTRY
EMPLOYEE STATUS-0415
USE AS CHAR(2),
/* COBOL Name SS-NUMBER-0415 */
SS_NUMBER_0415 SOURCE DEFINITION ENTRY
EMPLOYEE SS-NUMBER-0415
USE AS CHAR(9),
/* COBOL Name START-YEAR-0415 */
START_YEAR_0415 SOURCE DEFINITION ENTRY
EMPLOYEE START-YEAR-0415
USE AS CHAR(4),
/* COBOL Name START-MONTH-0415 */
START_MONTH_0415 SOURCE DEFINITION ENTRY
EMPLOYEE START-MONTH-0415
USE AS CHAR(2),
/* COBOL Name START-DAY-0415 */
START_DAY_0415 SOURCE DEFINITION ENTRY
EMPLOYEE START-DAY-0415
USE AS CHAR(2),
/* COBOL Name TERMINATION-YEAR-0415 */
TERMINATION_YEAR_0415 SOURCE DEFINITION ENTRY
EMPLOYEE TERMINATION-YEAR-0415
USE AS CHAR(4),
/* COBOL Name TERMINATION-MONTH-0415 */
TERMINATION_MONTH_0415 SOURCE DEFINITION ENTRY
EMPLOYEE TERMINATION-MONTH-0415
USE AS CHAR(2),
/* COBOL Name TERMINATION-DAY-0415 */
TERMINATION_DAY_0415 SOURCE DEFINITION ENTRY
EMPLOYEE TERMINATION-DAY-0415
USE AS CHAR(2),
/* COBOL Name BIRTH-YEAR-0415 */
BIRTH_YEAR_0415 SOURCE DEFINITION ENTRY
EMPLOYEE BIRTH-YEAR-0415
USE AS CHAR(4),
/* COBOL Name BIRTH-MONTH-0415 */
BIRTH_MONTH_0415 SOURCE DEFINITION ENTRY
EMPLOYEE BIRTH-MONTH-0415
USE AS CHAR(2),
/* COBOL Name BIRTH-DAY-0415 */
BIRTH_DAY_0415 SOURCE DEFINITION ENTRY
EMPLOYEE BIRTH-DAY-0415
USE AS CHAR(2)
);
ALTER TABLE CAC.EMPLOYEE DATA CAPTURE CHANGES;
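As the listing shows, each SQL column name is derived from the corresponding CA-IDMS/COBOL element name by replacing hyphens (which are invalid in SQL identifiers) with underscores, so EMP-ID-0415 becomes EMP_ID_0415. The following sketch expresses only that naming pattern visible in the generated grammar; it is not the Data Mapper's actual code.

```python
# Sketch of the column-naming pattern visible in the generated USE grammar:
# COBOL element names use hyphens, which are invalid in SQL identifiers,
# so they are rewritten as underscores. Illustration only.
def cobol_to_sql_column(cobol_name: str) -> str:
    return cobol_name.replace("-", "_")

columns = [cobol_to_sql_column(n)
           for n in ("EMP-ID-0415", "EMP-FIRST-NAME-0415", "SS-NUMBER-0415")]
```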

Loading the data server metadata catalog

To load the catalog with the tables that you created in the previous section:
1. Upload the generated USE grammar (idms.use) to the SCACSAMP data set on the mainframe.
2. If you have not already allocated catalogs, run CACCATLG to allocate them. In the SCACSAMP data set, there is a member called CACCATLG that contains JCL to allocate the metadata catalog that is used by the system.
   a. Customize the JCL to run in your environment and submit it.
   b. After this job completes, ensure that the data server procedure in the PROCLIB (default member name CACCS) points to the newly-created catalogs using the CACCAT and CACINDX DD statements. Ensure that the CACCAT and CACINDX DD statements are not commented out in the JCL.
3. Load the catalog. In the SCACSAMP data set, there is a member called CACMETAU that contains JCL to load the metadata catalog using the USE grammar as input. Customize the JCL to run in your environment and submit it.
   a. Ensure that the symbolic GRAMMAR= points to the appropriate USE grammar member (GRAMMAR=IDMSUSE).
   b. Uncomment the STEPLIB DDs that point to your CA-IDMS libraries and the DDLPUNCH DD, and set the CA-IDMS symbolic.
   c. Ensure that the CACCAT and CACINDX DDs refer to the catalogs created using the CACCATLG JCL.

After you successfully run this job, the metadata catalog is loaded with the logical tables created in the Data Mapper. You can expect a return code of 4: the DROP TABLE fails because the table does not exist yet.

Activating change-capture in a CA-IDMS Central Version

Activating change-capture in a CA-IDMS Central Version requires updating the CA-IDMS database I/O module to install the database exit IDMSJNL2 and modifying the CA-IDMS Central Version JCL to ensure proper communication with the correlation service.

The installation of the CA-IDMS exit is a link-edit job that needs to be done only once, during installation, and is not repeated. If you created a special loadlib to contain the IDMSDBIO module with the exit activated, activating change-capture in each new Central Version requires updating the STEPLIB DD for the Central Version to point to the special loadlib. For detailed instructions, see the section "Relinking the CA-IDMS database I/O module IDMSDBIO."

Once IDMSDBIO has been relinked, the CA-IDMS Central Versions must be restarted to pick up the new IDMSDBIO module.


The correlation service monitors change messages from the CA-IDMS active and recovery agents. To determine whether an active agent can remain active at correlation service shutdown, the server needs to know whether the CA-IDMS Central Version itself is still active.

Because the active agent running in CA-IDMS cannot notify the correlation service at shutdown, a jobstep must be added to the CA-IDMS Central Version JCL to notify the correlation service that CA-IDMS has terminated. The sample SCACSAMP member CACIDTRM must be added to the end of the Central Version JCL.

The z/OS agent name must be set correctly, based on the Central Version number associated with the CA-IDMS Central Version. For example, if the Central Version number is 55, the parm must be specified as AGENT='IDMS_055'. When a CA-IDMS Central Version completes initialization, it issues CA-IDMS message DC201001, which identifies the Central Version number for that system.
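The naming rule in the example above (Central Version 55 becomes agent IDMS_055, that is, the Central Version number zero-padded to three digits, matching the IDMS_nnn pattern in the operator messages) can be expressed as:

```python
# Zero-pad the CA-IDMS Central Version number to three digits to form the
# agent name, matching the AGENT='IDMS_055' example for Central Version 55.
def idms_agent_name(central_version: int) -> str:
    return f"IDMS_{central_version:03d}"
```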

After making the necessary changes to the Central Version JCL, you can verify the installation by starting the Central Version and looking for the operator message

CACH001I XSYNC AGENT 'IDMS_nnn' INSTALLED FOR SERVER 'nnnnnnnn'

in the Central Version JES messages. This message will only appear when the first journalled event takes place within the Central Version itself. To ensure that a journalled event has occurred, you can update a record in the CA-IDMS Central Version using the data server product or any existing update application you have communicating with the Central Version.

Once the active agent is installed, starting the Central Version without a correlation service will cause the message

XSYNC SERVER '(servername)' NOT FOUND, REPLY 'R' OR 'A'
RECOVERY/ACTIVE FOR AGENT 'agentname'

to appear on the operator console. This message indicates that database changes are taking place and there is no correlation service available to receive the changes. Though this message requires operator action, the CA-IDMS Central Version itself will not be halted to wait for the reply. In most cases, the operator should reply 'R' to this message to force the agent into recovery mode so that any changes made to the database since it was started can be processed by the recovery agent.

The response to this message is cached in memory until a correlation service is

started. At that time, the agent is placed in either active or recovery mode based

on the operator reply. As an additional safeguard, the response of ’A’ will be

ignored by the correlation service if both of the following conditions are true:

v The correlation service is warm started.

v The restart dataset the correlation service processes already has the agent in

recovery mode.

To verify that the termination message is working correctly, the CA-IDMS Central

Version must be running in active mode and communicating successfully with the

correlation service. Once that has been verified, stopping the CA-IDMS Central

Version results in a 0 return code from the CACIDTRM jobstep, and the correlation

service issues the message:

CACG114I SHUTDOWN RECEIVED FROM ACTIVE AGENT ’IDMS_nnn’

10 DB2 II Getting Started with Classic Event Publishing


Again, this will only occur if the change-capture agent is in active mode.

Otherwise, all active messages (including the shutdown message) are disabled, as

recovery is necessary.

Modifying the Central Version JCL

When a CA-IDMS Central Version is started without a correlation service running,

the change-capture agent running in the Central Version records a restart point to a

dataset. To enable DB2 II Classic Event Publisher to record this information, you

must allocate an 80-byte lrecl file, CACRCV, in the Central Version startup JCL.

You must also add a jobstep to the Central Version JCL to inform the correlation

service that the Central Version has been shut down. This allows the correlation

service to be stopped without forcing the CA-IDMS agent into recovery mode.

Member CACIDTRM contains a sample of the JCL that you need to add to the

IDMS JCL.
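Together, the two JCL changes might look like the following sketch. The data set name is illustrative and the final step is only indicated; the SCACSAMP member CACIDTRM contains the actual JCL for the termination step.

//* 80-byte lrecl restart data set for the change-capture agent
//* (data set name is illustrative)
//CACRCV   DD DSN=CAC.IDMS055.CACRCV,DISP=SHR
//*
//* Last jobstep of the Central Version job: notify the correlation
//* service that CA-IDMS has terminated (copy from member CACIDTRM)
//TERMSTEP EXEC ...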

Modifying automatic journaling

It is also typical for your CA-IDMS system to be set up to automatically archive

journals when they become full. You may want to change this behavior to allow a

certain number of active journals to remain unarchived so that they can be used by

the CA-IDMS recovery change-capture agent, which prevents having to run that

agent multiple times. The number of

unarchived journals that you need to retain is based on the size of the journal files,

the frequency at which journals are filled up (i.e., the volume of changes in your

CA-IDMS system) and how quickly you can detect and respond to a CA-IDMS

Event Publisher recovery situation.

To modify your automatic archiving procedures, you can run the recovery agent as

part of your archiving procedure. The recovery agent counts the number of full

CA-IDMS online journals. You can use the returned value to prevent journal

archiving from taking place unless there are a specified number of full

(unarchived) journals available for recovery purposes.
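For example, assuming the recovery agent returns the number of full online journals as its step return code, a JCL COND test can bypass archiving until enough full journals exist. Both program names below are hypothetical placeholders for your site's recovery agent and archive procedure.

//* Count full online journals (program name is a placeholder)
//COUNT    EXEC PGM=RECVAGNT
//* Archive only when at least 3 journals are full:
//* COND=(3,GT,COUNT) bypasses this step if 3 > RC of step COUNT
//ARCHIVE  EXEC PGM=ARCHJNL,COND=(3,GT,COUNT)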

Because this modification will reduce the number of archived journals available,

you may want to increase the number of online journal files that the CA-IDMS

Central Version uses to prevent CA-IDMS from halting due to the unavailability of

an archived journal.

You might also need to change your end-of-day procedures to make sure all full

journal files are archived.

Configuring a named server environment

If you have multiple Central Versions running on the same LPAR and wish to run

multiple instances of the correlation service, you must use the named server option.

To switch an existing correlation service for CA-IDMS to a named server:

1. Add a NAME= parameter to the SERVICE INFO ENTRY of the correlation service

(such as NAME=IDMSSRVR).

2. Update the CACE1OPT source file and add the server name. The source

code for the CACE1OPT module is located in the SCACSAMP library.

3. Re-assemble CACE1OPT and link it into the CA-IDMS database module

IDMSDBIO.

4. Change the CA-IDMS recovery agent JCL by adding a SERVER= parameter to the

execution parameter (such as SERVER=IDMSSRVR).
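Taken together, the changed fragments might look like this sketch. The other fields of the SERVICE INFO ENTRY and the recovery agent program name are site-specific and are shown here only as placeholders.

* Correlation service configuration: add NAME= to the entry
SERVICE INFO ENTRY = ... NAME=IDMSSRVR

* Recovery agent JCL: add SERVER= to the execution parameter
//RECOVER  EXEC PGM=...,PARM='...,SERVER=IDMSSRVR'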

Chapter 2. Preparing data and configuring change-capture agents for CA-IDMS 11


Relinking the CA-IDMS database I/O module IDMSDBIO

To access IDMS for change-capture, you need to relink the IDMS Database I/O

module, IDMSDBIO. The SCACSAMP member CACIDLDB creates a backup copy

of the existing IDMSDBIO module to the member name @BKPDBIO (if the

member name does not already exist) in the same library. This module will be

included in the link step. Sample JCL to relink the module IDMSDBIO is in the

SCACSAMP member CACIDLDB.

The module CACECA1D contains a CSECT named IDMSJNL2 which is referenced

in IDMSDBIO as a weak external. By including CACECA1D, the IDMSJNL2

external is resolved and the exit is active. If you already have an IDMSJNL2 exit

active in your IDMSDBIO module, this link may not replace the active exit, and

you need to determine whether the existing exit can be replaced by the DB2 II

Classic Event Publisher exit, or if stacking the two exits is required.

To replace your exit, add the link-edit control card:

REPLACE IDMSJNL2

before the INCLUDE SYSLIB(@BKPDBIO) card.
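For the replace case, the link-edit control statements would look roughly like the sketch below; the DD name for the Classic Event Publisher library is illustrative, and the SCACSAMP member CACIDLDB remains the authoritative JCL.

  INCLUDE CACLOAD(CACECA1D)    new exit; resolves IDMSJNL2
  REPLACE IDMSJNL2             discard the old exit CSECT
  INCLUDE SYSLIB(@BKPDBIO)     backup copy of the original IDMSDBIO
  NAME    IDMSDBIO(R)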

Stacking the exit requires renaming your exit CSECT from IDMSJNL2 to

IDM2JNL2 as part of the link process. If IDM2JNL2 is resolved, the DB2 II

Classic Event Publisher exit will automatically call your exit whenever it

receives control from IDMS.

Note: These are only general instructions for stacking the exit. The actual steps

involved in completing this process depend on how well you know the

linkage-editor and whether or not your exit source can be changed and

rebuilt.

After relinking the IDMSDBIO module, you must stop and restart IDMS before the

exit is activated. Once activated, the exit remains essentially dormant until a DB2 II

Classic Event Publisher correlation service is started with IDMS tables mapped and

ALTERed for change-capture.

Relinking the Presspack support module

If the tables that you are monitoring use the Presspack support module to

compress data, relink the Presspack support module CACPPK to include the IDMS

interface module so that the correlation service can decompress the data that is

stored in the Central Version journals. Sample JCL for the relink can be found in

the SCACSAMP member CACIDLPP.

Note: If you are using other compression/decompression modules or DC tables

used by the Presspack support module, be sure to include these authorized

libraries in the STEPLIB of the correlation service.

If the Presspack support module fails to decompress a record, the following

message is written to the log:

Presspack RC=nn

The following list shows the return codes from IDMS R14.0:

00—Decompression successful


04—GETSTG failure

08—Call from system mode module

12—DCT load failure

16—DCT not valid

20—Record or table not compressed by CA-IDMS Presspack

24—Load of IDMSPRES failed

>100—Error during decompression (most likely, the wrong DCT was specified);

PRESSTO return code = return code minus 100.

Setting up a server to access a CA-IDMS Central Version

The following JCL changes are required to access an IDMS Central Version:

1. Add the IDMS.LOADLIB to the STEPLIB concatenation.

2. Add a SYSCTL DD statement and allocate the SYSCTL file used by the Central

Version you need access to.
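In the server JCL, the two changes amount to DD statements like the following (the data set names are illustrative):

//STEPLIB  DD DISP=SHR,DSN=CAC.V8R2.LOADLIB      Classic EP load library
//         DD DISP=SHR,DSN=IDMS.APF.LOADLIB      authorized IDMS copy
//SYSCTL   DD DISP=SHR,DSN=IDMS.SYSCTL.CV055     SYSCTL for the target CV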

Note: APF authorization is required. Certain DB2 Information Integrator Classic

Event Publisher functions, such as Cross Memory services (used by the

IDMS active change-capture agent), require the server’s STEPLIB to be

APF-authorized. The CA-IDMS.LOADLIB is not usually APF-authorized,

and some utility programs in that library will fail if they are run from an

APF-authorized STEPLIB concatenation.

To run DB2 Information Integrator Classic Event Publisher, you must create

a separate authorized copy of the CA-IDMS.LOADLIB. If you are doing

change capture on CA-IDMS records using Presspack compression, you also

must authorize the library containing DCTABLE modules and include it in

the server STEPLIB concatenation.


Chapter 3. Preparing data and configuring change-capture

agents for IMS

The following topics explain how to configure change-capture agents for IMS

source databases:

v Supported environments and program types

v Enabling IMS change capture

v Installing the IMS active change-capture agent

v Adding the IMS logger exit to an existing exit

v Augmenting a DBD to generate IMS data capture log records

Supported environments and program types

The following table identifies the IMS environments and database types that are

currently supported for change capture.

Table 1. Supported IMS environments and database types

IMS Environment     Databases Supported
DB Batch            Full-function
TM Batch            None
DB/DC               Full-function, DEDB
DBCTL               Full-function, DEDB
DCCTL               None

You can also capture updates made by CICS applications, DB2 Information

Integrator Classic Federation for z/OS, or ODBA clients using DRA.

Data capture is only supported when the batch job allocates a non-dummy

IEFRDER DD statement.

Enabling IMS change capture

Your IMS system should have the DI21PART database installed as part of the base

IMS system. Provided in the SCACSAMP data set are the DBD and two COBOL

copybooks, describing the IMS database and the segments you can use to validate

the implementation of DB2 II Classic Event Publisher. You can follow a similar

process to configure Classic Event Publisher for use with your own IMS databases.

Note: In all the jobs that follow, you will need to customize the JCL as appropriate

for your site. This includes concatenating IMS-specific libraries provided by

IBM®. Templates for these libraries are included in the JCL. You will need to

uncomment them and provide the appropriate high-level qualifiers.

Mapping the sample IMS DBD and copybooks

The SCACSAMP data set on the mainframe contains the DBD and COBOL

copybooks describing the DI21PART database. The DBD is contained in a member

© Copyright IBM Corp. 2003, 2004 15


called CACIMPAR and the two COBOL copybooks are in members CACIMROT

(PARTROOT segment) and CACIMSTO (STOKSTAT segment). You will need these

files to complete the following steps.

For more detailed information on data mapping, see the DB2 Information Integrator

Data Mapper Guide.

1. FTP the CACIMPAR, CACIMROT, and CACIMSTO members from the

SCACSAMP data set to a directory of your choice on the workstation where

Data Mapper is installed. As you FTP these members, rename them with the

following extensions:

v cacimpar.dbd

v cacimrot.fd

v cacimsto.fd

2. From the Windows® Start menu, select DB2 Information Integrator Data

Mapper.

3. From the File menu, select Open Repository and select the Sample.mdb

repository under the xadata directory.

4. From the Edit menu, select Create a new Data Catalog. A dialog box appears.

5. Enter the following information in the dialog box:

v Name: Parts Catalog - IMS

v Type: IMS

v Change Capture: check the check box

6. Click OK.

7. From the File menu, select Load DL/I DBD for Reference.


The Load DBD File dialog box appears.

8. Select the DBD you obtained by FTP from the mainframe (cacimpar.dbd) and

click OK.

The DL/I DBD window appears.

9. From the Window menu, select List Tables.

The IMS Table for <Data Catalog Name> dialog box appears.

Since this is a new Data Catalog, the list of tables will be empty.


10. From the Edit menu, select Create a new Table.

The following information creates a logical table that includes the IMS root

segment PARTROOT as defined by the DBD.

You do not need to fill in the Name field, as it is automatically populated

from the Leaf Seg field.

a. Select CAC from the Owner drop down list.

b. Select PARTROOT from the Index Root drop down list.

c. Select PARTROOT from the Leaf Seg drop down list.

PARTROOT is referred to as the leaf segment because it acts as the leaf

segment as defined by this logical table.

For Classic Event Publisher you do not need to specify IMSID, PSB name

or PCB prefix information.

d. Click OK.

You are now ready to import the definitions from the CACIMROT copybook

you obtained by FTP from the SCACSAMP data set.

11. From the File menu, select Import External File and select the CACIMROT

copybook that you stored on the workstation.


After the Import Copybook dialog box appears, the CACIMROT copybook is

loaded and ready to Import. (But don’t click Import until Step 13.)

12. In the Seg. Name drop-down list, make sure the PARTROOT segment is

selected, as it is the segment for which you are loading the copybook.

13. Click Import. This imports the COBOL definitions from the CACIMROT

copybook into the table CAC.PARTROOT. The columns for the table are

created.

You have completed creating the logical table mapping for the PARTROOT

segment. The following steps walk you through creating the logical table for the

STOKSTAT segment.

1. Click on the window titled "IMS Tables for Data Catalog Parts Catalog - IMS"

to regain focus.


2. From the Edit menu, select Create a New Table.

The following information creates a logical table that includes the IMS

STOKSTAT segment as defined by the DBD.

You do not need to fill in the Name field, as it is automatically populated from

the Leaf Seg field.

3. Select CAC from the Owner drop down list.

4. Select PARTROOT from the Index Root drop down list.

5. Select STOKSTAT from the Leaf Seg drop down list.

STOKSTAT is referred to as the leaf segment because it acts as the leaf segment

as defined by this logical table.

For Classic Event Publisher you do not need to specify IMSID, PSB name or

PCB prefix information.

6. Click OK.

You are now ready to import the definitions from the CACIMROT copybook you

obtained by FTP from the SCACSAMP data set. Follow the instructions outlined above

in steps 11, 12 and 13. When you have completed these steps, the window should

look as follows:


1. Select the DESCRIPT column (number 3) and hit the Delete key or select

″Delete the Selected Column″ from the Edit menu. The DESCRIPT column is

not part of the PARTROOT key and is not captured when changes occur to the

STOKSTAT segment.

2. From the File menu, select Import External File and select the next segment

CACIMSTO copybook that you stored on the workstation. After the Import

Copybook dialog box appears, the CACIMSTO copybook is loaded and ready

to Import.

In the Seg. Name drop-down list, make sure the STOKSTAT segment is selected,

as it is the segment for which you are loading the copybook. Also, make sure

the Append to Existing Columns check box is checked.

3. Click Import. This concatenates the COBOL definitions from the CACIMSTO

copybook into the table CAC.STOKSTAT after the CACIMROT definitions. You

have now defined a logical table which includes a root and child segment.

4. Click Sample.mdb window to bring it in focus.


5. Ensure the Data Catalog Parts Catalog - IMS is highlighted and select Generate

USE Statements from the File menu.

6. Select a file name for the generated statements to be stored on the workstation,

for example: parts.use and click OK.

After generation is complete you can view the USE GRAMMAR from Windows

Notepad or click Yes when the Data Catalog USE Generation Results dialog box

appears. Your completed USE GRAMMAR will look similar to the following

example:

DROP TABLE CAC.PARTROOT;

USE TABLE CAC.PARTROOT DBTYPE IMS

DI21PART INDEXROOT PARTROOT PARTROOT

SCHEDULEPSB DFSSAM03

(

/* COBOL Name PARTCOD */

PARTCOD SOURCE DEFINITION ENTRY PARTROOT

DATAMAP OFFSET 0 LENGTH 2 DATATYPE C

USE AS CHAR(2),

/* COBOL Name PARTNO */

PARTNO SOURCE DEFINITION ENTRY PARTROOT

DATAMAP OFFSET 2 LENGTH 15 DATATYPE C

USE AS CHAR(15),

/* COBOL Name DESCRIPT */

DESCRIPT SOURCE DEFINITION ENTRY PARTROOT

DATAMAP OFFSET 26 LENGTH 20 DATATYPE C

USE AS CHAR(20)

);

ALTER TABLE CAC.PARTROOT DATA CAPTURE CHANGES;

DROP TABLE CAC.STOKSTAT;

USE TABLE CAC.STOKSTAT DBTYPE IMS

DI21PART INDEXROOT PARTROOT STOKSTAT

SCHEDULEPSB DFSSAM03

(

/* COBOL Name PARTCOD */

PARTCOD SOURCE DEFINITION ENTRY STOKSTAT

...

Loading the metadata catalogs

To load the metadata catalogs with the table you created in the previous section:

1. FTP the generated USE GRAMMAR (parts.use), to the SCACSAMP data set on

the mainframe.

2. Run CACCATLG to allocate the metadata catalogs.

Note: If the catalogs have already been allocated, you can skip this

step.

In the SCACSAMP data set, there is a member called CACCATLG. This

member contains JCL to allocate the metadata catalogs that are used by the

data server.

a. Customize the JCL to run in your environment and submit.

b. After this job completes, ensure that the data server procedure in the

PROCLIB points to the newly created catalogs using the CACCAT and

CACINDX DD statements.

Note: Ensure that the CACCAT and CACINDX DD statements are

uncommented in the JCL.

3. Load the catalogs.


In the SCACSAMP data set, there is a member called CACMETAU. This

member contains JCL to load the metadata catalogs using the USE GRAMMAR

as input.

Customize the JCL to run in your environment and submit it.

a. Make sure the GRAMMAR= symbolic points to the appropriate USE

GRAMMAR member (GRAMMAR=PARTSUSE).

b. Uncomment and set the IMS symbolic and the DBDLIB DD that points to

your IMS DBD library.

c. Ensure the CACCAT and CACINDX DDs refer to the catalogs created using

the CACCATLG JCL.

After this job has been run successfully, the catalogs are loaded with the

logical tables created in the Data Mapper.

A return code of 4 is expected. The DROP TABLE fails since the table does

not exist yet.

Installing the IMS active change-capture agent

The IMS active change-capture agent is implemented as the IMS Logger Exit. The

IMS Logger Exit has a hard-coded name of DFSFLGX0. IMS automatically invokes

the IMS Logger Exit during IMS log file initialization processing if a module

named DFSFLGX0 is found in the STEPLIB concatenation or the link-pack area.

One of the easiest ways to install the IMS active change-capture agent is to copy

module DFSFLGX0 from the Classic Event Publisher distribution libraries into the

IMS RESLIB. Another method is to concatenate the Classic Event Publisher load

library into your IMS batch jobs and started task procedures for the online DB/DC

or DBCTL regions.

Another modification to the IMS region JCL provides for recovery information.

When an IMS region is started without a correlation service running, the

change-capture agent running in the region records a restart point to a data set. To

enable Classic Event Publisher to record this information, an 80-byte lrecl data set

must be allocated and referenced by a CACRCV DD statement. The CACRCV DD

statement must be added to the DB/DC or DBCTL started task JCL or into IMS

batch job JCL. A unique data set name must be created for each IMS job that a

change-capture agent will be active in.
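For example, the DB/DC or DBCTL started task (or IMS batch job) JCL would gain a DD statement like the following; the data set name is illustrative, and each IMS job needs its own:

//* 80-byte lrecl restart data set, unique to this IMS job
//CACRCV   DD DSN=CAC.IMSA.CACRCV,DISP=SHR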

After making the necessary changes to the IMS region JCL, you can verify the

installation by starting the DB/DC or DBCTL region and looking for the following

operator message in the IMS region JES messages:

CACH001I EVENT PUBLISHER AGENT ’IMS_xxxx’ INSTALLED FOR SERVER ’(noname)’

After the active agent is installed, starting the DB/DC or DBCTL region without a

correlation service will cause this message to appear on the operator console:

CACH002A EVENT PUBLISHER SERVER ’(noname)’ NOT FOUND BY AGENT ’IMS_xxxx’, REPLY

’R’ OR ’A’ RECOVERY/ACTIVE

This message indicates that database changes are taking place and there is no

correlation service available to receive the changes. Though this message requires

operator action, the region itself will not be halted to wait for the reply. In most

cases, the operator should reply ’R’ to this message to force the agent into recovery

mode so any changes made to the database since it was started can be processed

by the recovery agent.


The response to this message is cached in memory until a correlation service is

started. At that time, the agent is placed in either active or recovery mode based

on the operator reply. As an additional safeguard, the response of ’A’ will be

ignored by the correlation service if both of the following conditions are true:

v The correlation service is warm started.

v The restart data set the correlation service processes already has the agent in

recovery mode.

To verify that the termination message is working correctly, the DB/DC or DBCTL

region must be running in active mode and communicating successfully with the

correlation service. After that has been verified, stop the IMS region. The

correlation service should issue the message:

CACG114I SHUTDOWN RECEIVED FROM ACTIVE AGENT ’IMS_xxxx’

Again, this will only occur if the change-capture agent is in active mode.

Otherwise, all active messages (including the shutdown message) are disabled, as

recovery is necessary.

Advantages and disadvantages of the IMS logger exit

installation options

Each approach has its advantages and disadvantages. Generally you will decide

which approach to take based upon how pervasive the use of Classic Event

Publisher will be at your site.

If you are implementing a large-scale deployment, then placing the IMS Logger

Exit in the IMS RESLIB is the easiest installation method. You have a large-scale

deployment when either of the following is true:

v You are planning to augment the majority of your IMS databases for change

capture.

v You are augmenting an IMS database for change capture that is updated by the

majority of your IMS applications.

If you are planning a smaller-scale implementation that only monitors a small

number of IMS databases updated by a small number of IMS applications, you

may want to modify these IMS batch jobs and the DB/DC or DBCTL subsystems’

started task procedures to reference the authorized Classic Event Publisher load

library.

In a large-scale deployment, you need to update each IMS batch job and DB/DC

or DBCTL subsystems’ started task JCL to include a recovery data set and

(optionally) install IMS Log File Tracking. In a small-scale implementation, the

number of IMS batch jobs and started task procedures that need to be updated is

reduced. However, if you forget to update one of your IMS applications that

updates a monitored database, these changes are lost and the correlation service

has no knowledge that this has occurred.

If you install the IMS active change-capture agent in the IMS RESLIB and are only

performing a small-scale implementation, then the correlation service still tracks all

IMS control regions that are referencing the IMS RESLIB where the IMS active

change-capture agent is installed, even though many of these IMS applications do

not update databases that are being monitored by Classic Event Publisher.

Likewise, if these IMS active change-capture agents go into recovery mode, you

must recover the failed agents, even though no IMS changes are being

captured, which creates unnecessary work for you.


Adding the IMS logger exit to an existing exit

The IMS Logger Exit is a somewhat esoteric IMS system exit. IBM does not

supply a sample for this exit, so normally this exit is not used. In case you have

implemented your own IMS Logger Exit or are using an exit from another

company, the supplied version of the IMS Logger Exit does contain support for

invoking an existing IMS Logger Exit.

The SCACSAMP member CACIMLEX is a sample relink job that will create a

backup of your Logger Exit, and then relink our version of the exit with yours.

Your version of the IMS Logger Exit must be named DFSFLGX0 for the call to

succeed.

Augmenting a DBD to generate IMS data capture log records

IMS was enhanced to generate IMS Data Capture log records for use by IBM’s

DPropNR for asynchronous replication purposes. Internally, IMS uses type 50

undo/redo log records for its own recovery purposes. Unfortunately, these log

records do not contain enough information to be effectively used for change

capture purposes. IBM realized this and enhanced IMS to generate IMS Data

Capture log records that do contain all of the information necessary.

To have IMS generate IMS Data Capture records, the DBD that specifies the

information to be captured must be augmented. These DBD modifications only

affect the actual DBD definitions (stored in the DBD/ACB library) and do not

affect the physical database.

You use the EXIT= keyword to specify IMS Data Capture information. The EXIT

keyword is supported for the DBD control statement and the SEGM control

statement. Supplying an EXIT keyword on the DBD statement defines default

values for all segments in the DBD. Specifying an EXIT keyword on the SEGM

statement allows you to override the default values. This gives you great flexibility

about the types and amounts of information that is captured.

The format of the EXIT keyword is:

EXIT=(Exit-Name,KEY|NOKEY,DATA|NODATA,PATH|NOPATH,

(CASCADE|NOCASCADE,KEY|NOKEY,DATA|NODATA,PATH|NOPATH),

LOG|NOLOG)

The following table identifies the use of each of these parameters:

Table 2. Parameters of the EXIT keyword

Keyword Purpose

Exit-Name In this parameter, you specify:

v the name of the DPropNR synchronous data capture exit, if there

is one,

v * to indicate that there is no exit, or

v NONE to deactivate an exit routine on a SEGM statement

Classic Event Publisher for IMS does not use data capture exits, but

co-exists if your site is using DPropNR, or if you have implemented

your own exits at your site. If you do not have any data capture

exits, set this parameter to *.



KEY|NOKEY The value KEY indicates that you want the IMS Data Capture log

records to contain physical path concatenated key information for a

segment that has been deleted, inserted or updated. The default

value is KEY.

DATA|NODATA The value DATA indicates that you want physical segment data

included in the IMS Data Capture log records for a segment that has

been deleted, inserted or updated. The default value is DATA.

PATH|NOPATH The value PATH indicates that you want physical segment data

included in the IMS Data Capture log records for the parents of a

segment that has been deleted, inserted or updated. The default

value is NOPATH.

CASCADE|NOCASCADE Specifying the CASCADE parameter indicates that you want IMS Data

Capture log records to contain cascade delete information for a

segment that has been deleted that has child segments. When you

specify this parameter, you can specify the next three parameters

(KEY|NOKEY, DATA|NODATA, PATH|NOPATH) to identify what kind of

information you want included in the IMS Data Capture log record

for a segment when its parent is deleted.

KEY|NOKEY The value KEY indicates that you want the child segment

concatenated key information included in the IMS Data Capture log

record. The default value is KEY.

DATA|NODATA The value DATA indicates that you want the physical segment data

included in the IMS Data Capture log record for the deleted child

segment. The default value is DATA.

PATH|NOPATH The value PATH indicates that you want physical segment data

included in the IMS Data Capture log records for the parents of the

child segment that has been deleted. The default is NOPATH.

LOG|NOLOG The value LOG indicates that you want IMS Data Capture log records

to be generated to the IMS log files. If you set the Exit-Name

parameter to *, the default value for this parameter is LOG. You can

specify NOLOG on individual SEGM statements, if these segments do

not contain data that is to be captured. You can also specify NOKEY,

NODATA for segments that you do not want data captured for.

The recommended EXIT options for root segments are NOKEY, DATA, and NOPATH. The

recommended EXIT options for child segments are KEY, DATA, and NOPATH. Also,

NOCASCADE is recommended as an option. If possible, design the application that is

processing the changes to parent segments to handle the implied deletion of any

"child" information.
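Applied to the sample DI21PART database, the recommended options would look roughly like the following DBDGEN fragment. The segment operands are abbreviated and illustrative; only the EXIT= keywords reflect the recommendations above, and column-72 continuation characters are omitted.

DBD    NAME=DI21PART,ACCESS=(HISAM,VSAM)
SEGM   NAME=PARTROOT,PARENT=0,
       EXIT=(*,NOKEY,DATA,NOPATH,NOCASCADE)
SEGM   NAME=STOKSTAT,PARENT=PARTROOT,
       EXIT=(*,KEY,DATA,NOPATH,NOCASCADE)
DBDGEN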

In addition to specifying EXIT information in the DBD, you can also supply

VERSION information on the DBD control statement. Unless you have a specific

reason to do otherwise, allow IMS to generate the default DBD version identifier,

which is the date/time the DBD was assembled.

When the correlation service, during commit processing, processes an IMS Data

Capture record, it compares the version information contained in the record against

the version information in the DBD load module. If the version information does

not match, the correlation service logs an error message and terminates. By using

the date/time stamp of DBD assembly you are sure that the DBDs that IMS is

accessing are the same ones that the correlation service is using for reference

purposes.


Chapter 4. Preparing data and configuring change-capture

agents for VSAM

The following topics explain how to configure change-capture agents for changes

made to VSAM files through CICS:

v Prerequisites for VSAM monitoring

v Setup procedures for CICS monitoring for VSAM changes

v Configuring CICS resource definitions

v Mapping VSAM data

v Loading the metadata catalogs

v Configuring change-capture agents for VSAM

Prerequisites for VSAM monitoring

VSAM changes are captured by monitoring the CICS transactions that update

them. To monitor CICS transactions, you must:

v have enabled logging, and created a user log (DFHJ01), log of log (DFHLGLOG),

system log (DFHLOG), and system secondary log (DFHSHUNT), and

v have an application that inserts, updates, or deletes data in a VSAM file through

CICS.

If you have not enabled logging for CICS, see the CICS Transaction Server for z/OS

V2.3: Installation Guide for instructions on enabling it.

If you do not yet have an application that interacts with a VSAM file through

CICS, you can use the sample COBOL application (FILEA) that comes with CICS.

See the CICS Transaction Server for z/OS V2.3: Installation Guide for information

about this sample application.

Setup procedures for CICS monitoring for VSAM changes

This chapter describes setting up CICS monitoring for VSAM changes using a

sample VSAM file called Employee Demo Database. This database is part of the

VSAM installation. The steps outlined in this chapter are the same steps used to

enable change capture for your own VSAM files and databases that are updated

through CICS.

The process is as follows:

v Map VSAM data using the Data Mapper to create logical tables,

v Configure CICS Resource Definitions,

v Load the Classic Event Publisher metadata catalog with logical tables, and

v Activate a VSAM change-capture agent.

Note: It is assumed that Classic Event Publisher has been installed on the

mainframe and Data Mapper is installed on a workstation.


Configuring CICS resource definitions

The metadata utility requires a connection to CICS to validate the table and collect

information about it. This is accomplished using a VTAM® LU62 connection from the

metadata utility to CICS. To set up this connection, perform the steps documented

in the following sections.

VTAM resource definitions

A VTAM APPL definition and a VTAM mode table are required to communicate with CICS.

Sample member CACCAPPL in the SCACSAMP data set contains two sample VTAM

APPL definitions. CACCICS1 is not required for Classic Event Publisher for VSAM,

and can be removed. CACCICS2 is used by the metadata utility. The following is the

sample member:

*

* SAMPLE APPL ID DEFINITIONS FOR CICS INTERFACE

*

CACCAPPL VBUILD TYPE=APPL

CACCICS1 APPL ACBNAME=CACCICS1,

APPC=YES,

AUTOSES=1,

MODETAB=CACCMODE,

DLOGMOD=MTLU62,

AUTH=(ACQ),

EAS=100,PARSESS=YES,

SONSCIP=YES,

DMINWNL=0,

DMINWNR=1,

DSESLIM=100

CACCICS2 APPL ACBNAME=CACCICS2,

APPC=YES,

AUTOSES=1,

MODETAB=CACCMODE,

DLOGMOD=MTLU62,

AUTH=(ACQ),

EAS=1,PARSESS=YES,

SONSCIP=YES,

DMINWNL=0,

DMINWNR=1,

DSESLIM=1

Create a Logon Mode Table entry. The member CACCMODE in the SCACSAMP data set

contains the macro definitions to define it. Assemble and catalogue this member in

VTAM’s VTAMLIB. The following is the member’s content:

CACCMODE MODETAB

MTLU62 MODEENT LOGMODE=MTLU62,

TYPE=0,

FMPROF=X’13’,

TSPROF=X’07’,

PRIPROT=X’B0’,

SECPROT=X’B0’,

COMPROT=X’D0B1’,

RUSIZES=X’8989’,

PSERVIC=X’060200000000000000000300’

MODEEND

END


CICS resource definitions

You add CICS SIT, transaction, program, connection, and session entries to allow

the metadata utility to communicate with CICS. You also add a file and

journalmodel.

The CICS system initialization table (DFHSIT) definition or initialization overrides

must include ISC=YES to enable intercommunication programs. If this does not

already exist, add it and cycle CICS.

Copy the load module CACCICAT from the load library to the CICS user load

library.

Install IBM’s Language Environment® (LE) in CICS.

The member CACCDEF in the SCACSAMP data set contains a sample job that adds the CICS transaction, program, connection, session, and file definitions required for Classic Event Publisher for VSAM. For Classic Event Publisher for VSAM to capture the before and after images of a file, the RECOVERY setting must be set to ALL in the file definition, and the file definition must specify a FWDRECOVLOG that identifies the journal to which the after images for forward recovery are written.

To run the job:

1. After replacing the sample values CICSUID, CICSAPPL, and DFHJ01 with

site-specific values, run the following JCL, which defines a logstream into the

MVS™ LOGGER subsystem:

//STEP1 EXEC PGM=IXCMIAPU

//SYSPRINT DD SYSOUT=*

//SYSIN DD *

DATA TYPE(LOGR) REPORT(YES)

DEFINE LOGSTREAM NAME(CICSUID.CICSAPPL.DFHJ01)

HLQ(xxxxxxx) MODEL(NO)

STG_DATACLAS(xxxxxxxx)

LOWOFFLOAD(0) HIGHOFFLOAD(80)

RETPD(n) AUTODELETE(YES)

DASDONLY(YES) DIAG(NO)

MAXBUFSIZE(65532)

/*

2. Update the job card for your site specifications.

3. Update the STEPLIB for the correct CICS library.

4. Update the DFHCSD DD for the correct CSD file.

5. Add the following user Journalmodel definition at the end:

DEFINE JOURNALMODEL (DFHJ01)

GROUP(CACVSAM)

DESCRIPTION (USER LOG STREAM)

JOURNALNAME(DFHJ01)

TYPE(MVS)

STREAMNAME (&USERID..&APPLID..&JNAME)

Note: The entries &USERID, &APPLID, and &JNAME can be modified or left

as they are.

6. Remove the program definition for CACCIVS, the EXV1 transaction, the EXC1

connection, and the EXS1 session.

7. If you are using SMS-managed storage for the VSAM file, run the following
job to alter the LOG and LOGSTREAMID parameters of the VSAM file.


Use the following JCL, replacing **hilev** and **vrm** with the names used

for the Classic Event Publisher for VSAM installation, and set LOG to ALL and

specify a LOGSTREAMID.

//IDCAMS EXEC PGM=IDCAMS

//SYSPRINT DD SYSOUT=*

//SYSIN DD *

ALTER **hilev**.**vrm**.VSAM.EMPLOYEE -

LOG(ALL) LOGSTREAMID(CICSUID.CICSAPPL.DFHJ01)

If you are using a non-SMS volume, perform the following steps:

a. Set the RECOVERY setting to ALL in the CACEMP file definition.
b. Specify a FWDRECOVLOG of 01 in the CACEMP file definition.
8. After successful completion of the job, install the new definitions with the
following CICS transaction:
CEDA INSTALL GROUP(CACVSAM)
9. Next, add the CACVSAM group to your start-up group with the following CICS
transaction:
CEDA ADD GR(CACVSAM) LIST(xxxxxxxx)

where xxxxxxxx is the name of the start-up group from your SIT table.
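As a sketch, the RECOVERY and FWDRECOVLOG attributes described in the preceding steps can be set on the CACEMP file definition with the CEDA transaction; the journal number shown here is illustrative:

```
CEDA ALTER FILE(CACEMP) GROUP(CACVSAM) RECOVERY(ALL) FWDRECOVLOG(01)
```

After altering the definition, reinstall the group with CEDA INSTALL GROUP(CACVSAM) so that the change takes effect.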

Mapping VSAM data

Your VSAM installation should have a sample VSAM cluster that contains 34 records of employee information. This is the database for which this chapter describes how to set up monitoring. The same process can be used to map and set up monitoring for your own VSAM files.

Note: In all the jobs that follow, you will need to customize the JCL as appropriate

for your site.

Mapping the sample VSAM copybook

The SCACSAMP data set on the mainframe contains a sample COBOL copybook

describing the employee VSAM cluster created during the installation process. The

member name is CACEMPFD. You need this sample copybook to complete the

following steps.

For more detailed information on data mapping, see the DB2 Information Integrator

Data Mapper Guide.

To map the sample VSAM copybook:

1. FTP the CACEMPFD member from the SCACSAMP data set to the workstation

where the Data Mapper is installed. Name the file on the workstation

cacemp.fd.

2. Start DB2 Information Integrator Data Mapper.

3. From the File menu, select Open Repository.

4. Select the Sample.mdb repository under the xadata directory.

5. From the Edit menu, select Create a new Data Catalog.


The Create Data Catalog dialog box appears.

6. Enter the following information in the dialog box:

v Name: Employee Sample - CICS VSAM

v Type: VSAM

v Check the Change Capture check box to modify the tables for monitoring.
7. Click OK.

8. From the Window menu, select List Tables.

Since this is a new Data Catalog, the list of tables will be empty.

9. From the Edit menu, select Create a new Table.


The Create VSAM Table dialog box appears.

10. In the Create VSAM Table dialog box:

a. Enter EMPLCICS in the Name field.

b. Enter CAC in the Owner field.

c. Click the radio button labeled DD and enter the DD name of the CICS VSAM
file, CACEMP.

d. Enter CACCICS2 as the local Applid.

e. Enter your CICS system's Applid in the CICS Applid field.

f. Enter MTLU62 for the Logmode.

g. Enter EXV2 for the Transaction ID.

h. Enter a Remote Network Name, if required at your site.
11. Click OK.

You are now ready to import the definitions from the cacemp.fd copybook you

obtained by FTP from the SCACSAMP data set.

12. From the File menu, select Import External File and select the cacemp.fd

copybook that you stored on the workstation and click OK.


After the Import Copybook dialog box appears, the cacemp.fd copybook is

loaded and ready to Import.

13. Click Import.

This imports the COBOL definitions from the cacemp.fd copybook into the

table CAC.EMPLCICS, converting them into SQL data types.

14. Close the Columns for VSAM Table EMPLCICS dialog box.

15. Close the VSAM Tables for Data Catalog Employee Sample - CICS VSAM

dialog box.

At this point, you should be back at the Data Catalogs dialog box named Sample.mdb.

16. Ensure the Data Catalog Employee Sample - CICS VSAM is highlighted and select
Generate USE Statements from the File menu.

17. Select a file name for the generated statements to be stored on the

workstation, such as empcics.use, and click OK.

After generation is complete, you can view the metadata grammar (USE grammar) in Windows Notepad, or click Yes when the Data Catalog USE Generation Results prompt appears, to see what your completed USE grammar looks like.

Loading the metadata catalogs

To load the catalogs with the table you created in the previous section:

1. FTP the generated metadata grammar empcics.use to

the SCACSAMP data set on the mainframe.

2. If the catalogs have not been allocated, run CACCATLG to allocate them.

In the SCACSAMP data set, there is a member called CACCATLG. This member

contains JCL to allocate the metadata catalogs that are used by the system.

a. Customize the JCL to run in your environment and submit.

b. After this job completes, ensure that the Server Procedure in the PROCLIB

points to the newly-created catalogs using the CACCAT and CACINDX DD

statements.

Note: Make sure that the CACCAT and CACINDX DD statements are
uncommented in the JCL.
3. Load the catalogs.

In the SCACSAMP data set, there is a member called CACMETAU. This member

contains JCL to load the metadata catalogs using the metadata Grammar as

input.

4. Customize the JCL to run in your environment and submit it.
a. Make sure the symbolic GRAMMAR= points to the appropriate metadata
grammar member (for example, GRAMMAR=EMPCICS).

b. Ensure the CACCAT and CACINDX DDs refer to the catalogs created using the

CACCATLG JCL.

After this job has been run successfully, the catalogs have been loaded with the

logical tables created in the Data Mapper.

A return code of 4 is expected. The DROP TABLE fails since the table does not exist

yet.


Configuring change-capture agents for VSAM

A VSAM change-capture agent is defined as a SERVICE INFO ENTRY (SIE) in a correlation service's configuration; this is usually a new service added to an existing correlation service. In the SCACCONF library, edit member CACCSCF.

The following is a sample SERVICE INFO ENTRY for the VSAM change-capture

agent:

SERVICE INFO ENTRY = CACECA1V VSAMECA 2 1 1 1 4 5M 5S \

APPLID=CICSUID STARTUP CICSUID.CICSAPPL.DFHLOG \

CICSUID.CICSAPPL.DFHJ01 CICSUID.CICSAPPL.DFHJ02 CICSUID.CICSVR.DFHLGLOG

To create a VSAM change-capture agent:

1. Uncomment the SIE for the change-capture agent (VSAMECA).
2. Specify the APPLID, the STARTUP time, and the CICS system, user, and log-of-logs
streams in the SIE.


Chapter 5. Configuring correlation services, publication services, and publications

The following topics explain how to configure correlation services, publication

services, and publications:

v Copying the correlation service JCL

v Configuring the correlation service and publication service

v Configuring the maximum size of messages

v Configuring Cross Memory services

v Creating publications

v Creating the Classic Event Publisher recovery data sets

Copying the correlation service JCL

The SCACSAMP data set contains sample JCL to start the server as a started task.

To copy the correlation service JCL:

1. Copy the CACCS member from the SCACSAMP data set to your PROCLIB.

2. Customize the member to run in your environment.

Configuring the correlation service and publication service

The configuration member CACCSCF is stored in the SCACCONF data set and

contains a Service Info Entry (SIE) that defines the various services. Within the SIE

are various fields that define the service, the number of tasks started at correlation

service start up, the minimum and maximum number of tasks allowed, timeout

values, and trace options. Service Info Entries are used in configuration files to

inform the Region Controller task that a service is to be activated and how that

service is to be controlled.

Multiple SIE parameters are required to activate multiple instances of a given

service if different sub-parameter values are needed. A single SIE parameter is

used if only a single instance is needed (or multiple instances using the same

sub-parameter values). A given service’s restrictions and allowable combinations of

multiple instances are discussed in that service’s specific descriptions.

Mutually-exclusive services are also noted in these descriptions.

The SIE parameter consists of ten sub-parameters, each delimited by at least one

space. The format of the first nine of these subfields is consistent across all

services. The format for the tenth subfield is service-dependent.

The following table shows sample SIEs for the correlation service and the

publication service.

Table 3. Sample Service Info Entries for the correlation service and the publication service

Correlation service:
SERVICE INFO ENTRY = CACECA2 XM1/XSYN/XSYN/16 2 1 1 16 4 10MS 30S \
TCP/111.111.111.111/SOCKET#,CSA=1K,CSARLSE=3,INT=1,WARMSTART

Publication service:
SERVICE INFO ENTRY = CACPUB PUB1 2 1 1 1 4 5M 5M mqi/QM_P39D/Queue1

The order of SIEs for the correlation service and the publication service matters.

The entry for the correlation service must come before the entry for the publication

service. This order is particularly important on shutdown, when services are

stopped in LIFO (reverse) order. The publication service must stop first so that it

can send the proper quiesce command to the correlation service. If the publication

service does not stop first, the correlation service might go into recovery mode on

an otherwise normal shutdown. For this reason, if the publication service is

configured to start before its corresponding correlation service, the publication

service will fail on startup when it fails to detect that the correlation service exists.

The parameters for the SIEs are explained below:

Parameter One: Task Name

For correlation services, the token CACECA2 is the task name and the

name of the correlation service load module. Leave this value as is.

For publication services, the token CACPUB is the task name and the

name of the publication service load module. Leave this value as is.

Parameter Two: Service Name

For correlation services, the service name token defines the protocol and

queue name for receiving raw data changes from active and recovery

agents. In most cases, the protocol name should be XM1 for Cross Memory

services. The subtoken XSYN/XSYN identifies the Cross Memory data

space and queue name and can be modified to suit any particular site

standards if applicable. The final subtoken 16 identifies the size (in

megabytes) of the change capture data space queue. This value can range

from 1 to 2048 and defaults to 8 if not specified. The size of the queue

depends on the number of expected active agents and the burst volume of

changes from each agent. A value of 16 is recommended to help ensure

queue space during peak periods.

Important: You must define a unique data space/queue name for each

correlation service that will be running at any one time.

For publication services, the service name can be a string of up to 16 characters.

Parameter Three: Service Start Class

For both correlation services and publication services, leave this value set

to 2.

Parameter Four: Minimum Tasks

For both correlation services and publication services, leave this value set

to 1. A value of 0 is acceptable if you want to manually start the service

with an operator command, but changing the value is not recommended.

Parameter Five: Maximum Tasks


For both correlation services and publication services, leave this value set

to 1.

Parameter Six: Maximum Connections per Task

For correlation services, this value should be the maximum number of

active and recovery agents that the correlation service will service, plus

four. The additional connections are used by the publication service and

reporting utility.

Parameter Seven: Trace Output Level

Leave this value set to 4 unless you are asked to change it by IBM

technical support for problem diagnostics.

Parameter Eight: Response Timeout

This value determines the length of time that the server will listen for a

response to requests before timing out.

Parameter Nine: Idle Timeout

This value sets a polling frequency for recovery restart and rules

confirmation messages. A value of 30 seconds is recommended.

Parameter Ten: Service Specific Information

For correlation services, the first token in the service-specific information

defines the queue for communication with recovery agents and the

publication service. Generally, this token defines a TCP/IP connection

string to which the publication service connects for receiving change

messages. The format of a TCP/IP connection string is:

TCP/ip address or hostname/port number

Examples:

TCP/192.123.456.11/5555

TCP/OS390/5555

For publication services, the tenth parameter specifies the WebSphere MQ

message queue to use as the restart queue. The format of the parameter is

as follows:

mqi/queue manager/queue name

queue manager is the name of the local queue manager that manages the

message queue. queue name is the name of the local message queue to use

as the restart queue. If your correlation service is running remotely from

your publication service, you can follow the name with a comma-delimited

communication string to describe how the publication service is to

communicate with the correlation service.
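As an illustrative sketch, a publication service SIE whose restart queue is managed by queue manager CSQ1 and that reaches a remote correlation service over TCP/IP could combine the two strings like this (the queue name is an assumption; the host and port reuse the example values above):

```
SERVICE INFO ENTRY = CACPUB PUB1 2 1 1 1 4 5M 5M \
mqi/CSQ1/RESTART.QUEUE,TCP/192.123.456.11/5555
```

If the correlation service runs in the same address space configuration, the trailing communication string is omitted.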

Additional service information for correlation services that you can specify:

CSA=nK -- The number of kilobytes that each server is to allocate for CSA space.

In most cases, 1K should be enough to manage change capture on at least 50

tables.

CSARLSE=n -- The number of seconds to wait before attempting to release CSA

storage at shutdown. A value of 0 leaves CSA allocated for reuse when the

correlation service is restarted. The default value of CSARLSE is 0, which prevents


CSA from being released. If CSARLSE is not 0, the server will release CSA only if

no other correlation services are still active in the system.

INT=n -- The number of changes to process for a committed unit of recovery before

checking for incoming change data from active or recovery agents. This parameter

prevents large transactions from blocking the incoming raw data queue while

sending change messages to the publication service. A value of 0 will not interrupt

processing of committed messages to look for incoming raw data messages. 0 is

the default.

NOSTAE -- Disables abend detection in the correlation service. Do not

specify this value unless requested by IBM technical support for diagnostics.

NAME -- Names the correlation service. If you leave this option out, the correlation

service will be started unnamed. See the DB2 Information Integrator Planning Guide

for Classic Event Publishing for more information about named servers.

COLDSTART/WARMSTART -- Specifies whether to cold start or warm start the

server. A cold start discards all recovery information and places all known agents

in active mode. A warm start retains the state of all known change-capture agents

at the time the correlation service was last shutdown.

WARMSTART is the default action.

If you set the SIE to perform a coldstart, make sure that you reset the SIE after you

cold start the server so that you do not inadvertently cold start the server in the

future.

Configuring the maximum size of messages

The publication service draws memory buffers from the message pool (the size of

which is determined by the MESSAGE POOL SIZE parameter in the configuration

file) and constructs messages within these buffers. When the publication service is

sending messages of type TRANS, a large transaction can exceed the size of the

buffer in which it is being converted to messages. In such cases, it is useful to

allow the publication service to segment the transaction. The publication service

constructs two or more messages for the transaction and puts messages on the

queue in succession when each becomes a certain size.

Use the MAX TRANSPORT MESSAGE SIZE parameter in your configuration file

to specify in bytes the largest size that a message can be before the publication

service writes it to a message queue. For example, consider the following entry in

a configuration file:

MAX TRANSPORT MESSAGE SIZE = 262144

When the publication service constructs a message for a large transaction in the

message pool, whenever the publication service finds that the size of the message

reaches 256 KB, the publication service writes the current message to the

appropriate message queue and starts building another message to contain the

subsequent DML of the transaction. If this next message becomes 256 KB in size

before the end of the transaction is reached, the publication service writes this

message to the message queue and begins constructing another message. This

process continues until the publication service reaches the end of the transaction.


Segments are numbered sequentially by the publication service so that the

application that receives them can be sure that they are in sequence. The message

attribute isLast is set to 1 in the final message so that receiving applications can

tell when the transaction is finished.

The maximum value of MAX TRANSPORT MESSAGE SIZE is 10 percent of the

parameter MESSAGE POOL SIZE.

The minimum value is 64 KB, expressed as 65536.

The default value is 128 KB, expressed as 131072.
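For example, given the 10 percent limit, a configuration with a 4 MB message pool (4194304 bytes) can specify a transport message size of up to 419430 bytes. The following illustrative pair of settings stays within that limit:

```
MESSAGE POOL SIZE = 4194304
MAX TRANSPORT MESSAGE SIZE = 262144
```

With these values, a transaction larger than 256 KB is split into two or more sequentially numbered messages, as described above.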

Configuring Cross Memory services

When configuring Cross Memory services, you must configure each correlation

service with a unique data space/queue name. You define Cross Memory services

with four tokens: protocol, data space, queue name, and data space queue size.

If you use the same combination of data space and queue name for more than one

correlation service definition, then change-capture agents will send captured
changes to the least-busy server, which might not be the correct server. Names are

intentionally shared in a DB2 Information Integrator Classic Federation enterprise

server environment for load balancing and because serialization is not an issue.

But in a Classic Event Publisher environment, serialization is essential.

Unless you have a specific reason for sharing data spaces between multiple

correlation services, use a unique data space name for each server. If you choose to

share data spaces between servers by using a common data space name, make sure

that the queue name is unique for each server.

If you configure more than one correlation service in a single data space, then the

first correlation service that is started will set the size of the data space.
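For example, two correlation services that run at the same time could be given distinct data space and queue names as follows (the data space and queue names XSYNA/XSYNQ1 and XSYNB/XSYNQ2 are illustrative, and the trailing ellipsis stands for each service's remaining service-specific information):

```
SERVICE INFO ENTRY = CACECA2 XM1/XSYNA/XSYNQ1/16 2 1 1 16 4 10MS 30S ...
SERVICE INFO ENTRY = CACECA2 XM1/XSYNB/XSYNQ2/16 2 1 1 16 4 10MS 30S ...
```

Because neither the data space name nor the queue name is shared, each agent's changes are serialized to exactly one correlation service.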

Creating publications

After you configure your correlation service and your publication service, you

must configure publications to indicate where changes to mapped tables will be

published and how. You do so in the same configuration file in which you

configured your correlation service and your publication service. (If your

correlation service and publication service are remote from each other and

therefore use two different configuration files, configure your publications in the

publication service’s configuration file.)

IMS example

The following example publication uses an IMS source:

PUB ALIAS=ims1,

MSGTYPE=TRANS,

TABLE=CAC.STOKSTAT,

TOPIC=Schema1/IMS_update,

QUEUE=MQI/CSQ1/one,

BEFORE_VALUES = YES

VSAM example

The following example publication uses a VSAM source:


PUB ALIAS=vsam1,

MSGTYPE=TRANS,

TABLE=CAC.EMPLCICS,

TOPIC=Schema1/VSAM_update,

QUEUE=MQI/CSQ1/one,

BEFORE_VALUES = YES

Publications are composed of the following parts:

Alias parameter

An alias defines the unique name for a publication within a Data Server.

Topic parameter (optional)

Include a topic in your publication if you want to publish it to WebSphere
Business Integration Event Broker. Topics tell the WBI Event Broker how to
route the messages for the publication.

Queue parameter

Queues are the WebSphere MQ queues where messages are put. The

format of this parameter is MQI/queue_manager/queue_name, where MQI is

the designator for WebSphere MQ, queue_manager is the name of the queue

manager that the publication service is working with, and queue_name is

the name of the queue on which messages for the publication are put.

Message output parameters

Message output parameters define how the messages are constructed. The

table below lists these parameters and describes them.

Table 4. Message output parameters

MSGTYPE (default: TRANS)
TRANS -- A message is published for each committed transaction that affects the
source table. The message contains all of the changes made to the source table
by the transaction.
ROWOP -- A message is published for each committed row operation on the source
table.

TABLE (default: none)
The TABLE string identifies the mapped table name from which changes will be
published. There can be only one table per publication. The table is specified
in the format ownerName.tableName (for example, QAVSAM.EMPLOYEES, a table that
was mapped into the Classic Event Publisher catalog and altered for data
capture changes).

BEFORE_VALUES (default: NO)
NO -- When a row is updated, only the current values for all columns are
included in the message.
YES -- When a row is updated, the previous values and the current values for
all columns are included in the message. This parameter is effective for
UPDATE operations only.

CHANGED_COLS_ONLY (default: YES)
This parameter is not currently supported. Do not change its value.

ALL_CHANGED_ROWS (default: NO)
This parameter is not currently supported. Do not change its value.
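As an illustrative sketch, a publication that publishes one message per committed row operation rather than one per transaction could be defined as follows (the alias and topic names are assumptions; the table and queue reuse values from the examples above):

```
PUB ALIAS=vsam2,
MSGTYPE=ROWOP,
TABLE=CAC.EMPLCICS,
TOPIC=Schema1/VSAM_rowop,
QUEUE=MQI/CSQ1/one,
BEFORE_VALUES = NO
```

With MSGTYPE=ROWOP, receiving applications see each insert, update, or delete as its own message instead of reassembling them from a transaction-level message.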

Creating the Classic Event Publisher recovery data sets

To create the Classic Event Publisher recovery data sets referenced in the

correlation service:

1. Run the SCACSAMP member CACGDGA to create the GDG files and allocate the
first-generation recovery data sets.
2. The CACGDGA member contains the JCL that allocates the Classic Event Publisher
recovery data sets used by the correlation service.

a. Customize the JCL to run in your environment and submit it.

b. After this job completes, ensure that the correlation service procedure in the

PROCLIB points to the newly created data sets using the CACRCVD and

CACRCVX DD statements.

c. Ensure that the correlation service has the proper authority to allocate the

next generation of the recovery files.


Chapter 6. Starting the processes of capturing and publishing

The following topics explain how to start capturing and publishing data:

v Starting the process of publishing

v Activating change capture for CA-IDMS

v Activating change capture for an IMS database/segment

v Activating change capture for VSAM

v Monitoring correlation services and publication services

Starting the process of publishing

Before starting to publish changes that are made to your sources, ensure that the

WebSphere MQ queue manager is running and then start the correlation service

and the publication service.

If the correlation service and publication service are configured in the same file,

you either issue a console command to start the correlation service JCL procedure,

or submit a batch job. The console command to start the correlation service is:

S procname

where procname is the 1-8 character proclib member name to be started. When you

issue commands from the SDSF product, prefix all operator commands with the

forward slash ( / ) character.
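For example, if the proclib member is named CACCS (the name of the sample member copied earlier), the command takes these forms:

```
S CACCS      (from the operator console)
/S CACCS     (from SDSF, with the required slash prefix)
```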

If the correlation service and publication service are configured in separate files,

you can issue a console command to start the correlation service JCL procedure

and another console command to start the publication service JCL procedure. The

console command is described above. You can also choose to submit a batch job for

each.

Activating change capture for CA-IDMS

Setting up the IDMSJNL2 exit

The CA-IDMS change-capture agent is implemented as a database exit named

IDMSJNL2. You must link edit the IDMSJNL2 exit into the IDMSDBIO module and

restart the CA-IDMS Central Version for the exit to take effect.

Because IDMSJNL2 is a general purpose exit, you might be using your own

version of the exit, and might want to incorporate your exit along with the DB2

Information Integrator Classic Event Publisher version of the exit. For these cases,

DB2 Information Integrator Classic Event Publisher supports stacking the exits.

To incorporate your version of the exit:

1. Change your exit to have an internal CSECT name of IDM2JNL2 instead of

IDMSJNL2. You can do this by either changing your exit source and

re-assembling the exit, or by using the linkage-editor to rename the CSECT.

Renaming using the linkage-editor can be an error-prone process and is not

recommended unless you do not have the program source.


2. Link your renamed exit to the IDMSDBIO module along with the DB2

Information Integrator Classic Event Publisher exit.

The DB2 Information Integrator Classic Event Publisher exit contains a

weak external reference for the name IDM2JNL2. If this reference is resolved by link-editing your

renamed exit, all calls to the DB2 Information Integrator Classic Event Publisher

version of the exit are passed on to your exit as though DB2 Information Integrator

Classic Event Publisher were not part of the IDMSDBIO module at all.

Before starting a change-capture agent

Before you start a change-capture agent, you must start a correlation service. If you

do not start a correlation service before starting a change-capture agent, then the

change-capture agent will be put into recovery mode immediately after the

correlation service is started.

Starting an active change-capture agent

The change-capture agent is a CA-IDMS database exit that is automatically installed when you relink the CA-IDMS database I/O module IDMSDBIO as described in the IBM DB2 Information Integrator Installation Guide for Classic Federation and Classic Event Publishing. After you restart the Central Version, the exit is activated. To verify that the exit has been installed successfully, look in the CA-IDMS JES log for the following message:

CACH001I EVENT PUBLISHER AGENT 'IDMS_nnn' INSTALLED FOR SERVER 'xxxxxxxx'

This message appears after the first journaled record is written.

Activating change capture for an IMS database/segment

Change capture is implemented using two IMS features. The IMS active change-capture agent is implemented as an IMS Logger Exit. This allows the IMS system to be monitored for changes in near real-time.

Although IMS performs its recovery processing based on the normal contents of the IMS log files, Classic Event Publisher does not use the "raw" log records that IMS uses to capture changes. Classic Event Publisher does use the same log records, in addition to some additional IMS sync-point log records, to track the state of an in-flight Unit of Recovery (UOR), but does not use the type 50 (undo/redo) and other low-level change notification records that IMS uses for recovery purposes.

Instead, Classic Event Publisher uses type 99 Data Capture log records to identify changes to a monitored IMS database because these records contain more information and are easier to deal with than the "raw" recovery records used by IMS.

Data Capture log records are generated at the database or segment level and require augmentation of your DBD definitions. This augmentation does not affect the physical database definition; it adds additional information to the DBD and ACB load module control blocks.

Augmentation consists of adding the EXIT= keyword parameter on the DBD or individual SEGM statements in your DBD definition. You can supply default capture values at the DBD level and override or even suppress data capture altogether at the SEGM level.
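For illustration, an augmented DBD might look like the following abbreviated sketch. The database name, segment layout, and the specific EXIT= options shown are assumptions, not the options that Classic Event Publisher requires: in standard IMS DBD syntax, EXIT=(*,...) requests Data Capture log records without calling an exit routine, and EXIT=NONE on a SEGM statement suppresses capture for that segment. See the product reference documentation for the exact options to code.

```asm
         DBD   NAME=ORDERDB,ACCESS=(HDAM,OSAM),RMNAME=(DFSHDC40,3,5), X
               EXIT=(*,KEY,DATA,NOPATH,LOG)   DBD-LEVEL CAPTURE DEFAULTS
         DATASET DD1=ORDERDD,DEVICE=3390
         SEGM  NAME=ORDER,PARENT=0,BYTES=120
         FIELD NAME=(ORDKEY,SEQ,U),BYTES=10,START=1,TYPE=C
         SEGM  NAME=ORDHIST,PARENT=ORDER,BYTES=80,EXIT=NONE
         DBDGEN
         FINISH
         END
```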


After you augment your DBD, perform these steps:

v Run a DBDGEN for the updated DBD.

v Run the ACBGEN utility to update all PSBs that reference the DBD.

You can then put the updated DBD and PSB members into your production ACB libraries. If you perform this augmentation using the IMS Online Change facility, either Classic Event Publisher will go into recovery mode or you will need to recycle the correlation service to pick up changes to an existing monitored DBD or to add a new DBD to be monitored. As part of Classic Event Publisher installation and customization, you update the correlation service's JCL and add a DBDLIB statement that references your DBD library or a copy of the DBD load modules that are being monitored for changes.

Activating change capture for VSAM

If you have a change-capture agent for VSAM defined in the same configuration file as your correlation service, that agent starts when you start the correlation service. After the server is initialized, you will see the following message on the system log:

CACH105I CICS VSAM CAPTURE: Vv.r.m mmddyyyy READY

This is followed by a message that indicates the time that processing began:

CACH106I START PROCESSING AT mm/dd/yyyy hh:mm:ss

Monitoring correlation services and publication services

After you start your correlation service and publication service, you can use MTO commands to display reports on their activities.

Command for monitoring correlation services

cmd,name of correlation service,report

This command displays a report on the activity of the correlation service and all change-capture agents that send it change-capture data.

The following shows an example of the results of this command:

CAC00200I CMD,XM1/IMSX/IMSX/200,REPORT
CACG150I CORRELATION SERVICE ACTIVITY REPORT
*************** Transactions ***************
Agent        Processed  Sent to Rules  Confirmed  Pending  State
-----------  ---------  -------------  ---------  -------  -----
VSAVSAMECA   0000002    0000002        0000002    0000000  Active
CACG151I END OF REPORT, AGENT TOTAL=1
CACG152I PENDINGQ(0) MSGQ(0) UNCONFIRMED(0)

Command for monitoring publication services

Command to report activity: cmd,name of publication service,report

This command publishes a report of the number of change messages received, the number of commit messages received, the number of commit confirmations received from the correlation service, and the number of commit messages rejected by the publication service. Here is a sample report:


CACJ001I DISTRIBUTION SERVER ACTIVITY REPORT
--------------------------------------------
Change Message(s) Received  = 13
Commit Message(s) Received  = 2
Commit Message(s) Confirmed = 2
Commit Message(s) Rejected  = 0


Chapter 7. Recovering from errors

The following topics explain how to start publishing again if Classic Event Publisher should go down:

v Introduction to recovery mode
v Starting a recovery change-capture agent for CA-IDMS
v Preparing for recovery mode when using IMS change-capture agents
v Recovering from errors when using IMS change-capture agents
v Starting recovery change-capture agents for VSAM
v Stopping recovery change-capture agents for VSAM

Introduction to recovery mode

When live data capture is prevented, the system moves into recovery mode. Recovery mode ensures that no changes are lost.

Recovery is performed differently on the various change-capture agents. Recovery occurs when an agent fails. Some reasons for an agent to fail include:

v Starting a database without a correlation service
v Shutting down the correlation service without first stopping the database
v Change-capture agent messaging failure
v Correlation service failure
v Rejection of a message by the publication service
v Rejection of a message by a WebSphere MQ message queue

These failures might not be recoverable if they are driven by change data that will produce the same error if recovery is attempted. For example, if the data captured and forwarded to the publication service caused the publication service to reject the message, then the publication service will reject that message every time that the message is resent.

The correlation service is responsible for detecting failure and returning messages stating that the system entered recovery mode. At this point, the recovery change-capture agent should start or be started. Depending upon the configuration, recovery might start automatically, or might require you to run a job or otherwise start the recovery agent manually.

When the recovery change-capture agent is started, it performs the following actions:

1. It retrieves restart information from the correlation service.
2. It starts reading recovery source information from the native data source log/journal.
3. It sends raw change data and syncpoint records (COMMIT and ROLLBACK) to the correlation service.

When the recovery of data catches up with the active server, with some databases you need to restart the database to move back into active mode with no changes lost. However, it is often not practical to stop the monitored database to complete the recovery process. For these situations, Classic Event Publisher provides methods to exit recovery mode and resume the active capture of changes from the database system. Check the chapter of the DB2 Information Integrator Operations Guide for Classic Event Publishing for your database for the steps that you must take.

You can use the THROTTLE parameter to keep the recovery change-capture agent from overtaking the correlation service. For more information about this parameter, see the chapter of the DB2 Information Integrator Operations Guide for Classic Event Publishing for your database.

Starting a recovery change-capture agent for CA-IDMS

The recovery change-capture agent for CA-IDMS is a program that reads CA-IDMS journal files and forwards previously journaled changes to the DB2 Information Integrator Classic Event Publisher correlation service. This agent can be started as a batch job or through automated installation procedures when the active agent enters recovery mode.

Unlike the active agent, which captures changes as they are written to the journal, the recovery agent can process historical changes which were lost by the active agent due to the lack of an active correlation service or some other system failure. This agent can also use throttling to control the rate at which changes are sent to the correlation service so other active and recovery agents can continue to operate normally without risk of overrunning the message queue.

The z/OS parameter controls the processing of the recovery agent. The format of the parameter is:

PARM='CV,Optkeyword=vvvv,...'
     'LOCAL,Optkeyword=vvvv,...'
     'ARCHIVE,Optkeyword=vvvv,...'
     'REPORT,Optkeyword=vvvv,...'
     'MONITOR,Optkeyword=vvvv,...'
     'COUNT'

Where:

v CV defines recovery from one or more CA-IDMS disk journal files written by a CA-IDMS Central Version. In CV mode, the Central Version can either be running or stopped.

v LOCAL defines recovery from a single tape or disk journal file written by a local mode CA-IDMS application.

v ARCHIVE defines recovery from an archived Central Version journal file. Do not use the AGENT optional keyword with PARM='ARCHIVE'.

v REPORT requests a report of the current recovery sequence and timestamp for the requested agent. The AGENT keyword is required with the REPORT option.

v MONITOR indicates whether the recovery agent will do a single check for recovery state, or will run as a continuous monitor for automatic recovery.

v COUNT indicates to count the number of full recovery logs and skip automatic archiving if the minimum number of full journals is not available.

v Optkeyword=vvvv is one or more optional keyword parameters, which can be included in the parameter string to control recovery processing.

The following are the optional keywords for the CV, LOCAL, and REPORT parameters.

v ACTIVATE={Y | N}. Specifies whether or not to enable the active agent on successful completion of processing. Specifying 'Y' informs the correlation service that all recovery messages have been sent and that the active change-capture agent may begin forwarding messages again after the final recovery message has been processed.

Using ACTIVATE with the CV parameter is not recommended unless you need to disable automatic activation when continuous-mode processing ends due to the shutdown of a CA-IDMS Central Version.

If this parameter is unspecified, the agent remains in recovery mode at termination unless continuous-mode CV processing detects the shutdown of a CA-IDMS Central Version.

v AGENT=agentname. Identifies the active change-capture agent name when you set PARM=LOCAL or the Central Version number is 0. For local agents, this name must be 'IDMS_jjjjjjjj' where jjjjjjjj is the original z/OS job or started task name that performed the database update. For Central Version 0 systems, the name is 'IDMS_ssssssss' where ssssssss is the Central Version started task name.

For Central Version disk and tape archive files, the Central Version number carried in journal 'TIME' records is used to automatically determine the agent name. This name is always 'IDMS_nnn' where nnn is the Central Version number.

PARM='LOCAL,AGENT=IDMS_IDMJUPDT'

You must specify this value to use the REPORT option.

v RESTARTWAIT=nn{M|S}. Defines the wait interval in minutes or seconds for continuous run operation (PARM=CV). Each time the recovery agent catches up with the last record written by an active CA-IDMS Central Version, the recovery agent suspends processing for the specified interval before querying the active journal for new change records.

If M or S is not specified, the value supplied is assumed to be in seconds. If the parameter is not specified or is specified as 0, the agent runs in non-continuous mode and terminates when it catches up with the current position in the active journal file.

PARM='CV,RESTARTWAIT=5S'

v THROTTLE=nnnn. Defines a throttle value to prevent the recovery agent from overrunning the correlation service with change messages. The throttle value defines the maximum number of messages to be queued to the correlation service at any point in time. This ensures available space in the message queue for any active change-capture agents communicating with the correlation service.

If this value is not specified, the throttle value defaults to 512. A value of 0 disables throttling of messages.

PARM='CV,THROTTLE=1024'

v TIMEOUT=nn{M|S}. Defines a timeout value when throttling is used. Throttling relies on the correlation service responding to requests that messages are received. In the event that the correlation service is unable to respond within the specified TIMEOUT value, the recovery server terminates with an error.

If this value is not specified, the timeout value defaults to 5 minutes.

PARM='CV,THROTTLE=1024,TIMEOUT=1M'

v SERVER=name. Specifies the name of the correlation service that you want this change-capture agent to communicate with.

Parameter example

An example parameter string to recover from Central Version disk files in continuous mode is:

EXEC PGM=CACEC1DR,PARM='CV,RESTARTWAIT=2S'
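For comparison, a REPORT request for a specific agent might look like the following sketch. The agent name IDMS_120 is an assumption built from the 'IDMS_nnn' naming convention described above, applied to Central Version 120; the AGENT keyword is required with the REPORT option.

```jcl
//REPORT   EXEC PGM=CACEC1DR,PARM='REPORT,AGENT=IDMS_120'
```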


Execution JCL

//JOBNAME  JOB (ACCT,INFO),'CA-IDMS RECOVERY',CLASS=A,MSGCLASS=X,
//         NOTIFY=&SYSUID
//CACEC1DR EXEC PGM=CACEC1DR,
//         PARM='CV,RESTARTWAIT=5S,TIMEOUT=5M'
//STEPLIB  DD DISP=SHR,DSN=CAC.LOADLIB
//CTRANS   DD DISP=SHR,DSN=SYS2.SASC.C650.LINKLIB
//* CV JOURNAL DATASETS AS DEFINED IN THE DMCL SOURCE
//* FOR CV 120.
//J1JRNL   DD DISP=SHR,DSN=CAI.CA-IDMS.J1JRNL
//J2JRNL   DD DISP=SHR,DSN=CAI.CA-IDMS.J2JRNL
//J3JRNL   DD DISP=SHR,DSN=CAI.CA-IDMS.J3JRNL
//J4JRNL   DD DISP=SHR,DSN=CAI.CA-IDMS.J4JRNL
//SYSPRINT DD SYSOUT=*
//SYSTERM  DD SYSOUT=*
//

Journal files in execution JCL

CA-IDMS journals are read from the JnJRNL DD statements allocated in the recovery execution JCL. When Central Version mode is used (PARM='CV,...'), multiple journal files can be included so that all journal files allocated in the Central Version startup JCL can be processed by the recovery change-capture agent.

When allocating multiple files for Central Version mode, the order of dataset allocations to JnJRNL must match the processing order as defined in the CREATE DISK JOURNAL statements in the DMCL.

Preparing for recovery mode when using IMS change-capture agents

There are two steps that you can take to make it easier to switch from recovery mode back to active mode when you are capturing changes from IMS databases. Do these steps before you start your IMS change-capture agents.

1. Update the IMS JCL to include a reference to a recovery data set for each IMS active change-capture agent that is running in your Classic Event Publisher configuration.

Recommendation: Add a reference to a recovery data set in each IMS job or started task that the IMS change-capture agent is installed in. Otherwise, if you run the IMS job or started task without a correlation server active, you will need to manually create a recovery data set and provide IMS restart information to recover the changes that were lost.

2. Implement the log file tracking utility.

The IMS Log Tracking Utility is a job-step that you must add to an IMS DB/DC or DBCTL subsystem's log archive JCL to register the archived IMS log file in the IMS log file tracking data set. For IMS batch jobs, additional job-steps must be added after each job-step that updates IMS data that is monitored by an IMS active change-capture agent. Member name CACIMSLT in SCACSAMP contains sample JCL used to execute the IMS Log Tracking Utility.

The IMS Log Tracking Utility is command-line driven and uses fixed DD names to identify the primary and secondary log files to be registered and the name of the IMS log file tracking data set to be updated.

Use the command-line parameters in the following table to control the actions of the IMS Log Tracking Utility.


Table 5. IMS Log Tracking Utility command-line parameters

DUALLOGS (Y | N)
    Specifies whether the IMS Log File Tracking Utility collects information about a secondary IMS log file.
    Y   The IMS control region is using dual logging.
    N   The IMS control region is using single logging.
    The default value is N.
    Example: PARM='DUALLOGS=N'

ECHO (Y | N)
    Specifies whether the IMS Log File Tracking Utility issues informational WTO messages.
    Y   Issue informational WTO messages.
    N   Do not issue informational WTO messages.
    The default value is Y.
    Example: PARM='DUALLOGS=N,ECHO=N'

MAXLOGS (number)
    Specifies the maximum number of IMS log file entries that are to be maintained in the IMS log file tracking data set. Specifying a value of 0, or not supplying a MAXLOGS value, causes the IMS Log File Tracking Utility to maintain an unlimited number of IMS log file entries.
    Normally, IMS log files have a certain lifetime associated with them. Generally, the IMS log files are defined as generation data sets that have a fixed number of generations that are retained. In these situations, supply a MAXLOGS value that matches the number of generations being retained.
    If you have specified that dual logging is being used for this IMS active change-capture agent, the IMS Log File Tracking Utility automatically doubles the MAXLOGS value that you supply, because it assumes that the same number of secondary IMS log files are retained.
    Example: PARM='DUALLOGS=N,ECHO=N,MAXLOGS=5'

For more information about recovery mode when you are capturing data from IMS databases, see IBM DB2 Information Integrator Planning Guide for Classic Event Publishing.

Recovering from errors when using IMS change-capture agents

If WTO messages issued by IMS change-capture agents or by the correlation service tell you that one or more change-capture agents are in recovery mode, perform the following steps to return to active mode:

1. Read the output from the IMS Active Agent Status Job to identify any change-capture agents that started before the correlation service was started and that are now in recovery mode.

2. Tell the correlation service about the change-capture agents identified in step 1 that are not reported by the correlation service as being in recovery mode. Do so by creating a custom input control file and running the IMS recovery change-capture agent in "set" mode.

3. Identify the log files required for recovering data.

4. Recover the necessary log files.

5. Return the change-capture agents to active mode.


For more information about these steps, see IBM DB2 Information Integrator Operations Guide for Classic Event Publishing.

Starting recovery change-capture agents for VSAM

A VSAM change-capture agent switches from active to recovery mode if one of the following events occurs:

v You stop the correlation service.
v You stop the change-capture agent.
v An application error causes the correlation service to stop.

A starting point is written in the correlation service's restart data store, so the recovery change-capture agent can locate a starting position in the log stream files. After switching to recovery mode, the change-capture agent queries the correlation service for the restart point and starts reading the log streams at that point. After the end of the log streams is reached, the change-capture agent sets itself to active mode.

Set the retention period and AUTODELETE specifications of the system, user, and log-of-logs log streams so that the data remains in the log stream for the longest period of recovery that you want. If the retention period and AUTODELETE specifications are met, CICS purges completed units of work on the system log file when CICS terminates.
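Retention is set on the log stream definitions themselves. The following sketch uses the z/OS IXCMIAPU administrative data utility to update a log stream; the log stream name and the 7-day retention period are assumptions for illustration only, so substitute the log streams and retention that match your recovery window.

```jcl
//DEFLOGS  EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(LOGR) REPORT(NO)
  UPDATE LOGSTREAM NAME(CICSTS.CICS1.DFHLOG)
         RETPD(7)
         AUTODELETE(NO)
/*
```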

Stopping recovery change-capture agents for VSAM

There are two methods for stopping recovery change-capture agents for VSAM.

v If you started a recovery agent by stopping the correlation service, restart the correlation service in either COLDSTART mode or WARMSTART mode.

  – COLDSTART mode: To switch a change-capture agent back to active mode without capturing lost data changes:

    1. Modify the Service Info Entry for the correlation service that you want to cold start, changing the entry WARMSTART to COLDSTART.

    2. Restart the correlation service with the following command: START,SERVICE=name where name is the value of the second parameter in the Service Info Entry for the correlation service.

    3. Change the Service Info Entry by changing the COLDSTART value back to WARMSTART.

  – WARMSTART mode: Use the following command: START,SERVICE=name where name is the value of the second parameter in the Service Info Entry for the correlation service.

v If you started a recovery change-capture agent by stopping an active change-capture agent, or if a recovery change-capture agent started because of an application error, restart the change-capture agent with the following command: START,SERVICE=name where name is the value of the second parameter in the Service Info Entry for the correlation service.


DB2 Information Integrator documentation

This topic provides information about the documentation that is available for DB2 Information Integrator. The tables in this topic provide the official document title, form number, and location of each PDF book. To order a printed book, you must know either the official book title or the document form number. Titles, file names, and the locations of the DB2 Information Integrator release notes and installation requirements are also provided in this topic.

This topic contains the following sections:

v Accessing DB2 Information Integrator documentation
v Documentation for replication function on z/OS
v Documentation for event publishing function for DB2 Universal Database on z/OS
v Documentation for event publishing function for IMS and VSAM on z/OS
v Documentation for event publishing and replication function on Linux, UNIX, and Windows
v Documentation for federated function on z/OS
v Documentation for federated function on Linux, UNIX, and Windows
v Documentation for enterprise search on Linux, UNIX, and Windows
v Release notes and installation requirements

Accessing DB2 Information Integrator documentation

All DB2 Information Integrator books and release notes are available in PDF files from the DB2 Information Integrator Support Web site at www.ibm.com/software/data/integration/db2ii/support.html.

To access the latest DB2 Information Integrator product documentation, from the DB2 Information Integrator Support Web site, click on the Product Information link, as shown in Figure 1 on page 56.


You can access the latest DB2 Information Integrator documentation, in all supported languages, from the Product Information link:

v DB2 Information Integrator product documentation in PDF files
v Fix pack product documentation, including release notes
v Instructions for downloading and installing the DB2 Information Center for Linux, UNIX, and Windows
v Links to the DB2 Information Center online

Scroll through the list to find the product documentation for the version of DB2 Information Integrator that you are using.

Figure 1. Accessing the Product Information link from the DB2 Information Integrator Support Web site

The DB2 Information Integrator Support Web site also provides support documentation, IBM Redbooks, white papers, product downloads, links to user groups, and news about DB2 Information Integrator.

You can also view and print the DB2 Information Integrator PDF books from the DB2 PDF Documentation CD.

To view or print the PDF documentation:

1. From the root directory of the DB2 PDF Documentation CD, open the index.htm file.
2. Click the language that you want to use.
3. Click the link for the document that you want to view.

Documentation about replication function on z/OS

Table 6. DB2 Information Integrator documentation about replication function on z/OS (title, form number, location)

v ASNCLP Program Reference for Replication and Event Publishing (N/A): DB2 Information Integrator Support Web site
v Introduction to Replication and Event Publishing (GC18-7567): DB2 Information Integrator Support Web site
v Migrating to SQL Replication (N/A): DB2 Information Integrator Support Web site
v Replication and Event Publishing Guide and Reference (SC18-7568): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Replication Installation and Customization Guide for z/OS (SC18-9127): DB2 Information Integrator Support Web site
v SQL Replication Guide and Reference (SC27-1121): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Tuning for Replication and Event Publishing Performance (N/A): DB2 Information Integrator Support Web site
v Tuning for SQL Replication Performance (N/A): DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Standard Edition, Advanced Edition, and Replication for z/OS (N/A): in the DB2 Information Center under Product Overviews > Information Integration > DB2 Information Integrator overview > Problems, workarounds, and documentation updates; DB2 Information Integrator Installation launchpad; DB2 Information Integrator Support Web site; the DB2 Information Integrator product CD


Documentation about event publishing function for DB2 Universal Database on z/OS

Table 7. DB2 Information Integrator documentation about event publishing function for DB2 Universal Database on z/OS (title, form number, location)

v ASNCLP Program Reference for Replication and Event Publishing (N/A): DB2 Information Integrator Support Web site
v Introduction to Replication and Event Publishing (GC18-7567): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Replication and Event Publishing Guide and Reference (SC18-7568): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Tuning for Replication and Event Publishing Performance (N/A): DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Standard Edition, Advanced Edition, and Replication for z/OS (N/A): in the DB2 Information Center under Product Overviews > Information Integration > DB2 Information Integrator overview > Problems, workarounds, and documentation updates; DB2 Information Integrator Installation launchpad; DB2 Information Integrator Support Web site; the DB2 Information Integrator product CD

Documentation about event publishing function for IMS and VSAM on z/OS

Table 8. DB2 Information Integrator documentation about event publishing function for IMS and VSAM on z/OS (title, form number, location)

v Client Guide for Classic Federation and Event Publisher for z/OS (SC18-9160): DB2 Information Integrator Support Web site
v Data Mapper Guide for Classic Federation and Event Publisher for z/OS (SC18-9163): DB2 Information Integrator Support Web site
v Getting Started with Event Publisher for z/OS (GC18-9186): DB2 Information Integrator Support Web site
v Installation Guide for Classic Federation and Event Publisher for z/OS (GC18-9301): DB2 Information Integrator Support Web site
v Operations Guide for Event Publisher for z/OS (SC18-9157): DB2 Information Integrator Support Web site
v Planning Guide for Event Publisher for z/OS (SC18-9158): DB2 Information Integrator Support Web site
v Reference for Classic Federation and Event Publisher for z/OS (SC18-9156): DB2 Information Integrator Support Web site
v System Messages for Classic Federation and Event Publisher for z/OS (SC18-9162): DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Event Publisher for IMS for z/OS (N/A): DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Event Publisher for VSAM for z/OS (N/A): DB2 Information Integrator Support Web site

Documentation about event publishing and replication function on Linux, UNIX, and Windows

Table 9. DB2 Information Integrator documentation about event publishing and replication function on Linux, UNIX, and Windows (title, form number, location)

v ASNCLP Program Reference for Replication and Event Publishing (N/A): DB2 Information Integrator Support Web site
v Installation Guide for Linux, UNIX, and Windows (GC18-7036): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Introduction to Replication and Event Publishing (GC18-7567): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Migrating to SQL Replication (N/A): DB2 Information Integrator Support Web site
v Replication and Event Publishing Guide and Reference (SC18-7568): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v SQL Replication Guide and Reference (SC27-1121): DB2 Information Integrator Support Web site
v Tuning for Replication and Event Publishing Performance (N/A): DB2 Information Integrator Support Web site
v Tuning for SQL Replication Performance (N/A): DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Standard Edition, Advanced Edition, and Replication for z/OS (N/A): in the DB2 Information Center under Product Overviews > Information Integration > DB2 Information Integrator overview > Problems, workarounds, and documentation updates; DB2 Information Integrator Installation launchpad; DB2 Information Integrator Support Web site; the DB2 Information Integrator product CD

Documentation about federated function on z/OS

Table 10. DB2 Information Integrator documentation about federated function on z/OS (title, form number, location)

v Client Guide for Classic Federation and Event Publisher for z/OS (SC18-9160): DB2 Information Integrator Support Web site
v Data Mapper Guide for Classic Federation and Event Publisher for z/OS (SC18-9163): DB2 Information Integrator Support Web site
v Getting Started with Classic Federation for z/OS (GC18-9155): DB2 Information Integrator Support Web site
v Installation Guide for Classic Federation and Event Publisher for z/OS (GC18-9301): DB2 Information Integrator Support Web site
v Reference for Classic Federation and Event Publisher for z/OS (SC18-9156): DB2 Information Integrator Support Web site
v System Messages for Classic Federation and Event Publisher for z/OS (SC18-9162): DB2 Information Integrator Support Web site
v Transaction Services Guide for Classic Federation for z/OS (SC18-9161): DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Classic Federation for z/OS (N/A): DB2 Information Integrator Support Web site

Documentation about federated function on Linux, UNIX, and Windows

Table 11. DB2 Information Integrator documentation about federated function on Linux, UNIX, and Windows

Application Developer’s Guide (SC18-7359):
v DB2 PDF Documentation CD
v DB2 Information Integrator Support Web site

60 DB2 II Getting Started with Classic Event Publishing

Table 11. DB2 Information Integrator documentation about federated function on Linux, UNIX, and Windows (continued)

C++ API Reference for Developing Wrappers (SC18-9172):
v DB2 PDF Documentation CD
v DB2 Information Integrator Support Web site

Data Source Configuration Guide (N/A):
v DB2 PDF Documentation CD
v DB2 Information Integrator Support Web site

Federated Systems Guide (SC18-7364):
v DB2 PDF Documentation CD
v DB2 Information Integrator Support Web site

Guide to Configuring the Content Connector for VeniceBridge (N/A): DB2 Information Integrator Support Web site

Installation Guide for Linux, UNIX, and Windows (GC18-7036):
v DB2 PDF Documentation CD
v DB2 Information Integrator Support Web site

Java API Reference for Developing Wrappers (SC18-9173):
v DB2 PDF Documentation CD
v DB2 Information Integrator Support Web site

Migration Guide (SC18-7360):
v DB2 PDF Documentation CD
v DB2 Information Integrator Support Web site

Wrapper Developer’s Guide (SC18-9174):
v DB2 PDF Documentation CD
v DB2 Information Integrator Support Web site

Release Notes for IBM DB2 Information Integrator Standard Edition, Advanced Edition, and Replication for z/OS (N/A):
v In the DB2 Information Center, Product Overviews > Information Integration > DB2 Information Integrator overview > Problems, workarounds, and documentation updates
v DB2 Information Integrator Installation launchpad
v DB2 Information Integrator Support Web site
v The DB2 Information Integrator product CD

Documentation about enterprise search function on Linux, UNIX, and Windows

Table 12. DB2 Information Integrator documentation about enterprise search function on Linux, UNIX, and Windows

Administering Enterprise Search (SC18-9283): DB2 Information Integrator Support Web site
Installation Guide for Enterprise Search (GC18-9282): DB2 Information Integrator Support Web site
Programming Guide and API Reference for Enterprise Search (SC18-9284): DB2 Information Integrator Support Web site
Release Notes for Enterprise Search (N/A): DB2 Information Integrator Support Web site

Release notes and installation requirements

Release notes provide information that is specific to the release and fix pack level for your product and include the latest corrections to the documentation for each release.

Installation requirements provide information that is specific to the release of your product.

Table 13. DB2 Information Integrator Release Notes and Installation Requirements

Installation Requirements for IBM DB2 Information Integrator Event Publishing Edition, Replication Edition, Standard Edition, Advanced Edition, Advanced Edition Unlimited, Developer Edition, and Replication for z/OS (file name: Prereqs):
v The DB2 Information Integrator product CD
v DB2 Information Integrator Installation Launchpad

Release Notes for IBM DB2 Information Integrator Standard Edition, Advanced Edition, and Replication for z/OS (file name: ReleaseNotes):
v In the DB2 Information Center, Product Overviews > Information Integration > DB2 Information Integrator overview > Problems, workarounds, and documentation updates
v DB2 Information Integrator Installation launchpad
v DB2 Information Integrator Support Web site
v The DB2 Information Integrator product CD

Release Notes for IBM DB2 Information Integrator Event Publisher for IMS for z/OS (N/A): DB2 Information Integrator Support Web site

Table 13. DB2 Information Integrator Release Notes and Installation Requirements (continued)

Release Notes for IBM DB2 Information Integrator Event Publisher for VSAM for z/OS (N/A): DB2 Information Integrator Support Web site
Release Notes for IBM DB2 Information Integrator Classic Federation for z/OS (N/A): DB2 Information Integrator Support Web site
Release Notes for Enterprise Search (N/A): DB2 Information Integrator Support Web site

To view the installation requirements and release notes that are on the product CD:

v On Windows operating systems, enter:
  x:\doc\%L
  where x is the Windows CD drive letter and %L is the locale of the documentation that you want to use, for example, en_US.

v On UNIX operating systems, enter:
  /cdrom/doc/%L/
  where cdrom refers to the UNIX mount point of the CD and %L is the locale of the documentation that you want to use, for example, en_US.
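On UNIX systems, the path above can be assembled in a short shell sketch. This is only an illustration: the /cdrom mount point and the en_US locale are assumptions taken from the example; substitute your own mount point and locale.

```shell
#!/bin/sh
# Build the locale-specific documentation path on the product CD.
# CDROM is an assumed mount point; en_US is the example locale from the text.
CDROM=/cdrom
L=en_US
DOC_DIR="$CDROM/doc/$L/"
echo "$DOC_DIR"
# Once the CD is mounted, list the release notes and installation
# requirements files with:  ls "$DOC_DIR"
```

Running the sketch prints /cdrom/doc/en_US/, the directory you would browse for the ReleaseNotes and Prereqs files listed in Table 13.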

Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in all countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.

For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property Department in your country/region or send inquiries, in writing, to:

IBM World Trade Asia Corporation
Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106-0032, Japan

The following paragraph does not apply to the United Kingdom or any other country/region where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product, and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

© Copyright IBM Corp. 2003, 2004 65

Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information that has been exchanged, should contact:

IBM Corporation
J46A/G4
555 Bailey Avenue
San Jose, CA 95141-1003
U.S.A.

Such information may be available, subject to appropriate terms and conditions, including in some cases payment of a fee.

The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement, or any equivalent agreement between us.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems, and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements, or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

All statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious, and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs, in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM’s application programming interfaces.

Each copy or any portion of these sample programs or any derivative work must include a copyright notice as follows:

© (your company name) (year). Portions of this code are derived from IBM Corp. Sample Programs. © Copyright IBM Corp. _enter the year or years_. All rights reserved.

Trademarks

The following terms are trademarks of International Business Machines Corporation in the United States, other countries, or both:

IBM
CICS
DB2
IMS
Language Environment
MVS
VTAM
WebSphere
z/OS

The following terms are trademarks or registered trademarks of other companies:

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Intel, Intel Inside (logos), MMX and Pentium are trademarks of Intel Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Other company, product or service names may be trademarks or service marks of others.

Index

A
ABEND detection, disabling 40
Automatic journaling
  modifying 11

C
CA-IDMS
  configuring a named server environment 11
  enabling change-capture 5
  loading System Catalog 9
  mapping schema/subschema 6
  modifying Central Version JCL 11
  punching schema/subschema 6
  starting a recovery Change Capture Agent 50
  starting an active change-capture agent 52
CA-IDMS Central Version
  modifying JCL 11
Catalogs, loading
  IMS 22
Central Version. See IDMS Central Version
Change capture
  activating
    for IDMS 9
  enabling
    for an IMS database/segment 46
    for CA-IDMS 5
  type of IMS log records used 46
Change Capture Agent
  messaging failure 49
  recovery mode 49, 50
  starting 49
change-capture agent
  recovery mode
    restoring active status 50
    starting 50
  starting
    CA-IDMS 52
COLDSTART a correlation service 40
Connections, maximum for a correlation service 39
correlation service
  COLDSTART 40
  configuration 37
  configuring
    CA-IDMS named servers 11
    named servers 40
    TCP/IP 39
  creating recovery data sets 43
  CSA storage 39
  defining protocol 38
  defining queue name 38
  disabling ABEND detection 40
  idle time out 39
  maximum connections 39
  minimum/maximum tasks 38
  response time out 39
  Service Info Entry 37
  shutting down without stopping database 49
  starting a database without 49
  tracing 39
  WARMSTART 40
CSA storage 39

E
Environments
  supported by IMS for change capture 15

I
Idle time out, correlation service 39
IDMS
  activating change-capture 9
  enabling XSync access 13
  modifying automatic journaling 11
  relinking database I/O module 12
  relinking Presspack Support Module 12
  stacking the I/O module exit 12
IDMSJNL2 exit
  setting up 45
IMS
  activating change capture for a database/segment 46
  adding logger exit to existing exit 25
  Change Capture Agent
    installation 23
  Data Capture records 25
  log file tracking 24
  mapping data 15
  supported environments and program types 15
  type of log records used for change capture 46
Installation
  IMS Change Capture Agent 23

L
Logger exit
  adding to existing exit in IMS 25

M
Mapping data
  IMS 15
Maximum connections, correlation service 39
Maximum tasks, correlation service 39
metadata catalog
  loading for CA-IDMS 9
Minimum tasks, correlation service 38

N
Named servers
  configuring 40
  CA-IDMS 11
NOSTAE 40

O
overview
  capturing information 2
  publishing information 3

P
Presspack Support Module
  relinking 12
Program types
  supported by IMS for change capture 15
Protocol
  defining for correlation service 38
Punching schema/subschema 6

Q
Queue name, defining for correlation service 38

R
Recovery Change Capture Agent. See Change Capture Agent, recovery
Recovery data sets
  creating 43
Recovery mode 49, 50
Response time out, correlation service 39
Rules Server
  message rejection 49

S
schema
  mapping CA-IDMS 6
Service Info Entry
  correlation service 37
SIE. See Service Info Entry.
Subschema
  mapping CA-IDMS 6

T
TCP/IP
  configuring
    for correlation service 39
THROTTLE parameter 50
Tracing
  correlation service 39

W
Warm starting
  after a Change Capture Agent runs without a correlation service 10, 24
  correlation service 40

Contacting IBM

To contact IBM customer service in the United States or Canada, call 1-800-IBM-SERV (1-800-426-7378).

To learn about available service options, call one of the following numbers:

v In the United States: 1-888-426-4343
v In Canada: 1-800-465-9600

To locate an IBM office in your country or region, see the IBM Directory of Worldwide Contacts on the Web at www.ibm.com/planetwide.

Product information

Information about DB2 Information Integrator is available by telephone or on the Web.

If you live in the United States, you can call one of the following numbers:

v To order products or to obtain general information: 1-800-IBM-CALL (1-800-426-2255)
v To order publications: 1-800-879-2755

On the Web, go to www.ibm.com/software/data/integration/db2ii/support.html. This site contains the latest information about:

v The technical library
v Ordering books
v Client downloads
v Newsgroups
v Fix packs
v News
v Links to Web resources

Comments on the documentation

Your feedback helps IBM to provide quality information. Please send any comments that you have about this book or other DB2 Information Integrator documentation. You can use any of the following methods to provide comments:

v Send your comments using the online readers’ comment form at www.ibm.com/software/data/rcf.
v Send your comments by e-mail to [email protected]. Include the name of the product, the version number of the product, and the name and part number of the book (if applicable). If you are commenting on specific text, please include the location of the text (for example, a title, a table number, or a page number).


Printed in USA

GC18-9186-02