WebSphere for z/OS to CICS and IMS Connectivity Performance
Tamas Vilaghy
Rich Conway
Kim Patterson
Rajesh P Ramachandran
Robert W St John
Brent Watson
Frances Williams
Compare the performance of connectors
Look at the environment that was used
See the key findings for each measurement
WebSphere for z/OS to CICS and IMS Connectivity Performance
January 2006
International Technical Support Organization
© Copyright International Business Machines Corporation 2006. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
First Edition (January 2006)
This edition applies to Version 5, Release 1, Modification 02 of WebSphere Application Server for z/OS.
Note: Before using this information and the product it supports, read the information in “Notices” on page v.
Contents
Notices . . . . . . . . . . v
Trademarks . . . . . . . . . . vi

Preface . . . . . . . . . . vii
The team that wrote this Redpaper . . . . . . . . . . vii
Become a published author . . . . . . . . . . x
Comments welcome . . . . . . . . . . xi

Chapter 1. Introduction . . . . . . . . . . 1
1.1 Connectivity design considerations . . . . . . . . . . 3
1.2 Summary of key performance results . . . . . . . . . . 6

Chapter 2. The measurement environment . . . . . . . . . . 9
2.1 Test objectives . . . . . . . . . . 10
2.1.1 Approach . . . . . . . . . . 10
2.1.2 Test scenarios . . . . . . . . . . 10
2.2 Infrastructure . . . . . . . . . . 11
2.2.1 The sysplex configuration . . . . . . . . . . 11
2.3 WLM . . . . . . . . . . 15
2.4 WebSphere Application Server . . . . . . . . . . 16
2.5 CICS . . . . . . . . . . 17
2.5.1 CICS Transaction Gateway . . . . . . . . . . 21
2.5.2 WebSphere MQ/CICS DPL bridge . . . . . . . . . . 22
2.6 IMS . . . . . . . . . . 22
2.6.1 IMS Connect environment . . . . . . . . . . 22
2.6.2 The IMS environment . . . . . . . . . . 23
2.6.3 IMS back-end transactions . . . . . . . . . . 24
2.7 SOAP . . . . . . . . . . 25

Chapter 3. The Trader applications . . . . . . . . . . 27
3.1 Overview of Trader application . . . . . . . . . . 28
3.1.1 Trader IMS and CICS applications and data stores . . . . . . . . . . 29
3.1.2 SOAP considerations . . . . . . . . . . 31
3.1.3 Trader Web front-end user interface . . . . . . . . . . 33
3.1.4 Trader interface architecture and implementation . . . . . . . . . . 35
3.1.5 Packaging . . . . . . . . . . 38
3.2 TraderCICS . . . . . . . . . . 41
3.3 TraderSOAP . . . . . . . . . . 42
3.4 TraderMQ . . . . . . . . . . 43
3.5 TraderIMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Chapter 4. Measurements and results . . . . . . . . . . 47
4.1 The testing procedure . . . . . . . . . . 49
4.1.1 The test script . . . . . . . . . . 49
4.1.2 RMF Monitor III . . . . . . . . . . 50
4.2 Recorded data . . . . . . . . . . 50
4.2.1 WebSphere Studio Workload Simulator . . . . . . . . . . 50
4.2.2 RMF Monitor I . . . . . . . . . . 52
4.3 Metrics in our final analysis . . . . . . . . . . 56
4.4 Tuning and adjustment . . . . . . . . . . 58
4.4.1 Changing the settings . . . . . . . . . . 58
4.4.2 Adjustment . . . . . . . . . . 59
4.5 Results for CICS . . . . . . . . . . 60
4.5.1 CICS Transaction Gateway . . . . . . . . . . 61
4.5.2 SOAP for CICS . . . . . . . . . . 68
4.5.3 CICS MQ DPL Bridge . . . . . . . . . . 76
4.6 Results for IMS . . . . . . . . . . 82
4.6.1 IMS Connect . . . . . . . . . . 83
4.6.2 IMS MQ DPL Bridge . . . . . . . . . . 89
4.7 Connector and data size comparisons . . . . . . . . . . 90
4.7.1 CICS comparison charts . . . . . . . . . . 91
4.7.2 IMS comparison charts . . . . . . . . . . 105

Abbreviations and acronyms . . . . . . . . . . 109

Related publications . . . . . . . . . . 113
IBM Redbooks . . . . . . . . . . 113
Other publications . . . . . . . . . . 113
Online resources . . . . . . . . . . 113
How to get IBM Redbooks . . . . . . . . . . 114
Help from IBM . . . . . . . . . . 114

Index . . . . . . . . . . 115
iv WebSphere for z/OS to CICS and IMS Connectivity Performance
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.
COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrates programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
Eserver®, Redbooks (logo)™, z/OS®, zSeries®, AIX®, CICS®, DB2®, IBM®, IMS™, MVS™, OS/390®, Redbooks™, RACF®, RMF™, WebSphere®
The following terms are trademarks of other companies:
Java, J2EE, JVM, JSP, JMX, JDBC, JavaServer Pages, JavaServer, JavaBeans, Java Naming and Directory Interface, Forte, EJB, Enterprise JavaBeans, and other Java trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM® Redpaper focuses on helping you understand the performance implications of the different connectivity options from WebSphere® for IBM z/OS® to CICS® or IMS™. The architectural choices can be reviewed in WebSphere for z/OS to CICS/IMS Connectivity Architectural Choices, SG24-6365. That IBM Redbook shows you the different attributes of a connection, such as availability, security, transactional capability, and performance; however, it does not compare the performance impact of various connectivity options. Instead, it emphasizes the architectural solution and the non-functional requirements.
For this paper, we selected connectivity options from that book and created a measurement environment to simulate customer scenarios. For our CICS customers, we ran tests with CICS TG, SOAP for CICS, and CICS MQ DPL Bridge. For our IMS customers, we ran tests with IMS Connect and with IMS MQ DPL Bridge. We selected 500-byte, 5 KB, and 20 KB COMMAREA sizes, with very complex records to simulate complex customer scenarios.
All of our measurements were done during a quick six-week residency. However, an issue that affected some of our results arose immediately after our measurements were completed. Service to the WebSphere Studio Enterprise Developer development tooling significantly improved CICS SOAP performance. Because we were unable to rerun our performance tests after this service became available and because we did not want to provide misleading results, we worked with IBM performance experts and developers to provide best estimates for measurements that were affected by these changes. These estimates are based on other measurements that are outside the scope of this Redpaper.
All measurements were done with the ITSO-developed Trader application, which has been used in many redbooks over the years. The latest version can be downloaded from WebSphere for z/OS Connectivity Handbook, SG24-7064-01. Before coming to any conclusion, we suggest that you evaluate your own application.
The team that wrote this Redpaper
This Redpaper was produced by a team of specialists from around the world working at the International Technical Support Organization (ITSO), Poughkeepsie Center.
Tamas Vilaghy is currently a project manager in the Design Center for On Demand Business, Poughkeepsie. He was a project leader at the ITSO, Poughkeepsie Center until 2004. He led Redbook projects that involved e-business on IBM Eserver zSeries® servers. Before joining the ITSO, he worked in the System Sales Unit and Global Services departments of IBM Hungary. Tamas spent two years in Poughkeepsie, from 1998 to 2000, working with zSeries marketing and competitive analysis. From 1991 to 1998, he held technical, marketing, and sales positions for zSeries. From 1979 to 1991, he was involved with system software installation, development, and education.
Rich Conway is a senior IT specialist at the ITSO, Poughkeepsie Center. He has 21 years of experience in all aspects of MVS™ and z/OS as a systems programmer. He has worked extensively with UNIX® System Services and WebSphere on z/OS. He was a project leader for the ITSO for many redbooks and has also provided technical advice and support for numerous redbooks over the past 10 years.
Kim Patterson has been with IBM for eight years, working on customer contracts in OS/390® and z/OS. Her experience is with development and implementation of OS/390 and z/OS applications in IMS, DB2®, and CICS. During her IT career she has worked as an IMS and DB2 database administrator as well as an application analyst and developer. Most recently she was a member of the IBM Learning Services DB2 team on the OS/390 and z/OS platforms. As an instructor, Kim specialized in DB2 SQL, application programming, and administration courses. Recently she participated in another Redbook project on APPC protected conversations.
Rajesh P Ramachandran is an advisory software engineer for IBM zSeries e-business Services. He has 10 years of experience in application development in various platforms, including mainframe, UNIX, and Linux®. He used COBOL, Java™, CICS, and Forte in his assignments. Recently, he was involved with DB2 tools development, where he was a lead developer of DB2 Data Archive Expert. Rajesh is currently on assignment in the Design Center for On Demand Business, Poughkeepsie Center.
Robert W St John is a senior performance analyst from the IBM Poughkeepsie lab. His primary areas of expertise are WebSphere, Java, and UNIX System Services performance on z/OS. Robert has 23 years of experience with MVS and z/OS, including performance tools, system programming, and performance analysis. Although his primary focus is on improving the performance of WebSphere and Java products, he is also an author and a frequent conference speaker, providing information about WebSphere performance on z/OS.
Brent Watson is an eServer IT architect at the IBM Client Technology Center (CTC). He has a strong background in IT consulting and entrepreneurship, with technical expertise in J2EE and .NET software architecture, application servers, portal implementation, and business intelligence solutions architecture. He holds a BS in Computer Science from Clarkson University.
Frances Williams is a senior consultant with the eServer e-Business Services in the United States. She has over 20 years of experience in the IT field as an application design architect. Her focus is on z/OS platform technologies, which include WebSphere Application Server, WebSphere MQ, CICS, and many development languages. Her skills include performance tuning and troubleshooting application problems.
Thanks to the following people for their contributions to this project:
Robert Haimowitz, Patrick C. Ryan, Michael G. Conolly
ITSO, Poughkeepsie Center

Mitch Johnson
IBM Software Services for WebSphere

Denny Colvin
IBM WebSphere Studio Workload Simulator Development

Peter Mailand
IBM RMF™ Tools Development, Boeblingen
Phil Anselm, Chuck Neidig, Sun Sy
IBM Server Group Software Services

Kathy Walsh
IBM Washington System Center

Kenneth Blackman
IMS Advanced Technical Support

Colin Paice
IBM Hursley, WebSphere MQ Development

Nigel Williams
IBM Design Center for On Demand Business, Montpelier, France

Phil Wakelin, Mark Cocker, Richard Cowgill, Catherine Moxey, Ian J Mitchell, John Burgess, Trevor Clarke
IBM Hursley, CICS Development

Sinmei DeGrange, David Viguers, Barbara Klein, Gerald Hughes, Judith Hill
IMS Connect Development

Forsyth Alexander
ITSO, Raleigh Center
Become a published author
Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners, and/or customers.
Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our papers to be as helpful as possible. Send us your comments about this Redpaper or other Redbooks™ in one of the following ways:
• Use the online Contact us review redbook form found at:
ibm.com/redbooks
• Send your comments in an email to:

• Mail your comments to:

IBM Corporation, International Technical Support Organization
Dept. HYJ Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Chapter 1. Introduction
This chapter provides an introduction to connectivity design and also gives a summary of the results of our project.
The first section highlights aspects that must be evaluated when designing connectivity, such as security, scalability, availability, and standards compliance. Each of these affects performance, usually negatively.
The second section gives a short summary of our results, highlighting the key messages for readers who do not have time to work through all of the details of our measurements.
It should be noted that, after our measurements were completed, some significant improvements were made in the development tools that influenced the measurement results. We have done our best to adjust the results accordingly based on other performance measurements that have been done independently of this paper.
This IBM Redpaper is a companion to WebSphere for z/OS Connectivity Architectural Choices, SG24-6365. This paper provides information about the performance costs of selected connectivity options that are described in SG24-6365.
Important: The following information must be kept in mind as you read this paper:
• Because all our testing was done in a short period of time, we were unable to refine our results enough to consider them official performance measurements. However, we think our experiences can be used to understand, at a high level, the performance trade-offs that are associated with various connectivity options.

• Performance improvements in various areas are being released quickly, and every application is different. Even if we had the best possible performance results for our application, we could only provide general guidelines. For this reason, we recommend that you test your own application on the latest software before making major design decisions.

• The Trader application that we deployed used Virtual Storage Access Method (VSAM) files in the CICS case and Data Language One Hierarchic Indexed Direct Access Method (DL/I HIDAM) in the IMS case.
1.1 Connectivity design considerations
During the design of a connectivity solution, there are many attributes to consider. The system can be well-balanced, satisfying multiple requirements at the same time (Figure 1-1).
Figure 1-1 Well-balanced system
The system can also be tailored towards a given goal, such as security and availability (see Figure 1-2 on page 4). In these cases, you cannot necessarily satisfy all requirements and still achieve the best performance.
Figure 1-2 System tailored to security and availability
You can view the following list as a starting point for aspects to consider, but individual design situations might bring up additional factors:
• Security
Security is a necessity for most designs. Each connectivity solution has strengths and weaknesses that must be considered. Some connectivity solutions offer extensive security, and others offer hardly any. Adding security to a design usually slows down performance, because additional data is likely to be transmitted, which means that additional processing must be done.
• Standards compliance, interoperability
Some connectivity options are proprietary; some are more standards compliant. Some standards are newer; some are older and more mature. In some cases, there is not even a standard available. Implementing new standards might mean reduced performance because the product is not mature and has not gone through many refinements. Also, standards compliance might mean implementation of additional layers between two end-points that can cause more processor usage (for example, creating Web Services). However, using a given standard can also result in better interoperability and increased accessibility because more clients can use the service.
• Availability
Availability means duplication. Checking the viability of the duplicate generally requires more processing and resources, which affects performance negatively. However, continuous or longer uptime is generally the result of availability, and this uptime is a business requirement that is defined in business terms.
• Performance (response time, CPU, and memory cost)
Designing for performance is always a compromise. Usually it is hard to build open, standards-compliant, highly available, scalable, and secure connectivity and achieve high performance. The compromise is how to achieve the best balance between performance and other attributes. There are multiple performance measures: some are user related (like response time), and some are resource oriented (like CPU time).
• Skills availability
In a given customer scenario, skills availability always shapes connectivity choices. If the programmers know WebSphere MQ, they tend to use it rather than J2EE Connector Architecture (J2CA), even though J2CA might fit that particular solution better.
• Synchronous and asynchronous response requirements
Whenever applications communicate, the choice between a synchronous and an asynchronous response must be made. Most users prefer a prompt answer, so the design leads to synchronous solutions. However, a prompt answer is not always possible because of application availability, time zones, or scheduling issues; in those cases, the design leads to an asynchronous solution. This is usually the first question that should come up when designing a solution, and the answer automatically limits the number of connectivity options.
• Co-location and separation requirements
This attribute is related either to security (the two applications cannot be put under the control of the same operating system) or to hardware capacity (separate hardware is needed for the two applications). It automatically limits connectivity options. The standard transport protocol used today for separated machines is TCP/IP, which, as the measurements show, always underperforms co-located, proprietary transport protocols.
• Product maturity
As with standards maturity, product maturity always influences performance. Functional requirements come first; non-functional requirements come second. For example, CICS, an Enterprise Information System (EIS) that has been around for 40 years, has gone through so many design and performance reviews and refinements that it is likely to outperform some newly created and announced transactional systems, even though the new system might offer functions that cannot be found in CICS. The same is true for a connectivity product: it might take two or more releases or versions to solve most of the performance issues. Eventually, these issues are likely to be solved, if they can be solved internally to the product.
• Amount and type of data for communication
You can satisfy your data communication requirements by sending small amounts of data many times or large chunks of data only a few times. If the communication choice is costly (huge amounts of processing or many data conversions), you should choose a different communication method or redesign the application for better performance. There are also limitations posed by a given connectivity option (for example, the 32 KB COMMAREA limit with CICS Transaction Gateway) that can push a design to another option, even though the limited option's performance is excellent.
• Scalable software architecture
This attribute can also have performance implications. Scalable solutions require different internal algorithms, more storage, different sorting and table manipulation, thread safety, and other attributes that can hinder performance.
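The synchronous-versus-asynchronous distinction above lends itself to a small illustration. The sketch below is plain Java, with a BlockingQueue standing in for a message queue such as WebSphere MQ; the names are hypothetical and the back end is a stub, so this shows only the shape of the two styles, not a real connector call.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SyncVsAsync {
    // Synchronous style: the caller blocks until the back end replies.
    static String callSynchronously(String request) {
        return "reply-to-" + request;   // stands in for an RPC-style connector call
    }

    public static void main(String[] args) throws InterruptedException {
        // Synchronous: the answer is available as soon as the call returns.
        System.out.println(callSynchronously("buy"));

        // Asynchronous style: the caller enqueues the request and continues;
        // a separate consumer (here, another thread) processes it later.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("processed-" + queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        queue.put("sell");              // returns as soon as the message is queued
        consumer.join();
    }
}
```

The asynchronous caller never waits for the answer, which is exactly why it tolerates back-end downtime and scheduling gaps at the cost of a prompt response.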
The list is definitely not complete. There are many factors that influence a design, for example, company policies or limitations posed by a given EIS. For more information, refer to WebSphere for z/OS Connectivity Architectural Choices, SG24-6365.
1.2 Summary of key performance results
The objective of our project was to measure the performance of the different connectivity options from IBM WebSphere Application Server for z/OS to CICS and IMS EISs. We used a simple HTTP client that connects to WebSphere for all of our tests. The project could not cover all the possible architectural options. For example, we could not measure Web Services or Java clients that connect to Simple Object Access Protocol (SOAP) or CICS Transaction Gateway (TG). It should be noted that CICS TG can handle any J2EE client and that the SOAP server in CICS can handle any Web Services client. We did not measure Web Services clients that connect to WebSphere Application Server.
Some of the key results of our series of measurements are:
• Working with a small communications area (COMMAREA) size, CICS TG outperforms both SOAP and CICS MQ Distributed Program Link (DPL) Bridge.

• With a small COMMAREA size, SOAP and CICS MQ DPL Bridge results are similar. In the local case, CICS MQ DPL Bridge performs better; in the remote case, CICS SOAP performs better.

• CICS MQ DPL Bridge cost per byte is very low when compared to the other connectors that we measured.
• As the COMMAREA size was increased, CICS MQ DPL Bridge performed better than SOAP and CICS TG.

• The CPU usage for CICS SOAP can be greater than for other connectors because of the XML parsing of data structures; this parsing is the price of the ease with which loosely coupled systems can be managed. The amount of XML parsing is related to the complexity of the data structure. Our testing showed that, with a less complex COMMAREA, the performance of CICS SOAP is better than with a more complex COMMAREA of the same size. Therefore, for example, the application designer has the option of packaging multiple fields into one larger field. This simplifies the SOAP/XML processing, but the Java program must pack and unpack that larger field correctly.

• In general, local connectors perform better than remote connectors. The relative CPU usage delta between local and remote decreases as the application data size increases.

• IMS Connect performs better than IMS MQ DPL Bridge.
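The field-packaging option mentioned in the CICS SOAP bullet can be sketched in a few lines of Java. The field names and widths below are hypothetical (they are not the actual Trader COMMAREA layout); the point is only that several fixed-width fields can travel as one opaque field, shifting the parsing work out of the SOAP/XML layer and into the Java program.

```java
import java.util.Arrays;

public class CommareaPacking {
    // Hypothetical fixed-width layout: 8-char account, 6-char symbol, 9-char quantity.
    static final int ACCT_LEN = 8, SYM_LEN = 6, QTY_LEN = 9;

    // Pack three logical fields into one fixed-width field, padding with blanks.
    static String pack(String account, String symbol, String quantity) {
        return pad(account, ACCT_LEN) + pad(symbol, SYM_LEN) + pad(quantity, QTY_LEN);
    }

    // Unpack the single field back into its parts, trimming the blank padding.
    static String[] unpack(String packed) {
        return new String[] {
            packed.substring(0, ACCT_LEN).trim(),
            packed.substring(ACCT_LEN, ACCT_LEN + SYM_LEN).trim(),
            packed.substring(ACCT_LEN + SYM_LEN).trim()
        };
    }

    static String pad(String s, int len) {
        StringBuilder sb = new StringBuilder(s);
        while (sb.length() < len) sb.append(' ');
        return sb.toString();
    }

    public static void main(String[] args) {
        String packed = pack("ACCT0001", "IBM", "100");
        System.out.println("[" + packed + "]");               // one 23-character field
        System.out.println(Arrays.toString(unpack(packed)));  // the three logical fields
    }
}
```

With this approach the SOAP message carries one element instead of three, so less XML is parsed; the trade-off is that both ends must agree on the fixed-width layout.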
Chapter 2. The measurement environment
This chapter describes the test environment that we used to test our business applications. The objective was to test the performance of the different connectivity choices to the enterprise back-end systems. We briefly describe the infrastructure we set up and the connector types that we chose to test.
We discuss the following components used in our tests:
• Test objectives
• Infrastructure
  – CICS
  – IMS
  – WebSphere MQ/CICS DPL Bridge
  – WebSphere MQ/IMS DPL Bridge
2.1 Test objectives
The objective of our tests was to effectively measure the CPU consumption of simple J2EE applications that use different connectors to back-end EISs. We tested:
- CICS TG
- SOAP for CICS
- WebSphere MQ CICS DPL Bridge
- IMS Connect
- One measurement with IMS MQ DPL Bridge
2.1.1 Approach
Our main goal was to set up our environment so that our measurements best reflected the speed of each connector and not the speed of the EIS. To do this, our EISs ran in their own service classes with highest priority so that the transactions would not have to wait for any other process. The WebSphere environment and enclaves were configured to run at a slightly lower priority so that they would not steal processing from the EIS.
During our test runs, we used RMF Monitor III to monitor any delays. For each address space or group of address spaces, RMF Monitor III reported the delay that was experienced for the report interval and identified the primary cause for the delay. With the number of processors and the amount of real storage defined for our test systems, we did not experience delays of this nature.
The majority of the delays that we experienced were the result of inadequate data set placement, which we corrected by moving our test databases to ESS Direct Access Storage Devices (DASDs).
With each of our test cases, we made every effort to optimize the settings for that instance of the test to achieve the maximum throughput.
2.1.2 Test scenarios
We chose varying message sizes to test the impact of the message size on the performance of the transactions. The testing scenarios are shown in Table 2-1 on page 11.
Table 2-1 Table of test cases

CICS Transaction Gateway (CICS TG), local and remote connections:
  Scenario 1: DFHCOMMAREA = 0.5 KB
  Scenario 2: DFHCOMMAREA = 5 KB
  Scenario 3: DFHCOMMAREA = 20 KB

SOAP connector, local and remote connections:
  Scenario 1: Message size = 0.5 KB
  Scenario 2: Message size = 5 KB

CICS/MQ DPL Bridge connector, local and remote connections:
  Scenario 1: Message size = 0.5 KB
  Scenario 2: Message size = 5 KB
  Scenario 3: Message size = 20 KB

IMS connector, local and remote connections:
  Scenario 1: DFHCOMMAREA = 0.5 KB
  Scenario 2: DFHCOMMAREA = 5 KB
  Scenario 3: DFHCOMMAREA = 20 KB

IMS/MQ DPL Bridge connector, local connection:
  Scenario 1: DFHCOMMAREA = 5 KB

2.2 Infrastructure
This section describes the infrastructure of our environment.

2.2.1 The sysplex configuration
We used three systems for our performance tests. All were logical partitions (LPARs) in either a 2064 or 2084 zSeries server. All network connectivity between the LPARs was by XCF paths in the coupling facility.
The DASD used for our tests was ESS DASD shared between the three LPARs. Figure 2-1 shows the server configuration.
Figure 2-1 The servers used for the tests
The EIS subsystems were configured using criteria described in WebSphere for z/OS Connectivity Handbook, SG24-7064-01.
Figure 2-2 on page 13 shows the test cases that we performed:
- CICS TG
- WebSphere MQ DPL Bridge: CICS
- CICS-SOAP
- IMS Connect
- WebSphere MQ DPL Bridge: IMS
(Figure 2-1 shows the ITSO zSeries configuration: LPAR48, LPAR49, and LPAR43 on the z900 and z990 servers, connected through XCF and sharing ESS disk storage.)
Figure 2-2 Local test configuration
Figure 2-2 also shows the high level system configuration that we used for our test environment for the local connectivity scenarios. We used only two LPARs:
- One LPAR contained WebSphere and the EIS systems.
- The other LPAR (a separate machine) was used to drive the workload.
(Figure 2-2 shows the local configuration: WebSphere Application Server and its servant regions, CICS with CTG, IMS with IMS Connect, MQSeries with its CICS and IMS bridges, and the CICS/IMS databases on one LPAR, with WebSphere Workload Simulator for z/OS driving the Trader applications from another LPAR over XCF. Numbered paths mark the five connector routes: CTG, SOAP, CICS bridge, IMS Connect, and IMS bridge.)
Figure 2-3 shows the high level system configuration that we used for our test environment for the remote connectivity scenarios.
Figure 2-3 Environment for remote cases
We used three LPARs: one for WebSphere, the second for the EISs, and the third for driving the workload.
The following hardware and software components were used in our test environment:
- Two external Coupling Facilities (CF) were installed.
- WebSphere Application Server V5.1.0.2 for z/OS was used.
- The Resource Access Control Facility (RACF®) used a sysplex-wide shared database.
- Workload Manager (WLM) was set up in goal mode.
(Figure 2-3 shows the remote configuration across LPAR49, LPAR48, and LPAR43: one LPAR runs WebSphere Application Server, another runs the EIS subsystems, that is, CICS with CTG, IMS with IMS Connect, MQSeries with its CICS and IMS bridges, and the CICS/IMS databases, and the third runs WebSphere Workload Simulator for z/OS with the Trader scripts. All TCP/IP inter-LPAR communication is done through XCF.)
Table 2-2 shows the release levels.
Table 2-2 Release levels of software we used

Product                        Release level
z/OS                           V1.5
WebSphere Application Server   V5.1.0.2
CICS TS                        V2.2
CICS TG                        V5.1.0
SOAP for CICS feature          V2
WebSphere MQ                   V5.3.1
IMS                            V8.1
IMS Connect                    V2.2

2.3 WLM
To acquire the best throughput for our transactions, we set the WLM properties (Example 2-1) for the WebSphere servant regions on our LPARs to CPU critical flag = YES, importance 2, and a goal of 90% of requests completing within 0.35 seconds.

Example 2-1 Service class settings for WebSphere on SC48

* Service Class WAS48 - WAS servant sc48
  Created by user WATSON on 2004/11/12 at 15:52:38
  Base last updated by user WATSON on 2004/11/22 at 16:04:09
  Base goal:
  CPU Critical flag: YES

  # Duration Imp Goal description
  - -------- --- ----------------------------------------
  1            2 90% complete within 00:00:00.350
The CICS regions were set to CPU critical flag = YES, importance 1, and a goal of 80% of transactions completing within 0.15 seconds (Example 2-2). The objective was to give CICS more importance than the WebSphere application servant regions, resulting in less delay for CICS resources. Similar service class definitions were set up for IMS.
Example 2-2 Service class settings for CICS regions
* Service Class CICSW - WAS CICS transactions
  Created by user FRANCK on 2002/11/16 at 16:24:26
  Base last updated by user WATSON on 2004/11/12 at 17:26:09
  Base goal:
  CPU Critical flag: YES

  # Duration Imp Goal description
  - -------- --- ----------------------------------------
  1            1 80% complete within 00:00:00.150
WebSphere for z/OS propagates the performance context of work requests with WLM enclaves. Each transaction has its own enclave and is managed according to its service class.
2.4 WebSphere Application Server
For WebSphere Application Server, we swapped the Object Request Broker (ORB) services setting between IOBOUND and LONGWAIT. Figure 2-4 on page 17 shows how to set this up.
Figure 2-4 Setting the request broker in WebSphere Application Server
LONGWAIT specifies more threads than IOBOUND for application processing; specifically, it allowed us to run with 40 worker threads in each servant region. We used this setting for CICS TG because the connector spends most of its time waiting for network and remote operations to complete and very little time on its own processing. IOBOUND sets the number of worker threads to three times the number of CPUs, with a minimum of five and a maximum of 30 threads. See the WebSphere information center for more information about these settings.
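The thread-count rules described above can be sketched as follows. This is a simplification based only on the description in this section (shown in Python for brevity; consult the WebSphere information center for the exact rules in your release):

```python
def iobound_threads(num_cpus: int) -> int:
    """Approximate IOBOUND worker-thread count per servant region:
    three times the number of CPUs, clamped to a minimum of 5
    and a maximum of 30 threads, as described in the text."""
    return min(max(3 * num_cpus, 5), 30)

# LONGWAIT, by contrast, allowed a fixed 40 worker threads per servant region.
LONGWAIT_THREADS = 40
```

For a one-CPU LPAR this yields 5 threads; the 30-thread ceiling is reached at 10 or more CPUs.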
2.5 CICS
This section describes the architecture we chose to connect WebSphere Application Server for z/OS (WebSphere) to the CICS subsystem. We used three different methods:

- CICS TG
- JMS to WebSphere MQ DPL Bridge connection to CICS
- SOAP connection to CICS
In the local CICS TG case, we did not use the gateway daemon address space, because the CICS External Call Interface (ECI) resource adapter runs in WebSphere and communicates directly with CICS using the CICS TG facilities.
In the remote case, we used a resource adapter connected to CICS TG by a network connection. This connection uses the ECI protocol over TCP/IP.
The TCP/IP SOMAXCONN value, which limits the maximum number of outstanding connection requests, was 10.
We used the CICS resource definitions seen in Example 2-3 on page 18.
Example 2-3 CICS TG and SOAP definitions during the measurements
GROUP NAME: CTG
----------
CONNECTION(CTG) GROUP(CTG)
  DESCRIPTION(CTG CONNECTION)
  CONNECTION-IDENTIFIERS
    NETNAME(SCSCERWW) INDSYS()
  REMOTE-ATTRIBUTES
    REMOTESYSTEM() REMOTENAME() REMOTESYSNET()
  CONNECTION-PROPERTIES
    ACCESSMETHOD(IRC) PROTOCOL(EXCI) CONNTYPE(SPECIFIC)
    SINGLESESS(NO) DATASTREAM(USER) RECORDFORMAT(U)
    QUEUELIMIT(NO) MAXQTIME(NO)
  OPERATIONAL-PROPERTIES
    AUTOCONNECT(NO) INSERVICE(YES)
  SECURITY
    SECURITYNAME() ATTACHSEC(IDENTIFY) BINDSECURITY(NO) USEDFLTUSER(NO)
  RECOVERY
    PSRECOVERY() XLNACTION(KEEP)
SESSIONS(CTG) GROUP(CTG)
  DESCRIPTION(CTG SESSIONS)
  SESSION-IDENTIFIERS
    CONNECTION(CTG) SESSNAME() NETNAMEQ() MODENAME()
  SESSION-PROPERTIES
    PROTOCOL(EXCI) MAXIMUM(0,0) RECEIVEPFX(C) RECEIVECOUNT(999)
    SENDPFX() SENDCOUNT() SENDSIZE(4096) RECEIVESIZE(4096)
    SESSPRIORITY(0)
  OPERATOR-DEFAULTS
  PRESET-SECURITY
    USERID()
  OPERATIONAL-PROPERTIES
    AUTOCONNECT(NO) BUILDCHAIN(YES) USERAREALEN(0)
    IOAREALEN(4096,4096) RELREQ(NO) DISCREQ(NO) NEPCLASS(0)
  RECOVERY
    RECOVOPTION(SYSDEFAULT)
+++++++++++++++++++++++++++++++++++++
GROUP NAME: SOAPUSER
----------
PROGRAM(DFHWBCLI) GROUP(SOAPUSER)
  DESCRIPTION(Outbound HTTP Transport Interface)
  LANGUAGE(ASSEMBLER) RELOAD(NO) RESIDENT(NO) USAGE(NORMAL)
  USELPACOPY(NO) STATUS(ENABLED) CEDF(YES) DATALOCATION(ANY)
  EXECKEY(CICS) CONCURRENCY(QUASIRENT)
  REMOTE-ATTRIBUTES
    DYNAMIC(NO) REMOTESYSTEM() REMOTENAME()
    TRANSID() EXECUTIONSET(FULLAPI)
  JVM-ATTRIBUTES
    JVM(NO) JVMCLASS() JVMPROFILE(DFHJVMPR)
  JAVA-PROGRAM-OBJECT-ATTRIBUTES
    HOTPOOL(NO)
TCPIPSERVICE(SOAP) GROUP(SOAPUSER)
  DESCRIPTION(SOAP for CICS: HTTP port definition)
  URM(DFHWBADX) PORTNUMBER(8080) STATUS(OPEN) PROTOCOL(HTTP)
  TRANSACTION(CWXN) BACKLOG(200) TSQPREFIX() IPADDRESS()
  SOCKETCLOSE(10)
  SECURITY
    SSL(NO) CERTIFICATE() AUTHENTICATE(NO) ATTACHSEC()
  DNS-CONNECTION-BALANCING
    DNSGROUP() GRPCRITICAL(NO)
We defined Trader using the job in Example 2-4.
Example 2-4 Trader definitions in CICS
//CICSADD1 JOB (999,POK),'CONWAY',CLASS=A,
// MSGLEVEL=(1,1),MSGCLASS=A,NOTIFY=&SYSUID
/*JOBPARM S=SC48
//*************************************************
//* ADD CICS DEFS FOR TRADER -
//* CICS HAS TO BE DOWN TO ADD THESE
//*************************************************
//*
//ADDDEF EXEC PGM=DFHCSDUP,REGION=1M
//*
//STEPLIB DD DSN=CICSTS22.CICS.SDFHLOAD,DISP=SHR
//DFHCSD DD DSN=CICSUSER.CICS220.CICSERW.DFHCSD,DISP=SHR
//SYSUT1 DD UNIT=SYSDA,SPACE=(1024,(100,100))
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
 DELETE GROUP(TRADER)
 DEFINE TRANSACTION(TRAD) GROUP(TRADER)
        DESCRIPTION(ITSO TRADER TRANS)
        PROGRAM(TRADERPL) TRANCLASS(DFHTCL00)
 DEFINE MAPSET(NEWTRAD) GROUP(TRADER)
        DESCRIPTION(ITSO TRADER MAPSET)
 DEFINE FILE(COMPFILE) GROUP(TRADER)
        RECORDFORMAT(V) ADD(YES) BROWSE(YES) DELETE(YES)
        READ(YES) UPDATE(YES) DATABUFFERS(2) INDEXBUFFERS(1)
        DSNAME(CICSUSER.CICS220.CICSERW.COMPFILE)
 DEFINE FILE(CUSTFILE) GROUP(TRADER)
        RECORDFORMAT(V) ADD(YES) BROWSE(YES) DELETE(YES)
        READ(YES) UPDATE(YES) DATABUFFERS(2) INDEXBUFFERS(1)
        DSNAME(CICSUSER.CICS220.CICSERW.CUSTFILE)
 ADD GROUP(TRADER) LIST(WEBLIST)
//
We used the Trader file definitions shown in Example 2-5.
Example 2-5 VSAM file definitions
FILE(COMPFILE) GROUP(TRADERW1) DESCRIPTION()
  VSAM-PARAMETERS
    DSNAME(CICSUSER.CICS220.CICSERW1.COMPFILE) PASSWORD()
    RLSACCESS(NO) LSRPOOLID(1) READINTEG(UNCOMMITTED)
    DSNSHARING(ALLREQS) STRINGS(1) NSRGROUP()
  REMOTE-ATTRIBUTES
    REMOTESYSTEM() REMOTENAME()
  REMOTE-AND-CFDATATABLE-PARAMETERS
    RECORDSIZE() KEYLENGTH()
  INITIAL-STATUS
    STATUS(ENABLED) OPENTIME(FIRSTREF) DISPOSITION(SHARE)
  BUFFERS
    DATABUFFERS(2) INDEXBUFFERS(1)
  DATATABLE-PARAMETERS
    TABLE(NO) MAXNUMRECS(NOLIMIT)
  CFDATATABLE-PARAMETERS
    CFDTPOOL() TABLENAME() UPDATEMODEL(LOCKING) LOAD(NO)
  DATA-FORMAT
    RECORDFORMAT(V)
  OPERATIONS
    ADD(YES) BROWSE(YES) DELETE(YES) READ(YES) UPDATE(YES)
  AUTO-JOURNALLING
    JOURNAL(NO) JNLREAD(NONE) JNLSYNCREAD(NO) JNLUPDATE(NO)
    JNLADD(NONE) JNLSYNCWRITE(YES)
  RECOVERY-PARAMETERS
    RECOVERY(NONE) FWDRECOVLOG(NO) BACKUPTYPE(STATIC)
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
FILE(CUSTFILE) GROUP(TRADERW1) DESCRIPTION()
  VSAM-PARAMETERS
    DSNAME(CICSUSER.CICS220.CICSERW1.CUSTFILE) PASSWORD()
    RLSACCESS(NO) LSRPOOLID(1) READINTEG(UNCOMMITTED)
    DSNSHARING(ALLREQS) STRINGS(1) NSRGROUP()
  REMOTE-ATTRIBUTES
    REMOTESYSTEM() REMOTENAME()
  REMOTE-AND-CFDATATABLE-PARAMETERS
    RECORDSIZE() KEYLENGTH()
  INITIAL-STATUS
    STATUS(ENABLED) OPENTIME(FIRSTREF) DISPOSITION(SHARE)
  BUFFERS
    DATABUFFERS(2) INDEXBUFFERS(1)
  DATATABLE-PARAMETERS
    TABLE(NO) MAXNUMRECS(NOLIMIT)
  CFDATATABLE-PARAMETERS
    CFDTPOOL() TABLENAME() UPDATEMODEL(LOCKING) LOAD(NO)
  DATA-FORMAT
    RECORDFORMAT(V)
  OPERATIONS
    ADD(YES) BROWSE(YES) DELETE(YES) READ(YES) UPDATE(YES)
  AUTO-JOURNALLING
    JOURNAL(NO) JNLREAD(NONE) JNLSYNCREAD(NO) JNLUPDATE(NO)
    JNLADD(NONE) JNLSYNCWRITE(YES)
  RECOVERY-PARAMETERS
    RECOVERY(BACKOUTONLY) FWDRECOVLOG(NO) BACKUPTYPE(STATIC)
2.5.1 CICS Transaction Gateway
We modified CTG.INI as shown in Example 2-6 to increase the number of connections that we could achieve between WebSphere and CICS.
Example 2-6 Modification of CTG.INI
BROWSE -- /ctg/erwctg/CTG.INI
SECTION GATEWAY
  closetimeout=10000
  ecigenericreplies=off
  initconnect=100
  initworker=100
  maxconnect=200
  maxworker=200
  noinput=off
  nonames=on
  notime=off
  workertimeout=10000
  protocol@tcp.handler=com.ibm.ctg.server.TCPHandler
  protocol@tcp.parameters=connecttimeout=2000;idletimeout=600000;pingfrequency=60000;port=2006;solinger=0;sotimeout=1000;
2.5.2 WebSphere MQ/CICS DPL bridge
WebSphere MQ/CICS DPL Bridge was configured according to the WebSphere for z/OS Connectivity Handbook, SG24-7064-01.
To run our test scenario successfully, we had to increase the CTHREAD and IDBACK MQ system parameters. We used the following values:
- CTHREAD = 3000
- IDBACK = 200
CTHREAD controls the total number of concurrent connections to the queue manager. IDBACK limits the number of batch (non-TSO) connections.
2.6 IMS
We made the following changes to the standard IMS and IMS Connect environment.
2.6.1 IMS Connect environment
The following IMS Connect parameters were used:
HWS (ID=IM4BCONN,RRS=Y,RACF=Y,XIBAREA=20)
TCPIP (HOSTNAME=TCPIP,PORTID=(6001,LOCAL),MAXSOC=2000,TIMEOUT=60000)
DATASTORE (ID=IM4B,GROUP=HAOTMA,MEMBER=HWS814B,TMEMBER=SCSIMS4B)
The IMS Connect trace settings are shown in Example 2-7. The recommended trace level settings for IMS Connect BPE and IMS Connect internal trace are the defaults and ERROR for the production environment.
Example 2-7 IMS Connect trace settings
LANG=ENU                         /* LANGUAGE FOR MESSAGES    */
                                 /* (ENU = U.S. ENGLISH)     */
#
# DEFINITIONS FOR BPE SYSTEM TRACES
#
TRCLEV=(AWE,LOW,BPE)             /* AWE SERVER TRACE         */
TRCLEV=(CBS,MEDIUM,BPE)          /* CONTROL BLK SRVCS TRACE  */
TRCLEV=(LATC,LOW,BPE)            /* LATCH TRACE              */
TRCLEV=(DISP,HIGH,BPE,PAGES=12)  /* DISPATCHER TRACE WITH 12 */
                                 /* PAGES (48K BYTES)        */
TRCLEV=(SSRV,HIGH,BPE)           /* GEN SYS SERVICES TRACE   */
TRCLEV=(STG,MEDIUM,BPE)          /* STORAGE TRACE            */
#
# DEFINITIONS FOR HWS TRACES
#
TRCLEV=(CMDT,HIGH,HWS)           /* HWS COMMAND TRACE        */
TRCLEV=(ENVT,HIGH,HWS)           /* HWS ENVIRONMENT TRACE    */
TRCLEV=(HWSW,HIGH,HWS)           /* SERVER TO HWS TRACE      */
TRCLEV=(OTMA,HIGH,HWS)           /* HWS COMM DRIVER TRACE    */
TRCLEV=(HWSI,HIGH,HWS)           /* HWS TO IMS OTMA TRACE    */
TRCLEV=(TCPI,HIGH,HWS)           /* HWS COMM DRIVER TRACE    */
2.6.2 The IMS environment
We started a total of 22 Message Processing Regions.
We ran the IMS monitor to obtain performance data. The DFSVSMxx PROCLIB member contains the log data set definition information and specifies the allocation of OLDS and WADS, the number of buffers to be used for the OLDS, and the mode of operation of the OLDS (single or dual).
We changed the number of buffers to the settings shown in Example 2-8.
Example 2-8 Buffer settings
BROWSE IMS814B.PROCLIB(DFSVSMDC)
VSRBF=8192,100
VSRBF=4096,1000
VSRBF=2048,100
VSRBF=1024,100
VSRBF=512,5
Important: The IMS settings were not tuned for a high-performance production environment because that was not our goal. As you can see from the measurement charts (Figure 4-17 on page 106), the EIS utilization is only a small portion of the overall utilization, so even with a perfectly tuned EIS, the overall results would not change much.
IOBF=(8192,100,Y,Y)
IOBF=(2048,100,Y,Y)
SBONLINE,MAXSB=10
OPTIONS,BGWRT=YES,INSERT=SKP,DUMP=YES,DUMPIO=YES
OPTIONS,VSAMFIX=(BFR,IOB),VSAMPLS=LOCL
OPTIONS,DL/I=OUT,LOCK=OUT,DISP=OUT,SCHD=OUT,DLOG=OUT,LATC=ON,SUBS=ON
OPTIONS,STRG=ON
OLDSDEF OLDS=(00,01,02),BUFNO=050,MODE=DUAL
WADSDEF WADS=(0,1,8,9)
The DFSPBxxx member contains IMS control region execution parameters. Based on the IMS monitor reports, we changed buffer numbers as shown in Example 2-9.
Example 2-9 Buffer number changes
QBUF=0200,PSBW=20000,CSAPSB=5000, DLIPSB=30000,
2.6.3 IMS back-end transactions
The IMS back-end EIS environment was set up with three COBOL programs that process a message from the user. Each of the three programs received and sent a COMMAREA of varying sizes based on our performance test requirements.
The COBOL program names and the IMS transaction/PSB names are the same:
- Program TRADERS received and sent a COMMAREA of 500 bytes.
- Program TRADERM received and sent a COMMAREA of 5 KB.
- Program TRADERL received and sent a COMMAREA of 20 KB.
Each COBOL program was a copy of the IMS version of the original TRADERBL COBOL CICS program (see WebSphere for z/OS Connectivity Handbook, SG24-7064-01). The basic differences are in the input and output COMMAREA sizes and the movement of data to fill those COMMAREAs in working storage. Each program performs the same basic process:
- Check for the type of request that has been entered by the user.
- Get company information, check share value, or buy or sell stock.
– The get-company transaction process reads the IMS company data and allows the selection of up to four companies to review at one time.
– The share value transaction process reads the IMS customer data to determine what shares the customer has so that their current values can be evaluated and returned to the user.
– The buy and sell transaction process first checks buy or sell:
• For a customer who wants to buy shares, the program reads the IMS company data to determine whether the company exists. If it does, the program checks the IMS customer data to determine whether the buyer is a current customer; if not, it creates a customer entry. It then checks the shares that the customer holds, verifies that the company has the shares to sell, calculates the share value, increases the number of shares owned by the customer, updates the customer file with the new number of shares and their value, and returns this information to the user.
• For a customer who wants to sell shares, the program reads the IMS company data to determine whether the company exists. If it does, it checks the IMS customer data to determine whether that customer has the shares to sell, calculates the share value, decreases the number of shares owned by the customer, updates the customer file with the new number of shares and their value, and returns this information to the user.
2.7 SOAP
The SOAP connection to CICS is illustrated in Figure 2-5.
Figure 2-5 SOAP connection to CICS
(Figure 2-5 shows a SOAP application in CICS: requests arrive over XML/HTTP or XML/MQ transport and pass through the SOAP envelope processor, codepage conversion, and a message adapter, which extracts the request body for the Trader application and builds the response body.)
Refer to WebSphere for z/OS Connectivity Handbook, SG24-7064-01 for more information.
SOAP and XML (DOM) processing are storage-intensive, and small heap sizes can result in excessive Java garbage collection (GC). We found that a heap size of 512 MB was optimal for most test cases. You can monitor GC using the -verbose:gc Java option.
Figure 2-5 on page 25 also shows a traditional CICS transaction (the TRADER transaction) exposed as a Web service. This uses SOAP and requires XML parsing, which consumes more CPU time for more complex data structures.
Chapter 3. The Trader applications
In this chapter, we introduce the applications that were used to drive the workload that we ran during our tests. The Trader application was developed by IBM to model an electronic brokerage service, with a mix of servlets, JavaServer Pages (JSPs), and Enterprise JavaBeans (EJBs).
The actual workload involved sending three different data sizes to the CICS and IMS back-end transactions. The sizes picked were 500 bytes, 5 KB and 20 KB. We developed different COBOL CICS and COBOL IMS back-end transactions for each workload; however, the business logic was the same.
The applications were developed with standard WebSphere Studio Application Developer Integration Edition for the J2CA versions and WebSphere Studio Enterprise Developer for the SOAP versions.
3.1 Overview of Trader application
Trader is an example application that provides different incarnations that show how to use some of the connectors provided by WebSphere, WebSphere Studio Application Developer Integration Edition, and WebSphere Studio Enterprise Developer in an application. It is a simple application that mimics trading stocks in four different companies.
The Trader application consists of four major components (Figure 3-1):

- A back end
- A data store
- A middle tier that provides access to the back end
- A front end that is implemented as a Web application
Figure 3-1 Trader major components
The Web front end is a regular Java 2 Platform, Enterprise Edition (J2EE) Web module. The middle tier (back-end interface) is based on EJBs.
The following connectors are used:
- CICS ECI J2CA resource adapter: This provides direct access from the back-end interface to the back-end logic hosted in CICS TS, using CICS TG.
- IMS J2CA resource adapter: This provides direct access from the back-end interface to the back-end logic hosted in IMS TS.
- WebSphere MQ JMS Provider: This provides access to CICS transactions via the CICS WebSphere MQ DPL bridge.
- SOAP for CICS: This provides access to CICS transactions from WebSphere using SOAP.
We created a Trader application for each of the connectors that are used:
- TraderCICS (TRADERC)
- TraderIMS (TRADERI)
- TraderSOAP (TRADERS)
- TraderMQ (TRADERM)
3.1.1 Trader IMS and CICS applications and data stores
The Trader application consists of a transaction that can process the trade of shares (CICS or IMS) and a data store. The data stores for all Trader applications (CICS, IMS, MQ, DB2) shared the same basic structure (Figure 3-2).
Figure 3-2 Data store definitions of the Trader application
For TraderCICS, the CICS application used a VSAM file as the data store. For TraderIMS, the IMS application used DL/I HIDAM as the data store. TraderMQ and TraderSOAP used the same CICS application or transaction as TraderCICS.
For our purposes, we extended the COMMAREA to fit the performance requirements of the 500 byte, 5 KB, and 20 KB test cases. The COMMAREA can be seen in Example 3-1.
Example 3-1 COMMAREA description for the 500 byte case
01 COMMAREA-BUFFER.
   03 REQUEST-TYPE PIC X(15).
   03 RETURN-VALUE PIC X(02).
   03 USERID PIC X(60).
   03 USER-PASSWORD PIC X(10).
TRADER.COMPANY

Column name        Type       Len  Nulls
----------------   ---------  ---  -----
COMPANY            CHARACTER   20  No
SHARE_PRICE        REAL         4  Yes
UNIT_VALUE_7DAYS   REAL         4  Yes
UNIT_VALUE_6DAYS   REAL         4  Yes
UNIT_VALUE_5DAYS   REAL         4  Yes
UNIT_VALUE_4DAYS   REAL         4  Yes
UNIT_VALUE_3DAYS   REAL         4  Yes
UNIT_VALUE_2DAYS   REAL         4  Yes
UNIT_VALUE_1DAYS   REAL         4  Yes
COMM_COST_SELL     INTEGER      4  Yes
COMM_COST_BUY      INTEGER      4  Yes

TRADER.CUSTOMER

Column name        Type       Len  Nulls
----------------   ---------  ---  -----
CUSTOMER           CHARACTER   60  No
COMPANY            CHARACTER   20  No
NO_SHARES          INTEGER      4  Yes
   03 COMPANY-NAME PIC X(20).
   03 CORRELID PIC X(32).
   03 UNIT-SHARE-VALUES.
      05 UNIT-SHARE-PRICE PIC X(08).
      05 UNIT-VALUE-7-DAYS PIC X(08).
      05 UNIT-VALUE-6-DAYS PIC X(08).
      05 UNIT-VALUE-5-DAYS PIC X(08).
      05 UNIT-VALUE-4-DAYS PIC X(08).
      05 UNIT-VALUE-3-DAYS PIC X(08).
      05 UNIT-VALUE-2-DAYS PIC X(08).
      05 UNIT-VALUE-1-DAYS PIC X(08).
   03 COMMISSION-COST-SELL PIC X(03).
   03 COMMISSION-COST-BUY PIC X(03).
   03 SHARES.
      05 NO-OF-SHARES PIC X(04).
   03 SHARES-CONVERT REDEFINES SHARES.
      05 NO-OF-SHARES-DEC PIC 9(04).
   03 TOTAL-SHARE-VALUE PIC X(12).
   03 BUY-SELL1 PIC X(04).
   03 BUY-SELL-PRICE1 PIC X(08).
   03 BUY-SELL2 PIC X(04).
   03 BUY-SELL-PRICE2 PIC X(08).
   03 BUY-SELL3 PIC X(04).
   03 BUY-SELL-PRICE3 PIC X(08).
   03 BUY-SELL4 PIC X(04).
   03 BUY-SELL-PRICE4 PIC X(08).
   03 ALARM-CHANGE PIC X(03).
   03 UPDATE-BUY-SELL PIC X(01).
   03 FILLER PIC X(15).
   03 COMPANY-NAME-BUFFER.
      05 COMPANY-NAME-TAB OCCURS 4 TIMES
         INDEXED BY COMPANY-NAME-IDX PIC X(20).
   03 PERFTEST PIC 9(6).
   03 PERFTEST-CHAR-IDX-1 PIC S9(4) COMP.
   03 PEFRTEST-BUFFER.
      05 PERFTEST-CHAR OCCURS 8 TIMES.
         09 PERFTEST-CHAR PIC X(05).
         09 PERFTEST-INT PIC 9(05).
         09 PERFTEST-COMP PIC S9(04) COMP.
         09 PERFTEST-COMP3 PIC S9(05) COMP-3.
For the 5 KB case, the OCCURS 8 TIMES setting was modified to OCCURS 316 TIMES; for the 20 KB case, to OCCURS 1340 TIMES.
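These OCCURS counts are consistent with the target sizes. By our count from the copybook above (not stated in the Redpaper), each repeating entry occupies 15 bytes (X(05), 9(05), a two-byte COMP halfword, and a three-byte COMP-3 field), and the fixed fields total 380 bytes:

```python
ENTRY_BYTES = 5 + 5 + 2 + 3   # X(05) + 9(05) + S9(4) COMP + S9(5) COMP-3
FIXED_BYTES = 380             # sum of the non-repeating fields in the copybook

def commarea_size(occurs: int) -> int:
    """Total COMMAREA size in bytes for a given OCCURS count."""
    return FIXED_BYTES + occurs * ENTRY_BYTES

# OCCURS 8 -> 500 bytes, OCCURS 316 -> 5 KB, OCCURS 1340 -> 20 KB
```

With OCCURS 8 this gives exactly 500 bytes, and 316 and 1340 repetitions give exactly 5 KB and 20 KB (using 1 KB = 1024 bytes).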
3.1.2 SOAP considerationsThe SOAP test cases showed us that XML conversion is an expensive process that extends the transport size considerably. However, there is no 32 KB limitation for the COMMAREA, unlike the CICS TG case. We thought it would be valuable to have a less complex COMMAREA, so we ran a test with the simple COMMAREA shown in Example 3-2. This COMMAREA uses a 140 byte text field plus other numeric fields, compared to the 5 byte text field in the complex case.
Example 3-2 Simple 5KB COMMAREA for the SOAP test
01 COMMAREA-BUFFER.
   03 REQUEST-TYPE PIC X(15).
   03 RETURN-VALUE PIC X(02).
   03 USERID PIC X(60).
   03 USER-PASSWORD PIC X(10).
   03 COMPANY-NAME PIC X(20).
   03 CORRELID PIC X(32).
   03 UNIT-SHARE-VALUES.
      05 UNIT-SHARE-PRICE PIC X(08).
      05 UNIT-VALUE-7-DAYS PIC X(08).
      05 UNIT-VALUE-6-DAYS PIC X(08).
      05 UNIT-VALUE-5-DAYS PIC X(08).
      05 UNIT-VALUE-4-DAYS PIC X(08).
      05 UNIT-VALUE-3-DAYS PIC X(08).
      05 UNIT-VALUE-2-DAYS PIC X(08).
      05 UNIT-VALUE-1-DAYS PIC X(08).
   03 COMMISSION-COST-SELL PIC X(03).
   03 COMMISSION-COST-BUY PIC X(03).
   03 SHARES.
      05 NO-OF-SHARES PIC X(04).
   03 SHARES-CONVERT REDEFINES SHARES.
      05 NO-OF-SHARES-DEC PIC 9(04).
   03 TOTAL-SHARE-VALUE PIC X(12).
   03 BUY-SELL1 PIC X(04).
   03 BUY-SELL-PRICE1 PIC X(08).
   03 BUY-SELL2 PIC X(04).
   03 BUY-SELL-PRICE2 PIC X(08).
   03 BUY-SELL3 PIC X(04).
   03 BUY-SELL-PRICE3 PIC X(08).
   03 BUY-SELL4 PIC X(04).
   03 BUY-SELL-PRICE4 PIC X(08).
Important: These COMMAREAs are very complex. For small COMMAREAs, this is usually not a problem. However, for larger COMMAREAs, it is wise to group fields, reducing the complexity of the COMMAREA format used to pass data over the connector. See “Further improvement options” on page 32.
   03 ALARM-CHANGE PIC X(03).
   03 UPDATE-BUY-SELL PIC X(01).
   03 FILLER PIC X(15).
   03 COMPANY-NAME-BUFFER.
      05 COMPANY-NAME-TAB OCCURS 4 TIMES
         INDEXED BY COMPANY-NAME-IDX PIC X(20).
   03 PERFTEST PIC 9(06).
   03 PERFTEST-CHAR-9 PIC X(28).
   03 PERFTEST-CHAR-IDX-1 PIC S9(4) COMP.
   03 PEFRTEST-BUFFER.
      05 PERFTEST-CHAR OCCURS 31 TIMES.
         09 PERFTEST-CHAR PIC X(140).
         09 PERFTEST-INT PIC 9(07).
         09 PERFTEST-COMP PIC S9(04) COMP.
         09 PERFTEST-COMP3 PIC S9(05) COMP-3.
The number of data elements are the following:
- 500 B: 36 + (8 x 4) = 68
- 5 KB: 36 + (316 x 4) = 1300
- 5 KB simple: 36 + (31 x 4) = 160
The first two were complex COMMAREAs. We did not use null-truncated COMMAREAs.
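These element counts follow directly from the copybook layouts: 36 fixed elements plus four elements per OCCURS repetition. A quick check of the arithmetic above (our own sketch, not from the Redpaper):

```python
def soap_elements(occurs: int, fields_per_entry: int = 4, fixed: int = 36) -> int:
    """Number of SOAP data elements for a COMMAREA with the
    given OCCURS count: fixed fields plus the repeating subfields."""
    return fixed + occurs * fields_per_entry

# 500 B complex: 68; 5 KB complex: 1300; 5 KB simple: 160
```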
Further improvement options
We obtained very large transport sizes because of the large number of fields. For larger COMMAREAs, it is wise to group fields together. For example, in the COMMAREA definition for the 5 KB case, the four subfields of each repetition can be combined so that PERFTEST-CHAR is carried as one field in the SOAP message. This reduces the number of fields, the transport size, and the cost of the SOAP request. For the complex 5 KB case, this means 36 + 316 = 352 data elements, instead of 36 + (316 x 4) = 1300.
If you take this approach, the client application must be aware of the format of the portion of the COMMAREA that represents the aggregated fields.
The application must be able to generate and parse this byte array format. As a sizing guideline, you can calculate the transport size of the SOAP message. The SOAP message size is the sum of the following values:
- SOAP prefix and suffix: 287 bytes
- Operation name: 2 x the length of the program name + 23 bytes
- SOAP body elements: (2 x the average element name length + the average element data length + 5) x the number of elements
As an example, if the average element name length is 8 bytes and the COMMAREA size is 4 KB, Table 3-1 shows the message sizes based on the average data length of an element.
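The sizing guideline can be coded directly. The published figures imply a fixed operation-name term of about 30 bytes for the program used here; that constant is our inference from the numbers in Table 3-1, not stated in the text:

```python
SOAP_PREFIX_SUFFIX = 287  # bytes, per the guideline above

def soap_message_size(num_elements, avg_data_len, avg_name_len=8, op_name_term=30):
    """Estimated SOAP transport size: fixed envelope, operation name,
    and (open tag + close tag + data + 5 bytes overhead) per body element."""
    body = (2 * avg_name_len + avg_data_len + 5) * num_elements
    return SOAP_PREFIX_SUFFIX + op_name_term + body

# Reproduces Table 3-1 for an 8-byte average element name:
# (128, 32) -> 7101, (512, 8) -> 15165, (2048, 2) -> 47421, (4096, 1) -> 90429
```

Note how, for a fixed 4 KB of payload, splitting the data across more elements inflates the transport size, which is the motivation for grouping fields.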
Table 3-1 SOAP message size example

Number of elements   Average data length   SOAP message size (bytes)
128                  32                     7,101
512                   8                    15,165
2048                  2                    47,421
4096                  1                    90,429

3.1.3 Trader Web front-end user interface
All Trader Web modules provide the same basic user interaction (see Figure 3-3 on page 34). The entry page is the Login page. The Login page provides a field to enter a user name and one or more buttons that take you to the applications. The number of buttons depends on the actual Trader application.

For example, TraderMQ provides a choice of using a message-driven bean (MDB). The Login page also provides radio buttons that identify the workload (the amount of data) that the CICS or IMS transaction receives.

Every transaction, be it Buy, Sell, Quotes, or Companies, sends the same amount of data to the back-end CICS program. This is determined by the COMMAREA size that the user selects. These selections are available only at the time of login, and the user must log out to select a different size.
Attention: In our measurements, we used CICS TS 2.3. We used the WebSphere Enterprise Developer converter to generate an application handler. With CICS TS 3.1, the SOAP support is integrated into CICS and there are three options you can use:
- The user-written application handler
- The CICS Web Services Assistant
- WebSphere Enterprise Developer to generate the application handler
CICS TS 3.1 also introduces improvements to CICS SOAP performance.
Figure 3-3 Trader screen flow
If the login is successful, a company list appears on the next page (Figure 3-4).
Figure 3-4 Trader companies list
For each company, there are buttons for accessing quotes and holdings status and for buying and selling shares. This list is obtained from the back-end data store.
(Figure 3-3 shows the screen flow: from the Login page, Logon leads to the Companies list; from there, Buy, Sell, and Quotes lead to the Buy shares, Sell shares, and Quotes pages, each of which returns to the Companies list; Logoff returns to the Login page.)
Clicking the Logoff button takes you back to the Login page of the Trader application.
Clicking Buy or clicking Sell takes you to a page with a field where you can enter the number of shares that you want to buy or sell. It includes a button for starting the transaction. When the transaction is done, you see the Companies list again (see Figure 3-4 on page 34).
To see the result of a transaction, go to the Quotes page (Figure 3-5). You do this by clicking the Quotes button on the Companies list page (Figure 3-4).
Figure 3-5 Trader company quotes page
3.1.4 Trader interface architecture and implementation

The overall architecture of the Trader Web application is a classic model-view-controller (MVC) approach. The TraderServlet contains the control logic, providing a method for each user interaction. Because of time constraints, we decided not to use the Command pattern. We recommend that you use the Command pattern for applications that are larger than the Trader application. The Command pattern provides a better separation of control and command logic, which makes the application easier to maintain.
Figure 3-6 illustrates the architecture.
Figure 3-6 Main component diagram of the Trader Web application
The TraderProcessEJB contains the front-end business logic: buy, sell, getCompanies, and so forth. The implementation is divided into two parts, an interface (TraderProcess), which is used and seen by the TraderServlet, and the actual implementation, which depends on the connector used. The JSPs format the output for the browser.
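The interface/implementation split described above can be sketched as follows. Only the TraderProcess name comes from the text; the method names and the controller helper are hypothetical, and the point is simply that the servlet codes against the interface while each connector variant supplies its own implementation:

```java
// Sketch of the TraderProcess split: the servlet sees only the interface;
// each connector variant provides its own implementation behind it.
// Method and class names other than TraderProcess are illustrative only.
public class TraderProcessSketch {

    public interface TraderProcess {
        String getCompanies();
    }

    // One hypothetical connector-specific implementation.
    public static class TraderProcessECI implements TraderProcess {
        public String getCompanies() {
            // A real implementation would call the J2C ECI connector here.
            return "companies via CICS ECI";
        }
    }

    // The controller (the servlet's role) depends only on the interface,
    // so swapping connectors does not change the control logic.
    public static String handleCompaniesRequest(TraderProcess process) {
        return process.getCompanies();
    }

    public static void main(String[] args) {
        System.out.println(handleCompaniesRequest(new TraderProcessECI()));
    }
}
```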
To simplify the implementation of the same base application for different variations, we used the simplified class diagram in Figure 3-7 as a basis.
Figure 3-7 Trader class diagram (simplified overview)
The TraderSuperServlet contains all the control and command logic for the application. The only methods implemented by the actual servlets are a method for creating the TraderProcess instance (createTrader) and the servlet's init() method, which initializes the text strings used to construct the Uniform Resource Locators (URLs) in the application and displays the type of connector that is used.
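This division of labor between TraderSuperServlet and its subclasses is essentially the template-method pattern: the superclass owns the control flow, and each concrete servlet supplies only a factory method. A minimal sketch under that reading (no servlet API; serveLogin and the method bodies are invented for illustration, while TraderSuperServlet, TraderCICSECIServlet, and createTrader are names from the text):

```java
// Template-method sketch: the superclass carries the shared control logic
// and defers only the creation of the connector-specific piece (here a
// plain String stands in for the TraderProcess instance) to subclasses.
public class ServletTemplateSketch {

    public abstract static class TraderSuperServlet {
        // Subclasses implement only this factory method (createTrader in the text).
        protected abstract String createTrader();

        // All shared control logic lives in the superclass.
        public String serveLogin() {
            return "login page served by " + createTrader();
        }
    }

    public static class TraderCICSECIServlet extends TraderSuperServlet {
        protected String createTrader() {
            return "CICS ECI connector";
        }
    }

    public static void main(String[] args) {
        System.out.println(new TraderCICSECIServlet().serveLogin());
    }
}
```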
The TraderProcess implementations are specialized according to the connectors being used. The specific connector issues include:
- CICS ECI connector: This connector uses the CICS Transaction Gateway Java client (J2EE CICS ECI Connector). The code to access the J2C ECI connector is generated by WebSphere Studio Application Developer Integration Edition (Web Services Invocation Framework code). The generated code consists of a Web Service that is implemented as an EJB. It also consists of classes for setting and getting information in the ECI COMMAREA based on the object definitions that are used in the CICS programs.
- IMS connector: This connector uses IMS Connector for Java. The code to access IMS Connector for Java is generated by WebSphere Studio Application Developer Integration Edition (Web Services Invocation Framework code). The generated code consists of a Web Service that is implemented as an EJB. It also consists of classes for setting and getting information in the COMMAREA based on the object definitions used in the IMS programs.
- SOAP for CICS: This uses SOAP messages to talk to CICS. The messages are received by the SOAP for CICS server. The server then sends the SOAP message to a message adapter. The message adapter (a COBOL program generated by WebSphere Enterprise Developer) parses the messages and issues a CICS LINK to the appropriate CICS program. Separate message adapters are generated for each COMMAREA size.
- WebSphere MQ: Instead of going straight to CICS from the application, there is the option to use WebSphere MQ. The TraderMQ application sends a message with WebSphere MQ to the back-end business logic in CICS. The message receiver is the CICS MQ DPL bridge. When the transaction is completed in CICS, the reply is returned by WebSphere MQ to the Trader application in WebSphere. This is a quasi-synchronous front-end solution to any traditional business logic in CICS.
There is an option to use an MDB EJB as the receiver in the Trader Web application instead of a session EJB that queries the reply queue. When you use this option, select MDB on the TraderMQ login panel and start the message listener ports on the server.
When the MDB listeners are enabled, the normal TraderMQ scenarios do not work (non-MDB case), because the MDB listener picks up the messages from TRADER.CICS.REPLYQ regardless of whether the option was selected. The XA (two-phase commit) feature must be enabled in the WebSphere MQ connection factory for this to work.
If the message listeners are started when TraderMQ is run and the MDB option is not selected on the logon panel, TraderMQ waits for the message to return from CICS. However, it never receives the reply (you have to push the Abort button). The reason for this is that the MDB already picked up the message from the TRADER.CICS.REPLYQ reply queue and placed it in TRADER.PROCESSQ. Because MDB was not selected, the EJB business logic does not receive the message from TRADER.PROCESSQ.
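The failure mode described above — the MDB listener stealing replies meant for the synchronous path — is the classic competing-consumers situation: a queued message is delivered to exactly one of the consumers on a queue. A toy illustration with in-memory queues (plain Java collections, no JMS; the behavior, not the API, is what is being modeled):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of the TraderMQ reply-queue conflict: once an "MDB listener"
// drains TRADER.CICS.REPLYQ into TRADER.PROCESSQ, a synchronous receiver
// polling the reply queue finds nothing, so TraderMQ appears to hang.
public class ReplyQueueConflict {

    // Simulates the MDB listener: moves every reply to the process queue,
    // regardless of whether the MDB option was selected at login.
    public static void mdbDrain(Queue<String> replyQ, Queue<String> processQ) {
        String msg;
        while ((msg = replyQ.poll()) != null) {
            processQ.add(msg);
        }
    }

    // Simulates the non-MDB session EJB querying the reply queue.
    public static String synchronousReceive(Queue<String> replyQ) {
        return replyQ.poll(); // null means "no reply" -> the user must abort
    }

    public static void main(String[] args) {
        Queue<String> replyQ = new ArrayDeque<>();
        Queue<String> processQ = new ArrayDeque<>();
        replyQ.add("reply from CICS DPL bridge");

        mdbDrain(replyQ, processQ);               // the listener fires first
        String got = synchronousReceive(replyQ);  // too late: queue is empty
        System.out.println("synchronous receiver got: " + got);
        System.out.println("process queue holds " + processQ.size() + " message(s)");
    }
}
```

This is why the text says the non-MDB scenarios stop working while the listener ports are started: each reply exists once, and whichever consumer gets to the queue first removes it.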
3.1.5 Packaging

The different Trader applications are packaged in the following Enterprise Archive (EAR) files:
- TraderCICS.ear
- TraderIMS.ear
- TraderMQ.ear
- TraderSOAP.ear
Restriction: Trader was not implemented with the purpose of being a fully production-qualified application. Because of this, the screen flow is minimal and provides only what the tests require, and fault tolerance is limited. The application cannot be expected to run under concurrent use without flaws.
Also, not all resources were externalized using the “java:comp/env” context. This results in a lack of transactional control (the EJB transaction attribute is set to TX_NOT_SUPPORTED), and part of the implementation is not in compliance with best practices and recommended implementation patterns. However, the applications assist in verifying that a WebSphere Connector is set up properly. They act as an example of how a WebSphere Connector can be used in an application.
Figure 3-8 shows the Trader EAR file content.
Figure 3-8 Trader EAR file contents
The TraderLib.jar file is shared between all Trader applications. It contains the TraderSuperServlet, TraderProcess, and some utility classes.
The Trader Web module contains the servlet or servlets that are sub-classed from TraderSuperServlet and the JSPs used in the Web application. Because of the way J2EE 1.3 packaging works, the JSPs cannot be shared the way the TraderSuperServlet is shared through TraderLib.jar. Therefore, each Web module contains its own copy of the JSPs. The Logon.jsp is different for each Trader application, but the other JSPs are not.
The Trader EJB JAR contains the EJBs used by the servlets. In the case of the TraderDB, it also contains the EJBs used for communication with the database and the business logic implementation.
The Trader Connectors JAR contains the Web service that provides access to the J2EE connectors, including the EJB that connects to the J2C connector and the generated classes that are used for getting and setting data in the J2C transaction object or objects. In CICS, this is the ECI COMMAREA. In IMS, they are the InputHandler and OutputHandler objects.
Dependencies

Each Trader application depends on the availability of some external resources to be deployable and to work. All the resources are, if possible, specified by their Java Naming and Directory Interface (JNDI) name and a type.
For TraderMQ, the necessary external resources are:
- jms/TraderQCF: WebSphere MQ JMS provider connection factory

- jms/TraderCICSReqQ: JMS request destination for CICS

- jms/TraderCICSRepQ: JMS reply destination for CICS

- jms/TraderProcessQ: JMS postprocessing destination for the MDB case

- TraderMQCICSListener: MDB EJB listener (when a message arrives in a queue that is listened to, the corresponding MDB is executed)

- TraderMQIMSListener: MDB EJB listener
Depending on your local environment, you might also need to define a Java Authentication and Authorization Service (JAAS) user ID and password to be used by the WebSphere MQ DPL bridge.
If you want TraderMQ to work, you must set up WebSphere MQ for z/OS, the proper queues, and WebSphere MQ DPL bridge for CICS.
For TraderCICS, the necessary external resource is:
itso/cics/eci/j2ee/trader/TraderCICSECICommandCICSECIServiceTraderCICSECICommandCICSECIPort
This is an ECI J2C connector to CICS.
For TraderIMS, the necessary external resource is:
itso/ims/j2ee/trader/TraderIMSCommandIMSServiceTraderIMSCommandIMSPort
This is an IMS J2C connector to IMS.
Restriction: Not all of the necessary resources are externalized in the Web deployment and EJB deployment descriptors. You must look up some of the resources directly and not indirectly using the “java:comp/env” context. This also means that the possibility of setting up transaction control and redirection is limited.
Figure 3-9 shows an overview of the different connector paths that are implemented in the Trader applications.
Figure 3-9 Trader application connection overview
3.2 TraderCICS

We modified TraderCICS to handle three back-end CICS transactions with COMMAREA sizes of 500 bytes, 5 KB, and 20 KB. The application consists of JSPs, servlets, and both stateful and stateless session EJBs.
The responsibilities of the different components are:
- Servlets:
TraderSuperServlet is the superclass for the TraderCICSECIServlet. The servlet acts as a controller, takes requests from the user (JSPs), invokes the appropriate EJB to do business logic processing, and returns information back to the user.

Note: The TraderCICS application was originally created for earlier CICS TG IBM Redbooks. The latest version can be found in the WebSphere for z/OS Connectivity Handbook, SG24-7064-01.
TraderCICSECIServlet is a subclass of the TraderSuperServlet. The servlet is responsible for obtaining an instance of the remote interface of the stateful session EJB.
- EJBs:
The application consists of stateful and stateless session EJBs. The purpose of using stateful session EJBs (even though it is not necessary in this case) was to demonstrate that an existing application can be modified to talk to CICS. The application consists of three stateful session EJBs, each representing a different back-end CICS program. They are:
– TraderCICSECI20K: Contains logic that populates a 20 KB COMMAREA
– TraderCICSECI5K: Contains logic that populates a 5 KB COMMAREA
– TraderCICSECI500: Contains logic that populates a 500-byte COMMAREA
These stateful session EJBs then invoke corresponding stateless session EJBs that are generated by WebSphere Studio Application Developer Integration Edition and exposed as Web services. The three stateless session EJBs are Trader20KService, Trader5KService, and Trader500BService.
3.3 TraderSOAP

TraderSOAP was modified for our workload to handle two back-end CICS transactions with COMMAREA sizes of 500 bytes and 5 KB. The application consists of JSPs, servlets, and stateful session EJBs.
The responsibilities of the different components are:
- Servlets:
TraderSuperServlet is the superclass of the TraderCICSOAPServlet. The servlet acts as a controller, takes requests from the user (JSPs), invokes the appropriate EJB to do business logic processing (although the EJB in this case does not do any work; it is used here to keep variables in the performance measurements constant), issues the SOAP call to CICS transactions, and returns information back to the user.
TraderCICSOAPServlet is a subclass of the TraderSuperServlet. The servlet is responsible for obtaining an instance of the remote interface of the stateful session EJB.
Note: The TraderSOAP application was originally created for the WebSphere for z/OS Connectivity Handbook, SG24-7064-01.
- EJBs:
The application consists of stateful session EJBs. Our purpose in using stateful session EJBs (even though it was not necessary in this case) was to demonstrate that an existing application can be modified to talk to CICS using SOAP. The application consists of a single stateful session EJB, TraderCICSSOAP.
The SOAP calls are issued from the servlet. The code to issue these calls is generated with WebSphere Enterprise Developer; the development process is explained in the connectivity handbook. The code can be found in the Web project. The generated and supporting classes are:
- Trader500BProxy: This does the actual SOAP call and is generated for the 500-byte COMMAREA test case.

- Trader500BSvc: This is a Java class that was developed to provide indirection. The purpose of this class is to format the COMMAREA contents for the 500-byte COMMAREA test case.

- Trader5KProxy: This does the actual SOAP call and is generated for the 5 KB COMMAREA test case.

- Trader5KSvc: This is a Java class that was developed to provide indirection. The purpose of this class is to format the COMMAREA contents for the 5 KB COMMAREA test case.
3.4 TraderMQ

TraderMQ was modified for our workload to handle three back-end CICS transactions with COMMAREA sizes of 500 bytes, 5 KB, and 20 KB. The application consists of JSPs, servlets, stateful session EJBs, and an MDB.
The responsibilities of the different components are:
- Servlets:
TraderSuperServlet is the superclass of the TraderMQCICSServlet. The servlet acts as a controller, takes requests from the user (JSPs), and invokes the appropriate EJB to do business logic processing.
TraderMQCICSServlet is a subclass of the TraderSuperServlet. The servlet is responsible for obtaining an instance of the remote interface of the stateful session EJB.
Note: The TraderMQ application was originally created for the WebSphere for z/OS Connectivity Handbook, SG24-7064-01.
- EJBs:
The application consists of stateful and stateless session EJBs. Our purpose in using stateful session EJBs (even though it was not necessary in this case) was to demonstrate that an existing application can be modified to talk to CICS. The application consists of three stateful session EJBs, each representing a different back-end CICS program. They are:
– Trader20KMQCICS: Contains logic that populates a 20 KB COMMAREA
– Trader5KMQCICS: Contains logic that populates a 5 KB COMMAREA
– Trader500BMQCICS: Contains logic that populates a 500-byte COMMAREA
These stateful session EJBs then use the JMS API to talk to MQ on z/OS.
3.5 TraderIMS

TraderIMS was modified for our workload to handle three back-end IMS transactions with COMMAREA sizes of 500 bytes, 5 KB, and 20 KB. The application consists of JSPs, servlets, and stateful and stateless session EJBs.
The responsibilities of the different components are:
- Servlets:
TraderSuperServlet is the superclass of the TraderIMSECIServlet. The servlet acts as a controller, takes requests from the user (JSPs), invokes the appropriate EJB to do business logic processing, and returns information back to the user.
TraderIMSECIServlet is a subclass of the TraderSuperServlet. The servlet is responsible for obtaining an instance of the remote interface of the stateful session EJB.
- EJBs:
The application consists of stateful and stateless session EJBs. Our purpose for using stateful session EJBs (although not necessary in this case) was to demonstrate that an existing application can be modified to talk to IMS.
The application consists of three stateful session EJBs:
– TraderIMSECI20K: Contains logic that populates a 20 KB COMMAREA
– TraderIMSECI5K: Contains logic that populates a 5 KB COMMAREA
– TraderIMSECI500: Contains logic that populates a 500-byte COMMAREA
Note: The TraderIMS application was originally created for the WebSphere for z/OS Connectivity Handbook, SG24-7064-01.
Each EJB represents a different back-end IMS program.
These stateful session EJBs then invoke corresponding stateless session EJBs, which are generated by WebSphere Studio Application Developer Integration Edition and exposed as Web services. The three stateless session EJBs are Trader20KService, Trader5KService, and Trader500BService.
Chapter 4. Measurements and results
This chapter describes the results of our tests and is broken out into the following sections:
- The testing procedure

- Example of data captured with each test

- Detailed description of each metric that was extracted and compared in our final analysis

- The changes and adjustments that we made

- Results
Important: Keep the following information in mind as you read this chapter:
- Because all the testing was done in a short period of time, we were unable to refine our results enough to consider them official performance measurements. However, we think our experiences can be used to understand, at a high level, the performance trade-offs that are associated with the various connectivity options.

- We did some basic tuning. We tuned the number of servant regions, the number of threads, the placement of data on the Enterprise Storage Subsystem (no local copies), and the reload interval.

- During some of our performance measurements, the Java heap size was not properly tuned. As a result, we saw excessive CPU consumption in our WebSphere servant regions during some runs. Based on other measurements outside the scope of this paper, we believe that the excessive CPU time in the servant region can easily be eliminated with proper heap tuning. Therefore, our results have been adjusted accordingly.

- This was a point-in-time measurement; changes to the connectors happen all the time. For example:
  – CICS TG V6 has performance enhancements.
  – CICS TS V3.1 has performance improvements to the SOAP connector.

- Application design: We did not make any design changes to improve the measurements.

- The COMMAREA that we used in many of our measurements was complex; it had many repetitive fields with many data types. This might or might not represent your environment, so be careful when comparing our results to your environment. We did a special test for SOAP for CICS to show that reducing the COMMAREA complexity reduces the CPU time requirements considerably.

- The COMMAREA represents the actual application data that is being sent. The different connectivity methods and protocols add headers and layers to the application data, so it grows as it travels to the EIS and vice versa. The actual number of bytes transferred, which we call the transport size, varies by connectivity method and transport protocol.

- We intentionally did not focus on response time, because this metric might not be fair for this comparison. Our measured response times might have been affected by excessive simulated client load or by the processing used by garbage collection.
4.1 The testing procedure

The goal of each test was to drive total processor utilization to approximately 90%, sustain that throughput, and measure the average CPU consumption over that duration. We took multiple precautions to minimize noise from other processes and to prevent discrepancies in testing conditions, following these steps for each test:
1. Prepare EIS databases.
2. Swap the SMF data set and restart RMF to clear out unwanted data and to ensure that we do not force a swap of SMF during our test.
3. Restart EIS (CICS or IMS).
4. Restart WebSphere Application Server.
5. Initiate workload through WebSphere Studio Workload Simulator.
6. Start RMF Monitor III to view test results while the test is running. This measures overall CPU utilization while the workload is being adjusted to achieve as close to 90% utilization as possible, or to the point where CPU utilization stops increasing and enclave wait or queued time starts to grow. Pushing beyond these thresholds skews results because of inflated processor delays or delays in network communications. After the necessary workload has been achieved and the test is running, RMF Monitor III is not used, to eliminate the processing that it uses when creating reports.
7. Sustain the test for approximately 20 minutes.
8. Stop the workload.
9. Record the data from WebSphere Studio Workload Simulator.
10. Stop RMF and swap out the SMF data set, forcing it to write to a generation data group (GDG).

11. Run an RMF Monitor I report for a 10-minute period that falls in the middle of the sustained test.

12. Record the results and save all data.
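The ramp-up rule in step 6 — keep adding clients while measured CPU still rises and stays below the 90% target — can be sketched as a simple loop over a measured utilization curve. The saturating curve below is invented purely for illustration; in the real runs the value came from RMF Monitor III, not a formula:

```java
// Sketch of the workload ramp-up rule from step 6: keep adding simulated
// clients while measured CPU utilization still rises and is below 90%.
// The utilization curve is a made-up saturating function, not measured data.
public class RampUp {

    // Hypothetical measured CPU busy percentage for a given client count.
    public static double cpuBusyPercent(int clients) {
        return 95.0 * clients / (clients + 200.0); // saturates below 95%
    }

    // Adds 'step' clients at a time; stops when the target is reached or
    // the curve flattens (utilization stops increasing meaningfully).
    public static int rampClients(int step) {
        int clients = 0;
        double last = 0.0;
        while (true) {
            double now = cpuBusyPercent(clients + step);
            if (now >= 90.0 || now - last < 0.1) {
                return clients; // target reached or curve has flattened
            }
            clients += step;
            last = now;
        }
    }

    public static void main(String[] args) {
        int c = rampClients(50);
        System.out.printf("stopped at %d clients, %.1f%% busy%n", c, cpuBusyPercent(c));
    }
}
```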
4.1.1 The test script

The test script was the same for every test case. Each simulated client goes through the following interactions with the Trader application:
1. Log in to the Trader application and show a list of available companies. From the Web server perspective, this includes accepting a POST request for a page and a GET request for an image.
2. Obtain a quote for a company.
3. Show a list of available companies.
4. Obtain a quote for a company.
5. Show a list of available companies.
6. Choose a stock to buy. This includes a POST request for a page and a GET request for an image.
7. Buy 10 shares of the stock and return to the list of available companies.
8. Choose a stock to sell (the same stock that was bought). This includes a POST request for a page and a GET request for an image.
9. Sell 10 shares of the stock and return to the list of available companies.
10. Obtain a quote for a company.
11.Show a list of available companies.
12. Log out of the Trader application.
4.1.2 RMF Monitor III

The RMF Monitor III utility was used with each test case for the initial tuning of the environment and for finding our testing threshold.
4.2 Recorded data

Data was recorded from WebSphere Studio Workload Simulator and from RMF Monitor I reports.
4.2.1 WebSphere Studio Workload Simulator

We configured WebSphere Studio Workload Simulator to record data every 5 seconds. By default, the tool displays summary data in its console every 5 minutes. The console data was captured but not used in our final analysis. The WebSphere Studio Workload Simulator engine works by instantiating simulated clients after a delay; our delay was set to 50 ms. As the test ran, the test administrator increased the number of simulated clients until the goal of 90% overall CPU utilization was achieved. The data reported in the WebSphere Studio Workload Simulator console averages the reported values for the duration of the test. Example 4-1 shows this output.
Example 4-1 WebSphere Studio Workload Simulator console output
11/23/2004 15:38:11 =========================Cumulative Statistics==========================
11/23/2004 15:38:11 IWL0038I Run time = 00:20:03
11/23/2004 15:38:11 IWL0007I Clients completed = 0/950
11/23/2004 15:38:11 IWL0059I Page elements = 395732
11/23/2004 15:38:11 IWL0060I Page element throughput = 328.855 /s
11/23/2004 15:38:11 IWL0059I Transactions = 0
11/23/2004 15:38:11 IWL0060I Transaction throughput = 0.000 /s
11/23/2004 15:38:11 IWL0059I Network I/O errors = 0
11/23/2004 15:38:11 IWL0059I Web server errors = 0
11/23/2004 15:38:11 IWL0059I Num of pages retrieved = 316089
11/23/2004 15:38:11 IWL0060I Page throughput = 262.672 /s
11/23/2004 15:38:11 IWL0060I HTTP data read = 1502.262 MB
11/23/2004 15:38:11 IWL0060I HTTP data written = 269.720 MB
11/23/2004 15:38:11 IWL0060I HTTP avg. page element response time = 1.180
11/23/2004 15:38:11 IWL0059I HTTP avg. page element response time = 0 (with all clients concurrently running)
11/23/2004 15:38:11 ========================================================================
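The throughput figures in the console output are simply the counts divided by elapsed run time, which makes a useful sanity check when reading captured logs. Using the numbers from Example 4-1 (run time 00:20:03, that is, 1,203 seconds); the small differences from the reported values come from the run time being rounded to the second:

```java
// Sanity-check the WebSphere Studio Workload Simulator console figures:
// throughput should equal the element/page counts divided by the run time.
public class ConsoleCheck {
    public static final int RUN_SECONDS = 20 * 60 + 3;   // 00:20:03
    public static final long PAGE_ELEMENTS = 395732;     // IWL0059I Page elements
    public static final long PAGES = 316089;             // IWL0059I Num of pages retrieved

    public static double throughputPerSecond(long count, int seconds) {
        return (double) count / seconds;
    }

    public static void main(String[] args) {
        System.out.printf("element throughput: %.3f/s (reported 328.855/s)%n",
                throughputPerSecond(PAGE_ELEMENTS, RUN_SECONDS));
        System.out.printf("page throughput:    %.3f/s (reported 262.672/s)%n",
                throughputPerSecond(PAGES, RUN_SECONDS));
    }
}
```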
WebSphere Studio Workload Simulator features a utility for graphing this data. We plotted average response time against time (Figure 4-1).
Figure 4-1 WebSphere Studio Workload Simulator graph
Due to the fine granularity of the data being captured, the graphs for each test oscillated too much to provide a single average response time. To calculate this mean, the average response time recorded every 5 seconds was averaged over our exact 10-minute test interval in a spreadsheet. Effectively, we took the mean of the graph in Figure 4-1 over a refined time range.
4.2.2 RMF Monitor I

RMF Monitor I reports were run over a short range of time that fell within the 20-minute duration of the test. SMF was set up to record data every 5 minutes, and we summarized our results by reporting over a 10-minute interval. The Job Control Language (JCL) job is shown in Example 4-2. The classes included in the reports are all reporting classes beginning with WAS, CICS4, ERWCTG, MQ4B, or IMS48, and the WAS48 service class.
Example 4-2 RMF Monitor I report JCL job
//*
//******************************************************************
//*
//* CREATED VIA ISPF INTERFACE
//* z/OS V1R5 RMF
//*
//******************************************************************
//*
//******************************************************************
//*
//* RMF SORT PROCESSING
//*
//******************************************************************
//RMFSORT  EXEC PGM=SORT,REGION=0M
//SORTIN   DD DISP=SHR,DSN=SMFDATA.RMFRECS(0)
//SORTOUT  DD DISP=(NEW,PASS),UNIT=SYSDA,SPACE=(CYL,(10,10))
//SORTWK01 DD DISP=(NEW,DELETE),UNIT=SYSDA,SPACE=(CYL,(10,10))
//SORTWK02 DD DISP=(NEW,DELETE),UNIT=SYSDA,SPACE=(CYL,(10,10))
//SORTWK03 DD DISP=(NEW,DELETE),UNIT=SYSDA,SPACE=(CYL,(10,10))
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SYSIN    DD *
  SORT FIELDS=(11,4,CH,A,7,4,CH,A),EQUALS
  MODS E15=(ERBPPE15,36000,,N),E35=(ERBPPE35,3000,,N)
//******************************************************************
//*
Attention: Response time is not the primary metric captured for this paper. Our measured response times might have been affected by excessive simulated client load or by the processing used by garbage collection.
//* RMF POSTPROCESSING
//*
//******************************************************************
//RMFPP    EXEC PGM=ERBRMFPP,REGION=0M
//MFPINPUT DD DISP=(OLD,DELETE),DSN=*.RMFSORT.SORTOUT
//MFPMSGDS DD SYSOUT=*
//******************************************************************
//*
//* RMF POSTPROCESSING OPTIONS GENERATED FROM:
//* 1. PROFILE DATA SET: 'WATSON.SG246365.JCL(RMF)'
//* 2. POSTPROCESSOR OPTIONS PANEL INPUT
//*
//******************************************************************
//SYSIN    DD *
  SYSRPTS (WLMGL(RCLASS(WAS*)),WLMGL(RCLASS(CICS4*)),WLMGL(SCPER(WAS48)),
  WLMGL(RCLASS(ERWCTG*)),WLMGL(RCLASS(MQ4B*)),WLMGL(RCLASS(IMS48*)))
  DATE(11232004,11232004)
  RTOD(1524,1546)
  DINTV(0005)
  SUMMARY(INT,TOT)
  SYSOUT(A)
  OVERVIEW(REPORT)
  SYSID(SC48)
RMF Monitor I produces a summary report and a workload activity report based on class. In our analysis, we used the summary report for the overall CPU usage of the LPAR being tested. Because this data was captured and reported in 5-minute intervals, we averaged the CPU busy time for two intervals. The data highlighted in Example 4-3 shows the values for a test that ran from 15:35 to 15:45.
Example 4-3 RMF Monitor I summary report
1                      R M F  S U M M A R Y  R E P O R T                              PAGE 002
     z/OS V1R5          SYSTEM ID SC48       START 11/23/2004-13.55.00  INTERVAL 00.04.25
                        RPT VERSION V1R5 RMF END   11/23/2004-15.48.01  CYCLE 1.000 SECONDS
0 NUMBER OF INTERVALS 14          TOTAL LENGTH OF INTERVALS 01.02.03
- DATE   TIME      INT    CPU  DASD  DASD   JOB JOB  TSO TSO  STC STC  ASCH ASCH  OMVS OMVS  SWAP  DEMAND
  MM/DD  HH.MM.SS  MM.SS  BUSY RESP  RATE   MAX AVE  MAX AVE  MAX AVE  MAX  AVE   MAX  AVE   RATE  PAGING
0 11/23  13.55.00  00.10   2.9  4.7   73.6    0   1    1   1   93  93    0    1     6    6   0.00   0.00
  11/23  14.46.02  03.57   6.0  6.7   19.7    0   1    1   1   94  93    0    1     6    6   0.00   0.06
  11/23  14.50.00  05.00  31.2  1.2  243.1    0   1    1   1   94  94    0    1     6    6   0.00   0.01
  11/23  14.55.00  04.59  85.1  0.8  740.3    0   1    1   1   94  94    0    1     6    6   0.00   0.00
0 11/23  15.00.00  05.00  81.6  0.9  723.4    0   1    1   1   94  94    0    1     6    6   0.00   0.00
  11/23  15.05.00  05.00  87.5  0.8  818.3    0   1    1   1   94  94    0    1     6    6   0.00   0.01
  11/23  15.10.00  04.59  73.2  0.9  683.2    0   1    1   1   94  94    0    1     6    6   0.00   0.03
  11/23  15.15.00  04.59  37.5  1.6  251.2    0   1    1   1   94  94    0    1     7    6   0.00   0.03
0 11/23  15.20.00  05.00  91.4  0.9  781.6    0   1    1   1   94  94    0    1     6    6   0.00   0.02
  11/23  15.25.00  04.59  90.0  0.8  770.9    0   1    1   1   94  94    0    1     6    6   0.00   0.00
  11/23  15.30.00  05.00  89.9  0.8  770.0    0   1    1   1   94  94    0    1     6    6   0.00   0.02
  11/23  15.35.00  04.59  91.5  0.9  795.1    0   1    1   1   94  94    0    1     6    6   0.00   0.01
0 11/23  15.40.00  05.00  88.8  0.9  780.7    0   1    1   1   94  94    0    1     6    6   0.00   0.00
  11/23  15.45.00  03.01  63.5  1.0  553.8    0   1    1   1   94  91    0    1     6    6   0.00   0.02
- TOTAL/AVERAGE           71.7  0.9  620.3    0   1    1   1   94  94    0    1     7    6   0.00   0.01
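For the test shown in Example 4-3, the 15.35 and 15.40 intervals (91.5% and 88.8% CPU busy) cover the 15:35 to 15:45 reporting window, so the utilization reported for the run is simply their mean:

```java
// Average the CPU BUSY figures of the two 5-minute RMF intervals that fall
// inside the 10-minute reporting window (15:35-15:45 in Example 4-3).
public class IntervalAverage {
    public static double averageBusy(double... intervals) {
        double sum = 0.0;
        for (double b : intervals) {
            sum += b;
        }
        return sum / intervals.length;
    }

    public static void main(String[] args) {
        // 91.5 and 88.8 are the CPU BUSY values for 15.35.00 and 15.40.00.
        System.out.printf("window average: %.2f%% busy%n", averageBusy(91.5, 88.8));
    }
}
```

The result, just over 90%, is what the ramp-up procedure in 4.1 was targeting.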
Some of the reporting classes and services that were set up for the Workload Activity report are shown in Example 4-4. The values highlighted in Example 4-4 were the metrics used in our final analysis. These values were captured and compared for each of the test cases. Detailed descriptions of each of these fields can be found in 4.3, “Metrics in our final analysis” on page 56.
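One of the highlighted metrics, APPL% CP, can be rederived from the other fields: it is the CPU service time consumed, divided by the interval length, expressed as a percentage of one processor. For the WAS48 service class in Example 4-4 the TCB time is 1563.1 seconds over a roughly 600-second (10-minute) interval; because SRB, RCT, IIT, and HST are all 0.0 there, TCB alone reproduces the figure. The exact interval length used by RMF may differ slightly, so this is an approximation:

```java
// Rederive APPL% CP for the WAS48 service class in Example 4-4:
// CPU service seconds consumed divided by the report interval, as a
// percentage of a single processor (values above 100% mean more than
// one engine's worth of capacity was used).
public class ApplPercent {
    public static double applPercentCp(double cpuSeconds, double intervalSeconds) {
        return 100.0 * cpuSeconds / intervalSeconds;
    }

    public static void main(String[] args) {
        // TCB = 1563.1s over ~600s -> about 260.5%, i.e., roughly 2.6
        // processors busy on behalf of the WAS48 service class.
        System.out.printf("APPL%% CP = %.1f%n", applPercentCp(1563.1, 600.0));
    }
}
```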
Example 4-4 RMF Monitor I Workload Activity report
1 W O R K L O A D A C T I V I T Y PAGE 1 z/OS V1R5 SYSPLEX WTSCPLX1 START 11/23/2004-15.35.00 INTERVAL 000.09.59 MODE = GOAL RPT VERSION V1R5 RMF END 11/23/2004-15.45.00
POLICY ACTIVATION DATE/TIME 11/22/2004 16.05.07
------------------------------------------------------------------------------------------------------------ SERVICE CLASS PERIODS
REPORT BY: POLICY=SPSTPC WORKLOAD=WAS SERVICE CLASS=WAS48 RESOURCE GROUP=*NONE PERIOD=1 IMPORTANCE=2 CRITICAL =CPU
TRANSACTIONS TRANS.-TIME HHH.MM.SS.TTT --DASD I/O-- ---SERVICE---- --SERVICE TIMES-- PAGE-IN RATES ----STORAGE---- AVG 35.60 ACTUAL 1.145 SSCHRT 0.1 IOC 0 TCB 1563.1 SINGLE 0.0 AVG 0.00 MPL 35.60 EXECUTION 106 RESP 1.5 CPU 305738K SRB 0.0 BLOCK 0.0 TOTAL 0.00 ENDED 201350 QUEUED 1.039 CONN 1.3 MSO 0 RCT 0.0 SHARED 0.0 CENTRAL 0.00 END/S 335.58 R/S AFFINITY 0 DISC 0.0 SRB 0 IIT 0.0 HSP 0.0 EXPAND 0.00 #SWAPS 0 INELIGIBLE 0 Q+PEND 0.1 TOT 305738K HST 0.0 HSP MISS 0.0 EXCTD 0 CONVERSION 0 IOSQ 0.0 /SEC 509565 IFA N/A EXP SNGL 0.0 SHARED 0.00 AVG ENC 35.60 STD DEV 762 APPL% CP 260.5 EXP BLK 0.0 REM ENC 0.00 ABSRPTN 14K APPL% IFACP 0.0 EXP SHR 0.0 MS ENC 0.00 TRX SERV 14K APPL% IFA N/A
RESP -------------------------------- STATE SAMPLES BREAKDOWN (%) ------------------------------- ------STATE------ SUB P TIME --ACTIVE-- READY IDLE -----------------------------WAITING FOR----------------------------- SWITCHED SAMPL(%) TYPE (%) SUB APPL TYP3 LOCAL SYSPL REMOT CB BTE 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 CB EXE 9.2 0.3 97.4 0.0 0.0 2.3 0.0 0.0 0.0
GOAL: RESPONSE TIME 000.00.00.350 FOR 90%
RESPONSE TIME EX PERF AVG --- USING% --- ---------- EXECUTION DELAYS % --------- ---DLY%-- -CRYPTO%- ---CNT%-- % SYSTEM ACTUAL% VEL% INDX ADRSP CPU IFA I/O TOT CPU UNKN IDLE USG DLY USG DLY QUIE
SC48 20.1 12.1 **** 382.6 0.7 N/A 0.0 5.1 5.1 94.2 0.0 0.0 0.0 0.0 0.0 0.0
----------RESPONSE TIME DISTRIBUTION----------
   ----TIME----   --NUMBER OF TRANSACTIONS--   -------PERCENT-------
   HH.MM.SS.TTT   CUM TOTAL      IN BUCKET     CUM TOTAL   IN BUCKET
<  00.00.00.175       27944          27944         13.9        13.9
<= 00.00.00.210       30596           2652         15.2         1.3
<= 00.00.00.245       33170           2574         16.5         1.3
<= 00.00.00.280       35733           2563         17.7         1.3
<= 00.00.00.315       38076           2343         18.9         1.2
<= 00.00.00.350       40474           2398         20.1         1.2
<= 00.00.00.385       42455           1981         21.1         1.0
<= 00.00.00.420       44450           1995         22.1         1.0
<= 00.00.00.455       46595           2145         23.1         1.1
<= 00.00.00.490       48712           2117         24.2         1.1
<= 00.00.00.525       51272           2560         25.5         1.3
<= 00.00.00.700       64915          13643         32.2         6.8
<= 00.00.01.400        128K          63089         63.6        31.3
>  00.00.01.400        201K          73346        100          36.4

W O R K L O A D  A C T I V I T Y                                                         PAGE 2
z/OS V1R5   SYSPLEX WTSCPLX1   START 11/23/2004-15.35.00   INTERVAL 000.09.59   MODE = GOAL
            RPT VERSION V1R5 RMF   END 11/23/2004-15.45.00
POLICY ACTIVATION DATE/TIME 11/22/2004 16.05.07
------------------------------------------------------------------------------------------------------------ REPORT CLASS(ES)
REPORT BY: POLICY=SPSTPC REPORT CLASS=CICS48 DESCRIPTION =cics sc48
TRANSACTIONS     TRANS.-TIME HHH.MM.SS.TTT   --DASD I/O--    ---SERVICE----    --SERVICE TIMES--    PAGE-IN RATES     ----STORAGE----
AVG       1.00   ACTUAL                  0   SSCHRT  763.3   IOC      2289K    TCB          123.2   SINGLE       0.0  AVG     10286.6
MPL       1.00   EXECUTION               0   RESP      0.5   CPU     24090K    SRB           12.9   BLOCK        0.0  TOTAL   10286.6
ENDED        0   QUEUED                  0   CONN      0.2   MSO    496078K    RCT            0.0   SHARED       0.0  CENTRAL 10286.6
END/S     0.00   R/S AFFINITY            0   DISC      0.0   SRB      2520K    IIT            2.7   HSP          0.0  EXPAND     0.00
#SWAPS       0   INELIGIBLE              0   Q+PEND    0.3   TOT    524977K    HST            0.0   HSP MISS     0.0
EXCTD        0   CONVERSION              0   IOSQ      0.0   /SEC    874963    IFA            N/A   EXP SNGL     0.0  SHARED     0.00
AVG ENC   0.00   STD DEV                 0                                     APPL% CP      23.1   EXP BLK      0.0
REM ENC   0.00                                               ABSRPTN   875K    APPL% IFACP    0.0   EXP SHR      0.0
MS ENC    0.00                                               TRX SERV  875K    APPL% IFA      N/A
REPORT BY: POLICY=SPSTPC REPORT CLASS=CICS48E DESCRIPTION =cics enclavesc48
TRANSACTIONS     TRANS.-TIME HHH.MM.SS.TTT
AVG       0.00   ACTUAL                 23
MPL       0.00   EXECUTION               0
ENDED   147577   QUEUED                  0
END/S   245.96   R/S AFFINITY            0
#SWAPS       0   INELIGIBLE              0
EXCTD        0   CONVERSION              0
AVG ENC   0.00   STD DEV                45
REM ENC   0.00
MS ENC    0.00

W O R K L O A D  A C T I V I T Y                                                         PAGE 5
z/OS V1R5   SYSPLEX WTSCPLX1   START 11/23/2004-15.35.00   INTERVAL 000.09.59   MODE = GOAL
            RPT VERSION V1R5 RMF   END 11/23/2004-15.45.00
POLICY ACTIVATION DATE/TIME 11/22/2004 16.05.07
REPORT BY: POLICY=SPSTPC REPORT CLASS=WAS48C DESCRIPTION =was control region sc48
TRANSACTIONS     TRANS.-TIME HHH.MM.SS.TTT   --DASD I/O--    ---SERVICE----    --SERVICE TIMES--    PAGE-IN RATES     ----STORAGE----
AVG       1.00   ACTUAL                  0   SSCHRT    0.0   IOC          0    TCB           68.6   SINGLE       0.0  AVG     40705.0
MPL       1.00   EXECUTION               0   RESP      0.0   CPU     13416K    SRB            5.2   BLOCK        0.0  TOTAL   40705.0
ENDED        0   QUEUED                  0   CONN      0.0   MSO      1094M    RCT            0.0   SHARED       0.0  CENTRAL 40705.0
END/S     0.00   R/S AFFINITY            0   DISC      0.0   SRB      1023K    IIT            0.0   HSP          0.0  EXPAND     0.00
#SWAPS       0   INELIGIBLE              0   Q+PEND    0.0   TOT      1108M    HST            0.0   HSP MISS     0.0
EXCTD        0   CONVERSION              0   IOSQ      0.0   /SEC     1847K    IFA            N/A   EXP SNGL     0.0  SHARED    15.00
AVG ENC   0.00   STD DEV                 0                                     APPL% CP      12.3   EXP BLK      0.0
REM ENC   0.00                                               ABSRPTN  1847K    APPL% IFACP    0.0   EXP SHR      0.0
MS ENC    0.00                                               TRX SERV 1847K    APPL% IFA      N/A
REPORT BY: POLICY=SPSTPC REPORT CLASS=WAS48E DESCRIPTION =was enclaves sc48
TRANSACTIONS     TRANS.-TIME HHH.MM.SS.TTT   --DASD I/O--    ---SERVICE----    --SERVICE TIMES--    PAGE-IN RATES     ----STORAGE----
AVG      35.60   ACTUAL              1.145   SSCHRT    0.1   IOC          0    TCB         1563.1   SINGLE       0.0  AVG        0.00
MPL      35.60   EXECUTION             106   RESP      1.5   CPU    305738K    SRB            0.0   BLOCK        0.0  TOTAL      0.00
ENDED   201350   QUEUED              1.039   CONN      1.3   MSO          0    RCT            0.0   SHARED       0.0  CENTRAL    0.00
END/S   335.58   R/S AFFINITY            0   DISC      0.0   SRB          0    IIT            0.0   HSP          0.0  EXPAND     0.00
#SWAPS       0   INELIGIBLE              0   Q+PEND    0.1   TOT    305738K    HST            0.0   HSP MISS     0.0
EXCTD        0   CONVERSION              0   IOSQ      0.0   /SEC    509565    IFA            N/A   EXP SNGL     0.0  SHARED     0.00
AVG ENC  35.60   STD DEV               762                                     APPL% CP     260.5   EXP BLK      0.0
REM ENC   0.00                                               ABSRPTN    14K    APPL% IFACP    0.0   EXP SHR      0.0
MS ENC    0.00                                               TRX SERV   14K    APPL% IFA      N/A
REPORT BY: POLICY=SPSTPC REPORT CLASS=WAS48S DESCRIPTION =was servant sc48
TRANSACTIONS     TRANS.-TIME HHH.MM.SS.TTT   --DASD I/O--    ---SERVICE----    --SERVICE TIMES--    PAGE-IN RATES     ----STORAGE----
AVG       1.00   ACTUAL           1.13.212   SSCHRT    0.0   IOC     30209K    TCB          224.8   SINGLE       0.0  AVG      190874
MPL       1.00   EXECUTION        1.13.212   RESP     77.0   CPU     43971K    SRB            5.4   BLOCK        0.0  TOTAL    190874
ENDED        8   QUEUED                  0   CONN      1.0   MSO     16804M    RCT            0.0   SHARED       0.0  CENTRAL  190874
END/S     0.01   R/S AFFINITY            0   DISC     76.0   SRB      1047K    IIT            0.0   HSP          0.0  EXPAND     0.00
#SWAPS       4   INELIGIBLE              0   Q+PEND    0.0   TOT     16879M    HST            0.0   HSP MISS     0.0
EXCTD        0   CONVERSION              0   IOSQ      0.0   /SEC    28132K    IFA            N/A   EXP SNGL     0.0  SHARED    26.00
AVG ENC   0.00   STD DEV          1.16.445                                     APPL% CP      38.4   EXP BLK      0.0
REM ENC   0.00                                               ABSRPTN    28M    APPL% IFACP    0.0   EXP SHR      0.0
MS ENC    0.00                                               TRX SERV   28M    APPL% IFA      N/A
Because this was a test of the local connection to CICS, only these classes are seen in this report:
- WAS48: Service class that the WebSphere enclaves run in
- CICS48: Report class for the CICS region
- CICS48E: Report class for all CICS transactions
- WAS48C: Report class for the WebSphere controller
- WAS48S: Report class for all WebSphere servants
- WAS48E: Report class for WebSphere enclaves
- WAS49C: Report class for the WebSphere controller in the remote cases
- WAS49S: Report class for all WebSphere servants in the remote cases
- WAS49E: Report class for WebSphere enclaves in the remote cases
- IMS48C: Report class for the IMS control region
- IMS48W: Report class for the IMS message processing region
- IMS48X: Report class for IMS Connect
- ERWFTG1: Report class for CICS Transaction Gateway
- MQ4BC: Report class for MQ channels
- MQ4BM: Report class for the MQ master
4.3 Metrics in our final analysis

In our final analysis, we distilled one main metric from WebSphere Studio Workload Simulator and as many as five metrics from each reporting class in the Workload Activity report, as follows:
- Overall CPU utilization for each LPAR

Unlike the application percent value discussed later, this value is measured against a total of 100%. It includes some noise from background processes and other applications, which we minimized. This value can be found in the RMF Summary report under CPU Busy, as seen in Example 4-3 on page 53.
- Demand paging for each LPAR

The average number of demand pages per second over the duration of the test was near 0 for all tests.
- Average end-to-end response time

As described in 4.2.1, “WebSphere Studio Workload Simulator” on page 50, the mean of the average response times that were captured by WebSphere Studio Workload Simulator over the time range is calculated as the average end-to-end response time. This value takes into account all processing, wait time, and network delays, including the network communication from the workload engine to WebSphere Application Server. In our tests, it is possible that response time was affected by client load or by the processing used for garbage collection (GC).
- WebSphere transaction time (actual)

The average time it took for a job in WebSphere to complete; this value is the sum of execution time and queued time. It is similar to the average end-to-end response time, but excludes the network communication between the workload engine and WebSphere. This value can be found in the Workload Activity report under TRANS.-TIME for report class WAS48E (or WAS49E in the remote case).
- WebSphere transaction time (execution)

The average time it took for ended jobs to complete, measured from when the job becomes active until completion. It includes the time that WebSphere spends processing, the time for the transaction to go through the connector and execute in the EIS, and all associated communication time. This value can be found in the Workload Activity report under TRANS.-TIME for report class WAS48E (or WAS49E in the remote case).
- WebSphere transaction time (queued)

The average time that jobs were delayed while waiting to be activated. We monitored this value during testing, maintaining a balance between having enough work queued to sustain high CPU utilization and keeping this value under approximately 1 second. This value is in the Workload Activity report under TRANS.-TIME for report class WAS48E (or WAS49E in the remote case).
- WebSphere transaction rate
The number of WebSphere transactions that are completed per second of the test duration is the WebSphere transaction rate. A WebSphere transaction is defined as any page element served, including images. The test script outlined in 4.1.1, “The test script” on page 49 details when images are served. This value is labeled END/S in the Workload Activity report under TRANSACTIONS for the WAS48E or WAS49E report classes in the remote case.
- CICS transaction rate

The number of CICS transactions completed per second of the test duration is the CICS transaction rate. Note that there is no one-to-one correlation between WebSphere transactions and CICS transactions, so in our case, this number was smaller. This value is labeled END/S in the Workload Activity report. In our test script, the following actions were counted as CICS transactions:

- Show a list of available companies
- Obtain a quote for a company
- Buy a stock
- Sell a stock
- Application percent for all reporting classes

This metric is the percentage of the time of one processor that was used by the class. On the four-way LPARs that we used for testing, the maximum application percent is 400%. This value is labeled APPL% CP in the Workload Activity report under TRANSACTIONS (for example, for the CICS48E report class). For each test, the application percent for each report class was captured separately. This identifies where the most processor time is being used in each of the different cases.
- Transactions per CPU second
The number of transactions that run per second of overall CPU utilization for the LPAR is calculated by dividing the WebSphere Transaction Rate by the overall CPU Utilization for the LPAR.
- CPU milliseconds per transaction

The average number of CPU milliseconds it takes for a WebSphere transaction to complete. It is defined as the number of processors (4) multiplied by 1000 ms, divided by the number of transactions per CPU second. For example, in the CICS TG 0.5 KB case, transactions per CPU second = 825/0.9465 = 872 (rounded), so CPU ms per transaction = (4 x 1000)/872 = 4.587.
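The two derived metrics above can be sketched in a few lines of Python (the helper names are ours, not from the report; the numbers are from the CICS TG local 0.5 KB case described above):

```python
def transactions_per_cpu_second(websphere_tx_rate, cpu_utilization):
    """WebSphere transaction rate divided by overall LPAR CPU utilization (0-1)."""
    return websphere_tx_rate / cpu_utilization

def cpu_ms_per_transaction(num_cpus, tx_per_cpu_sec):
    """Number of processors x 1000 ms, divided by transactions per CPU second."""
    return num_cpus * 1000 / tx_per_cpu_sec

# CICS TG local 0.5 KB case: 825 tx/sec at 94.65% LPAR utilization, 4 CPUs
tx_per_cpu_sec = round(transactions_per_cpu_second(825, 0.9465))
print(tx_per_cpu_sec)                                    # 872
print(round(cpu_ms_per_transaction(4, tx_per_cpu_sec), 3))  # 4.587
```

This reproduces the worked example in the text; the small difference from the 4.590 ms in Table 4-3 comes from the rounding step.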
4.4 Tuning and adjustment

In this section, we describe the changes to the settings and adjustments that we made for the tests.
4.4.1 Changing the settings

When we set up the testing environment, we changed some settings from their defaults to obtain optimal performance and recorded the specific settings used for each test. You can change these settings by following these instructions:
- Pass by reference: In the WebSphere Admin Console, select Servers → Application Servers → Your server → ORB Service. Make sure that Pass by reference is turned on.
- Number of server instances: In the WebSphere Admin Console, select Servers → Application Servers → Your server → Server Instance. Make sure that Multiple Instance Enabled is turned on. Set Minimum and Maximum Number of Instances to the total number of servants that you wish to run. We changed this value between tests, depending on the throughput and the workload differences between connectors.
- Workload profile: In the WebSphere Admin Console, select Servers → Application Servers → Your server → ORB Service → Advanced Settings. Set the Workload profile to IOBOUND or LONGWAIT. With LONGWAIT, each servant has 40 threads available. With IOBOUND, each servant has 12 threads available because there are four processors: MIN(30, MAX(5, Number of CPUs*3)). See 2.3, “WLM” on page 15 for more information.
- Data placement: Placement of data on the ESS was tuned for optimal performance.
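The IOBOUND thread formula above can be illustrated as follows (a sketch in Python; WebSphere computes this internally, so the function name is ours):

```python
def iobound_threads(num_cpus):
    # IOBOUND worker threads per servant: MIN(30, MAX(5, Number of CPUs * 3))
    return min(30, max(5, num_cpus * 3))

LONGWAIT_THREADS = 40  # LONGWAIT uses a fixed 40 threads per servant

# On the four-way LPARs used in these tests:
print(iobound_threads(4))  # 12
```

The formula floors the thread count at 5 on small machines and caps it at 30 on large ones.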
4.4.2 Adjustment

In this section, we list the adjustments that we made.
Garbage collection

The results table contains two CPU ms/transaction values. The first value is the actual measurement result. The second value is adjusted to reduce the GC costs to normal values.
While we were analyzing our measurement results, we recognized that the GC resource consumption was too high, especially in cases where the COMMAREA was larger. Other measurements outside the scope of this book indicate that in most cases, it is possible to reduce the GC to under 2% of the total servant region CPU time.
Normally GC processor usage can be reduced by:
- Increasing the Java heap size
- Reducing the number of threads in each servant region
- Increasing the number of servant regions
This process requires multiple measurements and analysis. We did not have enough time to tune all these cases, and this is the reason we had to adjust the measurement results as follows:
- Transactions/CPU_sec is computed as WebSphere transaction rate/CPU utilization.
- CPU ms/tran is computed as number_of_CPUs x 1000/(transactions/CPU_sec).
To have a common ground for all measurements, we reduced the servant region CPU/transaction number to 3% of the total CPU time that is consumed by the enclaves and the servant region. The formula that we used is:
Adjusted Servant CPU/tran = 0.03 x (adjusted servant CPU/tran + actual enclave CPU/tran)
This results in:
Adjusted Servant CPU/tran = 0.03/0.97 x actual Enclave CPU/tran
So:
Adjusted CPU/tran = actual CPU/tran - (actual servant CPU/tran - adjusted servant CPU/tran)
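The adjustment formulas above can be combined into a short sketch (Python; the function and variable names are ours, and all inputs are CPU ms/transaction values):

```python
def adjusted_servant_cpu_per_tran(actual_enclave_cpu_per_tran, target=0.03):
    # Adjusted servant CPU/tran = target/(1 - target) x actual enclave CPU/tran,
    # which follows from: adjusted servant = target x (adjusted servant + enclave)
    return target / (1 - target) * actual_enclave_cpu_per_tran

def adjusted_cpu_per_tran(actual_cpu, actual_servant_cpu, actual_enclave_cpu):
    # Adjusted CPU/tran = actual CPU/tran - (actual servant - adjusted servant)
    return actual_cpu - (actual_servant_cpu -
                         adjusted_servant_cpu_per_tran(actual_enclave_cpu))

# Illustrative round numbers: servant GC cost drops from 1.0 to 0.3 ms/tran,
# so the total adjusted value drops from 10.0 to 9.3 ms/tran.
print(round(adjusted_cpu_per_tran(10.0, 1.0, 9.7), 3))  # 9.3
```

In other words, the servant region's measured CPU cost is replaced by the 3% target, and the difference is subtracted from the total.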
WebSphere Studio Enterprise Developer fix

After our measurements were completed, a problem in WebSphere Studio Enterprise Developer was found and corrected (in WebSphere Studio Enterprise Developer 5.1.2.1). The fix affects all CICS SOAP measurements, but it has no effect on the other measurements. Based on performance data that was collected by other performance teams in IBM, and with the help of CICS performance experts, we have adjusted our CICS SOAP results to reflect the improvements that we can expect from this fix. The Adjusted column in each table in 4.5, “Results for CICS” on page 60 reflects both of these adjustments (GC and WebSphere Studio Enterprise Developer), and thereby our best estimates given the constraints of our tests.
4.5 Results for CICS

Table 4-1 shows a comparison of the results from all test cases.

Table 4-1 Table of results for CICS

                                                          CPU ms/transaction
EIS    Connector        Location   App. data size (KB)    Actual     Adjusted
CICS   CICS TG          Local      0.5                     4.590      4.502
                                   5.0                    10.746      9.841
                                   20.0                   47.253     33.234
CICS   CICS TG          Remote     0.5                     5.571      5.227
                                   5.0                    13.396     11.793
                                   20.0                   37.938     33.145
CICS   SOAP             Local      0.5                    10.673      8.447
                                   5.0                    73.653     59.581
                                   Simple 5.0             16.762     13.091
CICS   SOAP             Remote     0.5                    11.460      9.263
                                   5.0                    77.792     64.385
                                   Simple 5.0             17.928     14.323
CICS   CICS MQ Bridge   Local      0.5                     9.774      9.392
                                   5.0                    11.430     10.850
                                   20.0                   16.045     14.434
CICS   CICS MQ Bridge   Remote     0.5                    12.576     12.497
                                   5.0                    14.328     14.201
                                   20.0                   18.878     18.544

Important: For all CICS MQ DPL Bridge test cases, very little adjustment was needed. This indicates that the CICS MQ DPL Bridge requires a smaller memory footprint in the Java heap than the other connectors. Also, the WebSphere servant appl% might be lower with proper Java heap tuning.

More detailed metrics for each test case can be found in subsequent sections.

4.5.1 CICS Transaction Gateway

The results in this section were obtained for six tests of CICS TG. For a detailed explanation of the fields, see z/OS V1R6.0 RMF Report Analysis, SC33-7991-09.
Local (0.5 KB)

Table 4-2 shows the configuration and Table 4-3 shows the results for the local CICS TG test case with a 0.5 KB COMMAREA. After preliminary tests, we determined that four servants running with a workload profile of LONGWAIT produced the best performance for this case.

Table 4-2 Test configuration

Variable name        Value
Number of clients    2000
Workload profile     LONGWAIT
Number of servants   4

Table 4-3 Test results

Variable name                             Value
Actual CPU milliseconds/transaction       4.590 ms
Adjusted CPU milliseconds/transaction     4.502 ms
WebSphere transaction rate                824.82 / sec
EIS transaction rate                      606.59 / sec
WebSphere transaction time (actual)       0.713 sec
WebSphere transaction time (execution)    0.144 sec
WebSphere transaction time (queued)       0.569 sec
Overall CPU utilization for LPAR          94.7%
Demand paging on the LPAR                 0.05 / sec
Average end-to-end response time          0.798 sec
WebSphere controller appl%                39.3%
WebSphere servant appl%                   14.3%
WebSphere enclaves appl%                  227.1%
CICS appl%                                56.3%
Local (5 KB)

Table 4-4 shows the configuration and Table 4-5 shows the results for the local CICS TG test case with a 5 KB COMMAREA. After preliminary tests, we determined that two servants running with a LONGWAIT workload profile produced the best performance for this case.

Table 4-4 Test configuration

Variable name        Value
Number of clients    950
Workload profile     LONGWAIT
Number of servants   2

Table 4-5 Test results

Variable name                             Value
CPU milliseconds/transaction              10.746 ms
Adjusted CPU milliseconds/transaction     9.841 ms
WebSphere transaction rate                335.58 / sec
EIS transaction rate                      245.96 / sec
WebSphere transaction time (actual)       1.145 sec
WebSphere transaction time (execution)    0.106 sec
WebSphere transaction time (queued)       1.039 sec
Overall CPU utilization for LPAR          90.2%
Demand paging on the LPAR                 0.01 / sec
Average end-to-end response time          1.247 sec
WebSphere controller appl%                12.3%
WebSphere servant appl%                   38.4%
WebSphere enclaves appl%                  260.5%
CICS appl%                                23.1%
Local (20 KB)

Table 4-6 shows the configuration and Table 4-7 shows the results for the local CICS TG test case with a 20 KB COMMAREA. After preliminary tests, we determined that one servant running with a LONGWAIT workload profile produced the best performance for this test case.

Table 4-6 Test configuration

Variable name        Value
Number of clients    185
Workload profile     LONGWAIT
Number of servants   1

Table 4-7 Test results

Variable name                             Value
CPU milliseconds/transaction              47.253 ms
Adjusted CPU milliseconds/transaction     33.234 ms
WebSphere transaction rate                75.72 / sec
EIS transaction rate                      55.7 / sec
WebSphere transaction time (actual)       0.837 sec
WebSphere transaction time (execution)    0.390 sec
WebSphere transaction time (queued)       0.446 sec
Overall CPU utilization for LPAR          89.5%
Demand paging on the LPAR                 0.01 / sec
Average end-to-end response time          1.213 sec
WebSphere controller appl%                2.5%
WebSphere servant appl%                   112.9%
WebSphere enclaves appl%                  218.1%
CICS appl%                                6.3%
Remote (0.5 KB)

Table 4-8 shows the configuration and Table 4-9 shows the results for the remote CICS TG test case with a 0.5 KB COMMAREA. After preliminary tests, we determined that four servants running with a LONGWAIT workload profile produced the best performance for this test case.

Table 4-8 Test configuration

Variable name        Value
Number of clients    1200
Workload profile     LONGWAIT
Number of servants   4

Table 4-9 Test results

Variable name                                        Value
CPU milliseconds/transaction                         5.571 ms
Adjusted CPU milliseconds/transaction                5.227 ms
WebSphere transaction rate                           636.88 / sec
EIS transaction rate                                 467.43 / sec
WebSphere transaction time (actual)                  0.768 sec
WebSphere transaction time (execution)               0.129 sec
WebSphere transaction time (queued)                  0.638 sec
Overall CPU utilization for LPAR running WebSphere   62.1%
Overall CPU utilization for LPAR running the EIS     26.7%
Demand paging on the LPAR running WebSphere          0.03 / sec
Demand paging on the LPAR running the EIS            0.00 / sec
Average end-to-end response time                     0.915 sec
WebSphere controller appl%                           24.7%
WebSphere servant appl%                              27.0%
WebSphere enclaves appl%                             165.5%
CICS appl%                                           39.3%
CICS Transaction Gateway appl%                       44.9%

Remote (5 KB)

Table 4-10 shows the configuration and Table 4-11 shows the results for the remote CICS TG test case with a 5 KB COMMAREA. After preliminary tests, we determined that four servants running with a LONGWAIT workload profile produced the best performance for this case.

Table 4-10 Test configuration

Variable name        Value
Number of clients    700
Workload profile     LONGWAIT
Number of servants   4

Table 4-11 Test results

Variable name                                        Value
CPU milliseconds/transaction                         13.396 ms
Adjusted CPU milliseconds/transaction                11.793 ms
WebSphere transaction rate                           321.89 / sec
EIS transaction rate                                 236.12 / sec
WebSphere transaction time (actual)                  0.559 sec
WebSphere transaction time (execution)               0.145 sec
WebSphere transaction time (queued)                  0.413 sec
Overall CPU utilization for LPAR running WebSphere   88.2%
Overall CPU utilization for LPAR running the EIS     19.6%
Demand paging on the LPAR running WebSphere          0.19 / sec
Demand paging on the LPAR running the EIS            0.01 / sec
Average end-to-end response time                     0.584 sec
WebSphere controller appl%                           10.8%
WebSphere servant appl%                              59.5%
WebSphere enclaves appl%                             255.2%
CICS appl%                                           24.1%
CICS Transaction Gateway appl%                       35.4%

Remote (20 KB)

Table 4-12 shows the configuration and Table 4-13 shows the results for the remote CICS TG test case with a 20 KB COMMAREA. After preliminary tests, we determined that two servants running with a LONGWAIT workload profile produced the best performance for this case.

Table 4-12 Test configuration

Variable name        Value
Number of clients    200
Workload profile     LONGWAIT
Number of servants   2

Table 4-13 Test results

Variable name                                        Value
CPU milliseconds/transaction                         37.938 ms
Adjusted CPU milliseconds/transaction                33.145 ms
WebSphere transaction rate                           104.75 / sec
EIS transaction rate                                 76.98 / sec
WebSphere transaction time (actual)                  0.327 sec
WebSphere transaction time (execution)               0.248 sec
WebSphere transaction time (queued)                  0.079 sec
Overall CPU utilization for LPAR running WebSphere   86.9%
Overall CPU utilization for LPAR running the EIS     12.5%
Demand paging on the LPAR running WebSphere          0.02 / sec
Demand paging on the LPAR running the EIS            0.01 / sec
Average end-to-end response time                     0.349 sec
WebSphere controller appl%                           3.4%
WebSphere servant appl%                              58.4%
WebSphere enclaves appl%                             264.9%
CICS appl%                                           9.3%
CICS Transaction Gateway appl%                       24.2%

4.5.2 SOAP for CICS

The results in this section were obtained for six tests of SOAP for CICS. In our test environment, we used a single CICS region and executed the business logic of the Trader application on the CICS QR TCB. As a result, we were only able to drive our throughput to the point where the CICS QR TCB was using approximately 90% of one processor.

Note that in CICS TS V3.1, SOAP work is also off-loaded to other TCBs, which increases throughput in a multi-processor environment.

Because of the nature of XML, the SOAP for CICS results vary with the complexity of the data structure that is being used. We demonstrated this by running two sets of tests in the 5 KB case: a simpler case with a smaller number of larger fields in the COMMAREA, and a more complex case with a very large number of small fields. This test case was run in local and remote scenarios.

Note: If the Trader application had been designed as threadsafe, with data access in IBM DB2, we could have used the CICS Open Transaction Environment (OTE), allowing us to run the business logic on additional CICS TCBs, and therefore allowing CICS to use more than one processor. For additional information about how to use the CICS OTE, refer to Threadsafe considerations for CICS, SG24-5631.

In the complex 5 KB case, there were 1300 elements and the transport size was 77 KB; in the simple 5 KB case, there were 160 elements and the total transport size was approximately 38 KB.

It should also be noted that the transaction rate for CICS, as it is shown in the RMF report, is twice what it would be in the CICS TG case. This is because the SOAP case uses a Web Attach transaction. The EIS transaction rate that you see in our examples is calculated by dividing the reported transaction rate by 2.
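The halving rule above can be sketched as follows (Python; the function name and the sample input are ours, chosen so the output matches the 144.14/sec EIS rate reported for the local complex 0.5 KB SOAP case):

```python
def eis_transaction_rate(rmf_ended_per_sec, soap=False):
    # For SOAP for CICS, RMF counts two CICS transactions per request
    # (the Web Attach transaction plus the alias transaction), so the
    # reported rate is divided by 2; other connectors are taken as-is.
    return rmf_ended_per_sec / 2 if soap else rmf_ended_per_sec

print(round(eis_transaction_rate(288.28, soap=True), 2))  # 144.14
```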
Keep-alive

WebSphere Application Server for z/OS V5.1 does not support keep-alive for outbound HTTP requests. Therefore, when sending SOAP messages to a CICS TS V2.3 system with SOCKETCLOSE(10) specified in the TCPIPSERVICE resource definition, the socket is closed after each message is processed. As a result, each SOAP message causes two CICS transactions to be run, namely the Web Attach transaction (CWXN) and the Web Alias transaction (CWBA). WebSphere V6 supports keep-alive for outbound HTTP 1.1 requests. When used in conjunction with HTTP 1.1 support in CICS TS V3.1, keep-alive can be used for WebSphere SOAP requests to CICS. For more information, refer to:

http://publib.boulder.ibm.com/infocenter/wasinfo/v6r0/index.jsp?topic=/com.ibm.websphere.zseries.doc/info/zseries/ae/rwbs_transportheaderproperty.html
Local complex (0.5 KB)

Table 4-14 shows the configuration and Table 4-15 on page 70 shows the results for the local SOAP for CICS test case with a 0.5 KB COMMAREA. After preliminary tests, we determined that five servants running with an IOBOUND workload profile produced the best performance for this case because of the work that is done by WebSphere to create and parse XML.

Table 4-14 Test configuration

Variable name        Value
Number of clients    700
Workload profile     IOBOUND
Number of servants   5

Note: The CICS appl% values should be much lower after applying the fix to WebSphere Studio Enterprise Developer. Refer to “WebSphere Studio Enterprise Developer fix” on page 60 for more information about this fix.

Table 4-15 Test results

Variable name                             Value
CPU milliseconds/transaction              10.673 ms
Adjusted CPU milliseconds/transaction     8.447 ms
WebSphere transaction rate                196.75 / sec
EIS transaction rate                      144.14 / sec
WebSphere transaction time (actual)       1.962 sec
WebSphere transaction time (execution)    0.201 sec
WebSphere transaction time (queued)       1.761 sec
Overall CPU utilization for LPAR          52.5%
Demand paging on the LPAR                 1.19 / sec
Average end-to-end response time          1.990 sec
WebSphere controller appl%                6.7%
WebSphere servant appl%                   4.5%
WebSphere enclaves appl%                  83.5%
CICS appl%                                93.7%

Local complex (5 KB)

Table 4-16 shows the test configuration and Table 4-17 on page 71 shows the results for the local SOAP for CICS test case with a 5 KB COMMAREA. After preliminary tests, we determined that two servants running with an IOBOUND workload profile produced the best performance for this case because of the work that is done by WebSphere to create and parse XML.

Table 4-16 Test configuration

Variable name        Value
Number of clients    70
Workload profile     IOBOUND
Number of servants   2
Table 4-17 Test results

Variable name                             Value
CPU milliseconds/transaction              73.653 ms
Adjusted CPU milliseconds/transaction     59.581 ms
WebSphere transaction rate                30.06 / sec
EIS transaction rate                      22.07 / sec
WebSphere transaction time (actual)       0.760 sec
WebSphere transaction time (execution)    0.447 sec
WebSphere transaction time (queued)       0.313 sec
Overall CPU utilization for LPAR          55.4%
Demand paging on the LPAR                 0.01 / sec
Average end-to-end response time          0.818 sec
WebSphere controller appl%                1.0%
WebSphere servant appl%                   8.6%
WebSphere enclaves appl%                  104.2%
CICS appl%                                94.9%

Local simple (5 KB)

Table 4-18 shows the configuration and Table 4-19 on page 72 shows the results for the local SOAP for CICS test case with a 5 KB COMMAREA made up of simpler data. After preliminary tests, we determined that five servants running with an IOBOUND workload profile produced the best performance for this case because of the work that is done by WebSphere to create and parse XML.

Table 4-18 Test configuration

Variable name        Value
Number of clients    270
Workload profile     IOBOUND
Number of servants   5
Table 4-19 Test results

Variable name                             Value
CPU milliseconds/transaction              16.762 ms
Adjusted CPU milliseconds/transaction     13.091 ms
WebSphere transaction rate                115.14 / sec
EIS transaction rate                      84.67 / sec
WebSphere transaction time (actual)       0.738 sec
WebSphere transaction time (execution)    0.200 sec
WebSphere transaction time (queued)       0.537 sec
Overall CPU utilization for LPAR          48.3%
Demand paging on the LPAR                 3.15 / sec
Average end-to-end response time          0.791 sec
WebSphere controller appl%                3.6%
WebSphere servant appl%                   4.9%
WebSphere enclaves appl%                  82.0%
CICS appl%                                84.6%

Remote complex (0.5 KB)

Table 4-20 shows the configuration and Table 4-21 on page 73 shows the results for the remote SOAP for CICS test case with a 0.5 KB COMMAREA. After preliminary tests, we determined that five servants running with an IOBOUND workload profile produced the best performance for this case because of the work that is done in WebSphere to create and parse XML.

Table 4-20 Test configuration

Variable name        Value
Number of clients    550
Workload profile     IOBOUND
Number of servants   5
Table 4-21 Test results

Variable name                                        Value
CPU milliseconds/transaction                         11.460 ms
Adjusted CPU milliseconds/transaction                9.263 ms
WebSphere transaction rate                           196.68 / sec
EIS transaction rate                                 144.28 / sec
WebSphere transaction time (actual)                  1.209 sec
WebSphere transaction time (execution)               0.258 sec
WebSphere transaction time (queued)                  0.951 sec
Overall CPU utilization for LPAR running WebSphere   29.1%
Overall CPU utilization for LPAR running the EIS     27.3%
Demand paging on the LPAR running WebSphere          1.09 / sec
Demand paging on the LPAR running the EIS            0.00 / sec
Average end-to-end response time                     1.270 sec
WebSphere controller appl%                           6.2%
WebSphere servant appl%                              4.2%
WebSphere enclaves appl%                             83.6%
CICS appl%                                           93.4%

Remote complex (5 KB)

Table 4-22 on page 74 shows the configuration and Table 4-23 on page 74 shows the results for the remote SOAP for CICS test case with a 5 KB COMMAREA. After preliminary tests, we determined that two servants running with an IOBOUND workload profile produced the best performance for this case because of the work that is done by WebSphere to create and parse XML.

Table 4-22 Test configuration

Variable name        Value
Number of clients    70
Workload profile     IOBOUND
Number of servants   2

Table 4-23 Test results

Variable name                                        Value
CPU milliseconds/transaction                         77.792 ms
Adjusted CPU milliseconds/transaction                64.385 ms
WebSphere transaction rate                           30.80 / sec
EIS transaction rate                                 22.63 / sec
WebSphere transaction time (actual)                  0.697 sec
WebSphere transaction time (execution)               0.590 sec
WebSphere transaction time (queued)                  0.107 sec
Overall CPU utilization for LPAR running WebSphere   32.5%
Overall CPU utilization for LPAR running the EIS     27.5%
Demand paging on the LPAR running WebSphere          0.02 / sec
Demand paging on the LPAR running the EIS            0.00 / sec
Average end-to-end response time                     0.717 sec
WebSphere controller appl%                           1.1%
WebSphere servant appl%                              7.3%
WebSphere enclaves appl%                             106.7%
CICS appl%                                           96.7%
Remote simple (5 KB)Table 4-24 shows the configuration and Table 4-25 shows the results for the remote SOAP for CICS test case with a 5 KB COMMAREA made up of simpler data. After preliminary tests, we determined that five servants running with an IOBOUND workload profile produced the best performance for this case because of the work that is done by WebSphere to create and parse XML.
Table 4-24 Test configuration
Table 4-25 Test results
Variable name Value
Number of clients 270
Workload profile IOBOUND
Number of servants 5
Variable name Value
CPU milliseconds/transaction 17.928 ms
Adjusted CPU milliseconds/transaction 14.323 ms
WebSphere transaction rate 122.60 / sec
EIS transaction rate 90.14 / sec
WebSphere transaction time (actual) 0.620 sec
WebSphere transaction time (execution) 0.275 sec
WebSphere transaction time (queued) 0.344 sec
Overall CPU utilization for LPAR running WebSphere
28.9%
Overall CPU utilization for LPAR running the EIS
26.1%
Demand paging on the LPAR running WebSphere
0.12 / sec
Demand paging on the LPAR running the EIS
0.01 / sec
Average end-to-end response time 0.627 sec
WebSphere controller appl% 3.8%
WebSphere servant appl% 4.4%
Chapter 4. Measurements and results 75
4.5.3 CICS MQ DPL BridgeThe following results were obtained for six tests of the CICS MQ DPL Bridge.
Local (0.5 KB)Table 4-26 shows the configuration and Table 4-27 shows the results for the local CICS MQ DPL Bridge test case with a 0.5 KB COMMAREA. After preliminary tests, we determined that four servants running with a LONGWAIT workload profile produced the best performance for this case.
Table 4-26 Test configuration
Table 4-27 Test results
WebSphere enclaves appl% 87.7%
CICS appl% 90.1%
Variable name Value
Number of clients 450
Workload profile LONGWAIT
Number of servants 4
Variable name Value
CPU milliseconds/transaction 9.774 ms
Adjusted CPU milliseconds/transaction 9.392 ms
WebSphere transaction rate 360.95 / sec
EIS transaction rate 265.02 / sec
WebSphere transaction time (actual) 0.174 sec
WebSphere transaction time (execution) 0.091 sec
WebSphere transaction time (queued) 0.083 sec
Overall CPU utilization for LPAR 88.2%
Demand paging on the LPAR 0.01 / sec
Average end-to-end response time 0.174 sec
WebSphere controller appl% 11.6%
Local (5 KB)
Table 4-28 shows the configuration and Table 4-29 shows the results for the local CICS MQ DPL Bridge test case with a 5 KB COMMAREA. After preliminary tests, we determined that two servants running with a LONGWAIT workload profile produced the best performance for this case.
Table 4-28 Test configuration
Table 4-29 Test results
WebSphere servant appl% 21.1%
WebSphere enclaves appl% 235.7%
CICS appl% 50.7%
MQ master appl% 9.8%
Variable name Value
Number of clients 950
Workload profile LONGWAIT
Number of servants 2
Variable name Value
CPU milliseconds/transaction 11.430 ms
Adjusted CPU milliseconds/transaction 10.850 ms
WebSphere transaction rate 312.69 / sec
EIS transaction rate 229.43 / sec
WebSphere transaction time (actual) 0.209 sec
WebSphere transaction time (execution) 0.114 sec
WebSphere transaction time (queued) 0.095 sec
Overall CPU utilization for LPAR 89.4%
Demand paging on the LPAR 0.00 / sec
Average end-to-end response time 0.221 sec
WebSphere controller appl% 10.0%
WebSphere servant appl% 25.4%
Local (20 KB)
Table 4-30 shows the configuration and Table 4-31 shows the results for the local CICS MQ DPL Bridge test case with a 20 KB COMMAREA. After preliminary tests, we determined that two servants running with a LONGWAIT workload profile produced the best performance for this case.
Table 4-30 Test configuration
Table 4-31 Test results
WebSphere enclaves appl% 234.7%
CICS appl% 54.9%
MQ master appl% 8.6%
Variable name Value
Number of clients 300
Workload profile LONGWAIT
Number of servants 2
Variable name Value
CPU milliseconds/transaction 16.045 ms
Adjusted CPU milliseconds/transaction 14.434 ms
WebSphere transaction rate 218.51 / sec
EIS transaction rate 157.96 / sec
WebSphere transaction time (actual) 0.310 sec
WebSphere transaction time (execution) 0.144 sec
WebSphere transaction time (queued) 0.165 sec
Overall CPU utilization for LPAR 87.7%
Demand paging on the LPAR 0.00 / sec
Average end-to-end response time 0.379 sec
WebSphere controller appl% 7.5%
WebSphere servant appl% 41.9%
WebSphere enclaves appl% 216.3%
Remote (0.5 KB)
Table 4-32 shows the configuration and Table 4-33 shows the results for the remote CICS MQ DPL Bridge test case with a 0.5 KB COMMAREA. After preliminary tests, we determined that four servants running with a LONGWAIT workload profile produced the best performance for this test case.
Table 4-32 Test configuration
Table 4-33 Test results
CICS appl% 56.3%
MQ master appl% 6.2%
Variable name Value
Number of clients 600
Workload profile LONGWAIT
Number of servants 4
Variable name Value
CPU milliseconds/transaction 12.576 ms
Adjusted CPU milliseconds/transaction 12.497 ms
WebSphere transaction rate 465.73 / sec
EIS transaction rate 334.70 / sec
WebSphere transaction time (actual) 0.232 sec
WebSphere transaction time (execution) 0.073 sec
WebSphere transaction time (queued) 0.158 sec
Overall CPU utilization for LPAR running WebSphere 83.4%
Overall CPU utilization for LPAR running the EIS 60.2%
Demand paging on the LPAR running WebSphere 0.00 / sec
Demand paging on the LPAR running the EIS 0.00 / sec
Remote (5 KB)
Table 4-34 shows the configuration and Table 4-35 shows the results for the remote CICS MQ DPL Bridge test case with a 5 KB COMMAREA. After preliminary tests, we determined that four servants running with a LONGWAIT workload profile produced the best performance for this test case.
Table 4-34 Test configuration
Table 4-35 Test results
Average end-to-end response time 0.250 sec
WebSphere controller appl% 14.4%
WebSphere servant appl% 12.1%
WebSphere enclaves appl% 273.6%
CICS appl% 63.3%
MQ master appl% 15.8%
MQ channels appl% 129.7%
Variable name Value
Number of clients 500
Workload profile LONGWAIT
Number of servants 2
Variable name Value
CPU milliseconds/transaction 14.328 ms
Adjusted CPU milliseconds/transaction 14.201 ms
WebSphere transaction rate 401.88 / sec
EIS transaction rate 294.93 / sec
WebSphere transaction time (actual) 0.292 sec
WebSphere transaction time (execution) 0.092 sec
WebSphere transaction time (queued) 0.199 sec
Overall CPU utilization for LPAR running WebSphere 82.2%
Remote (20 KB)
Table 4-36 shows the configuration and Table 4-37 shows the results for the remote CICS MQ DPL Bridge test case with a 20 KB COMMAREA. After preliminary tests, we determined that two servants running with a LONGWAIT workload profile produced the best performance for this test case.
Table 4-36 Test configuration
Table 4-37 Test results
Overall CPU utilization for LPAR running the EIS 61.75%
Demand paging on the LPAR running WebSphere 0.00 / sec
Demand paging on the LPAR running the EIS 0.00 / sec
Average end-to-end response time 0.304 sec
WebSphere controller appl% 12.8%
WebSphere servant appl% 13.4%
WebSphere enclaves appl% 269.1%
CICS appl% 70.5%
MQ master appl% 14.3%
MQ channels appl% 128.6%
Variable name Value
Number of clients 370
Workload profile LONGWAIT
Number of servants 2
Variable name Value
CPU milliseconds/transaction 18.878 ms
Adjusted CPU milliseconds/transaction 18.544 ms
WebSphere transaction rate 297.28 / sec
EIS transaction rate 218.28 / sec
4.6 Results for IMS
Table 4-38 is a comparison of the results from all IMS test cases.
Table 4-38 Table of results for IMS
WebSphere transaction time (actual) 0.175 sec
WebSphere transaction time (execution) 0.109 sec
WebSphere transaction time (queued) 0.066 sec
Overall CPU utilization for LPAR running WebSphere 84.65%
Overall CPU utilization for LPAR running the EIS 55.65%
Demand paging on the LPAR running WebSphere 0.00 / sec
Demand paging on the LPAR running the EIS 0.00 / sec
Average end-to-end response time 0.184 sec
WebSphere controller appl% 9.1%
WebSphere servant appl% 18.5%
WebSphere enclaves appl% 277.6%
CICS appl% 76.7%
MQ master appl% 11.4%
MQ channels appl% 103.0%
EIS Connector Location Application data size (KB) CPU ms/transaction (actual) CPU ms/transaction (adjusted)
IMS IMS Connect Local 0.5 6.735 6.628
5.0 13.219 12.076
20.0 40.033 32.652
More detailed metrics for each test case are discussed in the sections that follow.
4.6.1 IMS Connect
The following results were obtained for the six tests of IMS Connect.
Local (0.5 KB)
Table 4-39 shows the configuration and Table 4-40 shows the results for the local IMS Connect test case with a 0.5 KB COMMAREA. After preliminary tests, we determined that four servants running with a LONGWAIT workload profile produced the best performance for this case.
Table 4-39 Test configuration
Table 4-40 Test results
Remote 0.5 7.263 7.178
5.0 14.371 12.926
20.0 52.457 37.623
MQ/IMS DPL Bridge Local 5.0 47.248 45.202
Important: WebSphere servant appl% could be lower with proper Java heap tuning.
Variable name Value
Number of clients 700
Workload profile LONGWAIT
Number of servants 4
Variable name Value
CPU milliseconds/transaction 6.735 ms
Adjusted CPU milliseconds/transaction 6.628 ms
WebSphere transaction rate 279.13 / sec
Local (5 KB)
Table 4-41 shows the configuration and Table 4-42 shows the results for the local IMS Connect test case with a 5 KB COMMAREA. After preliminary tests, we determined that two servants running with a LONGWAIT workload profile produced the best performance for this case.
Table 4-41 Test configuration
Table 4-42 Test results
EIS transaction rate 204.91 / sec
WebSphere transaction time (actual) 0.866 sec
WebSphere transaction time (execution) 0.196 sec
WebSphere transaction time (queued) 0.669 sec
Overall CPU utilization for LPAR 47.0%
Demand paging on the LPAR 0.165 / sec
Average end-to-end response time 0.980 sec
WebSphere controller appl% 9.5%
WebSphere servant appl% 5.0%
WebSphere enclaves appl% 64.6%
IMS appl% 80.2%
IMS Connect appl% 2.6%
Variable name Value
Number of clients 400
Workload profile LONGWAIT
Number of servants 2
Variable name Value
CPU milliseconds/transaction 13.219 ms
Adjusted CPU milliseconds/transaction 12.076 ms
WebSphere transaction rate 206.06 / sec
EIS transaction rate 147.44 / sec
Local (20 KB)
Table 4-43 shows the configuration and Table 4-44 shows the results for the local IMS Connect test case with a 20 KB COMMAREA. After preliminary tests, we determined that two servants running with a LONGWAIT workload profile produced the best performance for this case.
Table 4-43 Test configuration
Table 4-44 Test results
WebSphere transaction time (actual) 0.325 sec
WebSphere transaction time (execution) 0.167 sec
WebSphere transaction time (queued) 0.157 sec
Overall CPU utilization for LPAR 68.1%
Demand paging on the LPAR 0.06 / sec
Average end-to-end response time 0.410 sec
WebSphere controller appl% 6.9%
WebSphere servant appl% 28.7%
WebSphere enclaves appl% 166.4%
IMS appl% 44.2%
IMS Connect appl% 2.1%
Variable name Value
Number of clients 135
Workload profile LONGWAIT
Number of servants 2
Variable name Value
CPU milliseconds/transaction 40.033 ms
Adjusted CPU milliseconds/transaction 32.652 ms
WebSphere transaction rate 72.64 / sec
EIS transaction rate 51.89 / sec
WebSphere transaction time (actual) 0.279 sec
Remote (0.5 KB)
Table 4-45 shows the configuration and Table 4-46 shows the results for the remote IMS Connect test case with a 0.5 KB COMMAREA. After preliminary tests, we determined that four servants running with a LONGWAIT workload profile produced the best performance for this case.
Table 4-45 Test configuration
Table 4-46 Test results
WebSphere transaction time (execution) 0.229 sec
WebSphere transaction time (queued) 0.050 sec
Overall CPU utilization for LPAR 72.7%
Demand paging on the LPAR 0.01 / sec
Average end-to-end response time 0.310 sec
WebSphere controller appl% 2.8%
WebSphere servant appl% 59.5%
WebSphere enclaves appl% 190.2%
IMS appl% 17.8%
IMS Connect appl% 0.9%
Variable name Value
Number of clients 700
Workload profile LONGWAIT
Number of servants 4
Variable name Value
CPU milliseconds/transaction 7.263 ms
Adjusted CPU milliseconds/transaction 7.178 ms
WebSphere transaction rate 248.37 / sec
EIS transaction rate 182.31 / sec
WebSphere transaction time (actual) 0.951 sec
WebSphere transaction time (execution) 0.263 sec
Remote (5 KB)
Table 4-47 shows the configuration and Table 4-48 shows the results for the remote IMS Connect test case with a 5 KB COMMAREA. After preliminary tests, we determined that four servants running with a LONGWAIT workload profile produced the best performance for this case.
Table 4-47 Test configuration
Table 4-48 Test results
WebSphere transaction time (queued) 0.688 sec
Overall CPU utilization for LPAR running WebSphere 19.6%
Overall CPU utilization for LPAR running the EIS 25.5%
Demand paging on the LPAR running WebSphere 0.01 / sec
Demand paging on the LPAR running the EIS 0.00 / sec
Average end-to-end response time 0.976 sec
WebSphere controller appl% 7.7%
WebSphere servant appl% 3.7%
WebSphere enclaves appl% 51.4%
IMS appl% 81.7%
IMS Connect appl% 3.7%
Variable name Value
Number of clients 500
Workload profile LONGWAIT
Number of servants 2
Variable name Value
CPU milliseconds/transaction 14.371 ms
Adjusted CPU milliseconds/transaction 12.926 ms
Remote (20 KB)
Table 4-49 shows the configuration and Table 4-50 on page 89 shows the results for the remote IMS Connect test case with a 20 KB COMMAREA. After preliminary tests, we determined that two servants running with a LONGWAIT workload profile produced the best performance for this test case.
Table 4-49 Test configuration
WebSphere transaction rate 240.21 / sec
EIS transaction rate 176.41 / sec
WebSphere transaction time (actual) 1.016 sec
WebSphere transaction time (execution) 0.284 sec
WebSphere transaction time (queued) 0.731 sec
Overall CPU utilization for LPAR running WebSphere 60.9%
Overall CPU utilization for LPAR running the EIS 25.4%
Demand paging on the LPAR running WebSphere 0.00 / sec
Demand paging on the LPAR running the EIS 0.00 / sec
Average end-to-end response time 1.080 sec
WebSphere controller appl% 7.7%
WebSphere servant appl% 40.2%
WebSphere enclaves appl% 177.5%
IMS appl% 79.4%
IMS Connect appl% 4.4%
Variable name Value
Number of clients 135
Workload profile LONGWAIT
Number of servants 2
Table 4-50 Test results
4.6.2 IMS MQ DPL Bridge
Table 4-51 on page 90 shows the configuration and Table 4-52 on page 90 shows the results for the local IMS MQ DPL Bridge test case with a 5 KB COMMAREA. After preliminary tests, we determined that two servants running with a LONGWAIT workload profile produced the best performance for this case.
Variable name Value
CPU milliseconds/transaction 52.457 ms
Adjusted CPU milliseconds/transaction 37.623 ms
WebSphere transaction rate 76.52 / sec
EIS transaction rate 56.09 / sec
WebSphere transaction time (actual) 0.712 sec
WebSphere transaction time (execution) 0.425 sec
WebSphere transaction time (queued) 0.286 sec
Overall CPU utilization for LPAR running WebSphere 90.75%
Overall CPU utilization for LPAR running the EIS 9.6%
Demand paging on the LPAR running WebSphere 0.00 / sec
Demand paging on the LPAR running the EIS 0.00 / sec
Average end-to-end response time 0.854 sec
WebSphere controller appl% 2.6%
WebSphere servant appl% 120.4%
WebSphere enclaves appl% 222.9%
IMS appl% 21.6%
IMS Connect appl% 2.3%
Table 4-51 Test configuration
Table 4-52 Test results
4.7 Connector and data size comparisons
We created several charts that compare the different connectors. In the charts, we refer to the COMMAREA, which is the actual data content transferred from WebSphere to the EIS. It does not include the XML tags or any other header or infrastructure-related data in the SOAP scenarios. For example, in the complex 5 KB case, the transport size was 77 KB; in the simple 5 KB case, it was approximately 38 KB.
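The overhead factors implied by the transport sizes quoted above can be computed directly; this is a sketch using only the figures in this section:

```python
payload_kb = 5.0           # COMMAREA content in both 5 KB SOAP cases
complex_transport_kb = 77  # total bytes on the wire, complex COMMAREA
simple_transport_kb = 38   # approximate total bytes on the wire, simple COMMAREA

# XML tagging inflates the 5 KB payload roughly 15.4x in the complex
# case and roughly 7.6x in the simple case.
complex_factor = complex_transport_kb / payload_kb
simple_factor = simple_transport_kb / payload_kb
print(f"complex: {complex_factor:.1f}x payload, simple: {simple_factor:.1f}x payload")
```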
Variable name Value
Number of clients 140
Workload profile LONGWAIT
Number of servants 2
Variable name Value
CPU milliseconds/transaction 47.248 ms
Adjusted CPU milliseconds/transaction 45.202 ms
WebSphere transaction rate 72.85 / sec
EIS transaction rate 53.41 / sec
WebSphere transaction time (actual) 0.860 sec
WebSphere transaction time (execution) 0.644 sec
WebSphere transaction time (queued) 0.215 sec
Overall CPU utilization for LPAR 86.05%
Demand paging on the LPAR 0.225 / sec
Average end-to-end response time 0.876 sec
WebSphere controller appl% 2.4%
WebSphere servant appl% 22.4%
WebSphere enclaves appl% 242.5%
IMS appl% 16.2%
MQ master appl% 6.4%
4.7.1 CICS comparison charts
Figure 4-2 is a comparison between the CICS TG, CICS SOAP, and CICS MQ DPL Bridge local and remote cases.
Figure 4-2 CICS results with 500 bytes COMMAREA
Tier 2 is the activity in the WebSphere system and Tier 3 is the activity in the EIS system. For the local cases, all EIS activity occurs in Tier 2.
The key findings were:
- CICS TG is more efficient than SOAP or WebSphere MQ DPL Bridge with a small COMMAREA.
- CICS SOAP is more efficient than CICS MQ DPL Bridge with a small COMMAREA.
Figure 4-3 shows the components of the CPU time.
Figure 4-3 CPU time breakdown for 500 byte cases
Servant region CPU time and EIS CPU time in the SOAP test have been adjusted as explained in 4.4.2, “Adjustment” on page 59.
The key findings were:
- In the SOAP case, the EIS CPU time is high because of the XML conversions for the complex COMMAREA.
- In the remote MQ DPL Bridge case, the MQ Channel CPU time is considerable. This value is the time spent in the MQ Channel Initiator to manage connections.
- In both the local and remote MQ DPL Bridge cases, Connector time refers to MQ Master, which manages the queues.
- The "other" category includes components such as TCP/IP communication. Note that this component is larger in the remote cases.
- CICS TG is the least expensive option overall; it uses the least CPU in both the WebSphere application and in CICS.
Figure 4-4 is a comparison between the CICS TG, CICS SOAP, and MQ DPL Bridge local and remote cases with the medium-sized COMMAREA.
Figure 4-4 CICS results with 5K bytes COMMAREA
Tier 2 is the activity in the WebSphere system and Tier 3 is the activity in the EIS. For the local cases, all EIS activity occurs in Tier 2.
The key findings were:
- As the COMMAREA complexity and size increase, SOAP uses more CPU time.
- A simpler (see Example 3-2 on page 31) SOAP COMMAREA of the same size significantly reduced the CPU time (approximately a 4X cost reduction; 38 KB XML transport size versus 77 KB; 160 data elements versus 1300).
- The local connectors are more efficient than remote connectors.
- With this complex COMMAREA, MQ DPL Bridge performs nearly as well as CICS TG.
- With a simple COMMAREA, SOAP performance was much closer to MQ DPL Bridge and CICS TG.
Note: The simple COMMAREA results for CICS SOAP do not provide a direct comparison with CICS TG or CICS MQ DPL Bridge results because CICS TG or CICS MQ DPL Bridge are likely to gain some benefit from the simpler COMMAREA.
Figure 4-5 shows the components of the CPU time.
Figure 4-5 CPU time breakdown for 5K byte cases
Servant region CPU time and EIS CPU time in the SOAP tests have been adjusted as explained in 4.4.2, “Adjustment” on page 59.
The key findings were:
- The SOAP COMMAREA complexity has a major impact on the CPU cost for both the WebSphere enclaves and the EIS because it includes the XML parsing costs.
- The MQ Channel value is the time spent in the MQ Channel Initiator to manage connections.
- In both the local and remote MQ DPL Bridge cases, Connector time refers to MQ Master, which manages the queues.
Figure 4-6 is a comparison between CICS TG and MQ DPL Bridge local and remote cases with our large-sized COMMAREA.
Figure 4-6 CICS results with 20K bytes COMMAREA
Tier 2 is the activity in the WebSphere system and Tier 3 is the activity in the EIS. For the local cases, all EIS activity occurs in Tier 2.
The key findings were:
- MQ DPL Bridge performs better than CICS TG with a large and complex COMMAREA.
- CICS TG local and remote have similar consumption with a large and complex COMMAREA.
- The relative CPU cost delta between local and remote decreases as the COMMAREA increases (see Figure 4-7 on page 96).
The CPU time breakdown is shown in Figure 4-7.
Figure 4-7 CPU time breakdown for 20K byte cases
The key findings were the same as those for the 5K COMMAREA:
- The MQ Channel value is the time spent in the MQ Channel Initiator to manage connections.
- In both the local and remote MQ DPL Bridge cases, Connector time refers to MQ Master, which manages the queues.
Figure 4-8 is a comparison of CICS TG performance with varying COMMAREA sizes.
Figure 4-8 CICS TG results with varying COMMAREA sizes
Tier 2 is the activity in the WebSphere system and Tier 3 is the activity in the EIS. For the local cases, all EIS activity occurs in Tier 2.
The key findings were:
- For small and medium-sized COMMAREAs, local CICS TG is better than remote.
- For large, complex COMMAREAs, the performance of local and remote connections is comparable.
Figure 4-9 shows the components of the CPU time.
Figure 4-9 CPU time breakdown for CICS TG with varying COMMAREA sizes
Servant region CPU time has been adjusted as explained in 4.4.2, “Adjustment” on page 59.
The key findings were:
- As the COMMAREA size increases, most of the additional cost is in the WebSphere application.
- The EIS cost is fairly consistent.
- The MQ Channel value is the time spent in the MQ Channel Initiator to manage connections.
- In both the local and remote MQ DPL Bridge cases, Connector time refers to MQ Master, which manages the queues.
Figure 4-10 is a comparison of CICS SOAP performance with varying COMMAREA sizes.
Figure 4-10 CICS SOAP results with varying COMMAREA sizes
Tier 2 is the activity in the WebSphere system and Tier 3 is the activity in the EIS system. For the local cases, all EIS activity occurs in Tier 2.
The key findings were:
- CPU costs for SOAP include XML parsing costs, which are much lower for a less complex COMMAREA structure.
- The local connector is consistently more efficient than the remote connector.
Figure 4-11 shows the components of the CPU time.
Figure 4-11 CPU time breakdown for CICS SOAP with varying COMMAREA sizes
Servant region CPU time and EIS CPU time in the SOAP tests have been adjusted as explained in 4.4.2, “Adjustment” on page 59.
The key findings were:
- The EIS CPU usage goes up considerably (compared to the CICS TG case) because the parsing of the SOAP message generated by the WebSphere application must be done in CICS.
- The WebSphere enclaves CPU usage increases significantly as the size and complexity of the COMMAREA increase. The WebSphere application has to generate the SOAP request, then parse the response when it is returned from CICS.
Figure 4-12 is a comparison of CICS MQ DPL Bridge performance with varying COMMAREA sizes.
Figure 4-12 CICS MQ DPL Bridge with varying COMMAREA sizes
Tier 2 is the activity in the WebSphere system and Tier 3 is the activity in the EIS. For the local cases, all EIS activity occurs in Tier 2.
The key findings were:
- Cost per byte with CICS MQ DPL Bridge is very low relative to other CICS connectors.
- The local connector is consistently more efficient than the remote connector.
Figure 4-13 shows the components of the CPU time.
Figure 4-13 CPU time breakdown for CICS MQ DPL Bridge with varying COMMAREA sizes
Servant region CPU time has been adjusted as explained in 4.4.2, “Adjustment” on page 59.
The key findings were:
- As the COMMAREA size increases, most of the increase in the CPU costs is in the WebSphere application. However, this increase is much smaller than with other connectors.
- The MQ channel and other categories are a significant part of the remote MQ DPL Bridge costs.
- The MQ Channel value is the time spent in the MQ Channel Initiator to manage connections.
- In both the local and remote MQ DPL Bridge cases, Connector time refers to MQ Master, which manages the queues.
Figure 4-14 shows the CICS results (including SOAP) based on COMMAREA size.
Figure 4-14 CICS results based on COMMAREA size (with SOAP)
The following observations about the chart are noteworthy:
- This is not a null-truncated COMMAREA, so CICS TG does not show the value of its null-stripping optimization.
- The per-byte cost for MQ DPL Bridge is very low relative to the per-byte cost of any of the other connectors.
- The SOAP results show the high cost of a complex COMMAREA; in the simple case, we obtained better results by reducing the complexity of the COMMAREA. This demonstrates a trend: the simpler the COMMAREA, the better the SOAP results.
- The simple SOAP case should not be directly compared to any of the other connectors, because a similar change to the COMMAREA structure might benefit the other connectors as well.
To show the complexity of the COMMAREA, the numbers of data elements were as follows:
- 500 bytes: 36 + (8 x 4) = 68
- 5 KB: 36 + (316 x 4) = 1300
- 5 KB simple: 36 + (31 x 4) = 160
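The element counts above follow a fixed-plus-repeating pattern: 36 header elements plus four fields per repeating group. A small arithmetic check of the figures (a sketch, not code from the paper):

```python
def element_count(groups, header=36, fields_per_group=4):
    """Total data elements: a fixed header plus four fields per repeating group."""
    return header + groups * fields_per_group

# Verify the three COMMAREA cases quoted in the text.
assert element_count(8) == 68      # 500 byte COMMAREA
assert element_count(316) == 1300  # 5 KB complex COMMAREA
assert element_count(31) == 160    # 5 KB simple COMMAREA
print("all element counts check out")
```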
Figure 4-15 shows the CICS results based on COMMAREA without SOAP.
Figure 4-15 CICS results based on COMMAREA size (without SOAP)
The initial cost of CICS MQ DPL Bridge is higher than that of CICS TG, but the cost per byte is so low that, for larger COMMAREAs, CICS MQ DPL Bridge is a good performer.
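The fixed-versus-per-byte trade-off can be made concrete from the local CICS MQ DPL Bridge measurements reported earlier (9.774 ms at 0.5 KB, 11.430 ms at 5 KB, 16.045 ms at 20 KB actual CPU ms/transaction). The least-squares fit below is a sketch over these three points only, not a model from the paper:

```python
# Local CICS MQ DPL Bridge: (COMMAREA size in KB, actual CPU ms/transaction).
points = [(0.5, 9.774), (5.0, 11.430), (20.0, 16.045)]

n = len(points)
sx = sum(x for x, _ in points)
sy = sum(y for _, y in points)
sxx = sum(x * x for x, _ in points)
sxy = sum(x * y for x, y in points)

# Ordinary least-squares line: slope is the per-KB cost in ms,
# intercept is the fixed per-transaction cost in ms.
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
print(f"~{intercept:.2f} ms fixed + ~{slope:.3f} ms per KB of COMMAREA")
```

On these three points the fixed cost dominates, which matches the finding that the connector amortizes well as the COMMAREA grows.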
4.7.2 IMS comparison charts
Figure 4-16 shows a comparison between IMS Connect local and remote cases with varying COMMAREA sizes. We also did one measurement with a local IMS MQ DPL Bridge for the 5 KB COMMAREA size. Tier 2 is the activity in the WebSphere system and Tier 3 is the activity in the IMS system. For the local cases, all IMS activity occurs in Tier 2.
Figure 4-16 IMS results with varying COMMAREA sizes
The key findings were:
- IMS Connect performs much better than IMS MQ DPL Bridge.
- The CPU cost is higher for remote than for local access.
- The CPU cost delta between local and remote seems to grow as the COMMAREA becomes larger and more complex.
Figure 4-17 shows the components of the CPU time.
Figure 4-17 IMS cost breakdown with varying message sizes
The key findings were:
- Almost all of the change in CPU costs occurs under the enclaves. The EIS cost is fairly consistent and includes the IMS Control Region, DL/I, and the dependent regions.
- The connector is the IMS Connect address space, but does not include IMS Connector for Java, which is charged to enclaves.
- In the local IMS MQ DPL Bridge case, the other category is considerably higher than in the IMS Connect local and remote cases.
- The MQ Channel value is the time spent in the MQ Channel Initiator to manage connections.
- In the MQ DPL Bridge case, Connector time refers to MQ Master, which manages the queues.
Figure 4-18 shows the results based on the IMS message size.
Figure 4-18 IMS results based on message size
The key findings were:
- The overall cost of IMS MQ DPL Bridge is much higher than that of IMS Connect.
- The local connector performs better than the remote connector.
- The benefit of using a local connector is greater with a large message size.
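The growing local-versus-remote penalty can be read straight from the Table 4-38 figures; this sketch uses the actual CPU ms/transaction values for IMS Connect reported there:

```python
# IMS Connect actual CPU ms/transaction from Table 4-38, keyed by message size in KB.
local  = {0.5: 6.735, 5.0: 13.219, 20.0: 40.033}
remote = {0.5: 7.263, 5.0: 14.371, 20.0: 52.457}

# The remote penalty grows with message size, both in absolute milliseconds
# and as a ratio of the local cost.
for size in (0.5, 5.0, 20.0):
    delta = remote[size] - local[size]
    ratio = remote[size] / local[size]
    print(f"{size:>4} KB: remote costs {delta:.3f} ms more ({ratio:.2f}x local)")
```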
Abbreviations and acronyms
AAT Application Assembly Tool
ACEE Accessor Environment Element
AIX® Advanced Interactive Executive (IBM UNIX)
AOR Application Owning Region
APAR Authorized Program Analysis Report
APF Authorized Program Facility
API Application Programming Interface
APPC Advanced Program to Program Communication
ARM Automatic Restart Manager
ASCII American Standard Code for Information Interchange
AWT Abstract Windowing Toolkit (Java)
BPC Business Process Container
BSC Binary Synchronous Communication
BSF Backspace File / Bean Scripting Framework
CCF Common Connector Framework
CCI Common Client Interface
CF Coupling Facility
CICS Customer Information Control System
CMP Container Managed Persistence
CP Control Program
CPU Central Processing Unit
CICS TG CICS Transaction Gateway
COMMAREA communications area
CTRACE Component Trace services (MVS)
DASD Direct Access Storage Device
DB Database
DD Data Definition
DDF Distributed Data Facility
DDL Data Definition Language
DLL Dynamic Link Library
DN Distinguished Name
DNS Domain Name System
DRA Database Resource Adapter
DVIPA Dynamic Virtual IP Address
EAB Enterprise Access Builder
EAR Enterprise Archive
EBCDIC Extended Binary Coded Decimal Interchange Code
EIS Enterprise Information System
EJB Enterprise JavaBean
EJBQL Enterprise JavaBean Query Language
EUI End User Interface
FSP Fault Summary Page
FTP File Transfer Protocol
GC garbage collection
GIF Graphic Interchange Format
GUI Graphical User Interface
GWAPI Go Webserver Application Programming Interface
HFS Hierarchical File System
HLQ High Level Qualifier
HTML Hypertext Markup Language
HTTP Hypertext Transport Protocol
HTTPS Secure Hypertext Transport Protocol
IDE Integrated Development Environment
IE Integration Edition
IJP Internal JMS Provider
IMS Information Management System
IOR Interoperable Object Reference
IP Internet Protocol
ISHELL ISPF Shell
ISPF Interactive System Productivity Facility
IT Information Technology
J2CA J2EE Connector Architecture
JAAS Java Authentication and Authorization Services
JACL Java Command Language
JAR Java Archive
JCA Java Cryptography Architecture
JCL Job Control Language
JDBC Java Database Connectivity
JMS Java Message Service
JMX Java Management Extensions
JNDI Java Naming and Directory Interface
JSP JavaServer Page
JVM Java Virtual Machine
LDAP Lightweight Directory Access Protocol
LPAR logical partition
MDB message driven bean
MQ message queue
MVS Multiple Virtual Storage
NFS Network File System
ODBC Open Database Connectivity
OMVS Open MVS
OS operating system
PME Programming Model Extensions
PSP Preventive Service Planning
PTF Program Temporary Fix
RACF Resource Access Control Facility
RAR Resource Adapter Archive
RDB Relational Database
REXX Restructured Extended Executor Language
RMI Remote Method Invocation
RMIC Remote Method Invocation Compiler
RRS Resource Recovery Services
SAF System Authorization Facility
SCM System Configuration Management
SDK Software Development Kit
SDSF System Display and Search Facility
SMAPI Systems Management Applications Programming Interface
SMB Server Message Block
SMEUI Systems Management End User Interface
SMP/E System Modification Program/Extended
SNA Systems Network Architecture
SOAP Simple Object Access Protocol
SQLID Structured Query Language Identifier
SQLJ Structured Query Language For Java
SSID Subsystem Identification
SSL Secure Sockets Layer
TCB Task Control Block
TCP/IP Transmission Control Protocol/Internet Protocol
TSO Time Sharing Option
UDB Universal Database
UID User Identifier
UNIX AT&T Operating System For Workstations (IBM=AIX)
URI Uniform Resource Identifier
URL Uniform Resource Locator
USS UNIX System Services
VI Visual Interface - Visual Screen-based Editor (AIX)
VIPA Virtual IP Address
VM Virtual Machine
WAR Web Application Archive
WLM Workload Manager
WPC WebSphere Process Choreographer
WSDL Web Services Description Language
WSIF Web Services Invocation Framework
XA Extended Architecture
XMI XML Metadata Interchange
XML Extensible Markup Language
XSL Extensible Style Language
XSLT Extensible Style Language Transformations
1PC One-phase Commit
2PC Two-phase Commit
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this Redpaper.
IBM Redbooks
For information about ordering these publications, see "How to get IBM Redbooks" on page 114. Note that some of the documents referenced here may be available in softcopy only.
- WebSphere for z/OS Connectivity Handbook, SG24-7064-01
- WebSphere for z/OS Connectivity Architectural Choices, SG24-6365
- Threadsafe considerations for CICS, SG24-5631
Other publications
These publications are also relevant as further information sources:
- z/OS V1R6.0 RMF Report Analysis, SC33-7991-09
- CICS TS V3.1 Web Services Guide, SC34-6458-02
Online resources
These Web sites and URLs are also relevant as further information sources:
- CICS Transaction Gateway homepage
http://www-306.ibm.com/software/htp/cics/ctg/
- IMS Connect homepage
http://www-306.ibm.com/software/data/ims/connect/
- IMS Connector for Java homepage
http://www-306.ibm.com/software/data/db2imstools/imstools/imsjavcon.html
- CICS TS 3.1 CICS Web Services InfoCenter pages
http://publib.boulder.ibm.com/infocenter/cicsts31/topic/com.ibm.cics.ts.doc/dfhws/topics/dfhws_startHere.htm
http://publib.boulder.ibm.com/infocenter/cicsts31/topic/com.ibm.cics.ts.doc/pdf/dfhwsb00.pdf
- SOAP for CICS home page
http://www-306.ibm.com/software/htp/cics/soap/
How to get IBM Redbooks
You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft publications and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at this Web site:
ibm.com/redbooks
Help from IBM
IBM Support and downloads
ibm.com/support
IBM Global Services
ibm.com/services
Index
A
amount and type of data for communication 6
Application Server 14, 16–17
availability 5
average end-to-end response time 57

B
back-end interface 28, 35
  direct access 28
back-end logic 28
buffers 23

C
CICS 17
  applications and data stores 29
  ECI connector 37
  ECI J2C resource adapter 28
CICS comparison chart 91
CICS MQ bridge 28
CICS MQ DPL Bridge
  cost/byte 6
  performance 101
  result 6, 93
  test case 62
CICS region 15, 54
CICS SOAP
  performance 99
  result 99
CICS TG 10, 17, 28, 49, 54, 58–59, 61–62
  case 31, 69, 100
  CPU time breakdown 98
  performance 97
  Redbooks 42
  result 97
CICS Transaction Gateway (See CICS TG)
CICS transaction 16, 28, 42–45, 54, 58
  report class 54
CICS Transaction Gateway 6, 10, 12, 15, 17
  Java client 37
collocation/separation requirements 5
COMMAREA 29, 33, 37, 39, 49, 59–60, 63, 93
  WebSphere MQ DPL Bridge 91
COMMAREA content 43
COMMAREA increase 93, 95
COMMAREA size 38, 69, 93, 98, 102
complex COMMAREA 31, 103
cost/byte 6, 101
CPU millisecond 58
CPU ms/tran 58, 61, 83
CPU time 5, 60, 92–94, 96

D
DB2 29
demand page 57

E
EIS activity 91, 93, 95, 97, 99
EIS cost 98, 106
EIS system 91, 93–95
EIS transaction rate 63–65, 67, 69
Enterprise Application Repository (EAR) 38
enterprise information system 10
ESS DASD 10, 12

H
HTTP client 6

I
IMS 9–13, 54, 83, 105–106
  applications and data stores 29
IMS back-end transaction 23
IMS connector 37
IMS environment 23
IMS J2C resource adapter 28
IMS MQ Bridge 83, 90
IMS version 24

J
J2EE client 6
J2EE Web module 28
Java class 43
Java Naming and Directory Interface (JNDI) 40
JDBC connection 28
JSPs 27, 36, 39

K
Keep-alive 69
key findings 91–95, 97–102, 105–106

L
local case 17, 91, 93, 95, 97, 99
local CICS
  TG 97
  TG test case 63–64
local IMS 106
LPAR 53, 56, 58

M
MDB case 38, 40
MDB listener 38
message-driven bean (MDB) 38
model-view-controller (MVC) 35
MODS E15 52
MQ Channel 54
  report class 54
ms 55–56, 60

O
overall CPU utilization 48, 50, 56, 58

P
PAGE-IN Rate 54–56
parse XML 70–73
performance (response time, CPU/memory cost) 5
preliminary test 63–66
product maturity 5
prompt answer 5

R
Redbooks Web site 114
  Contact us xiii
remote case 6, 14, 54, 91–93, 95
report class 54–57
  application percent 58
Resource Access Control Facility (RACF) 14
resource adapter 17
RMF Monitor 48, 50, 52–53
  I report 50, 52
  III 10, 48, 50
RMF report 69

S
scalable software architecture 6
security 4
  JAAS 40
servant region 15, 17, 49, 60
  worker threads 17
Service Class
  CICSW 16
  WAS48 15, 52
servlet 36, 39, 42–43
session EJB 38, 42–44
SESSION-PROPERTIES Protocol 17, 21
size COMMAREA 95, 97, 105
skills availability 5
SMFDATA.RMFRECS 52
SOAP 17
standards compliance, interoperability 4
stateful session EJB 42–45
  remote interface 42, 44
superclass 42–44
synchronous/asynchronous response requirements 5
sysplex configuration 12

T
TCP/IP 17
test case 10, 12, 25, 49–50
  detailed metrics 61, 83
  maximum throughput 10
test configuration 63–66, 68, 70–72
test result 63–66, 68, 70–72
test scenario 10
Tier 2 91, 93, 95, 97, 99
time zone 5
Trader
  Web front-end, back-end interface architecture and implementation 35
Trader application 27–29, 33–41, 49–50
  data stores 29
  dependencies 40
  following interactions 49
  IMS and CICS applications and data stores 29
  logon page 33
  packaging 38
  Web front-end user interface 33
  Web module 33
TRADER.CICS.REPLYQ 38
TRADER.IMS.REPLYQ 38
TRADER.PROCESSQ 38
TraderCICS 28
TraderDB 28
TraderIMS 28
TraderMQ 28
TraderSuperServlet 36, 39, 42, 44
transaction rate 57–58, 61, 69

V
VSAM file 29

W
Web Alias transaction (CWBA) 70
Web Attach transaction (CWXN) 70
Web front-end, back-end interface architecture and implementation 35
WebSphere Admin Console 60
WebSphere Application Server
  different connectivity options 6
  request broker 16
WebSphere MQ 5, 38, 41
  connection factory 38
  DPL Bridge 13, 17, 21, 28, 40
  JMS provider connection factory 40
WebSphere MQ JMS Provider 28
WebSphere MQ/CICS DPL Bridge 9, 21
WebSphere Transaction
  Rate 57–58, 61
  Time 57
WebSphere transaction 57–58
  one-to-one correlation 57
WLM 14–16
Workload profile 60, 63–65

X
XML conversion 31, 92
XML file 51
INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE
IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.
For more information: ibm.com/redbooks
The objective of this IBM Redpaper is to help you understand the performance implications of the different connectivity options from WebSphere Application Server for IBM z/OS to CICS or IMS.
This paper is intended as a companion to WebSphere for z/OS to CICS/IMS Connectivity Architectural Choices, SG24-6365, which describes architectural choices and the different attributes of a connection, such as availability, security, transactional capability, and performance. That IBM Redbook, however, provides little detail about performance.
To provide those details, we ran tests with CICS Transaction Gateway, SOAP for CICS, and CICS MQ DPL Bridge. We also ran tests with IMS Connect and with IMS MQ DPL Bridge. We selected 500-byte, 5 KB, and 20 KB communication area (COMMAREA) sizes with very complex records to simulate complex customer scenarios.
We share these results with you in a series of tables and charts that can help you evaluate your real-life application and decide which architectural solution might be best for you.
Back cover