Manish Bhatnagar, Prof. Jayant Shekhar
International Journal of Innovations & Advancement in Computer Science (IJIACS)
ISSN 2347 – 8616, Volume 4, Special Issue, May 2015
Empowering Business Innovation by Optimization – An Opportunity to Maximize Availability, Save Time and Cost

Manish Bhatnagar¹, Prof. Jayant Shekhar²
¹Research Scholar, ²Director
Subharti Institute of Technology and Engineering
Swami Vivekanand Subharti University, Meerut, India
Abstract—There is a growing need for the legendary mainframe: mission-critical processing power, reduced energy use and a low TCO. Batch optimization remains an important concern for many companies today, whether they are merging workloads, supporting growth, reducing cost or extending the online window for customers. By continually adapting to trends and evolving IT with the mainframe, we are driving new approaches to handle the increasing transaction load generated by growing IT dependency around the globe. To meet this demand, IT needs to work tirelessly, reinvent its infrastructure suite and unleash the power of its existing technology to make the extraordinary possible. Today's IT-enabled world needs 100% system availability, supported by a comprehensive, multi-layered strategy for reliability and serviceability, optimized for real-world, enterprise-class transaction and batch processing. This paper explains the importance of maximizing operational efficiency and presents different techniques for reviewing the operation of your current production batch to determine whether it is running efficiently.
Keywords—Mainframe, Batch Optimization,
Reliable, Availability, Secure, Cost, Schedule,
Business.
I. INTRODUCTION
This paper describes a general approach that can
be used to optimize the batch window in a z/OS
environment. This paper outlines a structured
methodology using anti-patterns and tools that
can be followed to increase batch productivity.
Ultimately, businesses need to innovate with unmatched scalability to support millions of transactions per day, and to gain total confidence in services through the highest security and availability, tightly integrated with core business processes, applications and data. Significant workloads still run on mainframes, and hence it becomes important to have an optimized batch schedule, which promotes cost reduction in terms of CPU utilization and the resources needed to support and monitor batch processing. It simultaneously reduces other related costs as well: because recurring license costs are associated with usage, optimizing the mainframe batch workload also drives them down.
The biggest companies in the world, whose visionary endeavors make our world smarter and more connected, rely on the IBM mainframe. In this competitive, fast-changing world, IT leaders are busy inventing new ways to reduce their mainframe cost structure. Batch optimization is a universal approach to analyzing the batch contribution to overall system performance. This paper discusses the different ways in which we can approach batch tuning or optimization.
The ability to constantly evolve IT-enabled business rapidly and reliably is a core challenge for today's software engineering community. The mainframe continues to hold the title of the most robust, trustworthy, secure and available system ever developed. That is why 92 of the top 100 banks, 23 of the top 25 retailers in the US, and 10 out of 10 of the top insurers trust their business to run on the IBM mainframe. Especially in large-scale enterprise computing, constant market pressure, along with frequent mergers
and acquisitions in a growing world, forces enterprises to constantly optimize their internal organization. Unfortunately, this internal organization is essentially tied to the architecture of the IT portfolio; when that portfolio resists evolution, it becomes an insurmountable obstacle.
In the current world of 24x7 requirements for online systems used across multiple time zones of the globe, data centers continue to process a considerable workload that runs on the mainframe as batch jobs for a large number of clients. Batch processing is one of the instruments to save time, reduce cost and implement lean practices for your clients. By focusing on the business-critical functional aspects with the available batch processing automation and optimization technology, you can boost the return on your mainframe investment; you will also align IT more closely with the business, and that is the key to both IT and business success.
Mitigating business risk in a digitally enabled world is paramount: as all business operations become dependent on IT, mobile devices pose unprecedented security challenges. Globally used services such as VISA, ATMs and online trading have created a revolution in the IT world, and now there is no time for downtime; companies cannot tolerate the average cost of system failures or security breaches. Today, clients need unmatched availability for continuous operations. Business demands predictive IT analytics to spot potential failures before they occur, and the most comprehensive suite of disaster recovery solutions.
The hardware's reliability, availability and serviceability (RAS), along with the operating system's security and its data integrity, are unmatched by any other technology. RAS is one of the most important qualities of a system or infrastructure, as it covers the many aspects of a computer and its applications that reveal their capacity to remain in service at all times. In fact, we can characterize a system in seconds by knowing its RAS level: the higher an infrastructure's RAS level, the more it may be trusted. We can then talk about a 24/7 service, meaning no downtime is accepted, and we expect IT infrastructures with RAS characteristics to have full uptime. These features help a system stay fully operational for a very long period (months and even years for mainframes) without a reboot or crash.
II. RELATIVE PERFORMANCE ANALYSIS AND SCALABILITY
Different research communities have addressed different aspects of the problem, bringing to bear a variety of research traditions, problem perspectives, and analytical techniques [1]. The extent of this restructuring process varies among different areas of the financial services industry [7]. Several truncated branch-and-bound techniques, priority-rule methods, and schedule-improvement procedures such as tabu search and genetic algorithms have already been suggested for resource-constrained project scheduling [10].
A. Statistics analysis: You can obtain statistics with log analysis and provide reports with the Tivoli Dynamic Workload Console.
B. Analysis of the track log: You can create a report of track log events. The following options are available when creating a report: audit, debug, or both.
C. Audit created through batch job submission: When creating the report, the only required value is the type of input for the report. If you do not provide data for the remaining fields, all the records in the input files are selected. The output of the report is written to the file specified during installation.
D. Reporting: The standard and user-defined reports on the Tivoli Dynamic Workload Console can:
◦ Display product plan details
◦ Track and tune workload capacity
◦ Control the timely workload execution according to goals and SLAs.
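The statistics and track-log reports above are produced by Tivoli tooling; purely as an illustration, a platform-neutral sketch of deriving simple run/abend statistics from a batch log might look like the following (the one-line-per-run log format and field layout are assumptions, not the Tivoli format):

```python
from collections import Counter

def summarize_log(lines):
    """Summarize a batch track log: count runs and abends per job.

    Each line is assumed (for illustration only) to be 'JOBNAME STATUS',
    where STATUS is 'OK' or 'ABEND'.
    """
    runs, abends = Counter(), Counter()
    for line in lines:
        job, status = line.split()
        runs[job] += 1
        if status == "ABEND":
            abends[job] += 1
    return {job: (runs[job], abends[job]) for job in runs}

log = ["PABD010 OK", "PABD010 ABEND", "PABM090 OK"]
print(summarize_log(log))  # {'PABD010': (2, 1), 'PABM090': (1, 0)}
```

Such a summary is the raw input for the audit and tuning reports described above.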
III. RELATED PREVIOUS WORK
This paper discusses the basic practices and processes that can be used to optimize a batch workload and reduce burned MIPS. Batch processing is used in various mission-critical business environments, spanning from banking
systems that process interest calculations and other accounting-related tasks, to retail and supply chain environments where inventory processing and re-order values need to be updated, as well as business intelligence environments where the overnight batch stream replicates key business data to create a production copy for use in data marts for business analysis. In the past, job scheduling was primarily suggested for supercomputers, real-time systems, and parallel computers [4]. In security-aware job scheduling, the scheduling process becomes much more challenging [5], [6]. There have been several recent advancements in tackling the replication-based job scheduling problem; Bansal et al. [3] suggested minimizing the turnaround time of a parallel application using replication.
A. System Configuration
As a first step, look at a complete, holistic view of the system's health by examining parameters such as CPU usage and paging/swapping. System capacity trends should be analyzed; based on spare capacity availability, the provisioning of initiators can also improve performance. Analysis of CPU usage under batch contention for a batch workload can provide pointers to spare system capacity. Studying batch workload peaks will further help determine what actions to take, such as shifting a specific batch workload to a different time slot to reduce its contribution to the overall MSU peak.
B. Analysis of Top CPU-Consuming Applications
We should primarily focus on batch throughput, scheduling, timing and resource consumption, and on improving the batch window to reduce costs. Research on mainframe-based transaction systems mainly covers system requirements, running environments, system structure, basic design concepts, functionality and system security precautions [13].
C. Batch Scheduling
The management of a batch window, whether in production or test environments, has an impact on processing delays and on the size of the batch window. Batch scheduling is typically done using scheduling tools such as IBM TWS/OPC, CA7, ZEKE, Control-M or ASG products.
A few best practices for batch scheduling:
1. Draw and analyze the internal and external dependency charts before actually setting them up in the tool.
2. Identify the critical path for a particular sequence of the job flow and try to minimize or avoid redundant dependencies.
3. Create a logical group of jobs in one stream based on application and/or business need.
4. Explore parallel running of additional streams.
5. Consider the allocation of special resources for a set of jobs accessing common tables/objects.
6. Every six months, carry out an assessment of obsolete jobs and periodically clean up the system for both production and test environments.
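Identifying the critical path (practice 2 above) amounts to finding the longest duration-weighted path through the job dependency DAG. A minimal sketch, with invented job names and durations:

```python
def critical_path(durations, deps):
    """Find the longest (critical) path through a batch dependency DAG.

    durations: {job: minutes}; deps: {job: [predecessor jobs]}.
    Returns (total_minutes, path) for the critical path.
    """
    memo = {}

    def finish(job):
        # Earliest finish time of `job` and the longest path ending at it.
        if job not in memo:
            best = (0, [])
            for pred in deps.get(job, []):
                t, path = finish(pred)
                if t > best[0]:
                    best = (t, path)
            memo[job] = (best[0] + durations[job], best[1] + [job])
        return memo[job]

    return max((finish(j) for j in durations), key=lambda x: x[0])

durations = {"A": 10, "B": 30, "C": 5, "D": 20}
deps = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
print(critical_path(durations, deps))  # (60, ['A', 'B', 'D'])
```

Jobs off this path (here, C) have slack and are the natural candidates for rescheduling without delaying the online point.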
D. Handling of frequently failing jobs
Frequently failing jobs not only consume a lot of resources in the form of CPU cycles, they also require human resources to restart the jobs and repeatedly investigate exceptions. The key to applying software engineering to the maintenance and enhancement of existing, running systems lies in applying reverse-engineering approaches [8], and in not only looking ahead but also optimizing the currently running system and building in the best availability and scalability. Here are a few steps that can help reduce these frequently failing jobs/applications; just by following these simple techniques we can drastically reduce CPU usage and the manpower spent resolving recurring issues:
1. Record all incidents/problems/calls/issues/changes, along with the number of occurrences over a defined period (a month or a quarter), together with the quick resolution and the permanent fix.
2. Track detailed information such as the actual root cause of the job failure, including the step in which the job failed and any workaround solutions.
3. Most important is to perform a cyclic review of these jobs with the applications' Subject Matter Experts (SMEs), owners and clients to develop a permanent solution, ensuring the same failure will not happen again. We can also anticipate problems that might occur in the near future (farsightedness), which will surely increase client trust and build the personal relationships that attract more business.
4. Analyze the dependencies of the frequently failing jobs and determine whether the culprit job can be moved out of the current stream, which reduces the risk of a job holding up a successor job.
5. Publish the CPU savings achieved through reduction of the failures; this will motivate the support team to develop the practice of identifying potential batch candidates for fixing on a regular basis.
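Step 1, recording incidents and occurrence counts, can be sketched as a simple frequency ranking that tells the review in step 3 where to start (the incident tuples and job names below are invented for illustration):

```python
from collections import Counter

def top_failing_jobs(incidents, n=3):
    """Rank jobs by failure count over a review period.

    incidents: iterable of (job_name, root_cause) tuples, one per failure.
    Returns the n most frequently failing jobs with their counts.
    """
    counts = Counter(job for job, _ in incidents)
    return counts.most_common(n)

incidents = [("PABD010", "file missing"), ("PABD010", "file missing"),
             ("XDCD010", "bad data"), ("PABD010", "space abend")]
print(top_failing_jobs(incidents, 2))  # [('PABD010', 3), ('XDCD010', 1)]
```

The jobs at the top of this list are the ones whose permanent fixes yield the largest CPU and manpower savings.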
E. Implementing Data in Memory
One important technique that increases speed is the Data-In-Memory concept, which reduces repeated I/O system/program calls to the same set of data. According to IBM, Data-In-Memory enables individual jobs/applications to run much faster by reducing I/O operations, which reduces the elapsed time for the job to execute on the server. This reduction is achieved by reading reusable or frequently demanded data from (or, in some cases, writing to) a buffer in processor storage and processing it directly from there when required, instead of reading from tape or DASD storage.
The Data-In-Memory technique benefits us in multiple ways and helps our processing run at a much faster rate:
1. Frequent read/write operations are drastically reduced, and therefore elapsed time also falls in proportion to the reduction in input and output calls.
2. The remaining input and output instructions/calls are issued to a less busy read/write module, so the remaining read/write operations execute with faster response.
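The effect of buffering repeatedly read data in memory can be illustrated with a toy cache (the "record" and its contents are invented; real Data-In-Memory uses processor storage buffers, not a Python decorator):

```python
import functools

IO_CALLS = 0

@functools.lru_cache(maxsize=None)
def read_reference_record(key):
    """Simulated expensive DASD/tape read; the result is kept in memory."""
    global IO_CALLS
    IO_CALLS += 1                      # count the real I/O operations
    return {"key": key, "rate": 0.05}  # stand-in for the stored record

for _ in range(1000):                  # a batch step re-reading one record
    read_reference_record("RATE-TABLE")
print(IO_CALLS)  # 1 -- the other 999 reads were served from the buffer
```

The elapsed-time saving is proportional to the I/O calls eliminated, exactly as point 1 above states.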
IV. OUR NEW APPROACH
IT used to have the luxury of batch windows: time that was dedicated to doing only batch processing. Today, however, batch windows have slammed shut. Today's clients expect to be able to do business at any time. That means organizations should be ready to provide 100% availability to process transactions in real time and around the clock, leaving only a small time frame for dedicated batch processing. So how does batch processing get completed? It must be intelligently interleaved with real-time transaction processing, and you have to ensure that batch processing does not interfere with transactions or degrade their availability or performance. This means you need very logical and efficient batch processing with the right tools to support dynamic workload scheduling. A new approach to system architecture is needed that reduces the complexity and costs of the IT business and increases flexibility to accommodate change through optimization [7].
Earlier, the market was not so competitive, and you could afford to "throw MIPS" at problem resolution and bug fixing. Today, mainframe resources are costly, and all IT organizations are under immense pressure to cut costs. The only way left is to optimize the use of mainframe technology: meeting the enormous and increasing demands placed on the mainframe by the business can give you the capacity you need while still keeping costs under control.
So how do you deliver batch services on time, control costs, and build a model that scales with the increasing demand of your competitive business? Accomplishing this soaring demand requires a holistic approach, one that lets you optimize your batch processing.
Mainframe batch optimization, maximizing availability while saving time and cost, can be achieved through the techniques listed below:
1. To manage from a service viewpoint, you need a single, unified view of your production environment. That view must show all the services you deliver to clients and all the associated applications with their horizontal and vertical service domains. It enables you to make decisions from a business perspective (also known as chain coordination). You can also determine the business impact of actions before you implement them, so you can take action in the production environment with confidence and clarity; this also helps you properly back up and restore the system if something unexpected happens. You also need to understand the service level commitments agreed with the business in terms of service availability and performance, or you may have to pay heavy penalties. In the process of establishing business SLAs, it is important to recognize and categorize your work; in collaboration with the business consultants, the required service level agreements and the relative business priority of each category should be properly documented and reviewed. Multiple IT workload automation solutions and tools are available that monitor and help you manage mainframe batch processing with respect to SLAs. An important feature of these solutions is that they provide early notification of potential problems that may result in an SLA breach. They guide you to remediate issues and even restart a batch job, so technical staff can prioritize batch applications and focus on resolving issues with high business impact.
Governance is also needed to ensure that the problem that precipitated the batch optimization never occurs again. It promises better hardware resource utilization, decreases application maintenance cost, keeps the batch schedule expediently arranged, allows advance planning so that the proposed resolution fits within the time window, produces fewer troubles with the batch, makes troubleshooting of any further problem easier, and makes it easier to meet and sustain your service level agreements. This means we can collaborate with our clients to tackle challenges never before thought possible.
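The early-warning feature described above boils down to projecting a running job's finish time from its history and comparing it with the SLA deadline. A minimal sketch (the 15-minute margin, start times and runtimes are invented examples, not values from any product):

```python
from datetime import datetime, timedelta

def sla_warning(started, avg_runtime_min, deadline, margin_min=15):
    """Warn if a running job's projected finish threatens its SLA deadline.

    Projected finish = start time + historical average runtime; the warning
    fires when it lands within margin_min minutes of (or past) the deadline.
    """
    projected = started + timedelta(minutes=avg_runtime_min)
    return projected + timedelta(minutes=margin_min) >= deadline

start = datetime(2015, 5, 1, 2, 0)
deadline = datetime(2015, 5, 1, 4, 0)
print(sla_warning(start, 90, deadline))   # False: projected 3:30 is safe
print(sla_warning(start, 110, deadline))  # True: 3:50 + margin breaches 4:00
```

Firing the warning before the breach is what gives technical staff time to prioritize and restart the job.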
2. Needless to say, every IT organization is under pressure to reduce cost, and that certainly puts the expensive mainframe batch on the radar. Eventually, it becomes essential to optimize the batch run cycle. The major cost falls under day-to-day MIPS (million instructions per second) consumption; IBM's variable workload license charges are also based on a four-hour rolling average of million service unit (MSU) consumption. To realize these goals, we first need to discover the best suitable time for scheduling critical batch: high-MIPS-consuming applications should be moved to a lower-usage interval, while taking precautions against disruption to downstream applications, batch processes, reports or feeds, and the related business-impacting services. We also need to guarantee that lower-priority jobs do not drain resources from higher-priority applications and degrade system performance. Between a high-quality capacity management solution that lets you visualize the usage and impact on crucial services, and an intelligent workload automation solution that helps you determine the best fit for batch scheduling, you can very effectively manage usage costs.
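Since the license charge follows the peak four-hour rolling average of MSUs, the quantity to minimize is easy to compute; a sketch (the hourly MSU figures are invented):

```python
def peak_rolling_average(msu_per_hour, window=4):
    """Peak 4-hour rolling average of MSU consumption.

    Workload license charges track the highest 4-hour rolling average,
    so moving heavy batch out of the peak window lowers the bill even
    if total MSUs are unchanged.
    """
    averages = [sum(msu_per_hour[i:i + window]) / window
                for i in range(len(msu_per_hour) - window + 1)]
    return max(averages)

hourly = [100, 120, 400, 420, 410, 150, 100, 90]  # MSUs per hour
print(peak_rolling_average(hourly))  # 345.0 -- hours 2-5 set the peak
```

Rescheduling one of the 400+ MSU jobs into the 90-150 MSU trough would lower this peak, and with it the rolling-average charge.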
It is essential to scrutinize these system components and evaluate whether they are fully tuned for crucial batch processing. Shintani, Inoue, Kamada and Shonai found that for online transaction processing (OLTP), no configuration increased performance by more than 4%; to make the superscalar architecture more effective for OLTP, it is important to reduce execution cycles per instruction (CPI) by reducing the overhead caused by sequential processes [14]. We can also allow or define each job to be executed between its arrival time and deadline by a single processor with variable speed, following different scheduling models [12], under assumptions about energy usage per unit time. Below is the list of system components that are very important during initialization/installation:
◦ IMS subsystems
◦ CICS subsystems
◦ DB2 subsystems
◦ Virtual storage access method (VSAM)
◦ Tape drives and libraries
◦ Workload Manager (WLM) policies
◦ Transmission mechanisms
◦ Job classes and initiators
◦ Buffer management
3. Most of the time, mainframe batch processing includes updating, extracting, reconciling and reporting on the huge business data marts stored in very large subsystems under DB2/IMS DB and in VSAM files on the mainframe. Usually this involves the execution of multiple batch jobs in sequence. Batch optimization solutions are available that facilitate this particular kind of overlap. Think of the time you could save when you consider the large number of batch jobs that can take advantage of this capability.
Another technique, known as parallel processing, saves additional time and maximizes availability. Here you execute multiple batch jobs concurrently on the mainframe's parallel processors; exploiting these time-saving methods unleashes the power of the mainframe, shrinks the time required to run batch, and increases the availability of the online window.
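The parallel-processing idea can be demonstrated platform-neutrally: independent jobs run concurrently so the stream takes roughly the time of the longest job, not the sum (the job names and sleep durations are stand-ins for real batch work):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def run_job(name, seconds):
    """Stand-in for one batch job; sleeping plays the role of elapsed time."""
    time.sleep(seconds)
    return name

jobs = [("EXTRACT", 0.2), ("RECONCILE", 0.2), ("REPORT", 0.2)]

start = time.perf_counter()
with ThreadPoolExecutor() as pool:       # independent jobs run in parallel
    results = list(pool.map(lambda j: run_job(*j), jobs))
elapsed = time.perf_counter() - start

print(results)        # ['EXTRACT', 'RECONCILE', 'REPORT']
print(elapsed < 0.5)  # roughly 0.2s total instead of ~0.6s run serially
```

This only helps when the jobs have no mutual dependencies, which is why the dependency-chart analysis in the scheduling practices above comes first.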
Select jobs to fine-tune: key applications may, on average, have thousands of related jobs, which makes it impractical to tune every single job. Therefore, you should select the ones that would be most beneficial to tune. We can use the following criteria to help select the jobs to tune:
a. Processor cycles: If your challenge is to reduce processor cycles, choose the jobs with the highest processor consumption. Perhaps restrict yourself to jobs that run at the peak of the rolling four-hour average of processor consumption.
b. Critical path: The longest-running critical path jobs should be on the list. Sometimes jobs near one of the critical paths are also valuable to tune, so consider those as well.
c. Similar jobs: Jobs that are similar to many other jobs might be worth tuning. Also, if you can easily apply the lessons learned from tuning them to other jobs, consider adding those to the list as well.
d. Sometimes jobs that do not initially meet these criteria emerge from tuning those that do. For example, a job that writes a big data set read by a selected job might also turn out to be important to tune. Therefore, the list of jobs to tune can change as you go through this process.
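Criterion (a), restricting attention to the heaviest CPU consumers inside the peak window, can be sketched as a simple filter-and-sort over job statistics (job names, CPU seconds and start hours below are invented):

```python
def jobs_to_tune(job_stats, peak_start, peak_end, top_n=2):
    """Pick the highest CPU consumers that start inside the peak window.

    job_stats: list of (name, cpu_seconds, start_hour) tuples.
    Only jobs starting within [peak_start, peak_end) are candidates.
    """
    in_peak = [j for j in job_stats if peak_start <= j[2] < peak_end]
    return [name for name, _, _ in
            sorted(in_peak, key=lambda j: j[1], reverse=True)[:top_n]]

stats = [("PABD010", 2711, 2), ("XDCD010", 12153, 3),
         ("SDTVY01", 120, 10), ("XFVW010", 7884, 4)]
print(jobs_to_tune(stats, 2, 6))  # ['XDCD010', 'XFVW010']
```

The short list this produces is the starting point; criteria (b) through (d) then add or remove jobs as tuning proceeds.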
4. We should determine the core functional batch; alternatively, there can be more than one critical path serving different business functions. This critical path should be given the utmost priority and the best suitable resource availability for successful completion. Once the critical path is finished, the online point can be flagged green, and the reporting and back-up applications can run simultaneously, giving the highest availability to customers.
5. Special attention should be given to identifying deadlock possibilities in the business functionality. Almost any situation in which processes can be granted exclusive access to some resources (e.g. dedicated I/O devices) has the potential for deadlock. Alternatively, we may consider preemptive and non-preemptive scheduling models as advanced scheduling problems addressed by constraint programming techniques [9]. Below are the four conditions required for deadlock, which should be kept in mind in the solution phase; deadlock can be prevented by negating one of these four necessary conditions:
a. Mutual exclusion condition
b. Hold and wait condition
c. No pre-emption condition
d. Circular wait condition
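A standard way to negate condition (d), circular wait, is to impose one global acquisition order on shared resources. A minimal sketch, with locks standing in for shared tables (the resource names are invented):

```python
import threading

table_a = threading.Lock()
table_b = threading.Lock()
LOCK_ORDER = [table_a, table_b]   # one global order for every job

def update_both(locks):
    """Acquire shared resources in the fixed global order.

    Because every job locks in the same order, no cycle of waits can
    form, which negates the circular-wait condition for deadlock.
    """
    ordered = sorted(locks, key=LOCK_ORDER.index)
    for lock in ordered:
        lock.acquire()
    try:
        return "updated"
    finally:
        for lock in reversed(ordered):
            lock.release()

# Two jobs that name the locks in opposite orders still cannot deadlock:
t1 = threading.Thread(target=update_both, args=([table_a, table_b],))
t2 = threading.Thread(target=update_both, args=([table_b, table_a],))
t1.start(); t2.start(); t1.join(); t2.join()
print("no deadlock")
```

Without the sorting step, the two threads could each hold one lock and wait forever for the other, which is exactly the circular-wait scenario.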
6. You must ensure that your JCL is completely error-free; this is a basic step that should be prioritized before scheduling jobs in the job scheduler. A JCL error interrupts batch processing, consumes critical MIPS, pushes out the batch completion time and the online point, and may take hours to track down and fix. Hence, it is important to verify that your JCL is error-free before executing it, to compress the batch window. Beyond typical JCL syntax abends, a JCL implementation may have syntax errors that are unique to the scheduling solution you are using. This dedicated syntax can be very powerful, but it still needs to be validated along with the standard JCL syntax.
JCL also has dependencies on the organization's data center structure and infrastructure environment. This environment is extremely dynamic, and changes can cause problems in JCL, so focus on doing more than just a one-time verification of your JCL. JCL verification/pre-check solutions are available that can perform all the types of verification described above.
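Purely to illustrate the idea of a pre-submission check (this is nothing like a full JCL validator, and the rules below are deliberately simplistic assumptions; real verification tools also check datasets, catalog entries and scheduler-specific syntax):

```python
def precheck_jcl(lines):
    """A tiny, illustrative pre-check over JCL-like text.

    Flags records that do not begin with '//' and jobs whose first
    statement is not a JOB card. Real verification products perform
    far deeper syntax and environment checks than this sketch.
    """
    errors = []
    if not lines or " JOB " not in lines[0]:
        errors.append((1, "first statement should be a JOB card"))
    for n, line in enumerate(lines, 1):
        if not line.startswith("//"):
            errors.append((n, "JCL statements must start with //"))
    return errors

jcl = ["//PABD010 JOB (ACCT),'DAILY'",
       "//STEP010 EXEC PGM=IEFBR14",
       "/STEP020 EXEC PGM=SORT"]
print(precheck_jcl(jcl))  # [(3, 'JCL statements must start with //')]
```

Catching such errors before submission is far cheaper than tracking down a mid-stream JES abend at 3 a.m.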
7. Explore parallel execution of additional streams, which exploits the powerful mainframe server and provides every prospect of optimizing mainframe batch; draw and analyze the internal and external dependency charts/graphs and try to diminish or avoid redundant dependencies. Before tuning any job, one should perform three kinds of analysis:
a. Find the longest-running steps or program logic and tune them individually.
b. Examine the inter-step dependencies and assess whether they can be eliminated, splitting the job into separate logical functions or goals wherever possible.
c. Scrutinize the inter-job dependencies and wait situations and strive to eliminate them, adjusting the batch schedule as appropriate.
d. If your goal is to reduce processor cycle consumption, the tuning of long-running steps is the only approach needed; splitting jobs or removing inter-job dependencies will not help.
8. Monthly or quarterly, there should be a technical and functional review to track detailed information such as the causes of job failures. After the temporary fix, the failure should be investigated with the 5 Whys (developed by Sakichi Toyoda), an iterative question-asking technique for exploring the cause-and-effect relationships underlying a particular problem. There should be a solution team with an explicit, dedicated role for the permanent resolution of batch errors. Sometimes the failing application/job does not reveal the exact reason for the failure, but a predecessor application holds the real cause, which should be deeply analyzed, together with the SMEs and the business, for the exact root cause; this helps promote a stable batch environment with zero abends. Also look into checkpointing and failure recovery methods, which demonstrate how to avoid the permanent loss of business function [4].
Job Name  | System | CPU      | Elapsed  | Remarks
PABD010   | DB2    | 00:45:11 | 00:01:11 | File not available, simple restart
PABM090   | DB2    | 00:10:19 | 00:12:19 | Duplicate data entry
XABD020   | IMS-DB | 01:34:22 | 00:04:22 | Checkpoint restart
XDCD010   | IMS-DB | 03:22:33 | 00:22:33 | Step100: PORG Stock needs to be tuned
SDTVY01   | DB2    | 23:44:23 | 00:02:00 | Needs re-indexing & re-organization
XFVW010   | IMS-DB | 02:11:24 | 00:11:24 | Parentage not set
CT010T01  | CICS   | 00:00:04 | 00:00:01 | Transaction abended
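The checkpoint/restart recovery mentioned above (and visible in the table's restart remarks) can be sketched platform-neutrally as follows. This is a toy illustration only: the JSON checkpoint file and record loop are assumptions, whereas real systems use facilities such as IMS checkpoint calls.

```python
import json, os, tempfile

CHECKPOINT = os.path.join(tempfile.gettempdir(), "batch_ckpt.json")
if os.path.exists(CHECKPOINT):
    os.remove(CHECKPOINT)            # start the demo from a clean state

def process(records, fail_after=None):
    """Process records with checkpointing so a restart resumes, not reruns.

    Every processed record advances the checkpoint, so after an abend the
    job restarts from the last committed position instead of the beginning,
    which is the essence of checkpoint/restart recovery.
    """
    done = 0
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            done = json.load(f)["done"]
    for i in range(done, len(records)):
        if fail_after is not None and i == fail_after:
            raise RuntimeError("simulated abend")
        with open(CHECKPOINT, "w") as f:
            json.dump({"done": i + 1}, f)
    return len(records) - done       # records processed in this run

records = list(range(100))
try:
    process(records, fail_after=60)  # first run abends after 60 records
except RuntimeError:
    pass
redone = process(records)            # restart picks up at the checkpoint
print(redone)  # 40 -- only the remaining records are reprocessed
```

Reprocessing 40 records instead of 100 is the CPU and elapsed-time saving that checkpoint/restart buys after every abend.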
9. Repeated efforts should be made at identifying batch optimization opportunities, as this can yield better ways to cut the cost of MIPS usage. Assuming a phase of batch-demanding work, you can retune the components specifically for batch. A few ways to fine-tune batch are mentioned below:
a. Altering the DB2 buffer pools to support sequential processing. Batch often has a more sequential data access pattern than other workloads.
b. Increasing the buffer pool size to take advantage of the lower demand for memory in the batch window. However, this might not always apply, because many installations keep other applications available all the way through the batch window.
c. Provisioning more processor capacity to trim down queuing, or provisioning more memory to prevent paging.
d. Altering Logical Partition (LPAR) weights to favor the LPARs where batch runs. This might be a manual process, or the Intelligent Resource Director (IRD) might shift weights between LPARs automatically.
e. Ensuring DB2 or other logging bandwidth is adequate for the batch peak, so that it does not conflict with other users' logging requirements, which might actually be higher than those of the scheduled batch.
f. Switching Workload Manager policies to favor batch scheduling. To achieve this, some installations have different WLM policies for overnight and daytime batch processing.
10. Review and customize reports per client requirements. There is always an opportunity to review, and later customize, report generation as per requirements. There can also be scope to convert daily reporting to weekly, monthly, quarterly or yearly. If, per business need, something is less important, it can be produced by specialized on-demand batches where and when required by the business; this saves a significant number of MIPS and also shortens the batch run cycle, which increases the online window. After the technical strategy is set and there is an understanding of the batch landscape, you can derive detailed optimization actions to implement cost-saving strategies as small projects.
11. In long-term progressive planning, the business should also think about decommissioning any unused applications; this frees up unused, costly infrastructure (servers or components), or CPU-hungry batch applications can be replaced with new logical interfaces providing the same kind of results. Sometimes we need to standardize the working platform and proficiently use the limited resources. We should also think about early-alarm modules against critical benchmarks; these give technical staff a better chance to avoid major failures or circumvent show-stoppers.
V. ANALYSIS METHODOLOGY
Our methodology, the systematic, theoretical and practical analysis of the methods applied to analyze and tune batch processing, is an abstract methodology that can be applied to batch processing independent of the technology platform. This platform-neutral approach can also be applied to batch applications that cross a number of operating environments, a characteristic of many installations nowadays. We can use steps specific to IBM Z servers to demonstrate how to implement the methodology. At a high level, this methodology consists of three phases, namely initialization, analysis and implementation, as described below:
1. Initialization
Initialization is a key segment of any venture, especially one as multifaceted as a batch window reduction mission. In this phase, you must establish a clear scope for the mission to maximize the likelihood of success. Several characteristics of this phase comprise:
a.) Investigating the problem to its root.
b.) Comprehending the business problem and its translation into a technical problem and functional relation.
c.) An excellent governance model.
d.) Establishing a formal process for making decisions and setting fundamental expectations against continuously evolving business needs.
Our objectives are to understand the business problems and how they translate into a technical and functional understanding; to set reasonable objectives and identify the benefits of the solution; to identify and define the measurement milestones; to collect the data and information specifically needed for the mainframe batch schedule; to define the roles and responsibilities of the participants; and, as a final point, to set up the optimization project itself: documenting tasks, defining required activities, and preparing the solution document, document of understanding, statement of work, the metrics to be used in later phases, the project tailoring guideline, the review checklist, and so on. Together these establish governance for the batch schedule project.
2. Analysis
The analysis phase defines a set of logical, focused procedures that will be implemented later. It consists of three steps:
A. Strategy and Planning
In this stage, you identify the system environment and any technical or functional constraints. Do not produce optimization measures that conflict with one another. The approach is composed of seven key themes:
- Ensuring the system environment is correctly configured, and verifying the configuration to confirm there are enough processor cycles.
- Implementing Data In Memory through smart use of buffer space.
- Optimizing input/output operation handling.
- Increasing parallel processing.
- Decreasing the impact of failures and handling show-stopper situations.
- Increasing operational effectiveness.
- Improving application output efficiency.
B. Understanding the critical batch landscape
Most batch environments are critical and extremely complex, typically comprising hundreds of thousands of applications, jobs, files, and database objects with a large number of input/output operations. Complexity is the biggest obstacle to handling batch issues on time; a number of tools and mainframe utilities are available to help control this intricacy. Moreover, with several systems and critical batch schedules connected through multiple interfaces, it can be very difficult to gain a good technical and functional understanding of the batch landscape.
In this stage it is important to determine what the business needs to optimize and what kind of benefits can be reaped from the implementation. The organization needs to fully understand its batch processing so that a review of the batch is possible. Some examples of components that must be verified and reviewed are listed below:
- Standard naming conventions should be enforced and promoted.
- The functionality and business purpose of batch jobs and applications must be well understood and documented.
- Application ownership and responsibilities must be clearly assigned.
- The access granted to every database object must be identified and grouped.
- The critical path of the batch and its end points must be identified as milestones; the end point can be termed the online point.
- Batch reliability must be assessed and scrutinized from time to time, and opportunities to make
the most of it must be planned with a proper approach.
- Job elapsed times and execution times, together with resource consumption details, must be tracked and fully understood, because they vary with the daily load of critical ongoing business transactions.
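The critical path and online point called out above can be derived from the dependency graph of planned job durations. The sketch below is a minimal, platform-neutral illustration with invented job names and durations; real values would come from the scheduler or job accounting data.

```python
JOBS = {  # job: (duration in minutes, predecessor jobs) -- invented values
    "EXTRACT": (30, []),
    "SORTIN":  (10, ["EXTRACT"]),
    "UPDATE":  (45, ["SORTIN"]),
    "REPORTS": (20, ["EXTRACT"]),
    "BACKUP":  (15, ["UPDATE", "REPORTS"]),  # end milestone / online point
}

def schedule(jobs):
    """Forward/backward pass over a topologically ordered job table."""
    order = list(jobs)            # insertion order is topological here
    earliest, latest = {}, {}
    for j in order:                                   # forward pass
        dur, preds = jobs[j]
        start = max((earliest[p][1] for p in preds), default=0)
        earliest[j] = (start, start + dur)
    makespan = max(end for _, end in earliest.values())
    for j in reversed(order):                         # backward pass
        dur, _ = jobs[j]
        succs = [s for s in jobs if j in jobs[s][1]]
        finish = min((latest[s][0] for s in succs), default=makespan)
        latest[j] = (finish - dur, finish)
    # slack: how late a job may start without delaying the online point
    slack = {j: latest[j][0] - earliest[j][0] for j in order}
    critical = [j for j in order if slack[j] == 0]
    return makespan, slack, critical

makespan, slack, critical = schedule(JOBS)
```

Jobs with zero slack form the critical path; only shortening those jobs moves the online point earlier.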
C. Deriving a comprehensive optimization implementation plan
To determine the job-level optimization measures to implement, scrutinize the jobs in detail and follow a similar approach for all other components. The following tasks are examples of optimization actions you can take:
1. Examine the times of the job steps, then review file access for the major steps.
2. Review and fine-tune the database system buffer pools.
3. Check for correct referential integrity.
4. Schedule re-indexing and keep run-statistics information up to date.
Also inspect and enhance the server processing speed.
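Task 1 above, examining job step times, can be sketched as follows. The step names and elapsed times are invented stand-ins for what would normally be read from the job log or SMF job/step accounting records.

```python
steps = [  # (step name, program, elapsed seconds) -- invented values
    ("STEP010", "IDCAMS",     12),
    ("STEP020", "SORT",      840),
    ("STEP030", "PAYPGM",   2150),
    ("STEP040", "IEBGENER",   95),
]

total = sum(t for _, _, t in steps)
ranked = sorted(steps, key=lambda s: s[2], reverse=True)

# keep adding the slowest steps until ~90% of elapsed time is covered,
# so tuning effort goes only to the steps that dominate the job
hot, running = [], 0
for name, _pgm, t in ranked:
    hot.append(name)
    running += t
    if running / total >= 0.90:
        break
```

Here the file-access review of the next task would focus on `hot`, the two steps that account for over 90% of the job's elapsed time.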
3. Implementation
At a high level, implementing a batch optimization project is analogous to any other project, so there is no need to assess the standard components here. However, some considerations are specific to batch optimization projects, including the following:
1. The size of a batch project can differ from other implementation projects. The scale of a batch optimization can be very large: with many changes to many components at once, it becomes a big enhancement rather than a simple business-as-usual change, and work such as realigning program modules and databases might be essential. In addition, if major changes are required in the current business functionality, and further changes are needed to synchronize recent projects, the resulting new functionality can have a business impact on the entire organization.
2. Normally there is a need for system testing, environment testing, and user acceptance testing, but some required changes can be difficult to test and verify with test data. Testing whether a change still permits a logical program, such as a batch component, to function appropriately is usually simple. However, testing and describing the impact of a change on the overall batch environment is much more complex, and it is very tedious to predict how a change will behave in the real-time environment. Before implementing a change in production, we should first implement it in a test environment that closely resembles production, analyze its impact over roughly a week of test batches, and only then promote it to production, with a fallback path and a proper backup strategy in place.
3. Implementing batch optimization projects may require various specialized skills, for example skilled programmers, database administrators, automation experts, system platform experts, workload schedulers, operations specialists, or application developers. To be precise, though, which skills a real project needs cannot be determined until the comprehensive tasks are identified and documented.
Here we evaluate a few z/OS-specific examples for each listed approach:
1. Make sure the system is appropriately configured
Make sure that the LPAR where the batch runs has enough memory and buffer space, and that the Workload Manager (WLM) setup is appropriate for the kind of batch we need to schedule at its forecasted frequency.
2. Implement Data In Memory
Implement sufficient buffering of DB2 index spaces and table spaces, and use Virtual Storage Access Method (VSAM) Local Shared Resources (LSR) buffering adequately and in line with business goals.
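One common way to judge whether a buffer pool is adequately sized is its hit ratio: the fraction of getpage requests satisfied without a read I/O. The counter values and the 90% threshold below are invented for illustration; real numbers come from the database's statistics reports, and definitions of the ratio vary slightly by product.

```python
def hit_ratio(getpages, pages_read):
    """Percentage of getpage requests satisfied without a read I/O."""
    return 100.0 * (getpages - pages_read) / getpages

bp0 = hit_ratio(getpages=1_000_000, pages_read=50_000)
needs_enlarging = bp0 < 90.0   # 90% is an illustrative threshold, not a rule
```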
3. Optimize input/output operations
Provide an adequate amount of disk I/O bandwidth and use sequential data striping appropriately.
4. Increase parallel processing
Break multistep jobs into separate independent jobs and use BatchPipes/MVS (Pipes) to allow those steps to run in parallel with each other.
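BatchPipes/MVS is z/OS-specific, but the underlying fan-out pattern, running steps concurrently once they are shown to be independent, can be sketched in any language. The step names below are hypothetical stand-ins for the now-independent jobs.

```python
from concurrent.futures import ThreadPoolExecutor

def run_step(name):
    # stand-in for submitting one of the now-independent jobs
    return f"{name} OK"

independent_steps = ["LOADCUST", "LOADACCT", "LOADTXN"]

# fan the independent steps out instead of running them serially;
# elapsed time approaches that of the longest step, not their sum
with ThreadPoolExecutor(max_workers=len(independent_steps)) as pool:
    results = list(pool.map(run_step, independent_steps))  # order preserved
```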
5. Shrink the impact of batch failures
Use an ICETOOL VERIFY step to scan a notoriously unreliable data set, and run the main processing step only if ICETOOL declares the data clean. A more strategic solution, however, is to find the origin of the data corruption and fix its root causes.
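The gating idea behind the ICETOOL VERIFY step can be sketched generically: scan the input first, and run the expensive main step only when the scan is clean. The fixed 10-byte numeric key layout below is an assumption made for illustration, not the paper's actual data format.

```python
def verify(records):
    """Return offsets of records whose 10-byte key field is not numeric."""
    return [i for i, rec in enumerate(records) if not rec[:10].isdigit()]

def main_step(records):
    # stand-in for the expensive main processing step
    return sum(int(rec[:10]) for rec in records)

def gated_run(records):
    bad = verify(records)
    if bad:
        # like a non-zero VERIFY return code causing the main step to be skipped
        raise ValueError(f"bad records at offsets {bad}; main step skipped")
    return main_step(records)
```

Failing fast in the cheap scan step avoids a long main step that would abend partway through and force a restart inside the batch window.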
6. Increase operational effectiveness
Using the functions of an available scheduler to automate the batch application provides ready support for these scheduling issues; scheduling tools such as CA 7, TWS, Zeke, and Control-M are available for this purpose.
7. Improve application efficiency
Use Application Performance Analyzer for z/OS
(APA) to tune the application program code.
VI. CONCLUSIONS
Mainframe batch optimization can be achieved in a number of ways, and this paper has described techniques to reduce cost and optimize mainframe batch processing. We have presented various options that can help optimize an existing mainframe batch environment and reduce the organization's MSU cost. Although the focus is on optimizing the batch stream, we should also consider options that may not look large or attractive at first sight but that deliver real value to the business; viewed over a period of five years, the return amounts to a substantial saving.
Success in optimizing a batch workload is achieved through closely controlled and consistent use of the adopted methods and processes. The key is to link the savings generated, in terms of CPU MIPS, to the business in terms of the potential cost savings that mainframe batch optimization produces over time. Milestone jobs that are critical to the business and must not miss their deadlines are identified, and the promotion algorithm uses the WLM integration technique to promote jobs on the critical path that are running late against schedule and might affect the online point.
If a business operation is critical to your business value and must complete by a set deadline, you must designate that operation as the target of a critical path. During daily plan processing, a critical path that includes the internal and external predecessors of the target operation is then calculated, and a specific dependency schedule is created. While the current plan is running in the scheduler, monitor the critical paths that are consuming their slack time, as these become more critical than the paths calculated during the planning phase. Proper allocation of resources to tasks enables the organization to achieve its planned objectives. Meeting these objectives supports the business's financial portfolio and helps empower IT innovation through optimization, a doorstep opportunity to maximize availability, save time and cost, and increase the organization's throughput, with greater customer satisfaction through 24x7x365 online availability of applications; this in itself means growth in business and in the confidence of the market and company stakeholders.