Introduction to Mainframe Hardware



1

© 2012 IBM Corporation

Esra Ufacık, Mainframe Client Technical Specialist, IBM Turk – [email protected]

Introduction to Mainframe Hardware

2

Agenda

WHO uses Mainframes? And Why?
WHY the Mainframe delivers
ARCHITECTURE & Hardware Platform
HIGHLIGHTS of z196
z196 Book Concept
z196 Processor Unit Design
z196 Memory
z196 Connectivity
z196 RAS
BACKUP Slides


3

Definition of Mainframe

The Mainframe is a computer system designed to continuously

run very large, mixed workloads at high levels of utilization,

meeting user-defined service level objectives.

A Mainframe is what businesses use to host their commercial

databases, transaction servers, and applications that require a

greater degree of security and availability than is commonly

found on smaller-scale machines.

4

What is a Mainframe?

A style of operation, applications, and operating system

facilities.

Computers that can support thousands of applications and

input/output devices to simultaneously serve thousands of

users.

A mainframe is the central data repository, or hub, in a

corporation's data processing center, linked to users through

less powerful devices such as workstations, terminals or

media devices.

Can run multiple, but isolated operating systems

concurrently.

Centralized control of resources.

Optimized for I/O in all business-related data processing

applications supporting high speed networking and terabytes

of disk storage.


5

WHO uses Mainframes?

And Why?

6

Sample of Customers Large and Small from Around the World

Sectors shown: BANKS, OTHER FINANCIAL INST., RETAILERS, HEALTHCARE, UTILITIES / ENERGY, GOVERNMENTS, TRANSPORTATION, HOSTING PROVIDERS, OTHERS

[Logo chart] Named examples include Bank of NZ, Bank of Montreal, Banco Itau, Bank of China, Central Bank of Russia, Industrial Bank of Korea, Philippines Savings Bank, Belarusian Railways, and the Russian Hydrometeorological Research Institute, plus customers in Germany, Slovenia, Brazil, Peru, Kazakhstan, India, Vietnam, the USA, and the gaming industry.


7

Mainframe is the platform of choice for the top 25 global banks

Based on asset size, 96 out of the top 100 worldwide banks use System

z.1

50 out of the top 50 worldwide banks use System z.

72 out of the top 75 worldwide banks use System z.

For the latest generation of Mainframe;

$1.5 billion in R&D investment for z196

5,000+ IBMers worked more than three years, 31 million hours, 24-

hour development process across 18 IBM labs worldwide.

More than 40 IBM Fellows and Distinguished Engineers led this effort.

The zEnterprise 196 is the world's fastest and most scalable enterprise

system. 2

1 Based on 'The Banker', System z install base and financial records

2 Based on 5.2GHz core processor speed

8

Workloads That Customers Run on Mainframe

What is a workload?
The relationship between a group of applications and/or systems that are related across several business functions to satisfy one or more business processes.

Banking: Core Banking; Wholesale Banking – Payments; Customer Care & Insight
Insurance: Internet Rate Quotes; Policy Sales & Management (e.g. Life, Annuity, Auto); Claims Processing
Retail: On-line Catalog; Supply Chain Management; Customer Analysis
Healthcare: Patient Care Systems; On-line Claims Submission & Payments
Public Sector: Electronic Tax Processing; Web-based Social Security; Fee Payments


9

Academic Initiative

1,067 Schools enrolled in the IBM System z Academic Initiative program, reaching students

in 67 countries (more than half outside the US).

Mainframe contests continue to draw high school, college, and university student

participation around the world with 43,825 students from 32 countries.

3,246 students from 40 countries have taken the IBM System z Mastery Test. This test

is offered to students globally in 2011 at no cost.

http://www-05.ibm.com/tr/industries/education/mainframe-index.html

IBM Academic Initiative System z resources (system access, course materials, education)

are available globally for universities and colleges.

Systemzjobs.com (Job board) is a no-fee service that connects students and experienced

professionals seeking System z job opportunities with IBM System z clients, partners, and

businesses.

10

WHY the Mainframe delivers


11

No other platform offers reliability, availability, scalability, resiliency, security, and management features comparable to Mainframe, including:

High levels of application and data availability

Solid security and privacy

Massive horizontal and vertical scalability

Optimized operating environment

Advanced virtualization capabilities

Intelligent workload management

Utilization of open and industry standards

Unmatched platform for heritage (legacy) applications and data

Specialty engines that integrate with existing assets

World-class support

Centralized control

Continuing compatibility

The Differentiation of Mainframe: Why the Mainframe Is Just Another Computer, but a Different Kind of Computer?

12

Mainframe Operating System Heritage

OS/360 -> OS/VS -> MVS -> MVS/SP -> MVS/XA -> MVS/ESA -> OS/390 -> z/OS. z/OS, IBM's premier zSeries operating system, is a highly secure, scalable, high-performance enterprise operating system on which to build and deploy traditional and Java-enabled applications, providing a comprehensive and diverse application execution environment.

DOS/360 -> DOS/VS -> VSE/SP -> VSE/ESA -> z/VSE z/VSE enables proven, robust, and cost-effective solutions. z/VSE provides sturdy batch and industrial strength on-line transaction processing (CICS) capabilities. z/VSE can fit comfortably into a legacy of thrifty, dependable z/VSE solutions.

ACP -> TPF-> z/TPF TPF is the platform driving the business-critical systems for many of IBM's largest and most sophisticated users of online transaction processing - airlines, railroads, hotels, financial services, government, credit card and banking industries.

CP/67 -> VM/370 -> VM/SP -> VM/XA -> VM/ESA -> z/VM z/VM provides a highly flexible test and production environment for enterprises deploying the latest e-business solutions. z/VM helps enterprises meet their growing demands for multi-user server solutions with support for a broad range of operating systems.


13

Mainframe Software

Customer Information Control System (CICS®) - a transaction processing environment that allows applications to share information and to satisfy customer requirements for information.

Information Management System Transaction Manager (IMS TM) – a transaction processing environment and an application server

Information Management System Database Server (IMS DB) - a database system that is capable of extremely high performance.

DB2 - a full, functional relational database that leverages all of the features of Mainframe and z/OS to provide high availability and high performance.

WebSphere Application Server (WAS) – an application server that runs on z/OS, and provides true application portability between Mainframe and distributed systems

14

1 Source: goMainframe.com Joe Clabby

Up to 100% server utilization compared to 10-20% distributed server utilization1

Shared everything infrastructure allows for maximum utilization of resources

– CPU, Memory, Network, Adapters, Cryptography, Devices

Lowers power consumption for each work unit

Customer savings

[Chart: underutilized distributed servers, typically running a single application per server, versus an up to 100% utilized Mainframe server running multiple applications]

HIGH SCALABILITY – High business growth

GREEN SAVINGS – Green strategy; running out of energy & space

Maximizing Utilization of Resources


15

The potential performance impact of the Linux server farm is isolated from the other Logical partitions (LPAR)

LPAR – Up to 60 Logical partitions

Mainframe

Deploy virtual servers in seconds

Highly granular resource sharing (<1%)

Add physical resources without taking system down, scale out to 1000s of virtual servers

Do more with less: More virtual servers per core, Share more physical resources across servers

Extensive virtual server life-cycle management

Hardware-enforced isolation

Distributed Platforms

Limited per-core virtual server scalability

Physical server sprawl is needed to scale

Operational complexity increases as virtual server images grow

VMWare, Xen, Virtual Iron focus only on x86, not cross platform hardware management.

PR/SM™ Hypervisor in firmware

z/VM® – 100s of virtual servers – Shared Memory

Hardware support: 10% of circuits are used for virtualization

Each LPAR is one Virtual Mainframe

VIRTUALIZATION

Flexibility to respond

Mainframe's Extreme Virtualization – Built into the architecture, not an "add on" feature

16

Chip Design Affects Virtualization Capabilities

[Diagram: "Replicated Server Chip Design" versus "Consolidated Server Chip Design" – chip real estate devoted to workloads, processor speed, cache, and RAS]

Replicated server chip design:
Mixed workloads stress cache usage, requiring more context switches
Working sets may be too large to fit in cache
"Fast" processor speed is not fully realized due to cache misses

Consolidated server chip design (System z):
System z cache is able to contain more working sets
Processor speed is optimized by increased cache usage
Additional RAS function is beneficial for mixed workloads

Note: System representations are not to scale, proportions may vary based on generation of chip and model


17

A fully virtualized infrastructure enables optimization to maximize efficiency

Consolidate the workloads of thousands of distributed servers in a single system

Up to 80% reduction in energy consumption
Up to 90% reduction in floor space
Up to 90% reduction in SW license costs
Up to 70% reduction in total cost of ownership

Mainframe EFFICIENCY – Green strategy; running out of energy & space
EXTREME VIRTUALIZATION – Flexibility to respond

18

The Gold Standard for Security – Up to 52% lower security administrative costs.

Reduce business risk

Mainframe

SECURITY

Mainframe Security Reducing risk – Protecting businesses

Helping businesses:

Protect from INTRUSION

– z/OS and z/VM Integrity Statement

Protect DATA

– Built in encryption accelerators in every server

– FIPS 140-2 Level 4 certified encryption co-processors for highly secure encryption

Ensure PRIVACY

– Access to all resources is controlled by an integrated central security manager

Protect VIRTUAL SERVERS

– The only servers with EAL5 Common Criteria Certification for partitioning

Respond to COMPLIANCE REGULATIONS

– Up to 70% in security audit savings


19

Designed to deliver continuous availability at the application level

[Diagram: a single Mainframe, multiple Mainframes, and multiple data centers (Site 1 and Site 2)]

Where mean time between failure is

measured in decades

Designed for application availability

of 99.999%

Industry-leading solution for

disaster recovery

Avoiding the cost of downtime

Ensuring access to critical

applications

Maintaining productivity of users

Open to clients 24/7

Continuous

business operations

z HIGH

AVAILABILITY

RELIABILITY

Manage Risk with Mainframe Resiliency Availability built in across the system

Mainframe

AVAILABILITY

RELIABILITY

20

Summary of the Benefits of Mainframe


21

ARCHITECTURE & Hardware

Platform

22

History

7th April 1964, Poughkeepsie NY: Announcement of the first general-purpose computers:

A new generation of electronic computing equipment was introduced today by International Business Machines Corporation. IBM Board Chairman Thomas J. Watson Jr. called the event the most important product announcement in the company's history.

The new equipment is known as the IBM System/360.

"System/360 represents a sharp departure from concepts of the past in designing and building computers. It is the product of an international effort in IBM's laboratories and plants and is the first time IBM has redesigned the basic internal architecture of its computers in a decade. The result will be more computer productivity at lower cost than ever before. This is the beginning of a new generation - - not only of computers - - but of their application in business, science and government." *

* from the April 1964 announcement press release


23

Evolution of the Mainframe

IBM System/360 – 1964
IBM System/370 – 1970
IBM System/390 – 1990
IBM System z900 – 2000
IBM System z990 – 2003
IBM System z9 EC – 2005
IBM System z9 BC – 2006
IBM System z10 EC – 2008
IBM System z10 BC – 2009
IBM System zEnterprise 196 – 2010
IBM System zEnterprise 114 – 2011

24

OLD S/360

The central processor box contains the processors,

memory, control circuits and interfaces for

channels

RECENT

Current CPC designs are considerably more

complex than the early S/360 design. This

complexity includes many areas:

I/O connectivity and configuration

I/O operation

Partitioning of the system

Conceptual diagram


25

Mainframe basic components

26

Mainframe is a balanced system design

Balanced System: CPU, nWay, Memory, I/O Bandwidth*

[Chart: memory, system I/O bandwidth, processors and PCI for 1-way across generations]

zSeries 900: 16-way, 64 GB memory, 24 GB/sec I/O bandwidth, 300 PCI
zSeries 990: 32-way, 256 GB memory, 96 GB/sec I/O bandwidth, 450 PCI
z9 EC: 54-way, 512 GB memory, 172.8 GB/sec* I/O bandwidth, 600 PCI
z10 EC: 64-way, 1.5 TB** memory, 288 GB/sec* I/O bandwidth, 920 PCI
z196: 80-way, 3 TB** memory, 1202 PCI

* Servers exploit a subset of its designed I/O capability
** Up to 1 TB per LPAR
PCI - Processor Capacity Index


27

The virtualization concept

The ability to share everything is based on one of the major concepts of the Mainframe. As

it is commonly used in computing systems, virtualization refers to the technique of hiding

the physical characteristics of the computing resources from users of those resources. For

example, each operating system in a logical partition of the Mainframe platform thinks it has

exclusive access to a real hardware image. The actual resources are shared with other

logical partitions. The virtualization concept is not an add-on to the platform. z/Architecture

contains many facilities in both the hardware and software that facilitate resource

virtualization. All elements of Mainframe enable the virtualization concept.

Virtualizing the Mainframe environment involves creating virtual systems (logical partitions

and virtual machines), and assigning virtual resources, such as memory and I/O channels, to

them. Resources can be dynamically added or removed from these logical partitions

through operator commands.

Dynamic logical partitioning and virtual machines increase flexibility, enabling selected

system resources such as processors, memory, and I/O components to be added and

deleted from partitions while they are actively in use. The ability to reconfigure logical

partitions dynamically enables system administrators to dynamically move resources from

one partition to another.

Processor Resource/Systems Manager™ (PR/SM™) is a hypervisor integrated with all

Mainframe elements that maps physical resources into virtual resources so that many logical

partitions can share the physical resources.

28

PR/SM and logical partitions

Processor Resource/Systems Manager is a

feature that enables logical partitioning of

the central processor complex (CEC).

PR/SM provides isolation between

partitions, which enables installations to

separate users into distinct processing

images, or to restrict access to certain

workloads where different security

clearances are required.

Each logical partition operates as an

independent server running its own

operating environment. On the latest

Mainframe models, you can define up to 60

logical partitions running z/VM®, z/OS,

Linux on IBM Mainframe, z/TPF, and more

operating systems. PR/SM enables each

logical partition to have dedicated or

shared processors and I/O channels, and

dedicated memory (which you can

dynamically reconfigure as needed).

Either can be used to configure the IOCDS

Support Elements (SEs)


29

LPAR concept

30

Logical Partitions (LPARs) or Servers

A system programmer can assign

different operating environments to

each partition with isolation

An LPAR can be assigned a number

of dedicated or shared processors.

Each LPAR can have different

storage (CSTOR) assigned depending

on workload requirements.

The I/O channels (CHPIDs) are

assigned either statically or

dynamically as needed by server

workload.

Provides an opportunity to

consolidate distributed environments

to a centralized location


31

LPAR Logical Dispatching (Hypervisor)

1. The next logical CP to be dispatched is chosen from the logical CP ready queue based on the logical CP weight.

2. LPAR LIC dispatches the selected logical CP (LCP5 of MVS2 LP) on a physical CP in the CPC (CP0, in the visual).

3. The z/OS dispatchable unit running on that logical processor (MVS2 logical CP5) begins to execute on physical CP0. It executes until its time slice (generally between 12.5 and 25 milliseconds) expires, or it enters a wait, or it is intercepted for some reason.

4. In the visual, the logical CP keeps running until it uses all its time slice. At this point the logical CP5 environment is saved and control is passed back to LPAR LIC, which starts executing on physical CP0 again.

5. LPAR LIC determines why the logical CP ended execution and requeues the logical CP accordingly. If it is ready with work, it is requeued on the logical CP ready queue and step 1 begins again.
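As a rough illustration of the loop above, the following Python sketch models a weight-based ready queue and a fixed time slice. The class name, the weights, and the weight-proportional selection rule are assumptions made only for the sketch; this is not PR/SM's internal logic.

```python
TIME_SLICE_MS = 25  # per the text: generally between 12.5 and 25 milliseconds

class LogicalCP:
    def __init__(self, name, weight):
        self.name = name            # e.g. a logical CP of an LPAR, "MVS2-LCP5"
        self.weight = weight        # LPAR weight influences dispatch order
        self.consumed_ms = 0

def dispatch_loop(ready_queue, physical_cp="CP0", cycles=4):
    """Very rough sketch of LPAR LIC dispatching logical CPs on one physical CP."""
    for _ in range(cycles):
        if not ready_queue:
            break
        # 1. choose the next logical CP from the ready queue based on weight
        #    (here: the one with the least CPU consumed relative to its weight)
        ready_queue.sort(key=lambda lcp: lcp.consumed_ms / lcp.weight)
        lcp = ready_queue.pop(0)
        # 2./3. run it on the physical CP until the time slice expires,
        #       it enters a wait, or it is intercepted
        lcp.consumed_ms += TIME_SLICE_MS
        print(f"{lcp.name} ran on {physical_cp} for {TIME_SLICE_MS} ms")
        # 4./5. state is saved, control returns to LPAR LIC; if the logical CP
        #       is still ready with work, it goes back on the ready queue
        ready_queue.append(lcp)

dispatch_loop([LogicalCP("MVS2-LCP5", weight=70), LogicalCP("MVS1-LCP0", weight=30)])
```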

32

HiperDispatch

PR/SM and z/OS work in tandem to more efficiently use processor resources. HiperDispatch

is a function that combines the dispatcher actions and the knowledge that PR/SM has about

the topology of the server.

Performance can be optimized by redispatching units of work to same processor group,

keeping processes running near their cached instructions and data, and minimizing transfers

of data ownership among processors/books.

The nested topology is returned to z/OS by the Store System Information (STSI) 15.1.3

instruction, and HiperDispatch utilizes the information to concentrate logical processors

around shared caches (L3 at PU chip level, and L4 at book level), and dynamically optimizes

assignment of logical processors and units of work.

z/OS dispatcher manages multiple queues, called affinity queues, with a target number of

four processors per queue, which fits nicely into a single PU chip. These queues are used to

assign work to as few logical processors as are needed for a given logical partition workload.

So, even if the logical partition is defined with a large number of logical processors,

HiperDispatch optimizes this number of processors to be nearest to the required capacity. The optimal number of processors to be used is kept within a book boundary where possible.
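The idea of using only as many logical processors as the workload needs, grouped so that each affinity queue fits a PU chip, can be sketched as below. This is an illustrative Python sketch; the function name, the ceiling rule, and the notion of "parked" processors are simplifying assumptions, not the actual z/OS dispatcher logic.

```python
import math

def build_affinity_queues(required_capacity_cps, defined_logical_cps,
                          cps_per_queue=4):
    """Sketch of the HiperDispatch idea: concentrate work onto as few logical
    CPs as needed, in affinity queues of ~4 (one PU chip each)."""
    in_use = min(math.ceil(required_capacity_cps), defined_logical_cps)
    queues = [list(range(q, min(q + cps_per_queue, in_use)))
              for q in range(0, in_use, cps_per_queue)]
    parked = defined_logical_cps - in_use   # excess logical CPs left idle
    return queues, parked

queues, parked = build_affinity_queues(required_capacity_cps=6.3,
                                       defined_logical_cps=16)
print(queues)   # [[0, 1, 2, 3], [4, 5, 6]]
print(parked)   # 9 logical CPs not needed for the current capacity
```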


33

LPAR

Characteristics

LPARs are the equivalent of a

separate mainframe for most practical

purposes

Each LPAR runs its own operating

system

Devices can be shared across several

LPARs

Processors can be dedicated or

shared

When shared each LPAR is assigned

a number of logical processors (up to

the maximum number of physical

processors)

Each LPAR is independent

Summary

System administrators assign:

– Memory

– Processors

– CHPIDs either dedicated or shared

This is done partly in the IOCDS and

partly in a system profile on the

Support Element (SE) in the CEC.

This is normally updated through the

HMC.

Changing the system profile and

IOCDS will usually require a power-on

reset (POR) but some changes are

dynamic

A maximum of 60 LPARs can be

defined in a z196 CPC

34

Terminology Overlap


35

The channel subsystem

One of the main strengths of the Mainframe computers is the ability to deal with a large number of simultaneous I/O operations. The channel subsystem (CSS) has a major role in providing this strength. The CSS manages the flow of information between I/O devices and central memory. By doing so it relieves CPUs of the task of communicating directly with I/O devices and permits data processing to proceed concurrently with I/O processing. The channel subsystem is built on the following components:

SAP (system assist processor) One of the Mainframe processor types, the SAP connects the channel subsystem to the I/O devices that are attached to the channels. The SAP uses the I/O configuration loaded in the Hardware System Area (HSA), and knows which device is connected to each channel, and what is its protocol. The SAP manages the queue of I/O operations passed to the channel subsystem by z/OS.

Channels These are small processors that communicate with I/O control units (CU). They manage the transfer of data between central storage and the external device.

Channel path The channel subsystem communicates with I/O devices through channel paths. If a channel is shared between multiple logical partitions, each logical partition establishes a unique channel path to each device using this channel.

Subchannels A subchannel provides the logical appearance of a device to the program and contains the information required for performing a single I/O operation. One subchannel is provided for each I/O device addressable by the channel subsystem.

36

Channel Subsystem Relationship to Channels, Control Units and I/O Devices

[Diagram: z196 channel subsystem connecting partitions, subchannels, and channels to control units and devices (disk, tape, printers)]

Channel Subsystem: controls queuing, de-queuing, priority management and I/O identification of all I/O operations performed by LPARs

Partitions: support the running of an OS and allow CPs, memory and subchannels access to channels

Subchannels: each represents an I/O device to the hardware and is used by the OS to pass an I/O request to the channel subsystem

Channels: the communication path from the channel subsystem to the I/O network and connected control units/devices


37

The Mainframe I/O Logical Channel Subsystem Schematic

[Diagram: Logical-Channel Subsystems 0-3 above a Physical-Channel Subsystem containing the HSA, hypervisor, caches, and MBA/SAP pairs, connected through FICON and ESCON switches to control units and devices]

- One FICON channel shared by all LCSSs and all partitions: an MCSS-spanned channel path
- One ESCON channel shared by all partitions configured to LCS15: a MIF-shared channel path

38

Multiple channel subsystems and multiple subchannel sets


39

I/O Device addressing

40

System z I/O Connectivity

ESCON and FICON channels

Switches to connect peripheral devices to more than one CEC

CHPID addresses are two hex digits (FF / 256)

Multiple partitions can share CHPIDs (MIF)

I/O subsystem layer exists between the operating system and the CHPIDs

I/O control layer uses a control file IOCDS that translates physical I/O addresses into

device numbers that are used by z/OS

Device numbers are assigned by the system programmer when creating the IODF and

IOCDS and are arbitrary (but not random!)

On modern machines they are three or four hex digits; for example, FFFF = 64K devices can be defined. With dynamic addressing, a theoretical maximum of 7,848,900 devices can be attached.
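A quick arithmetic check of the addressing figures quoted above (two hex digits for a CHPID, four hex digits for a device number), shown as a small Python sketch; the example device number is hypothetical.

```python
# Two hex digits for a CHPID -> 256 channel paths (00-FF)
chpids = 0xFF + 1
print(chpids)                             # 256

# Four hex digits for a device number -> 64K definable devices
devices = 0xFFFF + 1
print(devices, f"= {devices // 1024}K")   # 65536 = 64K

# A device number such as 0x1A00 is arbitrary (chosen by the system
# programmer in the IODF/IOCDS), not a physical address.
```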


41

MIF Channel Consolidation - example

[Diagram: statically assigned versus dynamically assigned channels]

42

HCD & Sample IOCP

ID MSG2='ESRAU.IODF99 - 2011-02-15 22:05',SYSTEM=(2097,1), *

LSYSTEM=CECDCEA, *

TOK=('CECDCEA',00800007C74E2094220520410111046F00000000,*

00000000,'11-02-15','22:05:20','ESRAU','IODF99')

RESOURCE PARTITION=((CSS(0),(PCF2GAR2,5),(PRDA,1),(PRDC,2),(PR*

DE,3),(PRDF,4),(*,6),(*,7),(*,8),(*,9),(*,A),(*,B),(*,C)*

,(*,D),(*,E),(*,F)),(CSS(1),(*,1),(*,2),(*,3),(*,4),(*,5*

),(*,6),(*,7),(*,8),(*,9),(*,A),(*,B),(*,C),(*,D),(*,E),*

(*,F)),(CSS(2),(*,1),(*,2),(*,3),(*,4),(*,5),(*,6),(*,7)*

,(*,8),(*,9),(*,A),(*,B),(*,C),(*,D),(*,E),(*,F)),(CSS(3*

),(*,1),(*,2),(*,3),(*,4),(*,5),(*,6),(*,7),(*,8),(*,9),*

(*,A),(*,B),(*,C),(*,D),(*,E),(*,F)))

CHPID PATH=(CSS(0),00),SHARED,PARTITION=((PRDA,PRDC),(=)), *

CPATH=(CSS(0),00),CSYSTEM=CECDD6A,AID=0C,PORT=1,TYPE=CIB

...

CNTLUNIT CUNUMBR=FFFD,PATH=((CSS(0),08,09,0A,0B,0C,0D,0E,0F)),*

UNIT=CFP

IODEVICE ADDRESS=(FF90,007),CUNUMBR=(FFFD),UNIT=CFP

IODEVICE ADDRESS=(FF97,007),CUNUMBR=(FFFD),UNIT=CFP


43

PAV (Parallel Access Volume) & MA (Multiple Allegiance)

[Diagram, I/O without PAV/MA: applications A, B and C each drive UCB 100 against volume 100 and hit "UCB busy" and "device busy" – one I/O to one volume at one time. I/O with PAV/MA: application B uses UCB 1FF as an alias to UCB 100, so Parallel Access Volumes and Multiple Allegiance allow multiple concurrent I/Os to the volume]

IBM PAV enables System z to process multiple I/Os to same logical volume at once

IBM Multiple Allegiance (concurrent execution of multiple requests from multiple hosts)

expands capability across multiple System z servers

I/O Priority enables use of info from z/OS Workload Manager to manage I/O processing

order

44

PAV Performance / I/O flow of PAV & MA

[Chart: response time (ms), broken into queue, pend, connect and disconnect components, versus I/O rate (I/Os per second), with and without PAV]

IBM PAV – Static, Dynamic & Hyper

Static PAV: Aliases are always associated with the same Base Address.

Dynamic PAV: Aliases are assigned up front but can be reassigned to any base address as need dictates (a WLM function called Dynamic Alias Management – reactive alias assignment)

HyperPAV: On demand/ Proactive alias assignment


45

Static PAV & Dynamic PAV

Aliases assigned to specific addresses

46

Hyper PAV

Alias is taken from and returned to a pool instead of LCU.

Define max. number of aliases per LCU in HCD.

I/O requests are queued when all aliases allocated in the pool for that LCU are exhausted. Aliases are kept in the pool for use as needed.
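A minimal sketch of the pool behavior described above, as a toy Python model: aliases are taken from a per-LCU pool for the duration of one I/O, requests queue when the pool is exhausted, and a freed alias immediately drives a queued request. The class and variable names are invented for the illustration; this is not the actual IOS/WLM implementation.

```python
from collections import deque

class HyperPAVPool:
    """Toy model of HyperPAV: aliases live in a per-LCU pool and are bound to a
    base UCB only for the duration of one I/O."""
    def __init__(self, alias_ucbs):
        self.free = deque(alias_ucbs)     # e.g. ["1FF", "1FE"]
        self.waiting = deque()            # I/Os queued when the pool is empty

    def start_io(self, base_ucb):
        if self.free:
            alias = self.free.popleft()
            print(f"I/O to base {base_ucb} driven through alias {alias}")
            return alias
        self.waiting.append(base_ucb)     # all aliases exhausted: queue the request
        return None

    def end_io(self, alias):
        self.free.append(alias)           # alias goes back to the pool
        if self.waiting:
            self.start_io(self.waiting.popleft())

pool = HyperPAVPool(["1FF", "1FE"])
a1 = pool.start_io("0100")
a2 = pool.start_io("0100")
pool.start_io("0100")     # queued: pool exhausted
pool.end_io(a1)           # freed alias immediately drives the queued I/O
```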


47

ESCON (Enterprise Systems Connection) is a data connection created by IBM commonly used to

connect their mainframe computers to peripheral devices.

ESCON replaced the older, slower parallel Bus&Tag channel technology

The ESCON channels use a director to support dynamic switching.

ESCON Connectivity

48

ESCON Director

[Diagram: two ESCON Directors (ESCD) providing dynamic switching between channels and control units]


49

Fiber Connectivity (FICON)

FICON (Fiber Connectivity) is the next-generation high-speed input/output (I/O) interface used for mainframe computer connections to storage devices.

FICON channels increase I/O capacity through the combination of a new architecture and

faster physical link rates to make them up to eight times as efficient as ESCON.

50

FICON Connectivity


51

ESCON vs FICON

ESCON

- 20 Mbytes / Second

- Lots of “dead time”. One active request at a time.

- One target control unit

FICON

- 800 Mbytes / Second

- Uses FCP standard

- Fiber Optic cable (less space under floor)

- Currently, up to 64 simultaneous “I/O packets” at a time with up to 64 different

control units

- Supports Cascading switches

52

Hardware Management Console

Mainframe hardware can be managed by using either the Support Elements

directly attached to the server or using the Hardware Management Console

(HMC), which is a desktop application providing the end user interface to control

and monitor the status of the system.

Working from an HMC, an operator, a system programmer, or IBM technical

personnel can perform basic operations on the Mainframe servers. The HMC is a

powerful console that can load and shut down system images as well as the

entire machine. Some of the common capabilities of the HMC are:

Load the Mainframe hardware configuration.

Load or reset a system image.

Add and change the hardware configuration

(most of them dynamically).

Access the hardware logs.

All these functions can be executed using a

browser interface in a totally secure environment.


53

Reserved processors & storage

Reserved processors are defined by the Processor Resource/Systems Manager (PR/SM) to

allow for a nondisruptive capacity upgrade. Reserved processors are like spare logical

processors, and can be shared or dedicated. Reserved CPs should be defined to a logical

partition to allow for nondisruptive image upgrades.

Reserved storage can optionally be defined to a logical partition, allowing a nondisruptive

image memory upgrade for this partition. Reserved storage can be defined to both central

and expanded storage, and to any image mode, except the coupling facility mode.

LPAR dynamic storage reconfiguration

Dynamic storage reconfiguration on z196 servers allows an operating system running in a

logical partition to add (nondisruptively) its reserved storage amount to its configuration, if

any unused storage exists. This unused storage can be obtained when another logical

partition releases some storage or when a concurrent memory upgrade takes place. With

dynamic storage reconfiguration, the unused storage does not have to be continuous.

54

HIGHLIGHTS of z196


55

Machine Type

– 2817

5 Models

– M15, M32, M49, M66 and M80

Processor Units (PUs)

– 20 (24 for M80) PU cores per book

– Up to 14 SAPs per system, standard

– 2 spares designated per system

– Dependent on the H/W model - up to 15, 32, 49, 66 or 80 PU cores available for characterization

• Central Processors (CPs), Integrated Facility for Linux (IFLs), Internal Coupling Facilities (ICFs), Mainframe Application Assist Processors (zAAPs), Mainframe Integrated Information Processors (zIIPs), optional - additional System Assist Processors (SAPs)

– Sub-capacity available for up to 15 CPs • 3 sub-capacity points

Memory

– System Minimum of 32 GB

– Up to 768 GB per book

– Up to 3 TB for System and up to 1 TB per LPAR • Fixed HSA, standard

• 32/64/96/112/128/256 GB increments

I/O

– Up to 48 I/O Interconnects per System @ 6 GBps each

– Up to 4 Logical Channel Subsystems (LCSSs)

STP - optional (No ETR)

z196 Overview

56

zEnterprise 196 Functions and Features (GA Driver 86 – August, 2010)

z196

Five hardware models

Quad core PU chip

Up to 80 processors configurable as

CPs, zAAPs, zIIPs, IFLs, ICFs, or

optional SAPs

Increased capacity processors

Out of order instruction execution

Over 100 new and enhanced

instructions

Improved processor cache design

Up to 15 subcapacity CPs at capacity

settings 4, 5, or 6

Up to 3 TB of Redundant Array of

Independent Memory (RAIM)

Unified Resource Manager suites

Cryptographic enhancements

On Demand enhancements

Energy efficiencies

2 New OSA CHPIDs – OSX and OSM

Three subchannel sets per LCSS

8 slot, 2 domain I/O drawer

Concurrent I/O drawer add, remove,

replace

FICON discovery and autoconfiguration

Doubled HiperSockets to 32

Physical Coupling Links increased to 80

Doubled Coupling CHPIDs to 128

CFCC Level 17

Optional water cooling

Optional High Voltage DC power

Static Power Save Mode

Optional Top Exit I/O cable exit

STP enhancements

zBX-002 with IBM Smart Analytics Optimizer, IBM Blades


57

zEnterprise 196 Functions and Features (GA Driver 93 – September, 2011)

Additional Cryptographic enhancements

8 GBps PCIe interconnects

Additional STP enhancements

Doubled Coupling CHPIDs to 128

Improved PSIFB Coupling Link

Physical Coupling Links increased to 104

New 32 slot PCIe Based I/O Drawer

Increased granularity of I/O adapters

New form factor I/O adapters, i.e. FICON Express8S and OSA-Express4S

58

zEnterprise 114, 196 and zBX Enhancements (October 12, 2011)

zHPF enhancements

EAL5+ Certification for z196

GDPS® Support for zEnterprise

GDPS xDR support for z/VSE

GDPS reduced HyperSwap® impact and improved

scalability

New IBM Implementation Services for System z

New and revised zEnterprise Statements of Direction

zBX IEDN TOR 1 GbE optics for the customer network

Additional zBX 8 Gbps FCS optics for POWER7 and

System x blade connections to the customer SAN

zBX support for Windows on select System x blades

New DataPower XI50z firmware support

zBX support for AIX 7.1 on select POWER7 blades

Unified Resource Manager APIs

Unified Resource Manager dynamic discovery of storage

resources

z/VM 6.2 enhanced virtualization capabilities

z196 z114

zBX

IBM DB2 Analytics Accelerator


59

Design highlights

Offer a flexible infrastructure

Offer state-of-the-art integration capability

Offer high performance

Offer high capacity and scalability

Offer the capability of concurrent upgrades for processors, memory, and I/O

connectivity

Implement a system with high availability and reliability

Have broad internal and external connectivity offerings

Provide the highest level of security in which every two PUs share a CP Assist for

Cryptographic Function (CPACF).

Be self-managing and self-optimizing – IRD, Hiperdispatch

Have a balanced system design

60

z196 Processor Design Basics

Processor Design

– CPU (core)

• cycle time

• pipeline, execution order

• branch prediction

• hardware vs. millicode*

– Memory subsystem (nest)

• high speed buffers

(caches)

on chip, on book

private, shared

• buses

number, bandwidth

• latency

distance

speed of light

Logical View of Single Book

[Diagram: memory above a shared cache, with multiple CPUs each behind a private cache]

*executes complex instructions


61

Performance Drivers with z196

Hardware

–Memory subsystem

• focus on keeping data “closer” to the processor unit

• new chip-level shared cache

• much larger book-level shared cache

–Processor

• Out-of-Order execution

• new instructions to allow for reduced processor quiesce effects

reduced cache misses, reduced pipeline disruption

• up to 80 configurable processor units

• Total of 4 different uni speeds, 3 sub-capacity

HiperDispatch

–exploits new cache topology

–reduced cross-book “help”

–better locality for multi-task address spaces

Stated benefits apply if you re-compile (run-time modules, or to make use of out-of-order execution)

62

Hypervisor and OS Basics

Hypervisor (PR/SM)

– Virtualization layer at OS level

– Distributes physical resources

• Memory

• Processors

Logicals dispatched on physicals

Dedicated

Shared

Affinities

OS

– Virtualization layer at address space level

– Distributes logical resources

• Memory

• Processors

Tasks dispatched on logicals

Enhanced cooperation

– HiperDispatch with z10 EC

• z/OS + PR/SM

Logical View of 2 books

[Diagram: two books, each with memory and an L2 cache shared by several CPUs with private L1 caches; PR/SM dispatches the logical processors (LP1 … LPn, LP1 … LPm) of z/OS images z/OSi and z/OSj, and each z/OS dispatches its tasks (TCBa, TCBb, … TCBx) on those logical processors]


63

z196 – Under the covers (Model M66 or M80)

Internal

Batteries

(optional)

Power

Supplies

I/O cage

Fiber Quick Connect (FQC) Feature

(optional)

Processor Books, Memory, MBA and

HCA cards

2 x Cooling

Units

InfiniBand I/O

Interconnects

2 x Support

Elements

FICON & ESCON FQC

Ethernet cables for

internal System LAN

connecting Flexible

Service Processor (FSP) cage controller

cards

I/O drawers

64

z196 Water cooled – Under the covers (Model M66 or M80) front view


Internal

Batteries

(optional)

Power

Supplies

I/O cage

Processor Books, Memory, MBA and

HCA cards

2 x Water Cooling

Units

InfiniBand I/O

Interconnects

Support

Elements

Ethernet cables for

internal System LAN

connecting Flexible

Service Processor (FSP) cage controller

cards

I/O drawers


65

z196 System Upgrades: Protecting your investments

z196 to higher hardware z196

model

– Upgrade of z196 Models M15, M32,

M49 and M66 to M80 is disruptive

– When upgrading to z196 all the Books

are replaced

– Upgrade from Air to Water cooled not

available

Any z9 EC to any z196

Any z10 EC to any z196

[Diagram: upgrade paths from z9 EC and z10 EC and between z196 models M15, M32, M49, M66 and M80, labeled "Concurrent Upgrade"]

66

z196 Architecture

Continues line of upward-compatible Mainframe processors

Rich CISC (Complex Instruction Set Computer) Instruction Set Architecture

(ISA)

– 984 instructions (762 implemented entirely in hardware)

– 24, 31, and 64-bit addressing modes

– Multiple address spaces robust inter-process security

– Multiple arithmetic formats

– Industry-leading virtualization support

• High-performance logical partitioning via PR/SM

• Fine-grained virtualization via z/VM scales to 1000s of images

– Precise, model-independent definition of hardware/software interface

Architectural extensions for IBM z196

– 110+ new instructions added to improve compiled code efficiency

– Decimal floating point quantum exceptions

– New crypto functions and modes

– Virtual architectural level

*RISC = Reduced Instruction Set Computer


67

z196 Book Concept

68

Book design

A z196 system has up to four books on a fully connected topology,

up to 80 processor units can be characterized, and up to 3 TB of

memory capacity. Memory has up to 12 memory controllers, using

5-channel redundant array of independent memory (RAIM)

protection, with DIMM bus cyclic redundancy check (CRC) error

retry. The 4-level cache hierarchy is implemented with eDRAM

(embedded) caches. Up until recently eDRAM was considered to

be too slow for this kind of use, however, a break-through in IBM

technology has negated that. In addition eDRAM offers higher

density, less power utilization, fewer soft-errors, and better

performance. Concurrent maintenance allows dynamic book add

and repair.

The z196 server uses 45nm chip technology, with advancing low

latency pipeline design, leveraging high speed yet power efficient

circuit designs. The multichip module (MCM) has a dense

packaging, allowing modular refrigeration units (MRUs) cooling, or

as an option, water cooling. The water cooling option is

recommended as it can lower the total power consumption of the

server.


69

Book Layout

[Diagram, front and rear views: MCM @ 1800W, refrigeration cooled or water cooled, with backup air plenum; 8 I/O fanout cards; 2 FSPs; 3x DCA power supplies; 14x and 16x DIMMs (100mm high); 11 VTM card assemblies (8 vertical, 3 horizontal); cooling from/to MRU]

70

z196 PU chip, SC chip and MCM

[Photos, front views: z196 quad-core PU chip, 96 MB SC chip, MCM, and book with fanouts]


71

z196 Cache / Book Topology

Fully connected 4 Book system: 96 total cores, 80-way

Total system cache:
768 MB shared L4 (eDRAM)
576 MB L3 (eDRAM)
144 MB L2 private (SRAM)
19.5 MB L1 private (SRAM)

[Diagram: one book with PU chips CP0-CP5 and SC chips SC0/SC1, memory Mem0-Mem2, and fabric book connections (FBCs) to the other books]

The z196 Books are fully interconnected

in a point to point topology as shown in

the diagram

Data transfers are direct between Books

via the Level 4 Cache chip in each

MCM.

Level 4 Cache is shared by all PU chips

on the MCM
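The "total system cache" figures above can be cross-checked against the per-book and per-chip numbers given elsewhere in this deck (192 MB L4 per book, 24 MB L3 per PU chip, 1.5 MB L2 per core). A quick Python sketch of that arithmetic, assuming a fully configured four-book, 96-core system:

```python
books, pu_chips_per_book, cores = 4, 6, 96

l4_total = books * 192                      # 192 MB eDRAM L4 per book (2 SC chips x 96 MB)
l3_total = books * pu_chips_per_book * 24   # 24 MB eDRAM L3 per PU chip
l2_total = cores * 1.5                      # 1.5 MB private L2 per core

print(l4_total, "MB L4")   # 768 MB
print(l3_total, "MB L3")   # 576 MB
print(l2_total, "MB L2")   # 144.0 MB
```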

72

z196 Book Level Cache Hierarchy

[Diagram: per book, 192 MB of inclusive eDRAM L4 cache on 2 SC chips, serving LRU cast-outs, CP stores, and data fetch returns for six PU chips; each PU chip has 24 MB of inclusive eDRAM L3 cache shared by its cores, and each core (PU) has private L1 caches and a 1.5 MB L2 cache]


73

Cache levels and memory structure

74

Enhanced book availability

Book Add

– Model Upgrade by the addition of a single new book adding physical processors, memory,

and I/O Connections.

Continued Capacity with Fenced Book

– Make use of the LICCC defined resources of the fenced book to allocate physical resources

on the operational books as possible.

Book Repair

– Replacement of a defective book when that book had been previously fenced from the

system during the last IML.

Book Replacement

– Removal and replacement of a book for either repair or upgrade


75

z196 Processor Unit

Design

76

Mainframe Servers Continue to Scale with z196

Each new range continues to deliver:

– New function

– Unprecedented capacity to meet

consolidation needs

– Improved efficiency to further

reduce energy consumption

– Continues delivering flexible and

simplified on demand capacity

– A Mainframe that goes beyond the

traditional paradigm

[Chart: minimum and maximum PCI by generation; z/OS release used for LSPR measurements shown per machine]

z900: 20 engines, 16-way (z/OS 1.6)
z990: 48 engines, 32-way (z/OS 1.6)
z9 EC: 64 engines, 54-way (z/OS 1.6)
z10 EC: 77 engines, 64-way (z/OS 1.8)
z196: 96 engines, 80-way (z/OS 1.11)

z196 measurements are for a xx-way
PCI - Processor Capacity Index


77

z196 Continues the CMOS Mainframe Heritage – Fastest Microprocessor In The Industry

[Chart: processor frequency by generation]

1997 G4: 300 MHz – 1st full-custom CMOS S/390®
1998 G5: 420 MHz – IEEE-standard BFP; branch target prediction
1999 G6: 550 MHz – Copper Technology (Cu BEOL)
2000 z900: 770 MHz – Full 64-bit z/Architecture
2003 z990: 1.2 GHz – Superscalar CISC pipeline
2005 z9 EC: 1.7 GHz – System level scaling
2008 z10 EC: 4.4 GHz – Architectural extensions
2010 z196: 5.2 GHz – Additional architectural extensions and new cache structure

78

z196 GHz Leadership

Mainframe continues to grow frequency by

– Leveraging 45nm technology

– Advancing low latency pipeline design

– Leveraging high speed yet power-efficient circuit designs

z196 achieves industry leadership 5.2 GHz operation

High-frequency processor design is enabled by

– Innovative power-efficient circuit designs

– IBM's SOI silicon technology

– Leadership packaging technology including glass-ceramic MCMs

– Leadership cooling technology, with choice of MRU or chilled-water cooling

GHz is especially important for CPU-intensive applications

System performance is not linear with frequency

– Need to use LSPR + Mainframe capacity planning tools for real client /

workload sizing

GHz is not the only dimension that matters

– Must balance frequency, pipeline efficiency, energy efficiency, cache/memory

design, I/O design


79

z196 Out of Order (OOO) Value

OOO yields significant performance benefit for compute intensive apps through

–Re-ordering instruction execution

• Later (younger) instructions can execute ahead of an older stalled instruction

–Re-ordering storage accesses and parallel storage accesses

OOO maintains good performance growth for traditional apps

[Diagram: five instructions executing over time, in-order core execution versus out-of-order core execution – with OOO, execution and storage accesses of later instructions proceed past an L1 miss instead of stalling behind it]

80

Superscalar processor

A scalar processor is a processor that is based on a single-issue architecture, which means that only

a single instruction is executed at a time. A superscalar processor allows concurrent execution of

instructions by adding additional resources onto the microprocessor to achieve more parallelism

by creating multiple pipelines, each working on its own set of instructions.

A superscalar processor is based on a multi-issue architecture. In such a processor, where

multiple instructions can be executed at each cycle, a higher level of complexity is reached,

because an operation in one pipeline stage might depend on data in another pipeline stage.

Therefore, a superscalar design demands careful consideration of which instruction sequences

can successfully operate in a long pipeline environment.

On the z196, up to three instructions can be decoded per cycle and up to five

instructions/operations can be executed per cycle. Execution can occur out of (program) order.

If the branch prediction logic of the microprocessor makes the wrong prediction, removing all

instructions in the parallel pipelines also might be necessary. Obviously, the cost of the wrong

branch prediction is more costly in a high-frequency processor design. For this reason, a variety of

history-based branch prediction mechanisms are used.

The branch target buffer (BTB) runs ahead of instruction cache pre-fetches to prevent branch

misses in an early stage. Furthermore, a branch history table (BHT) in combination with a pattern

history table (PHT) and the use of tagged multi-target prediction technology branch prediction offer

an extremely high branch prediction success rate.


81

Branch prediction

Because of the ultra high frequency of the PUs, the penalty for a wrongly predicted branch is

high. For that reason a multi-pronged strategy for branch prediction, based on gathered

branch history combined with several other prediction mechanisms, is implemented on each

microprocessor. The branch history table (BHT) implementation on processors has a large

performance improvement effect. Originally introduced on the IBM ES/9000 9021 in 1990,

the BHT has been continuously improved.

The BHT offers significant branch performance benefits. The BHT allows each PU to take

instruction branches based on a stored BHT, which improves processing times for

calculation routines. Besides the BHT, the z196 uses a variety of techniques to improve the

prediction of the correct branch to be executed. The techniques include:

Branch history table (BHT)

Branch target buffer (BTB)

Pattern history table (PHT)

BTB data compression

The success rate of branch prediction contributes significantly to the superscalar aspects of

the z196. This is because the architecture rules prescribe that, for successful parallel

execution of an instruction stream, the correctly predicted result of the branch is essential.

82

Wild branch

When a bad pointer is used or when code overlays a data area containing a pointer to code, a random branch is the result, causing a 0C1 or 0C4 abend. Random branches are very hard to diagnose because clues about how the system got there are not evident. With the wild branch hardware facility, the last address from which a successful branch instruction was executed is kept. z/OS uses this information in conjunction with debugging aids, such as the SLIP command, to determine where a wild branch came from and might collect data from that storage location. This approach decreases the many debugging steps necessary when looking for where the branch came from.

IEEE floating point Over 130 binary and hexadecimal floating-point instructions are present in z196. They incorporate IEEE Standards into the platform.

Translation look-aside buffer The translation look-aside buffer (TLB) in the instruction and data L1 caches use a secondary TLB to enhance performance. In addition, a translator unit is added to translate misses in the secondary TLB.

Instruction fetching, decode, and grouping The superscalar design of the microprocessor allows for the decoding of up to three instructions per cycle and the execution of up to five instructions per cycle. Both execution and storage accesses for instruction and operand fetching can occur out of sequence.

Extended translation facility Used in data conversion operations for data encoded in Unicode, causing applications that are enabled for Unicode or globalization to be more efficient. These data-encoding formats are used in Web services, grid, and on-demand environments where XML and SOAP technologies are used.

Instruction set extensions Hexadecimal floating point, Immediate, Load, Cryptographic, Extended Translate Facility-3 and Assist instructions,


83

z196 Compression and Cryptography Accelerator

Data compression engine

– Static dictionary compression and expansion

– Dictionary size up to 64KB (8K entries)

• Local 16KB cache per core for dictionary

data

CP Assist for Cryptographic Function

(CPACF)

– Enhancements for new NIST standard

– Complemented prior ECB and CBC

symmetric cipher modes with XTS, OFB,

CTR, CFB, CMAC and CCM

– New primitives (128b Galois Field multiply) for

GCM

Accelerator unit shared by 2 cores

– Independent compression engines

– Shared cryptography engines

[Diagram: accelerator unit shared by Core 0 and Core 1 – per-core input/output buffers and TLBs, independent compression/expansion engines each with a 16K dictionary cache, and shared crypto cipher and crypto hash engines, attached to the 2nd level cache]

84

Decimal floating point accelerator

The decimal floating point (DFP) accelerator function is present on each of the microprocessors (cores) on the quad-core chip. Its implementation meets business application requirements for better performance, precision, and function.

Base 10 arithmetic is used for most business and financial computation. Floating point computation that is used for work typically done in decimal arithmetic has involved frequent necessary data conversions and approximation to represent decimal numbers. This has made floating point arithmetic complex and error-prone for programmers using it for applications in which the data is typically decimal data.

Benefits

Avoids rounding issues such as those happening with binary-to-decimal conversions.

Has better functionality over existing binary coded decimal (BCD) operations.

Follows the standardization of the dominant decimal data and decimal operations in commercial computing supporting industry standardization (IEEE 754R) of decimal floating point operations. Instructions are added in support of the Draft Standard for Floating-Point Arithmetic, which is intended to supersede the ANSI/IEEE Std 754-1985.
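The rounding issue described above is easy to demonstrate. The sketch below uses Python's decimal module purely to contrast binary and decimal arithmetic in software; it illustrates the motivation, not the z196 DFP hardware itself.

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly, so decimal-style
# arithmetic accumulates representation error:
print(0.10 + 0.20)                      # 0.30000000000000004
print(sum([0.10] * 10) == 1.0)          # False

# Decimal arithmetic keeps exact decimal values, as business code expects:
print(Decimal("0.10") + Decimal("0.20"))            # 0.30
print(sum([Decimal("0.10")] * 10) == Decimal("1"))  # True
```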


85

Processor error detection and recovery

The PU uses something called transient recovery as an error recovery mechanism.

When an error is detected, the instruction unit retries the instruction and attempts to

recover the error. If the retry is not successful (that is, a permanent fault exists), a

relocation process is started that restores the full capacity by moving work to another

PU. Relocation under hardware control is possible because the R-unit has the full

architected state in its buffer
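A toy Python sketch of the retry-then-relocate flow described above; the PU class, the single retry, and the state copy are simplifying assumptions made only for illustration.

```python
class PU:
    def __init__(self, name, healthy=True):
        self.name, self.healthy = name, healthy
        self.state = {}                     # stands in for the R-unit's architected state

    def run(self, instruction):
        return self.healthy                 # a permanently faulty PU fails every attempt

def execute_with_recovery(instruction, pu, spare_pus):
    """Toy illustration of transient recovery: retry in place, then relocate."""
    for _ in range(2):                      # initial attempt + instruction retry
        if pu.run(instruction):
            return pu                       # transient error recovered on the same PU
    # permanent fault: move the work (the architected state) to a spare PU
    spare = spare_pus.pop()
    spare.state = dict(pu.state)
    print(f"relocated work from {pu.name} to {spare.name}")
    return spare

execute_with_recovery("ADD R1,R2", PU("PU03", healthy=False), [PU("spare-PU19")])
```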

86

z196 Multi-Chip Module (MCM) Packaging

96mm x 96mm MCM

– 103 Glass Ceramic layers

– 8 chip sites

– 7356 LGA connections

– 20 and 24 way MCMs

– Maximum power used by MCM is 1800W

CMOS 12s chip Technology

– PU, SC, S chips, 45 nm

– 6 PU chips/MCM – Each up to 4 cores • One memory control (MC) per PU chip

• 23.498 mm x 21.797 mm

• 1.4 billion transistors/PU chip

• L1 cache/PU core

64 KB I-cache

128 KB D-cache

• L2 cache/PU core

1.5 MB

• L3 cache shared by 4 PUs per chip

24 MB

• 5.2 GHz

– 2 Storage Control (SC) chip • 24.427 mm x 19.604 mm

• 1.5 billion transistors/SC chip

• L4 Cache 96 MB per SC chip (192 MB/Book)

• L4 access to/from other MCMs

– 4 SEEPROM (S) chips • 2 x active and 2 x redundant

• Product data for MCM, chips and other engineering

information

– Clock Functions – distributed across PU and SC chips • Master Time-of-Day (TOD) function is on the SC

[Diagram: MCM layout – PU chips PU 0 to PU 5, SC chips SC 0 and SC 1, and SEEPROM chips S00, S01, S10, S11]


87

z196 Quad Core PU Chip Detail

Up to Four active cores per chip

– 5.2 GHz

– L1 cache/ core • 64 KB I-cache • 128 KB D-cache

– 1.5 MB private L2 cache/ core

Two Co-processors (COP)

– Crypto & compression accelerators

– Includes 16KB cache

– Shared by two cores

24MB eDRAM L3 Cache

– Shared by all four cores

Interface to SC chip / L4 cache

– 41.6 GB/sec to each of 2 SCs

I/O Bus Controller (GX)

– Interface to Host Channel Adapter (HCA)

Memory Controller (MC)

– Interface to controller on memory DIMMs

– Supports RAIM design

Chip Area – 512.3mm2

– 23.5mm x 21.8mm

– 8093 Power C4s

– 1134 signal C4s

12S0 45nm SOI

Technology

– 13 layers of metal

– 3.5 km wire

1.4 Billion Transistors

88

z196 SC Chip Detail

[Die photo: four 24 MB L4 cache quadrants, the L4 controller, fabric I/Os, data bit-stacks, pervasive (Perv) logic, clock repower, PLL, and ETR/TOD function]

12S0 45nm SOI

Technology – 13 layers of metal

Chip Area –

478.8mm^2 – 24.4mm x 19.6mm

– 7100 Power C4s

– 1819 signal C4s

1.5 Billion

Transistors – 1 Billion cells for

eDRAM

eDRAM Shared L4

Cache – 96 MB per SC chip

– 192 MB per Book

6 CP chip interfaces

3 Fabric interfaces

2 clock domains

5 unique chip voltage

supplies


89

Major advance in Mainframe processor design

– Deep, high-frequency (5.2 GHz) pipeline

– Aggressive out of order execution core

– 4-level cache hierarchy with eDRAM L3 and L4

Synergy between hardware and software design

– z/Architecture (ISA - Instruction Set Architecture) extensions

– Compiler and micro-architecture co-optimization

– Robust performance gain on existing binaries (code)

Major step up in processor performance

– Up to 40% performance gain on existing compute-intensive code

– Additional gains achievable with recompilation

z196 Microprocessor Summary

90

z196 Processor Features

Model | Books/PUs | CPs | IFLs/uIFLs | zAAPs | zIIPs | ICFs | Std SAPs | Optional SAPs | Std. Spares
M15 | 1/20 | 0-15 | 0-15 / 0-14 | 0-7 | 0-7 | 0-15 | 3 | 0-4 | 2
M32 | 2/40 | 0-32 | 0-32 / 0-31 | 0-16 | 0-16 | 0-16 | 6 | 0-10 | 2
M49 | 3/60 | 0-49 | 0-49 / 0-48 | 0-24 | 0-24 | 0-16 | 9 | 0-15 | 2
M66 | 4/80 | 0-66 | 0-66 / 0-65 | 0-33 | 0-33 | 0-16 | 12 | 0-20 | 2
M80 | 4/96 | 0-80 | 0-80 / 0-79 | 0-40 | 0-40 | 0-16 | 14 | 0-18 | 2

Notes: 1. At least one CP, IFL, or ICF must be purchased in every machine

2. One zAAP and one zIIP may be purchased for each CP purchased even if CP capacity is “banked”.

3. “uIFL” stands for Unassigned IFL

z196 Models M15 to M66 use books each with a 20 core MCM (two 4-core and four 3-core PU chips)

Concurrent Book Add is available to upgrade from model to model (except to the M80)

z196 Model M80 has four books each with a 24 core MCM (six 4-core PU chips)

Disruptive upgrade to z196 Model M80 is done by book replacement


91

z196 Memory

92

Memory design

The z196 memory design also provides

flexibility and high availability

Memory assignment or allocation is done at

power-on reset (POR) when the system is

initialized. PR/SM is responsible for the

memory assignments.

PR/SM has knowledge of the amount of

purchased memory and how it relates to

the available physical memory in each of

the installed books. PR/SM has control

over all physical memory and therefore is

able to make physical memory available to

the configuration when a book is

nondisruptively added. PR/SM also

controls the reassignment of the content of

a specific physical memory array in one

book to a memory array in another book.


93

Large page support

By default, page frames are allocated with a 4 KB size. The z196 supports a large page size of 1 MB. The first z/OS release that supports large pages is z/OS V1R9.

The translation look-aside buffer (TLB) exists to reduce the amount of time required to translate a virtual address to a real address by dynamic address translation (DAT) when it needs to find the correct page for the correct address space.

It is very desirable to have one's addresses in the TLB. With 4 KB pages, holding all the addresses for 1 MB of storage takes 256 TLB lines. When using 1 MB pages, it takes only 1 TLB line. This means that large page size exploiters have a much smaller TLB footprint.
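
The 256:1 reduction is simple arithmetic; the short sketch below (illustrative only, not system code) makes the page-size math explicit for any working-set size.

    # Minimal sketch: TLB entries needed to map a working set at the two
    # z196 page sizes, illustrating the reduction described above.
    def tlb_entries(working_set_bytes: int, page_size_bytes: int) -> int:
        """Each TLB entry maps one page; round up to whole pages."""
        return -(-working_set_bytes // page_size_bytes)  # ceiling division

    MB = 1024 * 1024
    for label, size in (("4 KB pages", 4 * 1024), ("1 MB pages", 1 * MB)):
        print(label, "->", tlb_entries(1 * MB, size), "TLB entries for 1 MB of storage")
    # 4 KB pages -> 256 TLB entries for 1 MB of storage
    # 1 MB pages -> 1 TLB entries for 1 MB of storage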

Hardware system area

The hardware system area (HSA) is a non-addressable storage area that contains server Licensed Internal Code and configuration-dependent control blocks. The HSA has a fixed size of 16 GB and is not part of the purchased memory that is ordered and installed.

94

z196 Redundant Array of Independent Memory (RAIM)

z196 introduces the redundant array of independent memory (RAIM). The RAIM design detects and recovers from DRAM, socket, memory channel or DIMM failures.

The RAIM design requires the addition of one memory channel that is dedicated for RAS. The parity of the four "data" DIMMs is stored in the DIMMs attached to the fifth memory channel. Any failure in a memory component can be detected and corrected dynamically. This design takes the RAS of the memory subsystem to another level, making it essentially a fully fault-tolerant "N+1" design.
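
As a loose illustration of the "N+1" idea (a simplification assumed for this example; the real design layers a stronger Reed-Solomon style ECC on top), the fifth channel lets the content of any one failed channel be reconstructed from the surviving four:

    # Simplified sketch, NOT the actual z196 ECC: XOR parity across five
    # memory channels -- four data channels plus one parity channel -- so a
    # word survives the failure of any single channel.
    from functools import reduce

    def write_word(data_channels):
        """Store 4 data symbols and append a parity symbol on channel 5."""
        parity = reduce(lambda a, b: a ^ b, data_channels)
        return list(data_channels) + [parity]

    def read_word(stored, failed_channel=None):
        """Reconstruct the 4 data symbols even if one channel has failed."""
        if failed_channel is None:
            return stored[:4]
        survivors = [v for i, v in enumerate(stored) if i != failed_channel]
        recovered = reduce(lambda a, b: a ^ b, survivors)
        word = stored[:4]
        if failed_channel < 4:
            word[failed_channel] = recovered
        return word

    stored = write_word([0x11, 0x22, 0x33, 0x44])
    assert read_word(stored, failed_channel=2) == [0x11, 0x22, 0x33, 0x44]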


95

z196 Memory Offerings

Purchase Memory – Memory available for assignment to LPARs
Hardware System Area – Standard 16 GB outside customer memory, for system use
Standard Memory – Provides the minimum physical memory required to hold base purchase memory plus the 16 GB HSA
Flexible Memory – Provides the additional physical memory needed to support activation of base customer memory and HSA on a multiple-book z196 with one book out of service
Plan Ahead Memory – Provides the additional physical memory needed for a concurrent upgrade (LIC CC change only) to a preplanned target customer memory

Model  Standard Memory (GB)  Flexible Memory (GB)
M15    32 - 704              NA
M32    32 - 1520             32 - 704
M49    32 - 2288             32 - 1520
M66    32 - 3056             32 - 2288
M80    32 - 3056             32 - 2288

96

z196 Connectivity


97

z196 I/O Infrastructure

[Diagram: four books (0–3), each with memory, PUs and FBC/L4 cache, drive HCA2-C fanouts; first-level copper cables at 6 GBps connect the fanouts to IFB-MP cards with redundant I/O interconnect (RII) in the I/O cages and drawers; second-level embedded mSTI connections feed ESCON channels (333 MBps mSTI), ISC-3 links (500 MBps mSTI), FICON Express8 channels at 2/4/8 Gbps (2 GBps mSTI) and OSA-Express3 10 GbE ports (2 GBps mSTI); coupling links attach directly to HCA fanouts]

98

z196 system structure for I/O

Acronym  Full name
AID      Adapter identification
CIB      Coupling using InfiniBand
HCA      Host Channel Adapter
MBA      Memory Bus Adapter
PSIFB    Parallel Sysplex using InfiniBand
IFB-MP   InfiniBand Multiplexer
STI-MP   Self-Timed Interconnect Multiplexer


99

z196 Channel Type and Crypto Overview

I/O Channels
– FICON Express8S
– FICON Express8
– FICON Express4 (carry forward on upgrade)
– ESCON – migrate away (240 or fewer)

Networking
– OSA-Express4S
– OSA-Express3
  • 10 Gigabit Ethernet LR and SR
  • Gigabit Ethernet LX and SX
  • 1000BASE-T Ethernet
– OSA-Express2 (carry forward on upgrade)
  • 1000BASE-T Ethernet
  • Gigabit Ethernet LX and SX
– HiperSockets (define only)

Coupling Links
– InfiniBand Coupling Links
  • 12x InfiniBand
  • 1x InfiniBand
– ISC-3 – migrate away (peer mode only)
– IC (define only)

Crypto
– Crypto Express3
  • Configurable Coprocessor / Accelerator

Supported Channel Types

100

CHPID type: OSD – all OSA features (z196, z10, z9, zSeries)
  – Supports Queued Direct Input/Output (QDIO) architecture
  – TCP/IP traffic when Layer 3 (uses IP address); protocol-independent when Layer 2 (uses MAC address)
  – Operating systems: z/OS, z/VM, z/VSE, z/TPF, Linux on Mainframe

CHPID type: OSE – 1000BASE-T (z196, z10, z9, zSeries)
  – Non-QDIO; for SNA/APPN/HPR traffic and TCP/IP "passthru" traffic
  – Operating systems: z/OS, z/VM, z/VSE

CHPID type: OSC – 1000BASE-T (z196, z10, z9, z990, z890)
  – OSA-Integrated Console Controller (OSA-ICC)
  – Supports TN3270E, non-SNA DFT to IPL CPCs & LPs
  – Operating systems: z/OS, z/VM, z/VSE

CHPID type: OSM – 1000BASE-T (z196 exclusive)
  – OSA-Express for Unified Resource Manager
  – Connectivity to the intranode management network (INMN) from z196 to Unified Resource Manager functions
  – Operating systems: z/OS, z/VM, Linux on Mainframe*

CHPID type: OSN – GbE, 1000BASE-T (z196, z10, z9 exclusive)
  – OSA-Express for NCP; appears to the OS as a device supporting the CDLC protocol
  – Enables Network Control Program (NCP) channel-related functions
  – Provides LP-to-LP connectivity, OS to IBM Communication Controller for Linux (CCL)
  – Operating systems: z/OS, z/VM, z/VSE, z/TPF, Linux on Mainframe

CHPID type: OSX – 10 GbE (z196 exclusive)
  – OSA-Express for zBX
  – Connectivity and access control to the intraensemble data network (IEDN) from z196 to zBX
  – Operating systems: z/OS, z/VM, Linux on Mainframe*

*IBM is working with its Linux distribution partners to include support in future Linux on Mainframe distribution releases.

OSA Express 3 Configuration Options Summary


101

FICON Express8S – SX and 10KM LX in the PCIe I/O drawer

For FICON, zHPF, and FCP environments
– CHPID types: FC and FCP
  • 2 PCHIDs/CHPIDs

Auto-negotiates to 2, 4, or 8 Gbps
Increased performance compared to FICON Express8

10KM LX – 9 micron single mode fiber
– Unrepeated distance – 10 kilometers (6.2 miles)
– Receiving device must also be LX

SX – 50 or 62.5 micron multimode fiber
– Distance varies with link data rate and fiber type
– Receiving device must also be SX

2 channels of LX or SX (no mix)

Small form factor pluggable (SFP) optics
– Concurrent repair/replace action for each SFP

[Card diagram: FC 0409 – 10KM LX or FC 0410 – SX; two SFP+ optics at 2, 4, 8 Gbps, each feeding an HBA ASIC with flash and an IBM ASIC, connected through a PCIe switch]

102

HiperSockets

[Diagram: HiperSockets virtual LANs inside one z196 connecting z/OS LP1, Linux LP2, z/VSE LP3, z/VM LP4 (Layer 3) and z/VM LP5 (Layer 2)]

z196 HiperSockets – doubled the number
High-speed "intraserver" network
Independent, integrated, virtual LANs
Communication path – system memory
Communication across LPARs
– A single LPAR can connect to up to 32 HiperSockets
Support for multiple LCSSs & spanned channels
Virtual LAN (IEEE 802.1q) support
HiperSockets Network Concentrator
Broadcast support for IPv4 packets
IPv6
HiperSockets Network Traffic Analyzer (HS NTA)
No physical cabling or external connections required


103

z196 Coupling Link Options

IC (ICP) – Internal Coupling Channel
  Use: internal communication
  Link data rate: internal speeds; Distance: NA
  z196 maximum: 32

InfiniBand (CIB) – 12x InfiniBand
  Use: z196 & z10; z196 & z10 to z9
  Link data rate: 6 GBps (3 GBps*); Distance: 150 meters (492 feet)
  z196 maximum: 32 links

InfiniBand (CIB) – 1x InfiniBand
  Use: z196 & z10
  Link data rate: 5 Gbps or 2.5 Gbps**; Distance: 10 km unrepeated (6.2 miles), 100 km repeated
  z196 maximum: 32 links, 32 CHPIDs

ISC-3 (CFP) – InterSystem Channel-3
  Use: z196, z10, z9
  Link data rate: 2 Gbps; Distance: 10 km unrepeated (6.2 miles), 100 km repeated
  z196 maximum: 48 links, 48 CHPIDs

Maximum of 16 InfiniBand fanouts are allowed, 2 links per fanout
Maximum of 128 coupling CHPIDs (ICP, CIB, CFP) per server
Each InfiniBand link supports definition of multiple CIB CHPIDs, up to 16 per fanout
* z196 & z10 negotiate to 3 GBps when connected to a z9
** May negotiate to 2.5 Gbps when connected to a DWDM
Note: ICB-4 is not supported on z196

104

InfiniBand coupling links on z196 and z114 Summary

Type            Speed          Distance    Fanouts                  Cabling
12x InfiniBand  6 or 3 GBps    150 meters  HCA2-O, HCA3-O           50µ MM (OM3) fiber
1x InfiniBand   5 or 2.5 Gbps  10 km       HCA2-O LR*, HCA3-O LR    9µ SM fiber

Ports exit from the front of a book/processor drawer with HCAs. Unlike ISC-3 links, InfiniBand coupling does not use I/O card slots.

12x InfiniBand – z196, z114, z10, z9
– 6 GBps: z196/z114 and z10
– 3 GBps: z196/z114 & z10 to z9
– z9 to z9 connection is not supported

1x InfiniBand – z196/z114 and z10 (not available for z9)
– 5 Gbps (server to server and with DWDM)
– 2.5 Gbps (with DWDM)

[Fanout diagram: HCA2-O (2 IFB ports) – up to 16 CHPIDs across 2 ports; HCA2-O LR* (2 IFB ports) – up to 16 CHPIDs across 2 ports; HCA3-O (2 IFB & IFB3 ports) – up to 16 CHPIDs across 2 ports**; HCA3-O LR (4 IFB ports) – up to 16 CHPIDs across 4 ports**]

* For z114 carry forward only
** Performance considerations may reduce the number of CHPIDs per port


105

Mainframe Crypto History

Cryptographic Coprocessor Facility (CCF) – supports "secure key" cryptographic processing
PCICC feature – supports "secure key" cryptographic processing
PCICA feature – supports "clear key" SSL acceleration
PCIXCC feature – supports "secure key" cryptographic processing
CP Assist for Cryptographic Function – allows limited "clear key" crypto functions from any CP/IFL
– NOT equivalent to the CCF on older machines in function or Crypto Express2 capability
Crypto Express2 – combines the function and performance of PCICA and PCICC
Crypto Express3 – PCIe interface, additional processing capacity with improved RAS

[Timeline 2001–2010: Cryptographic Coprocessor Facility (CCF) single chip module, with ICSF on OS/390, on G3, G4, G5, G6, z900, z800; PCI Cryptographic Coprocessor (PCICC) on G5, G6, z900, z800; PCI Cryptographic Accelerator (PCICA) on z800/z900 and z990; PCIX Cryptographic Coprocessor (PCIXCC) on z990 and z890; CP Assist for Cryptographic Functions on z990, z890, z9 EC, z9 BC and later; Crypto Express2 on z990/z890, z9 EC, z9 BC, z10 EC/BC; Crypto Express3 (2009) on z10 EC/BC and (2010) z196]

106

Crypto Express3 2-P Physical View

[Card photo: two coprocessors and the PCIe interfaces]


107

Crypto Express3 features highlights

Virtualization: all logical partitions (LPARs) in all Logical Channel Subsystems (LCSSs) have access to the Crypto Express3 feature, up to 32 LPARs per feature
Designed for improved reliability, availability and serviceability (RAS)
Executes its cryptographic functions asynchronously to a Central Processor (CP) operation
Dynamic addition / configuration of cryptographic features to logical partitions without an outage
Dynamic power management to maximize RSA performance while keeping within temperature limits of the tamper-responding package
Secure code loading that enables the updating of functionality while installed in application systems
Lock-step checking of dual CPUs for enhanced error detection and fault isolation of cryptographic operations performed by the coprocessor when a PCIe adapter is defined as a coprocessor
Updated cryptographic algorithms used in loading the Licensed Internal Code (LIC) with the TKE workstation, to keep in step with current recommendations for cryptographic strength
Support for smart card applications using Europay, MasterCard, Visa specifications
Health monitoring of mesh, temperature, voltage, soft tamper and low battery; monitoring is internal to the card and cannot be accessed or exploited by the customer

108

z196 New and exclusive cryptographic capabilities

Elliptic Curve Cryptography Digital Signature Algorithm (ECC), an emerging public key algorithm expected eventually to replace RSA cryptography in many applications. ECC is capable of providing digital signature functions and key agreement functions. The new CCA functions provide ECC key generation and key management, and provide digital signature generation and verification functions in compliance with the ECDSA method described in ANSI X9.62 "Public Key Cryptography for the Financial Services Industry: The Elliptic Curve Digital Signature Algorithm (ECDSA)". ECC keys are much shorter than RSA keys of equivalent strength; RSA becomes impractical at key lengths whose strength matches AES-192 and AES-256, so the strength per key bit is substantially greater in an algorithm that uses elliptic curves.
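
To show what the ECDSA sign/verify flow looks like in practice, here is a minimal sketch using the open-source Python "cryptography" package purely as an illustration (an assumption of this example; it is unrelated to the CCA or ICSF programming interfaces):

    # Illustrative ECDSA key generation, signing and verification on the
    # P-256 curve; verification raises InvalidSignature if data is altered.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.exceptions import InvalidSignature

    private_key = ec.generate_private_key(ec.SECP256R1())
    public_key = private_key.public_key()

    message = b"payment authorization record"
    signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

    try:
        public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
        print("signature valid")
    except InvalidSignature:
        print("signature invalid")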

ANSI X9.8 PIN security which facilitates compliance with the processing requirements defined in the new version of the ANSI X9.8 and ISO 9564 PIN Security Standards and provides added security for transactions that require Personal Identification Numbers (PIN).

Enhanced Common Cryptographic Architecture (CCA), a Common Cryptographic Architecture (CCA) key token wrapping method using Cipher Block Chaining (CBC) mode in combination with other techniques to satisfy the key bundle compliance requirements in standards including ANSI X9.24-1 and the recently published Payment Card Industry Hardware Security Module (PCI HSM) standard.
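
For readers unfamiliar with CBC-mode wrapping, the sketch below encrypts one key under a key-encrypting key with AES in CBC mode; it is a generic illustration only (again assuming the third-party "cryptography" package) and omits the additional binding techniques that the CCA key token wrapping method applies on top of CBC:

    # Generic AES-CBC key wrapping illustration -- NOT the CCA wrapping method.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key_encrypting_key = os.urandom(32)   # KEK, held inside the HSM in practice
    target_key = os.urandom(32)           # key material to wrap (multiple of 16 bytes)
    iv = os.urandom(16)

    encryptor = Cipher(algorithms.AES(key_encrypting_key), modes.CBC(iv)).encryptor()
    wrapped = encryptor.update(target_key) + encryptor.finalize()

    decryptor = Cipher(algorithms.AES(key_encrypting_key), modes.CBC(iv)).decryptor()
    assert decryptor.update(wrapped) + decryptor.finalize() == target_key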

Secure Keyed-Hash Message Authentication Code (HMAC), a method for computing a message authentication code using a secret key and a secure hash function. It is defined in the standard FIPS 198, "The Keyed-Hash Message Authentication Code". The new CCA functions support HMAC using SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512 hash algorithms. The HMAC keys are variable-length and are securely encrypted so that their values are protected.
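
A brief sketch of the HMAC computation defined in FIPS 198, using only the Python standard library (an illustration of the algorithm, not the CCA callable services; the key and message values are made up):

    # Compute and verify an HMAC-SHA-256 tag over a message with a secret key.
    import hmac
    import hashlib

    key = b"a-secret-key-kept-encrypted-in-practice"
    message = b"transaction record"

    tag = hmac.new(key, message, hashlib.sha256).hexdigest()
    print("HMAC-SHA-256:", tag)

    # The verifier recomputes the tag with the same key and compares in
    # constant time to avoid timing side channels.
    assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())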

Modulus Exponent (ME) and Chinese Remainder Theorem (CRT), RSA encryption and decryption with key lengths greater than 2048-bits and up to 4096-bits.
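
The difference between the ME and CRT forms can be seen with toy textbook numbers (illustrative only; they are not from the source, and real keys are 2048 bits and longer):

    # RSA decryption two ways: Modulus Exponent (ME), m = c^d mod n, and the
    # Chinese Remainder Theorem (CRT) form, which uses two smaller
    # exponentiations mod p and mod q and then recombines the results.
    p, q = 61, 53
    n = p * q                    # 3233
    e, d = 17, 2753              # e*d = 1 mod lcm(p-1, q-1)

    c = pow(42, e, n)            # encrypt the message 42

    m_me = pow(c, d, n)          # ME decryption

    dp, dq = d % (p - 1), d % (q - 1)
    q_inv = pow(q, -1, p)        # CRT decryption (Garner recombination)
    m1, m2 = pow(c, dp, p), pow(c, dq, q)
    m_crt = m2 + ((q_inv * (m1 - m2)) % p) * q

    assert m_me == m_crt == 42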


109

Certifications for Mainframe

[Diagram: z/OS and z/VM with Linux guests in logical partitions – virtualization with partitions and cryptography]

z/OS
– Common Criteria EAL4+ with CAPP and LSPP
  ‒ z/OS 1.10
  ‒ z/OS 1.11 June 2010
  ‒ z/OS 1.12 under evaluation
– IdenTrust™ certification for z/OS as a Digital Certificate Authority (PKI Services)

Logical partitions
– Common Criteria EAL5 for logical partitions
  ‒ z9 EC, z9 BC, z10 EC, z10 BC – EAL5
  ‒ z196 – EAL5+

z/VM
– Common Criteria EAL4+
  ‒ z/VM 5.4 + RACF with CAPP and LSPP – designed for compliance
  ‒ Plan to have z/VM 6.1 evaluated during 2011

Linux on Mainframe
– Common Criteria
  ‒ Novell SUSE SLES9 and 10 certified at EAL4+ with CAPP
  ‒ RHEL5 EAL4+ with CAPP and LSPP

Cryptography
– FIPS 140-2 Level 4 for Crypto Express2

The Common Criteria program developed by NIST and NSA establishes an organizational and technical framework to evaluate the trustworthiness of IT products and protection profiles.

See: www.ibm.com/security/standards/st_evaluations.shtml

110

z196 RAS


111

Mainframe overall RAS Strategy – Continuing our RAS focus helps avoid outages

[Chart: impact of outage, in hours/year/system, compared across pre-z9 servers, z9 EC, z10 EC and z196 – unscheduled outages (UIRA), scheduled outages (CIE + disruptive patches + ECs), planned outages (MES + driver upgrades), preplanning requirements, and power & thermal management]

Temperature = silicon reliability's worst enemy. Wearout = mechanical components' reliability worst enemy.

112

Redundancy in Mainframe Hardware

Power – 2x power supply, 2x power feed
Internal Battery Feature – optional internal battery (in case of loss of external power)
Cooling – air- and water-cooling options
Dynamic oscillator switchover
Processors – multiprocessors, spare PUs
Memory – RAIM, chip sparing, Error Correction and Checking
Enhanced book availability

The zEnterprise System continues to reduce customer down time by focusing on all sources of outages: unscheduled outages, scheduled outages and planned outages. Power and cooling requirements were reduced while still managing reliability.
– Major new memory design for preventing outages
– Introducing new I/O drawer technologies designed for concurrent service
– Introducing Mainframe management to the mixed computing environment
– Delivering Green functions and RAS together


113

Preventing All Outages

Unscheduled Outages
– Advanced Memory RAIM (Redundant Array of Independent Memory) design
– Enhanced Reed-Solomon code (ECC) – 90B/64B
– Protection against channel/DIMM failures
– Chip marking for fast DRAM replacements
– Mirrored Key cache
– Improved chip packaging
– Improved condensation management
– Integrated TCP/IP checksum generation/checking
– Integrated EPO switch cover (protecting the switch during repair actions)
– Continued focus on Firmware

Scheduled Outages
– Double memory data bus lane sparing (reducing repair actions)
– Single memory clock bus sparing
– Field repair of the interface between processor chip and cache chip, and between cache chips (fabric bus)
– Fast bitline delete on L3/L4 cache (largest caches)
– Power distribution using N+2 Voltage Transformation Modules (VTM)
– Redundant (N+2) humidity sensors
– Redundant (N+2) altitude sensors
– Unified Resource Manager for zBX

114

Improving Reliability

Power & Thermal Optimization and Management
– Static Power Save Mode under z/OS control
– Smart blower management by sensing altitude and humidity
– Enhanced evaporator design to minimize temperature variations
– MCM cooling with N+1 design feature
– MCM backup cooling with heat exchanger
– Available air-to-water heat exchanger for customer water cooling

Concurrent Maintenance and Upgrades
– Duplex units
  • Power supplies
– Concurrent Microcode (Firmware) updates
– Hot pluggable I/O
– PU conversion
– Permanent and temporary capacity upgrades
  • Capacity Upgrade on Demand (CUoD)
  • Customer Initiated Upgrade (CIU)
  • On/Off Capacity on Demand (On/Off CoD)
– Capacity BackUp (CBU)
– Flexible & Plan Ahead Memory Options


115

z196 RAS Design – Fully Redundant I/O Subsystem (existing I/O cages and drawers)

Fully Redundant I/O Design
– SAP / CP sparing
– SAP Reassignment
– I/O Reset & Failover
– I/O Mux Reset / Failover
– Redundant I/O Adapter
– Redundant I/O interconnect
– Redundant Network Adapters
– Redundant ISC links
– Redundant Crypto processors
– I/O Switched Fabric
– Network Switched/Router Fabric
– High Availability Plugging Rules
– I/O and coupling fanout rebalancing on CBA
– Channel Initiated Retry
– High Data Integrity Infrastructure
– I/O Alternate Path
– Network Alternate Path
– Virtualization Technology

[Diagram: processor with two I/O hubs feeding I/O muxes with failover; duplicated I/O adapters, network adapters, crypto accelerators and ISC links; alternate paths through duplicated I/O switches to the control unit and through duplicated network switches to the end user]

116

Create a redundant I/O configuration

[Diagram: LPAR 1 … LPAR n attached to two channel subsystems (CSS / CHPIDs), each connected through its own Director (switch) to duplicated DASD control units, giving every LPAR multiple independent paths to the devices]
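
To make the value of the doubled-up configuration concrete, the sketch below (a hypothetical path model with made-up names, not IOCP/HCD syntax) enumerates the paths and checks that no single component failure isolates the control unit.

    # Minimal sketch: with two CHPIDs, two directors and two control unit
    # ports there are eight paths, and any single failure still leaves at
    # least one path from the LPAR to the DASD control unit.
    from itertools import product

    chpids   = ["CHPID 20", "CHPID 31"]        # hypothetical channel paths
    switches = ["Director A", "Director B"]    # two FICON directors
    cu_ports = ["CU port 0", "CU port 1"]      # two control unit interfaces

    paths = list(product(chpids, switches, cu_ports))

    def reachable_after(failed_component):
        """True if at least one path avoids the failed component."""
        return any(failed_component not in path for path in paths)

    for component in chpids + switches + cu_ports:
        assert reachable_after(component), f"{component} failure isolates the device"
    print(len(paths), "paths; no single failure isolates the DASD control unit")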


117

BACKUP Slides


118

Did You Know...


119

Did You Know...

120

Did You Know...


121

Did You Know...

122

Quick Quiz: Which is the Better Vehicle?

VS.

The Race - ¼ Mile Dragstrip

Maserati MC12 Peterbilt Semi

Page 62: Introduction to mainframe hardware

62

123

Quick Quiz: Which is the Better Vehicle?

VS.

The Race - ¼ Mile Dragstrip

Carrying 10 tons of crates

100 Maserati MC12 Peterbilt Semi

124

Is this Better?

VS.

The Race - ¼ Mile Dragstrip

Carrying 10 tons of crates

Peterbilt Semi

10 Ford F-450 1 ton Pickups


125

Mainframe History - Myth vs. Truth

http://www-01.ibm.com/software/info/getztruth/index.jsp

ftp://public.dhe.ibm.com/s390/misc/bookoffer/download/360revolution_040704.pdf

http://www.vm.ibm.com/devpages/jelliott/pdfs/zhistory.pdf

ftp://ftp.software.ibm.com/s390/audio/pdfs/zSeriesMainframeInnovations.pdf

http://www.ca.com/us/bigiron.aspx

126

Academic Initiative: Success Story

IBM Training Center for Mainframe opens at VIT, Vellore, India – Students in Indian universities and colleges are generally taught about personal computers and small server technology. But mainframes run businesses, and students need exposure to this critical technology before they enter the workforce to maximize their effectiveness. The need was recognized by Chancellor G. Viswanathan of the Vellore Institute of Technology (VIT) in India – as well as by an IBM team looking for engineering institutes.

• The opportunity opened in September 2010 for IBM to establish a partnership with VIT, a top-tier academic institute, which was ready to commit to nurturing and growing a System z ecosystem. Developed as part of the IBM Academic Initiative System z Program, this unique initiative builds partnerships with colleges and universities worldwide to add IBM enterprise systems education to curricula and put the mainframe back in computer science programs.

– The program kicked off with several training sessions that the India System z ISV team conducted at the VIT campus to educate professors and students on z/OS concepts. IBM helped the university explore enterprise system solutions as they built a long-term partnership with professors and the student community. In August, the "IBM Training Center for Mainframes" was officially launched at VIT. This is the second Training Center for Mainframe of its type in the world, and the first one in Asia. It has 60 dedicated high-end computers with 24/7 connectivity to the centralized System z in Dallas, Texas.

– VIT has added a course on “Introduction to Mainframe Computing” and offered it as an elective in its Master of Computer Applications curriculum.

– Chancellor Viswanathan has referred to IBM as a big role model and ensured that VIT will make use of IBM's presence to foster innovation. "India has the big advantage of producing the largest number of IT professionals in the world. The VIT will try to emerge as one of IBM's best partners," he said.


127

Master the Mainframe Contest

No experience with mainframes is necessary. In fact, the contest is designed for students with little or no mainframe experience, increasing in difficulty as the contest progresses. Students just need to bring drive and competitive spirit and be ready to compete!

Experience!

Today's mainframes are growing in popularity and require a new generation of mainframe experts. This contest is designed to equip you with basic skills to make you more competitive for jobs in the enterprise computing industry. Participation in the Master the Mainframe contest could give you the edge you need.

128

References

Introduction to the New Mainframe: Large-Scale Commercial Computing, SG24-7175-00

IBM Mainframe Strengths and Values, SG24-7333-01

Introduction to the New Mainframe: z/OS Basics, SG24-6366-01

ABCs of z/OS System Programming Volume 1, SG24-6981-01

ABCs of z/OS System Programming Volume 2, SG24-6982-02

ABCs of z/OS System Programming Volume 3, SG24-6983-03

ABCs of z/OS System Programming Volume 4, SG24-6984-00

ABCs of z/OS System Programming Volume 5, SG24-6985-01

ABCs of z/OS System Programming Volume 6, SG24-6986-00

ABCs of z/OS System Programming Volume 7, SG24-6987-01

ABCs of z/OS System Programming Volume 8, SG24-6988-00

ABCs of z/OS System Programming Volume 9, SG24-6989-04

ABCs of z/OS System Programming Volume 9, SG24-6989-05

ABCs of z/OS System Programming Volume 10, SG24-6990-03

ABCs of z/OS System Programming Volume 11, SG24-6327-01

ABCs of z/OS System Programming Volume 12, SG24-7621-00

ABCs of z/OS System Programming Volume 13, SG24-7717-00

IBM zEnterprise 196 Technical Guide, SG24-7833-00

IBM zEnterprise System Technical Introduction, SG24-7832-00

IBM zEnterprise 114 Technical Guide, SG24-7954-00

IBM System z Connectivity Handbook, SG24-5444-12

z/Architecture Principles of Operation, SA22-7832-08

z/Architecture Reference Summary, SA22-7871-06