
IBM System p

© 2007 IBM Corporation

POWER6 - Virtualization

Dr. Martin Springer

IBM System p Technical Sales

IBM's 40-Year History of Leadership in Virtualization

Advanced POWER Virtualization on IBM System p servers

1967 – IBM develops the hypervisor that would become VM on the mainframe
1973 – IBM announces the first machines to do physical partitioning
1987 – IBM announces LPAR on the mainframe
1997 – POWER LPAR design begins
2001 – IBM introduces LPAR in POWER4™ with AIX
2004 – Advanced POWER Virtualization ships
2007 – IBM announces POWER6, the first UNIX® servers with Live Partition Mobility

"In our opinion, they [System p servers] bring mainframe-quality virtualization capabilities to the world of AIX."
– Ulrich Klenke, CIO, rku.it, January 2006
Client quote source: rku.it case study published at http://www.ibm.com/software/success/cssdb.nsf/CS/JSTS-6KXPPG?OpenDocument&Site=eserverpseries

Server Virtualization

• Hardware partitioning: physical partitioning into adjustable partitions, coordinated by a partition controller (Sun Domains, HP nPartitions).

• Hypervisor, Type 1: hypervisor software/firmware runs directly on the server (zSeries PR/SM and z/VM, POWER Hypervisor, VMware ESX Server, emerging open-source hypervisors).

• Hypervisor, Type 2: hypervisor software runs on a host operating system (VMware GSX, Microsoft Virtual Server, Win4Lin).

[Diagram: three SMP-server stacks, one per approach, each running multiple OS/application pairs; the hypervisor sits either directly on the hardware (Type 1) or on a host OS (Type 2).]

POWER5/6 – CPU Virtualization

• Processors
– Dedicated or shared processors
– Fine-grained resource allocation: minimum 0.1 cores, granularity 0.01 cores
– Shared processor controls: number of virtual processors, entitlements, capped and uncapped, weights
– Dynamically adjustable via DLPAR (a command sketch follows below)

• Memory
– From 128 MB to all of physical memory
– Dedicated physical memory
– Dynamically adjustable via DLPAR

• Capacity on Demand

• Scaling
– Up to 254 partitions per server
– Partitions scale from 0.1 to 64 cores

[Diagram: dedicated physical CPUs beside a shared pool of physical CPUs; partitions AIX 5L V5.2 (2 dedicated CPUs), AIX 5L V5.3 (2.6 CPUs, weight 50), i5/OS (0.65 CPUs, weight 20), and Linux (0.75 CPUs, capped) run on virtual CPUs; CoD CPUs are held as dynamic spares and Capacity on Demand.]
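Since the shared-processor settings are DLPAR-adjustable, a minimal sketch of such an adjustment from the HMC command line may be useful. The managed-system name p570-1 and partition name lpar1 are hypothetical examples.

    # Add 0.5 processing units to a running shared-processor partition
    chhwres -r proc -m p570-1 -o a -p lpar1 --procunits 0.5
    # Add one virtual processor
    chhwres -r proc -m p570-1 -o a -p lpar1 --procs 1
    # Add 512 MB of memory (quantity in MB)
    chhwres -r mem -m p570-1 -o a -p lpar1 -q 512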

p5 Micro-Partitioning: Load-Dependent, Dynamic Assignment of CPUs or CPU Shares

• Dynamic distribution of free processor capacity across logical partitions
• Granularity: 1/100 of a p5 CPU
• In total, less server capacity is required, because the sizing no longer has to cover the sum of the individual peak loads, only the peak of the cumulative load.
• The IBM pSeries DLPAR advantages, such as application isolation, are preserved.

[Chart: utilization over time (0–100%) of one physical server hosting three logical servers, illustrating the dynamic distribution of capacity.]
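To make the sizing argument concrete with assumed, illustrative numbers: if three logical servers each peak at 4 CPUs but never at the same time, sizing for the sum of the individual peaks requires 3 × 4 = 12 CPUs, whereas the peak of the cumulative load might be only, say, 6 CPUs; with micro-partitioning a shared pool of 6 CPUs then suffices.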

Defining Micro-Partitions

• Uncapped versus capped
– The hypervisor automatically returns CPU cycles not used by the LPARs (uncapped and capped) to the shared processor pool.
– Uncapped micro-partitions can use additional processor resources when free resources are available in the shared processor pool; the allocation is governed by a weighting factor.
– Capped micro-partitions never receive more than their assigned capacity entitlement (CPU capacity).

[Diagram: a shared processor pool of four POWER5 chips (CPU 0–CPU 3), each with two SMT cores, 1.9 MB L2 caches, L3 directories, memory controllers, and chip-to-chip/MCM-to-MCM/SMP links through an enhanced distributed switch; the POWER Hypervisor dispatches the micro-partitions onto this pool.]
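Whether a partition is capped or uncapped, its entitlement, and its weight can be inspected from the HMC command line. A minimal sketch; the managed-system name is a hypothetical example, and the exact field names can vary by HMC release:

    # List processor mode, sharing mode, entitlement, and uncapped weight per LPAR
    lshwres -r proc -m p570-1 --level lpar \
      -F lpar_name,curr_proc_mode,curr_sharing_mode,curr_proc_units,curr_uncap_weight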

Example 1 of CPU Sharing (1/7)

[Diagram: a shared pool serving LPAR 1 (CE = 0.4, uncapped, variable workload), LPAR 2 (CE = 1.1, capped, constant workload), LPAR 3 (CE = 0.6, capped, constant workload), and further LPARs up to LPAR n (capped, constant workload), each with one or more virtual processors. CE = capacity entitlement; vP = virtual processor.]

Example 1: LPAR1 uncapped (2/7)

[Diagram: the virtual processors of LPARs 1–n are dispatched onto a shared processor pool of four real CPUs; 10 ms = 1 dispatch interval.] A capacity of 0.4 real CPUs is guaranteed for LPAR1 (CE = 0.4, uncapped).
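Expressed in dispatch-interval terms: with a 10 ms dispatch interval, LPAR1's capacity entitlement of 0.4 CPUs guarantees it the equivalent of 0.4 × 10 ms = 4 ms of physical processor time in every interval, spread across its virtual processors; because it is uncapped, it may receive more when the pool has idle cycles.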

Example 1: LPAR1 uncapped (3/7)

[Diagram: the same configuration in the next dispatch interval.] Since LPAR1 is uncapped, it can grab more CPU resources from the shared processor pool when they are available.

Example 1: LPAR1 uncapped (4/7 – 6/7)

[Diagrams: the same configuration shown over three further 10 ms dispatch intervals, without additional annotations.]

Example 1: LPAR1 uncapped (7/7)

[Diagram: the same configuration.] If the workload demands fewer than 0.4 CPUs, LPAR1 frees its unused resources back to the POWER Hypervisor (PHYP).

Example 2: LPAR3 capped

[Diagram: the same shared pool of four real CPUs; 10 ms = 1 dispatch interval.] LPAR3 runs with CE = 0.6, capped: it never receives more than 0.6 CPUs per dispatch interval, even when the pool has idle capacity.
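On AIX, the difference between capped and uncapped consumption can be observed with lparstat. A minimal sketch; the column interpretation is from memory, and the exact output layout varies by AIX release:

    # Take 5 samples at 1-second intervals
    lparstat 1 5
    # %entc: consumed capacity as a percentage of the entitlement; stays at or
    #        below 100 for a capped LPAR, can exceed 100 for an uncapped one.
    # app:   idle capacity in the shared pool (shown only if pool-utilization
    #        reporting is enabled for the partition).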

CPU Sharing and Resources

[Charts: per-partition utilization over time, with and without CPU sharing.]

• With CPU sharing there are no unusable resources.
• A System p server can be utilized up to 100%.

Highly Available Storage Attachment: Mirroring with the AIX Logical Volume Manager

[Diagram: two Virtual I/O Servers, each with a physical adapter to internal SCSI or SAN-attached disks. On each VIOS, a backing device (hdisk or logical volume) is exported through a vSCSI target device vtscsi0 on server SCSI adapter vhost0. Through the POWER Hypervisor, the AIX client partition sees client SCSI adapters vscsi0 and vscsi1, giving it hdisk0 and hdisk1 in rootvg, which it mirrors with the LVM (LVM mirroring).]
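A minimal command sketch of this setup under stated assumptions: hdisk2 stands in for the backing disk on each VIOS, and the disk numbering on the client is illustrative.

    # On each Virtual I/O Server (padmin shell): export a backing disk
    # through the virtual SCSI server adapter vhost0
    mkvdev -vdev hdisk2 -vadapter vhost0 -dev vtscsi0

    # On the AIX client: add the second disk to rootvg and mirror it
    extendvg rootvg hdisk1
    mirrorvg rootvg hdisk1
    bosboot -ad /dev/hdisk1          # make the new mirror bootable
    bootlist -m normal hdisk0 hdisk1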

Highly Available Storage Attachment via AIX Multipath I/O (MPIO) for SAN Storage

[Diagram: a SAN storage LUN reached by two Virtual I/O Servers, each through two physical adapters with MPIO SDDPCM load balancing. On each VIOS the backing hdisk is set to reserve_policy=no_reserve and exported as vSCSI target device vtscsi0 on server SCSI adapter vhost0. The AIX client partition uses client SCSI adapters vscsi0 and vscsi1 and runs the MPIO default PCM (failover only) over both paths to hdisk0 in rootvg.]
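A minimal sketch of the key settings; the disk numbers are illustrative. Releasing the SCSI reserve on the backing LUN is what allows both VIO servers to present the same disk:

    # On each Virtual I/O Server (padmin shell): disable the SCSI reserve
    # on the shared LUN before exporting it
    chdev -dev hdisk2 -attr reserve_policy=no_reserve
    mkvdev -vdev hdisk2 -vadapter vhost0 -dev vtscsi0

    # On the AIX client: verify that MPIO sees one path per VIO server
    lspath -l hdisk0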

Virtual Ethernet

• The hypervisor acts as an Ethernet switch.
• Virtual Ethernet is based on memory copies between LPARs.
• The LPARs see virtual Ethernet adapters.
• Total bandwidth is roughly that of Gigabit Ethernet.
• No dedicated physical network adapter is needed for each LPAR.

[Diagram: AIX 5.3, AIX 5.3, Red Hat AS 4, and SLES9/10 partitions, each with a virtual Ethernet driver attached to the hypervisor's virtual Ethernet switch.]
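From the client's point of view, a virtual Ethernet adapter is configured like any other. A minimal sketch with a hypothetical hostname and addresses:

    # The virtual adapter appears as an ent device
    lsdev -Cc adapter | grep ent

    # Configure an IP address on its interface
    mktcpip -h lpar1 -a 10.1.1.12 -m 255.255.255.0 -i en0 -g 10.1.1.1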

Virtual I/O Server as an Ethernet Bridge

• The Shared Ethernet Adapter (SEA) serves as a layer-2 bridge to the external network.
• A separate virtual adapter can be created in the Virtual I/O Server for each VLAN.
– With several adapters there are more queues, for higher performance.
• Several virtual adapters can be connected to one real adapter.
• Each virtual adapter can also be tied to a real adapter of its own.
• Redundant SEAs in two Virtual I/O Servers are possible.

[Diagram: a Virtual I/O Server bridging virtual adapters on VLAN1 (10.1.1.11) and VLAN2 (10.1.2.11) through the SEA to real adapters RIOA 10.1.1.14 and RIOA 10.1.2.15; client LPARs attach to VLAN1 and VLAN2.]
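Creating the bridge is a single VIOS command. A minimal sketch, assuming ent0 is the physical adapter and ent1 the virtual trunk adapter for the default VLAN; the command returns a new SEA device (e.g. ent2):

    # On the Virtual I/O Server (padmin shell)
    mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1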

Highly Available Network Attachment via Shared Ethernet Adapter and AIX Network Interface Backup

[Diagram: VIO1 and VIO2 each bridge a physical adapter ent0 and a virtual adapter ent1 (PVID 1 or PVID 2) into an SEA ent2, attached to the network switch(es). The client LPAR has two virtual adapters, ent0 (PVID 1) and ent1 (PVID 2), combined into a Network Interface Backup pseudo-adapter ent2 (NIB).]

No VLAN tagging possible!
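On the client, NIB is an EtherChannel in backup mode, usually configured through smitty etherchannel; the equivalent mkdev call is roughly the following sketch (the ping target netaddr used for failure detection is a hypothetical example):

    # Primary ent0, backup ent1; ping 10.1.1.1 to detect a path failure
    mkdev -c adapter -s pseudo -t ibm_ech \
      -a adapter_names=ent0 -a backup_adapter=ent1 -a netaddr=10.1.1.1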

Shared Ethernet Adapter Failover

[Diagram: VIO1 (priority 1, active) and VIO2 (priority 2, standby) each combine a physical adapter ent0, a trunk virtual adapter ent1 (VLAN ID 100, VLAN ID 200, PVID 1), and a control-channel virtual adapter ent2 (PVID 99) into an SEA ent3. The client LPAR has a single virtual adapter ent0 (VLAN ID 100, VLAN ID 200, PVID 1) attached via the hypervisor.]

VLAN tagging possible!
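With SEA failover, the redundancy logic lives in the two VIO servers, so a minimal sketch needs only one extra attribute pair per SEA (adapter numbers as in the diagram; the trunk priorities 1 and 2 are set on the virtual adapters at the HMC):

    # On each Virtual I/O Server (padmin shell): SEA with automatic failover
    # over control channel ent2
    mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1 \
      -attr ha_mode=auto ctl_chan=ent2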

Example of I/O Infrastructure Costs – Dedicated Servers

– 10 x 2-way HA servers
– 2 x Fibre Channel each
– 2 x Gigabit Ethernet each
(20 LAN ports, 20 SAN ports)

Position           Unit price   Per server   Total (10 servers)
FC HBA             1,500 €      3,000 €      30,000 €
GbE HBA            1,200 €      2,400 €      24,000 €
FC switch port     1,500 €      3,000 €      30,000 €
GbE switch port    1,400 €      2,800 €      28,000 €
Sum                             11,200 €     112,000 €

Example of I/O Infrastructure Costs – Virtualized Servers (Virtual I/O, Virtual CPU)

– 1 x 8-way HA server
– 2 x Fibre Channel
– 4 x Gigabit Ethernet
(2 FC ports, 4 GbE ports)

Position           Unit price   Total
FC HBA             1,500 €      3,000 €
GbE HBA            1,200 €      4,800 €
FC switch port     1,500 €      3,000 €
GbE switch port    1,400 €      5,600 €
Sum                             16,400 €

Savings: 95,600 €, or 85%
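The savings figure follows directly from the two tables: 112,000 € − 16,400 € = 95,600 €, and 95,600 / 112,000 ≈ 0.85, i.e. roughly 85%.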

Benefits of Virtualization

• Shared use of resources
• More efficient use of resources
– CPU: virtual CPUs
– Network: virtual Ethernet
– Disks: virtual disks
• Less hardware, lower costs
• More flexibility
• Faster response to new requirements
• Dynamic LPARs / virtual resources
• Examples:
– Test environments
– Pilot projects
– Prototypes
– Variants
• And still: the same performance for end users

Options for System p Virtualization Management

Stand-Alone:
• No virtualization functionality
• Support for all IBM System p5, IBM eServer p5, and IBM eServer OpenPower models

IVM Managed:
• No HMC
• Single-server management
• Entry-level partitioning
• Limited service functions
• Client LPARs can only be assigned virtual devices
• One profile per partition
• Limited redundancy
• Supported models: IBM System p5 505, 520, 550, 550Q; IBM eServer p5 510, 520, 550; IBM eServer OpenPower 710, 720

HMC:
• HMC required
• Multi-server management
• Advanced partitioning
• Physical and/or virtual devices can be assigned to client LPARs
• Multiple profiles per partition
• Full redundancy
• Support for all IBM System p5, IBM eServer p5, and IBM eServer OpenPower models

[Diagram: a stand-alone server administered via its service processor (SP) from a browser; an IVM-managed server with the Integrated Virtualization Manager running in a partition on the hypervisor; HMC-managed servers with multiple partitions.]

POWER6: Shared Dedicated Capacity

[Chart: processor utilization (0–200%) for the partitions "0.5 Uncapped 2", "0.5 Uncapped 1", and "1 Core Dedicated", showing the formerly wasted dedicated capacity being consumed.]

– With the new support, a dedicated partition donates its excess cycles to the uncapped partitions.
– Each uncapped partition can consume an entire processor when the dedicated partition is at 0% utilization, and the uncapped partitions split a processor when the dedicated partition is fully utilized (100%).
– The system's total processor capacity is better utilized, while the dedicated-processor partition keeps the performance characteristics and predictability of the dedicated environment when under load.
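On the HMC, this donation behavior is controlled by the dedicated partition's processor sharing mode. A minimal sketch; the attribute value is from memory and should be verified against your HMC release, and the system, partition, and profile names are hypothetical:

    # Let a dedicated-processor partition donate idle cycles while running
    chsyscfg -r prof -m p570-1 \
      -i "name=default,lpar_name=db_lpar,sharing_mode=share_idle_procs_active"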

IBM System p Flexible Resource Management
A new method of virtualization on IBM System p: AIX V6 Workload Partitions

[Chart: workload isolation versus resource flexibility, positioning AIX Workload Manager (AIX V4.3.3 on POWER3™ or later), Workload Partitions (AIX 6 on POWER4 or later), and Micro-partitions (AIX V5.3 on POWER5™ or later).]

* All statements regarding IBM future directions and intent are subject to change or withdrawal without notice and represent goals and objectives only. Any reliance on these Statements of General Direction is at the relying party's sole risk and will not create liability or obligation for IBM.

AIX V6 Workload Partitions (WPAR)

• WPARs are
– separate regions of application space,
– created entirely within a single AIX system image,
– created entirely in software (no hardware assist or configuration).

• Software-partitioned system capacity
– Each Workload Partition obtains a regulated share of system resources.
– The amount of system memory, CPU resources, and paging space allocated to each WPAR can be set.
– Each Workload Partition can have unique network, filesystems, and security.

• Separate administrative control
– Each Workload Partition is a separate administrative and security domain.
– The WPAR appears to be a stand-alone AIX system.

[Diagram: one AIX instance hosting Workload Partitions for an application server, a web server, billing, test, and BI.]

Improved administrative efficiency by reducing the number of AIX images to maintain (a command sketch follows below).
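Creating and entering a WPAR takes a handful of AIX 6 commands. A minimal sketch; the WPAR name "billing" is an example:

    mkwpar -n billing        # create a system WPAR
    startwpar billing        # boot it
    lswpar                   # list WPARs and their states
    clogin billing           # log in to the running WPAR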


LPARs and AIX Workload Partitions are complementary technologies and can be used together.

[Diagram: one POWER Hypervisor hosting LPARs Asia, EMEA, and Americas plus a VIO server on a micro-partition processor pool, and LPARs Finance and Planning on dedicated processors; the LPARs in turn host Workload Partitions such as Bus Dev, MFG, Planning, eMail, Test, and Billing.]


IBM Announces Two Methods of Mobility (11/2007)

• Live Partition Mobility – move a running POWER6 partition from one server to another.
• Live Application Mobility – move a running AIX 6 application from one server to another.

[Chart: the workload-isolation versus resource-flexibility positioning of AIX Workload Manager (AIX V4.3.3 on POWER3™ or later), Workload Partitions (AIX 6 on POWER4 or later), and Micro-partitions (AIX V5.3 on POWER5™ or later), with Live Application Mobility and Live Partition Mobility marked as the migration paths.]


AIX 6 Live Application Mobility

Move a running Workload Partition from one server to another for outage avoidance and multi-system workload balancing.

Works on any hardware supported by AIX 6, including POWER5.

[Diagram: AIX #1 hosting Workload Partitions Web, App Server, Dev, e-mail, and Billing; AIX #2 hosting QA and Data Mining; a policy-driven Workload Partitions Manager relocates a Workload Partition between the two systems.]


Live Partition Mobility with POWER6*
Allows migration of a running LPAR to another physical server (a command sketch follows below).

• Reduce the impact of planned outages
• Relocate workloads to enable growth
• Provision new technology with no disruption to service
• Save energy by moving workloads off underutilized servers
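From the HMC, a migration is validated and then performed with migrlpar. A minimal sketch with hypothetical system and partition names:

    # Validate that the partition can move from src570 to dst570
    migrlpar -o v -m src570 -t dst570 -p mylpar
    # Perform the live migration
    migrlpar -o m -m src570 -t dst570 -p mylpar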


Continuous Application Availability
With Live Partition Mobility and Live Application Mobility, planned outages for hardware and firmware maintenance and upgrades can be a thing of the past.

Relocate all partitions from one server to another when performing maintenance, and move them back when the maintenance is complete.


Energy Savings
During non-peak hours, consolidate workloads and power off excess servers.

Using Live Partition Mobility and Live Application Mobility, move partitions off underutilized servers and then power those servers off to save electricity.


Workload Balancing with Live Partition Mobility*
As computing needs spike, redistribute workloads onto multiple physical servers without service interruption.

As one server becomes overtaxed by a spike in demand, relocate partitions to other servers.
