
HP IT-Symposium 2006

www.decus.de 1

© 2006 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice

OpenVMS Cluster IC and Storage Update

1G07

Manfred Kaser, HP Services, Hewlett Packard GmbH, [email protected]

OpenVMS Cluster IC and Storage Update

− Cluster IC Update – V8.3
− V8.2-1 & V8.3 New Storage Features
− MSA Update
− EVA Update
− XP Update
− Storage Array Queuing Considerations


Cluster IC Update – V8.3
• New PEdriver features & SCACP commands to manage them:
• Data compression: enable on a per-VC basis, or globally for all VCs, by using:
  • SCACP> SET VC nodename /COMPRESS
  • NISCS_PORT_SERV bit 2 (mask: 4 hex)
  • Availability Manager fix
• Transmit & receive window sizes:
  • Automatically adjusted for aggregate link speed, BUT you can override:
  • SCACP> SET VC /RECEIVE_WINDOW=4
  • SCACP> SET VC /TRANSMIT_WINDOW=4

CAUTION: Read the release notes & new-features rules carefully to avoid VC closures.

• SCACP> CALCULATE WINDOW_SIZE /SPEED=n /DISTANCE=[KILOMETERS or MILES]=d
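The CALCULATE command's inputs (link speed and distance) suggest the usual bandwidth-delay-product reasoning. The sketch below illustrates that arithmetic only; it is not the actual SCACP formula, and the propagation speed and packet size are assumptions:

```python
# Bandwidth-delay-product sketch behind SCACP> CALCULATE WINDOW_SIZE.
# Assumptions (not from the slides): light travels ~200,000 km/s in
# fibre, and a full Ethernet-sized PEdriver packet is ~1498 bytes.

def window_size(link_speed_mbps, distance_km, packet_bytes=1498):
    """Estimate the window (in packets) needed to keep a long-distance
    link full: bandwidth * round-trip time / packet size."""
    light_speed_km_per_s = 200_000
    rtt_s = 2 * distance_km / light_speed_km_per_s
    bytes_per_s = link_speed_mbps * 1_000_000 / 8
    in_flight_bytes = bytes_per_s * rtt_s        # bandwidth-delay product
    return max(1, round(in_flight_bytes / packet_bytes))

# e.g. a 1 Gb/s inter-site link stretched over 100 km:
print(window_size(1000, 100))   # -> 83
```

The point of the exercise: at metro distances the default windows are fine, but a fast long link needs far more packets in flight than a window of 4 allows.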

IP Clusters & RNIC
• Customers badly need IP-based clusters NOW
• PEdriver over UDP
• iSCSI

The software approach in V8.3 isn't suitable for heavy production use:
− too high a CPU cost
− the same memory-BW-vs-NIC-BW problem the rest of the industry is facing (next slide).


PEdriver over UDP

[Architecture diagram: the new PEM_IP.C module gives PEdriver a TCP/IP VCI connection into the TCP/IP Services stack (UDP/IP over the NIC hardware device driver), alongside the existing LAN VCI path. TCP/IP Services supplies node discovery, IP route selection, ARP and management tools; the existing cluster components (SYS$SCS, CNXMAN, MSCP server, DUdriver) are unchanged.]

General Industry Challenge for IP/Ethernet Scaling: Memory Bandwidth Limitations

• Host-based TCP/IP consumes memory bandwidth equal to 5x to 7x the raw LAN data rate:
− 1-2 buffer copies + DMA
− each buffer copy = 3x memory touches
• Current memory bandwidth is ~3-6 GB/sec; 10Gb NICs can consume more than that!
• 100Gb/s NICs are on the horizon and will require 50,000-87,000 MB/s of memory BW.
• Memory controller bandwidth is NOT SCALING with CPU and network bandwidth.
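The multipliers above can be checked with a little arithmetic. Under a rough model (one DMA write plus three memory touches per buffer copy, both assumptions for illustration), one copy gives a 4x multiplier and two copies give 7x, which reproduces the 5000-8750 MB/s receive figures charted for 10GbE:

```python
# Worked arithmetic for the memory-bandwidth claim above.
# Model (an approximation, not a measurement): each received byte is
# DMA'd into memory once, and each buffer copy touches memory ~3 times
# (read source, write destination, later read by the consumer).

def required_memory_bw(raw_rate_mb_s, copies):
    """Memory bandwidth (MB/s) consumed per MB/s of raw LAN traffic."""
    return raw_rate_mb_s * (1 + 3 * copies)

# 10GbE has a raw data rate of 1250 MB/s:
for copies in (1, 2):
    print(copies, required_memory_bw(1250, copies))   # 5000 and 8750 MB/s
```

With host memory bandwidth at ~3-6 GB/s in 2006, even the one-copy case saturates the memory controller before the NIC runs out of wire speed.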

[Chart: CPU, network and memory-bus bandwidth growth rates, 1980-2010, on a log scale from 1 to 100,000. At a raw data rate of 125 MB/sec, 1 GbE requires 500-875 MB/sec of memory bandwidth on receive; at 1250 MB/sec, 10 GbE requires 5000-8750 MB/sec.]


OpenVMS V8.2-1 & V8.3 New Storage Features

OpenVMS New Storage Features

− Multi-Host SCSI (Integrity only)
− FC performance monitoring
− Storage controller AA path selection
− 4gb FC (Integrity only)
− SAS/SATA
− No IOLOCK8 FC port drivers
− iSCSI


Multi-Host SCSI (V8.2-1 Integrity)

• Our desire to drive FC down to multi-host SCSI prices never panned out:
− VMS FC-AL code / MSA interaction problems
− Cheap switches never really got that cheap

• Multi-Host SCSI in V8.2-1:
− Integrity only (1600 / 2600 / 4600)
− A7173A only
− MSA30-MI only (no funny "Y" cables needed)
− 2 hosts only
− 4 shared busses max

• There are no plans to support the MSA30-MI on Alpha

[Diagram: SCSI MI cluster configuration. Two hosts (CPU 1 and CPU 2), each with two PCI A7173A adapters, share two MSA30-MI shelves; each MSA30-MI presents two SCSI buses (7 slots each) with A and B ports, and the hosts are also joined by a LAN / cluster interconnect.]


FC Performance Monitoring
• Alpha and IPF FC drivers now keep a performance array for every mounted disk
− Keeps a count of completed I/O across latency, I/O size and read/write
− Data can be displayed with SDA (or with a $QIO code)
− SDA> FC PERFORMANCE
  − [device-name] (device to display)
  − [/rscc | /systime] (time resolution: RSCC or systime)
  − [/compress] (compress white space)
  − [/csv] (write data to CSV file(s))
  − [/clear] (clear the array)
− Data collection is always on in systime mode in V8.2+ and V7.3-2 FIBRE_SCSI_V400

FC Performance Monitoring
SDA> fc perf $1$dga3107 /comp

FibreChannel Disk Performance Data
----------------------------------

$1$dga3107 (write)
Using EXE$GQ_SYSTIME to calculate the I/O time

accumulated write time = 2907297312us
writes = 266709
total blocks = 1432966

I/O rate is less than 1 mb/sec

LBC      <2us     <2ms     <4ms     <8ms    <16ms    <32ms    <64ms   <128ms   <256ms   <512ms      <1s    Total
===  ======== ======== ======== ======== ======== ======== ======== ======== ======== ======== ========  =======
  1     46106    20630    12396    13605    13856    15334    14675     8101      777        8        -   145488
  2        52       21        8        9        5        5        6        1        2        -        -      109
  4     40310    13166     3241     3545     3423     3116     2351      977       88        -        -    70217
  8      2213     1355      360      264      205      225      164       82        5        -        -     4873
 16     16202     6897     3283     3553     3184     2863     2323     1012      108        -        1    39426
 32       678      310       36       39       47       44       33       27        6        -        -     1220
 64       105       97       18       26       41       43       42       24        7        -        -      403
128       592     3642      555       60       43       31       23        9        2        -        -     4957
256         -        9        7        -        -        -        -        -        -        -        -       16

       106258    46127    19904    21101    20804    21661    19617    10233      995        8        1   266709


FC Performance Monitoring
FibreChannel Disk Performance Data
----------------------------------

$1$dga3107 (read)
Using EXE$GQ_SYSTIME to calculate the I/O time

accumulated read time = 1241806687us
reads = 358490
total blocks = 1110830

I/O rate is less than 1 mb/sec

LBC      <2us     <2ms     <4ms     <8ms    <16ms    <32ms    <64ms   <128ms   <256ms   <512ms      <2s    Total
===  ======== ======== ======== ======== ======== ======== ======== ======== ======== ======== ========  =======
  1     46620    12755     6587     7767     3758     2643     1133      198        5        -        -    81466
  2       574      134       66      158       82       20       21        4        1        -        -     1060
  4    162060    35896    20059    18677    15851    11298     5527     1300       25        2        1   270696
  8       355       79       46       97       59       36       28       10        -        -        -      710
 16       241      103       32      150       77       24       13        1        -        -        -      641
 32       916      355       76      302      316       61       25       10        -        -        -     2061
 64       725      380       64      248      140       17       10        3        -        -        -     1587
128        13       22       13       36       21        6        -        -        -        -        -      111
256        10       41       28       15       49       13        2        -        -        -        -      158

       211514    49765    26971    27450    20353    14118     6759     1526       31        2        1   358490

SDA>
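As an illustration of the data structure behind these displays (not the actual driver code), the per-disk performance array can be modeled as a small two-dimensional histogram keyed by transfer size and latency bucket. The bucket edges mirror the SDA output above; the function and variable names are invented:

```python
# Sketch of an FC-performance-style histogram: completed I/Os bucketed
# by transfer size (LBC row, rounded down to a power of two) and by
# latency column. Illustrative only; not OpenVMS driver code.
from bisect import bisect_right

# Latency bucket upper bounds in microseconds, matching the SDA display
# (<2us, <2ms, ..., <512ms, <1s), plus one overflow column at the end.
LAT_BOUNDS_US = [2, 2_000, 4_000, 8_000, 16_000, 32_000,
                 64_000, 128_000, 256_000, 512_000, 1_000_000]
SIZE_ROWS = [1, 2, 4, 8, 16, 32, 64, 128, 256]     # LBC: blocks per I/O

hist = {s: [0] * (len(LAT_BOUNDS_US) + 1) for s in SIZE_ROWS}

def record_io(blocks, latency_us):
    """Bucket one completed I/O: pick the size row, then the first
    latency column whose upper bound exceeds the observed latency."""
    row = max(s for s in SIZE_ROWS if s <= blocks)
    hist[row][bisect_right(LAT_BOUNDS_US, latency_us)] += 1

record_io(4, 1_500)     # a 4-block transfer finishing in 1.5 ms -> "<2ms"
record_io(6, 150_000)   # 6 blocks lands in the 4-block row -> "<256ms"
```

Keeping only counters makes the array cheap enough to update on every I/O completion, which is why collection can stay on permanently.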

FC Performance Monitoring
• FC tapes will have performance arrays starting in V8.3
• SAS/SATA will have performance arrays when they ship


Active-Active Storage Host Ports
• EVA / MSA active-active host ports
− Today the EVA and MSA force access to any given LUN through a single controller

[Diagram: Controller A and Controller B, each presenting host ports (8/9 and C/D) with A and B port pairs, joined by a mirror port.]

I/O paths to device 4
Path PGA0.5000-1FE1-0011-B158 (CLETA), primary path.
  Error count 0    Operations completed 8
Path PGB0.5000-1FE1-0011-B15D (CLETA), current path.
  Error count 0    Operations completed 1557
Path PGA0.5000-1FE1-0011-B15C (CLETA).
  Error count 0    Operations completed 0
Path PGB0.5000-1FE1-0011-B159 (CLETA).
  Error count 0    Operations completed 0

Active-Active Storage Host Ports
• EVA / MSA active-active host ports
− With active-active host ports, a LUN can be accessed via all host ports
• The LUN is still bound to a single controller; if the reference comes into the other controller, the request is forwarded via the mirror port
− little write impact, since the data needed to be mirrored anyway
− a small latency impact on reads via the "non-optimized" controller
− Prior to V8.3, VMS does not distinguish between optimal and non-optimal ports for path selection
− The EVA and MSA will move a LUN across controllers if the workload is sufficiently unbalanced


Active-Active Storage Host Ports
• EVA / MSA active-active host ports
− Active-active host ports are present on the EVA4000/6000/8000, the MSA V6 firmware and the EVA3000/5000 VCS V4 firmware.
− Active-active host ports require no changes to VMS and will be backwards compatible with current releases
− Starting in V8.3, VMS will always select an "optimized" port as "current" if possible.
  • SET DEVICE $1$dgaxxx /SWITCH/PATH=xxxxx will always set the specified path as "optimized"
  • Other hosts with access to the same LUN will follow if controllers are switched
− You can see which EVA port is optimized via the CommandView EVA LUN presentation screen
  • You can set the optimized controller by specifying a preferred path in CommandView EVA
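The V8.3 path-selection rule described above can be sketched as follows. The `Path` fields are invented for illustration and are not VMS data structures:

```python
# Sketch of V8.3-style current-path selection: prefer a path through
# the controller that currently owns the LUN (an "optimized" port),
# and fall back to any working path otherwise. Illustrative only.
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    usable: bool
    optimized: bool   # terminates on the LUN's owning controller

def select_current_path(paths):
    usable = [p for p in paths if p.usable]
    if not usable:
        return None
    # take an optimized port when one is available, else the first usable
    return next((p for p in usable if p.optimized), usable[0])

paths = [Path("PGA0.5000-1FE1-0011-B158", True, False),
         Path("PGB0.5000-1FE1-0011-B15D", True, True)]
print(select_current_path(paths).name)   # the optimized PGB0 path
```

Pre-V8.3 behavior corresponds to ignoring the `optimized` flag, which is why reads could land on the non-owning controller and pick up the mirror-port latency described above.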

4gb Fibre Channel (V8.3)

• 4gb is the next logical progression of FC speeds
− Auto speed matching between 1gb, 2gb and 4gb
− 2gb cabling works for 4gb
− Expect 4gb to be a constant-cost replacement for 2gb
• VMS plans to support 4gb FC on IPF only
− VMS will support the Qlogic family of 4gb FC HBAs
  • There are modest driver changes, which are now complete
  • Qual cards are running fine in cell-based platforms
− A Pluto chip bug may prevent support of 4gb FC on 16xx, 26xx and 46xx platforms
• Initial tests show sustained throughput of 400+ MB/sec
• Single- and dual-port cards planned (PCI-266 and PCI-e)
• Combo cards are in the works too
• Several B-series 4gb switches are available
• 4gb EVAs due by mid-2006


SAS / SATA (V8.3)
• SAS = Serial Attached SCSI (3gb/sec per channel)
• SATA = Serial ATA (Advanced Technology Attachment)
• Parallel SCSI has reached its technology limits at U320
− No new SCSI generations planned (possible PCI-e card)
• Ruby/Sapphire platforms will have SAS core IO in late 2006
− Initially SAS will be limited to internal SFF (2.5") devices
− MSA50 (12x SFF) will follow
− Longer term, multi-host SAS is likely
− SAS and SATA drives will plug into any SFF slot
• Qual of SATA drives is TBD, but they seem to work just fine
• SAS tapes are planned
• VMS will use FC-style names with user-specified device numbers
− A VMS utility will be available to name devices
− Names are sticky and move with the drive
• No plans for Alpha SAS/SATA

No IOLOCK8 FC port drivers (V8.3?)

• Current FC port drivers (Emulex and Qlogic) support Fast Path, but continue to use IOLOCK8 in the port driver
− This limits the concurrency that can be achieved between multiple HBAs in SMP systems
• We've developed enhancements to the FC drivers to use a unique port lock instead of IOLOCK8
− Initial tests show max IO rates at 2x (100K+ IO/sec on modern SMP platforms)
• Code is complete and ready for integration into V8.3, but the qual schedule is tight


iSCSI (latent, but not qualed in V8.3)
• iSCSI seems to be finally gaining some steam…
− iSCSI bridge for EVA
− iSCSI MSA
− iSCSI ports for XP
− Good iSCSI-to-FC bridging products from Cisco
• Windows / HP-UX / Linux all support software clients using standard NICs
− Modest performance (high CPU overhead)
− Still interesting for systems with lower IO requirements and topology-challenged systems (no FC infrastructure)
− Typically use an iSCSI bridge to SAN-based storage
• We've engineered a software-based client for VMS which acts like a port driver, but talks to the IP stack rather than hardware
− No boot support
− Devices named like FC (you can have FC and iSCSI paths to the same SAN-based LUNs)
− Tape support

iSCSI (latent, but not qualed in V8.3)
• The iSCSI port driver is latent in V8.3, but will not be qualified in time for general support
− Tested to date with:
  • StorageWorks 2122-2
  • Cisco 9216i
  • EVA bridge (Crystal Cove)
− Still need to test:
  • iSCSI MSA
  • iSCSI XP ports
− Setup is pretty complicated with all but the EVA bridge
− Initial support is likely to be on specific TBD devices
− CPU cost per IO is high compared to FC (especially for long transfers, due to buffer copy operations)
− Overall IO throughput is respectable


MSA Update

MSA Update
• MSA active-active controllers
− Today the MSA1000/1500 has an active/standby controller architecture. V6 firmware will allow both controllers to be active:
  • better performance via path balancing
  • nicer LUN failover characteristics (today it's all or nothing)
− Near-final firmware is in test now, with anticipated customer availability in H1CY06
− VMS will delay hardware support for the MSA1500 until V6 firmware is available
  • MSA1500 support on V7.3-2 and later
  − hopefully no TIMA kit is necessary
− It's not known if/when V6 firmware will be available for the MSA1000


EVA Update

EVA Update
− EVA 4000/6000/8000 fully supported on all current versions of VMS
− EVA 3000/5000 to get active/active host ports in VCS V400 in early 2006
  • Little advantage to VMS customers, but we need to support it for mixed-OS support
  − Active/active host ports do simplify booting
  • Existing customers could see a performance loss if they don't carefully set paths
− EVA 4000/6000/8000 to get 4gb host ports in mid-2006
  • VMS has test systems, and it is possible to get 400+ MB/sec to a single LUN with 4gb HBAs
− EVA gets an integrated iSCSI bridge in early 2006
  • Probably the primary target for initial VMS iSCSI support
  • We have working early production units
  • Very nicely integrated with CommandView EVA


XP Update

XP Update
− Lots of interest in XP from VMS customers
  • Good CA functionality
− VMS fully supports the XP128, XP1024, XP10000 and XP12000
  • XP engineering is interested in working with OpenVMS
− EVA customers will find the XP very hard to configure and manage
  • Inflexible device naming
  • Must carefully configure RAID sets for good performance
  • Must use host-based RAID for very high-IO LUNs


XP Update
− XP10000/12000 performance issues
  • Host ports limited to 5-10K I/O per second
  − must use host-based RAID across multiple LUNs on different ports to get around this issue
  • Host ports don't like queue depths over 16
  − Access common hot LUNs via different host ports from different hosts
  − Be very careful with backup
  • No qfull support
  − Performance goes south in a hurry if you over-drive a host port, and none of our back-off schemes get triggered

Storage Array Queuing Considerations


Storage Array Queuing Considerations
• Unfortunately, Fibre Channel storage is based on the SCSI protocol, and there is no well-defined mechanism for storage-level flow control between hosts and storage arrays.
− RAID-based LUNs need concurrent commands for maximum performance (~2-4 per spindle)
− VMS tries to immediately send all IO to the storage controller
− This can cause choke points in the storage arrays:
  • Input queues
  • Cache
  • Mirror ports

Storage Array Queuing Considerations
• The storage arrays don't seem to know when they're getting into trouble, and often respond too late for us to do much about it
− The HSG, MSA and EVA use qfull to signal that their input queues are overloaded
− The XP does nothing at all…
• VMS responds to qfull by backing off IO and slowly letting the IO load grow
− No algorithm is right for all arrays (or even all situations in any given array)
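A minimal sketch of the back-off idea described above: cut the allowed in-flight I/O count sharply on each qfull, then grow it back slowly on successful completions. The real VMS algorithm and its constants are not documented in these slides; the values below are placeholders:

```python
# Hypothetical per-LUN qfull throttle (illustration, not VMS code):
# halve the in-flight limit on QUEUE FULL, recover one credit per
# `grow_every` clean completions.

class LunThrottle:
    def __init__(self, max_credits=32, grow_every=100):
        self.max_credits = max_credits
        self.grow_every = grow_every
        self.limit = max_credits        # I/Os we allow in flight now
        self._successes = 0

    def on_qfull(self):
        self._successes = 0
        self.limit = max(1, self.limit // 2)     # back off fast

    def on_success(self):
        self._successes += 1
        if self._successes % self.grow_every == 0 and self.limit < self.max_credits:
            self.limit += 1                      # let the load grow slowly

t = LunThrottle()
t.on_qfull(); t.on_qfull()
print(t.limit)   # 32 -> 16 -> 8
```

The multiplicative-decrease / additive-increase shape is what makes "no algorithm is right for all arrays" true: the right decrease factor and recovery rate depend on how deep the array's input queues are and how late it signals qfull.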


Storage Array Queuing Considerations
• There is no fix for all of this, but consider these points:
− Try to avoid doing massively concurrent IO
  • Backup is the worst offender
  • A new version of BACKUP is available with user-settable concurrent-IO limits
  • The IO limit default is 8, which seems to work well with modern tape drives
  • Run sequential backup jobs if possible. Your tapes and your storage will thank you
− Spread the IO load across as many storage controller ports as possible
− Keep an eye on LUN queue depths via MONITOR DISK/ITEM=QUEUE
− Watch qfull counts
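The concurrent-IO cap mentioned above (BACKUP's default of 8) amounts to a counting semaphore around I/O issue. A hypothetical Python illustration of that design, not the BACKUP source:

```python
# Capping concurrent I/O with a counting semaphore, in the spirit of
# the user-settable BACKUP limit described above (default 8).
import threading

io_limit = threading.Semaphore(8)   # at most 8 reads in flight per job

def issue_read(lba, blocks, read_fn):
    """Wait for one of the 8 slots, then perform the read.
    `read_fn` stands in for whatever actually does the transfer."""
    with io_limit:
        return read_fn(lba, blocks)

# usage with a dummy read function returning the block count:
print(issue_read(0, 128, lambda lba, n: n))   # -> 128
```

The cap trades a little single-stream throughput for a per-LUN queue depth the array can actually absorb, which is exactly the trade the slide recommends.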

Checking qfull responses

SDA> fc stdt /all

PGA0 SPDT 8157A040 STDTs
------------------------
                                   | STDT PRLI Port Dev  Cred | Act Cmd Cnf Rst PRLO Cls | QF   Tgt  Ill  Seq
STDT     FC-LA Port Name           | Stat Stat I/Os I/Os I/Os | Sus Sus Pnd Act Pnd Pau  | Seen Rsts Frms Tmo
-------- ----- ------------------- + ---- ---- ---- ---- ---- + --- --- --- --- --- ---  + ---- ---- ---- ----
816AC880 0000E 5000.1FE1.0001.84D1 | 0001 0001 0000 0020 0007 | 000 000 000 000 000 000  | 0913 0000 0000 00A9
816C4840 0000F 5000.1FE1.0001.84D3 | 0001 0001 0000 0009 000B | 000 000 000 000 000 000  | 00FC 0000 0000 0278
81A6EC00 00025 5000.1FE1.5001.A3E8 | 0001 0001 0000 0000 0000 | 000 000 000 000 000 000  | 0000 0000 0000 0000
81A6F0C0 00026 5000.1FE1.5001.A3EC | 0001 0001 0000 0000 0000 | 000 000 000 000 000 000  | 0000 0000 0000 0000
816DF640 00012 5006.0E80.034E.9B00 | 0001 0001 0000 0028 0000 | 000 000 000 000 000 000  | 0000 0000 0000 3D97
817BCF40 00023 5008.05F3.0001.AF11 | 0001 0001 0000 0000 0000 | 000 000 000 000 000 000  | 0000 0000 0000 0000

PGB0 SPDT 817C6740 STDTs
------------------------
                                   | STDT PRLI Port Dev  Cred | Act Cmd Cnf Rst PRLO Cls | QF   Tgt  Ill  Seq
STDT     FC-LA Port Name           | Stat Stat I/Os I/Os I/Os | Sus Sus Pnd Act Pnd Pau  | Seen Rsts Frms Tmo
-------- ----- ------------------- + ---- ---- ---- ---- ---- + --- --- --- --- --- ---  + ---- ---- ---- ----
818A8F40 00010 5000.1FE1.0001.84D4 | 0001 0001 0000 0012 0001 | 000 000 000 000 000 000  | 00DC 0000 0000 0152
81864140 0000B 5000.1FE1.0001.84D2 | 0001 0001 0000 001D 000B | 000 000 000 000 000 000  | 0869 0000 0000 0128
81AB3E00 00020 5000.1FE1.5001.A3E9 | 0001 0001 0000 0000 0000 | 000 000 000 000 000 000  | 0000 0000 0000 0000
81AB4300 00022 5000.1FE1.5001.A3ED | 0001 0001 0000 0000 0000 | 000 000 000 000 000 000  | 0000 0000 0000 0000
818D6B80 00012 5006.0E80.034E.9B14 | 0001 0001 0000 002D 0000 | 000 000 000 000 000 000  | 0000 0000 0000 0B96
8196DA40 0001E 5008.05F3.0001.AF19 | 0001 0001 0000 0000 0000 | 000 000 000 000 000 000  | 0000 0000 0000 0000

SDA>


Storage Array Queuing Considerations
• If all else fails, there are ways to place IO caps on the host ports of storage arrays
− To date these are not formally documented, but info is seeping out to the field about them
− We're considering ways to make these easier to manage for a post-V8.3 release
• The EVA and MSA behave pretty well (especially with the new BACKUP image)
• The HSG is problematic with faster host systems, and it's getting too old to fix at this point
• The XP totally fails in this area now, but we're working with Hitachi to try to get a meaningful qfull indication in a future FW release

Questions and Answers???


Evolution of Fibre Channel and iSCSI

[Roadmap chart, 2004-2007+, progressing from co-existence through convergence to a unified StorageWorks grid:
• Fibre Channel (strength: performance): 2Gb standard across MSA, EVA, XP12000, 4Gb HBAs, tape libraries and routers/switches; transition to 4Gb Fibre Channel (4/32 switch, 4Gb standard); transition to 8Gb FC (8Gb FC intro).
• iSCSI (strengths: remote/legacy, low-cost, slower systems): initiator software and 1/10Gb NICs; iSCSI storage server, EVA option, XP12000 native, native MSA; 10Gb HW-assist NIC (HP-UX 11i, OpenVMS; iSCSI, RDMA) and EVA grid.
• No support for iSCSI on OpenVMS until '07.]

EVA4000/6000/8000 New Controllers
2nd-generation EVA controller architecture

Feature → Benefit
• New modular design → High redundancy, easy serviceability
• Larger policy memory → More table space for larger capacities
• Double the FC host ports (on EVA8000) → Improved data bandwidth
• Larger cache capability → Improved performance for larger configs
• Dual FC cache mirror ports → Improved write data bandwidth
• New cache management ASIC → Improved data bandwidth
• Faster internal busses → Improved data bandwidth
• Faster microprocessor → Faster command execution


EVA 4000/6000/8000: Three capacity and performance ranges

                                          EVA 8000       EVA 6000   EVA 4000
                                          (0C6D/2C12D)   (2C8D)     (2C4D)
Maximum capacity                          72.0 TB        33.6 TB    16.8 TB
Read bandwidth (sequential 64 KB blocks)  1300 MB/s      720 MB/s   360 MB/s
Random read I/Os (8 KB blocks)            54,500         26,000     13,000
Drives                                    240            112        56
Drive enclosures                          18             8          4

HP StorageWorks MSA1500

• 2Gb Fibre Channel SAN 2U controller shelf with Smart Array technology
• Connects up to 8 MSA20 SATA enclosures: max 96 drives, equaling 24 terabytes of raw storage in a space of only 18U
• Connects up to 4 StorageWorks MSA30 SCSI enclosures: max 56 drives, equaling 16.8 terabytes of raw storage
• Mix StorageWorks MSA20 SATA and StorageWorks MSA30 SCSI on one MSA1500 for tiered storage
• Dual or single I/O cards available


MSA1500 configuration maximums

SCSI version (+MSA30):                       16.8 TB configuration, 56 SCSI drives
SATA version (+MSA20):                       24 TB configuration, 96 SATA drives
Mixed version, SATA and SCSI (+MSA20/MSA30): 20.4 TB with 71 SCSI/SATA hard drives

XP 12000 External Storage

• Connect lower-cost MSA or legacy XP storage through the XP12000
• Accessed as internal XP12000 storage
• Supported by all XP solutions (BC, CA*, AutoLUN)   (* 2nd release)
• Up to 14 petabytes of low-cost storage

** Business intelligence, low-cost archiving, on-line recovery images **

[Diagram: hosts attach through a SAN to the XP12000's FC CHIP ports; behind the XP12000, a second SAN connects external MSA1000s, an XP1024, and (future) EVA arrays, whose LUNs are presented as internal XP12000 storage. Configure both internal and external storage to match cost, performance, and availability objectives.]


HP StorageWorks arrays: Modular, flexible and scalable storage
(positioned by scalability and aggregate throughput)

MSA family — Low-cost consolidation, < 24 TB
• Web, Exchange, SQL
• Simple DAS-to-SAN (ProLiant)
• Windows, Linux, NetWare + more

New EVA family (EVA4000 / EVA6000 / EVA8000) — Outstanding TCO, < 72 TB
• Storage consolidation + disaster recovery
• Simplification through virtualization
• Windows, HP-UX, Linux + more — New! OpenVMS on Integrity

XP family — Always-on availability, < 332 TB
• Data center consolidation + disaster recovery
• Large-scale Oracle/SAP applications
• HP-UX, Windows, + 20 more including mainframe

StorageWorks array feature roadmap — OpenVMS

Storage interconnect technologies (2004-2007): 2Gb FC → 4Gb FC → 8Gb FC; 1Gb iSCSI → 10Gb iSCSI; P-SCSI → SAS

Entry (MSA):
• MSA1000
• MSA1500: active/active, SATA disks, capacity+
• MSA family: SFF SAS disks, 4Gb FC, iSCSI
• MSA next-gen: 10Gb Ethernet, iSCSI, SAS

Mid (EVA):
• EVA arrays: FATA, cross-Vraid snapshots
• EVA enhancements: 4Gb FC, iSCSI
• EVA next-gen: capacity+, connectivity+, cache+, performance+, CA+
• Next-generation EVA: grid technology, 4Gb FC / 10Gb Ethernet; flexible scalability, availability and performance; single system image

High (XP):
• XP12000: 165TB, 3-site DR, external storage
• XP enhancements: 2x cache, 2x performance, 2x shared memory
• XP12000 arrays: 4Gb FC host, 300GB disks, 300+TB
• XP12000 arrays: CA enhancement, cache upgrades, partitioning, RAID 6, iSCSI, NAS, NonStop servers


HP StorageWorks 6000 Virtual Library System

Integrating seamlessly into the existing environment, the HP StorageWorks 6000 Virtual Library System improves the performance and reliability of your data protection process.

VLS6510

VLS6105

Product Features
• Emulates popular HP tape drives and libraries, integrating seamlessly into your current backup and recovery application and processes
• Scales capacity and performance as your requirements change
• Utilizes LUN masking and mapping to protect backup and recovery jobs from disruptions caused by SAN events

HP StorageWorks EML E-Series Tape Libraries
Dense, scalable solutions that make data protection and recovery simple for your SAN environment
(EML 103e, EML 245e, EML fully configured)

Product Features
• Easy to manage single or multiple libraries through a single pane of glass with Command View ESL tape library software
• Scalable from 103 slots to over 440 slots and up to 16 Ultrium tape drives


HP Automated Nearline Storage

Backup/Restore:
• Autoloaders (1/8) and SSL1016: low cost; heterogeneous environments; perfect for small servers or small networks; DAS and LAN backup
• MSL series: heterogeneous environments; medium-to-large enterprises; entry-level SANs
• EML E-Series (New): high-density, scalable products; heterogeneous environments; medium-to-large enterprises; 100-440+ slots; up to 16 drives
• ESL E-Series: high capacity and performance; available & scalable; heterogeneous environments; enterprise data center; large-scale SANs
• Virtual Tape (New)

Archiving:
• Optical: long-term archival storage; removable media; scalable; entry-level to enterprise

Automated storage product roadmap (2004-2007)

Entry:
• 1/8 AL & SSL LTO-2: 3.2 TB, 108 GB/hr
• Autoloader LTO-3: 6.4 TB, 288 GB/hr
• Autoloader LTO-4: 12.8 TB, 576 GB/hr
• MSL LTO-2: 12 TB, 432 GB/hr
• MSL LTO-3: 24 TB, 1.2 TB/hr
• MSL LTO-4: 48 TB, 2.4 TB/hr

Mid-range:
• EML LTO-2 & LTO-3: 200 TB, 4.6 TB/hr
• EML LTO-4: 400 TB, 9.2 TB/hr

Enterprise:
• ESL E-series LTO-2: 142 TB, 2.6 TB/hr
• ESL E-series LTO-3: 282 TB, 6.9 TB/hr
• ESL E-series LTO-4: 564 TB, 13.8 TB/hr

Optical:
• Optical library UDO: 7.14 TB, 10 drives
• Optical library UDO-2: 14.3 TB, 10 drives

Technologies: SDLT320 → SDLT600 (320 GB, 32 MB/s), LTO-2 → LTO-3 (Ultrium) → LTO-4, UDO → UDO-2; 2Gb FC → 4Gb FC

Capacity and performance are expressed as native rates.