Sun Update for IDC - HPC User Forum


Page 1: Sun Update for IDC - HPC User Forum

Dave Teszler, Global HPC Sales Manager
Dan Dowling, Ph.D., HPC Sales Specialist

High Performance Computing Division Sun Microsystems, Inc.

Sun Update for IDC September 9th, 2009

Page 2: Sun Update for IDC - HPC User Forum

2

What's up with Oracle?

Page 3: Sun Update for IDC - HPC User Forum

3

Questions: What does a growth company need? Unsolved problems.

1. Are all problems solved in HPC?

2. Better still, are all software problems solved?

3. Are there no data-intensive problems left?

4. Would you care about OS jitter and latency if you were building a distributed database?

5. If you were a database and analytics company, would you care about the next grand challenges in data analytics? (Say, large graph algorithms.)

6. Would a database vendor be interested in the faster memory/cache hierarchies that Flash and SSD promise?

7. Would you be interested in a growing product line?

Page 4: Sun Update for IDC - HPC User Forum

4

What's New in HPC at Sun?

Page 5: Sun Update for IDC - HPC User Forum

Constellation Review: TACC Ranger
• Ranger: 579 TFlops
• Sun Blade 6048 Rack
• Pegasus Blade (4x AMD Barcelona)
• 3456-port DataCenter IB Switch (SDR/DDR)
• x4500 "Thumper" Storage

Page 6: Sun Update for IDC - HPC User Forum

Sun HPC Momentum
• NexGen Constellation Architecture
• C48+ Rack with "Glacier" doors, PCIe Gen2 backplane, QNEM leaf switch
• Vayu Blade (2x 2-socket Nehalem)
• 648-port DataCenter IB Switch (QDR)
• J4400 "Riverwalk" Storage

Page 7: Sun Update for IDC - HPC User Forum

Sun Blade 6000 Chassis (10-Rack-Unit Blade Chassis)
• Modular chassis design
> 10RU, 10x SB6000-series servers
> Updated midplane supports PCIe Gen2 & SAS2
> 4x PCIe Gen2 x8 links per blade slot, 2x NEM per chassis
• Power, cooling and availability: 2x 5600W PSU (N+N), 1x CMM
> All redundant, hot-plug active components

Sun Blade 6048 Unibody Chassis (42-Rack-Unit Blade Chassis)
• Unibody chassis design
> Full rack, 48x SB6000-series blades; up to 96 nodes / 768 cores per rack, ~9 TF (Nehalem 2.93 GHz)
> Updated midplane supports PCIe Gen2 & SAS2
> 4x PCIe Gen2 links, 2x PCIe EM per blade slot, 2x NEM per chassis
• Power, cooling and availability: 8x 8400W PSU (N+N)
> Glacier Cooling Door option
> All redundant, hot-plug active components
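The ~9 TF per-rack figure checks out with simple peak-FLOPS arithmetic. A sketch, assuming the standard Nehalem rate of 4 double-precision FLOPs per core per cycle (two 128-bit SSE units; this rate is not stated on the slide):

```python
# Peak double-precision FLOPS for one fully populated Sun Blade 6048 rack,
# using the slide's figures plus the assumed 4 DP FLOPs/core/cycle.
nodes_per_rack = 96
sockets_per_node = 2
cores_per_socket = 4              # quad-core Nehalem-EP
clock_hz = 2.93e9                 # 2.93 GHz parts

cores = nodes_per_rack * sockets_per_node * cores_per_socket
peak_tflops = cores * clock_hz * 4 / 1e12
print(cores, round(peak_tflops, 1))   # -> 768 9.0
```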

Page 8: Sun Update for IDC - HPC User Forum

Blades, Blades and more Blades

Page 9: Sun Update for IDC - HPC User Forum

"Magnum 9": Densest 2-tier CLOS switch using 36-port chips
> 648 QDR ports with 72-port CXP line cards
> 41 Tbps total capacity
> 300 ns latency (QDR)

Line Cards and Fabric Cards
> 9 line cards, 72-port CXP each
> 10G/FCoE/FC gateway
> 9 fabric cards

11RU 19" Enclosure
> Redundant power and cooling
> 7 kW power consumption

Management
> Redundant hot-swap service processors
> External dual redundant subnet managers

[Diagram: 9 line cards and 9 fabric cards; 1-8 VLs per path; multiple paths]
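The 648-port and 41 Tbps figures follow from the 2-tier Clos construction. A sketch, where the chip counts and the QDR data rate (32 Gb/s after 8b/10b encoding) are standard InfiniBand assumptions rather than slide content:

```python
# Folded 2-tier Clos from radix-36 switch chips: each leaf chip splits
# its ports half toward hosts, half toward spine chips.
radix = 36
leaf_chips = radix                  # limited by spine-chip radix
hosts_per_leaf = radix // 2         # 18 external ports per leaf chip
external_ports = leaf_chips * hosts_per_leaf
# QDR 4x: 40 Gb/s signalling = 32 Gb/s data (8b/10b); count both directions.
capacity_tbps = external_ports * 32 * 2 / 1000
print(external_ports, round(capacity_tbps))   # -> 648 41
```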

Page 10: Sun Update for IDC - HPC User Forum

Compute Node and M9 Fat-Tree Connectivity

• 5,184 Compute Nodes (54 Racks)
• 24 nodes per shelf connect to a 48-port leaf switch
• 8x 648-port core switches
• One 12x connector from each leaf switch to each core switch

[Diagram: leaf switches fanning out to the eight 648-port core switches]
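The leaf-switch port budget shows why this fat tree is non-blocking. A sketch, assuming a 12x connector bundles three 4x links (standard InfiniBand practice, not stated on the slide):

```python
# Port budget of one 48-port leaf switch in the fat tree.
nodes_per_leaf = 24
core_switches = 8
links_per_12x = 3                       # one 12x connector = three 4x links
uplinks = core_switches * links_per_12x
print(nodes_per_leaf + uplinks)         # -> 48: all leaf ports used, 1:1

# Full system: every core-switch port terminates one leaf uplink.
leaves = core_switches * 648 // uplinks # 216 leaf switches
print(leaves * nodes_per_leaf)          # -> 5184 compute nodes
```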

Page 11: Sun Update for IDC - HPC User Forum

3D-Torus Design

1. 12 Vayu Blades + QNEM per Shelf. Behind the shelf midplane, the QNEM holds two 36-port (4x) QDR IB switches (Switch 0 and Switch 1). Each switch connects to 12 nodes over IB x4 links and drives five 12x IB cables to torus neighbors (15 ports): +X, -X, +Y, -Y and +Z from Switch 0; +X, -X, +Y, -Y and -Z from Switch 1. The two switches are joined ±Z chip-to-chip on the NEM (3 ports).

2. C48: 48 Vayu Blades + 4 QNEMs per rack.

3. XxYxZ cabling, 4x3x8 shown: 1,152 Nodes (576 Vayu / 48 QNEMs / 12 C48). Each arrow in the 3-D InfiniBand cabling diagram is two cables.

Vayu blade (2x 2S): each of its two nodes pairs two Nehalem CPUs over Quick-Path, with six 2 GB DDR3 DIMMs per CPU and an IO Hub whose PCIe2 x8 link feeds a ConnectX QDR IB HCA.

[Diagram: Shelves 1-4 cabled along the X, Y and Z axes; rows 1-8 grouped into logical planes 0', 1', 2'; chip-to-chip links on the NEM]
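The 4x3x8 torus figures multiply out consistently. A sketch; the mapping of each QNEM's two switches onto two of the 96 grid points is inferred from the diagram, not stated on the slide:

```python
# 4x3x8 torus grid of switch positions; each QNEM is assumed to
# contribute two (its two 36-port switches, joined +-Z on the NEM).
X, Y, Z = 4, 3, 8
grid_points = X * Y * Z            # 96 switches
qnems = grid_points // 2           # 48 QNEMs (one per shelf)
blades = qnems * 12                # 12 Vayu blades per shelf
nodes = blades * 2                 # 2 nodes per Vayu blade
racks = qnems // 4                 # 4 QNEMs per C48 rack
print(qnems, blades, nodes, racks) # -> 48 576 1152 12
```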

Page 12: Sun Update for IDC - HPC User Forum

New Server Memory Hierarchy

Page 13: Sun Update for IDC - HPC User Forum

Sun HPC Software – Linux Edition

Stack layers, with integration, installation and configuration spanning the whole stack:

• Open Source Applications / ISV Applications
• ISV Compilers / Open Source Compilers
• Conman/Powerman, OneSIS
• OpenMPI/MVAPICH
• Resource Scheduler
• Lustre / NFS / EXT3; OFED
• Linux Distro
• Hardware

Page 14: Sun Update for IDC - HPC User Forum

Wins/Momentum
• Major wins in all geographies (US, APAC, EMEA, EM)
• Mostly research-focused, with a few operational centers
• Over 20 systems contracted at time of Nehalem launch
• ~200 6048 racks (won/contracted)
> ~9,600 Vayu blades
> ~38,000 Nehalems
> ~1.5 PFlops
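The bullet figures are mutually consistent. A sketch using the per-rack counts from the chassis slide (48 Vayu blades per 6048 rack, 2 nodes per blade, 2 sockets per node):

```python
racks = 200
blades = racks * 48        # 48 Vayu blades per 6048 rack
nodes = blades * 2         # 2 nodes per Vayu blade
sockets = nodes * 2        # 2 Nehalem sockets per node
print(blades, sockets)     # -> 9600 38400 (slide: ~9,600 / ~38,000)
```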

Page 15: Sun Update for IDC - HPC User Forum

200 C48/Vayu Racks (more than 1.5 PFlops)