
DESCRIPTION

SKA: the world's largest radio telescope streaming data processor. Dr Paul Calleja, Director, Cambridge HPC Service. Overview: introduction to Cambridge HPCS; overview of the SKA project; the SKA streaming data processing challenge; the SKA SDP consortium.

TRANSCRIPT

Page 1: Dr Paul Calleja Director Cambridge HPC Service

Cape Town 2013

Dr Paul Calleja

Director Cambridge HPC Service

Page 2:

• Introduction to Cambridge HPCS

• Overview of the SKA project

• SKA streaming data processing challenge

• The SKA SDP consortium

Overview

Page 3:

Cambridge University

• The University of Cambridge is a world-leading teaching and research institution, consistently ranked among the top three universities worldwide

• Annual income of £1,200M, 40% of it research related: one of the largest R&D budgets in the UK HE sector

• 17,000 students, 9,000 staff

• Cambridge is a major technology centre: 1,535 technology companies in the surrounding science parks, £12B annual revenue, 53,000 staff

• The HPCS has a mandate to provide HPC services to both the University and wider technology company community

Page 4:

Four domains of activity

• Commodity HPC Centre of Excellence

• Promoting uptake of HPC by UK industry

• Driving discovery

• Advancing development and application of HPC (HPC R&D)

Page 5:

• 750 registered users from 31 departments

• 856 Dell servers, 450 TF sustained DP performance

• 128-node Westmere cluster (1,536 cores, 16 TF)

• 600-node (9,600-core) fully non-blocking Mellanox FDR IB cluster, 2.6 GHz Sandy Bridge (200 TF): one of the fastest Intel clusters in the UK

• SKA GPU test bed: 128 nodes, 256 NVIDIA K20 GPU cards

• Fastest GPU system in the UK (250 TF), designed for maximum I/O throughput and message rate

• Fully non-blocking dual-rail Mellanox FDR Connect-IB, designed for maximum energy efficiency

• #2 in the Green500: the most efficient air-cooled supercomputer in the world

• 4 PB storage: Lustre parallel file system, 50 GB/s

• Run as a cost centre, charging our users; 20% of income from industry

Cambridge HPC vital statistics

Page 6:

CORE – Industrial HPC service & consultancy

Page 7:

Dell | Cambridge HPC Solution Centre

• The Solution Centre is a Dell and Cambridge jointly funded HPC centre of excellence, providing leading-edge commodity open-source HPC solutions.

Page 8:

SA CHPC collaboration

• HPCS has a long-term strategic partnership with CHPC

• HPCS has been working closely with CHPC for the last 6 years

• Technology strategy, system design, procurement

• HPC system stack development

• SKA platform development

Page 9:

• Next-generation radio telescope

• Large multinational project

• 100× more sensitive

• 1,000,000× faster

• 5 square km of dish spread over 3,000 km

• The next big science project

• Currently the world's most ambitious IT project

• First real exascale-ready application

• Largest global big-data challenge

Square Kilometre Array - SKA

Page 10:

SKA location

• Needs a radio-quiet site

• Very low population density

• Large amount of space

• Two sites: Western Australia and the Karoo Desert, RSA

A continental-sized radio telescope

Page 11:

SKA phase 1 implementation

SKA Element                                       Location
Dish Array (SKA1_Mid, incl. MeerKAT)              RSA
Low Frequency Aperture Array (SKA1_Low)           ANZ
Survey Instrument (SKA1_AIP_Survey, incl. ASKAP)  ANZ

Page 12:

SKA phase 2 implementation

SKA Element                                   Location
Low Frequency Aperture Array (SKA2_Low)       ANZ
Mid Frequency Dish Array (SKA2_Mid_Dish)      RSA
Mid Frequency Aperture Array (SKA2_Mid_AA)    RSA

Page 13:

What is radio astronomy

[Diagram: the interferometry signal chain, from astronomical signal (EM wave) to sky image: detect & amplify → digitise & delay → correlate → process (calibrate, grid, FFT) → integrate]
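The chain on this slide can be sketched end to end with NumPy. This is an illustrative toy, not SKA code: two antennas, one baseline, invented array sizes, and a trivial single-point UV grid.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 4096

# Detect & amplify: both antennas see the same sky signal plus independent noise.
sky = rng.normal(size=n_samples)
antenna_a = sky + 0.1 * rng.normal(size=n_samples)
antenna_b = sky + 0.1 * rng.normal(size=n_samples)

# Correlate: time-average the product of the two voltage streams -> one visibility.
visibility = np.mean(antenna_a * antenna_b)

# Process: grid the visibility onto a (here trivial, single-baseline) UV plane,
# then FFT to form a dirty image of the sky.
uv_grid = np.zeros((64, 64), dtype=complex)
uv_grid[32, 32] = visibility
dirty_image = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(uv_grid))).real
```

With a single baseline the dirty image is featureless; real imaging fills the UV plane with many baselines over time before the FFT, which is exactly what makes the SKA's processing load so large.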

Page 14:

SKA – Key scientific drivers

• Cradle of life

• Cosmic magnetism

• Evolution of galaxies

• Pulsar surveys / gravitational waves

• Exploring the dark ages

Page 15:

SKA is a cosmic time machine

Page 16:

But…

Most importantly, the SKA will investigate phenomena we have not even imagined yet

Page 17:

SKA timeline

1995-2000  Preliminary ideas and R&D
2000-2007  Initial concepts stage
2008-2012  System design and refinement of specification
2012       Site selection
2012-2016  Pre-construction (PEP, €90M): 1 yr detailed design, 3 yr production readiness
2017-2022  10% SKA construction, SKA1 (€650M)
2022       Operations SKA1
2023-2027  Construction of full SKA, SKA2 (€2B)
2024       Operations SKA2

Page 18:

SKA project structure

• SKA Board

• Director General

• Project Office (OSKAO)

• Advisory Committees (Science, Engineering, Finance, Funding …)

• Work Package Consortia 1…n (locally funded)

Page 19:

Work package breakdown

UK (lead), AU (CSIRO…), NL (ASTRON…), South Africa SKA, Industry (Intel, IBM…)

1. System

2. Science

3. Maintenance and support /Operations Plan

4. Site preparation

5. Dishes

6. Aperture arrays

7. Signal transport

8. Data networks

9. Signal processing

10. Science Data Processor

11. Monitor and Control

12. Power

SPO

Page 20:

SKA = streaming data processor challenge

• The SDP consortium is led by Paul Alexander, University of Cambridge

• The 3-year design phase has now started (as of November 2013)

• Delivering the SKA ICT infrastructure needs a strong multi-disciplinary team:

• Radio astronomy expertise

• HPC expertise (scalable software implementations; management)

• HPC hardware (heterogeneous processors; interconnects; storage)

• Delivery of data to users (cloud; UI …)

• Building a broad global consortium:

• 11 countries: UK, USA, AUS, NZ, Canada, NL, Germany, China, France, Spain, South Korea

• Radio astronomy observatories; HPC centres; Multi-national ICT companies; sub-contractors

Page 21:

SDP consortium members: management groupings and workshare (%)

University of Cambridge (Astrophysics & HPCS)               9.15
Netherlands Institute for Radio Astronomy                   9.25
International Centre for Radio Astronomy Research           8.35
SKA South Africa / CHPC                                     8.15
STFC Laboratories                                           4.05
Non-Imaging Processing Team (University of Manchester,
  Max-Planck-Institut für Radioastronomie,
  University of Oxford (Physics))                           6.95
University of Oxford (OeRC)                                 4.85
Chinese Universities Collaboration                          5.85
New Zealand Universities Collaboration                      3.55
Canadian Collaboration                                     13.65
Forschungszentrum Jülich                                    2.95
Centre for High Performance Computing South Africa          3.95
iVEC Australia (Pawsey)                                     1.85
Centro Nacional de Supercomputación                         2.25
Fundación Centro de Supercomputación de Castilla y León     1.85
Instituto de Telecomunicações                               3.95
University of Southampton                                   2.35
University College London                                   2.35
University of Melbourne                                     1.85
French Universities Collaboration                           1.85
Universidad de Chile                                        1.85

Page 22:

SDP: strong industrial partnerships

• Discussions under way with:

• Dell, NVIDIA, Intel, HP, IBM, SGI, ARM, Microsoft Research

• Xyratex, Mellanox, Cray, DDN

• NAG, Cambridge Consultants, Parallel Scientific

• Amazon, Bull, AMD, Altera, Solarflare, Geomerics, Samsung, Cisco

• Apologies to those I’ve forgotten to list

Page 23:

SDP work packages

Page 24:

SKA data rates

[Diagram: sparse and dense aperture arrays (up to 250 AA stations, 16 Tb/s each) and up to 1,200 15 m dishes feed DSP and tile/station processing over optical data links into the Central Processing Facility (CPF). Frequency bands: 70-450 MHz wide FoV, 0.4-1.4 GHz wide FoV, 1.2-10 GHz wideband single-pixel feeds. Inside the CPF, AA slices and dish & AA correlation feed a data switch into banks of processor/buffer nodes, standard imaging processors, a data archive and science processors. Aggregate rates run from 16 Tb/s and 24 Tb/s at the collectors through 4 Pb/s and 1,000 Tb/s internally (correlator → UV processors → image formation → archive), down to 10-20 Gb/s user access via the Internet. Control processors and a user interface manage the data, time and control planes.]

Page 25:

SKA conceptual data flow

[Diagram: the correlator/beamformer feeds data routing and ingest in the Science Data Processor. Visibility processing (multiple reads) and time-series search (multiple reads) draw on a data buffer; image-plane and time-series processing produce data products (sky models, calibration parameters …). The Telescope Manager exchanges metadata with the SDP; a master controller and local M&C database handle monitoring and control; results feed tiered data delivery.]

Page 26:

SKA conceptual data flow: tiered data delivery

[Diagram: the SDP core facilities in South Africa and Australia route sub-sets of the archive to regional centres; astronomers reach the data through cloud access to those regional centres.]

Page 27:

Science data processor pipeline

[Diagram: incoming data from the collectors (10 Tb/s per link, up to 1,000 Tb/s aggregate) pass through a switch to the correlator/beamformer (corner turning, coarse delays, fine F-step/correlation, ~10 Pflop). The imaging branch steers visibilities through an observation buffer to UV processors (~100-200 Pflop), then gridding of visibilities and imaging (~1-10 Eflop) into image storage. The non-imaging branch performs corner turning, coarse delays, beamforming/de-dispersion, beam steering, time-series searching and search analysis into object/timing storage. Buffer stores (50 PB, 10/1 TB/s) and bulk store (SKA1: 1 EB/y; SKA2: 10 EB/y) feed HPC science processing. Software complexity grows along the pipeline.]

Page 28:

SDP processing rack – feasibility model

[Diagram: a 42U rack holds 20 processing blades and two 56 Gb/s leaf switches. Each blade pairs a multi-core x86 host processor with many-core accelerators (GPGPU, MIC, …?) delivering >10 TFLOP/s each, plus four disks of ≥1 TB on the PCI bus and 56 Gb/s links to the rack switches.]
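The rack's headline numbers can be sanity-checked with simple arithmetic. The two-devices-per-blade count is an assumption read from the blade drawing, and each device is taken at the slide's ">10 TFLOP/s" lower bound, so the result is a floor, not a design figure:

```python
# Back-of-envelope capability of the feasibility rack.
# Assumptions (not stated outright on the slide): 2 many-core devices per blade,
# each at the ">10 TFLOP/s" lower bound.
tflops_per_device = 10
devices_per_blade = 2
blades_per_rack = 20

rack_tflops = tflops_per_device * devices_per_blade * blades_per_rack  # 400 TF/rack
racks_for_exaflop = 1_000_000 / rack_tflops  # 1 Eflop = 1,000,000 TFLOP/s
```

At this lower bound an exaflop would need ~2,500 such racks, well above the ~800 cabinets quoted later in the deck, which is one way to see why per-device performance must grow several-fold before deployment.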

Page 29:

SKA feasibility model

[Diagram: three AA-low data streams (280 links each) and dish data (16 links) enter corner-turner switches (56 Gb/s each), which feed the correlator/UV processor, further UV processors (up to 250) and the imaging processor; results flow through a switch to the bulk store and HPC.]

Page 30:

SKA conceptual software stack

SKA subsystems and service components sit on top of:

• High-level APIs and Tools: SKA Common Software Application Framework; UIF Toolkit

• Core Services: Access Control; Monitoring Archiver; Live Data Access; Logging System; Alarm Service; Configuration Management; Scheduling Block Service

• Base Tools: Communication Middleware; Database Support; Third-party tools and libraries; Development tools; Operating System

Page 31:

• HPC development and prototyping lab for SKA

• Coordinated out of Cambridge and run jointly by HPCS and CHPC

• Will work closely with COMP to test and design various potential compute, networking, storage and HPC system / application software components

• Rigorous system engineering approach, which describes a formalised design and prototyping loop

• Provides a managed, global lab for the whole of the SDP consortium

• Provides a touchstone and a practical place of work for interaction with vendors

• The first major test bed, a Dell / Mellanox / NVIDIA GPU cluster, was deployed in the lab last month and will be used by the consortium to drive design R&D

SKA Open Architecture Lab

Page 32:

• The SKA SDP compute facility will, at the time of deployment, be one of the largest HPC systems in existence

• Operational management of large HPC systems is challenging at the best of times, even when the systems are housed in well-established research centres with good IT logistics and experienced Linux HPC staff

• The SKA SDP could be housed in a desert location with little surrounding IT infrastructure, poor IT logistics and little prior HPC history at the site

• Potential SKA SDP exascale systems are likely to consist of ~100,000 nodes, occupy 800 cabinets and consume 30 MW. This is very large: around 5 times the size of today's largest supercomputer, the Cray Titan at Oak Ridge National Laboratory

• The SKA SDP HPC operations will be very challenging

SKA Exascale computing in the desert

Page 33:

• Although the operational aspects of the SKA SDP exascale facility are challenging, they are tractable if dealt with systematically and in collaboration with the HPC community.

The challenge is tractable

Page 34:

• We can describe the operational aspects by functional element:

• Machine room requirements **
• SDP data connectivity requirements
• SDP workflow requirements
• System service level requirements
• System management software requirements **
• Commissioning & acceptance test procedures
• System administration procedures
• User access procedures
• Security procedures
• Maintenance & logistical procedures **
• Refresh procedures
• System staffing & training procedures **

SKA HPC operations – functional elements

Page 35:

• Machine room infrastructure for exascale HPC facilities is challenging

• 800 racks, 1,600 m² of floor space

• 30 MW IT load

• ~40 kW of heat per rack

• Cooling efficiency and heat-density management is vital

• Machine room infrastructure at this scale is both costly and time-consuming

• The power cost alone, at today's prices, is £30M per year

• A desert location presents particular problems for the data centre:

• Hot ambient temperatures: difficult for compressor-less cooling

• Lack of water: difficult for compressor-less cooling

• Very dry air: difficult for humidification

• Remote location: difficult for DC maintenance

Machine room requirements
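The £30M power figure is consistent with simple arithmetic; the tariff below is an assumed UK industrial rate circa 2013, not a number from the source:

```python
# Back-of-envelope power bill for the SDP machine room.
it_load_kw = 30_000              # 30 MW IT load, from the slide
hours_per_year = 8760
tariff_gbp_per_kwh = 0.11        # assumption: UK industrial tariff, ~2013

annual_energy_kwh = it_load_kw * hours_per_year           # 262.8 GWh/year
annual_cost_gbp = annual_energy_kwh * tariff_gbp_per_kwh  # ~£29M/year

heat_per_rack_kw = it_load_kw / 800                       # 800 racks -> ~37.5 kW/rack
```

Note this counts IT load only; real facility cost would be higher by the data centre's PUE, which is why compressor-less cooling matters so much at this scale.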

Page 36:

• System management software is the vital element in HPC operations

• System management software today does not scale to exascale

• A worldwide coordinated effort is under way to develop system management software for exascale

• Elements of the system management software stack:

• Power management
• Network management
• Storage management
• Workflow management
• OS
• Runtime environment
• Security management
• System resilience
• System monitoring
• System data analytics
• Development tools

System management software

Page 37:

• Current HPC technology MTBF for hardware and system software results in failure rates of ~2 nodes per week on a cluster of ~600 nodes

• It is expected that SKA exascale systems could contain ~100,000 nodes

• Thus an expected failure rate of ~300 nodes per week could be realistic

• During system commissioning this will be 3-4× higher

• Fixing nodes quickly is vital, otherwise the system will soon degrade into a non-functional state

• The manual engineering processes for fault detection and diagnosis on 600 nodes will not scale to 100,000 nodes; this needs to be automated by the system software layer

• Vendor hardware replacement logistics need to cope with high turnaround rates

Maintenance logistics
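The failure-rate projection on this slide is a straight linear scaling of the observed per-node rate, which a few lines make explicit:

```python
# Linear extrapolation of the observed failure rate to SKA scale.
observed_failures_per_week = 2
observed_nodes = 600
target_nodes = 100_000

rate_per_node_week = observed_failures_per_week / observed_nodes
expected_failures_per_week = rate_per_node_week * target_nodes  # ~333/week

# The slide expects commissioning to run 3-4x hotter than steady state.
commissioning_low = 3 * expected_failures_per_week
commissioning_high = 4 * expected_failures_per_week
```

At ~50 failed nodes per day in steady state, and over a thousand per week during commissioning, manual triage is clearly infeasible, which is the argument for automating fault detection in the system software layer.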

Page 38:

• Providing functional staffing levels and experience at a remote desert location will be challenging

• It is hard enough finding good HPC staff to run small-scale HPC systems in Cambridge; finding orders of magnitude more staff to run much more complicated systems in a remote desert location will be very challenging

• Operational procedures using a combination of remote system administration staff and DC smart hands will be needed

• HPC training programmes need to be implemented to build skills well in advance

Staffing levels and training

Page 39:

Early Cambridge SKA solution - EDSAC 1

Maurice Wilkes