
Page 1: Big Data HPC Convergence

Big Data HPC Convergence

5th Multicore World, 15-17 February 2016 – Shed 6, Wellington, New Zealand

http://openparallel.com/multicore-world-2016/

1

Geoffrey Fox, February 16, 2016

[email protected] http://www.dsc.soic.indiana.edu/, http://spidal.org/ http://hpc-abds.org/kaleidoscope/

Department of Intelligent Systems Engineering, School of Informatics and Computing, Digital Science Center

Indiana University Bloomington

02/16/2016

Page 2: Big Data HPC Convergence

2

Abstract
• Two major trends in computing systems are the growth in high performance computing (HPC), with an international exascale initiative, and the big data phenomenon, with an accompanying cloud infrastructure of well publicized, dramatic and increasing size and sophistication.
• In studying and linking these trends one needs to consider multiple aspects: hardware, software, applications/algorithms and even broader issues like business model and education.
• In this talk we study in detail a convergence approach for software and applications/algorithms and show what hardware architectures it suggests.
• We start by dividing applications into data plus model components and classifying each component (whether from Big Data or Big Simulations) in the same way. This leads to 64 properties divided into 4 views: Problem Architecture (macro patterns); Execution Features (micro patterns); Data Source and Style; and finally the Processing (runtime) View.
• We discuss convergence software built around HPC-ABDS (High Performance Computing enhanced Apache Big Data Stack) http://hpc-abds.org/kaleidoscope/ and show how one can merge Big Data and HPC (Big Simulation) concepts into a single stack.
• We give examples of data analytics running on HPC systems, including details on persuading Java to run fast.
• Some details can be found at http://dsc.soic.indiana.edu/publications/HPCBigDataConvergence.pdf

02/16/2016

Page 3: Big Data HPC Convergence

3

NIST Big Data Initiative

Led by Chaitan Baru, Bob Marcus, Wo Chang

and Big Data Application Analysis

02/16/2016

Page 4: Big Data HPC Convergence

4

NBD-PWG (NIST Big Data Public Working Group) Subgroups & Co-Chairs
• There were 5 Subgroups – note mainly industry
• Requirements and Use Cases Sub Group – Geoffrey Fox, Indiana U.; Joe Paiva, VA; Tsegereda Beyene, Cisco
• Definitions and Taxonomies SG – Nancy Grady, SAIC; Natasha Balac, SDSC; Eugene Luster, R2AD
• Reference Architecture Sub Group – Orit Levin, Microsoft; James Ketner, AT&T; Don Krapohl, Augmented Intelligence
• Security and Privacy Sub Group – Arnab Roy, CSA/Fujitsu; Nancy Landreville, U. MD; Akhil Manchanda, GE
• Technology Roadmap Sub Group – Carl Buffington, Vistronix; Dan McClary, Oracle; David Boyd, Data Tactics
• See http://bigdatawg.nist.gov/usecases.php and http://bigdatawg.nist.gov/V1_output_docs.php

02/16/2016

Page 5: Big Data HPC Convergence

Use Case Template
• 26 fields completed for 51 apps
• Government Operation: 4
• Commercial: 8
• Defense: 3
• Healthcare and Life Sciences: 10
• Deep Learning and Social Media: 6
• The Ecosystem for Research: 4
• Astronomy and Physics: 5
• Earth, Environmental and Polar Science: 10
• Energy: 1
• Now an online form

02/16/2016

Page 6: Big Data HPC Convergence

02/16/2016

http://hpc-abds.org/kaleidoscope/survey/

Online Use Case Form

Page 7: Big Data HPC Convergence

02/16/2016

http://hpc-abds.org/kaleidoscope/survey/

Page 8: Big Data HPC Convergence

51 Detailed Use Cases: Contributed July-September 2013
Covers goals, data features such as 3 V's, software, hardware
• http://bigdatawg.nist.gov/usecases.php
• https://bigdatacoursespring2014.appspot.com/course (Section 5)
• Government Operation (4): National Archives and Records Administration, Census Bureau
• Commercial (8): Finance in Cloud, Cloud Backup, Mendeley (Citations), Netflix, Web Search, Digital Materials, Cargo shipping (as in UPS)
• Defense (3): Sensors, Image surveillance, Situation Assessment
• Healthcare and Life Sciences (10): Medical records, Graph and Probabilistic analysis, Pathology, Bioimaging, Genomics, Epidemiology, People Activity models, Biodiversity
• Deep Learning and Social Media (6): Driving Car, Geolocate images/cameras, Twitter, Crowd Sourcing, Network Science, NIST benchmark datasets
• The Ecosystem for Research (4): Metadata, Collaboration, Language Translation, Light source experiments
• Astronomy and Physics (5): Sky Surveys including comparison to simulation, Large Hadron Collider at CERN, Belle Accelerator II in Japan
• Earth, Environmental and Polar Science (10): Radar Scattering in Atmosphere, Earthquake, Ocean, Earth Observation, Ice sheet Radar scattering, Earth radar mapping, Climate simulation datasets, Atmospheric turbulence identification, Subsurface Biogeochemistry (microbes to watersheds), AmeriFlux and FLUXNET gas sensors
• Energy (1): Smart grid

26 features for each use case; biased to science

02/16/2016

Page 9: Big Data HPC Convergence

9

Features and Examples

02/16/2016

Page 10: Big Data HPC Convergence

51 Use Cases: What is Parallelism Over?
• People: either the users (but see below) or subjects of application, and often both
• Decision makers like researchers or doctors (users of application)
• Items such as Images, EMR, Sequences below; observations or contents of online store
  – Images or "Electronic Information nuggets"
  – EMR: Electronic Medical Records (often similar to people parallelism)
  – Protein or Gene Sequences
  – Material properties, Manufactured Object specifications, etc., in custom dataset
  – Modelled entities like vehicles and people
• Sensors – Internet of Things
• Events such as detected anomalies in telescope or credit card data or atmosphere
• (Complex) Nodes in RDF Graph
• Simple nodes as in a learning network
• Tweets, Blogs, Documents, Web Pages, etc.
  – And characters/words in them
• Files or data to be backed up, moved or assigned metadata
• Particles/cells/mesh points as in parallel simulations

02/16/2016

Page 11: Big Data HPC Convergence

Features of 51 Use Cases I
• PP (26) "All" Pleasingly Parallel or Map Only
• MR (18) Classic MapReduce MR (add MRStat below for full count)
• MRStat (7) Simple version of MR where key computations are simple reductions as found in statistical averages such as histograms and averages (a minimal sketch follows below)
• MRIter (23) Iterative MapReduce or MPI (Spark, Twister)
• Graph (9) Complex graph data structure needed in analysis
• Fusion (11) Integrate diverse data to aid discovery/decision making; could involve sophisticated algorithms or could just be a portal
• Streaming (41) Some data comes in incrementally and is processed this way
• Classify (30) Classification: divide data into categories
• S/Q (12) Index, Search and Query

02/16/2016
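To make MRStat concrete, here is a minimal sketch in plain Java (no Hadoop; the mapped values and bin layout are hypothetical): the map stage yields one value per record, and the reduce stage is a simple statistical accumulation – a mean and a histogram. A Hadoop or Spark MRStat job has the same shape, with the reduction expressed as a combiner/reducer or an aggregate action.

import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

// MRStat sketch: "map" yields one value per record, "reduce" is a simple statistic.
public class MRStatSketch {
    public static void main(String[] args) {
        double[] mapped = {1.2, 3.4, 2.2, 5.1, 4.8, 0.9, 3.3};   // hypothetical mapped values

        // Reduce to a single global statistic (mean)
        double mean = Arrays.stream(mapped).parallel().average().orElse(0.0);

        // Reduce to a histogram: 5 equal-width bins over [0, 6)
        int bins = 5;
        double lo = 0.0, width = (6.0 - lo) / bins;
        Map<Integer, Long> histogram = Arrays.stream(mapped).parallel().boxed()
                .collect(Collectors.groupingByConcurrent(
                        v -> Math.min(bins - 1, (int) ((v - lo) / width)),
                        Collectors.counting()));

        System.out.println("mean = " + mean);
        System.out.println("histogram = " + histogram);
    }
}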

Page 12: Big Data HPC Convergence

Features of 51 Use Cases II
• CF (4) Collaborative Filtering for recommender engines
• LML (36) Local Machine Learning (independent for each parallel entity) – application could have GML as well
• GML (23) Global Machine Learning: Deep Learning, Clustering, LDA, PLSI, MDS
  – Large Scale Optimizations as in Variational Bayes, MCMC, Lifted Belief Propagation, Stochastic Gradient Descent, L-BFGS, Levenberg-Marquardt. Can call this EGO or Exascale Global Optimization with scalable parallel algorithm
• Workflow (51) Universal
• GIS (16) Geotagged data, often displayed in ESRI, Microsoft Virtual Earth, Google Earth, GeoServer, etc.
• HPC (5) Classic large-scale simulation of cosmos, materials, etc. generating (visualization) data
• Agent (2) Simulations of models of data-defined macroscopic entities represented as agents

02/16/2016

Page 13: Big Data HPC Convergence

13

Local and Global Machine Learning
• Many applications use LML or Local Machine Learning, where machine learning (often from R) is run separately on every data item, such as on every image
• But others are GML or Global Machine Learning, where machine learning is a single algorithm run over all data items (over all nodes in the computer)
  – maximum likelihood or χ² with a sum over the N data items – documents, sequences, items to be sold, images, etc., and often links (point-pairs)
  – Graph analytics is typically GML
• Covers clustering/community detection, mixture models, topic determination, multidimensional scaling, (Deep) Learning Networks
• PageRank is "just" parallel linear algebra
• Note many Mahout algorithms are sequential – partly as MapReduce limited; partly because parallelism unclear
  – MLlib (Spark based) better
• SVM and Hidden Markov Models do not use large scale parallelization in practice?
(The LML/GML contrast is sketched in code after this slide.)

02/16/2016
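A toy contrast in plain Java (the items array and the per-item analytic are hypothetical): LML runs an independent computation per data item, while GML fits one quantity that every item contributes to, which is why a parallel GML implementation needs a global reduction (MPI allreduce or a Map-Collective step) rather than a pure map.

import java.util.Arrays;

// Contrast of Local vs Global Machine Learning on a toy 2-feature dataset.
public class LmlVsGml {
    public static void main(String[] args) {
        double[][] items = { {1, 2}, {3, 4}, {5, 6}, {7, 8} };  // stand-in for images, documents, ...

        // LML: an independent "model" per data item (e.g. a per-image filter);
        // pleasingly parallel, no communication between items.
        double[] localScores = Arrays.stream(items).parallel()
                .mapToDouble(v -> (v[0] + v[1]) / 2.0)          // hypothetical per-item analytic
                .toArray();

        // GML: one model fitted over ALL items (here a global centroid, as in clustering);
        // every item contributes to a global sum, so the parallel version needs a reduction.
        double[] centroid = Arrays.stream(items).parallel()
                .reduce(new double[]{0, 0},
                        (a, v) -> new double[]{a[0] + v[0], a[1] + v[1]},
                        (a, b) -> new double[]{a[0] + b[0], a[1] + b[1]});
        centroid[0] /= items.length;
        centroid[1] /= items.length;

        System.out.println("local scores    = " + Arrays.toString(localScores));
        System.out.println("global centroid = " + Arrays.toString(centroid));
    }
}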

Page 14: Big Data HPC Convergence

14

13 Image-based Use Cases
• 13-15: Military Sensor Data Analysis/Intelligence: PP, LML, GIS, MR
• 7: Pathology Imaging/Digital Pathology: PP, LML, MR for search; becoming terabyte 3D images, Global Classification
• 18 & 35: Computational Bioimaging (Light Sources): PP, LML; also materials
• 26: Large-scale Deep Learning: GML; Stanford ran 10 million images and 11 billion parameters on a 64-GPU HPC system; vision (drive car), speech, and Natural Language Processing
• 27: Organizing large-scale, unstructured collections of photos: GML; fit position and camera direction to assemble 3D photo ensemble
• 36: Catalina Real-Time Transient Synoptic Sky Survey (CRTS): PP, LML followed by classification of events (GML)
• 43: Radar Data Analysis for CReSIS Remote Sensing of Ice Sheets: PP, LML to identify glacier beds; GML for full ice-sheet
• 44: UAVSAR Data Processing, Data Product Delivery, and Data Services: PP to find slippage from radar images
• 45, 46: Analysis of Simulation visualizations: PP, LML, ?GML; find paths, classify orbits, classify patterns that signal earthquakes, instabilities, climate, turbulence

02/16/2016

Page 15: Big Data HPC Convergence

15

Internet of Things and Streaming Apps
• It is projected that there will be 24 (Mobile Industry Group) to 50 (Cisco) billion devices on the Internet by 2020.
• The cloud is the natural controller of, and resource provider for, the Internet of Things.
• Smart phones/watches, wearable devices (Smart People), "Intelligent River", "Smart Homes and Grid" and "Ubiquitous Cities", Robotics.
• The majority of use cases are streaming – experimental science gathers data in a stream – sometimes batched as in a field trip. Below is a sample:
• 10: Cargo Shipping: tracking as in UPS, FedEx: PP, GIS, LML
• 13: Large Scale Geospatial Analysis and Visualization: PP, GIS, LML
• 28: Truthy: Information diffusion research from Twitter Data: PP, MR for search, GML for community determination
• 39: Particle Physics: Analysis of LHC (Large Hadron Collider) Data: discovery of Higgs particle: PP for event processing, global statistics
• 50: DOE-BER AmeriFlux and FLUXNET Networks: PP, GIS, LML
• 51: Consumption forecasting in Smart Grids: PP, GIS, LML

02/16/2016

Page 16: Big Data HPC Convergence

http://www.kpcb.com/internet-trends

[Figure: KPCB Internet Trends projection – IoT reaching ~100B devices by ~2025]

Page 17: Big Data HPC Convergence

17

Big Data and Big Simulations Patterns – the Convergence Diamonds

02/16/2016

Page 18: Big Data HPC Convergence

Big Data - Big Simulation (Exascale) Convergence

• Let's distinguish Data and Model (e.g. machine learning analytics) in Big Data problems
• Then almost always Data is large, but Model varies
  – e.g. LDA with many topics or deep learning has a large model
  – Clustering or dimension reduction can be quite small
• Simulations can also be considered as Data and Model
  – Model is solving particle dynamics or partial differential equations
  – Data could be small, when just boundary conditions, or
  – Data large, with data assimilation (weather forecasting) or when data visualizations are produced by the simulation
• Data is often static between iterations (unless streaming); the model varies between iterations

02/16/2016

Page 19: Big Data HPC Convergence

19

Classifying Big Data and Big Simulation Applications
• "Benchmarks", "kernels", "algorithms", "mini-apps" can serve multiple purposes
• Motivate hardware and software features
  – e.g. the collaborative filtering algorithm parallelizes well with MapReduce and suggests using Hadoop on a cloud
  – e.g. deep learning on images is dominated by matrix operations; needs CUDA & MPI and suggests an HPC cluster
• Benchmark sets can be designed to cover the key features of systems in terms of features and sizes of "important" applications
• Take the 51 use cases and derive specific features; each use case has multiple features
• Generalize and systematize, with features termed "facets"
• 50 facets (Big Data) or 64 facets (Big Simulation and Data) divided into 4 sets or views, where each view has "similar" facets
  – Allows one to study coverage of benchmark sets and architectures (see the sketch after this slide)
• Discuss Data and Model together, as they are built around problems which combine them, but we can get insight by separating them, and this allows better understanding of the Big Data – Big Simulation "convergence"

02/16/2016
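As an illustration of facet tagging and benchmark coverage (the enum lists only a few Problem Architecture facets, and the application assignments and suite are hypothetical), applications can be tagged with facet sets and a benchmark suite checked for whether it exercises them:

import java.util.EnumSet;
import java.util.Map;

// Toy illustration of tagging applications with facets and checking how well a
// benchmark suite "covers" them. Only a few Problem Architecture facets are shown.
public class FacetCoverage {
    enum Facet { PLEASINGLY_PARALLEL, CLASSIC_MAPREDUCE, MAP_COLLECTIVE,
                 MAP_POINT_TO_POINT, MAP_STREAMING, SHARED_MEMORY }

    public static void main(String[] args) {
        Map<String, EnumSet<Facet>> applications = Map.of(
                "BLAST",          EnumSet.of(Facet.PLEASINGLY_PARALLEL),
                "PageRank",       EnumSet.of(Facet.MAP_COLLECTIVE),
                "GraphAnalytics", EnumSet.of(Facet.MAP_POINT_TO_POINT, Facet.SHARED_MEMORY));

        // A hypothetical benchmark suite and the facets it exercises
        EnumSet<Facet> suite = EnumSet.of(Facet.PLEASINGLY_PARALLEL, Facet.MAP_COLLECTIVE);

        applications.forEach((name, facets) ->
                System.out.println(name + " covered by suite: " + suite.containsAll(facets)));
    }
}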

Page 20: Big Data HPC Convergence

20

7 Computational Giants of NRC Massive Data Analysis Report

1) G1: Basic Statistics, e.g. MRStat
2) G2: Generalized N-Body Problems
3) G3: Graph-Theoretic Computations
4) G4: Linear Algebraic Computations
5) G5: Optimizations, e.g. Linear Programming
6) G6: Integration, e.g. LDA and other GML
7) G7: Alignment Problems, e.g. BLAST

02/16/2016

http://www.nap.edu/catalog.php?record_id=18374 Big Data Models?

Page 21: Big Data HPC Convergence

21

HPC (Simulation) Benchmark Classics
• Linpack or HPL: Parallel LU factorization for solution of linear equations
• NPB version 1: Mainly classic HPC solver kernels
  – MG: Multigrid
  – CG: Conjugate Gradient (a small serial CG kernel is sketched after this slide)
  – FT: Fast Fourier Transform
  – IS: Integer Sort
  – EP: Embarrassingly Parallel
  – BT: Block Tridiagonal
  – SP: Scalar Pentadiagonal
  – LU: Lower-Upper symmetric Gauss-Seidel

02/16/2016

Simulation Models
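To make the CG entry concrete, here is a minimal serial conjugate gradient kernel in plain Java for a small dense symmetric positive definite system (NPB CG itself uses a large random sparse matrix; this is only an illustrative kernel).

import java.util.Arrays;

// Minimal serial conjugate gradient for a small dense SPD system A x = b.
public class ConjugateGradient {
    static double[] matVec(double[][] a, double[] x) {
        double[] y = new double[x.length];
        for (int i = 0; i < x.length; i++)
            for (int j = 0; j < x.length; j++) y[i] += a[i][j] * x[j];
        return y;
    }

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    public static void main(String[] args) {
        double[][] a = { {4, 1}, {1, 3} };   // SPD test matrix
        double[] b = { 1, 2 };
        double[] x = new double[2];          // initial guess 0
        double[] r = b.clone();              // residual r = b - A x = b
        double[] p = r.clone();
        double rsOld = dot(r, r);

        for (int iter = 0; iter < 100 && Math.sqrt(rsOld) > 1e-10; iter++) {
            double[] ap = matVec(a, p);
            double alpha = rsOld / dot(p, ap);
            for (int i = 0; i < x.length; i++) { x[i] += alpha * p[i]; r[i] -= alpha * ap[i]; }
            double rsNew = dot(r, r);
            double beta = rsNew / rsOld;
            for (int i = 0; i < x.length; i++) p[i] = r[i] + beta * p[i];
            rsOld = rsNew;
        }
        System.out.println("x = " + Arrays.toString(x));   // expect approx [0.0909, 0.6364]
    }
}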

Page 22: Big Data HPC Convergence

22

13 Berkeley Dwarfs
1) Dense Linear Algebra
2) Sparse Linear Algebra
3) Spectral Methods
4) N-Body Methods
5) Structured Grids
6) Unstructured Grids
7) MapReduce
8) Combinational Logic
9) Graph Traversal
10) Dynamic Programming
11) Backtrack and Branch-and-Bound
12) Graphical Models
13) Finite State Machines

02/16/2016

The first 6 of these correspond to Colella's original list (classic simulations); Monte Carlo was dropped. N-body methods are a subset of "Particle" in Colella.
Note this is a little inconsistent, in that MapReduce is a programming model while spectral method is a numerical method. We need multiple facets!

Largely Models for Data or Simulation

Page 23: Big Data HPC Convergence

02/16/2016

[Figure: "4 Ogre Views and 50 Facets" – the original Big Data Ogres diagram, with facets arranged along four axes:
• Problem Architecture View: Pleasingly Parallel; Classic MapReduce; Map-Collective; Map Point-to-Point; Map Streaming; Shared Memory; Single Program Multiple Data; Bulk Synchronous Parallel; Fusion; Dataflow; Agents; Workflow
• Execution View: Performance Metrics; Flops per Byte / Memory I/O; Execution Environment & Core libraries; Volume; Velocity; Variety; Veracity; Communication Structure; Dynamic=D / Static=S; Regular=R / Irregular=I; Iterative / Simple; Data Abstraction; Metric=M / Non-Metric=N; O(N²)=NN / O(N)=N
• Data Source and Style View: SQL/NoSQL/NewSQL; Enterprise Data Model; Files/Objects; HDFS/Lustre/GPFS; Archived/Batched/Streaming; Shared / Dedicated / Transient / Permanent; Metadata/Provenance; Internet of Things; HPC Simulations; Geospatial Information System
• Processing View: Micro-benchmarks; Local Analytics; Global Analytics; Base Statistics; Recommendations; Search/Query/Index; Classification; Learning; Optimization Methodology; Streaming; Alignment; Linear Algebra Kernels; Graph Algorithms; Visualization]

Page 24: Big Data HPC Convergence

02/16/2016

[Figure: "Convergence Diamonds Views and Facets" – the 64-facet version of the Ogres diagram, with Data (D) and Model (M) facets separated:
• Problem Architecture View (facets 1-12): Pleasingly Parallel; Classic MapReduce; Map-Collective; Map Point-to-Point; Map Streaming; Shared Memory; Single Program Multiple Data; Bulk Synchronous Parallel; Fusion; Dataflow; Agents; Workflow
• Execution View: Performance Metrics (1); Flops per Byte / Memory IO / Flops per watt (2); Execution Environment & Core libraries (3); Data Volume (D4) and Model Size (M4); Data Velocity (D5); Data Variety (D6) and Model Variety (M6); Veracity (7); Model Communication Structure (M8); Data and Model Dynamic=D / Static=S (D9, M9); Data and Model Regular=R / Irregular=I (D10, M10); Iterative / Simple (M11); Data and Model Abstraction (D12, M12); Data and Model Metric=M / Non-Metric=N (D13, M13); O(N²)=NN / O(N)=N (M14)
• Data Source and Style View: SQL/NoSQL/NewSQL; Enterprise Data Model; Files/Objects; HDFS/Lustre/GPFS; Archived/Batched/Streaming – S1, S2, S3, S4, S5; Shared / Dedicated / Transient / Permanent; Metadata/Provenance; Internet of Things; HPC Simulations; Geospatial Information System
• Big Data Processing Diamonds: Micro-benchmarks (1M); Local Analytics/Informatics/Simulations (2M); Global Analytics/Informatics/Simulations (3M); Base Data Statistics (4M); Recommender Engine (5M); Data Search/Query/Index (6M); Data Classification (7M); Learning (8M); Optimization Methodology (9M); Streaming Data Algorithms (10M); Data Alignment (11M); Linear Algebra Kernels and many subclasses (12M); Graph Algorithms (13M); Visualization (14M); Core Libraries (15M)
• Simulation (Exascale) Processing Diamonds: Iterative PDE Solvers (16M); Multiscale Method (17M); Spectral Methods (18M); N-body Methods (19M); Particles and Fields (20M); Evolution of Discrete Systems (21M); Nature of mesh if used (22M)
Facets are marked as belonging to Simulations, Analytics (Model for Data) or Both, and as nearly all Data, nearly all Data+Model, all Model, or a mix of Data and Model.]

Page 25: Big Data HPC Convergence

Dwarfs and Ogres give Convergence Diamonds

• Macropatterns or Problem Architecture View: Unchanged from Ogres

• Execution View: Significant changes to separate Data and Model and add characteristics of Simulation models

• Data Source and Style View: Same for Ogres and Diamonds – present but less important for Simulations compared to big data

• Processing View is a mix of Big Data Processing View and Big Simulation Processing View and includes some facets like “uses linear algebra” needed in both: has specifics of key simulation kernels and in particular includes NAS Parallel Benchmarks and Berkeley Dwarfs

02/16/2016

Page 26: Big Data HPC Convergence

26

Facets of the Convergence Diamonds

Problem Architecture

Meta or Macro Aspects of Diamonds
Valid for Big Data or Big Simulations, as they describe the Problem, which is the Model-Data combination

02/16/2016

Page 27: Big Data HPC Convergence

27

Problem Architecture View (Meta or Macro Patterns)
i. Pleasingly Parallel – as in BLAST, protein docking, some (bio-)imagery, including Local Analytics or Machine Learning – ML or filtering pleasingly parallel, as in bio-imagery, radar images (pleasingly parallel but sophisticated local analytics)
ii. Classic MapReduce: Search, Index and Query and Classification algorithms like collaborative filtering (G1 for MRStat in Features, G7)
iii. Map-Collective: Iterative maps + communication dominated by "collective" operations such as reduction, broadcast, gather, scatter. Common data-mining pattern
iv. Map Point-to-Point: Iterative maps + communication dominated by many small point-to-point messages, as in graph algorithms
v. Map-Streaming: Describes streaming, steering and assimilation problems
vi. Shared Memory: Some problems are asynchronous and are easier to parallelize on shared rather than distributed memory – see some graph algorithms
vii. SPMD: Single Program Multiple Data, common parallel programming feature
viii. BSP or Bulk Synchronous Processing: well-defined compute-communication phases
ix. Fusion: Knowledge discovery often involves fusion of multiple methods
x. Dataflow: Important application feature often occurring in composite Ogres
xi. Use Agents: as in epidemiology (swarm approaches); this is Model only
xii. Workflow: All applications often involve orchestration (workflow) of multiple components

02/16/2016

Most (11 of the total 12) are properties of Data+Model

Page 28: Big Data HPC Convergence

28

Relation of Problem and Machine Architecture
• Problem is Model plus Data
• In my old papers (especially the book Parallel Computing Works!), I discussed computing as multiple complex systems mapped into each other:
  Problem → Numerical formulation → Software → Hardware
• Each of these 4 systems has an architecture that can be described in similar language
• One gets an easy programming model if the architecture of the problem matches that of the software
• One gets good performance if the architecture of the hardware matches that of the software and problem
• So "MapReduce" can be used as the architecture of software (programming model) or the "numerical formulation of the problem"

02/16/2016

Page 29: Big Data HPC Convergence

6 Forms of MapReduce cover "all" circumstances

Describes: Problem (Model reflecting data) – Machine – Software Architecture

02/16/2016

Page 30: Big Data HPC Convergence

30

Data Analysis Problem Architectures
1) Pleasingly Parallel PP or "map-only" in MapReduce: BLAST analysis; Local Machine Learning
2A) Classic MapReduce MR, Map followed by reduction: High Energy Physics (HEP) histograms; web search; recommender engines
2B) Simple version of classic MapReduce MRStat: final reduction is just simple statistics
3) Iterative MapReduce MRIter: expectation maximization, clustering, linear algebra, PageRank (the iteration structure is sketched after this slide)
4A) Map Point-to-Point Communication: classic MPI; PDE solvers and particle dynamics; graph processing (Graph)
4B) GPU (accelerator) enhanced 4A) – especially for deep learning
5) Map + Streaming + some sort of communication: images from synchrotron sources; telescopes; Internet of Things (IoT). Apache Storm is (Map + Dataflow) + Streaming. Data assimilation is (Map + Point-to-Point Communication) + Streaming
6) Shared memory allowing parallel threads, which are tricky to program but lower latency: difficult-to-parallelize asynchronous parallel graph algorithms

02/16/2016
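A minimal sketch of the MRIter pattern (form 3) in plain Java: repeated map + reduce until convergence, shown as a 1-D k-means with two centers (toy data). A distributed version would spread the map over nodes and implement the reduction with MPI collectives or a Map-Collective runtime.

import java.util.Arrays;

// MRIter sketch: iterate map + reduce until the model (the two centers) stops changing.
public class IterativeMapReduce {
    public static void main(String[] args) {
        double[] points = {1.0, 1.2, 0.8, 8.0, 8.2, 7.9};
        double[] centers = {0.0, 10.0};                      // initial guesses

        for (int iter = 0; iter < 20; iter++) {
            double[] sum = new double[2];
            int[] count = new int[2];

            // "Map": assign each point to its nearest center (independent per point)
            for (double p : points) {
                int c = Math.abs(p - centers[0]) <= Math.abs(p - centers[1]) ? 0 : 1;
                sum[c] += p;                                  // local contribution to the
                count[c]++;                                   // "reduce" that follows
            }

            // "Reduce": recompute centers from the global sums
            double[] updated = new double[2];
            for (int c = 0; c < 2; c++)
                updated[c] = count[c] > 0 ? sum[c] / count[c] : centers[c];

            boolean converged = Math.abs(updated[0] - centers[0]) < 1e-9
                             && Math.abs(updated[1] - centers[1]) < 1e-9;
            centers = updated;
            if (converged) break;                             // iterate map + reduce until stable
        }
        System.out.println("centers = " + Arrays.toString(centers));  // ~[1.0, 8.03]
    }
}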

Page 31: Big Data HPC Convergence

31

Diamond Facets

Execution Features View

Many similar Features for Big Data and Simulations

02/16/2016

Page 32: Big Data HPC Convergence

32

View for Micropatterns or Execution Features
i. Performance Metrics: property found by benchmarking the Diamond
ii. Flops per byte; memory or I/O
iii. Execution Environment; core libraries needed: matrix-matrix/vector algebra, conjugate gradient, reduction, broadcast; Cloud, HPC, etc.
iv. Volume: property of a Diamond instance: a) Data Volume and b) Model Size
v. Velocity: qualitative property of a Diamond with value associated with an instance; only Data
vi. Variety: important property especially of composite Diamonds; Data and Model separately
vii. Veracity: important property of applications but not kernels
viii. Model Communication Structure; interconnect requirements; is communication BSP, asynchronous, pub-sub, collective, point-to-point?
ix. Is Data and/or Model (graph) static or dynamic?
x. Much Data and many Models consist of a set of interconnected entities; is this regular, as a set of pixels, or is it a complicated irregular graph?
xi. Are Models iterative or not?
xii. Data Abstraction: key-value, pixel, graph (G3), vector, bags of words or items; the Model can have the same or different abstractions, e.g. mesh points, finite element, Convolutional Network
xiii. Are data points in metric or non-metric spaces? Data and Model separately?
xiv. Is the Model algorithm O(N²) or O(N) (up to logs) for N points per iteration (G2)?

02/16/2016

Page 33: Big Data HPC Convergence

Comparison of Data Analytics with Simulation I
• Simulations produce big data as visualization of results – they are a data source
  – Or consume often smallish data to define a simulation problem
  – HPC simulation in (weather) data assimilation is data + model
• Pleasingly parallel often important in both
• Both are often SPMD and BSP
• Non-iterative MapReduce is a major big data paradigm
  – not a common simulation paradigm except where "Reduce" summarizes pleasingly parallel execution, as in some Monte Carlos
• Big Data often has large collective communication
  – Classic simulation has a lot of smallish point-to-point messages
  – Motivates the Map-Collective model
• Simulations characterized often by difference or differential operators
• Simulation dominantly uses sparse (nearest neighbor) data structures
  – Some important data analytics involve full matrix algorithms, but
  – "Bag of words (users, rankings, images...)" algorithms are sparse, as is PageRank

02/16/2016 33

Page 34: Big Data HPC Convergence

“Force Diagrams” for macromolecules and Facebook

02/16/2016 34

Page 35: Big Data HPC Convergence

Comparison of Data Analytics with Simulation II
• There are similarities between some graph problems and particle simulations with a strange cutoff force
  – Both are Map-Communication
• Note many big data problems are "long range force" (as in gravitational simulations) as all points are linked
  – Easiest to parallelize. Often full matrix algorithms
  – e.g. in DNA sequence studies, distance (i, j) is defined by BLAST, Smith-Waterman, etc., between all sequences i, j (an all-pairs sketch follows this slide)
  – Opportunity for "fast multipole" ideas in big data. See the NRC report
• In image-based deep learning, neural network weights are block sparse (corresponding to links to pixel blocks) but can be formulated as full matrix operations on GPUs and MPI in blocks
• In HPC benchmarking, Linpack is being challenged by a new sparse conjugate gradient benchmark HPCG, while I am diligently using non-sparse conjugate gradient solvers in clustering and multidimensional scaling

02/16/2016 35
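A minimal sketch of the all-pairs ("long range force") pattern in plain Java: a full N x N distance matrix over sequences, with a toy edit-count standing in for BLAST or Smith-Waterman scores. Each (i, j) pair is independent, so rows or blocks of the matrix can be farmed out to threads or nodes.

// All-pairs pattern: an N x N distance matrix, as in sequence studies where
// distance(i, j) comes from BLAST or Smith-Waterman. A toy mismatch count
// stands in for the real scoring function.
public class AllPairsDistances {
    static double distance(String a, String b) {
        int n = Math.min(a.length(), b.length()), diff = Math.abs(a.length() - b.length());
        for (int k = 0; k < n; k++) if (a.charAt(k) != b.charAt(k)) diff++;
        return diff;
    }

    public static void main(String[] args) {
        String[] seqs = {"ACGT", "ACGA", "TTGT", "ACG"};
        int n = seqs.length;
        double[][] d = new double[n][n];

        // O(N^2) work overall; rows are computed independently in parallel.
        java.util.stream.IntStream.range(0, n).parallel().forEach(i -> {
            for (int j = 0; j < n; j++) d[i][j] = distance(seqs[i], seqs[j]);
        });

        for (double[] row : d) System.out.println(java.util.Arrays.toString(row));
    }
}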

Page 36: Big Data HPC Convergence

36

Convergence Diamond Facets

Big Data and Big Simulation Processing View

All are Model properties, but with differences between Big Data and Big Simulation

02/16/2016

Page 37: Big Data HPC Convergence

37

Diamond Facets in Processing (runtime) View I – used in Big Data and Big Simulation
• Pr-1M Micro-benchmarks: Ogres that exercise simple features of hardware such as communication, disk I/O, CPU, memory performance
• Pr-2M Local Analytics: executed on a single core or perhaps node
• Pr-3M Global Analytics: requiring iterative programming models (G5, G6) across multiple nodes of a parallel system
• Pr-12M Uses Linear Algebra: common in Big Data and simulations
  – Subclasses like Full Matrix
  – Conjugate Gradient, Krylov, Arnoldi iterative subspace methods
  – Structured and unstructured sparse matrix methods
• Pr-13M Graph Algorithms (G3): clear important class of algorithms – as opposed to vector, grid, bag of words, etc. – often hard, especially in parallel
• Pr-14M Visualization: key application capability for big data and simulations
• Pr-15M Core Libraries: functions of general value such as sorting, math functions, hashing

02/16/2016

Page 38: Big Data HPC Convergence

38

Diamond Facets in Processing (runtime) View II – used in Big Data
• Pr-4M Basic Statistics (G1): MRStat in NIST problem features
• Pr-5M Recommender Engine: core to many e-commerce and media businesses; collaborative filtering is the key technology
• Pr-6M Search/Query/Index: classic database which is well studied (Baru, Rabl tutorial)
• Pr-7M Data Classification: assigning items to categories based on many methods
  – MapReduce good in Alignment, Basic Statistics, S/Q/I, Recommender, Classification
• Pr-8M Learning: of growing importance due to Deep Learning success in speech recognition etc.
• Pr-9M Optimization Methodology: overlapping categories including
  – Machine Learning, Nonlinear Optimization (G6), Maximum Likelihood or χ² least squares minimizations, Expectation Maximization (often steepest descent), Combinatorial Optimization, Linear/Quadratic Programming (G5), Dynamic Programming
• Pr-10M Streaming Data or online algorithms; related to DDDAS (Dynamic Data-Driven Application Systems)
• Pr-11M Data Alignment (G7): as in BLAST, compares samples with a repository

02/16/2016

Page 39: Big Data HPC Convergence

39

Diamond Facets in Processing (runtime) View III – used in Big Simulation
• Pr-16M Iterative PDE Solvers: Jacobi, Gauss-Seidel, etc.
• Pr-17M Multiscale Method? Multigrid and other variable resolution approaches
• Pr-18M Spectral Methods as in Fast Fourier Transform
• Pr-19M N-body Methods as in Fast Multipole, Barnes-Hut
• Pr-20M Both Particles and Fields as in Particle-in-Cell method
• Pr-21M Evolution of Discrete Systems as in simulation of electrical grids, chips, biological systems, epidemiology. Needs Ordinary Differential Equation solvers
• Pr-22M Nature of Mesh if used: structured, unstructured, adaptive

02/16/2016

Covers NAS Parallel Benchmarks and Berkeley Dwarfs

Page 40: Big Data HPC Convergence

40

Facets of the Diamonds (same as Ogres here)

Data Source and Style Aspects

Present but often less important for Simulations (that use and produce data)

02/16/2016

Page 41: Big Data HPC Convergence

41

Data Source and Style Diamond View I
i. SQL, NewSQL or NoSQL: NoSQL includes Document, Column, Key-value, Graph, Triple store; NewSQL is SQL redone to exploit NoSQL performance
ii. Other Enterprise data systems: 10 examples from NIST integrate SQL/NoSQL
iii. Set of Files or Objects: as managed in iRODS and extremely common in scientific research
iv. File systems, Object, Blob and Data-parallel (HDFS) raw storage: separated from computing or colocated? HDFS vs Lustre vs OpenStack Swift vs GPFS
v. Archive/Batched/Streaming: Streaming is incremental update of datasets with new algorithms to achieve real-time response (G7); before data gets to the compute system there is often an initial data-gathering phase characterized by a block size and timing. Block size varies from month (remote sensing, seismic) to day (genomic) to seconds or lower (real-time control, streaming)
  • Streaming is divided into categories overleaf

02/16/2016

Page 42: Big Data HPC Convergence

42

Data Source and Style Diamond View II
• Streaming is divided into 5 categories depending on event size, synchronization and integration:
  – Set of independent events where precise time sequencing is unimportant
  – Time series of connected small events where time ordering is important
  – Set of independent large events where each event needs parallel processing, with time sequencing not critical
  – Set of connected large events where each event needs parallel processing, with time sequencing critical
  – Stream of connected small or large events to be integrated in a complex way
vi. Shared/Dedicated/Transient/Permanent: qualitative property of data; other characteristics are needed for permanent auxiliary/comparison datasets, and these could be interdisciplinary, implying nontrivial data movement/replication
vii. Metadata/Provenance: clear qualitative property, but not for kernels, as an important aspect of the data collection process
viii. Internet of Things: 24 to 50 billion devices on the Internet by 2020
ix. HPC simulations: generate major (visualization) output that often needs to be mined
x. Using GIS: Geographical Information Systems provide attractive access to geospatial data

02/16/2016

Page 43: Big Data HPC Convergence

43

2. Perform real-time analytics on data source streams and notify users when specified events occur

02/16/2016

Storm, Kafka, HBase, Zookeeper

[Figure: streaming-analytics pattern – components: Streaming Data sources; Fetch streamed Data; Filter Identifying Events (with a user-supplied Specify filter); Posted Data / Identified Events; Post Selected Events; Repository; Archive. A minimal filter sketch follows.]
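A toy stand-in for the "filter identifying events" box in plain Java (the threshold and random source are hypothetical); a production system would read from Kafka topics and run the filter as a Storm bolt rather than using an in-memory queue.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A producer streams readings; a consumer applies a user-specified predicate
// ("specify filter") and notifies on matches ("identified events").
public class StreamingEventFilter {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Double> stream = new ArrayBlockingQueue<>(100);
        double threshold = 0.9;                       // the "specified event" (hypothetical)

        Thread producer = new Thread(() -> {          // stand-in for the streaming data source
            try {
                for (int i = 0; i < 1000; i++) stream.put(Math.random());
                stream.put(-1.0);                     // end-of-stream marker
            } catch (InterruptedException ignored) { }
        });

        Thread consumer = new Thread(() -> {          // the filter identifying events
            try {
                for (double v = stream.take(); v >= 0; v = stream.take())
                    if (v > threshold)
                        System.out.println("event detected: " + v);   // "notify users"
            } catch (InterruptedException ignored) { }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}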

Page 44: Big Data HPC Convergence

44

5. Perform interactive analytics on data in an analytics-optimized database

02/16/2016

[Figure: batch-analytics pattern – data (streaming or batch) lands in storage (HDFS, HBase) and is processed with Hadoop, Spark, Giraph, Pig, etc., with analytics libraries such as Mahout and R on top.]

Page 45: Big Data HPC Convergence

45

5A. Perform interactive analytics on observational scientific data

02/16/2016

[Figure: observational-science analytics pattern – scientific data is recorded in the "field", locally accumulated with initial computing, then transported (batched or direct transfer) to the primary analysis data system; storage is HDFS, HBase or file collections, processed with Grid or Many-Task software plus Hadoop, Spark, Giraph, Pig, science analysis codes, Mahout and R; an example input is streaming Twitter data for social networking.]

NIST examples include LHC, Remote Sensing, Astronomy and Bioinformatics

Page 46: Big Data HPC Convergence

46

HPC-ABDS

02/16/2016

Page 47: Big Data HPC Convergence

Data Platforms

02/16/2016

Page 48: Big Data HPC Convergence

02/16/2016

Big Data and (Exascale) Simulation Convergence II
Kaleidoscope of (Apache) Big Data Stack (ABDS) and HPC Technologies

Cross-Cutting Functions:

1) Message and Data Protocols: Avro, Thrift, Protobuf 2) Distributed Coordination: Google Chubby, Zookeeper, Giraffe, JGroups 3) Security & Privacy: InCommon, Eduroam OpenStack Keystone, LDAP, Sentry, Sqrrl, OpenID, SAML OAuth 4) Monitoring: Ambari, Ganglia, Nagios, Inca

17) Workflow-Orchestration: ODE, ActiveBPEL, Airavata, Pegasus, Kepler, Swift, Taverna, Triana, Trident, BioKepler, Galaxy, IPython, Dryad, Naiad, Oozie, Tez, Google FlumeJava, Crunch, Cascading, Scalding, e-Science Central, Azure Data Factory, Google Cloud Dataflow, NiFi (NSA), Jitterbit, Talend, Pentaho, Apatar, Docker Compose, KeystoneML 16) Application and Analytics: Mahout , MLlib , MLbase, DataFu, R, pbdR, Bioconductor, ImageJ, OpenCV, Scalapack, PetSc, PLASMA MAGMA, Azure Machine Learning, Google Prediction API & Translation API, mlpy, scikit-learn, PyBrain, CompLearn, DAAL(Intel), Caffe, Torch, Theano, DL4j, H2O, IBM Watson, Oracle PGX, GraphLab, GraphX, IBM System G, GraphBuilder(Intel), TinkerPop, Parasol, Dream:Lab, Google Fusion Tables, CINET, NWB, Elasticsearch, Kibana, Logstash, Graylog, Splunk, Tableau, D3.js, three.js, Potree, DC.js, TensorFlow, CNTK 15B) Application Hosting Frameworks: Google App Engine, AppScale, Red Hat OpenShift, Heroku, Aerobatic, AWS Elastic Beanstalk, Azure, Cloud Foundry, Pivotal, IBM BlueMix, Ninefold, Jelastic, Stackato, appfog, CloudBees, Engine Yard, CloudControl, dotCloud, Dokku, OSGi, HUBzero, OODT, Agave, Atmosphere 15A) High level Programming: Kite, Hive, HCatalog, Tajo, Shark, Phoenix, Impala, MRQL, SAP HANA, HadoopDB, PolyBase, Pivotal HD/Hawq, Presto, Google Dremel, Google BigQuery, Amazon Redshift, Drill, Kyoto Cabinet, Pig, Sawzall, Google Cloud DataFlow, Summingbird 14B) Streams: Storm, S4, Samza, Granules, Neptune, Google MillWheel, Amazon Kinesis, LinkedIn, Twitter Heron, Databus, Facebook Puma/Ptail/Scribe/ODS, Azure Stream Analytics, Floe, Spark Streaming, Flink Streaming, DataTurbine 14A) Basic Programming model and runtime, SPMD, MapReduce: Hadoop, Spark, Twister, MR-MPI, Stratosphere (Apache Flink), Reef, Disco, Hama, Giraph, Pregel, Pegasus, Ligra, GraphChi, Galois, Medusa-GPU, MapGraph, Totem 13) Inter process communication Collectives, point-to-point, publish-subscribe: MPI, HPX-5, Argo BEAST HPX-5 BEAST PULSAR, Harp, Netty, ZeroMQ, ActiveMQ, RabbitMQ, NaradaBrokering, QPid, Kafka, Kestrel, JMS, AMQP, Stomp, MQTT, Marionette Collective, Public Cloud: Amazon SNS, Lambda, Google Pub Sub, Azure Queues, Event Hubs 12) In-memory databases/caches: Gora (general object from NoSQL), Memcached, Redis, LMDB (key value), Hazelcast, Ehcache, Infinispan, VoltDB, H-Store 12) Object-relational mapping: Hibernate, OpenJPA, EclipseLink, DataNucleus, ODBC/JDBC 12) Extraction Tools: UIMA, Tika 11C) SQL(NewSQL): Oracle, DB2, SQL Server, SQLite, MySQL, PostgreSQL, CUBRID, Galera Cluster, SciDB, Rasdaman, Apache Derby, Pivotal Greenplum, Google Cloud SQL, Azure SQL, Amazon RDS, Google F1, IBM dashDB, N1QL, BlinkDB, Spark SQL 11B) NoSQL: Lucene, Solr, Solandra, Voldemort, Riak, ZHT, Berkeley DB, Kyoto/Tokyo Cabinet, Tycoon, Tyrant, MongoDB, Espresso, CouchDB, Couchbase, IBM Cloudant, Pivotal Gemfire, HBase, Google Bigtable, LevelDB, Megastore and Spanner, Accumulo, Cassandra, RYA, Sqrrl, Neo4J, graphdb, Yarcdata, AllegroGraph, Blazegraph, Facebook Tao, Titan:db, Jena, Sesame Public Cloud: Azure Table, Amazon Dynamo, Google DataStore 11A) File management: iRODS, NetCDF, CDF, HDF, OPeNDAP, FITS, RCFile, ORC, Parquet 10) Data Transport: BitTorrent, HTTP, FTP, SSH, Globus Online (GridFTP), Flume, Sqoop, Pivotal GPLOAD/GPFDIST 9) Cluster Resource Management: Mesos, Yarn, Helix, Llama, Google Omega, Facebook Corona, Celery, HTCondor, SGE, OpenPBS, Moab, Slurm, Torque, Globus Tools, Pilot Jobs 8) File systems: HDFS, Swift, Haystack, f4, Cinder, Ceph, 
FUSE, Gluster, Lustre, GPFS, GFFS Public Cloud: Amazon S3, Azure Blob, Google Cloud Storage 7) Interoperability: Libvirt, Libcloud, JClouds, TOSCA, OCCI, CDMI, Whirr, Saga, Genesis 6) DevOps: Docker (Machine, Swarm), Puppet, Chef, Ansible, SaltStack, Boto, Cobbler, Xcat, Razor, CloudMesh, Juju, Foreman, OpenStack Heat, Sahara, Rocks, Cisco Intelligent Automation for Cloud, Ubuntu MaaS, Facebook Tupperware, AWS OpsWorks, OpenStack Ironic, Google Kubernetes, Buildstep, Gitreceive, OpenTOSCA, Winery, CloudML, Blueprints, Terraform, DevOpSlang, Any2Api 5) IaaS Management from HPC to hypervisors: Xen, KVM, QEMU, Hyper-V, VirtualBox, OpenVZ, LXC, Linux-Vserver, OpenStack, OpenNebula, Eucalyptus, Nimbus, CloudStack, CoreOS, rkt, VMware ESXi, vSphere and vCloud, Amazon, Azure, Google and other public Clouds Networking: Google Cloud DNS, Amazon Route 53

21 layers, over 350 software packages (January 29, 2016)

Page 49: Big Data HPC Convergence

49

Functionality of 21 HPC-ABDS Layers
1) Message Protocols
2) Distributed Coordination
3) Security & Privacy
4) Monitoring
5) IaaS Management from HPC to hypervisors
6) DevOps
7) Interoperability
8) File systems
9) Cluster Resource Management
10) Data Transport
11) A) File management  B) NoSQL  C) SQL
12) In-memory databases & caches / Object-relational mapping / Extraction Tools
13) Inter-process communication: Collectives, point-to-point, publish-subscribe, MPI
14) A) Basic Programming model and runtime, SPMD, MapReduce  B) Streaming
15) A) High level Programming  B) Frameworks
16) Application and Analytics
17) Workflow-Orchestration

Here are 21 functionalities (including the 11, 14, 15 subparts): 4 cross-cutting at the top, 17 in order of the layered diagram starting at the bottom.

02/16/2016

Page 50: Big Data HPC Convergence

02/16/2016 50

Page 51: Big Data HPC Convergence

51

Java Grande

Revisited on 3 data analytics codes:
Clustering
Multidimensional Scaling
Latent Dirichlet Allocation
– all sophisticated algorithms

02/16/2016

Page 52: Big Data HPC Convergence

Java MPI performs better than Threads I

02/16/2016

• 48 24-core Haswell nodes, 200K DA-MDS dataset size
• Default MPI much worse than threads
• Optimized MPI using shared-memory node-based messaging is much better than threads

Page 53: Big Data HPC Convergence

MPI versus OpenMPI Intranode 400K DA-MDS

• Note, the default Java + OMPI could NOT handle the 400K data
• Java + SM + OMPI is our zero intra-node messaging implementation using Java shared memory maps
• SPIDAL Java is the above plus other optimizations
• Input: 400K x 400K binary Short (2 bytes) matrix, ~300 GB
• Runtime chart compares OpenMPI Intranode against OpenMP Intranode

02/16/2016
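The "Java shared memory maps" idea can be sketched with the standard java.nio memory-mapped file API: processes on the same node map the same file and exchange data through it instead of sending intra-node MPI messages. This is only an illustrative fragment with a hypothetical file name and layout (a /dev/shm tmpfs path as on typical Linux nodes), not the SPIDAL implementation; run it as two JVMs on one node with argument 0 (writer) and 1 (reader).

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Two processes on one node map the same file; rank 0 writes a block of doubles,
// rank 1 reads it, replacing an intra-node MPI send/receive. Hypothetical layout:
// the first 8 bytes are a ready flag, followed by the data block.
public class SharedMemoryMapSketch {
    public static void main(String[] args) throws Exception {
        int rank = Integer.parseInt(args[0]);            // 0 = writer, 1 = reader
        int doubles = 1024;
        long bytes = 8 + 8L * doubles;

        try (RandomAccessFile f = new RandomAccessFile("/dev/shm/node-shared.bin", "rw");
             FileChannel ch = f.getChannel()) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, bytes);

            if (rank == 0) {
                for (int i = 0; i < doubles; i++) buf.putDouble(8 + 8 * i, i * 0.5);
                buf.putLong(0, 1L);                      // publish "ready" flag
            } else {
                while (buf.getLong(0) != 1L) Thread.onSpinWait();   // wait for writer
                double first = buf.getDouble(8), last = buf.getDouble(8 + 8 * (doubles - 1));
                System.out.println("read " + doubles + " doubles: " + first + " ... " + last);
            }
        }
    }
}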

Page 54: Big Data HPC Convergence

Java MPI performs better than Threads II
128 24-core Haswell nodes

02/16/2016

[Chart: 200K dataset speedup, comparing Best Threads and Best MPI]

Page 55: Big Data HPC Convergence

DA-MDS Speedup versus Problem Sizes

• All tests were done with the SPIDAL Java version
• Core counts 48 to 1152 were run on 48 nodes by varying the number of cores used within a node from 1 to 24
• Core counts 2304 and 3072 were run on 96 and 128 nodes while using 24 cores per node

02/16/2016

Page 56: Big Data HPC Convergence

DA-MDS Scaling on 32 36-core Haswell Nodes

• Core counts of 32 to 1152 were done on 32 nodes by varying the number of cores used within a node from 1 to 36
• Scales up to 32 cores per node (16 cores per chip)

02/16/2016

MPI both Internode and Intranode

Page 57: Big Data HPC Convergence

Java compared to Fortran and C

02/16/2016

• NAS Java is the old Java Grande benchmark
• NAS serial benchmark total times in seconds
• SPIDAL Java optimizations here were incomplete and included only Zero Garbage Collection and cache optimizations
• This is our machine learning code
• DA-MDS block matrix multiplication times in milliseconds
• 1x1* is the serial version and does the computation of a single process in the 24x48 pattern
• SPIDAL Java includes all optimizations

SPIDAL Java optimizations (sketched below):
1. Cache and memory optimizations – these include blocked loops, 1D arrays to represent 2D data, and loop re-ordering
2. Minimal memory and zero full Garbage Collection (GC) – these include using statically allocated arrays to reduce GC and using the minimum number of arrays possible
3. Off-heap data structures – these were used to load initial data, for intra-node messaging, and to do MPI communications
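A compact illustration of the three optimization styles (not SPIDAL's actual code): a 2-D block stored as a preallocated 1-D array reused across iterations (avoiding GC churn), a blocked matrix multiply for cache locality, and an off-heap direct buffer of the kind handed to native MPI or used for intra-node messaging.

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Illustration of the three SPIDAL-style optimizations on a small matrix multiply.
public class OptimizationSketch {
    static final int N = 256, BLOCK = 32;
    // (2) statically allocated, reused across iterations => no churn for the GC
    static final double[] A = new double[N * N], B = new double[N * N], C = new double[N * N];
    // (3) off-heap direct buffer, e.g. to pass results to native MPI without copying onto the heap
    static final ByteBuffer OFF_HEAP = ByteBuffer.allocateDirect(N * N * Double.BYTES)
                                                 .order(ByteOrder.nativeOrder());

    // (1) 1-D layout for 2-D data plus blocked loops for cache reuse
    static void blockedMultiply() {
        java.util.Arrays.fill(C, 0.0);
        for (int ii = 0; ii < N; ii += BLOCK)
            for (int kk = 0; kk < N; kk += BLOCK)
                for (int jj = 0; jj < N; jj += BLOCK)
                    for (int i = ii; i < ii + BLOCK; i++)
                        for (int k = kk; k < kk + BLOCK; k++) {
                            double aik = A[i * N + k];
                            for (int j = jj; j < jj + BLOCK; j++)
                                C[i * N + j] += aik * B[k * N + j];
                        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < N * N; i++) { A[i] = 1.0; B[i] = 2.0; }
        blockedMultiply();
        for (int i = 0; i < N * N; i++) OFF_HEAP.putDouble(i * Double.BYTES, C[i]);
        System.out.println("C[0] = " + OFF_HEAP.getDouble(0));   // expect 2.0 * N = 512.0
    }
}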

Page 58: Big Data HPC Convergence

446K sequences, ~100 clusters

02/16/2016 58

Page 59: Big Data HPC Convergence

59

July 21 2007 Positions

End 2008 Positions

10 year US Stock daily price time series mapped to 3D (work in progress)

3400 stocks; Sector Groupings

02/16/2016

Page 60: Big Data HPC Convergence

Latent Dirichlet Allocation on 100 Haswell nodes: red is Map-Collective

[Chart panels: Clueweb, enwiki, Bi-gram, Clueweb]

02/16/2016

Page 61: Big Data HPC Convergence

61

Big Data Exascale convergence

02/16/2016

Page 62: Big Data HPC Convergence

Big Data and (Exascale) Simulation Convergence I
• Our approach to Convergence is built around two ideas that avoid addressing the hardware directly, as with modern DevOps technology it isn't hard to retarget applications between different hardware systems.
• Rather, we approach Convergence through applications and software.
• Convergence Diamonds unify Big Simulation and Big Data applications and so allow one to more easily identify good approaches to implement Big Data and Exascale applications in a uniform fashion.
• Software convergence builds on the HPC-ABDS (High Performance Computing enhanced Apache Big Data Software Stack) concept (http://dsc.soic.indiana.edu/publications/HPC-ABDSDescribed_final.pdf, http://hpc-abds.org/kaleidoscope/)
  – This arranges key HPC and ABDS software together in 21 layers showing where HPC and ABDS overlap. For example, it introduces a communication layer to allow ABDS runtimes like Hadoop, Storm, Spark and Flink to use the richest high performance capabilities shared with MPI. Generally it proposes how to use HPC and ABDS software together.
  – The layered architecture offers some protection against rapid ABDS technology change

02/16/2016

Page 63: Big Data HPC Convergence

Dual Convergence Architecture
• Running the same HPC-ABDS across all platforms, but the data management machine has a different balance in I/O, network and compute from the "model" machine

02/16/2016

[Figure: two clusters of nodes – a Data Management cluster whose nodes each combine compute (C) and data (D), and a compute-only (C) cluster – labelled "Data Management" and "Model for Big Data and Big Simulation".]

Page 64: Big Data HPC Convergence

Things to do for Big Data and (Exascale) Simulation Convergence II

• Converge Applications: Separate data and model to classify applications and benchmarks across Big Data and Big Simulations, giving Convergence Diamonds with many facets
  – Indicated how to extend Big Data Ogres to Big Simulations by looking separately at model and data in Ogres
  – Diamonds have four views or collections of facets: Problem Architecture; Execution; Data Source and Style; Big Data and Big Simulation Processing
  – Facets cover data, model or their combination – the problem or application
  – Note the Simulation Processing View has, by construction, similarities to old parallel computing benchmarks

02/16/2016

Page 65: Big Data HPC Convergence

Things to do for Big Data and (Exascale) Simulation Convergence III

• Convergence Benchmarks: we will use benchmarks that cover the facets of the Convergence Diamonds, i.e. cover big data and simulations
  – As we separate data and model, compute-intensive simulation benchmarks (e.g. solve a partial differential equation) will be linked with data analytics (the model in big data)
  – IU focus: SPIDAL (Scalable Parallel Interoperable Data Analytics Library), with high performance clustering, dimension reduction, graphs, image processing, as well as MLlib, will be linked to core PDE solvers to explore the communication layer of parallel middleware
  – Maybe integrating data and simulation is an interesting idea in benchmark sets
• Convergence Programming Model
  – Note parameter servers used in machine learning will be mimicked by collective operators invoked on distributed parameter (model) storage
  – E.g. Harp as a Hadoop HPC plug-in
  – There should be interest in using Big Data software systems to support exascale simulations
  – Streaming solutions from IoT to analysis of astronomy and LHC data will drive high performance versions of Apache streaming systems

02/16/2016

Page 66: Big Data HPC Convergence

Things to do for Big Data and (Exascale) Simulation Convergence IV

• Converge Language: Make Java run as fast as C++ (Java Grande) for computing and communication
• It is surprising that there is so much Big Data work in industry, yet basic high performance Java methodology and tools are missing
  – Needs some work, as there is no agreed OpenMP for Java parallel threads (a plain-Java workaround is sketched below)
  – OpenMPI supports Java but needs enhancements to get the best performance on needed collectives (for C++ and Java)
  – A Convergence Language Grande should support Python, Java (Scala), C/C++ (Fortran)

02/16/2016
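In the absence of an agreed OpenMP for Java, parallel loops are usually written with parallel streams or java.util.concurrent. A minimal sketch of a parallel loop with a reduction (the rough analogue of an OpenMP parallel for with reduction(+:sum)):

import java.util.stream.IntStream;

// Parallel-for with a reduction over the common fork-join pool; chunking and
// scheduling are left to the runtime, unlike OpenMP's explicit schedule clause.
public class ParallelForSketch {
    public static void main(String[] args) {
        int n = 10_000_000;
        double[] x = new double[n];
        for (int i = 0; i < n; i++) x[i] = 1.0 / (i + 1);

        double sum = IntStream.range(0, n).parallel()
                              .mapToDouble(i -> x[i] * x[i])
                              .sum();

        System.out.println("sum of squares = " + sum);   // approaches pi^2 / 6
    }
}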