
Page 1: Title Slide

DoD Sensor Processing: Applications and Supporting Software Technology

Dr. Jeremy Kepner
MIT Lincoln Laboratory
SC2002 Tutorial

This work is sponsored by the High Performance Computing Modernization Office under Air Force Contract F19628-00-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the author and are not necessarily endorsed by the United States Government.

Page 2: Preamble: Existing Standards

[Diagram: A parallel embedded processor (processors P0 to P3 plus a node controller) connected through a system controller to consoles and other computers. Control communication: CORBA, HP-CORBA, SCA. Data communication: MPI, MPI/RT, DRI. Computation: VSIPL.]

Definitions:
VSIPL = Vector, Signal, and Image Processing Library
MPI = Message Passing Interface
MPI/RT = MPI Real-Time
DRI = Data Reorganization Interface
CORBA = Common Object Request Broker Architecture
HP-CORBA = High Performance CORBA

• A variety of software standards support existing DoD signal processing systems

Page 3: Preamble: Next Generation Standards

[Diagram: HPEC Software Initiative goals: Performance (1.5x), Portability (3x), and Productivity (3x), pursued through a cycle of Demonstrate, Develop, and Applied Research around object oriented open standards. Interoperable & scalable.]

Portability: lines-of-code changed to port/scale to a new system
Productivity: lines-of-code added to add new functionality
Performance: computation and communication benchmarks

• Software Initiative Goal: transition research into commercial standards

Page 4: HPEC-SI: VSIPL++ and Parallel VSIPL

• Demonstrate insertions into fielded systems (e.g., CIP); demonstrate 3x portability
• High-level code abstraction; reduce code size 3x
• Unified embedded computation/communication standard; demonstrate scalability

[Diagram: Functionality vs. time across three phases, starting from the existing standards VSIPL and MPI.
Phase 1: Demonstration: Existing Standards; Development: Object-Oriented Standards; Applied Research: Unified Comp/Comm Lib. Outcome: VSIPL++ prototype.
Phase 2: Demonstration: Object-Oriented Standards (VSIPL++); Development: Unified Comp/Comm Lib; Applied Research: Fault tolerance. Outcome: Parallel VSIPL++ prototype.
Phase 3: Demonstration: Unified Comp/Comm Lib (Parallel VSIPL++); Development: Fault tolerance; Applied Research: Self-optimization.]

Page 5: Preamble: The Links

High Performance Embedded Computing Workshop: http://www.ll.mit.edu/HPEC
High Performance Embedded Computing Software Initiative: http://www.hpec-si.org/
Vector, Signal, and Image Processing Library: http://www.vsipl.org/
MPI Software Technologies, Inc.: http://www.mpi-softtech.com/
Data Reorganization Initiative: http://www.data-re.org/
CodeSourcery, LLC: http://www.codesourcery.com/
MatlabMPI: http://www.ll.mit.edu/MatlabMPI

Page 6: Outline

• Introduction
– DoD Needs
– Parallel Stream Computing
– Basic Pipeline Processing
• Processing Algorithms
• Parallel System Analysis
• Software Frameworks
• Summary

Page 7: Why Is DoD Concerned with Embedded Software?

[Chart: Estimated DoD expenditures for embedded signal and image processing hardware and software, FY98 through FY05, in $B. Source: “HPEC Market Study,” March 2001.]

• COTS acquisition practices have shifted the burden from “point design” hardware to “point design” software (i.e., COTS hardware requires COTS software)
• Software costs for embedded systems could be reduced by one-third with improved programming models, methodologies, and standards

Page 8: Embedded Stream Processing

• Embedded stream processing requires high performance computing and networking

[Chart: Peak bisection bandwidth (GB/s, 0.1 to 10000) vs. peak processor power (Gflop/s, 1 to 100000), following Moore’s Law and faster networks. Application domains shown: radar, sonar, wireless, video, medical, scientific, encoding. The desired region of performance lies beyond today’s COTS systems, toward the goal.]

Page 9: Military Embedded Processing

• Signal processing drives computing requirements
• Rapid technology insertion is critical for sensor dominance

[Chart: Processing requirements (TFLOPS, 0.001 to 100, log scale) vs. year (1990 to 2010) for airborne radar, shipboard surveillance, UAV, missile seeker, SBR, small unit operations, and SIGINT. Requirements are increasing by an order of magnitude every 5 years; embedded processing requirements will exceed 10 TFLOPS in the 2005-2010 time frame.]

Page 10: Military Query Processing

[Diagram: Sensors (wide area imaging, hyperspectral imaging, SAR/GMTI) feed parallel computing resources over high speed networks (e.g., BoSSNET) running parallel distributed software and multi-sensor algorithms, supporting missions such as targeting, force location, and infrastructure assessment.]

• Highly distributed computing
• Fewer, very large data movements

Page 11: Parallel Pipeline

[Diagram: A three-stage signal processing algorithm, Filter (XOUT = FIR(XIN)), Beamform (XOUT = w*XIN), and Detect (XOUT = |XIN| > c), mapped onto a parallel computer.]

• Data parallel within stages
• Task/pipeline parallel across stages

Page 12: Filtering

XOUT = FIR(XIN, h)

• Fundamental signal processing operation
• Converts data from wideband to narrowband via a filter: O(Nsamples Nchannel Nh / Ndecimation)
• Degrees of parallelism: Nchannel

[Diagram: Xin (Nchannel x Nsamples) is filtered to Xout (Nchannel x Nsamples/Ndecimation).]

Page 13: Beamforming

XOUT = w*XIN

• Fundamental operation for all multi-channel receiver systems
• Converts data from channels to beams via matrix multiply: O(Nsamples Nchannel Nbeams)
• Key: the weight matrix can be computed in advance
• Degrees of parallelism: Nsamples

[Diagram: Xin (Nchannel x Nsamples) is transformed to Xout (Nbeams x Nsamples).]

Page 14: Detection

XOUT = |XIN| > c

• Fundamental operation for all processing chains
• Converts data from a stream to a list of detections via thresholding: O(Nsamples Nbeams)
• The number of detections is data dependent
• Degrees of parallelism: Nbeams, Nchannels, or Ndetects

[Diagram: Xin (Nbeams x Nsamples) is thresholded to Xout, a list of Ndetects detections.]

Page 15: Types of Parallelism

[Diagram: Input feeds a scheduler and FIR filters; the FIR filters feed beamformers 1 and 2, which feed detectors 1 and 2. Annotated forms of parallelism: task parallel (scheduler vs. processing chain), pipeline (across stages), round robin (across the FIR filters), and data parallel (within a stage).]

Page 16: Outline

• Introduction
• Processing Algorithms
– Filtering
– Beamforming
– Detection
• Parallel System Analysis
• Software Frameworks
• Summary

Page 17: FIR Overview

• Uses: pulse compression, equalization, …
• Formulation: y = h ∘ x
– y = filtered data [#samples]
– x = unfiltered data [#samples]
– h = filter [#coefficients]
– ∘ = convolution operator
• Algorithm parameters: #channels, #samples, #coefficients, #decimation
• Implementation parameters: direct sum or FFT based
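To make the direct-sum option concrete, here is a minimal single-channel sketch (the function name firFilter and the real-valued data are assumptions of this example, not part of VSIPL):

    #include <cstddef>
    #include <vector>

    // Direct-sum FIR with decimation: y[n] = sum_k h[k] * x[n*D - k].
    // x = unfiltered data (#samples), h = filter (#coefficients),
    // D = decimation factor; returns #samples/D filtered samples.
    std::vector<float> firFilter(const std::vector<float>& x,
                                 const std::vector<float>& h,
                                 std::size_t D = 1) {
        std::vector<float> y(x.size() / D, 0.0f);
        for (std::size_t n = 0; n < y.size(); ++n)
            for (std::size_t k = 0; k < h.size(); ++k)
                if (n * D >= k)                  // skip samples before t = 0
                    y[n] += h[k] * x[n * D - k];
        return y;   // cost O(Nsamples Nh / Ndecimation) per channel
    }

Each channel runs this loop independently, which is the Nchannel-way parallelism noted above.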

Page 18: Basic Filtering via FFT

• The Fourier transform (FFT) allows specific frequencies to be selected: O(N log N)

[Diagram: An FFT converts a signal from the time domain to the frequency domain, where individual frequencies (including DC) can be selected.]

Page 19: Basic Filtering via FIR

[Diagram: Band-pass filter example. Input x with power at any frequency passes through FIR(x,h), a tapped delay line with coefficients h1, h2, h3, …, hL, producing output y with power only between frequencies f1 and f2.]

• Finite impulse response (FIR) filtering allows a range of frequencies to be selected: O(N Nh)

Page 20: Multi-Channel Parallel FIR Filter

• Parallel mapping constraints:
– #channels MOD #processors = 0
– First parallelize across channels
– Second parallelize within a channel, based on #samples and #coefficients

[Diagram: Channels 1 through 4 are each assigned their own FIR filter.]

Page 21: Outline

• Introduction
• Processing Algorithms
– Filtering
– Beamforming
– Detection
• Parallel System Analysis
• Software Frameworks
• Summary

Page 22: Beamforming Overview

• Uses: angle estimation
• Formulation: y = w^H x
– y = beamformed data [#samples x #beams]
– x = channel data [#samples x #channels]
– w = (tapered) steering vectors [#channels x #beams]
• Algorithm parameters: #channels, #samples, #beams, (tapered) steering vectors
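To make the matrix-multiply view concrete, here is a minimal sketch (the function name beamform and the flat row-major layout are assumptions of this example, not part of any standard):

    #include <complex>
    #include <cstddef>
    #include <vector>

    using cf = std::complex<float>;

    // y = w^H x: combine Nch channels into Nbeam beams, sample by sample.
    // x: channel data, Nsamp x Nch (row-major); w: steering vectors,
    // Nch x Nbeam. Returns beamformed data, Nsamp x Nbeam.
    std::vector<cf> beamform(const std::vector<cf>& x, const std::vector<cf>& w,
                             std::size_t Nsamp, std::size_t Nch,
                             std::size_t Nbeam) {
        std::vector<cf> y(Nsamp * Nbeam);
        for (std::size_t s = 0; s < Nsamp; ++s)        // independent samples:
            for (std::size_t b = 0; b < Nbeam; ++b) {  //   Nsamp-way parallelism
                cf acc{0.0f, 0.0f};
                for (std::size_t c = 0; c < Nch; ++c)
                    acc += std::conj(w[c * Nbeam + b]) * x[s * Nch + c];
                y[s * Nbeam + b] = acc;                // O(Nsamp Nch Nbeam)
            }
        return y;
    }

Because w can be computed in advance, the inner loops are a fixed matrix multiply applied to every sample.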

Page 23: Basic Beamforming Physics

[Diagram: A source emits wavefronts; the received phasefront arrives across a linear array of sensors, producing samples e^{jθ1}, e^{jθ2}, …, e^{jθ7} along the direction of propagation.]

• The received phasefront creates a complex exponential across the array, with frequency directly related to the direction of propagation
• Estimating the frequency of the impinging phasefront indicates the direction of propagation
• The direction of propagation is also known as angle-of-arrival (AOA) or direction-of-arrival (DOA)

Page 24: Parallel Beamformer

• Parallel mapping constraints:
– #segments MOD #processors = 0
– First parallelize across segments
– Second parallelize across beams

[Diagram: Segments 1 through 4 are each assigned their own beamformer.]

Page 25: Outline

• Introduction
• Processing Algorithms
– Filtering
– Beamforming
– Detection
• Parallel System Analysis
• Software Frameworks
• Summary

Page 26: CFAR Detection Overview

• Constant False Alarm Rate (CFAR) detection
• Formulation: x[n] > T[n]
– x[n] = cell under test
– T[n] = Sum(x_i)/2M, for Nguard < |i − n| < M + Nguard
– Angle estimate: take the ratio of beams; do a lookup
• Algorithm parameters: #samples, #beams, steering vectors, #noise samples, #max detects
• Implementation parameters: greatest-of, censored greatest-of, ordered statistics, …; averaging vs. sorting
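To make the thresholding step concrete, here is a minimal cell-averaging sketch for one beam (the function name cfarDetect and the mean-based noise estimate are assumptions of this example; the two-pass greatest-of variant is detailed on the next two slides):

    #include <cstddef>
    #include <vector>

    // Cell-averaging CFAR along one beam: cell x[n] is a detection if it
    // exceeds T times the mean of 2M training cells, skipping G guard
    // cells on each side of the cell under test.
    std::vector<std::size_t> cfarDetect(const std::vector<float>& x,
                                        std::size_t M, std::size_t G, float T) {
        std::vector<std::size_t> detects;          // Ndetects is data dependent
        for (std::size_t n = M + G; n + M + G < x.size(); ++n) {
            float noise = 0.0f;
            for (std::size_t k = G + 1; k <= G + M; ++k)
                noise += x[n - k] + x[n + k];      // leading + trailing cells
            noise /= (2.0f * M);                   // noise estimate
            if (x[n] > T * noise)
                detects.push_back(n);
        }
        return detects;
    }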

Page 27: Two-Pass Greatest-Of Excision CFAR (First Pass)

[Diagram: Range cells around the cell under test: M leading training cells (L), G guard cells, the range cell under test, G guard cells, and M trailing training cells (T). Input data x[i] feeds a noise-estimate buffer b[i], with 1/M weighting applied to each side.]

First-pass excision test, with leading and trailing training cells $z_L[n]$ and $z_T[n]$, $n = 1, \ldots, M$:

$$x[i] \;\gtrless\; T_1 \cdot \frac{2}{M}\,\max\!\left(\sum_{n=1}^{M} z_T[n],\ \sum_{n=1}^{M} z_L[n]\right)$$

A cell above the threshold is excised from the noise estimate; a cell below it is retained.

Reference: S. L. Wilson, “Analysis of NRL’s Two-Pass Greatest-Of Excision CFAR,” Internal Memorandum, MIT Lincoln Laboratory, October 5, 1998.

Page 28: Two-Pass Greatest-Of Excision CFAR (Second Pass)

[Diagram: The same cell layout as the first pass: M leading training cells, G guard cells, the cell under test, G guard cells, and M trailing training cells; input data x[i] and noise-estimate buffer b[i].]

Second-pass detection test, declaring a target above threshold and noise below:

$$x[i] \;\gtrless\; T_2 \cdot \frac{2}{M}\,\max\!\left(\sum_{n=1}^{M} z_T[n],\ \sum_{n=1}^{M} z_L[n]\right), \qquad T_2 = f(M, T_1, P_{FA})$$

Page 29: Parallel CFAR Detection

• Parallel mapping constraints:
– #segments MOD #processors = 0
– First parallelize across segments
– Second parallelize across beams

[Diagram: Segments 1 through 4 are each assigned their own CFAR detector.]

Page 30: Outline

• Introduction
• Processing Algorithms
• Parallel System Analysis
– Latency vs. Throughput
– Corner Turn
– Dynamic Load Balancing
• Software Frameworks
• Summary

Page 31: Latency and Throughput

[Diagram: The three-stage signal processing algorithm (Filter, Beamform, Detect) mapped onto a parallel computer, with per-stage and per-conduit times of 0.5, 0.5, 1.0, 0.3, and 0.8 seconds.]

• Latency: total processing + communication time for one frame of data (sum of the times)
• Throughput: rate at which frames can be input (1 / max of the times)

Latency = 0.5 + 0.5 + 1.0 + 0.3 + 0.8 = 3.1 seconds
Throughput = 1 / max(0.5, 0.5, 1.0, 0.3, 0.8) = 1 frame/second
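The same bookkeeping in a few lines of C++ (purely illustrative):

    #include <algorithm>
    #include <cstdio>
    #include <numeric>
    #include <vector>

    int main() {
        // Per-stage and per-conduit times for one frame, in seconds.
        std::vector<double> t = {0.5, 0.5, 1.0, 0.3, 0.8};
        double latency = std::accumulate(t.begin(), t.end(), 0.0);      // sum
        double throughput = 1.0 / *std::max_element(t.begin(), t.end()); // 1/max
        std::printf("latency = %.1f s, throughput = %.1f frames/s\n",
                    latency, throughput);   // 3.1 s, 1.0 frames/s
        return 0;
    }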

Page 32: Example: Optimum System Latency

[Charts: Component latency vs. hardware units N, with filter latency = 2/N and beamform latency = 1/N, under the constraints latency < 8 and hardware < 32; and system latency over filter hardware vs. beamform hardware, showing local and global optima.]

• Simple two-component system
• The local optimum fails to satisfy the global constraints
• A system view is needed to find the global optimum

Page 33: System Graph

[Diagram: System graph for the Filter, Beamform, Detect pipeline.]

• A node is a unique parallel mapping of a computation task
• An edge is the conduit between a pair of parallel mappings
• The system graph can store the hardware resource usage of every possible Task and Conduit

Page 34: Optimal Mapping of Complex Algorithms

[Diagram: An application pipeline (Input XIN; Low Pass Filter with weights W1 and W2 applied via FIR1 and FIR2; Beamform with weights W3 applied via multiply; Matched Filter with weights W4 applied via FFT and IFFT, producing XOUT) mapped onto different hardware: workstation, embedded multi-computer, PowerPC cluster, embedded board, and Intel cluster. Each pairing yields a different optimal map.]

• The process of mapping an algorithm to hardware needs to be automated

Page 35: Outline

• Introduction
• Processing Algorithms
• Parallel System Analysis
– Latency vs. Throughput
– Corner Turn
– Dynamic Load Balancing
• Software Frameworks
• Summary

Page 36: Channel Space -> Beam Space

[Diagram: Input channels 1 through N are filtered in parallel; beamforming applies weights that combine all channels to form beams 1 through M.]

• Data enters the system via different channels
• Filtering is performed in a channel-parallel fashion
• Beamforming requires combining data from multiple channels

Page 37: Corner Turn Operation

[Diagram: The original data matrix (channels x samples) is distributed across processors by channel for the Filter stage; the corner-turned data matrix is distributed by sample for the Beamform stage. Half the data moves across the bisection of the machine.]

• Each processor sends data to each other processor

Page 38: Corner Turn for Signal Processing

[Diagram: A data cube (channel x pulse x sample) distributed across P1 processors is redistributed across P2 processors.]

Corner turn model: all-to-all communication in which each of P1 processors sends a message of size B to each of P2 processors; the total data cube size is P1 P2 B.

$$T_{CT} = \frac{P_1 P_2}{Q}\left(\alpha + \frac{B}{\beta}\right)$$

where B = bytes per message, Q = parallel paths, α = message startup cost, and β = link bandwidth.

The corner turn changes the matrix distribution to exploit parallelism in successive pipeline stages.
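A sketch that evaluates this model (the function name cornerTurnTime and all parameter values are illustrative assumptions, not measurements):

    #include <cstdio>

    // Corner-turn time model: T_CT = (P1*P2/Q) * (alpha + B/beta).
    double cornerTurnTime(int P1, int P2, int Q, double B,
                          double alpha, double beta) {
        return (static_cast<double>(P1) * P2 / Q) * (alpha + B / beta);
    }

    int main() {
        // Example: 8x8 processors, 4 parallel paths, 1 MB messages,
        // 30 us startup cost, 100 MB/s links.
        double t = cornerTurnTime(8, 8, 4, 1.0e6, 30.0e-6, 100.0e6);
        std::printf("T_CT = %.3f s\n", t);   // (64/4)*(30e-6 + 0.01) ~ 0.16 s
        return 0;
    }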

Page 39: Outline

• Introduction
• Processing Algorithms
• Parallel System Analysis
– Latency vs. Throughput
– Corner Turn
– Dynamic Load Balancing
• Software Frameworks
• Summary

Page 40: Dynamic Load Balancing

[Diagram: An image processing pipeline: Detection (work ∝ pixels, static) followed by Estimation (work ∝ detections, dynamic). A static parallel implementation balances the pixel-based stage but leaves the detection-based stage unbalanced, with per-processor loads ranging from 0.08 to 0.97.]

• Static parallelism implementations lead to unbalanced loads

Page 41: Static Parallelism and Poisson’s Wall (i.e., “Balls into Bins”)

[Chart: Speedup vs. number of processors (1 to 1024, log scale) for static parallelism with M = 1000 units of work and allowed failure rates f = 10%, 1%, and 0.1%, compared to linear speedup; efficiency falls from roughly 50% to 15% at large processor counts.]

M = # units of work; f = allowed failure rate

• Random fluctuations bound performance
• Much worse if targets are correlated
• Sets the maximum number of targets in nearly every system

Page 42: Static Derivation

$$\mathrm{speedup} \equiv \frac{N_d}{N_f}, \qquad \lambda \equiv \frac{N_d}{N_p}$$

where $N_d$ = total detections, $N_f$ = allowed detections per processor at failure rate $f$, and $N_p$ = number of processors. $N_f$ is defined by

$$P_\lambda(N_f)^{N_p} = 1 - f, \qquad P_\lambda(N) = \sum_{n=0}^{N} \frac{\lambda^n e^{-\lambda}}{n!}$$

Detections land on processors like balls into bins: with probability $1 - f$ no processor receives more than $N_f$ detections, so the static runtime is set by the worst-loaded processor ($N_f$) rather than by the average load $\lambda$.
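A small numerical sketch of this derivation for M = 1000 units of work at f = 1% (the quota search loop and output format are illustrative, not from the original analysis):

    #include <cmath>
    #include <cstdio>

    // Static-parallelism speedup: find the smallest per-processor quota Nf
    // such that Prob(every Poisson(lambda) load <= Nf) >= 1 - f, then
    // report speedup = Nd / Nf and efficiency = speedup / Np.
    int main() {
        const double Nd = 1000.0, f = 0.01;
        for (int Np = 2; Np <= 1024; Np *= 2) {
            double lambda = Nd / Np;
            double term = std::exp(-lambda), cdf = term;  // P_lambda(0)
            int Nf = 0;
            while (std::pow(cdf, Np) < 1.0 - f) {         // grow quota
                ++Nf;
                term *= lambda / Nf;                      // next Poisson term
                cdf += term;
            }
            std::printf("Np=%4d  speedup=%6.1f  efficiency=%3.0f%%\n",
                        Np, Nd / Nf, 100.0 * Nd / Nf / Np);
        }
        return 0;
    }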

Page 43: Dynamic Parallelism

[Chart: Speedup vs. number of processors (1 to 1024, log scale) for dynamic parallelism with M = 1000 and f = 0, compared to linear speedup; the dynamic curve stays about 94% efficient at processor counts where the marked comparison point is 50% efficient.]

M = # units of work; f = allowed failure rate

• Assign work to processors as needed
• Large improvement even in the “worst case”

Page 44: Dynamic Derivation

$$\text{worst case speedup} = \frac{N_d}{\lambda + g N_d} = \frac{N_d}{N_d/N_p + g N_d} = \frac{N_p}{1 + g N_p}$$

where $N_d$ = total detections, $N_p$ = number of processors, $g$ = granularity of work, and $\lambda \equiv N_d/N_p$. For $g \ll 1/N_p$ the speedup stays close to $N_p$; the per-unit overhead $g$ is what caps worst-case efficiency at $1/(1 + g N_p)$.

Page 45: Static vs. Dynamic Parallelism

[Chart: Parallel speedup vs. number of processors (1 to 1024) for linear, dynamic, and static parallelism; at large processor counts the dynamic curve is about 94% efficient while the static curve falls from 50% to 15%.]

• Dynamic parallelism delivers good performance even in the worst case
• Static parallelism is limited by random fluctuations (up to 85% of processors are idle)

Page 46: Outline

• Introduction
• Processing Algorithms
• Parallel System Analysis
• Software Frameworks
– PVL
– PETE
– S3P
– MatlabMPI
• Summary

Page 47: Current Standards for Parallel Coding

• Industry standards (e.g., VSIPL, MPI) represent a significant improvement over coding with vendor-specific libraries
• The next generation of object-oriented standards will provide enough support to write truly portable, scalable applications

[Diagram: Progression from vendor-supplied libraries to current industry standards to parallel object-oriented standards.]

Page 48: Goal: Write Once/Run Anywhere/Anysize

Develop code on a workstation (Matlab-like):

    A = B + C;
    D = FFT(A);

Then demo in real time on a cluster (no code changes; roll-on/roll-off), and deploy on an embedded system (no code changes).

• Scalable/portable code provides high productivity

Page 49: Current Approach to Parallel Code

• Algorithm and hardware mapping are linked
• The resulting code is non-scalable and non-portable

Algorithm + mapping code for Stage 1 on processors 1-2 and Stage 2 on processors 3-4:

    while(!done) {
      if ( rank()==1 || rank()==2 )
        stage1();
      else if ( rank()==3 || rank()==4 )
        stage2();
    }

Adding processors 5 and 6 to Stage 2 requires rewriting the code:

    while(!done) {
      if ( rank()==1 || rank()==2 )
        stage1();
      else if ( rank()==3 || rank()==4 ||
                rank()==5 || rank()==6 )
        stage2();
    }

Page 50: Scalable Approach

Single-processor and multi-processor mappings use the same code. Where serial code writes A = B + C, the mapped version is:

    #include <Vector.h>
    #include <AddPvl.h>

    void addVectors(Map aMap, Map bMap, Map cMap) {
      Vector< Complex<Float> > a('a', aMap, LENGTH);
      Vector< Complex<Float> > b('b', bMap, LENGTH);
      Vector< Complex<Float> > c('c', cMap, LENGTH);

      b = 1;
      c = 2;
      a = b + c;
    }

• Single-processor and multi-processor code are the same
• Maps can be changed without changing software
• High-level code is compact

Page 51: PVL Evolution

[Timeline, 1988 to 2000, by applicability (scientific, non-real-time computing vs. real-time signal processing): single-processor libraries (LAPACK: Fortran; VSIPL and STAPL: C, object-based), parallel communications (MPI and MPI/RT: C, object-based), and parallel processing libraries (ScaLAPACK: Fortran; PVL: C++, object-oriented), with PETE (C++, object-oriented) alongside.]

• Transition technology from scientific computing to real-time
• Moving from procedural (Fortran) to object-oriented (C++)

Page 52: Anatomy of a PVL Map

[Diagram: Vector/Matrix, Computation, Conduit, and Task objects all contain a Map; a Map contains a Grid, a list of nodes (e.g., {0,2,4,6,8,10}), a Distribution, and an Overlap.]

• All PVL objects contain maps
• PVL Maps contain:
– Grid
– List of nodes
– Distribution
– Overlap

Page 53: Library Components

Class: Map
Description: Specifies how Tasks, Vectors/Matrices, and Computations are distributed on processors
Parallelism: Data, task & pipeline

Class: Grid
Description: Organizes processors into a 2D layout

Class: Task
Description: Supports algorithm decomposition (i.e., the boxes in a signal flow diagram)
Parallelism: Task & pipeline

Class: Conduit
Description: Supports data movement between tasks (i.e., the arrows on a signal flow diagram)
Parallelism: Task & pipeline

Class: Vector/Matrix
Description: Used to perform matrix/vector algebra on data spanning multiple processors
Parallelism: Data

Class: Computation
Description: Performs signal/image processing functions on matrices/vectors (e.g., FFT, FIR, QR)
Parallelism: Data & task

• Simple mappable components support data, task, and pipeline parallelism

Page 54: PVL Layered Architecture

[Diagram: Layered architecture. The application layer (input, analysis, output) sits on the Parallel Vector Library layer (Task, Conduit, Map, Grid, Distribution, Vector/Matrix, Computation), which sits on the math kernel (VSIPL) and messaging kernel (MPI), which run on the hardware (workstation, embedded multi-computer, PowerPC cluster, embedded board, Intel cluster). The user interface at the application boundary provides productivity, the library layer portability, and the hardware interface performance.]

• Layers enable simple interfaces between the application, the library, and the hardware

Page 55: Outline

• Introduction
• Processing Algorithms
• Parallel System Analysis
• Software Frameworks
– PVL
– PETE
– S3P
– MatlabMPI
• Summary

Page 56: C++ Expression Templates and PETE

[Diagram: For the expression A = B + C*D, expression templates build the expression type

    BinaryNode<OpAssign, Vector,
      BinaryNode<OpAdd, Vector,
        BinaryNode<OpMultiply, Vector, Vector>>>

and a parse tree instead of temporary vectors:
1. Pass B and C references to operator+
2. Create the expression parse tree
3. Return the expression parse tree
4. Pass the expression tree reference to operator=
5. Calculate the result and perform the assignment
Parse trees, not vectors, are created.]

• Expression templates enhance performance by allowing temporary variables to be avoided
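For readers new to the technique, here is a minimal self-contained expression-template sketch, independent of PETE (all class and function names are invented for illustration):

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Lazy parse-tree nodes: Add and Mul hold references to their operands
    // and evaluate element-by-element only when indexed.
    template <class L, class R>
    struct Add {
        const L& l; const R& r;
        double operator[](std::size_t i) const { return l[i] + r[i]; }
    };
    template <class L, class R>
    struct Mul {
        const L& l; const R& r;
        double operator[](std::size_t i) const { return l[i] * r[i]; }
    };

    struct Vec {
        std::vector<double> d;
        double operator[](std::size_t i) const { return d[i]; }
        // Assignment walks the expression tree once, with no temporaries.
        template <class E>
        Vec& operator=(const E& e) {
            for (std::size_t i = 0; i < d.size(); ++i) d[i] = e[i];
            return *this;
        }
    };

    template <class L, class R>
    Add<L, R> operator+(const L& l, const R& r) { return {l, r}; }
    template <class L, class R>
    Mul<L, R> operator*(const L& l, const R& r) { return {l, r}; }

    int main() {
        Vec a{{0, 0}}, b{{1, 2}}, c{{3, 4}}, d{{5, 6}};
        a = b + c * d;                       // one loop, no temporary vectors
        std::printf("%g %g\n", a[0], a[1]);  // prints: 16 26
        return 0;
    }

The nested Add<…>/Mul<…> type plays the role of PETE's BinaryNode parse tree: operator+ and operator* build the tree, and operator= evaluates it in a single loop.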

Page 57: Experimental Platform

• Network of 8 Linux workstations
– 800 MHz Pentium III processors
• Communication
– Gigabit Ethernet, 8-port switch
– Isolated network
• Software
– Linux kernel release 2.2.14
– GNU C++ compiler
– MPICH communication library over TCP/IP

Page 58: Experiment 1: Single Processor

[Charts: Relative execution time vs. vector length (8 to 131072) for VSIPL, PVL/VSIPL, and PVL/PETE on three expressions: A=B+C, A=B+C*D, and A=B+C*D/E+fft(F).]

• PVL with VSIPL has a small overhead
• PVL with PETE can surpass VSIPL

Page 59: Experiment 2: Multi-Processor (simple communication)

[Charts: Relative execution time vs. vector length (8 to 131072) for C, C++/VSIPL, and C++/PETE on A=B+C, A=B+C*D, and A=B+C*D/E+fft(F), with simple communication.]

• PVL with VSIPL has a small overhead
• PVL with PETE can surpass VSIPL

Page 60: Experiment 3: Multi-Processor (complex communication)

[Charts: Relative execution time vs. vector length (8 to 131072) for C, C++/VSIPL, and C++/PETE on A=B+C, A=B+C*D, and A=B+C*D/E+fft(F), with complex communication.]

• Communication dominates performance

Page 61: Outline

• Introduction
• Processing Algorithms
• Parallel System Analysis
• Software Frameworks
– PVL
– PETE
– S3P
– MatlabMPI
• Summary

Page 62: S3P Framework Requirements

• Each compute stage can be mapped to different sets of hardware and timed

[Diagram: The Filter, Beamform, and Detect stages as Tasks connected by Conduits. The framework requires the application to be:
– Decomposable into Tasks (computation) and Conduits (communication)
– Mappable to different sets of hardware
– Measurable in resource usage for each mapping]

Page 63: S3P Engine

[Diagram: The S3P Engine takes the application program, algorithm information, hardware information, and system constraints, and produces the “best” system mapping via three components: Map Generator, Map Timer, and Map Selector.]

• The Map Generator constructs the system graph for all candidate mappings
• The Map Timer times each node and edge of the system graph
• The Map Selector searches the system graph for the optimal set of maps

Page 64: Test Case: Min(#CPU | Throughput)

[Table: Measured per-stage computation and conduit timings for the Input, Low Pass Filter, Beamform, and Matched Filter stages on 1, 2, 3, and 4 CPUs, evaluated at 33 frames/sec (1.6 MHz BW) and 66 frames/sec (3.2 MHz BW).]

• Vary the number of processors used on each stage
• Time each computation stage and communication conduit
• Find the path with the minimum bottleneck

Page 65: Dynamic Programming

N = total hardware units
M = number of tasks
Pi = number of mappings for task i

    t = M
    pathTable[M][N] = all infinite weight paths
    for( j:1..M ){
      for( k:1..Pj ){
        for( i:j+1..N-t+1 ){
          if( i-size[k] >= j ){
            if( j > 1 ){
              w = weight[pathTable[j-1][i-size[k]]] + weight[k]
                  + weight[edge[last[pathTable[j-1][i-size[k]]],k]]
              p = addVertex[pathTable[j-1][i-size[k]], k]
            }else{
              w = weight[k]
              p = makePath[k]
            }
            if( weight[pathTable[j][i]] > w ){
              pathTable[j][i] = p
            }
          }
        }
      }
      t = t - 1
    }

• The graph construct is very general
• It is widely used for optimization problems
• Many efficient techniques exist for choosing the “best” path under constraints, such as dynamic programming

Page 66: Predicted and Achieved Latency and Throughput

[Charts: Predicted vs. achieved latency (seconds) and throughput (frames/sec) vs. #CPU (4 to 8), for a large (48x128K) and a small (48x4K) problem size, with the selected stage mappings annotated (e.g., 1-1-1-1 through 1-3-2-2 and 2-2-2-2).]

• Find Min(latency | #CPU) and Max(throughput | #CPU)
• S3P selects the correct optimal mapping
• Excellent agreement between S3P-predicted and achieved latencies and throughputs

Page 67: Outline

• Introduction
• Processing Algorithms
• Parallel System Analysis
• Software Frameworks
– PVL
– PETE
– S3P
– MatlabMPI
• Summary

Page 68: Modern Parallel Software Layers

[Diagram: The layered architecture again: an application (input, analysis, output) over a parallel library (Vector/Matrix, Computation, Task, Conduit) over math and messaging kernels, running on a workstation, PowerPC cluster, or Intel cluster.]

• Any parallel application/library can be built on top of a few basic messaging capabilities
• MatlabMPI provides this messaging kernel

Page 69: MatlabMPI “Core Lite”

• Parallel computing requires eight capabilities:
– MPI_Run launches a Matlab script on multiple processors
– MPI_Comm_size returns the number of processors
– MPI_Comm_rank returns the id of each processor
– MPI_Send sends Matlab variable(s) to another processor
– MPI_Recv receives Matlab variable(s) from another processor
– MPI_Init is called at the beginning of a program
– MPI_Finalize is called at the end of a program

Page 70: MatlabMPI: Point-to-Point Communication

[Diagram: Sender and receiver communicate through a shared file system. MPI_Send(dest, tag, comm, variable) saves the variable to a data file and then creates a lock file; variable = MPI_Recv(source, tag, comm) detects the lock file and then loads the data file.]

• The sender saves the variable in a data file, then creates a lock file
• The receiver detects the lock file, then loads the data file
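To illustrate the mechanism only (this is not MatlabMPI's implementation; fileSend, fileRecv, the polling interval, and the file naming are invented for this sketch, and a shared file system is assumed):

    #include <chrono>
    #include <fstream>
    #include <iterator>
    #include <string>
    #include <thread>

    // Lock-file messaging: write the data file completely, then create the
    // lock file; the receiver polls for the lock, then reads the data.
    void fileSend(const std::string& name, const std::string& payload) {
        std::ofstream(name + ".data") << payload;   // 1. save variable
        std::ofstream(name + ".lock");              // 2. create lock file
    }

    std::string fileRecv(const std::string& name) {
        while (!std::ifstream(name + ".lock"))      // 3. detect lock file
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        std::ifstream in(name + ".data");           // 4. load data file
        return {std::istreambuf_iterator<char>(in), {}};
    }

Writing the lock file only after the data file is closed is what makes the handoff safe: the receiver never sees a partially written message.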

Page 71: Example: Basic Send and Receive

    MPI_Init;                             % Initialize MPI.
    comm = MPI_COMM_WORLD;                % Create communicator.
    comm_size = MPI_Comm_size(comm);      % Get size.
    my_rank = MPI_Comm_rank(comm);        % Get rank.
    source = 0;                           % Set source.
    dest = 1;                             % Set destination.
    tag = 1;                              % Set message tag.

    if(comm_size == 2)                    % Check size.
      if (my_rank == source)              % If source.
        data = 1:10;                      % Create data.
        MPI_Send(dest,tag,comm,data);     % Send data.
      end
      if (my_rank == dest)                % If destination.
        data = MPI_Recv(source,tag,comm); % Receive data.
      end
    end

    MPI_Finalize;                         % Finalize MatlabMPI.
    exit;                                 % Exit Matlab.

• Uses standard message passing techniques
• Will run anywhere Matlab runs
• Only requires a common file system

Steps: initialize and get processor ranks; execute the send and receive; finalize and exit.

Page 72: MatlabMPI vs. MPI Bandwidth

[Chart: Bandwidth (bytes/sec, 1e5 to 1e8) vs. message size (1 KB to 32 MB) for C MPI and MatlabMPI on an SGI Origin2000.]

• Bandwidth matches native C MPI at large message sizes
• The primary difference is latency (35 milliseconds vs. 30 microseconds)

Page 73: Image Filtering Parallel Performance

[Charts: Speedup vs. number of processors (1 to 64) for a fixed problem size on an SGI Origin2000, and gigaflops vs. number of processors (1 to 1000) for a scaled problem size on an IBM SP2, each compared to linear scaling.]

• Achieved “classic” super-linear speedup on the fixed problem
• Achieved a speedup of ~300 on 304 processors on the scaled problem

Page 74: Productivity vs. Performance

[Chart: Lines of code (10 to 1000) vs. peak performance (0.1 to 1000), spanning single processor, shared memory, and distributed memory systems. Current practice: Matlab, C, C++ (VSIPL), VSIPL/OpenMP, VSIPL/MPI. Current research: Parallel Matlab (MatlabMPI) and PVL, which approach VSIPL/MPI performance with far fewer lines of code.]

• Programmed image filtering several ways: Matlab, VSIPL, VSIPL/OpenMP, VSIPL/MPI, PVL, MatlabMPI
• MatlabMPI provides high productivity and high performance

Page 75: Summary

• Exploiting parallel processing for streaming applications presents unique software challenges.

• The community is developing software libraries to address many of these challenges. These libraries:
– Exploit C++ to easily express data/task parallelism
– Separate parallel hardware dependencies from software
– Allow a variety of strategies for implementing dynamic applications (e.g., for fault tolerance)
– Deliver execution performance comparable to or better than standard approaches

• Our future efforts will focus on adding to and exploiting the features of this technology to:
– Exploit dynamic parallelism
– Integrate high performance parallel software underneath mainstream programming environments (e.g., Matlab, IDL, …)
– Use self-optimizing techniques to maintain performance