
Computational Support for Parallel/Distributed AMR

Manish Parashar
The Applied Software Systems Laboratory
ECE/CAIP, Rutgers University
www.caip.rutgers.edu/~parashar/TASSL


Roadmap

Introduction to Berger-Oliger AMR
Hierarchical Linked Lists (L. Wild)
Overview of the GrACE Infrastructure
GrACE Programming Model and API
GrACE Design & Implementation
Current Research & Future Direction


Cactus and GrACE

Cactus + GrACE
– Transparent access to AMR via Cactus
» GrACE Infrastructure Thorn
» AMR Driver Thorn
– Status
» Unigrid driver in place
» AMR driver under development

Berger-Oliger Adaptive Mesh Refinement


The AMR Concept

Problem: How to maximize solution accuracy for a given problem size with limited computational resources?

Solution: Use dynamically adaptive grids (instead of uniform grids) where the grid resolution is defined locally based on application features and solution quality.

Method: Adaptive Mesh Refinement (AMR)


Adaptively Gridding the Application Domain

Marsha Berger et al. (http://cs.nyu.edu/faculty/berger/)


Adaptive Grid Structure


Berger-Oliger AMR: Algorithm

Define adaptive grid structure
Define grid functions
Initialize grid functions
Repeat NumTimeSteps
– if (RegridTime) Regrid at Level
– Integrate at Level
– if (Level+1 exists)
  Integrate at Level+1
  Update Level from Level+1
End Repeat
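The recursion implied by this loop is easier to see in code. The following C++ sketch shows only the Berger-Oliger time-stepping control flow; the helper routines (time_to_regrid, regrid, integrate_level, update_from_finer) are placeholders for the application's error estimator/clusterer, integrator, and restriction step, not part of any particular library.

// Schematic Berger-Oliger time stepping (control flow only; all helper
// routines are placeholders supplied by the application).
const int refinement_factor = 2;

bool time_to_regrid(int level, double t);
bool level_exists(int level);
void regrid(int level);
void integrate_level(int level, double t, double dt);
void update_from_finer(int coarse, int fine);

void advance(int level, double t, double dt) {
    if (time_to_regrid(level, t))
        regrid(level);                       // re-cluster and refine below this level
    integrate_level(level, t, dt);           // advance all grids at this level by dt
    if (level_exists(level + 1)) {
        double dt_fine = dt / refinement_factor;
        for (int k = 0; k < refinement_factor; ++k)
            advance(level + 1, t + k * dt_fine, dt_fine);    // recurse on the finer level
        update_from_finer(level, level + 1);                 // restrict the finer solution
    }
}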


Berger-Oliger AMR: Grid Hierarchy

Hierarchical Linked Lists (HLL)


HLL

AMR system devised by Lee Wild in 1996

Grid points split into nodes of size refinement-factor in each direction

Refine on nodes
– Avoids the clustering step required by box-based AMR schemes


Status of HLL

Lee wrote a shared memory version which was tested on various problems and showed excellent scaling properties.

It is currently being re-implemented as a standalone library with shared memory and MPI parallelism. This library will be used by a Cactus thorn to provide an AMR driver layer.

GrACE: A Framework for Distributed AMR


GrACE: An Overview


Programming Interface

Coarse-grained SPMD data parallelism
C++ driver
– declares and defines the computational domain and application variables in terms of GrACE programming abstractions
– defines the overall structure of the AMR algorithm
FORTRAN/FORTRAN 90/C computational kernels
– defined on regular arrays


Programming Abstractions

Grid Hierarchy Abstraction
– Template for the distributed adaptive grid hierarchy
Grid Function Abstraction
– Application fields defined on the adaptive grid hierarchy
Grid Geometry Abstraction
– High-level tools for addressing regions in the computational domain


Grid Geometry Abstractions

Coords
– rank, x, y, z, ...
BBox
– lb, ub, stride
BBoxList
Operations
– union, intersection, cluster, refine/coarsen, difference, ...

[Figure: a bounding box with lower corner (lbx, lby), upper corner (ubx, uby), and spacings dx, dy]
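As an illustration of how these geometry objects compose during regridding, the sketch below uses the listed operations; only the class and operation names come from the slide, while the constructor forms, the free-function call syntax, and the estimate_error helper are assumptions.

// Hypothetical composition of the GrACE geometry abstractions.
Coords lb(0, 0), ub(63, 63);                       // lower/upper corners of a 2-D region
BBox domain(lb, ub, /*stride*/ 1);                 // [0,63] x [0,63] at unit stride

BBoxList flagged = estimate_error(domain);         // application flagging (placeholder)
BBoxList boxes   = cluster(flagged);               // group flagged points into boxes
BBoxList fine    = refine(boxes, /*factor*/ 2);    // index space of the next finer level
BBoxList overlap = intersection(fine, domain);     // clip the new boxes to the domain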


GridHierarchy Abstraction

Attributes:
– number of dimensions

– maximum number of levels

– specification of the computational domain

– distribution type

– refinement factor

– boundary type/width

GridHierarchy GH(Dim,GridType,MaxLevs)


GridFunction Abstraction

GridFunction(DIM)<T> GF(“gf”, Stencils,GH,…)

Attributes:
– dimension and type

– vector?

– spatial/temporal stencils

– associated GridHierarchy

– prolongation/restriction functions

– “shadow” specification

– alignments

– ghost cells

– boundary types/updates

– interaction types

– flux registers?

– parent storage?
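Putting the two abstractions together, a driver might declare the hierarchy and the fields defined on it roughly as follows. The constructor forms GridHierarchy(Dim, GridType, MaxLevs) and GridFunction(DIM)<T>(name, stencils, GH, ...) are taken from the slides; the grid-type value, the stencil representation, and the field names are illustrative assumptions.

// Declare a 2-D hierarchy with up to 4 refinement levels, then two fields on it.
// 'gridType' stands for whatever grid-type constant GrACE expects (not shown on the slides).
GridHierarchy GH(2, gridType, /*MaxLevs*/ 4);

int stencil[2] = {1, 1};                            // one ghost cell per direction (assumed form)
GridFunction(2)<double> rho("rho", stencil, GH);    // density field on the hierarchy
GridFunction(2)<double> u("u", stencil, GH);        // velocity field on the hierarchy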


GridFunction Operations

GridFunction storage for a particular time, level, and component (and hierarchy) is managed as a Fortran 90 array object.

GF(t, l, c, Main/Shadow) <op> Scalar
GF(t, l, c, Main/Shadow) <op> GF2(….)
RedOp(GF, t, l, Main/Shadow)
– <op>: =, +=, -=, /=, *=, …
– RedOp: Max, Min, Sum, Product, Norm, ….
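For instance, using the fields declared earlier, the array-style operators and reductions might be applied at time t, level l, component c as follows; the Main selector and the reduction names follow the slide, the variable names are illustrative.

rho(t, l, c, Main) = 0.0;                 // assign a scalar to every point
rho(t, l, c, Main) += u(t, l, c, Main);   // element-wise combination of two GridFunctions
rho(t, l, c, Main) *= 0.5;                // scale in place

double rmax = Max(rho, t, l, Main);       // reductions over all grids at this level
double rsum = Sum(rho, t, l, Main);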


Ghost Communications

Ghost-region communications are based on the GridFunction stencil attribute at the specified grid level.

Sync(GF, Time, Level, Main/Shadow)
Sync(GF, Time, Level, Axis, Dir, Main/Shadow)
Sync(GH, Time, Level, Main/Shadow)
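In a typical step the ghost regions are refreshed before the kernels run on a level; a usage sketch (the directional constant in the second form is an assumption, the rest follows the signatures above):

Sync(rho, t, l, Main);                          // refresh all ghost regions of rho at level l
Sync(rho, t, l, /*Axis*/ 0, /*Dir*/ 1, Main);   // only along one axis/direction, e.g. in a sweep
Sync(GH, t, l, Main);                           // every GridFunction on the hierarchy at once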


Region-based Communications

Arbitrary copy (add, subtract) from Region 1 to Region 2 at the specified grid level.

Copy (GF, Time, Level, Reg1, Reg2, Main/Shadow)

[Figure: regions R1 and R2 within the level]
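A usage sketch, assuming R1 and R2 are BBox regions of the same shape on level l; the BBox constructor form is assumed, only the Copy signature comes from the slide:

BBox R1(Coords(0, 0), Coords(31, 31), 1);     // source region (constructor form assumed)
BBox R2(Coords(32, 0), Coords(63, 31), 1);    // destination region of the same shape
Copy(rho, t, l, R1, R2, Main);                // move rho data from R1 into R2 at time t, level l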


Data-parallel forall operator

forall (gf, time, level, component)
  Call FORTRAN Subroutine …
end_forall

Parallel operation for all grid components at a particular time step and level.
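This is where the Fortran kernels plug in: the forall loop hands each component's storage, as a regular array, to a routine compiled from Fortran. A sketch, where update_kernel stands for an application-supplied f77/f90 routine and its argument list is illustrative:

forall(rho, t, l, c)
    update_kernel(rho(t, l, c), u(t, l, c), dt);   // called once per grid component c at level l
end_forall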


Refinement & Regridding

Refine(GH, Level, BBoxList)
RecomposeHierarchy(GH)

Encapsulates:
– Generation of refined grids

– Redistribution

– Load-balancing

– Data-transfers

– Interaction schedules
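A regridding step then looks roughly as follows; producing the list of refinement boxes (error estimation and clustering) is application-specific, so flag_and_cluster is a placeholder:

BBoxList newBoxes = flag_and_cluster(rho, t, l);   // application-supplied flagging/clustering
Refine(GH, l, newBoxes);                           // define the refined regions below level l
RecomposeHierarchy(GH);                            // redistribute, load-balance, transfer data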


Prolongation/Restriction Functions

Set prolong/restrict function for each GridFunction

foreachGF(GH, GF, DIM, GFType)

SetProlongFunction(GF, Pfunc);

SetRestrictFunction(GF, Rfunc);

end_forallGF

Prolong/Restrict

Prolong(GF, TimeFrom, LevelFrom, TimeTo, LevelTo, Region, …., Main/Shadow);

Restrict(GF, TimeFrom, LevelFrom, TimeTo, LevelTo, Region, …., Main/Shadow);
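A sketch of registering and invoking the transfer operators; linear_prolong, average_restrict, and fineRegion are application-side placeholders, while the calls themselves follow the forms above:

foreachGF(GH, gf, 2, double)                       // register operators for every GridFunction
    SetProlongFunction(gf, linear_prolong);
    SetRestrictFunction(gf, average_restrict);
end_forallGF

Prolong(rho, t, l, t, l + 1, fineRegion, Main);    // fill new fine grids from the coarse level
Restrict(rho, t, l + 1, t, l, fineRegion, Main);   // correct the coarse level from the fine one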


Checkpoint/Restart/Rollback

Checkpoint

Checkpoint(GH, ChkPtFile);
» Each GridFunction can be individually selected or deselected for checkpointing
» Checkpoint files independent of # of processors

Restart

ComposeHierarchy(GH, ChkPtFile);

Rollback

RecomposeHierarchy(GH, ChkPtFile);
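A usage sketch; the file name and the checkpoint interval are illustrative, the three calls follow the slide:

if (step % checkpoint_interval == 0)
    Checkpoint(GH, "amr_state.chk");        // write all GridFunctions selected for checkpointing

ComposeHierarchy(GH, "amr_state.chk");      // restart: rebuild the hierarchy from the file
RecomposeHierarchy(GH, "amr_state.chk");    // rollback: return a running job to the saved state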


IO Interface

Initialize IO

ACEIOInit();

Select IO Type

ACEIOType(GH, IOType);
» IOType := ACEIO_HDF, ACEIO_IEEEIO, ..

BEGIN_COMPUTE/END_COMPUTE mark a region not executed by a dedicated IO node

Do IO

Write(GF, Time, Level, Main, Double);

End IO

ACEIOEnd(GH);
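A minimal output sequence, following the calls above; the placement of BEGIN_COMPUTE/END_COMPUTE around the compute phase is an assumption about how the markers are meant to be used:

ACEIOInit();                        // initialize the IO subsystem
ACEIOType(GH, ACEIO_HDF);           // select HDF output (ACEIO_IEEEIO is also listed)

BEGIN_COMPUTE                       // region skipped by a dedicated IO node
    // ... computation on the hierarchy ...
END_COMPUTE

Write(rho, t, l, Main, Double);     // write rho at time t, level l in double precision
ACEIOEnd(GH);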


Multigrid Interface

Determine the number of multigrid levels available

MultiGridLevels(GH, Level, Main/Shadow);

Set up the multigrid hierarchy for a GridFunction

SetUpMultiGrid(GF, Time, Level, MGlf, MGlc, Main/Shadow);
SetUpMultiGrid(GF, Time, Level, Axis, MGlf, MGlc, Main/Shadow);

Do Multigrid

GF(Time, Level, Comp, MGl, Main/Shadow) ….;

Release the multigrid hierarchy

ReleaseMultiGrid(GF, Time, Level, Main/Shadow);
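A sketch of one multigrid setup/use/release cycle for a single GridFunction. The reading of MGlf/MGlc as the finest and coarsest multigrid levels, and the smooth kernel, are assumptions; the call names follow the slide:

int nmg = MultiGridLevels(GH, l, Main);              // multigrid levels available below level l

SetUpMultiGrid(rho, t, l, /*MGlf*/ 0, /*MGlc*/ nmg - 1, Main);   // build the MG hierarchy

for (int mgl = 0; mgl < nmg; ++mgl)
    smooth(rho(t, l, c, mgl, Main));                 // relax on each multigrid level (placeholder)

ReleaseMultiGrid(rho, t, l, Main);                   // free the temporary multigrid storage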

GrACE: Design & Implementation


Software Engineering in the Small: Design Principles

Separation of Concerns
» policy from mechanisms
» data management from solution methods
» storage semantics from addressing and access
» computer science from computational science from engineering

Hierarchical Abstractions
» application-specific programming abstractions
» semantically specialized DSM
» distributed shared objects
» hierarchical, extendible index space + distributed dynamic storage


Separation of Concerns => Hierarchical Abstractions

[Diagram: layered architecture, from application-specific through method-specific to adaptive data-management. Application → Application Components (Modules/Kernels: Solver, Clusterer, Interpolator, Error Estimator; App. Objects: Grid, Tree, Mesh) → Programming Abstractions (Grid Geometry: Region, Point; Grid Function: Cell/Vertex/Face Centered; Grid Structure: Main, Shadow, and Multigrid Hierarchies) → Dynamic Data-Management (HDDA: Index Space, Storage, Access)]


Hierarchical Distributed Dynamic Array (HDDA)

Distributed Array
– Preserves array semantics over distribution
» Reuse of FORTRAN/C computational components
– Communications are transparent
– Automatic partitioning & load-balancing

Hierarchical Array
– Each element can be an HDDA

Dynamic Array
– The HDDA can grow and shrink dynamically

Efficient data-management for adaptivity


Separation of Concerns => Hierarchical Abstractions

[Diagram: the HDDA decomposed into Index Space (name resolution; partitioning; expansion & contraction), Storage (data objects; display objects; interaction objects), and Access (consistency; communication)]


Distributed Dynamic Storage

Application Locality

Index Locality

Storage Locality
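The slide lists three localities; one standard way to connect them (an assumption here, the slide does not name the mapping) is to linearize the n-dimensional index space with a space-filling curve, so that points close together in the application domain receive nearby 1-D indices and, in turn, nearby storage. A minimal Morton-order sketch in 2-D:

#include <cstdint>

// Interleave the bits of (x, y) into a single Morton key. Nearby points in
// the application domain tend to map to nearby keys, which is the "index
// locality" step between application locality and storage locality.
// Whether the HDDA uses exactly this curve is an assumption for illustration.
uint64_t morton2d(uint32_t x, uint32_t y) {
    uint64_t key = 0;
    for (int b = 0; b < 32; ++b) {
        key |= (uint64_t)((x >> b) & 1u) << (2 * b);      // even bits come from x
        key |= (uint64_t)((y >> b) & 1u) << (2 * b + 1);  // odd bits come from y
    }
    return key;
}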


Partitioning Issues

Locality
Parallelism
Load-balance
Cost


Composite Distribution

Inter-grid communications are local
Data and task parallelism exploited
Efficient load redistribution and clustering
Overhead of generating & maintaining the composite structure


IO & Visualization


Integrated Visualization & IO

Grid Hierarchy
» Views: Multi-level, multi-resolution grid structure and connectivity, hierarchical and composite grid/mesh views, ….
» Commands: Refine, coarsen, re-distribute, read, write, checkpoint, rollback, ….

Grid Function
» Views: Multi/single-resolution plots, feature extraction and reduced models, isosurfaces, streamlines, etc. ….
» Commands: Read, write, interpolate, checkpoint, rollback, ….

Grid Geometry
» Views: Wire-frames with resolution and ownership information
» Commands: Read, write, refine, coarsen, merge, ….
