The Cactus Framework: Design, Applications and Future Directions
Gabrielle Allen
gallen@cct.lsu.edu
Center for Computation & Technology
Departments of Computer Science & Physics
Louisiana State University
7/22/06
Cactus Code
• Freely available, modular, portable and manageable environment for collaboratively developing parallel, high-performance multi-dimensional simulations (component-based)
• Developed for numerical relativity, but now a general framework for parallel computing (CFD, astrophysics, climate, chemical engineering, quantum gravity, …)
• Finite difference, AMR (Carpet, Samrai, Grace), new FE/FV, multipatch
• Active user and developer communities; main development now at LSU and AEI
• Open source, documentation, etc.
Gravitational Wave Physics
(diagram: Observations and Models feeding into Analysis & Insight)
Cactus User Community
(map of user groups)
• Numerical relativity: AEI, Goddard, Penn State, Wash U, TAC, Tuebingen, Southampton, SISSA, Thessaloniki, Portsmouth, RIKEN, Pittsburgh, Garching, UNAM, LSU, Texas at Brownsville, Texas at Austin
• Other applications: Earth Modeling (LSU, Utrecht+), Chem Eng (U. Kansas), Bio-Informatics (Chicago/Canada), Astrophysics (LSU, Zeus, Innsbruck), EU Astrophysics Network, Quantum Gravity (Hamilton), CFD (KISTI, LSU), Cosmology (Houston), Prototype App (Many CS Projects), Benchmarking
Computational Science Needs
• Requires incredible mix of technologies & expertise!
• Many scientific/engineering components
– Physics, astrophysics, CFD, engineering,...
• Many numerical algorithm components
– Finite difference? Finite volume? Finite elements?
– Elliptic equations: multigrid, Krylov subspace,...
– Mesh refinement
• Many different computational components
– Parallelism (HPF, MPI, PVM, ???)
– Multipatch / multiscale / multimodel
– Architecture (MPP, DSM, Vector, PC Clusters, FPGA, ???)
– I/O (generate TBs/simulation, checkpointing…)
– Visualization of all that comes out!
• New technologies and needs
– Grid computing, Steering, data archives
– Semantics, data driven, networks
Design Issues for Cactus
• Usable for many different physical systems
• Collaborative
• Portable
• Large scale!
• High throughput
• Easy to understand and interpret results
• Community driven, open source
• Supported and developed
• Produce believed results
• Flexible
• Reproducible
• Have generic computational toolkits
• Incorporate other packages/technologies
• Access to latest technologies
• Easy to use/program (Empire code ported in 2 weeks)
• Preparing for future (DDDAS, multimodel, multiscale, coupling, petascale)
Large Scale
• Typical run needs > 100 GB of memory:
– 200 grid functions
– 400x400x400 grid
• Typical run makes 3000 iterations with 6000 Flops per grid point:
– 600 TeraFlops!!
• Output of just one grid function at just one time step:
– 500 MB (1 TB for 15 GFs every 50 timesteps)
• One simulation longer than queue times:
– Need 10-50 hours
• Computing time is a valuable resource:
– One simulation: 5000 to 20000 SUs
– Need to make each simulation count
Addressed by: parallelism; parallel/fast IO and data management; visualization; optimization; interactive monitoring, steering, visualization, portals; checkpointing.
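As a sanity check on these numbers, a small back-of-the-envelope calculation (not part of Cactus; assumes 8-byte double-precision grid-function values) reproduces the memory and output figures; the total flop count comes out the same order of magnitude as the slide's estimate:

    /* Back-of-the-envelope check of the "Large Scale" numbers.
       Assumes 8-byte (double precision) grid-function values. */
    #include <stdio.h>

    int main(void)
    {
        const double points = 400.0 * 400.0 * 400.0;    /* 400^3 grid      */
        const double gfs    = 200.0;                    /* grid functions  */
        const double memory = points * gfs * 8.0;       /* bytes           */
        const double flops  = points * 6000.0 * 3000.0; /* 6000 Flops/point
                                                           x 3000 iterations */
        const double one_gf = points * 8.0;             /* one GF, one step */

        printf("memory        : %.0f GB\n", memory / 1e9);   /* ~102 GB */
        printf("total flops   : %.0f TFlop\n", flops / 1e12);/* ~10^3 TFlop,
                              same order as the 600 TFlop quoted above */
        printf("one GF output : %.0f MB\n", one_gf / 1e6);   /* ~512 MB */
        return 0;
    }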
Cactus Structure
Core “flesh” with plug-in “thorns” (modules)
• Flesh (ANSI C): parameters, grid variables, error handling, scheduling, extensible APIs, make system
• Thorns (Fortran/C/C++), your physics and computational tools: driver, input/output, interpolation, SOR solver, coordinates, boundary conditions, black holes, equations of state, remote steering, wave evolvers, multigrid, …
Cactus Flesh
• Written in ANSI C
• Independent of all thorns
• Contains flexible build system, parameter parsing, rule-based scheduler, …
• After initialization acts as a utility/service library which thorns call for information or to request some action (e.g. parameter steering)
• Contains abstracted APIs for:
– Parallel operations, IO and checkpointing, reduction operations, interpolation operations, timers (APIs designed for science needs)
• All actual functionality provided by (swappable) thorns
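As an illustration of this flesh-as-service-library model, here is a minimal sketch of a thorn routine in C. The thorn and routine names (WaveDemo) are hypothetical; the CCTK_* macros and calls are the flesh's standard C API:

    /* Hypothetical thorn routine: asks the flesh for information
       instead of touching MPI or the driver directly. */
    #include "cctk.h"
    #include "cctk_Arguments.h"
    #include "cctk_Parameters.h"

    void WaveDemo_Report(CCTK_ARGUMENTS)
    {
        DECLARE_CCTK_ARGUMENTS;   /* grid variables, cctk_lsh, cctkGH, ... */
        DECLARE_CCTK_PARAMETERS;  /* parameters declared in param.ccl      */

        /* Report the iteration and the local (per-processor) grid size */
        CCTK_VInfo(CCTK_THORNSTRING,
                   "iteration %d, local grid %d x %d x %d",
                   cctk_iteration, cctk_lsh[0], cctk_lsh[1], cctk_lsh[2]);

        /* Warning level 0 is fatal: the flesh aborts the run */
        if (cctk_lsh[0] < 3)
            CCTK_WARN(0, "local grid too small for the stencil");
    }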
Cactus Thorns (Components)
• Can be written in C, C++, Fortran 77, Fortran 90, (Java, Perl, Python)
• Separate libraries encapsulating some functionality
• To keep the distinction between functionality and its implementation, each thorn declares that it provides a certain “implementation”
• Different thorns can provide the same “implementation”
• Thorn dependencies are expressed in terms of “implementations”, so that thorns providing the same “implementation” are interchangeable
Thorn Specification
• Each thorn contains configuration files which specify its interface with the Flesh and other thorns
• Configuration files are converted at compile time into a set of routines the Flesh can call for thorn information:
– Scheduling directives
– Variable definitions
– Function definitions
– Parameter definitions
– Configuration details
• Configuration files have a well-defined language, which can be used as a basis for interoperability with other component-based frameworks
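For concreteness, a minimal sketch of the three main configuration files for a hypothetical thorn WaveDemo providing the “implementation” wavetoy. The group and routine names are illustrative; the syntax follows the Cactus configuration language:

    # interface.ccl -- what the thorn provides and which variables it owns
    implements: wavetoy
    inherits: grid

    public:
    CCTK_REAL scalarfield TYPE=GF TIMELEVELS=3
    {
      phi
    } "The evolved scalar field"

    # schedule.ccl -- when the flesh's scheduler should call the thorn
    STORAGE: scalarfield[3]

    schedule WaveDemo_Evolve AT evol
    {
      LANG: C
      SYNC: scalarfield
    } "Evolve the scalar field one time step"

    # param.ccl -- parameters with types, ranges and defaults
    REAL amplitude "Amplitude of the initial data"
    {
      0.0:* :: "Any non-negative value"
    } 1.0

Any other thorn declaring "implements: wavetoy" with the same public variables could be swapped in without touching the thorns that depend on it.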
Program Flow
(flowchart: check consistency, stop if errors)

Initialise Simulation
(flowchart: check parameters make sense → set up coordinates → initial data)
Evolve Simulation
(flowchart)
Numerical Methods
• Most Cactus application codes use finite differences on structured meshes
• Parallel driver thorns: unigrid, FMR (Carpet), AMR (Grace, SAMRAI), finite volume/element on structured meshes
• Method of lines thorn
• Elliptic solver interface (PETSc, SOR, multigrid, Trilinos)
• Multipatch now with Carpet driver
• Unstructured mesh support being added
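As a sketch of what such a finite-difference routine looks like, continuing the hypothetical WaveDemo thorn above (the scalar-wave stencil is illustrative, not a specific Cactus thorn; the driver thorn, not this code, handles ghost-zone synchronization between processors):

    /* Second-order leapfrog update for the scalar wave equation on the
       local grid patch; phi, phi_p, phi_p_p are the three time levels
       of the grid function declared with TIMELEVELS=3. */
    #include "cctk.h"
    #include "cctk_Arguments.h"

    void WaveDemo_Evolve(CCTK_ARGUMENTS)
    {
        DECLARE_CCTK_ARGUMENTS;

        const CCTK_REAL dt = CCTK_DELTA_TIME;
        const CCTK_REAL dx = CCTK_DELTA_SPACE(0);
        const CCTK_REAL f  = dt * dt / (dx * dx); /* assumes dx == dy == dz */

        for (int k = 1; k < cctk_lsh[2] - 1; k++)
          for (int j = 1; j < cctk_lsh[1] - 1; j++)
            for (int i = 1; i < cctk_lsh[0] - 1; i++)
            {
              const int c = CCTK_GFINDEX3D(cctkGH, i, j, k);
              phi[c] = 2.0 * phi_p[c] - phi_p_p[c]
                     + f * (phi_p[CCTK_GFINDEX3D(cctkGH, i+1, j, k)]
                          + phi_p[CCTK_GFINDEX3D(cctkGH, i-1, j, k)]
                          + phi_p[CCTK_GFINDEX3D(cctkGH, i, j+1, k)]
                          + phi_p[CCTK_GFINDEX3D(cctkGH, i, j-1, k)]
                          + phi_p[CCTK_GFINDEX3D(cctkGH, i, j, k+1)]
                          + phi_p[CCTK_GFINDEX3D(cctkGH, i, j, k-1)]
                          - 6.0 * phi_p[c]);
            }
    }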
IO
• Support for IO in different formats
– 2-d slices as jpegs
– N-d ASCII data
– N-d data in IEEEIO format
– N-d data in HDF5 format (to disk or streamed)
– Panda parallel IO
– Isosurfaces, MPEGs
• Checkpoint/restart (move to any new machine)
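In practice, output is selected per-thorn in the parameter file. A hedged sketch follows; the thorn and parameter names match the standard IO thorns, but exact names may vary between versions, and the values are illustrative:

    # Parameter-file fragment selecting output (illustrative values)
    IO::out_dir            = "wave_run"
    IOBasic::outInfo_every = 10              # scalar summaries to stdout
    IOASCII::out1D_every   = 50              # 1-d line cuts as ASCII
    IOASCII::out1D_vars    = "wavetoy::phi"
    IOHDF5::out_every      = 100             # full 3-d data in HDF5
    IOHDF5::out_vars       = "wavetoy::phi"
    IO::checkpoint_every   = 500             # periodic checkpoints
    IO::recover            = "autoprobe"     # restart from them if present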
External Packages
• IO API: HDF5, FlexIO, jpeg, mpeg, NetCDF, Panda
• Elliptic/solver subsystem: LAPACK, BLAS, PETSc, Trilinos, FFTW
• Timer API: PAPI
• Driver API: MPI, Grace, SAMRAI, PARAMESH
– New programming models
• Grid: GAT, MPICH-G2, (Globus)
Toolkits
• Cactus Computational Toolkit
– Core thorns which provide many basic utilities, including: boundaries, I/O methods, reduction/interpolation operations, coordinates, parallel drivers, elliptic solvers
• Einstein Toolkit
• CFD Toolkit
Cactus Einstein
• Cactus modules (thorns) for numerical relativity
• Many additional thorns available from other groups (AEI, CCT, …)
• Agree on a few basics (e.g. names of variables) and then can share evolution, analysis etc.
• Over 100 relativity papers & 30 student theses
Thorns by category:
– Evolve: ADM, EvolSimple
– Analysis: ADMAnalysis, ADMConstraints, AHFinder, Extract, PsiKadelia, TimeGeodesic
– Initial data: IDAnalyticBH, IDAxiBrillBH, IDBrillData, IDLinearWaves, IDSimple
– Gauge conditions: CoordGauge, Maximal
– Infrastructure: ADMBase, SpaceMask, ADMCoupling, ADMMacros, StaticConformal
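The “agree on a few basics” point shows up directly in a parameter file, where interchangeable thorns are mixed and matched by activating them together. A hypothetical fragment; the thorn names come from the list above, while the keyword values are assumptions:

    # Parameter-file fragment wiring interchangeable Einstein thorns together
    ActiveThorns = "ADMBase StaticConformal CoordGauge Maximal
                    ADM IDAnalyticBH ADMConstraints AHFinder"

    ADMBase::initial_data     = "schwarzschild"  # provided by IDAnalyticBH
    ADMBase::evolution_method = "ADM"            # any evolution thorn fits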
Cactus CFD Toolkit
• Unified numerics on arbitrary discretizations (FDM/FVM/FEM) and meshes (structured/multi-block/unstructured)
• Multi-scale and multi-physics framework
• Coupling multi-model simulations
• Plethora of solution algorithms: e.g. artificial compressibility, SIMPLE, fractional step method (incomp.), LDFSS (comp.)
• Variety of solvers to exploit the advantages and avoid the limitations of specific numerical solvers (PETSc/Trilinos/MUMPS)
• Examples & test suites: benchmarks for numerical validation/verification and petascale applications
• Interdisciplinary applications: toolkit is a common “language” for collaboration
Example problems: turbomachinery flows, combustion chambers, stirred tank reactors, rheology, filter-feeding, moving geometries, glaucoma, drug delivery/surgery
Application areas: biology; mech/chem/civil/environ./petroleum engineering; oceanography; computer science; mathematics
Easy to Use and Program
• Program in favorite language (C,C++,F90,F77)
• Hidden parallelism and optimization
• Computational Toolkits
• Good error, warning, info reporting
• Modularity!! Transparent interfaces with other modules (protection from other modules)
• Extensive parameter checking
• Work in the same way on different machines
• Interface with favorite visualization packages
• Documentation
Portable
• Solid and flexible make system, supported on all platforms tried, including Windows, Mac OS X, XT3, BG/L, Earth Simulator, PlayStation 2, Xbox, iPAQ
Cactus Grid Scenarios
• Dynamic, adaptive apps …
• Distributed MPI
• SC2000: “Cactus Worm” thorn turned any Cactus application into an intelligent, self-migrating creature
• SC2001: Spawner thorn; any analysis method can be sent to another resource for computation
• SC2002: Cactus TaskFarm, designed for distributing MPI apps on the Grid
Dynamic Adaptive Distributed Computation
(diagram: SDSC IBM SP, 1024 procs, 5x12x17 = 1020; NCSA Origin array, 256+128+128, 5x12x(4+2+2) = 480; connected by an OC-12 line, but only 2.5 MB/sec; GigE: 100 MB/sec)
• Cactus + MPICH-G2: communications dynamically adapt to application and environment
• Any Cactus application
• Scaling: 15% -> 85%
• “Gordon Bell Prize” (with U. Chicago/Northern, Supercomputing 2001, Denver)
Grid Application Toolkit (GAT)
• Abstract programming interface between applications and Grid services
• Designed for applications (move file, run remote task, migrate, write to remote file)
• Led to GGF Simple API for Grid Applications
• Main result from GridLab project: www.gridlab.org/GAT
Adaptors:
– Default adaptors: basic functionality, will work on a single isolated machine (e.g. cp, fork/exec)
– Globus adaptors: core Globus functionality: GRAM, MDS, GT-RLS, GridFTP
– GridLab adaptors: GRMS, Mercury, Delphoi, iGrid
– Under development: scp, DRMAA, Condor, SGE, SRB, Curl, RFT
Remote Viz & Steering
(diagram: simulation streams HDF5 over HTTP, with automatic downsampling, to any viz client, e.g. LCA Vision, OpenDX)
• Changing steerable parameters:
– Parameters
– Physics, algorithms
– Performance
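Steering rests on a flesh feature declared in param.ccl: a parameter can be marked steerable, so a remote client (e.g. a web or viz interface) can ask the flesh to change it mid-run. A minimal sketch, with an assumed parameter name:

    # param.ccl fragment: a parameter that may be changed while running
    INT report_every "How often to report/output" STEERABLE = ALWAYS
    {
      1:* :: "Every so many iterations"
    } 10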
Cactus Portal
• Based on GridSphere, a JSR 168 compliant portal framework
• Portlets for parameter file preparation, comparison, managing
• Simulation staging, managing, monitoring
• Link to data archives, viz etc.
Notification and Information
(diagram: GridSphere portal stores user details, notification preferences and simulation information; connects “The Grid” to SMS, mail and IM servers and a replica catalog)
Looking Forward
• Generalized component descriptions for thorns
• Interoperability with other frameworks
• Supporting multiscale, multimodel scenarios
• New applications: coastal modeling, CFD
• Grid computing and data archives
• Methodologies for petascale computing
• Keep developments driven by physics