


Explore cutting-edge technology and learn from top experts at ClusterWorld Conference and Expo, the first major event to focus exclusively on cluster technology.

CONFERENCE: JUNE 23-26, 2003 | EXPOSITION: JUNE 24-26, 2003
SAN JOSE CONVENTION CENTER, SAN JOSE, CALIFORNIA

Keynote Speakers
John Picklo, Manager of HPC, DaimlerChrysler

John Reynders, VP for Informatics, Celera Therapeutics

Jack Buur, Principal Research Physicist, Shell Int'l Exploration & Production

www.clusterworldexpo.com

the power of many

REGISTER ONLINE BEFORE MAY 23RD AND SAVE!


Platinum Sponsors | Gold Sponsors | Silver Sponsors | Conference Partner | Media Sponsors

Visit our website for more information about our conference program, exposition and special events.

www.clusterworldexpo.com

ClusterWorld Conference & Expo
330 Townsend Street, Suite 112
San Francisco, CA 94107

What is ClusterWorld?

Introducing ClusterWorld Conference and Expo, the first major event to focus entirely on clustered systems. From research labs to industrial datacenters, clusters are revolutionizing high-performance and high-availability computing.

ClusterWorld's conference program has been created in conjunction with the Linux Clusters Institute and offers something for everyone who works with clusters—whether you're doing pure research, parallel software development, clustered systems administration, or even IT management.

The conference also offers in-depth vertical tracks in fields ranging from bioinformatics to petroleum exploration, CAD, aerospace and automotive engineering, digital content creation, and financial services. Distributed cluster computing is represented as well, with a special focus on emerging Grid computing technologies.

Whatever industry you work in, there is a specific ClusterWorld conference track that will keep you on top of the latest tools and techniques in your field. And the ClusterWorld Expo show floor is the best place to test drive the latest cluster technologies from all the major systems vendors.

If you work with clusters in any capacity, or if you are thinking about deploying cluster technology within your organization, ClusterWorld Conference and Expo is the one event you cannot afford to miss this year.

Register online by May 23rd and take advantage of our money-saving Early Bird Conference Package at:

www.clusterworldexpo.com

ClusterWorld Conference and Expo. The power of many.

Event Features

Beowulf Reunion Tour
Thursday, June 26, 2003, 2:30pm - 3:30pm
This year marks the 10th anniversary of the Beowulf project, the starting point for Linux PC clustering. The Beowulf Reunion Tour brings together key members of the Beowulf team whose insight and forethought revolutionized clustering.

Don Becker, Tom Sterling and other members of the core team will talk about Beowulf's early beginnings, its successes over the past 10 years and what the next decade may hold.

Excellence in Cluster Technology Awards
Wednesday, June 25, 2003, 6:15pm - 7:30pm
Clusters are fast becoming the preferred answer to technical problems requiring low-cost, high-performance and high-availability computing. As this trend continues, there are goods, services and solutions that stand out above the rest. The Excellence in Cluster Technology Awards are given in recognition of outstanding products, services and platforms that advance the role and adoption of clustered systems.

GridWars: Parallel Programming Championship
Presented by Engineered Intelligence. Sponsored by HP.
Tuesday, June 24th, 12:00pm - 1:00pm
Created to increase interest in parallel programming, GridWars is a highly strategic but simple game where battle programs fight for control over parallel processor cycles. Flex your programming muscle against some of the best cluster developers in the world on the gladiatorial grid.

Visit GridWars.com for information on registration, toolkit downloads, and the rules of the event.

ClusterWorld Challenge
Tuesday, June 24th, 2:30pm - 3:30pm
Cluster experts face off against one another in a quiz-show setting at the ClusterWorld Challenge. Hosted by Pete Beckman, the CW Challenge is a combination of Family Feud and post-doctorate-level supercomputing, with questions touching on several categories of clustering. Teams compete for prizes and the bragging rights of being ClusterWorld Challenge Champions.

Is your organization interested in participating in the ClusterWorld Challenge and going up against some of the world's finest clustering minds?

Cluster Crash Party
Tuesday, June 24th, 6:00pm - 7:00pm
After a hard day of attending conference sessions and checking out the exhibits, it's time for the systems to come down! Join us on the expo floor for our Cluster "Crash" Party. The Cluster Crash is an opportunity for attendees and exhibitors to meet in a relaxed atmosphere over food and beverages. Don't miss this great opportunity to network with your peers or chat with vendor reps!

Keynotes

John Picklo, Manager, High Performance Computing, DaimlerChrysler
Mr. Picklo is responsible for systems software and hardware for all of the engineering mainframes and supercomputers at the Chrysler Group. His background includes 25 years of experience working with information technology in various technical and consulting roles. Mr. Picklo's automotive background includes experience designing and managing systems to support computer-aided design at DaimlerChrysler, General Motors, Nissan, and Toyota.

Dr. John Reynders, Vice President, Informatics, Celera Therapeutics
Dr. Reynders is responsible for computational sciences, algorithmics, software engineering, computer science, and knowledge management efforts in support of drug discovery and development at Celera Therapeutics. Previously, Dr. Reynders served as Vice President for Information Systems at Celera Genomics, where he was responsible for all supercomputing capabilities, discovery software engineering, and enterprise system infrastructure. Prior to Celera, Dr. Reynders worked at Sun Microsystems, Inc. and Los Alamos National Laboratory, where he managed the largest dedicated unclassified supercomputer in the United States.

Jacobus N. Buur, Principal Research Physicist, Shell International Exploration and Production B.V.
Mr. Buur is responsible for research initiatives in subsurface imaging and exploration for Royal Dutch/Shell via their GameChanger process. He has been instrumental in developing key visualization technologies and has made significant contributions to the adoption of clusters in the petroleum industry with the early installation of large-scale Linux clusters. Mr. Buur has been with the Royal Dutch/Shell family of companies for 20 years.

Dr. Tilak Agerwala, Vice President, Systems, IBM Research
Dr. Agerwala is responsible for developing the next-generation hardware and software technologies for embedded systems, servers, and supercomputers for IBM. His distinguished career includes jointly developing the architectural foundations of the RS/6000, and he was responsible for the systems architecture and technology strategy of the RS/6000 SP (1992-1997), the most successful parallel computer of all time. Dr. Agerwala received the W. Wallace McDowell Award from the IEEE in 1998 for outstanding contributions to the development of high performance computers.

Program Committee

Donald Becker, Scyld Computing Corporation, USA
Peter Beckman, Argonne National Laboratory, USA
Gianfranco Bilardi, University of Padova, Italy
Jay Boisseau, TACC / UT-Austin, USA
Ron Brightwell, Sandia National Laboratories, USA
Giri Chukkapalli, San Diego Supercomputer Center, USA
Stefano Cozzini, INFM - SISSA, Italy
Cesar De Rose, CPAD - PUCRS / HP, Brazil
Jack Dongarra, University of Tennessee, USA
Thomas Fahringer, University of Vienna, Austria
Wolfgang Gentzsch, Sun Microsystems, Inc., USA
Patrick Geoffray, Myricom, USA
David Henry, Linux NetworX, USA
Haeng Jin Jang, KISTI, South Korea
Patricia Kovatch, San Diego Supercomputer Center, USA
Werner Krotz-Vogel, Pallas, Germany
Greg Lindahl, Key Technologies, USA
Arthur B. Maccabe, University of New Mexico, USA
Tim Mattson, Intel, USA
Marshall Kirk McKusick, Marshall Kirk McKusick Consultancy, USA
Bart Miller, University of Wisconsin, USA
Ron Minnich, Los Alamos National Laboratory, USA
Bernd Mohr, Research Center Juelich, Germany
Shirley Moore, University of Tennessee, USA
David Morton, MHPCC, USA
J.P. Navarro, Argonne National Laboratory, USA
Henry Neeman, OSCER - University of Oklahoma, USA
Takashi Ohta, IBM Tokyo Research Lab, Japan
Rod Oldehoeft, Los Alamos National Laboratory, USA
Jairo Panetta, INPE / CPTEC, Brazil
Michael Pflugmacher, NCSA / UIUC, USA
Christoph Pospiech, IBM, Germany
Jean-Pierre Prost, IBM, France
Faisal Saied, NCSA / UIUC, USA
Stephen Scott, ORNL, USA
Nils Smeds, KTH, Sweden
Brian Smith, University of New Mexico, USA
Martin Streicher, Linux Magazine, USA
John Taylor, Quadrics, UK
Jeff Vetter, LLNL, USA

Steering Committee

Robert Ballance, Research Associate Professor, University of New Mexico
Luiz DeRose, Research Staff Member, IBM T.J. Watson Research Center
David Klepacki, Senior Staff Member, IBM T.J. Watson Research Center
Becky McGreal, Director of Operations, Conference Program, LCI & CWCE
Candace Shirley, Program Manager, HPCERC, University of New Mexico
John Towns, Division Director for Scientific Computing, NCSA at the University of Illinois

Advisory Board

Nanette Boden, Executive Vice President, Myricom
Nanette Boden was a member of the group that founded Myricom in 1994. She has served in a number of management positions at Myricom. Her current responsibilities range from oversight of operations to managing key customer and sales-channel relationships.

Jay Clark, Marketing and Business Development, High Performance Computing Group, MSC.Software
Jay Clark was instrumental in the early adoption of clusters into the automotive and aerospace industries. He has been a member of the MSC.Software team for over 13 years and has held positions in Application Engineering and Sales.

Dave Driggers, Founder and Chief Executive Officer, RackSaver
David Driggers is cofounder and CEO of RackSaver, a leading provider of high-density servers and supercomputing clusters. Mr. Driggers is also a member of RackSaver's Board of Directors, as well as being chairman of the company's executive and technology development committees.

Frank L. Gilfeather, Interim Associate Vice Provost for Research, University of New Mexico
Frank L. Gilfeather is currently Interim Associate Vice Provost for Research at the University of New Mexico, with responsibilities involving campus-wide research initiatives and planning. He was Executive Director of the HPC Education and Research Center (HPCERC) at UNM from 1993 to 2003.

Rick Herrmann, Industry Manager for High-Performance Computing, Intel

Ron Neyland, Director of Cluster Systems Engineering, RLX Technologies
Ron Neyland joined RLX with a primary goal of building a team to deliver Microsoft Windows support for RLX systems. He is currently responsible for RLX's compute cluster solutions.

Pratap Pattnaik, Senior Manager of the Scalable Systems Group, IBM Research Division
Pratap Pattnaik's current research work includes the development and design of computer systems, operating systems and autonomic components. He is also leading the Grid research activities at IBM Research.

Daniel A. Reed, Director, National Center for Supercomputing Applications (NCSA)
Daniel Reed provides strategic direction and vision to NCSA, the National Computational Science Alliance, and the TeraGrid project. He is a respected leader in the national computer science community and among the federal agencies that support research and development.

Dave Rich, Director of HPC Initiatives, AMD
David Rich is a director concentrating on AMD's high performance computing initiatives. Prior to AMD, Mr. Rich was general manager of API NetWorks, a developer of high-speed I/O interconnect components based on AMD's HyperTransport technology.

Reza Rooholamini, Director of Operating Systems and Cluster Engineering Group, Dell
Reza Rooholamini is responsible for developing the Linux OS and cluster products in the Enterprise Systems Group at Dell. He has over 30 publications in his areas of research interest: HPCC, storage systems, HA and interconnects.


Sponsors as of April 1, 2003

Platinum Sponsors

Conference Partner Media Sponsors

Gold Sponsors

Silver Sponsors

Tutorials

Developed by the Linux Clusters Institute, the ClusterWorld tutorials are hands-on workshops where attendees learn from the real-world experience of cluster experts. The tutorials follow two tracks:

Track 1 (TA): Introduction to Linux Clustering
This track contains tutorials that illustrate, from the ground up, how to design, configure and install a Linux cluster. Topics include:

➤ Cluster components
➤ System configuration alternatives
➤ Networking choices
➤ System acceptance testing
➤ Hardware alternatives
➤ Cluster management tools

Track 2 (TS): The Linux Cluster Software Stack
This track contains tutorials that present the basic components in the cluster software stack. Topics include:

➤ Kernel issues
➤ Security
➤ Compilers
➤ Scheduling
➤ Libraries
➤ Message passing
➤ Tools
➤ Grid components

Monday, June 23, 2003, 8:30am - 12:00pm
TA1: Programming in the Linux Cluster Environment I: Tools and Program Optimization Techniques
This tutorial presents topics focused on optimizations for cluster applications. Particular attention is given to processor architecture considerations and programming techniques that take advantage of processor architecture. In addition, performance monitoring and analysis techniques and tools will be examined.

Monday, June 23, 2003, 8:30am - 12:00pm
TS1: Introduction to Linux Clusters: Design Alternatives, Configuration and Installation
In this tutorial, attendees will learn how to design, configure and install a Linux cluster. Topics include choosing cluster components, networking choices, hardware alternatives, system configuration alternatives, system acceptance testing and cluster management tools.

Monday, June 23, 2003, 1:30pm - 5:00pm
TA2: Programming in the Linux Cluster Environment II: Parallel Programming with MPI
In this tutorial, attendees will be presented with basic information on programming with the MPI library. Tools for debugging parallel applications will be covered, along with techniques for improving application scalability on Linux clusters.
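For a flavor of the kind of program such an MPI tutorial typically covers, here is a minimal sketch in C (illustrative only, not taken from the tutorial materials): every rank announces itself, then rank 0 collects one integer from each worker with MPI_Send/MPI_Recv. It would normally be compiled with mpicc and launched with mpirun across the cluster nodes.

/* Minimal MPI sketch: hello from every rank, then a simple gather at rank 0. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("hello from rank %d of %d\n", rank, size);

    if (rank == 0) {
        int i, value;
        MPI_Status status;
        for (i = 1; i < size; i++) {
            /* receive one integer from each worker, in any order */
            MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, &status);
            printf("rank 0 got %d from rank %d\n", value, status.MPI_SOURCE);
        }
    } else {
        int value = rank * rank;      /* stand-in for real per-node work */
        MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}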

Monday, June 23, 2003, 1:30pm - 5:00pm
TS2: The Linux Cluster Software Stack: Operations from the System Administration Perspective
This tutorial presents the basic components in the cluster software stack: kernel issues, compilers, libraries, tools, security, scheduling, message passing and Grid components. Particular attention is given to the alternatives and tradeoffs in different styles of cluster operations.

Exhibitors: SuSE, Cyclades, Dolphin Interconnect

Conference tracks: Systems (S) | Applications (A) | Cluster Solutions (C) | Bioinformatics (B) | Digital Content Creation / Visualization / Simulations (D) | Petroleum / Geophysical Exploration (P) | Automotive & Aerospace Engineering (E)

Day 1 | Tuesday, June 24, 2003

8:30am - 9:00am Breakfast

9:00am - 10:00am Opening Keynote: John Picklo, DaimlerChrysler

10:00am - 10:30am Conference Attendee Break

10:30am - 11:30am
S1: Simple Linux Utility for Resource Management (10:30am - 11:15am)
S2: Simple Installation and Administration Tool for Large Clusters (11:15am - 12:00pm)

A1: Large Scale Parallel Reservoir Simulations on a Linux PC Cluster (10:30am - 11:15am)
A2: Scalable Performance of FLUENT on NCSA IA-32 Linux Cluster (11:15am - 12:00pm)

C1: Building the TeraGrid: The World's Largest Grid, Fastest Linux Cluster, and Fastest Optical Network Dedicated to Open Science

B1: Running BLAST on a Linux Cluster

D1: The Current State of Numerical Weather Prediction on Cluster Technology - What is Needed to Break the 25% Efficiency Barrier?

P1: Exploring the Earth's Subsurface with Itanium 2 Linux Clusters

E1: Cluster Computing in Space Applications

2:30pm - 3:30pm ClusterWorld Challenge hosted by Pete Beckman

3:30pm - 4:00pm Conference Attendee Break

4:00pm - 5:00pm
S4: A Middleware-Level Parallel Transfer Technique Over Multiple Network Interfaces

A3: Moore's Law and Cluster Computing: When Moore Is Not Enough

C3: Tools for Optimizing HPC Applications on Intel Clusters

B3: Terascale Linux Clusters: Supercomputing Solutions for Life Sciences

D3: Real-time Visualization of Cluster Networks

P3: Parallel Reservoir Simulation on Intel Xeon HPC Clusters

E3: Using Clusters to Deliver Turn Key CFD Solutions

6:00pm - 7:00pm Cluster Crash Party

7:15pm - 9:00pm Birds-of-a-Feather Meetings

Day 2 | Wednesday, June 25, 2003

8:30am - 9:00am Breakfast

9:00am - 10:00am Keynote: Jacobus N. Buur, Shell International Exploration and Production B.V.

10:00am - 10:30am Conference Attendee Break

10:30am - 11:30am
S5: The Cluster Integration Toolkit (CIT) (10:30am - 11:15am)
S6: Scalable C3 Power Tools (11:15am - 12:00pm)

A4: Cooperative Caching in Linux Clusters (10:30am - 11:15am)
A5: Object Storage: Scalable Bandwidth for HPC Clusters (11:15am - 12:00pm)

C4: The Ultra-Scalable HPTC Lustre Filesystem

B4: Blade Servers for Genomic Research

D4: HPC and HA Clustering for Online Gaming

P4: Geoscience visualization and seismic processing clusters: collaboration and integration issues

E4: LS-DYNA: CAE Simulation Software on Linux Clusters

1:15pm - 2:15pm C5: Building the World's Most Powerful Cluster

D5: Large Scale Scientific Visualization on PC Clusters

P5: Case Study: Deploying Large-scale Seismic Processing Clusters at CGG

E5: Linux Clusters in the German Automotive Industry

4:00pm - 5:00pm
S9: Scheduling for Improved Write Performance in a Parallel Virtual File System (CEFT-PVFS) (4:45pm - 5:30pm)

A6: Analyzing Cluster Log Files Using Logsurfer (4:00pm - 4:45pm)
A7: Performance Evaluation of Load Sharing Policies with PANTS on Beowulf Clusters (4:45pm - 5:30pm)

C6: Driving Cluster/Grid Technologies in HPC

B6: High Performance Mathematical Libraries for Itanium 2 Clusters

D6: The Power of Simulations: Predicting Emergent Behavior

P6: Grid Computing In The Energy Industry

E6: Improving Multi-site/Multi-departmental Cluster Systems Through Data Grids in the Automotive and Aerospace Industries

6:15pm - 7:30pm Excellence in Cluster Technology Awards

1:15pm - 2:15pm
C2: Building x86-64 Applications for AMD Opteron HPC Clusters

B2: BioBrew Linux: A Linux Cluster Distribution For Bioinformatics

D2: Building and Using Tiled Display Walls

P2: Scalability Considerations for Compute Intensive Applications

E2: Full Vehicle Dynamic Analysis Using Automated Component Modal Synthesis

Day 3 | Thursday, June 26, 2003

8:30am - 9:00am Breakfast

9:00am - 10:00am Keynote: John Reynders, Celera Therapeutics

10:00am - 10:30am Conference Attendee Break

10:30am - 11:30am
S10: Achieving Order through CHAOS: The LLNL HPC Cluster Experience (10:30am - 11:15am)
S11: Supercomputing Center Management Using AIRS (11:15am - 12:00pm)

A8: On the Numeric Efficiency of C++ Packages in Scientific Computing (10:30am - 11:15am)
A9: Benchmarking I/O Solutions for Clusters (11:15am - 12:00pm)

C7: Emerging Trends in Data Center Powering and Cooling

B7: Parallel Computational Biology Tools and Applications for Windows Clusters

D7: The Use of Clusters for Engineering Simulation

P7: Drilling in the Digital Oil Field: High Pay-offs from Linux Clusters

E7: Scrutinizing CFD Performance on Multiple Linux Cluster Architectures

1:15pm - 2:15pm
C8: The Virtual Environment and Its Impact on IT Infrastructure

B8: Building Software for High Performance Informatics and Chemistry

D8: NEESgrid: Virtual Collaboratory for Earthquake Engineering and Simulation

E8: Managing CAE Simulation Workload in Cluster Environments

S3: The Space Simulator

A10: The Design, Implementation, and Evaluation of mpiBLAST

Check www.clusterworldexpo.com for updates to the conference program


11:30am - 1:15pm Visit Exhibits | Lunch | GridWars Parallel Programming Championship 12:00pm-1:00pm

11:30am - 1:15pm Visit Exhibits | Lunch

S7: Full Circle: Simulating Linux Clusters on Linux Clusters

2:30pm - 3:30pm Keynote: Tilak Agerwala, IBM Research

3:30pm - 4:00pm Conference Attendee Break

5:00pm - 6:00pm Visit Exhibits

11:30am - 1:15pm Visit Exhibits | Lunch

2:30pm - 3:30pm Beowulf Reunion Tour

Conference Sessions


APPLICATIONS TRACK

Tuesday, June 24, 2003, 10:30am - 11:15am
A1: Large Scale Parallel Reservoir Simulations on a Linux PC Cluster
Walid A. Habiballah, Petroleum Engineering Application Services Department
Numerical simulation is an important tool used by engineers to develop production strategies and enhance hydrocarbon recovery from reservoirs. Demand for large scale reservoir simulations is increasing as engineers want to study larger and more complex models. In this study, we evaluate a state of the art PC cluster and available software tools for production simulations of large reservoir models. We discuss some of our findings and issues related to large scale parallel reservoir simulations and present performance comparisons between a Pentium IV Linux PC cluster and an IBM SP Nighthawk supercomputer.

Tuesday, June 24, 2003, 11:15am - 12:00pm
A2: Scalable Performance of FLUENT on NCSA IA-32 Linux Cluster
Wai Yip Kwok, National Center for Supercomputing Applications
FLUENT, a leading industrial computational fluid dynamics (CFD) software, has been ported to the NCSA IA-32 Linux cluster. For this study, the scalable performance of FLUENT is benchmarked with two engineering problems from Caterpillar, Inc. and Fluent, Inc. with a maximum of 64 processors to accommodate up to 10 million cells. This session will outline the impacts of different interconnects on simulation performance. Using Myrinet interconnects, the Linux cluster computes more than 2.5 times faster than an SGI Origin2000 supercomputer at NCSA. A performance increase of seven times is observed when 32 processors are used instead of two.

SYSTEMS TRACK

Tuesday, June 24, 2003, 10:30am - 11:15am
S1: Simple Linux Utility for Resource Management (SLURM)
Morris Jette, Lawrence Livermore National Laboratory
SLURM is an open source, fault-tolerant and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, scheduling and stream copy modules. This session presents an overview of the SLURM architecture and functionality.

Tuesday, June 24, 2003, 11:15am - 12:00pm
S2: Simple Installation and Administration Tool for Large Clusters
Tomoyuki Hiroyasu, Doshisha University
The installation and configuration of clusters with many nodes is difficult due to the large amount of time and knowledge required to fully complete the task. To solve this problem a simple installation and administration tool, "Doshisha Cluster Auto Setup Tool: DCAST," has been developed. Targeted at Linux, it supports both diskless and diskfull clusters, requires no interaction during install, boots slave nodes over the network, and changes to configuration are propagated to all nodes.

Tuesday, June 24, 2003, 1:15pm - 2:15pm
S3: The Space Simulator
Michael S. Warren, Los Alamos National Laboratory
The Space Simulator is a 294 processor Beowulf cluster with a peak performance near 1.5 Teraflops. It achieved Linpack performance of 665.1 Gflops on 288 processors, making it the 85th fastest computer in the world. The Space Simulator Cluster is dedicated to performing computational astrophysics simulations in the Theoretical Astrophysics group (T6) at Los Alamos National Laboratory. This case study will outline the design drivers, software and applications applied to the system and lessons learned in building this cluster.

Tuesday, June 24, 2003, 4:00pm - 5:00pm
S4: Middleware-Level Parallel Transfer Technique Over Multiple Network Interfaces
Nader Mohamed, University of Nebraska, Lincoln
Network middleware is a software layer that provides abstract network APIs to hide the low-level technical details from users. Existing network middleware supports single network interface and link message transfers. In this session, we describe a middleware-level parallel transfer technique that utilizes multiple network interface units that may be connected through multiple networks. It operates on any reliable transport protocol such as TCP and transparently provides an expandable high-bandwidth solution that reduces message transfer time, provides fault tolerance and facilitates dynamic load balancing between the underlying multiple networks. The experimental evaluation displayed a peak performance of 187 Mbps on two fast Ethernet networks.
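The striping idea behind such a technique can be pictured with a toy sketch (this is not the middleware described in the talk): a message is cut into fixed-size chunks and written round-robin to two connected stream sockets, with socketpair() standing in for two physical network interfaces.

/* Toy striping sketch: alternate fixed-size chunks over two "links". */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* write a buffer in fixed-size chunks, alternating between two sockets */
static void stripe_send(int link0, int link1, const char *buf, size_t len,
                        size_t chunk)
{
    size_t off = 0;
    int which = 0;

    while (off < len) {
        size_t n = (len - off < chunk) ? (len - off) : chunk;
        if (write(which ? link1 : link0, buf + off, n) < 0)
            return;                       /* give up on error (sketch only) */
        off += n;
        which ^= 1;
    }
}

int main(void)
{
    int a[2], b[2];                       /* two "links" via socketpair()   */
    const char msg[] = "stripe this message across two links";
    char part0[16] = {0}, part1[16] = {0};

    socketpair(AF_UNIX, SOCK_STREAM, 0, a);
    socketpair(AF_UNIX, SOCK_STREAM, 0, b);

    stripe_send(a[0], b[0], msg, sizeof(msg) - 1, 8);

    /* a real receiver would reassemble chunks from both links in order */
    read(a[1], part0, 8);
    read(b[1], part1, 8);
    printf("link 0 first chunk: \"%s\"\nlink 1 first chunk: \"%s\"\n",
           part0, part1);
    return 0;
}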

Wednesday, June 25, 2003, 10:30am - 11:15am
S5: The Cluster Integration Toolkit (CIT)
James H. Laros III, Sandia National Labs
The Cluster Integration Toolkit is an extensible, portable, scalable cluster management software architecture for a variety of systems. It has been successfully used to integrate and support a number of clusters at Sandia National Labs and several other sites, the largest of which is 1861 nodes. This session will discuss the goals of the project and how they were achieved. The installation process will be described and common tasks for cluster implementation and support will be demonstrated.

Wednesday, June 25, 2003, 11:15am - 12:00pm
S6: Scalable C3 Power Tools
Stephen Scott, Oak Ridge National Laboratory
With the growth of the typical cluster reaching 512 and more compute nodes, it is apparent that cluster tools must begin to reach toward the 1000's of nodes in scalability. Version 3.2 of the C3 tools has started stretching the Single System Illusion concept into the realm of 1000's of compute nodes by actually improving performance on larger clusters. This session is a discussion of how this was implemented and how to use this new version of C3, and also presents some results comparing the latest release with prior versions of C3.

Wednesday, June 25, 2003, 1:15pm - 2:15pm
S7: Full Circle: Simulating Linux Clusters on Linux Clusters
Jose Moreira, IBM Thomas J. Watson Research Center
BGLsim is a complete system simulator for parallel machines allowing users to develop, test and run the same code that will be used in a real system. It is currently being used in hardware validation and software development for the BlueGene/L cellular architecture machine. BGLsim is capable of functionally simulating multiple nodes of this machine operating in parallel. It simulates instruction execution in each node and the communication that happens between nodes. To illustrate the capabilities of BGLsim, experiments running the NAS Parallel Benchmark IS on a simulated BlueGene/L machine are described.

Wednesday, June 25, 2003, 4:45pm - 5:30pm
S9: Scheduling for Improved Write Performance in a Parallel Virtual File System (CEFT-PVFS)
Yifeng Zhu, University of Nebraska, Lincoln
This session will demonstrate that all the disks on the nodes of a cluster can be connected together through CEFT-PVFS, a RAID-10 style parallel file system for Linux, to provide GBytes/sec parallel I/O performance without any additional cost. To improve the overall I/O performance, I/O requests can be scheduled on a less loaded node in each mirroring pair, thus making more informed scheduling decisions. Based on the heuristic rules we found from the experimental results, a scheduling algorithm for dynamic load-balancing has been developed that significantly improves the overall performance.
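The scheduling idea can be illustrated with a small sketch: for each mirrored pair of I/O nodes, route the next request to whichever member currently reports the lighter load. This is an illustration only; the node names and the numeric load metric below are invented and are not taken from CEFT-PVFS.

/* Illustrative only: pick the less-loaded member of each mirror pair. */
#include <stdio.h>

struct ionode { const char *name; double load; };   /* invented load metric */

/* return the less-loaded member of a mirrored pair */
static struct ionode *pick_target(struct ionode *a, struct ionode *b)
{
    return (a->load <= b->load) ? a : b;
}

int main(void)
{
    struct ionode pair[2] = { { "ion0", 0.72 }, { "ion1", 0.31 } };
    int i;

    for (i = 0; i < 4; i++) {
        struct ionode *t = pick_target(&pair[0], &pair[1]);
        printf("write request %d -> %s\n", i, t->name);
        t->load += 0.10;              /* pretend the write adds load */
    }
    return 0;
}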

Thursday, June 26, 2003, 10:30am - 11:15am
S10: Achieving Order through CHAOS: The LLNL HPC Cluster Experience
Robin Goldstone, Lawrence Livermore National Laboratory
For the past several years, Lawrence Livermore National Laboratory (LLNL) has invested significant effort in the deployment of large High Performance Computing (HPC) Linux clusters. After deploying two modest sized clusters (88 nodes and 128 nodes) in early 2002, efforts progressed to the deployment of the Multiprogrammatic Capability Resource (MCR, 1154 nodes) in fall 2002 and the ASCI Linux Cluster (ALC, 962 nodes) in early 2003. Through these efforts, LLNL has developed expertise in a number of areas related to the design, deployment and management of large Linux clusters. In this session LLNL will present their experiences, including challenges encountered and lessons learned.

Thursday, June 26, 2003, 11:15am - 12:00pm
S11: Supercomputing Center Management Using AIRS
Robert A. Ballance, University of New Mexico
Running a large university supercomputing center teaches many lessons, including the need to centralize data collection and analysis, automate system administration functions, and enable users to manage their own projects. The Albuquerque Integrated Reporting System (AIRS), a centralized, web-enabled application capable of user and project administration across multiple clusters and reporting against both active and historical data, evolved in response to these pressures.

Thursday, June 26, 2003, 11:15am - 12:00pm
A9: Benchmarking I/O Solutions for Clusters
Stefano Cozzini, Democritos INFM National Simulation Center
Clustered systems offer many advantages for demanding scientific applications: they can deal with massive CPU-bound requirements and allow the distribution of RAM among many nodes. However, many scientific applications process massive amounts of data and therefore require high performance, distributed storage next to parallel I/O. This session will discuss present-day I/O cluster solutions based on Bonnie performance benchmarking for a variety of popular systems.

Thursday, June 26, 2003, 1:15pm - 2:15pm
A10: The Design, Implementation, and Evaluation of mpiBLAST
Aaron E. Darling, University of Wisconsin-Madison
mpiBLAST is an Open Source parallelization of BLAST that achieves superlinear speed-up by segmenting a BLAST database and then having each node in a computational cluster search a unique portion of the database. Database segmentation permits each node to search a smaller portion of the database, eliminating disk I/O and vastly improving BLAST performance. Because database segmentation does not create heavy communication demands, BLAST users can take advantage of low-cost and efficient Linux cluster architectures such as the bladed Beowulf. In addition to this presentation of the software architecture of mpiBLAST, there will be a detailed performance analysis of mpiBLAST to demonstrate its scalability.

CLUSTER SOLUTIONS TRACK

Tuesday, June 24, 2003, 10:30am - 11:30am
C1: Building the TeraGrid
Pete Beckman, Argonne National Laboratory
The TeraGrid is one of the most ambitious collaborative grid projects ever undertaken. The building blocks for the $88 million National Science Foundation funded project include mammoth computational resources, ultra-fast fiberoptic networks linking NCSA, SDSC, Caltech, Argonne and PSC, and a software "grid hosting environment." Together, they will form an environment that makes developing cluster-based, grid-enabled scientific applications easy. This presentation will provide an overview of the project, the bleeding edge technologies used to bring clusters and grids to the scientific community and an update on current status and results.

Tuesday, June 24, 2003, 1:15pm - 2:15pm
C2: Building x86-64 Applications for AMD Opteron HPC Clusters
Richard Brunner, AMD
This session will provide an overview of the AMD Opteron architecture and a real world look at HPC applications based on the AMD Opteron processor. The session will focus on Linux SMP and NUMA applications with an overview of the AMD Opteron family of 64-bit processors and associated support ICs from AMD. The applications session will also demonstrate how to apply HyperTransport technology with the vast ecosystem of support ICs, which are available today and in the months to come.

Tuesday, June 24, 2003, 4:00pm - 5:00pm
C3: Tools for Optimizing HPC Applications on Intel Clusters
Don Gunning, Intel
The Intel software research lab is involved in several projects related to the development and deployment of HPC software on Intel based clusters. This discussion will focus on the work Intel is doing in parallel/concurrent computing within a single job or task; the development, debugging and tuning of multi-threaded applications; deploying MPI (and mixed MPI/threaded) applications; and extending OpenMP to execute across clusters. This discussion will also touch on ideas for maximum messaging performance on the interconnect while maximizing application performance on the node.


Tuesday, June 24, 2003, 4:00pm - 5:00pm
A3: Moore's Law and Cluster Computing: When Moore Is Not Enough
Greg Lindahl, Key Research, Inc.
Linux cluster builders have become accustomed to continuous improvement of cluster building blocks: each year, CPUs get faster, disks get bigger, memory bandwidth rises and networks get cheaper and faster. These improvements are often seen as the inevitable march of progress, driven by the commodity market and Moore's Law. This session will revisit Moore's famous law in detail to determine if it adequately predicts an environment ripe for commodity cluster computing.

Wednesday, June 25, 2003, 10:30am - 11:15am
A4: Cooperative Caching in Linux Clusters
Ying Xu, University of California Riverside
Operating systems used in most Linux clusters only manage memory locally without cooperating with other nodes in the system. This can create states where a node within the cluster may be short of memory while idle memory in other nodes is wasted. This session attempts to solve the problem of how to improve the cluster operating system to support the use of cluster-wide memory as a global distributed resource. Presented will be a description of a cooperative caching scheme for caching files in the cluster-wide memory and corresponding changes in Linux kernel memory management to support it.

Wednesday, June 25, 2003, 11:15am - 12:00pm
A5: Object Storage: Scalable Bandwidth for HPC Clusters
Garth Gibson
This session describes the Object Storage Architecture solution for cost-effective, high bandwidth storage in HPC environments. It addresses the unique problems of storage intensive computations in very large clusters, suggesting that a shared file system with out-of-band metadata management is needed to achieve the required bandwidth. The session further argues that for excellent data reliability, storage protection needs to be supported on the data path, and it recommends the higher-level semantics of object-based, rather than block-based, storage for scalable performance, data reliability and efficient sharing.

Wednesday, June 25, 2003, 4:00pm - 4:45pm
A6: Analyzing Cluster Log Files Using Logsurfer
James Prewett, University of New Mexico
Logsurfer is a log analysis tool that simplifies maintaining a cluster by aiding identification and resolution of system issues. This session will outline several examples of using Logsurfer in a cluster environment. Examples range from finding the traces of a complex exploitation of a service to determining which of a set of nodes have problems rebooting. Attendees will learn to configure Logsurfer to meet the particular needs of their environment.

Wednesday, June 25, 2003, 4:45pm - 5:30pm
A7: Performance of Load Sharing Policies with PANTS on Beowulf Clusters
James Nichols, Worcester Polytechnic Institute
Powerful, low-cost clusters of personal computers, such as Beowulf clusters, have fueled the potential for widespread distributed computation. While these Beowulf clusters typically have software that facilitates development of distributed applications, there is still a need for effective distributed computation that is transparent to the application programmer.

Thursday, June 26, 2003, 10:30am - 11:15am
A8: On the Numeric Efficiency of C++ Packages in Scientific Computing
Ulisses Mello, IBM TJ Watson Research Center
Object-Oriented Programming (OOP) has proven to be a useful paradigm for programming complex models. In spite of recent interest in expressing OOP paradigms in languages such as FORTRAN90, C++ is the dominant OO language in scientific computing, despite its complexity. Barton & Nackman advocated C++ as a replacement for FORTRAN in engineering and scientific computing due to its availability, portability, efficiency, correctness and generality. These authors used OOP for code reorganization of LAPACK (Linear Algebra PACKage), and they were able to group and wrap over 250 FORTRAN routines into a much smaller set of classes, which expressed the common structure of LAPACK.



Tuesday, June 24, 2003, 4:00pm - 5:00pm
B3: Terascale Linux Clusters: Supercomputing Solutions for the Life Sciences
Bruce Ling, Tularik
At Tularik, a biotechnology company specializing in drug discovery and development using gene regulation, informatics has become essential for the process of genomics-based drug discovery. With the explosion of genomic data and lead discovery screening data points, a powerful computing environment becomes a must in order to boost R&D productivity. By deploying a 150-processor cluster, Tularik has successfully managed millions of data points, coming from Assay Development, High-Throughput Screening (HTS), Structure-Activity Relationship (SAR), Lead Optimization and Micro-Array, to speed its R&D productivity and decision making processes.

Wednesday, June 25, 2003, 10:30am - 11:30am
B4: Blade Servers for Genomic Research
Ron Neyland, RLX Technologies
Clusters based on industry standard hardware and software have become the most widely used tools for performing genomic processing and analysis. While providing many benefits such as outstanding price/performance, they also introduce a new set of problems. This session will address how blade servers provide a compute cluster platform that delivers the compute power required for genomic research, while minimizing many of the problems. Real world examples of clusters running many of the widely used genomic applications will be presented, along with tips and tools for managing the cluster environment.

Wednesday, June 25, 2003, 4:00pm - 5:00pm
B6: High Performance Mathematical Libraries for Itanium 2 Clusters
Hsin-Ying Lin, HP
HP's Mathematical LIBrary (HP MLIB) provides a user-friendly interface using standard definitions of public domain software and enables users to access the power of high performance computing. HP MLIB fully exploits the architecture of the processor and achieves optimal performance on Itanium 2. HP MLIB has been used by high performance computing customers for over 15 years. This session will provide a brief overview of relevant architectural features and depict how these features have been used to design high-level algorithms. The performance of some of the key components in HP MLIB on Itanium 2 clusters will be discussed: i.e. matrix multiplication, ScaLAPACK and SuperLU_DIST.

Thursday, June 26, 2003, 10:30am - 11:30am
B7: Parallel Computational Biology Tools and Applications for Windows Clusters
Jaroslaw Pillardy, Cornell Theory Center
Using massively parallel programs for data analysis is the most popular way of dealing with the enormous amounts of data produced in molecular biology research. Several computational biology tools for Microsoft Windows clusters of different levels of complexity, available at the Computational Biology Service Unit at the Cornell Theory Center, will be discussed. All of the tools follow a master-worker approach using MPI communications. The simplest tools - tools that are very important to biologists - are standard sequence-based data mining tools such as BLAST and HMMER. More sophisticated is the structure-based (threading) protein annotation algorithm LOOPP.

Thursday, June 26, 2003, 1:15pm - 2:15pm
B8: Building Software for High Performance Informatics and Chemistry
Joseph Landman, Scalable Informatics LLC
Given the growth rate of life science data sets, analysis applications designed for single machines with shared memory and one or more CPUs quickly lead to a performance bottleneck. Clusters and Grids represent a potential solution to this bottleneck, but only when applications are properly designed to make full use of the resources available. In this session we will look at the hard realities of building software for the informatics industry, including: problems with running legacy software on clusters, how to make efficient use of clusters, for both the cluster and the user, and making life science informatics and chemistry applications scale well on clustered systems.

Wednesday, June 25, 2003, 10:30am - 11:30am
C4: The Ultra-Scalable HPTC Lustre Filesystem
Kent Koeninger, HP
The Lustre filesystem is designed to provide a coherent, scalable shared filesystem that can serve thousands of Linux client nodes, delivering extremely high-bandwidth parallel-filesystem access to many terabytes of storage. This talk will describe how the Lustre filesystem will be used in scalable HPTC Linux systems to combine the flexibility, scalability and manageability of NAS systems with the performance of SAN systems. The Lustre development effort is an open source project with an initial release target in 2003.

Wednesday, June 25, 2003, 1:15pm - 2:15pm
C5: Building the World's Most Powerful Cluster
Kim Clark, Linux NetworX
In 2002, Linux NetworX built the MCR cluster housed at Lawrence Livermore National Laboratory. It is currently the largest cluster in the world with a theoretical peak of 11.2 Tflops and, with more than 1,000 nodes to manage and monitor, ranks as the fifth largest supercomputer in the world. The unique challenges involved in building and configuring such a massive system and what was learned from this experience will be discussed. Attendees will learn how to apply aspects of the LLNL system to their own smaller system to enhance cluster performance and reliability.

Wednesday, June 25, 2003, 4:00pm - 5:00pm
C6: Driving Cluster/Grid Technologies in HPC
David Barkai, Intel
High performance computing has undergone a metamorphosis in the last 15-20 years. The impact of changes to the industry and user community will be reviewed. Discussed will be a summary of the building blocks as a set of components built upon enabling technologies, while highlighting the gaps and challenges as cluster computing ramps up and grid computing continues to develop.

Thursday, June 26, 2003, 10:30am - 11:30am
C7: Emerging Trends in Data Center Powering and Cooling
Wahib Nawabi, APC
Traditional data center architecture approaches force enterprises to build out to full capacity from day one, yet one hundred percent utilization of the designed capacity is seldom reached. This results in long deployment schedules, millions of dollars of unrecoverable up-front capital investments and the maintenance of expensive service contracts on under-utilized infrastructure. APC's PowerStruXure offers an on-demand solution that accelerates speed of deployment and allows you to invest in a data center solution that is sufficient to meet today's demands, rather than an uncertain estimate of future capacity.

Thursday, June 26, 2003, 1:15pm - 2:15pm
C8: The Virtual Environment and Its Impact on IT Infrastructure
Daniel Kusnetzky, IDC
IDC has been examining the evolution of the virtual environment for quite a number of years. This session will examine IDC's definition of the virtual environment, its roots in techniques developed in the late 1970s, and how Windows, Unix and Linux can be deployed as platforms in the virtual environment. Dan Kusnetzky, IDC's Vice President of System Software, will present the drivers for virtual environment software adoption and project how the virtual environment will impact the overall IT infrastructure in the coming years.

BIOINFORMATICS TRACK

Tuesday, June 24, 2003, 10:30am - 11:30am
B1: Running BLAST on a Linux Cluster
Ray Hookway, HP
Everyone knows that BLAST is an example of an embarrassingly parallel application, i.e., an application that will run well on a cluster. Conceptually, one breaks up a query against a database into several queries against subsets of the database and distributes the resulting jobs across the nodes of the cluster. However, it is not obvious how to go about doing this. The talk will begin with a brief review of how BLAST works and then will explore factors that affect the performance of BLAST running on a single system. The final focus will be on the answer to the question "How to run BLAST on a cluster?"
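The divide-and-distribute idea described above can be sketched in a few lines of MPI, assuming the database has already been split into numbered fragments and some BLAST command line is installed on every node. The fragment paths and the run_blast command below are placeholders, not real tool names, and this sketch is not taken from the talk.

/* Illustrative only: each MPI rank searches one pre-made database fragment. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    char cmd[512];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* rank r searches fragment r of the pre-partitioned database */
    snprintf(cmd, sizeof(cmd),
             "run_blast --db /data/nr.frag.%d --query /data/query.fa "
             "--out /data/hits.%d",            /* placeholder command line */
             rank, rank);
    if (system(cmd) != 0)
        fprintf(stderr, "rank %d: fragment search failed\n", rank);

    MPI_Barrier(MPI_COMM_WORLD);               /* wait for every fragment  */
    if (rank == 0)
        printf("all %d fragment searches done; merge the per-rank hits\n", size);

    MPI_Finalize();
    return 0;
}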

Tuesday, June 24, 2003, 1:15pm - 2:15pm
B2: BioBrew Linux: A Linux Cluster Distribution For Bioinformatics
Glen Otero, Callident
BioBrew Linux is the first known attempt at creating and freely distributing an easy-to-use clustering software package designed for bioinformaticists. With support for both IA32 and IA64 platforms, BioBrew is a Linux distribution that combines the NPACI Rocks cluster software with several popular Open Source bioinformatics software tools like BLAST, HMMER, ClustalW and BioPerl. The result is a Linux distribution that can be used to install a workstation or a Beowulf cluster for bioinformatics analyses.



Thursday, June 26, 2003, 10:30am - 11:30am
D7: The Use of Clusters for Engineering Simulation
Lynn Lewis, HP
Clusters allow the use of advanced mathematical techniques for optimization, changing the way engineers arrive at cost effective, safe designs. Without inexpensive clusters, engineers at automotive manufacturers could not do 1000's of crash test simulations integrated with the initial design stage, nor test for structural integrity, much less manufacturability, within weeks. This session will examine in detail how, over the previous decade, Unix and lately Linux clusters have found use in commercial crash and fluid dynamics simulations, changing the way cars and aircraft are designed and built.

Thursday, June 26, 2003, 1:15pm - 2:15pm
D8: NEESgrid: Virtual Collaboratory for Earthquake Engineering and Simulation
Tom Prudhomme, National Center for Supercomputing Applications
NEESgrid will link earthquake engineering researchers across the U.S. with leading-edge computing resources and research and testing facilities, allowing teams to plan, perform, and publish their research. Via both telepresence and other collaboration technologies, research teams are able to work remotely on experimental trials and simulations. This session will examine how NEESgrid, through the shared resources of Grid technology, will bring together information technologists and engineers in a way that will revolutionize earthquake engineering, research and simulation.

PETROLEUM TRACK

Tuesday, June 24, 2003, 10:30am - 11:30am
P1: Exploring the Earth's Subsurface with Itanium 2 Linux Clusters
Keith Gray
This case study of Itanium 2 processor architecture and Linux cluster technology applied to a seismic imaging and migration project shows how a major oil & gas company reduced its cost for this high-end infrastructure by one-half while increasing performance by 3X, and in some cases exceeding this expectation by 5X. The environment includes 1024 processors (4-way HP rx5670 servers x 256 servers) with 8.2 Terabytes (32GB per rx5670 server) of memory and operates at over 4 Teraflops peak performance.

Tuesday, June 24, 2003, 1:15pm - 2:15pm
P2: Scalability Considerations for Compute Intensive Applications
Christian Tanasescu, SGI
This session investigates the scalability, architectural requirements and performance characteristics of some of the most widely used compute intensive applications in the scientific and engineering communities. Seismic Processing and Reservoir Simulation (SPR) applications generally consume data read from memory and have to load continuous new data. As a result, to keep the floating point (FP) units busy, these applications require computer architectures with high memory bandwidth, mainly due to the data addressing patterns and heavy I/O activities. We will also introduce BandeLa, to study the influence of the communication bandwidth and latency for MPI applications.

Tuesday, June 24, 2003, 4:00pm - 5:00pm
P3: Parallel Reservoir Simulation on Intel Xeon HPC Clusters
Kamy Sepehrnoori, University of Texas at Austin
Numerical simulation of reservoirs is an integral part of geo-scientific studies, with the goal of optimizing petroleum recovery. In this session, we conduct a series of benchmarks by running a parallel reservoir simulation code on an Intel Xeon Linux cluster and study the scalability while using different interconnects for the cluster. Our results show that the simulator's performance scales linearly from one to 64 single-processor nodes when using a low-latency, high-bandwidth interconnect. In addition to benchmarking, we describe a process-to-processor mapping approach for dual-processor clusters to improve communication performance as well as overall performance of the simulator.

Wednesday, June 25, 2003, 10:30am - 11:30am
P4: Geoscience Visualization and Seismic Processing Clusters
Phil Neri, Paradigm
The active development of Linux visualization clusters has led to the notion of associating closely compute-intensive seismic processing

DIGITAL CONTENT CREATION / VISUALIZATION / SIMULATIONS

Tuesday, June 24, 2003 10:30am -11:30amD1: The Current State of NumericalWeather Prediction on Cluster TechnologyDan Weber, Center for the Analysis andPrediction of StormsThis session will look in depth at the currentstate of weather prediction and the manychallenges it faces. The talk will examine thecomputational needs (teraflops) of a robustnumerical weather prediction (NWP) systemat thunderstorm scale and review NWP per-formance on current computer technology. Areview of current models will be addressed, aswell as the roadblocks associated with clus-ters. Finally, will be proposal for a completeshift in the way systems of equations aresolved on scalar technology in order to breakthe 25% efficiency ceiling.

Tuesday, June 24, 2003 1:15pm-2:15pm
D2: Building and Using Tiled Display Walls
Paul John Rajlich, National Center for Supercomputing Applications
Tiled display walls provide a large-format environment for presenting very high-resolution visualizations by tiling together the output from a collection of projectors. The projectors are driven by a Linux cluster augmented with high-performance graphics accelerator cards, and costs are controlled by using commodity projectors and low-cost PCs. Tiled walls face a number of challenges, such as aligning the projectors so that the output of adjacent tiles lines up to create a seamless image. This session will discuss the Alliance Display Wall-in-a-Box effort, a distribution of related Open Source software packages that reduces the setup and maintenance of complex high-end display systems.
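At its core, driving a tiled wall is a matter of deciding which portion of the global image each render node owns. The short C sketch below is a hypothetical illustration, not code from the Wall-in-a-Box distribution: it computes the normalized sub-rectangle for every tile of a 4x3 wall, ignoring the projector overlap and alignment issues the session addresses.

    /* tile_layout.c: illustrative only. Compute each tile's share of the
     * global image for a COLS x ROWS projector wall; a render node would
     * use its tile's rectangle to offset its view frustum or viewport. */
    #include <stdio.h>

    #define COLS 4
    #define ROWS 3

    int main(void)
    {
        int col, row;
        for (row = 0; row < ROWS; row++) {
            for (col = 0; col < COLS; col++) {
                double x0 = (double)col / COLS, x1 = (double)(col + 1) / COLS;
                double y0 = (double)row / ROWS, y1 = (double)(row + 1) / ROWS;
                printf("tile (%d,%d): x [%.3f, %.3f]  y [%.3f, %.3f]\n",
                       col, row, x0, x1, y0, y1);
            }
        }
        return 0;
    }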

Tuesday, June 24, 2003 4:00pm-5:00pm
D3: Real-time Visualization of Cluster Networks
Tom Caudell, University of New Mexico
The real-time visualization of cluster networks provides a number of benefits to administrators and developers in search of performance bottlenecks. Real-time visuals provide early warning of real problems in network traffic as well as clear indication of potential problems before they occur. However, real-time network visualization is a remarkably difficult project. This session will discuss a number of the technical hurdles involved in building a visualization system that will scale with increased performance. Using network visualization, organizations can design applications that take better advantage of network traffic, avoiding bottlenecks, and administrators can make informed decisions on scheduling that lead a cluster toward optimal performance.

Wednesday, June 25, 2003 10:30am-11:30am
D4: HPC and HA Clustering for Online Gaming
Jesper Jensen, William Pentoja, Matthew Elkourie, SCI
SCI, the company that developed and supports the backend for the Department of Defense's America's Army game, will deliver a case study on deploying gaming clusters for the DoD and other game titles, and give an overview of where large-scale game technology is and where it is going. With technology capable of pushing an average of 1.35 teraflops per cabinet and leveraging multiple transit carriers, SCI clusters deliver both the HPC and HA required to support a massive gaming audience. This discussion will touch on solutions for 32-bit and next-generation 64-bit architectures both in place and under development.

Wednesday, June 25, 2003 1:15pm-2:15pm
D5: Large Scale Scientific Visualization on PC Clusters
Brian Wylie, Sandia National Laboratories
This session covers the use of PC clusters with commodity graphics cards as high-performance scientific visualization platforms. A cluster of PC nodes, in which many or all of the nodes have 3D hardware accelerators, is an attractive approach to building a scalable graphics system. The main challenge in using cluster-based graphics systems is the difficulty of realizing the full aggregate performance of all the individual graphics accelerators. Topics covered will include parallel geometric rendering, parallel volume rendering, data distribution approaches and novel techniques for utilizing graphics processors.

Wednesday, June 25, 2003 4:00pm-5:00pm
D6: The Power of Simulations: Predicting Emergent Behavior
Cameron Hunt
Commodity hardware and Open Source software like Linux have made distributed computing through clusters and Grids the most cost-effective method of increasing computing power. By combining the ubiquity of distributed computing...


AUTOMOTIVE & AEROSPACE ENGINEERING

Tuesday, June 24, 2003 10:30am-11:30am
E1: Cluster Computing in Space Applications
Eric George, The Aerospace Corporation
This case study will examine how The Aerospace Corporation utilizes cluster computing for a variety of applications in support of high-priority national defense programs, including the Global Positioning System (GPS) and future missile warning programs. Applications to date have focused on astrodynamics, satellite constellation design, communications network modeling, thermal analysis, and complex scheduling/tasking algorithms. Processing techniques range from Monte Carlo analysis and brute-force search operations to genetic algorithms. Research is progressing on implementation of a diverse grid-computing environment at Aerospace.

Tuesday, June 24, 2003 1:15pm-2:15pm
E2: Full Vehicle Dynamic Analysis Using Automated Component Modal Synthesis
Peter Schartz, MSC.Software
Today it is commonplace to attempt to analyze the fully trimmed body of an automobile for its vibration characteristics, over increasing frequency ranges, and on inexpensive computer hardware. The cost effectiveness of RISC-based cache processors, combined with upward pressure in the form of large, detailed models, has allowed new software methods to use domain decomposition to enable high-level parallelism. A domain decomposition, followed by a component modal synthesis solution, is the basis for Automated Component Modal Synthesis (ACMS) in MSC.Nastran. The solution is described in theory, and its effectiveness is demonstrated by an example taken from today's automotive industry.

Tuesday, June 24, 2003 4:00pm-5:00pm
E3: Using Clusters to Deliver Turn-Key CFD Solutions
Paul Bemis, Fluent
While low-cost, high-performance clusters have been in use since the early 1990s, the application of commercial off-the-shelf CFD software, such as Fluent, to harness these shared-nothing architectures has only been viable since near the end of that decade. Early implementations required a persistent IT department willing to commit the time and resources necessary to overcome these challenges. Now, however, organizations are able to access a full-featured implementation of Fluent via the Internet in a pay-as-you-go scenario. This session will discuss the problems solved and gains realized by a distributed implementation of Fluent 6.1.

Wednesday, June 25, 2003 10:30am-11:30am
E4: LS-DYNA: CAE Simulation Software on Linux Clusters
Guangye Li, IBM
LS-DYNA is used in a wide variety of simulation applications: automotive crashworthiness and occupant safety, sheet metal forming, military and defense applications, aerospace industry applications, and electronic component design. Several years ago, one simulation of a very simplified finite element model needed days to complete on a Symmetric Multiprocessing (SMP) vector computer. With the introduction of distributed multiprocessing technology, the MPP (Massively Parallel Processors) version of LS-DYNA can dramatically reduce the turnaround time for the simulation and therefore reduce the time for the automotive design process. We will present a comparison of the scalability of the SMP and MPP versions of LS-DYNA, as well as a comparison of communication networks (Myrinet, Fast Ethernet, Gigabit Ethernet) on Linux clusters.

Wednesday, June 25, 2003 1:15pm-2:15pm
E5: Linux Clusters in the German Automotive Industry
Dr. Karsten Gaier, science + computing AG
Since the first German CAE Linux compute cluster (LCC) was installed in 1999 at DaimlerChrysler for electromagnetic compatibility (EMC) calculations, there has been great success in the adoption of LCCs. These include clusters based on 512 CPUs used for crash calculations at a major automotive manufacturer. This talk will provide an overview of the ways in which Linux clusters are changing the course of CAE in Germany. It will also look at a number of different configurations currently being implemented at some of the world's largest automotive manufacturers.

Wednesday, June 25, 2003 4:00pm-5:00pm
E6: Improving Clusters Through Data Grids in the Automotive and Aerospace Industries
Andrew Grimshaw, Avaki
As the pressure to optimize product design and manufacturing processes increases, it is critical for the automotive and aerospace industries to give professionals secure access to product and manufacturing information. Data is often spread across multiple R&D sites and suppliers, and must be accessible regardless of location. Additionally, product developers require more and more processing power, delivered via clusters that are not effective unless they can provide access to the data they need. This session will examine the most significant data challenges facing today's automotive and aerospace companies and how Grid technology impacts the engineering and manufacturing process.

Thursday, June 26, 2003 10:30am-11:30am
E7: Scrutinizing CFD Performance on Multiple Linux Cluster Architectures
Thomas Hauser, Utah State University
Linux cluster supercomputers are a cost-effective platform for simulating fluid flow in engineering applications. However, obtaining high performance on these clusters is a non-trivial problem, requiring tuning and design modifications to the Computational Fluid Dynamics (CFD) codes. Investigations into optimizing CFD codes on Linux cluster platforms will be presented. Detailed performance results of two CFD codes on a wide range of cluster architectures, including Pentium and Athlon, Intel Itanium, and the AMD Opteron, will be analyzed. The single- and multi-processor performance of these codes on different cluster architectures will be compared and means of improving performance discussed.

Thursday, June 26, 2003 1:15pm-2:15pm
E8: Managing CAE Simulation Workload in Cluster Environments
Michael M. Humphrey, Altair
Automotive manufacturers are beginning to capitalize on workload management software to get the most out of their numerically intensive computing environments. Workload management software is middleware technology that sits between your compute-intensive applications - such as ABAQUS, ANSYS, FLUENT, LS-DYNA, NASTRAN and OPTISTRUCT - and your network, hardware and operating systems. The software schedules and distributes all types of application runs (serial, parallel, distributed memory, parameter studies, big memory, long running, etc.) on all types of hardware (desktops, clusters, supercomputers, and even across sites). This presentation will describe the current capabilities of PBS Pro workload management software as a middleware enabler for robust system design.



Travel and Accommodations

Special rates have been negotiated for CWCE 2003 attendees at hotels located near the San Jose Convention Center. All hotel requests should be submitted directly to the San Jose Convention and Visitor's Bureau (CVB) by June 13, 2003. Please make your reservation through our online form.

Fairmont San Jose (Headquarter Hotel)  $149 + tax (single/double)
San Jose Marriott  $139 + tax (single/double)
Hilton San Jose  $129 + tax (single/double)

You may contact the CVB directly by calling (408) 792-4168. You may also fax requests to (408) 293-3705. Please do not contact the hotel directly, as these special rates are only available through the CVB. Room rates are guaranteed provided that rooms are still available.

HURRY! These discounted rates may not be available after June 13.

Airline Reservations: American Airlines is the official carrier for the ClusterWorld Conference and Expo. To take advantage of the special fares that are being offered to CWCE 2003 attendees, please contact American Airlines directly at (800) 433-1790 and refer to authorization code # A4463AX. Best flights will sell out quickly and seats are limited.

Car Rental: Contact AVIS directly at (800) 331-1600 or online at www.avis.com and reference discount code # J998405 for the best rates available.

Convention Center Location: The San Jose Convention Center is located at 150 W. San Carlos Street, between Almaden Boulevard and Market Street, in San Jose, CA. The Convention Center parking garage entrances are located on Almaden Boulevard and Market Street. In addition, there are more than 21,000 parking spaces in downtown San Jose; parking locations are subject to change due to downtown development. For the most up-to-date information on downtown parking, including free parking and validation programs, public transportation and the DASH shuttle, visit www.sjdowntownparking.com.

[Map: downtown San Jose, showing the San Jose Convention Center, Parkside Hall, the conference hotels, the light rail line, and nearby freeways (87, 101, 280).]

Hotel locations:
1  Fairmont Hotel, 170 S. Market St.
2  Hilton San Jose & Towers, 300 Almaden Blvd.
3  San Jose Marriott, 365 S. Market St.

Light Rail runs every 10-15 minutes; call (408) 321-2300 for schedule.

General Information

CONFERENCE DISCOUNTS

Academic and government employees are eligible to receive a 50% discount off all conference registration packages. To take advantage of this discount, you must either fax or mail the ClusterWorld Conference Registration form to the fax number or address below by June 13, 2003.

Fax the completed form to:
ClusterWorld Conference & Expo
c/o ExpoExchange Registration
(301) 694-5124

Mail the completed form to:
ClusterWorld Conference & Expo
c/o ExpoExchange Registration
PO Box 590
Frederick, MD 21705-0590

A valid ID is required to receive the academic and government employee discount. If you wish to take advantage of the discounted pricing, please be sure to indicate as such on the Conference Registration Form, then attach a copy of your identification credentials and submit the completed form via fax or mail. Valid forms of identification include:

➤ Educational Institution identification
➤ Institutional letter verifying educator/student’s employment or enrollment status
➤ Health Insurance identification displaying Educational Institution name
➤ Government agency identification

Group Discount
Register for four Conference Packages and receive the fifth of equal or lesser value for FREE! Please mail or fax all of your Registration forms together and identify which registrant is FREE. Please note that this special group discount is not available online.

RECEIVING YOUR BADGE

If you are a US attendee and you register on or before May 23, 2003, you will receive your badge in the mail. To receive your badge holder, simply bring your badge with you on-site to any Badge Holder Pick-Up counter, located in the Main Lobby of the San Jose Convention Center. All US attendees that register after May 23, 2003 must pick up their badge on-site during registration hours.

If you are an International attendee, your badge will not be mailed to you. In order for you to receive your badge, please bring your registration confirmation letter with you on-site to the Conference Registration counter, located in the Main Lobby of the San Jose Convention Center.

REGISTRATION AND EVENT POLICY

Registration Cancellation and Substitutions
If you need to cancel, you may do so for a full refund until June 18, 2003. Attendees who register prior to or after the deadline date and who do not cancel in writing by the deadline date are liable for and will be charged the full registration fee. Sorry, no refunds or letters of credit are available after this date. You may fax or mail your cancellation request in order for it to be processed.

Fax the completed form to:
ClusterWorld Conference & Expo
c/o ExpoExchange Registration
(301) 694-5124

Mail the completed form to:
ClusterWorld Conference & Expo
c/o ExpoExchange Registration
PO Box 590
Frederick, MD 21705-0590

Written requests for a downgraded pass must be received no later than June 18, 2003 for a full refund of the difference in registration fees between the value of the original and downgraded pass. Requests received after June 18, 2003 will receive a letter of credit for the 2004 CWCE issued for the difference in pass values. Upgrade pass requests must be submitted in writing and faxed to (301) 694-5124 along with payment information for the difference in value.

Substitutions are allowed only with the written permission of the original registrant. Please mail your substitution request to the above address, or fax it to (301) 694-5124.


❑ Check here if you require special services. (Attach a written description of your needs.)

ATTENDEE INFORMATION

FIRST NAME M.I. LAST NAME

TITLE

COMPANY

STREET ADDRESS, P.O. BOX, APT#, SUITE, MAIL STOP, ETC.

CITY STATE/PROVINCE

ZIP/POSTAL CODE COUNTRY

PHONE FAX

EMAIL

❑ I would not like to receive product information or news from ClusterWorld Conference and Expo exhibitors or approved third parties via email.

REGISTRATION PACKAGE SELECTION

Please select from the package choices below and enter the total cost at the bottom. Refer to the opposite page for package descriptions, tutorial codes and pricing. If selecting the Platinum (PL) or Exhibits (EO) pass, please do not select any other packages.

Package:
❑ Platinum Pass (PL) - Tutorial Codes (Please indicate): ______ ______
❑ ClusterWorld 3 Day Pass (3D)
❑ ClusterWorld 1 Day Pass (1D) - Please indicate day: ❑ Tue ❑ Wed ❑ Thu
❑ Half-Day Tutorial Pass (TH) - Tutorial Codes (Please indicate): ______ ______
❑ Exhibits Pass (EO)

❑ Group Discounts: Register for four conference packages and receive the fifth of equal or lesser value for FREE (all forms must be received together by mail/fax ONLY)

❑ Check here for Government Employee, Academic or Student discount eligibility. Attach a photocopy of your valid identification credentials.

ATTENDEE PROFILE (Must be completed to process your registration)

1) Which category best describes your primary JOB TITLE?
❑ 1. Corporate Management (CEO, CTO, CIO, President)
❑ 2. Research & Development Management
❑ 3. IT Director / Manager
❑ 4. Lab Director / Manager
❑ 5. Senior Scientist / Staff Scientist
❑ 6. Software Engineer / Programmer
❑ 7. Network Engineer / Manager
❑ 8. Systems Engineer / Analyst
❑ 9. Manufacturing / Engineering Staff
❑ 10. Consultant
❑ 11. Government Agency Executive
❑ 12. Academic Head / University Faculty / Professor
❑ 13. Student
❑ 14. Other (Please Specify)

2) Which category best describes the BUSINESS or INDUSTRY in which you work?
❑ 1. Academia / Education
❑ 2. Aerospace
❑ 3. Automotive
❑ 4. Biotechnology
❑ 5. Digital Content Creation
❑ 6. Finance / Insurance / Banking
❑ 7. Government (Defense)
❑ 8. Government (Energy)
❑ 9. Government (Other)
❑ 10. Manufacturing / Design (Computers / Communications)
❑ 11. Manufacturing / Design (Other)
❑ 12. Petroleum / Geophysical Exploration
❑ 13. Pharmaceuticals
❑ 14. Scientific Visualization
❑ 15. Other (Please Specify)

3) What is the SIZE of your organization?
❑ 1. Under 50  ❑ 2. 50 - 99  ❑ 3. 100 - 499  ❑ 4. 500 - 999  ❑ 5. 1,000 - 4,999  ❑ 6. 5,000 - 9,999  ❑ 7. 10,000 or more

4) What is your organization's TOTAL ANNUAL IT BUDGET for all cluster-related hardware, software and services?
❑ 1. $0 - $25,000  ❑ 2. $25,001 - $100,000  ❑ 3. $100,001 - $500,000  ❑ 4. $500,001 - $1,000,000  ❑ 5. $1,000,000 - $5,000,000  ❑ 6. Over $5,000,000

5) What is YOUR ROLE in the purchasing process for all cluster-related hardware, software and services?
❑ 1. Evaluate or recommend products  ❑ 2. Specify vendors  ❑ 3. Approve purchases  ❑ 4. No role

6) Do you plan to evaluate products at this conference for purchase within the next year?
❑ 1. Yes  ❑ 2. No  ❑ 3. Maybe

7) Which PRODUCTS OR SERVICES are you interested in purchasing? (Check all that apply)
❑ 1. Bioinformatics Software
❑ 2. CAD / CAE / CFD Software
❑ 3. Database Software
❑ 4. Digital Content Creation / Rendering Software
❑ 5. Distributed / Grid Computing Software
❑ 6. High-Speed Network Interconnects
❑ 7. IA64 Based Systems
❑ 8. X86-64 Based Systems
❑ 9. Non-IA Based 64-Bit Systems
❑ 10. IA32 Based Systems
❑ 11. Imaging Software
❑ 12. Network Infrastructure (Routers / Switches)
❑ 13. Network / Systems Management (Hardware)
❑ 14. Network / Systems Management (Software)
❑ 15. Security Software
❑ 16. Simulation Software
❑ 17. Storage Products (Hardware)
❑ 18. Storage Products (Software)
❑ 19. Visualization Tools
❑ 20. Parallel Programming Compilers
❑ 21. Parallel Programming Profilers / Debuggers
❑ 22. Other (Please Specify)

8) HOW MANY SERVERS make up your organization's cluster?
❑ 1. Under 50  ❑ 2. 50 - 99  ❑ 3. 100 - 199  ❑ 4. 200 - 499  ❑ 5. 500 - 999  ❑ 6. 1000 - 1999  ❑ 7. 2000+

9) Would you like to receive a complimentary subscription to ClusterWorld Magazine?
❑ 1. Yes  ❑ 2. No
In lieu of a signature, audit requirements dictate that we must ask for a personal identifier. Please indicate your eye color: ______

10) Please indicate your housing plans:
❑ 1. Use my own travel agent
❑ 2. Book accommodations myself
❑ 3. Use ClusterWorld Conference & Expo Housing
❑ 4. Live Locally
❑ 5. Staying with relatives / friends
❑ 6. Don't know yet

PAYMENT INFORMATION (Payment must accompany form for registration to be completed)

Conditions: All Registration fees and credentials are non-transferable. Discounts on registration fees are valid on NEW registrations ONLY and must be redeemed/noted at time of registration. No refunds or credits will be issued for a discount after the initial registration. A copy of the order must be provided at the time of registration. A $20.00 fee will be charged for all returned checks.

TOTAL Amount $        ❑ Check Enclosed (Make payable to ClusterWorld Conference & Expo and enclose the registration form in the envelope)

❑ MasterCard ❑ VISA ❑ American Express

ACCOUNT NUMBER EXPIRATION DATE

FIRST NAME M.I. LAST NAME


PRIORITY CODE: Enter your priority code from your promotional mailing or email. If you do not have a code, please enter None.

Cancellation and Substitutions: If you need to cancel, you may do so for a full refund until June 18, 2003. Attendees who register prior to or after the deadline date and who do not cancel in writing by the deadline date are liable for and will be charged the full registration fee. Sorry, no refunds or letters of credit are available after this date. Please fax your cancellation request to (301) 694-5124 or mail your request to ClusterWorld Conference & Expo, c/o ExpoExchange Registration, PO Box 590, Frederick, MD 21705-0590. Written requests for a downgraded pass must be received no later than June 18, 2003 for a full refund of the difference in registration fees between the value of the original and downgraded pass. Requests received after June 18, 2003 will receive a letter of credit for the 2004 CWCE issued for the difference in pass values. Upgrade pass requests must be submitted in writing and faxed to (301) 694-5124 along with payment information for the difference in value. Substitutions are allowed only with the written permission of the original registrant. Please mail your substitution request to the above address, or fax it to (301) 694-5124. Sorry, no one under the age of 18 is allowed on the Expo floor.

CONFERENCE JUNE 23-26, 2003

EXPOSITION JUNE 24-26, 2003

SAN JOSE CONVENTION CENTER, SAN JOSE, CA

Register online with your priority code by May 23, 2003, and save!

REGISTRATION PACKAGES

Prices below are shown for three registration periods: Early Bird Discount (on or before May 23, 2003), May 23, 2003 to June 18, 2003, and Onsite (June 22-26, 2003).

Platinum Pass (PL) - Best Value!
• 3 Days of Sessions
• 2 Half-Day Tutorials (Attendees must register for specific tutorials)
• Birds-of-a-Feather Sessions
• Keynotes
• Feature Presentations
• Exhibits
$895 / $995 / $1,095

3 Day ClusterWorld Conference Pass (3D)
• 3 Days of Sessions
• Birds-of-a-Feather Sessions
• Keynotes
• Feature Presentations
• Exhibits
$595 / $645 / $695

1 Day ClusterWorld Conference Pass (1D)
• 1 Day of Sessions
• Birds-of-a-Feather Sessions
• Keynotes
• Feature Presentations
• Exhibits
$225 / $275 / $325

Half-Day Tutorials (TH)
• Choose up to 2 (Attendees must register for specific tutorials)
• Keynotes
• Feature Presentations
• Exhibits
$300 each / $350 each / $400 each

Exhibits Pass (EO)
• Exhibits
• Keynotes
• Feature Presentations
Free / Free / $50

TUTORIAL KEY

SYSTEMS TRACK TUTORIALS
TS1: Introduction to Linux Clusters - Design Alternatives, Configuration and Installation
TS2: The Linux Cluster Software Stack - Operations from the System Administration Perspective

APPLICATIONS TRACK TUTORIALS
TA1: Programming in the Linux Cluster Environment I - Tools and Program Optimization Techniques
TA2: Programming in the Linux Cluster Environment II - Parallel Programming with MPI

3 EASY WAYS TO REGISTER

ONLINE: By June 18, 2003
Visit www.clusterworldexpo.com (Advanced discount pricing ends May 23, 2003)

MAIL: By June 13, 2003
Complete the registration form and send to:
ClusterWorld c/o ExpoExchange Registration
PO Box 590
Frederick, MD 21705-0590

FAX: By June 13, 2003
Complete the registration form and fax to: (301) 694-5124
(Faxed registrations must include credit card information)

REGISTER NOW!