NCCS
NCCS User Forum
22 September 2009
NCCS
Agenda
Welcome & Introduction – Phil Webster, CISTO Chief
Current System Status – Fred Reitz, Operations Lead
NCCS Compute Capabilities – Dan Duffy, Lead Architect
PoDS – Jules Kouatchou, SIVO
User Services Updates – Bill Ward, User Services Lead
Analysis System Updates – Tom Maxwell, Analysis Lead
Discover Job Monitor – Tyler Simon, User Services
SSP Test – Matt Koop, User Services
Questions and Comments – Phil Webster, CISTO Chief
NCCS
Key Accomplishments
• Incorporation of SCU5 processors into general queue pool
• Capability to run large jobs (4000+ cores) on SCU5
• Analysis nodes placed in production
• Migrated DMF from Dirac (Irix) to Palm (Linux)
NCCS
New NCCS Staff Members
• Lynn Parnell, Ph.D. Engineering Mechanics, High Performance Computing Lead
• Matt Koop, Ph.D. Computer Science, User Services
• Tom Maxwell, Ph.D. Physics, Analysis System Lead
NCCS
Agenda
Welcome & Introduction – Phil Webster, CISTO Chief
Current System Status – Fred Reitz, Operations Lead
NCCS Compute Capabilities – Dan Duffy, Lead Architect
PoDS – Jules Kouatchou, SIVO
User Services Updates – Bill Ward, User Services Lead
Analysis System Updates – Tom Maxwell, Analysis Lead
Discover Job Monitor – Tyler Simon, User Services
SSP Test – Matt Koop, User Services
Questions and Comments – Phil Webster, CISTO Chief
NCCS
Key Accomplishments
Discover/Analysis Environment
• Added SCU5 (cluster totals: 10,840 compute CPUs, 110 TF)
• Placed analysis nodes (dali01-dali06) in production status
• Implemented storage area network (SAN)
• Implemented GPFS multicluster feature
• Upgraded GPFS
• Implemented RDMA
• Implemented InfiniBand token network

Discover/Data Portal
• Implemented NFS mounts for select Discover data on Data Portal

Data Portal
• Migrated all users/applications to HP Bladeservers
• Upgraded GPFS
• Implemented GPFS multicluster feature
• Implemented InfiniBand IP network
• Upgraded SLES10 operating system to SP2

DMF
• Migrated DMF from Irix to Linux

Other
• Migrated non-compliant AUIDs
• Transitioned SecurID operations from NCCS to ITCD
• Enhanced NCCS network redundancy
NCCS
Discover 2009 Daily Utilization Percentage [chart]
• 2/4/09 – SCU4 (544 cores added)
• 2/19/09 – SCU4 (240 cores added)
• 2/27/09 – SCU4 (1,280 cores added)
• 8/13/09 – SCU5 (4,128 cores added)
NCCS
Discover Daily Utilization Percentage by Group, May – August 2009 [chart]
• 8/13/09 – SCU5 (4,128 cores added)
NCCS
Discover Total CPU Consumption, Past 12 Months (CPU Hours) [chart]
• 9/4/08 – SCU3 (2,064 cores added)
• 2/4/09 – SCU4 (544 cores added)
• 2/19/09 – SCU4 (240 cores added)
• 2/27/09 – SCU4 (1,280 cores added)
• 8/13/09 – SCU5 (4,128 cores added)
NCCS
Discover Job Analysis – August 2009
[Charts: Discover Jobs by Job Size, Discover CPU Hours by Job Size, and Discover Expansion Factor by Job Size, August 2009; x-axis is job size in cores (4, 16, 32, 64, 128, 256, 512, 1024, 2048)]
NCCS
Discover Job Analysis – August 2009
[Charts: Discover Jobs by Queue, Discover CPU Hours by Queue, and Discover Expansion Factor by Queue, August 2009]
NCCS
Discover Availability
Scheduled Maintenance: Jun-Aug
• 10 Jun – 17 hrs 5 min – GPFS (token and subnets, 3.2.1-12)
• 24 Jun – 12 hrs – GPFS (RDMA, multicluster, SCU5 integration)
• 29 Jul – 12 hrs – GPFS 3.2.1-13, OFED 1.4, DDN firmware
• 30 Jul – 2 hrs 20 min – DDN controller replacement
• 19 Aug – 4 hrs – NASA AUID transition
Unscheduled Outages: Jun-Aug
• 16 Jun – 3 hrs 35 min – nodes out of memory
• 24 Jun – 4 hrs 39 min – maintenance extension
• 6-7 Jul – 4 hrs 18 min – internal switch error
• 13 Jul – 2 hrs 59 min – GPFS error
• 14 Jul – 26 min – nodes out of memory
• 20 Jul – 2 hrs 2 min – GPFS error
• 29 Jul – 55 min – maintenance extension
• 19 Aug – 2 hrs 45 min – maintenance extension
NCCS
Current Issues on Discover: Login Node Hangs
• Symptom: Login nodes become unresponsive.
• Impact: Users cannot log in.
• Status: Developing/testing solution. Issue arose during critical security patch installation.
NCCS
Current Issues on DMF: Post-Migration Clean-Up
• Symptoms: Various.
• Impact: Various.
• Status: Issues addressed as they are encountered and reported.
NCCS
Future Enhancements
• Discover Cluster
 – PBS v10
 – Additional storage
 – SLES10 SP2
• Data Portal
 – GDS OPeNDAP performance enhancements
 – Use of GPFS-CNFS for improved NFS mount availability
NCCS
Agenda
Welcome & Introduction – Phil Webster, CISTO Chief
Current System Status – Fred Reitz, Operations Lead
NCCS Compute Capabilities – Dan Duffy, Lead Architect
PoDS – Jules Kouatchou, SIVO
User Services Updates – Bill Ward, User Services Lead
Analysis System Updates – Tom Maxwell, Analysis Lead
Discover Job Monitor – Tyler Simon, User Services
SSP Test – Matt Koop, User Services
Questions and Comments – Phil Webster, CISTO Chief
NCCS
I/O Study Team
• Dan Kokron
• Bill Putman
• Dan Duffy
• Bill Ward
• Tyler Simon
• Matt Koop
• Harper Pryor
• Building on work by SIVO and GMAO (Brent Swartz)
NCCS
Representative GEOS Output
• Dan Kokron has generated many runs to characterize GEOS I/O
 – 720-core, quarter-degree GEOS with YOTC-like history
 – Number of processes that write: 67
 – Total amount of data: ~225 GB (written to multiple files)
 – Average write size: ~1.7 MB
 – Running in dnb33
 – Using Nehalem cores (GPFS with RDMA)
• Average Bandwidth
 – Timing the entire CFIO calls results in a bandwidth of 3.8 MB/sec
 – Timing just the NetCDF ncvpt calls results in a bandwidth of 44.4 MB/sec
• Why is this so slow?
NCCS
Kernel Benchmarks
• Used open-source I/O kernel benchmarks xdd and iozone
 – Achieved over 1 GB/sec to all the new nobackup file systems
• Wrote two representative one-node C benchmarks
 – Using C writes and appending to files
 – Using NetCDF writes with chunking and appending to files
• Ran these benchmarks writing out exactly the same data as process 0 in the GEOS run
 – C writes: average bandwidth of around 900 MB/sec (consistent with kernel benchmarks)
 – NetCDF writes: average bandwidth of around 600 MB/sec
• Why is GEOS I/O running so slow?
[Charts: C writes, average bandwidth ~900 MB/sec; NetCDF writes, average bandwidth ~600 MB/sec]
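The one-node benchmarks described above were written in C; as a rough sketch of the same idea in Python (placeholder path and sizes, not the actual benchmark code), timing appended writes of ~1.7 MB buffers looks like this:

import os, time

WRITE_SIZE = 1700 * 1024          # ~1.7 MB per write, matching the average GEOS write size
N_WRITES = 128                    # hypothetical count, ~218 MB total
payload = os.urandom(WRITE_SIZE)

path = "bench_append.dat"         # placeholder; a real test would target a nobackup file system
start = time.time()
with open(path, "ab") as f:       # append to the file, as in the C benchmark
    for _ in range(N_WRITES):
        f.write(payload)
    f.flush()
    os.fsync(f.fileno())          # make sure data reaches the file system before stopping the clock
elapsed = time.time() - start

mb = WRITE_SIZE * N_WRITES / (1024.0 * 1024.0)
print("Average bandwidth: %.1f MB/sec" % (mb / elapsed))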
NCCS
Effect of NetCDF Chunking
• How does changing the NetCDF chunk size affect the overall performance?
• The table shows results averaged over 10 runs at each chunk size
 – Used the NetCDF kernel benchmark
• The smallest chunk size reproduces the GEOS bandwidth
– As best as we can tell, this is roughly equivalent to the default chunk size
• The best chunk size turned out to be about the size of the array being written (~3 MB)
Chunk Size (# floats)    Chunk Size (KB)    Average Bandwidth (MB/sec)
1K                       4                  37
32K                      128                262
128K                     512                492
512K                     2,048              537
1M                       4,096              596
2M                       8,192              497
3M                       12,288             369
6M                       24,576             477
10M                      40,960             327
References:
• "NetCDF-4 Performance Report", Lee et al., June 2008
• NetCDF online tutorial: http://www.unidata.ucar.edu/software/netcdf/docs_beta/netcdf-tutorial.html
• "Benchmarking I/O Performance with GEOSdas" and other Modeling Guru posts: https://modelingguru.nasa.gov/clearspace/message/5615#5615
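For reference, chunk sizes are fixed when a variable is created; a minimal sketch of the benchmark idea using the Python netCDF4 bindings (hypothetical file and variable names; GEOS itself uses the Fortran interface):

import numpy as np
from netCDF4 import Dataset

nc = Dataset("chunk_test.nc", "w")
nc.createDimension("time", None)
nc.createDimension("lat", 721)
nc.createDimension("lon", 1080)

# Chunk size set to one full 2D slice (721 x 1080 floats, ~3 MB),
# the best-performing size in the table above.
var = nc.createVariable("field", "f4", ("time", "lat", "lon"),
                        chunksizes=(1, 721, 1080))

var[0, :, :] = np.zeros((721, 1080), dtype="f4")   # one timed write
nc.close()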
NCCS
Setting Chunk Size in GEOS
• Dan K. ran several baseline runs to make sure we were measuring things correctly
• Turned on chunking and set the chunk size equal to the write size (1080 x 721 x 1 x 1)
• Dramatic improvement in ncvpt bandwidth
• Why was the last run so slow?
 – Because we had a file system hang during that run
File Name – Description – ncvpt Bandwidth (MB/sec)
• Base Line 1 – baseline run with time stamps at each "wrote" statement – 44.47
• Base Line 2 – time stamps printed before and after the call to ncvpt – 76.35
• Base Line 3 – time-stamp printing moved after the call to ncvpt – 64.69
• Using NetCDF Chunking – initial run with NetCDF chunking turned on – 409.87
• Using NetCDF Chunking and Fortran Buffering (1) – I/O buffering in the Intel I/O library on top of NetCDF chunking – 421.23
• Using NetCDF Chunking and Fortran Buffering (2) – same as the previous run, with very different results – 45.17
NCCS
What next?
• Further explore chunk sizes in NetCDF
 – What is the best chunk size?
 – Do you set the chunk sizes for write performance or for read performance?
 – Once a file has been written with a set chunk size, it cannot be changed without rewriting the file
• Need to better understand the variability seen in the file system performance
 – Not uncommon to see a 2x or greater difference in performance from run to run
• Turn the NetCDF kernel benchmark into a multi-node benchmark
 – Use this benchmark for testing system changes and potential new systems
• Compare performance across NCCS and NAS systems
• Write up results
NCCS
Agenda
Welcome & Introduction – Phil Webster, CISTO Chief
Current System Status – Fred Reitz, Operations Lead
NCCS Compute Capabilities – Dan Duffy, Lead Architect
PoDS – Jules Kouatchou, SIVO
User Services Updates – Bill Ward, User Services Lead
Analysis System Updates – Tom Maxwell, Analysis Lead
Discover Job Monitor – Tyler Simon, User Services
SSP Test – Matt Koop, User Services
Questions and Comments – Phil Webster, CISTO Chief
NCCS
Ticket Closure Percentiles – 1 March to 31 August 2009
NCCS
Issue: Parallel Jobs > 1500 CPUs
• Original problem: Many jobs wouldn’t run at > 1500 CPUs
• Status at last Forum: Resolved using a different version of the DAPL library
• Current Status: Now able to run at 4000+ CPUs using MVAPICH on SCU5
NCCS
Issue: Getting Jobs into Execution
• Long wait for queued jobs before launching
• Reasons
 – SCALI=TRUE is restrictive
 – Per-user and per-project limits on the number of eligible jobs (use qstat -is)
 – Scheduling policy: first-fit on the job list, ordered by queue priority and queue time
• User Services will be contacting users of SCALI=TRUE to assist them in migrating away from this feature
NCCS
Future User Forums
• NCCS User Forum schedule
 – 8 Dec 2009, 9 Mar 2010, 8 Jun 2010, 14 Sep 2010, and 7 Dec 2010
 – All on Tuesday
 – All 2:00-3:30 PM
 – All in Building 33, Room H114
• Published
 – On http://nccs.nasa.gov/
 – On GSFC-CAL-NCCS-Users
NCCS
Agenda
Welcome & Introduction – Phil Webster, CISTO Chief
Current System Status – Fred Reitz, Operations Lead
NCCS Compute Capabilities – Dan Duffy, Lead Architect
PoDS – Jules Kouatchou, SIVO
User Services Updates – Bill Ward, User Services Lead
Analysis System Updates – Tom Maxwell, Analysis Lead
Discover Job Monitor – Tyler Simon, User Services
SSP Test – Matt Koop, User Services
Questions and Comments – Phil Webster, CISTO Chief
NCCS
Sustained System Performance
• What is the overall system performance?
• Many different benchmarks or peak numbers are available
 – Often unrealistic or not relevant
• SSP refers to a set of benchmarks that evaluates performance as related to real workloads on the system
 – SSP concepts originated from NERSC (LBNL)
NCCS
Performance Monitoring
• Not just for evaluating a new system
• Ever wonder if a system change has affected performance?
 – Often changes can be subtle and not detected with normal system validation tools
  • Silent corruption
  • Slowness
 – Find out immediately instead of after running the application and getting an error
NCCS
Performance Monitoring (contd.)
• Run real workloads (SSP) to determine performance changes over time
 – Quickly determine if something is broken or slow
 – Perform data verification
• Run automatically on a regular basis as well as after system changes
 – e.g., a change to a compiler, MPI version, or OS update

[NERSC SSP example chart]
NCCS
Meaningful Measurements
• How you can help
 – We need your application and a representative dataset for your application
 – Ideally it should take ~20-30 minutes to run at various processor counts
• Your benefits
 – Changes to the system that affect your application will be noticed immediately
 – Data will be placed on the NCCS website to show system performance over time
NCCS
Agenda
Welcome & Introduction – Phil Webster, CISTO Chief
Current System Status – Fred Reitz, Operations Lead
NCCS Compute Capabilities – Dan Duffy, Lead Architect
PoDS – Jules Kouatchou, SIVO
User Services Updates – Bill Ward, User Services Lead
Analysis System Updates – Tom Maxwell, Analysis Lead
Discover Job Monitor – Tyler Simon, User Services
SSP Test – Matt Koop, User Services
Questions and Comments – Phil Webster, CISTO Chief
NCCS
Discover Job Monitor
• All data is presented as a current system snapshot, in 5-minute intervals
• Displays system load as a percentage
• Displays the number of running jobs and running cores
• Displays queued jobs and job wait times
• Displays current qstat -a output
• Interactive historical utilization chart
• Message of the day
• Displays average number of cores per job
• Job Monitor
NCCS
Agenda
Welcome & Introduction – Phil Webster, CISTO Chief
Current System Status – Fred Reitz, Operations Lead
NCCS Compute Capabilities – Dan Duffy, Lead Architect
PoDS – Jules Kouatchou, SIVO
User Services Updates – Bill Ward, User Services Lead
Analysis System Updates – Tom Maxwell, Analysis Lead
Discover Job Monitor – Tyler Simon, User Services
SSP Test – Matt Koop, User Services
Questions and Comments – Phil Webster, CISTO Chief
NCCS
Climate Data Analysis
• Climate models are generating ever-increasing amounts of output data.
• Larger datasets are making it increasingly cumbersome for scientists to perform analyses on their desktop computers.
• Server-side analysis of climate model results is quickly becoming a necessity.
NCCS
Parallelizing Application Scripts
• Many data processing shell scripts can be easily parallelized – MATLAB, IDL, etc.
• Use task parallelism to process multiple files in parallel
 – Each file processed on a separate core within a single dali node
• Limit load on dali (16 cores per node)
 – Max: 10 compute-intensive processes per node

Serial Version:

while ( … )
  …
  # process another file
  run.grid.qd.s …
end

Parallel Version:

while ( … )
  …
  # process another file
  run.grid.qd.s … &
end
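The same task-parallel pattern can be expressed from Python with a worker pool capped at 10 processes, matching the per-node limit above; a minimal sketch with hypothetical file names and script path:

import subprocess
from multiprocessing import Pool

def process_file(path):
    # Placeholder: run the per-file script (e.g., run.grid.qd.s) on one input file.
    subprocess.call(["./run.grid.qd.s", path])

if __name__ == "__main__":
    files = ["input_%03d.nc" % i for i in range(100)]   # hypothetical inputs
    with Pool(processes=10) as pool:                     # max 10 compute-intensive processes per node
        pool.map(process_file, files)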
NCCS
ParaView
• Open-source, multi-platform visualization application
 – Developed by Kitware, Inc. (authors of VTK)
• Designed to process large data sets
• Built on parallel VTK
• Client-server architecture
 – Client: Qt-based desktop application
 – Data Server: MPI-based parallel application on dali
• Parallel streaming filters for data processing
• Large library of existing filters
• Highly extensible using plugins
 – Plugin development required for HDF, NetCDF, OBS data
• No existing climate-specific tools or algorithms
• Data Server being integrated into ESG
NCCS
ParaView Client
Qt desktop application that controls data access, processing, analysis, and visualization
NCCS
ParaView Client Features
NCCS
Analysis Workflow Configuration
Configure a parallel streaming pipeline for data analysis
NCCS
ParaView Applications
Polar Vortex Breakdown Simulation
Cross Wind Fire Simulation
Golevka Asteroid Explosion Simulation
3D Rayleigh-Benard problem
NCCS
Climate Data Analysis Toolkit
• Integrated environment for data processing, viz, & analysis
• Integrates numerous software modules in python shell
• Open source with a large diverse set of contributors
• Analysis environment for ESG developed @ LLNL
NCCS
Data Manipulation
• Exploits NumPy arrays and masked arrays
• Adds persistent climate metadata
• Exposes NumPy, SciPy, & RPy mathematical operations
 – Clustering, FFT, image processing, linear algebra, interpolation, max entropy, optimization, signal processing, statistical functions, convolution, sparse matrices, regression, spatial algorithms
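A small illustration of the masked-array handling these operations build on (plain NumPy, not the CDAT API; values are made up):

import numpy as np
import numpy.ma as ma

# Hypothetical field with a missing-data fill value of 1e20.
temps = np.array([271.3, 280.1, 1.0e20, 295.7, 1.0e20])
masked = ma.masked_values(temps, 1.0e20)    # mask the fill values

# Reductions skip the masked (missing) points automatically.
print(masked.mean(), masked.min(), masked.max())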
NCCS
Grid Support
• Spherical Coordinate Remapping and Interpolation Package
 – Remapping and interpolation between grids on a sphere
 – Map between any pair of lat-long grids
• GridSpec
 – Standard description of earth system model grids
 – To be implemented in the NetCDF CF convention
 – Implemented in CMOR
• MoDAVE
 – Grid visualization
NCCS
Climate Analysis
• Genutil & Cdutil (PCMDI)
 – General utilities for climate data analysis
  • Statistics, array & color manipulation, selection, etc.
 – Climate utilities
  • Time extraction, averages, bounds, interpolation
  • Masking/regridding, region extraction
• PyClimate
 – Toolset for analyzing climate variability
  • Empirical Orthogonal Function (EOF) analysis (sketch below)
  • Analysis of coupled data sets
   – Singular Value Decomposition (SVD)
   – Canonical Correlation Analysis (CCA)
  • Linear digital filters
  • Kernel-based probability density function estimation
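For a sense of what EOF analysis involves, a plain NumPy sketch of the underlying math (not the PyClimate API; data are random placeholders):

import numpy as np

# Hypothetical anomalies: 120 time steps of a field with 500 grid points.
data = np.random.randn(120, 500)
anom = data - data.mean(axis=0)              # remove the time mean at each point

# EOFs are the right singular vectors of the anomaly matrix;
# the principal-component time series are U * S.
u, s, vt = np.linalg.svd(anom, full_matrices=False)
eofs = vt                                     # spatial patterns (modes)
pcs = u * s                                   # time series for each mode
variance_fraction = s**2 / np.sum(s**2)       # variance explained per mode
print(variance_fraction[:3])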
NCCS
CDAT Climate Diagnostics
• Provides a common environment for climate research
• Uniform diagnostics for model evaluation and comparison
Taylor Diagram
Thermodynamic Plot
Performance Portrait Plot
Wheeler-Kiladis Analysis
NCCS
Contributed Packages
• PyGrADS (potential), AsciiData, BinaryIO, ComparisonStatistics, CssGrid, DsGrid, Egenix, EOF, EzTemplate, HDF5Tools, IOAPITools, Ipython, Lmoments, MSU, NatGrid, ORT, PyLoapi, PynCl, RegridPack, ShGrid, SP, SpanLib, SpherePack, Trends, Twisted, ZonalMeans, ZopeInterface
NCCS
Visualization
• Visualization and Control System (VCS)
 – Standard CDAT 1D and 2D graphics package
• Integrated contributed 2D packages
 – Xmgrace
 – Matplotlib
 – IaGraph
• Integrated contributed 3D packages
 – ViSUS
 – VTK
 – NcVTK
 – MoDAVE
NCCS
Visual Climate Data Analysis Tools (VCDAT)
• CDAT GUI, facilitates:
 – Data access
 – Data processing & analysis
 – Data visualization
• Accepts Python input
 – Commands and scripts
• Saves state
 – Converts keystrokes to Python
• Online help
NCCS
MoDAVE
• Visualization of mosaic grids
• Parallelized using MPI
• Integration into CDAT in progress
• Developed by Tech-X & LLNL
Cubed sphere visualization
NCCS
ViSUS in CDAT
• Data streaming application
 – Progressive processing & visualization of large scientific datasets
• Future capabilities for petascale dataset streaming
• Simultaneous visualization of multiple (1D, 2D, 3D) data representations
NCCS
VisTrails
• Scientific workflow and provenance management system.
• Interface for the next version of CDAT
 – History trees, data pipelines, visualization spreadsheet, provenance capture
NCCS
Agenda
Welcome & Introduction – Phil Webster, CISTO Chief
Current System Status – Fred Reitz, Operations Lead
NCCS Compute Capabilities – Dan Duffy, Lead Architect
PoDS – Jules Kouatchou, SIVO
User Services Updates – Bill Ward, User Services Lead
Analysis System Updates – Tom Maxwell, Analysis Lead
Discover Job Monitor – Tyler Simon, User Services
SSP Test – Matt Koop, User Services
Questions and Comments – Phil Webster, CISTO Chief
NCCS
Background
• Scientists generate large data files
• Processing the files consists of executing a series of independent tasks
• Ensemble runs of models
• All the tasks are run on one CPU
NCCS
PoDS
• Task parallelism tool taking advantage of distributed architectures as well as multi-core capabilities
• For running serial independent tasks across nodes
• Makes no assumptions about the underlying applications to be executed
• Can be ported to other platforms
NCCS
PoDS Features
• Dynamic assessment of resource availability
• Each task is timed
• A summary report is provided
NCCS
Task Assignment
[Diagram: commands from the Execution File (Command 1, Command 2, Command 3, …) are distributed across Node 1, Node 2, and Node 3]
NCCS
PoDS Usage
pods.py [-help] [execFile] [CpusPerNode]
execFile: file listing all the independent tasks to be executed
CpusPerNode: number of CPUs per node. If not provided, PoDS will automatically use the number of CPUs available on each node.
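For example, with a hypothetical execution file my_tasks.txt listing one independent command per line:

  ./myapp input_001.nc
  ./myapp input_002.nc
  ./myapp input_003.nc

the tasks could be launched across the nodes of a job with:

  pods.py my_tasks.txt 8

(File name, commands, and CPU count here are placeholders.)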
NCCS
Simple Example
• Randomly generates an integer n between 0 and 10^9
• Loops over n to perform some basic operations
• Each time the application is called, a different n is obtained
• We want to run the application 150 times
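A minimal sketch of such a toy application (hypothetical; the actual test code is not shown in the slides):

import random

n = random.randint(0, 10**9)   # a different n on every invocation
total = 0
for i in range(n):             # loop over n performing some basic operations
    total += i % 7
print(n, total)

Listing this command 150 times in the execution file lets PoDS spread the 150 runs over the available cores.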
NCCS
Timing Numbers
Nodes   Cores/Node   Time (s)
1       1            990
1       2            496
1       4            256
1       8            133
2       1            497
2       2            247
2       4            131
2       8            61
NCCS
More Information
• User's Guide on Modeling Guru: https://modelingguru.nasa.gov/clearspace/docs/DOC-1582
• Package available at /usr/local/other/pods
NCCS
Agenda
Welcome & Introduction – Phil Webster, CISTO Chief
Current System Status – Fred Reitz, Operations Lead
NCCS Compute Capabilities – Dan Duffy, Lead Architect
PoDS – Jules Kouatchou, SIVO
User Services Updates – Bill Ward, User Services Lead
Analysis System Updates – Tom Maxwell, Analysis Lead
Discover Job Monitor – Tyler Simon, User Services
SSP Test – Matt Koop, User Services
Questions and Comments – Phil Webster, CISTO Chief
NCCS
Important Contacts
• NCCS Support – support@nccs.nasa.gov – (301) 286-9120
• Analysis Lead – Thomas.Maxwell@nasa.gov – (301) 286-7810
• I/O Improvements – Daniel.Q.Duffy@nasa.gov – (301) 286-8830
• PoDS Info – Jules.Kouatchou-1@nasa.gov – (301) 286-6059
• User Services Lead – William.A.Ward@nasa.gov – (301) 286-2954