TRANSCRIPT
University of Illinois at Urbana-Champaign
NCSA Supercluster Administration
NT Cluster Group
Computing and Communications Division, NCSA
Avneesh [email protected]
System Goals
• Provide a production level of service
• Integrate the system into the current environment
  – Apply current supercomputer policies and procedures
  – Account management
  – Resource usage / allocation
• Provide conveniences to the users
  – Develop an environment where users can prepare and run their own codes effectively
    • This requires advanced automated administration
  – Provide feedback to users
    • Job status to users via email
  – Provide common applications
  – Get an account, get your data, and run
NCSA NT 320 Pentium® CPU Cluster
• 64 CPUs – Serial
  – 32 Compaq PWS 6000, dual 333 MHz Pentium II, 512 MB memory
• 256 CPUs – Parallel MPI
  – 64 HP Kayak XU systems, dual 550 MHz Pentium III Xeon, 1 GB RAM
  – 64 HP Kayak XU systems, dual 300 MHz Pentium II, 512 MB memory

[Photo: 64 dual 550 MHz Pentium III Xeon HP Kayaks, back-to-back]
System Configuration
• Software
  – Microsoft NT 4.0 Server
  – LSF from Platform for the queuing system
  – MPI
  – HPVM from Chien’s CSAG group
• Networking
  – Myrinet – MPI communication
  – Fast Ethernet – used for network file systems
  – Fibre Channel – storage networks
  – Giganet – testing environment
Alliance NT Supercluster, July 1999
[System diagram]
• Front-end systems (ntsc-tsN.ncsa.uiuc.edu), reachable from the Internet: LSF batch job scheduler, application development, job submission
• 128 compute nodes, 256 CPUs: 64 dual 550 MHz systems and 64 dual 300 MHz systems, on a Myrinet interconnect with HPVM
• Serial nodes: 24 dual 333 MHz systems, Fast Ethernet only, no MPI
• File servers / LSF master: 128 GB home, 200 GB scratch; FTP to mass storage, daily backups
• Fast Ethernet connects the nodes to the file servers
Accessing the Cluster
• Windows Terminal Server interactive nodes
  – Multiuser form of Windows NT
  – Surprisingly good performance
• Access methods
  – Windows RDP client from Microsoft
    • Windows clients only
  – Citrix ICA client
    • Available for most platforms
    • Clients can be downloaded from http://www.citrix.com
    • A Java applet client is available
  – X Windows
    • Rsh daemon to start sessions
Windows NT on a Web Page
System Setup
• System imaging
  – Initial setup from a network-enabled boot floppy
  – Clears the system and clones it using Drive Image Professional
    • Uses an image file on a network file server
  – Manually set hostname/IP in configuration files
  – Reboot and let the node retrieve the NT image, change the Security ID, and configure itself
• Small non-volatile DOS partition
  – Boots from this during subsequent imaging
  – Stores configuration information
  – Runs batch scripts from the server on every boot
• All systems can be updated by calling a single script
  – Scripts on the server contain the re-imaging commands
  – < 20 minutes to convert all systems to a new configuration
• Simplifies administration
  – Systems are identical, so adverse behavior is usually hardware related
  – Add new systems or repair a broken system quickly
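The per-boot behavior of the DOS partition can be sketched as: compare the node's stored configuration against the tasks published on the file server, re-image if the mandated image has changed, run any other published batch scripts, then continue into NT. This is a minimal sketch of the decision logic only; the task list format, field names, and action strings are hypothetical, not the actual scripts.

```python
def boot_dispatch(node_config, server_tasks):
    """Simulate one boot of the DOS partition's server-driven script run.

    node_config:  dict with this node's stored 'hostname' and 'image'
    server_tasks: ordered list of (task, target_image) pairs published on
                  the file server; a 'reimage' entry names the image every
                  node should be running. (Shapes are illustrative.)
    Returns the list of actions this node performs on this boot.
    """
    actions = []
    for task, target in server_tasks:
        if task == "reimage" and node_config["image"] != target:
            # Pull the new image, re-apply hostname/IP, change the SID.
            actions.append("clone:" + target)
            actions.append("set_hostname:" + node_config["hostname"])
            actions.append("change_sid")
            node_config["image"] = target
        elif task != "reimage":
            # Any other published batch script runs on every boot.
            actions.append("run:" + task)
    actions.append("boot_nt")
    return actions
```

Because every node runs the same dispatch at boot, changing the single server-side task list is what converts the whole cluster to a new configuration.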
Updating Software
• Radical changes through re-imaging
  – Prepare a single system
  – Set configuration scripts to run at next boot
  – Boot to DOS and upload the image to the server
• Incremental upgrades
  – Scripted using batch files
  – Registry files are merged using RCMD
    • RCMD is a Resource Kit remote command tool
  – The most common upgrade is LSF
  – The OS and MPI do not change often
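An incremental upgrade of this kind reduces to building a short per-host command sequence: copy the new files from a server share, silently merge a registry (.reg) file, and restart the affected service. The sketch below only assembles that plan; the tool invocations shown (xcopy, regedit /s, net stop/start) are standard NT commands used for illustration, and the share/service names are placeholders.

```python
def upgrade_commands(hosts, package_share, dest="c:\\lsf",
                     reg_file=None, service=None):
    """Build the per-host command sequence for an incremental upgrade.

    Each host copies the package from a server share to `dest`,
    optionally merges a registry file, and bounces the service.
    Returns a dict host -> list of command strings, ready to be fed
    to a remote-execution tool such as RCMD.
    """
    plan = {}
    for host in hosts:
        cmds = [f"xcopy {package_share} {dest} /s /y"]
        if reg_file:
            cmds.append(f"regedit /s {reg_file}")   # silent registry merge
        if service:
            cmds.append(f"net stop {service}")
            cmds.append(f"net start {service}")
        plan[host] = cmds
    return plan
```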
NT Cluster Monitoring
Shows in real time:
• System status
• Current load
• Load by user name
• Load by job ID
• All running/pending jobs

• Scalable
• Reconfigurable grid
• Works well over a modem
• Highlights troubled systems
• Deviation from expected load can be viewed
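The deviation view amounts to comparing the load each node *should* carry (the processes the scheduler placed there) against the load actually measured, and highlighting nodes outside a tolerance. A minimal sketch of that idea; the threshold and data shapes are assumptions, not the monitoring tool's internals.

```python
def troubled_nodes(scheduled, measured, tolerance=0.5):
    """Flag nodes whose measured load deviates from the expected load.

    scheduled: dict node -> number of processes the scheduler placed there
    measured:  dict node -> current load average
    Returns dict node -> deviation, for nodes outside the tolerance.
    A node missing from `measured` counts as load 0 (likely down).
    """
    flagged = {}
    for node, expected in scheduled.items():
        deviation = measured.get(node, 0.0) - expected
        if abs(deviation) > tolerance:
            flagged[node] = deviation
    return flagged
```

A large negative deviation usually means a job's processes died or never started; a positive one points at a stray process competing for the node.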
Node Administration
• CRUN scripts
  – Run scripts sequentially over ranges of machines
  – Used for rebooting, updating files, …
  – Coupled with other tools such as Tlist (like ps) and kill
    • Can be used to find processes on hosts
• RCMD
  – Provides interactive access to compute nodes
  – Useful for manual process management
  – Faster than using LSF’s lsrun
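The CRUN pattern, expanding a numeric host range and running one command per node in sequence, can be sketched as below. The `run_remote` callback stands in for whatever remote-execution mechanism is used (e.g. RCMD); all names and the hostname format are illustrative, not the actual CRUN interface.

```python
def expand_hosts(prefix, start, end, width=3):
    """Expand a range like ('ntsc', 1, 4) into ntsc001..ntsc004."""
    return [f"{prefix}{i:0{width}d}" for i in range(start, end + 1)]

def crun(prefix, start, end, command, run_remote):
    """Run `command` sequentially on each host in the range.

    run_remote(host, command) performs the actual remote execution
    and returns its output; results are collected per host.
    """
    results = {}
    for host in expand_hosts(prefix, start, end):
        results[host] = run_remote(host, command)
    return results
```

Combined with a Tlist-style listing command, the same loop locates a given process across the whole range of hosts.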
Process Administration
• Simply start and stop jobs?
  – Not so simple: the queuing system software may not be fault tolerant
    • Only some of the processes launch
    • Not all of the processes get terminated
• Shepherding
  – Makes decisions about jobs and processes
    • Can kill jobs if processes do not start or quit
    • Can kill processes if jobs finish
  – Coupled with process tracking software to find orphans
  – Uses a semi-intelligent Shepherd Agent
    • Also provides an interface for global administration
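The shepherding decisions above come down to reconciling the queuing system's view of jobs against the processes actually found on the nodes: a job missing processes on any of its nodes gets killed everywhere, and processes belonging to no known job are orphans to be killed locally. A sketch of that reconciliation; the data shapes and function names are assumptions, not the actual Shepherd Agent interface.

```python
def shepherd(jobs, node_processes):
    """Reconcile queued jobs against processes found on compute nodes.

    jobs:           dict job_id -> set of nodes the job should run on
    node_processes: dict node -> set of job_ids with live processes there

    Returns (jobs_to_kill, orphans_to_kill):
      - jobs whose processes failed to start on some node (kill whole job)
      - per-node processes whose job is unknown to the scheduler (orphans)
    """
    jobs_to_kill = set()
    for job_id, nodes in jobs.items():
        # A parallel job missing a process on any assigned node is broken:
        # kill it everywhere rather than leave it wedged.
        if any(job_id not in node_processes.get(node, set()) for node in nodes):
            jobs_to_kill.add(job_id)

    orphans_to_kill = {}
    for node, running in node_processes.items():
        orphans = {j for j in running if j not in jobs}
        if orphans:
            orphans_to_kill[node] = orphans
    return jobs_to_kill, orphans_to_kill
```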
Account Administration
• Integrates into our current systems
  – Account creation/deletion occurs in our allocation division
  – Uses command-line utilities to manage accounts
  – Password management can be handled through this system
• System usage accounting
  – Custom daemon created
  – Simple, dedicated CPU / memory accounting
  – Actual process CPU usage is not relevant with our MPI: processes always use 100% of the CPU
  – Number-of-processes and time information collected by LSF
  – Existing accounting infrastructure used
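Since the MPI processes busy-wait at 100% CPU, measured per-process CPU time carries no information, and the charge reduces to dedicated node-time: every CPU a job held, for the full wall-clock duration. The formula below is a sketch of that general approach, not NCSA's actual charging policy.

```python
def job_charge(num_nodes, cpus_per_node, wall_seconds):
    """Charge for a job that dedicates whole nodes for its duration.

    Bills the job for every CPU it held over the full wall-clock
    time, in CPU-hours, since per-process CPU time is meaningless
    when MPI processes spin at 100%.
    """
    return num_nodes * cpus_per_node * wall_seconds / 3600.0
```

For example, a job holding 4 dual-CPU nodes for half an hour is charged 4 CPU-hours regardless of how much useful computation each process did.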
Storage Administration
• Storage systems
  – Storage Central disk advisor by W. Quinn
  – For monitoring file system usage
• Quota software
  – No quota software currently in use
  – Our scratch system is Windows 2000 and has quota software available
  – Quotas will be enforced when we switch to W2K
  – Home directories are on Windows NT 4.0
• Security
  – Home space is readable by the user only
  – Upon request, administrators can gain access
  – Scratch space file access is maintained by the user
Scalability Issues
• Queuing system
  – LSF is currently working at a scale unexpected a few years ago
  – Where will difficulties arise?
    • The batch system falls behind more often as system size grows
    • Related to the speed and reliability of the network
  – Platform Computing’s LSF has adapted in the past
• Monitoring tools
  – Many command-line tools are already impractical
  – Visualization methods need to be researched
  – GLMon may not be effective for more than 1000 nodes
  – Detailed monitoring affects system scalability
Future Directions
– Better integration with the mass storage system
– High performance shared file systems
– Improved reliability and process management
– Advanced user support
– Advancements in interconnects
  – Better scaling
  – Better performance