
Page 1: New Compute-Cluster at CMS (Daniela-Maria Pusinelli, Jahreskolloquium, 29.06.2010)

Agenda

1. Construction and Management
2. High Availability
3. How to use
4. Software
5. Batch Service


Page 2:

1. Construction and Management/1

8 Supermicro Twin² servers

Location: Grimm-Zentrum, server room, water-cooled cabinet

Setup:
- 8 Supermicro Twin² servers, 2 U each
- each server holds 4 nodes with 2 Intel Nehalem CPUs
- each CPU has 4 cores; each node has InfiniBand QDR and 48 GB memory
- in total: 32 nodes, 256 cores, 1.536 TB memory


Page 3:

1. Construction and Management/2

- two master nodes handle login at the address clou.cms.hu-berlin.de
- they provide various services (DHCP, NFS, LDAP, mail)
- they start and stop all nodes when necessary; only the master nodes communicate with the University network
- they provide development software (compilers, MPI) as well as application software
- they organize the batch service

Page 4:

1. Construction and Management/3

- two servers configured as an HA cluster provide the parallel file system (Lustre FS) for large temporary data: lulu:/work
- all nodes are equipped with a fast InfiniBand (QDR) network that carries node communication during parallel computations
- the master and file server nodes are monitored by the central Nagios at CMS
- the compute servers are monitored locally with Ganglia

Page 5:

1. Construction and Management/4


[Cluster diagram: login node clou.cms.hu-berlin.de with a failover login; a 36-port InfiniBand switch connects node1 to node32 in groups of four (node1 to node4, node5 to node8, ..., node29 to node32); two file servers provide /work (one with 1 MDT and 2 OSTs, the other with 3 OSTs) as well as the /home data]

Page 6:

2. High Availability/1

- all servers are equipped with redundant power supplies
- master servers: the Red Hat Resource Group Manager (rgmanager) configures and monitors the services on both master nodes; if one master fails, the other takes over the services of the failed one
- file servers: the servers for the Lustre FS /work are also configured redundantly; if one fails, the other takes over the MDS (Meta Data Server) and the OSTs (Object Storage Targets), which is possible because all data are stored on a virtual SAN disk

Page 7:

3. How to use/1


- all colleagues who do not have sufficient resources at their institute may use the system
- an account at CMS is necessary; the account will be enabled for the service
- log in to the master clou.cms.hu-berlin.de with SSH from the University network
- you may log in to the nodes via SSH from the master node without further authentication

- Data storage:
  /home/<institut>/<account>             unique user home dir
  /afs/.cms.hu-berlin.de/user/<account>  OpenAFS home dir
  /work/<account>                        working dir
  /perm/<account>                        permanent dir
  /scratch/<account>                     local working dir on the nodes

- migration of data from the old cluster via the /home directory or with scp, as sketched below
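A minimal sketch of such a migration with scp, run on the new master (the old cluster's hostname and the source path are placeholders, not the actual names):

scp -r <account>@<old-cluster>:/home/<account>/results /work/<account>/   -> recursive copy of a results dir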

Page 8:

3. How to use/2


Data backup:

- data in /home: backed up daily into TSM, max. 3 older versions
- data in /afs/.cms.hu-berlin.de: backed up daily, kept for the whole semester
- data in /perm: backed up daily to disk, max. 3 older versions
- data in /work: no backup
- data in /scratch: no backup

- /work and /scratch are checked against a high-water mark of 85%; data older than 1 month will be removed
- important data should be copied to /perm or to a home dir
- a parallel SSH is installed for running commands on all nodes:

pssh --help
pssh -P -h /usr/local/share/nodes/all w | grep load
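pssh can also collect each node's output inline, which is handy for quick checks (standard pssh option, assuming the same host file):

pssh -i -h /usr/local/share/nodes/all uptime   -> runs uptime on all nodes and prints the output per node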

Page 9:

4. Software/1


System software:
- operating system: CentOS 5 (= RedHat EL 5)

Development software:
- GNU compilers
- Intel Compiler Suite with MKL
- Portland Group compilers
- OpenMPI
- MPICH

Application software:
- chemistry: Gaussian, Turbomole, ORCA
- Matlab, Maple
- further software is possible

Page 10:

4. Software/2


- all software versions must be loaded with the module command:

module avail                    -> lists all available modules (software)

- development:

module load intel-cluster-322   -> loads the Intel Compiler
module load openmpi-14-intel    -> loads OpenMPI
...

- application software:

module load g09-a02             -> loads Gaussian09 A02
module load orca27              -> loads ORCA 2.7
module load matlab-10a          -> loads Matlab R2010a
...
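Two further subcommands of the standard environment-modules tool are useful for checking the environment (standard module syntax, not specific to this cluster):

module list                     -> shows the currently loaded modules
module unload openmpi-14-intel  -> removes a loaded module from the environment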

Page 11:

5. Batch Service/1


Open source product: Sun Grid Engine (SGE)

- the cell clous is installed
- on one master node the qmaster daemon runs; the other works as a slave and becomes active if the first one fails
- SGE supports parallel environments (PE)
- there is a graphical user interface, QMON
- with QMON all configurations can be displayed and actions can be performed
- two parallel environments are installed: ompi (128 slots) and smp (64 slots)
- these are allocated to the parallel queues; see the submission sketch below
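A minimal submission sketch using the two PEs (standard SGE syntax; the script names are the example scripts mentioned later in /perm/skripte):

qsub -pe ompi 32 run_ompi_32   -> requests 32 slots in the ompi PE
qsub -pe smp 8 run_g09_smp_8   -> requests 8 slots in the smp PE (one node)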

Page 12:

5. Batch Service/2


- parallel jobs with up to 32 cores are allowed; max. 4 jobs of 32 cores each at the same time in bigpar
- in queue par up to 8 jobs of 8 cores each are allowed; every job runs on a separate node
- ser and long are serial queues
- inter is for interactive computations (Matlab, GaussView)
- all values and configurations are preliminary and may be changed according to user requirements
- batch scripts from the old cluster can't be used on the new one because of the new batch system; a sketch of an SGE script follows below
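A minimal sketch of what such an SGE batch script could look like (the #$ directives are standard SGE; the actual example scripts in /perm/skripte may differ):

#!/bin/bash
#$ -N myjob          # job name
#$ -q par            # target queue (see the table on the next page)
#$ -pe smp 8         # parallel environment and slot count
#$ -cwd              # run in the submission directory
#$ -j y              # merge stdout and stderr

module load intel-cluster-322
./my_program         # placeholder for the actual program call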

Page 13:

5. Batch Service/3



Queue    Priority  Processors  Memory  Slots  PE    Runtime
bigpar     +5      8-32        40 GB   128    ompi  48 h
par         0      4-8         20 GB    64    smp   24 h
long       -5      1            4 GB    32    -     96 h
short     +10      1            4 GB    32    -      6 h
inter     +15      1            1 GB     8    -      3 h
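For the serial queues the target queue can be requested directly at submission; a small illustrative example (the script name is a placeholder):

qsub -q short run_serial   -> serial job, max. 6 h runtime
qsub -q long run_serial    -> serial job, max. 96 h runtime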

Page 14:

5. Batch Service/4


- users are collected into lists = working groups, e.g. cms, limberg, haerdle, ...
- submission of jobs: example scripts for all applications are in /perm/skripte; they include the module calls
- e.g. a Gaussian09 computation:

cd /work/$USER/g09
cp /perm/skripte/g09/run_g09_smp_8 .   -> adapt if necessary
qsub run_g09_smp_8
qstat
qmon &                                 -> graphical user interface

Page 15:

5. Batch Service/5


- MPI program development and start (computation of Pi):

cd /work/$USER/ompi
module load intel-cluster-322
module load openmpi-14-intel
cp /perm/skripte/ompi/cpi.c .
mpicc -o cpi cpi.c
cp /perm/skripte/ompi/run_ompi_32 .
qsub run_ompi_32

The script contains the call of the MPI program:

mpirun -np 8 -machinefile nodefile cpi
cat nodefile   -> contains the node(s), e.g.:
node8.local slots=8

Page 16:

5. Batch Service/6


- important commands:

qstat                -> status of your own jobs
qstat -u \*          -> list of all jobs in the cluster
qdel <jobid>         -> removes a job
qconf -sql           -> list of all queues
qconf -sq par        -> configuration of queue par
qconf -sul           -> shows the user lists (groups)
qconf -su cms        -> users of list/group cms
qconf -spl           -> list of the parallel environments
qacct -j <jobid>     -> accounting info for a job
qstat -q bigpar -f   -> shows whether the queue is disabled