Large scale data flow in local and GRID environment
Viktor Kolosov (ITEP Moscow), Ivan Korolko (ITEP Moscow)


TRANSCRIPT

Research objectives
- Plans: large scale data flow simulation in local and GRID environment.
- Done: large scale data flow optimization in a realistic DC environment (ALICE and LHCb), which turned out to be more interesting and (hopefully) more useful.

ITEP LHC computer farm (1): main components
- 64 Pentium IV PC modules (BATCH nodes)
- A. Selivanov (ITEP-ALICE), head of the ITEP-LHC farm

ITEP LHC computer farm (2)
- CPU: 64 x PIV 2.4 GHz (hyperthreading)
- RAM: 1 GB per node
- Disks: 80 GB per node
- Mass storage (disk servers): 6 x 1.6 TB + 1 x 1.0 TB + 1 x 0.5 TB
- Network: 100 Mbit/s locally, 2-3 Mbit/s to CERN
- CPUs: 20 (LCG test) + 44 (DCs)
- Monitoring available at ...

ITEP LHC FARM usage in 2004
- Main ITEP players in 2004: ALICE and LHCb

ALICE DC
Goals:
- Determine readiness of the off-line framework for data processing
- Validate the distributed computing model
- PDC2004: 10% test of the final capacity
- PDC04 physics: hard probes (jets, heavy flavours) and pp physics
Strategy:
- Part 1 (March-July): underlying (background) events: distributed simulation, data transfer to CERN
- Part 2 (July-November): signal events and test of CERN as data source: distributed simulation, reconstruction, generation of ESD
- Part 3: distributed analysis
Tools:
- AliEn (Alice Environment for distributed computing)
- AliEn-LCG interface

LHCb DC
Physics goals (170M events):
1. HLT studies
2. S/B studies: consolidate background estimates and background properties
Computing goals:
- Gather information for the LHCb computing TDR
- Robustness test of the LHCb software and production system
- Test of the LHCb distributed computing model
- Incorporation of the LCG application software
- Use of LCG for a substantial fraction of the production capacity
Strategy:
1. MC production (April-September)
2. Stripping (event preselection), still going on
3. Analysis

Details
ALICE (AliEn):
- 1 job = 1 event
- Raw event size: 2 GB
- ESD size: ... MB
- CPU time: 5-20 hours
- RAM usage: huge
- Local copies stored; backup sent to CERN
- Massive data exchange with the disk servers
LHCb (DIRAC):
- 1 job = 500 events
- Raw event size: ~1.3 MB
- DST size: ... MB
- CPU time: ... hours
- RAM usage: moderate
- Local copies of DSTs stored; DSTs and LOGs sent to CERN
- Frequent communication with central services

Optimization
- April: massive LHCb DC starts. 1 job/CPU, everything OK; with hyperthreading (2 jobs/CPU) efficiency increases by 30-40%.
- May: massive ALICE DC starts. Bad interference with LHCb jobs, frequent NFS crashes; the ALICE queue is restricted to 10 simultaneous jobs and communication with the disk server is optimized.
- June-September: smooth running. Resources are shared (LHCb in June-July, ALICE in August-September), with careful online monitoring of jobs on top of the usual monitoring from the collaborations.

Monitoring
- Frequent power cuts in summer (4-5 times), costing about 5% of jobs: all intermediate steps are lost. Remedy: provide a reserve power line and a more powerful UPS.
- Stalled jobs (about 10%): infinite loops in GEANT4 (LHCb) and crashes of central services. Remedy: a simple check script kills such jobs (no bug report is sent); a sketch of such a script is given after this transcript.
- Slow data transfer to CERN: poor and restricted link to CERN, problems with CASTOR. Remedy: automatic retry; a rough estimate of the link bottleneck and a retry sketch follow after this transcript.

ALICE Statistics

LHCb Statistics

Summary
- Quite visible participation in the ALICE and LHCb DCs: ~5% contribution to ALICE (ITEP part ~70%) and ~5% to LHCb (ITEP part ~70%), with only 44 CPUs.
- Problems reported to colleagues in the collaborations.
- More attention to LCG now.
- Distributed analysis: a very different pattern of work load.
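The Monitoring slide mentions a "simple check script" that kills stalled jobs, but gives no details of it. Below is a minimal sketch of one plausible heuristic, not the original ITEP script: kill any job whose log file has not grown for several hours (for example a job stuck in a GEANT4 infinite loop, or one waiting on a dead central service). The log directory layout, the <jobid>.pid convention and the 6-hour limit are illustrative assumptions.

```python
#!/usr/bin/env python3
"""Hypothetical watchdog for stalled DC jobs (a sketch, not the original ITEP script)."""
import os
import signal
import time

LOG_DIR = "/scratch/dc_jobs"   # assumed layout: one <jobid>.log and <jobid>.pid per job
STALL_LIMIT = 6 * 3600         # assumed limit: 6 hours without any log activity

def stalled_job_ids(now):
    """Return ids of jobs whose log file has been silent longer than STALL_LIMIT."""
    stalled = []
    for name in os.listdir(LOG_DIR):
        if name.endswith(".log"):
            mtime = os.path.getmtime(os.path.join(LOG_DIR, name))
            if now - mtime > STALL_LIMIT:
                stalled.append(name[:-len(".log")])
    return stalled

def pid_of_job(job_id):
    """Read the job's pid from a <jobid>.pid file written at job start (assumed convention)."""
    try:
        with open(os.path.join(LOG_DIR, job_id + ".pid")) as f:
            return int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return None

def main():
    now = time.time()
    for job_id in stalled_job_ids(now):
        pid = pid_of_job(job_id)
        if pid is None:
            continue
        print(f"job {job_id} (pid {pid}) looks stalled, killing it")
        try:
            os.kill(pid, signal.SIGKILL)   # as in the talk, no bug report is sent
        except ProcessLookupError:
            pass                           # process already gone

if __name__ == "__main__":
    main()
```

Run periodically (for example from cron), such a script implements the "kill the stalled job and move on" behaviour described in the slides.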
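The slides quote a 2-3 Mbit/s link to CERN, 2 GB of raw data per ALICE job (1 job = 1 event), 5-20 hours of CPU time per job, and at most 10 simultaneous ALICE jobs after the May restriction. A back-of-the-envelope calculation with those numbers (middle values assumed where a range is given) suggests why the transfer to CERN became a bottleneck: raw output is produced faster than the link can drain it.

```python
# Back-of-the-envelope estimate of the CERN-link bottleneck, using only numbers
# quoted in the slides; middle values are assumed where the slides give a range.

LINK_MBIT_S = 2.5          # link to CERN: 2-3 Mbit/s in the slides
link_gb_per_day = LINK_MBIT_S / 8 / 1024 * 86400               # roughly 26 GB/day

ALICE_JOBS = 10            # simultaneous ALICE jobs after the May restriction
RAW_EVENT_GB = 2.0         # raw event size, with 1 job = 1 event
RAW_EVENT_GB = 2.0         # raw event size, with 1 job = 1 event
JOB_HOURS = 12.5           # middle of the quoted 5-20 h of CPU time per job
alice_gb_per_day = ALICE_JOBS * RAW_EVENT_GB * 24 / JOB_HOURS  # roughly 38 GB/day

print(f"link capacity   : {link_gb_per_day:5.1f} GB/day")
print(f"ALICE raw output: {alice_gb_per_day:5.1f} GB/day")
```

Even this crude estimate puts the raw output above the link capacity, which is consistent with the slow transfers to CERN reported in the talk.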
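The remedy quoted for the slow and flaky transfers is an automatic retry. The slides do not show the actual CASTOR or grid copy commands, so the sketch below simply wraps whatever transfer command is passed on the command line and retries it with increasing delays; the attempt count and back-off values are assumptions.

```python
#!/usr/bin/env python3
"""Automatic-retry wrapper for a flaky transfer command (illustrative sketch)."""
import subprocess
import sys
import time

MAX_ATTEMPTS = 5
BACKOFF = [60, 300, 900, 1800]   # seconds to wait after failed attempts (assumed values)

def transfer_with_retry(cmd):
    """Run the copy command, retrying on a non-zero exit code."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        result = subprocess.run(cmd)
        if result.returncode == 0:
            print(f"transfer succeeded on attempt {attempt}")
            return True
        if attempt < MAX_ATTEMPTS:
            wait = BACKOFF[min(attempt - 1, len(BACKOFF) - 1)]
            print(f"attempt {attempt} failed (rc={result.returncode}), retrying in {wait}s")
            time.sleep(wait)
    return False

if __name__ == "__main__":
    # usage: retry_copy.py <transfer command and its arguments>
    cmd = sys.argv[1:]
    if not cmd or not transfer_with_retry(cmd):
        sys.exit(1)
```

In production the same idea could wrap the existing copy-to-CERN step without changing anything else in the job.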