GRID in JINR and participation in the WLCG project

Korenkov Vladimir, LIT, JINR

17.07.12


Page 1: GRID in JINR and participation in the WLCG project

GRID in JINR and participation in the WLCG project

Korenkov Vladimir

LIT, JINR

17.07.12

Page 2: GRID in JINR and participation in the WLCG project


Tier Structure of GRID Distributed Computing: Tier-0/Tier-1/Tier-2/Tier-3


Tier-0 (CERN):
• accepts data from the CMS Online Data Acquisition and Trigger System
• archives RAW data
• performs the first pass of reconstruction and Prompt Calibration
• distributes data to Tier-1

Tier-1 (11 centers):
• receives data from the Tier-0
• data processing (re-reconstruction, skimming, calibration, etc.)
• distributes data and MC to the other Tier-1 and Tier-2 centers
• provides secure storage and redistribution of data and MC

Tier-2 (>200 centers):
• simulation
• user physics analysis
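As a rough illustration of this division of labour, a minimal sketch is given below (hypothetical names, not project code) modelling each tier's role and the downward distribution of a dataset:

```python
# Illustrative model of the WLCG tier hierarchy described above.
# All class and variable names are hypothetical; real WLCG data
# placement is far richer than a simple top-down broadcast.
from dataclasses import dataclass, field

@dataclass
class Tier:
    name: str
    responsibilities: list
    children: list = field(default_factory=list)

    def distribute(self, dataset: str) -> None:
        """Store a dataset locally, then pass it to subordinate centres."""
        print(f"{self.name}: storing {dataset}")
        for child in self.children:
            child.distribute(dataset)

tier2 = Tier("Tier-2 (>200 centers)", ["simulation", "user physics analysis"])
tier1 = Tier("Tier-1 (11 centers)",
             ["re-reconstruction", "skimming", "calibration",
              "secure storage and redistribution"], children=[tier2])
tier0 = Tier("Tier-0 (CERN)",
             ["accept DAQ data", "archive RAW data",
              "first-pass reconstruction", "prompt calibration"],
             children=[tier1])

tier0.distribute("RAW run 001")  # Tier-0 -> Tier-1 -> Tier-2
```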

Page 3: GRID in JINR and participation in the WLCG project


Some history

1999 – MONARC project
• early discussions on how to organise distributed computing for LHC

2001-2003 – EU DataGrid project
• middleware & testbed for an operational grid

2002-2005 – LHC Computing Grid (LCG)
• deploying the results of DataGrid to provide a production facility for the LHC experiments

2004-2006 – EU EGEE project, phase 1
• starts from the LCG grid
• shared production infrastructure
• expanding to other communities and sciences

2006-2008 – EU EGEE-II
• building on phase 1
• expanding applications and communities …

2008-2010 – EU EGEE-III

2010-2012 – EGI-InSPIRE


Page 4: GRID in JINR and participation in the WLCG project


Russian Data Intensive Grid infrastructure (RDIG)

RDIG Resource Centres:
– ITEP
– JINR-LCG2 (Dubna)
– RRC-KI
– RU-Moscow-KIAM
– RU-Phys-SPbSU
– RU-Protvino-IHEP
– RU-SPbSU
– Ru-Troitsk-INR
– ru-IMPB-LCG2
– ru-Moscow-FIAN
– ru-Moscow-MEPHI
– ru-PNPI-LCG2 (Gatchina)
– ru-Moscow-SINP
– Kharkov-KIPT (UA)
– BY-NCPHEP (Minsk)
– UA-KNU
– UA-BITP

The Russian consortium RDIG (Russian Data Intensive Grid) was set up in September 2003 as a national federation in the EGEE project. The RDIG infrastructure now comprises 17 resource centres with > 20 000 kSI2K of CPU and > 4500 TB of disk storage.

Page 5: GRID in JINR and participation in the WLCG project


JINR Tier2 Center

CPU: 2560 cores; total performance: 24 000 HEPSpec06; disk storage capacity: 1800 TB

Growth of the JINR Tier2 resources in 2003-2011

Availability and Reliability = 99%

More than 7.5 million tasks and more than 130 million units of normalised CPU time (HEPSpec06) were executed during July 2011 - July 2012.

[Chart: growth of JINR Tier2 performance/capacity (kSI2K/TB) over 2003-2011, showing performance and disk storage capacity.]

Page 6: GRID in JINR and participation in the WLCG project


Scheme of the CICC network connections

Page 7: GRID in JINR and participation in the WLCG project


Support of VOs and experiments

The CICC JINR, as a grid site of the global grid infrastructure, now supports computations of 10 virtual organizations (alice, atlas, biomed, cms, dteam, fusion, hone, lhcb, rgstest and ops), and also makes its grid resources available to the CBM and PANDA experiments.

Page 8: GRID in JINR and participation in the WLCG project


JINR monitoring main page. Monitoring of the JINR LAN infrastructure is one of the most important services. The Network Monitoring Information System (NMIS) with a web-based graphical interface – http://litmon.jinr.ru – supports the operational monitoring processes and watches the most important network elements: JINR LAN routers and switches, remote routers and switches, cooling units, DNS servers, dCache servers, Oracle servers, RAIDs, WLCG servers, worker nodes, etc. More than 350 network nodes are monitored round the clock.
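The slides do not show NMIS internals, so purely as an illustration of the round-the-clock polling such a system performs, here is a minimal availability check over an invented host list:

```python
# Minimal availability-polling sketch. Host names, ports and the
# TCP-connect check are assumptions for illustration; NMIS itself is
# a web-based system whose implementation is not described here.
import socket

NODES = {
    "router-1.jinr.ru": 22,    # hypothetical node names and service ports
    "dcache-1.jinr.ru": 2288,
    "dns-1.jinr.ru": 53,
}

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Treat a node as 'up' if its service port accepts a TCP connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in NODES.items():
    print(f"{host}:{port} {'UP' if is_reachable(host, port) else 'DOWN'}")
```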

Page 9: GRID in JINR and participation in the WLCG project


Normalized CPU time per country, LHC VOs (July 2011 - July 2012)

All countries: 3,808,660,150; Russia: 95,289,002 (2.5%). For 2011 alone: all countries 2,108,813,533; Russia 45,385,987.

Page 10: GRID in JINR and participation in the WLCG project


Normalized CPU time per site in Russia, LHC VOs (July 2011 - July 2012)

Page 11: GRID in JINR and participation in the WLCG project

Contribution of RDMS CMS sites – 4% (16,668,792 kSI2K)

Contribution of JINR – 52%

CMS computing support at JINR. The CMS T2 center at JINR provides: CPU 1240 kSI2K; 495 job slots; disk storage 376 TB.

[Charts: normalized CPU time for all CMS sites, for CMS sites in Russia, and for JINR; statistics for the last 12 months: 08/2011-07/2012.]

Page 12: GRID in JINR and participation in the WLCG project


JINR Monitoring and Analysis Remote Centre

• Monitoring of detector systems
• Data monitoring / express analysis
• Shift operations (except for run control)
• Communications of the JINR shifter with personnel at the CMS Control Room (SX5) and the CMS Meyrin centre
• Communications between JINR experts and CMS shifters
• Coordination of data processing and data management
• Training and information

Ilya Gorbunov and Sergei Shmatov, "RDMS CMS Computing", AIS2012, Dubna, May 18, 2012

Page 13: GRID in JINR and participation in the WLCG project


ATLAS Computing Support at JINR

ATLAS at JINR (statistics for the last 12 months, 07.2011-07.2012): CPU time (HEPSpec06): 51,948,968 (52%); jobs: 4,885,878 (47%).

ATLAS Sites in Russia

Page 14: GRID in JINR and participation in the WLCG project


System of remote access in real time (SRART) for monitoring and quality assessment of data from ATLAS at JINR

At present the system of remote access in real time is being debugged on real data of the ATLAS experiment.

One of the most significant results of the TDAQ ATLAS team at LIT during the last few years was its participation in the development of the TDAQ ATLAS project at CERN. The system of remote access in real time (SRART) for monitoring and quality assessment of data from ATLAS at JINR has been put into operation.

Page 15: GRID in JINR and participation in the WLCG project


ALICE Computing Support at JINR

Over the last 6 months, seven RDIG sites (IHEP, ITEP, JINR, PNPI, RRC-KI, SPbSU and Troitsk) contributed 4.19% of the total ALICE CPU and 9.18% of the total number of DONE jobs.

Job statistics in July 2012:

Group   CPU delivered   Wall delivered   CPU/Wall   Jobs assigned   Jobs completed   Efficiency
RDIG    1293            2395             54.02%     717047          528445           73.7%
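Both efficiency columns follow from the raw numbers in the table; a quick arithmetic check:

```python
# Recomputing the efficiency columns of the July 2012 RDIG row.
cpu, wall = 1293, 2395                  # delivered CPU and wall time
assigned, completed = 717_047, 528_445  # job counts

print(f"CPU/Wall:  {cpu / wall:.2%}")   # ~53.99% (table: 54.02%;
                                        # the quoted inputs are rounded)
print(f"Completed: {completed / assigned:.2%}")  # 73.70% (table: 73.7%)
```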

Page 16: GRID in JINR and participation in the WLCG project


SE usage

More than 1 PB of disk space is allocated today by the seven RDIG sites for ALICE; ~744 TB of it is currently in use. ALICE management asks for an increase in disk capacity, since the rate of new data taking and development is much faster than the rate of disk cleaning.

Example of CPU usage share between the seven RDIG sites.

Page 17: GRID in JINR and participation in the WLCG project


Aggregate SE traffic

Total SE traffic over 6 months: 943 TB in, 9.6 PB out.

Page 18: GRID in JINR and participation in the WLCG project


The tasks of Russia & JINR in the WLCG (2012):
• Task 1. MW (gLite) testing (supervisor O. Keeble)
• Task 2. LCG vs Experiments (supervisor I. Bird)
• Task 3. LCG monitoring (supervisor J. Andreeva)
• Task 4. Tier3 monitoring (supervisors J. Andreeva, A. Klementov)
• Task 5/6. Genser / MCDB (supervisor W. Pokorski)
• Task 7. Tier1 (supervisor I. Bird)

Worldwide LHC Computing Grid Project (WLCG)

The protocol between CERN, Russia and JINR on participation in the LCG Project was approved in 2003. The MoU on the Worldwide LHC Computing Grid (WLCG) was signed by Russia and JINR in October 2007.

Page 19: GRID in JINR and participation in the WLCG project


The tasks of JINR in the WLCG:
• WLCG infrastructure support and development at JINR;
• participation in WLCG middleware testing/evaluation;
• development of grid monitoring and accounting tools;
• development of software for HEP applications;
• MCDB development;
• support of users, virtual organizations (VOs) and applications;
• user & administrator training and education;
• support of the JINR Member States in WLCG activities.

Worldwide LHC Computing Grid Project (WLCG)

Page 20: GRID in JINR and participation in the WLCG project


RDIG monitoring & accounting: http://rocmon.jinr.ru:8080

Monitored values:
• CPUs – total / working / down / free / busy
• Jobs – running / waiting
• Storage space – used / available
• Network – available bandwidth

Accounting values:
• Number of submitted jobs
• Used CPU time – total sum in seconds; normalised by WN productivity (see the sketch below); average time per job
• Waiting time – total sum in seconds; average ratio of waiting to used CPU time per job
• Physical memory – average per job

JINR CICC

• Monitoring – allows one to keep an eye on the parameters of grid sites' operation in real time
• Accounting – tracks resource utilization on grid sites by virtual organizations and individual users
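The "normalised by WN productivity" accounting value scales raw CPU seconds by the benchmark power of the worker node that ran the job. A minimal sketch of that normalisation, with invented benchmark figures:

```python
# Sketch of CPU-time normalisation by worker-node productivity.
# The benchmark values and reference unit are assumptions.
REFERENCE_KSI2K = 1.0  # accounting reference unit (assumed)

jobs = [
    # (used_cpu_seconds, wn_benchmark_in_kSI2K) -- hypothetical records
    (3600, 1.4),
    (7200, 0.9),
]

normalised = sum(cpu * bench / REFERENCE_KSI2K for cpu, bench in jobs)
print(f"total normalised CPU time: {normalised:.0f} kSI2K-seconds")
print(f"average per job: {normalised / len(jobs):.0f} kSI2K-seconds")
```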

Page 21: GRID in JINR and participation in the WLCG project


Architecture

Page 22: GRID in JINR and participation in the WLCG project


A monitoring system has been developed which provides a convenient and reliable tool for obtaining detailed information about the current state of the FTS and for the analysis of errors on data transfer channels, and which supports maintaining FTS functionality and optimization of the technical support process. The system can considerably improve FTS reliability and performance. A toy sketch of the error-analysis idea follows the diagram below.

[Diagram: the FTS managing transfers between the storage systems of Site1 and Site2 (CASTOR, dCache, DPM) via SRM and GridFTP.]

FTS (File Transfer Service) monitoring for the Worldwide LHC Computing Grid (WLCG) project
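A toy version of the error-analysis idea (the record format is invented; the real system consumes actual FTS transfer states) would aggregate failures per transfer channel:

```python
# Toy aggregation of failed transfers per FTS channel. The tuple
# format below is an assumption for illustration only.
from collections import Counter

transfers = [
    # (source, destination, status, error) -- hypothetical samples
    ("JINR", "CERN", "FAILED", "SRM put timeout"),
    ("JINR", "CERN", "FINISHED", None),
    ("CERN", "JINR", "FAILED", "GridFTP connection refused"),
    ("JINR", "CERN", "FAILED", "SRM put timeout"),
]

failures = Counter((src, dst) for src, dst, status, _ in transfers
                   if status == "FAILED")

for (src, dst), n in failures.most_common():
    print(f"channel {src} -> {dst}: {n} failed transfer(s)")
```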

Page 23: GRID in JINR and participation in the WLCG project


The Worldwide LHC Computing Grid (WLCG)

Page 24: GRID in JINR and participation in the WLCG project


ATLAS DQ2 Deletion Service

During 2010, work was carried out on the development of the deletion service for the ATLAS Distributed Data Management (DDM) system.

The work started in the middle of April 2010. At the end of August a new version of the Deletion Service was tested on a set of sites, and from November 2010 on all sites managed by DQ2.

The development comprised building new interfaces between the parts of the deletion service (based on web service technology), creating a new database schema, rebuilding the deletion service core, developing extended interfaces with mass storage systems, and extending the deletion monitoring system.

The Deletion Service is maintained by JINR specialists.
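A highly simplified sketch of a deletion-service core of this kind is shown below; all names are hypothetical, and the real DQ2 implementation is considerably richer (web-service interfaces, a database schema, mass-storage back-ends):

```python
# Highly simplified deletion-service loop: consume dataset deletion
# requests, call a storage back-end stub, and record outcomes for
# monitoring. Illustrative only; not the DQ2 code.
from queue import Queue

def delete_replica(replica_url: str) -> bool:
    """Stand-in for a call to a mass storage system (SRM, etc.)."""
    print(f"deleting {replica_url}")
    return True

def deletion_worker(requests: Queue, outcome_log: list) -> None:
    while not requests.empty():
        dataset, replicas = requests.get()
        ok = all(delete_replica(r) for r in replicas)
        outcome_log.append((dataset, "DELETED" if ok else "FAILED"))

requests: Queue = Queue()
requests.put(("example.dataset.AOD", ["srm://site-a/f1", "srm://site-b/f1"]))
log: list = []
deletion_worker(requests, log)
print(log)  # outcome records feed the deletion monitoring system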

Page 25: GRID in JINR and participation in the WLCG project


Tier 3 sites monitoring project

• Traditional LHC distributed computing: Tier-0 (CERN) → Tier-1 → Tier-2
• Additional level → Tier-3
• Need → a global view of the LHC computing activities
• LIT participates in the development of a software suite for Tier-3 sites monitoring
• A virtual testbed has been created at JINR which allows simulation of various Tier-3 clusters and data storage solutions

[Diagrams: the tier hierarchy – Tier-0 (CERN Analysis Facility; RAW) → Tier-1 (12 sites worldwide; Raw/AOD/ESD) → Tier-2 (120+ sites worldwide; AOD) → Tier-3 (DPD; interactive analysis: plots, fits, toy MC, studies, …) – and the JINR virtual testbed: a development host and small clusters (headnode + 2 WNs) under PBS (Torque), Condor, Oracle Grid Engine and PROOF, XRootD manager (redirector) pairs, a Lustre setup (OSS, MDS, client) and a Ganglia web frontend.]

Page 26: GRID in JINR and participation in the WLCG project


Tier 3 sites monitoring project (2011)

Tier-3 sites consist of resources mostly dedicated to data analysis by geographically close or local scientific groups. A set of Tier-3 sites can be joined into a federation.

Many institutes and national communities have built (or plan to build) Tier-3 facilities. Tier-3 sites comprise a range of architectures, and many do not run grid middleware, which would render the application of grid monitoring systems useless.

This is a joint effort of ATLAS, BNL (USA), JINR and CERN IT (ES group).

Objectives for Tier-3 monitoring:
• monitoring of a Tier-3 site;
• monitoring of a federation of Tier-3 sites.

Monitoring of a Tier-3 site:
• detailed monitoring of the local fabric (overall cluster or clusters monitoring, monitoring of each individual node in the cluster, network utilization);
• monitoring of the batch system;
• monitoring of the mass storage system (total and available space, number of connections, I/O performance);
• monitoring of VO computing activities at the site.

Monitoring of a federation of Tier-3 sites:
• monitoring of the VO usage of the Tier-3 resources in terms of data transfer and job processing, and of the quality of the provided service, based on job processing and data transfer monitoring metrics.

A sketch of the site-level metrics record these objectives imply is given below.
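```python
# One site-level snapshot covering local fabric, batch system and
# mass storage, as listed in the objectives above. Field names and
# example values are invented for illustration, not project code.
from dataclasses import dataclass, asdict

@dataclass
class Tier3SiteMetrics:
    site: str
    nodes_up: int
    nodes_total: int
    jobs_running: int
    jobs_waiting: int
    storage_total_tb: float
    storage_free_tb: float
    storage_connections: int
    network_utilization_mbps: float

snapshot = Tier3SiteMetrics(
    site="XX-Tier3-Example",  # hypothetical site name
    nodes_up=14, nodes_total=16,
    jobs_running=120, jobs_waiting=35,
    storage_total_tb=80.0, storage_free_tb=22.5,
    storage_connections=48, network_utilization_mbps=640.0,
)
print(asdict(snapshot))  # would be published to the federation level
```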

Page 27: GRID in JINR and participation in the WLCG project


xRootD monitoring architecture

The implementation of the XRootD monitoring foresees 3 levels of hierarchy: site, federation and global.

Monitoring at the site level is implemented in the framework of the Tier-3 monitoring project and is currently under validation by the ATLAS pilot sites.

The site collector reads the xrootd summary and detailed flows, processes them to reformat the data into event-like information, and publishes the reformatted information to MSG. At the federation level, data consumed from MSG are processed using Hadoop/MapReduce to correlate events related to the same transfer and to generate federation-level statistics, which become available via a UI and APIs. The UI is similar to that of the global WLCG transfer Dashboard, so most of the UI code can be reused. Finally, messages similar to FTS transfer messages, describing every particular transfer, are published from the federation to MSG to be further consumed by the global WLCG transfer Dashboard.
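The federation-level correlation step can be pictured as a group-by on a transfer identifier. A minimal stand-in for the Hadoop/MapReduce job (the message fields are assumptions):

```python
# Stand-in for the federation-level correlation: group monitoring
# events by transfer id and reduce each group to one summary record.
# Message fields are invented for illustration.
from collections import defaultdict

events = [
    {"transfer_id": "t1", "kind": "open",  "bytes": 0},
    {"transfer_id": "t1", "kind": "xfer",  "bytes": 500_000},
    {"transfer_id": "t1", "kind": "close", "bytes": 500_000},
    {"transfer_id": "t2", "kind": "open",  "bytes": 0},
]

# "map" phase: key every event by its transfer id
groups = defaultdict(list)
for ev in events:
    groups[ev["transfer_id"]].append(ev)

# "reduce" phase: collapse each group into a per-transfer summary
for tid, evs in groups.items():
    done = any(e["kind"] == "close" for e in evs)
    total = max(e["bytes"] for e in evs)
    print(f"{tid}: {total} bytes, {'complete' if done else 'in progress'}")
```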


Page 28: GRID in JINR and participation in the WLCG project


xRootD monitoring: the schema of the information flow


Page 29: GRID in JINR and participation in the WLCG project


Data transfer global monitoring system


Page 30: GRID in JINR and participation in the WLCG project


Grid training and education (T-infrastructure)

Training courses on grid technologies:
• for users (basic ideas and skills),
• for system administrators (grid site deployment),
• for developers of grid applications and services.

Evaluation of different implementations of grid middleware and testing of particular grid services.

Development of grid services and porting of existing applications into the grid environment.

Page 31: GRID in JINR and participation in the WLCG project


T-infrastructure implementation

All services are deployed on VMs (OpenVZ).

Main parts:
• three grid sites on gLite middleware,
• a GT5 testbed,
• a desktop grid testbed based on BOINC,
• a testbed for WLCG activities.

Running since 2006.

Page 32: GRID in JINR and participation in the WLCG project


JINR grid infrastructure for training and education – a first step towards construction of the JINR Member States grid infrastructure

It consists of three grid sites at JINR and one at each of the following institutions:
• Institute of High-Energy Physics – IHEP (Protvino),
• Institute of Mathematics and Information Technologies, Academy of Sciences of the Republic of Uzbekistan – IMIT (Tashkent, Uzbekistan),
• Sofia University "St. Kliment Ohridski" – SU (Sofia, Bulgaria),
• Bogolyubov Institute for Theoretical Physics – BITP (Kiev, Ukraine),
• National Technical University of Ukraine "Kyiv Polytechnic Institute" – KPI (Kiev, Ukraine).

Letters of Intent with Moldova ("MD-GRID"), Mongolia ("Mongol-Grid") and Kazakhstan.

Project with Cairo University.

https://gridedu.jinr.ru

Page 33: GRID in JINR and participation in the WLCG project


Participation in the GridNNN project

Grid support for the Russian national nanotechnology network:
• to provide science and industry with effective access to distributed computational, informational and networking facilities;
• a breakthrough in nanotechnologies is expected;
• supported by a special federal program.

Main points:
• based on a network of supercomputers (about 15-30);
• has two grid operations centers (main and backup);
• is a set of grid services with a unified interface;
• partially based on Globus Toolkit 4.

Page 34: GRID in JINR and participation in the WLCG project


GridNNN infrastructure

10 resource centers at the moment, in different regions of Russia: RRC KI, «Chebyshev» (MSU), IPCP RAS, CC FEB RAS, ICMM RAS, JINR, SINP MSU, PNPI, KNC RAS, SPbSU.

Page 35: GRID in JINR and participation in the WLCG project


Russian Grid Network

Goal: to provide a computational base for hi-tech industry and science, using the network of supercomputers and the original software created within the recently finished GridNNN project.

Some statistics:
• 19 resource centers,
• 10 virtual organizations,
• 70 users,
• more than 500,000 tasks processed.

Page 36: GRID in JINR and participation in the WLCG project


Project: Model of a shared distributed system for acquisition, transfer and processing of very large-scale data volumes, based on Grid technologies, for the NICA accelerator complex.

Terms: 2011-2012.
Cost: federal budget – 10 million rubles; extrabudgetary sources – 25% of the total cost.
Leading executor: LIT JINR. Co-executor: VBLHEP JINR.

MPD data processing model (from "The MultiPurpose Detector – MPD Conceptual Design Report v. 1.4")

Page 37: GRID in JINR and participation in the WLCG project


WLCG Tier1 in Russia

A proposal to create an LCG Tier-1 center in Russia: an official letter by A. Fursenko, Minister of Science and Education of Russia, was sent to CERN Director General R. Heuer in March 2011. The corresponding points were included in the agenda of the Russia – CERN 5x5 meeting in 2011.

• Serving all four experiments: ALICE, ATLAS, CMS and LHCb.
• ~10% of the total existing Tier1 resources (without CERN), increasing by 30% each year.
• Draft planning (proposal under discussion): a prototype at the end of 2012 and full resources in 2014, ready for the start of the next LHC session.

Page 38: GRID in JINR and participation in the WLCG project


JINR Resources

JINR-LCG2 (Tier2)          April 2012   Dec. 2012   Dec. 2013
CPU (HEP-SPEC06)           20 000       30 000      45 000
Disk (TB), total           995          1800        3000
  – CMS                    360          660         1100
  – ATLAS                  360          660         1100
  – ALICE                  275          480         800
Tape (TB)                  –            –           5000

JINR-CMS (Tier1)           Dec. 2012 (prototype)   Sep. 2014 (start)
CPU (HEP-SPEC06)           5 000                   50 000
Disk (TB), CMS             500                     8 000
Tape (TB)                  500                     15 000

Page 39: GRID in JINR and participation in the WLCG project


Frames for grid cooperation of JINR:
• Worldwide LHC Computing Grid (WLCG);
• EGI-InSPIRE;
• RDIG development;
• CERN-RFBR project "Global data transfer monitoring system for the WLCG infrastructure";
• NASU-RFBR project "Development and support of LIT JINR and NSC KIPT grid infrastructures for distributed CMS data processing of the LHC operation";
• BMBF grant "Development of the grid infrastructure and tools to provide joint investigations performed with participation of JINR and German research centers";
• "Development of a grid segment for the LHC experiments", supported in the frame of the JINR - South Africa cooperation agreement;
• development of a grid segment at Cairo University and its integration into the JINR GridEdu infrastructure;
• JINR - FZU AS Czech Republic project "The GRID for the physics experiments";
• JINR-Romania cooperation: Hulubei-Meshcheryakov programme;
• JINR-Moldova cooperation (MD-GRID, RENAM);
• JINR-Mongolia cooperation (Mongol-Grid);
• JINR-Slovakia cooperation;
• JINR-Kazakhstan cooperation (ENU Gumilyov);
• project "Russian Grid Network".

Page 40: GRID in JINR and participation in the WLCG project


WEB PORTAL "GRID AT JINR" ("ГРИД В ОИЯИ"): http://grid.jinr.ru

A new informational resource has been created at JINR: the web portal "GRID AT JINR". Its content includes detailed information on the JINR grid site and JINR's participation in grid projects:

• GRID CONCEPT
  • Grid technologies
  • Grid projects
  • RDIG consortium
• JINR GRID SITE
  • Infrastructure and services
  • Scheme
  • Statistics
  • Support of VOs and experiments (ATLAS, CMS, CBM and PANDA, HONE)
  • How to become a user
• JINR IN GRID PROJECTS
  • WLCG
  • GridNNN
  • EGEE
  • RFBR projects
  • INTAS projects
  • SKIF-GRID
• GRID MIDDLEWARE TESTING
• JINR MEMBER STATES
• MONITORING AND ACCOUNTING
  • RDIG monitoring
  • dCache monitoring
  • Dashboard
  • FTS monitoring
  • H1 MC monitoring
• GRID CONFERENCES (GRID, NEC)
• TRAINING
  • Training grid infrastructure
  • Courses and lectures
  • Training materials
• DOCUMENTATION
  • Articles
  • Training materials
• NEWS
• CONTACTS