Participation of JINR in the LCG and EGEE Projects. V.V. Korenkov (JINR, Dubna). NEC'2005, Varna, 17 September 2005.
Federating Worldwide Resources for the LHC

Infrastructure LCG/EGEE

Russian distributed Tier2 cluster (RRC-LHC)
[Diagram: connectivity between CERN, the LCG Tier1/Tier2 cloud (e.g. FZK) and the Russian collaborating centres; Gbit/s links to CERN and across the cloud backbone, 100-1000 Mbit/s links to the labs, Tier2 clusters with GRID access.]

LHC Computing Grid Project (LCG)
The protocol between CERN, Russia and JINR on participation in the LCG Project was approved in 2003. The tasks of the Russian institutes in the LCG:
- LCG software testing;
- evaluation of new grid technologies (e.g. Globus Toolkit 3) in the context of their use in the LCG;
- support and development of the event-generator repository and the database of physical events.

LHC Computing Grid Project (LCG)
The tasks of the Russian institutes and JINR in the LCG (2004 and 2005):
- LCG deployment and operation
- LCG test suite
- CASTOR
- LCG AA: GENSER & MCDB
- ARDA

JINR in LCG (2004 and 2005)
- The LCG-2 infrastructure was created at JINR.
- A server for monitoring the Russian LCG sites was installed.
- The LCG web portal was created in Russia and its development is in progress.
- Data-transfer tests over the GridFTP protocol (Globus Toolkit 3) were carried out (see the sketch after this slide).
- The GoToGrid toolkit for automatic installation and tuning of the LCG-2 package was developed.
- Development of the MCDB system.
- Software for the installation and control of MonALISA clients on the basis of RMS (Remote Maintenance Shell) was designed.
- Work on the CASTOR2 system is in progress: development of the control process for the garbage-collection module and of the communication with the Oracle DB.
- Participation in the work to create the testbed for the new gLite middleware.
- Testing of gLite components: Metadata catalog, Fireman catalog.
- Monitoring of the WMS (Workload Management System) on the gLite testbed at the INFN site gundam.cnaf.infn.it.
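The GridFTP transfer tests mentioned in the slide above would have been run with the standard Globus client tools. A minimal sketch of such a throughput test with globus-url-copy is shown below; the host names, file paths and tuning values are illustrative assumptions, not figures from the talk:

```sh
# Obtain a grid proxy certificate first (required by GridFTP).
grid-proxy-init

# Transfer between two GridFTP servers (placeholder host names):
#   -vb            print instantaneous and average transfer rates
#   -p 4           use four parallel TCP streams
#   -tcp-bs ...    2 MB TCP buffer, useful on high-latency WAN links
globus-url-copy -vb -p 4 -tcp-bs 2097152 \
    gsiftp://se.example.jinr.ru/storage/test1G.dat \
    gsiftp://se.example.cern.ch/storage/test1G.dat
```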
MCDB Web Interface
Screenshot of the MCDB web interface, http://mcdb.cern.ch (for the time being only the Mozilla browser is supported).

Russian Data Intensive Grid (RDIG) Consortium: EGEE Federation
Eight institutes make up the RDIG consortium (Russian Data Intensive Grid), the national federation in the EGEE project:
- IHEP, Institute for High Energy Physics (Protvino);
- IMPB RAS, Institute of Mathematical Problems in Biology (Pushchino);
- ITEP, Institute of Theoretical and Experimental Physics (Moscow);
- JINR, Joint Institute for Nuclear Research (Dubna);
- KIAM RAS, Keldysh Institute of Applied Mathematics;
- PNPI, Petersburg Nuclear Physics Institute (Gatchina);
- RRC KI, Russian Research Centre "Kurchatov Institute";
- SINP MSU, Skobeltsyn Institute of Nuclear Physics (MSU).

Russian contribution to EGEE
RDIG is an operational and functional part of the EGEE infrastructure (CIC, ROC, resource centres). Activities:
- SA1: European Grid Operations, Support and Management
- SA2: Network Resource Provision
- NA2: Dissemination and Outreach
- NA3: User Training and Induction
- NA4: Application Identification and Support

JINR role and work in EGEE
- SA1 (European Grid Operations, Support and Management): EGEE-RDIG monitoring and accounting; middleware deployment and resource induction; participation in the OMII and GT4 evaluations and in gLite testing; coordination of the LCG Service Challenge activity in Russia.
- NA2 (Dissemination and Outreach): coordination of this activity in Russia; organization of the EGEE-RDIG conference; creation and running of the RDIG web site (http://www.egee-rdig.ru); dissemination in the JINR member states.
- NA3 (User Training and Induction): organization of grid tutorials, induction courses and training courses for administrators.
- NA4 (Application Identification and Support): coordination of this activity in Russia; organization of HEP applications in Russia on the EGEE infrastructure.

Grid middleware evaluations
The goal of the evaluations is to better understand the functionality, performance, solidity, interoperability, deployability, manageability and usability of components of different grid middleware distributions, and to aid decisions on the possible use of such components in the EGEE middleware and on interoperability between these distributions and the EGEE middleware.
- Evaluation of the OMII distribution by JINR and KIAM, February-April 2005.
- Evaluation of Globus Toolkit 4 by JINR, KIAM and SINP MSU, May-October 2005.

Evaluation of the OMII distribution by JINR and KIAM
http://www.gridclub.ru/library/OMII-evaluaton-EGEE3.ppt
- Installation and configuration, supported platforms.
- Performance, scalability and reliability studies of the OMII services: JobService, DataService, dummy services.
- Aspects of security, authorization, account management, resource allocation and administration relevant to operating a grid with many users, large virtual organizations and many resource centres.
- Interoperability with the gLite Workload Management System (WMS).

Evaluation of Globus Toolkit 4 by JINR, KIAM and SINP MSU
http://theory.sinp.msu.ru/dokuwiki/doku.php?id=egee:gt4:gt4
- Installation and configuration, supported platforms.
- Performance, reliability, functional characteristics and interfaces of Java WS-Core, WS-GRAM, GridFTP, RLS, RFT, WS-MDS4 and the WS Delegation service.
- Aspects of security, authorization, usability and administration.
- Comparison of the corresponding GT4 and gLite components.

Participation in EGEE MW testing
Development of test suites for gLite (EGEE JRA1 activity) by JINR, IHEP and PNPI, from June 2005 and continuing (a sample job description follows this slide):
- WMS DAG tests
- WMS MPI tests
- WMS JDL tests
- R-GMA tests
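Since the WMS tests above revolve around the gLite Job Description Language, a minimal example may help. This is an illustrative sketch of a trivial JDL file, not one of the actual JRA1 test cases:

```
[
  Type          = "Job";
  Executable    = "/bin/hostname";
  StdOutput     = "std.out";
  StdError      = "std.err";
  OutputSandbox = { "std.out", "std.err" };
]
```

Such a file is submitted through the WMS with the gLite command-line tools (e.g. glite-job-submit in the gLite 1.x series of that period); the DAG and MPI test suites exercise the corresponding Type = "dag" and JobType = "MPICH" attributes.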
RDIG monitoring and accounting
http://rocmon.jinr.ru:8080

SC3 goals
- Service Challenge 1 (end of 2004): demonstrate a throughput of 500 MByte/s to a Tier1 in the LCG environment.
- Service Challenge 2 (spring 2005): maintain a cumulative throughput of 500 MByte/s over all Tier1s for a prolonged time, and evaluate the Tier0-Tier1 data-transfer environment.
- Service Challenge 3 (summer to end of 2005): show reliable and stable data transfer to each Tier1: 150 MByte/s to disk and 60 MByte/s to tape (150 MByte/s sustained amounts to roughly 13 TB per day). All Tier1s and some Tier2s are involved.
- Service Challenge 4 (spring 2006): prove that the grid infrastructure can handle LHC data at the proposed rates, from raw-data transfer up to final analysis, with all Tier1s and the majority of Tier2s.
- Final goal: build the production grid infrastructure across Tier0, Tier1s and Tier2s according to the specific needs of the LHC experiments.

Summary of Tier0/1/2 roles
- Tier0 (CERN): safe keeping of the first copy of the RAW data; first-pass reconstruction; distribution of the RAW data and the reconstruction output to the Tier1s; reprocessing of data during LHC down-times.
- Tier1: safe keeping of a proportional share of the RAW and reconstructed data; large-scale reprocessing and safe keeping of the corresponding output; distribution of data products to the Tier2s and safe keeping of a share of the simulated data produced at those Tier2s.
- Tier2: handling of analysis requirements and a proportional share of simulated-event production and reconstruction; no long-term data storage.

Tier2 roles
Tier2 roles vary by experiment, but include:
- production of simulated data;
- production of calibration constants;
- an active role in (end-user) analysis.
One must also consider the services offered to the Tier2s by the Tier1s, e.g. safeguarding of simulation output and delivery of analysis input. There is no fixed dependency between a given Tier2 and a Tier1.

A simple Tier2 model
Each Tier2 is configured to upload Monte Carlo data to, and download data via, a given Tier1. If that Tier1 is logically unavailable, the Tier2 waits and retries; for data download it retrieves the data via an alternate route or Tier1.

Tier1/2 network topology
[Diagram slide.]

Tier2 in Russia
[Diagram slide.]

Universal grid infrastructure at the JINR University Centre
- The grid infrastructure is a set of virtual machines (VMs) running on physical hosts; virtualization is done with User Mode Linux (see the sketch at the end of this section).
- The current number of VMs is 36 (6 VMs on each of 6 hosts).
- All virtual resources are grouped into independent testbeds, which can be used for different purposes: training of system administrators and users in the grid field, and debugging and testing of custom grid services in the desired grid environment.
- A course for system administrators using the NorduGrid ARC middleware was successfully conducted on this infrastructure; an LCG-2/gLite installation and configuration course is planned.

EGEE NA3 courses in Dubna
- 28.06.2004: NA3 introduction courses
- 29-30.03.2005: LCG-2 administrator's course
- 06.09.2005: LCG-2 induction courses for CMS users
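As a closing technical note on the University Centre testbed described above: User Mode Linux boots a Linux kernel as an ordinary user-space process, so one physical host can carry several independent virtual grid nodes. A minimal sketch of starting one such VM; the binary, image and device names are illustrative assumptions, not taken from the talk:

```sh
# 'linux' is the UML guest kernel binary; root_fs.img is a filesystem
# image holding the guest OS (e.g. a Scientific Linux install for LCG-2).
# eth0 is attached to a preconfigured tap device on the host, and
# umid names the instance for the UML management tools.
./linux umid=wn01 mem=256M ubd0=root_fs.img eth0=tuntap,tap0
```

Running six such processes per host yields the 6 x 6 = 36 VMs quoted above.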