“ National Knowledge Network and Grid
- A DAE Perspective ”
B.S. Jagadeesh
Computer Division
BARC.
India and CERN: Visions for future Collaboration 01/March/2011
Our approach to Grids has been evolutionary
• ANUPAM Supercomputers
• To achieve supercomputing speeds
– At least 10 times faster than the available sequential machines in BARC
• To build a general-purpose parallel computer
– Catering to a wide variety of problems
– General-purpose compute nodes and interconnection network
• To keep the development cycle short
– Use readily available, off-the-shelf components
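The 10x-over-sequential goal can be related to how parallelizable a code is. A minimal sketch using Amdahl's law (the 95% parallel fraction and 16 processors are illustrative numbers, not ANUPAM's actual figures):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: overall speedup is limited by the serial fraction,
    no matter how many processors are added."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# A code that is 95% parallelizable, run on 16 processors:
print(round(amdahl_speedup(0.95, 16), 2))  # -> 9.14
```

So even a 95%-parallel code barely reaches the 10x target on 16 processors, which is why both the compute nodes and the interconnection network had to be designed for parallelism.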
ANUPAM Performance over the years
‘ANUPAM-ADHYA’ 47 TeraFlops
Complete solution to scientific problems by exploiting parallelism for:
• Processing (parallelization of computation)
• I/O (parallel file system)
• Visualization (parallelized graphics pipeline / Tiled Display Unit)
Snapshots of Tiled Image Viewer
We now have:
• A large tiled display
• Rendering power with distributed rendering
• Scalability to very large numbers of pixels and polygons
• An attractive alternative to high-end graphics systems
• A deep and rich scientific visualization system
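The core idea of a tiled display — one logical framebuffer partitioned across many rendering nodes — can be sketched as follows (the panel resolution and wall dimensions are illustrative, not ANUPAM's actual configuration):

```python
def tile_regions(width, height, cols, rows):
    """Partition a width x height framebuffer into cols x rows tiles,
    one per rendering node; returns an (x, y, w, h) rectangle per tile."""
    tw, th = width // cols, height // rows
    return [(c * tw, r * th, tw, th)
            for r in range(rows) for c in range(cols)]

# A 4x2 wall of 1920x1080 panels driven as one 7680x2160 logical surface:
regions = tile_regions(7680, 2160, 4, 2)
print(len(regions), regions[0])  # -> 8 (0, 0, 1920, 1080)
```

Each node then renders only its own rectangle, which is what makes the approach scale in both pixels and polygons.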
Post-tsunami: Nagappattinam, India (Lat: 10.7906° N, Lon: 79.8428° E). This one-meter resolution image was taken by Space Imaging's IKONOS satellite on 29 December 2004, just three days after the devastating tsunami hit. (Credit: Space Imaging)
So, we need lots of resources: high-performance computers, visualization tools, data collection tools, sophisticated laboratory equipment, etc.
“Science Has Become Mega Science”
“Laboratory Has To Be A Collaboratory”
Key Concept is “Sharing By Respecting Administrative Policies”
GRID CONCEPT
User Access Point → Resource Broker → Grid Resources → Result
Grid concept (another perspective)
• Many jobs per system -- Early days of computation
• One job per system -- RISC / workstation era
• Two systems per job -- Client-server model
• Many systems per job -- Parallel / distributed computing
• View all of the above as a single unified resource -- Grid computing
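The flow in the grid concept above (user access point → resource broker → grid resources → result) can be sketched as a broker that matches a job's requirements to the least-loaded resource that can satisfy them. The resource names, fields, and matching policy here are hypothetical illustrations, not any specific middleware's algorithm:

```python
def broker_match(job, resources):
    """Pick the least-loaded resource with enough free cores for the job.
    Returns the chosen resource's name, or None if nothing fits."""
    candidates = [r for r in resources if r["free_cores"] >= job["cores"]]
    if not candidates:
        return None
    return min(candidates, key=lambda r: r["load"])["name"]

resources = [
    {"name": "barc-cluster",  "free_cores": 64, "load": 0.7},
    {"name": "igcar-cluster", "free_cores": 32, "load": 0.2},
]
print(broker_match({"cores": 16}, resources))  # -> igcar-cluster
```

Real brokers add authentication, data locality, and policy constraints on top of this, which is where "sharing by respecting administrative policies" comes in.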
LHC Computing
• The LHC (Large Hadron Collider) has become operational and is churning out data.
• Data rates per experiment of >100 MBytes/sec.
• >1 PByte/year of storage for raw data per experiment.
• The computational problem is so large that it cannot be solved by a single computer centre.
• World-wide collaborations and analysis.
– Desirable to share computing and analysis throughout the world.
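The two figures above are mutually consistent: at 100 MB/s, roughly 10^7 live seconds of data-taking per year (our assumption for a typical operating year, not a number from the slide) already yields a petabyte of raw data. A quick check:

```python
rate_mb_s = 100          # per-experiment raw-data rate, MB/s (from the slide)
live_seconds = 1e7       # assumed seconds of data-taking per year
raw_pb = rate_mb_s * live_seconds / 1e9   # 1e9 MB per PB
print(raw_pb)  # -> 1.0 (PB/year of raw data, per experiment)
```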
LEMON architecture
QUATTOR
• Quattor is a tool suite providing automated installation, configuration and management of clusters and farms
• Highly suitable for installing, configuring and managing Grid computing clusters correctly and automatically
• At CERN, currently used to automatically manage >2000 nodes with heterogeneous hardware and software applications
• Centrally configurable and reproducible installations, with run-time management of functional and security updates to maximize availability
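The general idea behind such tools — declare a desired node profile centrally, then converge each node toward it idempotently — can be sketched as follows. This is a generic illustration, not Quattor's actual template language or API, and the package names are made up:

```python
def apply_config(node_state, desired):
    """Compute the actions needed to converge a node's installed-package
    set toward a centrally declared profile (idempotent: a node already
    matching the profile yields no actions)."""
    actions = []
    for pkg, version in desired.items():
        if node_state.get(pkg) != version:
            actions.append(("install", pkg, version))
    for pkg in node_state:
        if pkg not in desired:
            actions.append(("remove", pkg))
    return actions

# Hypothetical profile for a Grid worker node:
desired = {"grid-ce": "3.2", "monitor-agent": "1.9"}
print(apply_config({"monitor-agent": "1.8"}, desired))
```

Because the profile, not the node, is the source of truth, installations are reproducible and security updates can be rolled out centrally — the properties the slide attributes to Quattor.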
Please visit: http://gridview.cern.ch/GRIDVIEW/
• Nationwide networks like ANUNET, NICNET and SPACENET did exist
• The idea is to synergize, integrate and leverage them to form National Information Highways
• Take everyone on board (5000 institutes of learning)
• Provide quality of service
• Outlay of Rs 100 crores for proof of concept
Envisaged applications are:
Countrywide Classrooms,
Telemedicine, E-Governance,
Grid Technology …..
National Knowledge Network
NKN TOPOLOGY
All PoPs are covered by at least two NLDs (national long-distance carriers)
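Covering every PoP with two independent long-distance links raises availability multiplicatively: the PoP is cut off only if both links fail at once. A quick illustration, assuming independent failures and an illustrative 99% per-link availability:

```python
def redundant_availability(a, n=2):
    """Combined availability of n independent parallel links,
    each with individual availability a."""
    return 1.0 - (1.0 - a) ** n

# Two independent 99% links give "four nines" combined availability:
print(round(redundant_availability(0.99, 2), 6))  # -> 0.9999
```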
[NKN topology diagram: three-tier architecture with Core, Distribution and Edge layers]
Current status:
• 15 PoPs and 78 institutes have been connected
• Work in progress for 27 PoPs and 550+ institutes
• Achieve higher availability
NKN
[Connectivity diagram: NKN linking educational institutions; research labs (CSIR/DAE/ISRO/ICAR); the Internet; global research networks (e.g. GEANT); EDUSAT; MPLS clouds; broadband clouds; national/state data centers; National Internet Exchange Points (NIXI); NTRO/Cert-In]
DAE-wide Applications on NKN
• DAE-Grid: Grid resources at BARC, IGCAR, RRCAT and VECC
• WLCG and GARUDA
• Videoconferencing: with NIC, IITs, IISc
• Collab-CAD: Collaborative design of sub-assemblies of the prototype 500 MW Fast Breeder Reactor from NIC, BARC & IGCAR
• Remote classrooms: amongst different Training Schools
ANUNET WIDE AREA NETWORK
[Satellite WAN diagram: INSAT-3C quarter transponder (9 MHz) carrying 8 carriers of 768 Kbps each, linking DAE units across India — CAT, Indore; IOP, Bhubaneshwar; IPR, Ahmedabad; HRI, Allahabad; BARC (Trombay, Mysore, Gauribidnur, Tarapur, Mt. Abu); AERB; NPCIL; HWB (Manuguru, Kota); BRIT; CTCRS, Anushaktinagar, Mumbai; NFC; CCCM; ECIL, Hyderabad; AMD (Secunderabad, Shillong); UCIL I/II/III, Jaduguda; TIFR; IRE; TMC; TMH, Navi Mumbai; DAE, Mumbai; Saha Institute; VECC, Kolkata; BARC FACL; IGCAR, Kalpakkam; IMS, Chennai; MRPU. Sites shown in yellow oblongs are connected over dedicated landlines.]

ANUNET Leased Links Plan for the 11th Plan
[Map of existing and proposed leased links across India, covering Mumbai, Gauribidnur, Mysore, Hyderabad, Chennai, Kalpakkam, Vizag, Manuguru, Bhubaneshwar, Kolkata, Indore, Delhi, Gandhinagar, Mount Abu, Jaduguda, Tarapur, Allahabad, Shillong and Kota]
DAE & NKN
• BARC conducted the first NKN workshop to train network administrators in December 2008
• DAE-Grid
• Countrywide classrooms of HBNI
• Sharing of databases
• Disaster recovery systems (planned)
• DAE gets access to resources in the country
• DAE stands to gain access to resources on GEANT (34 European countries), GLORIAD (Russia, China, USA, Netherlands, Canada, ...) and TEIN3 (Australia, China, Japan, Singapore, ...)
Applications …
Logical Communication Domains Through NKN
[Diagram: the NKN router links the intranet and internet segments of BARC to National Grid Computing (CDAC, Pune); the WLCG Collaboration; the Common Users Group (CUG); Anunet (DAE units, BARC–IGCAR); NKN-Internet (Grenoble, France); and NKN-General (national collaborations)]
LAYOUT OF VIRTUAL CLASSROOM
[Diagram: front and back elevations of the virtual classroom — teacher position, 55" LED displays at front and back, projection screen, and HD cameras]
An Example of a High-Bandwidth Application
Collaboratory? A Collaboratory (ESRF, Grenoble)
Depicts a one degree oscillation photograph on crystals of HIV-1 PR M36I mutant recorded by remotely operating the FIP beamline at ESRF, and OMIT density for the mutation residue I. (Credits: Dr. Jean-Luc Ferrer, Dr. Michel Pirochi & Dr. Jacques Joley, IBS/ESRF, France, Dr. M.V. Hosur & Colleagues, Solid State Physics Division & Computer Division, BARC)
E-Governance?
COLLABORATIVE DESIGN
Collaborative design of reactor components. Credits: IGCAR, Kalpakkam; NIC, Delhi; Computer Division, BARC
UTKARSH
• Utkarsh – dual processor-quad core 80 node (BARC)
• Aksha-itanium – dual processor 10 node (BARC)
• Ramanujam – dual core dual processor 14 node (RRCAT)
• Daksha – dual processor 8 node (RRCAT)
• Igcgrid-xeon – dual processor 8 node (IGCAR)
• Igcgrid2-xeon – quad core 16 node (IGCAR)
DAEGrid: 4 sites, 6 clusters, ~800 cores; connectivity through NKN at 1 Gbps
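The ~800-core figure tallies with the per-cluster specs listed above, under our reading of the slide: "dual processor" means two sockets, "quad/dual core" gives cores per socket, and nodes without a stated core count are taken as single-core per CPU. These per-node assumptions are ours, not the slide's:

```python
# (sockets, cores_per_socket, nodes) per cluster -- our reading of the slide
clusters = {
    "Utkarsh (BARC)":    (2, 4, 80),   # dual processor, quad core
    "Aksha (BARC)":      (2, 1, 10),   # dual processor Itanium
    "Ramanujam (RRCAT)": (2, 2, 14),   # dual processor, dual core
    "Daksha (RRCAT)":    (2, 1, 8),    # dual processor
    "Igcgrid (IGCAR)":   (2, 1, 8),    # dual processor Xeon
    "Igcgrid2 (IGCAR)":  (1, 4, 16),   # quad core Xeon
}
total = sum(s * c * n for s, c, n in clusters.values())
print(total)  # close to the ~800 cores quoted for DAEGrid
```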
Categories of Grid Applications

Category                     Examples                        Characteristics
Distributed Supercomputing   Ab-initio molecular dynamics    Large CPU / memory required
High Throughput              Cryptography                    Harnesses idle cycles
On Demand                    Remote experiments              Cost effectiveness
Data Intensive               CERN LHC                        Information from large data sets / provenance
Collaborative                Data exploration                Supports communication
Moving Forward …… from “Fire and Forget” to “Assured Quality of Service”
- Effect of process migration in distributed environments
- A novel distributed-memory file system
- Implementation of network swap for performance enhancement
- Redundancy issues to address the failure of resource brokers
- Service-centre concept using gLite middleware
- Development of meta-brokers to direct jobs amongst middlewares
- All of the above lead towards better quality of service and simplicity in Grid usage … clouds?
Acknowledgements
All colleagues from the Computer Division, BARC, are gratefully acknowledged for their help in preparing the presentation material. We also acknowledge NIC-Delhi, IGCAR-Kalpakkam and the IT Division, CERN, Geneva.
THANK YOU