e+e– Colliders
Highest Energy e+e– Collider: LEP at CERN, 1989-2000, 200 GeV, ~4 km radius
First e+e– Collider: ADA at Frascati, 1961, 0.2 GeV, ~1 m radius
CMS at LHC: 2007 Start
LHC Schedule Reconfirmed at CERN Council June 2003
First Beams: April 2007; Physics Runs: from Summer 2007
pp at √s = 14 TeV, L = 10^34 cm^-2 s^-1; also heavy ions (HI)
Experiments on the ring:
• ATLAS: pp, general purpose; HI
• CMS (with TOTEM): pp, general purpose; HI
• LHCb: B-physics
• ALICE: HI
HCAL Barrels Done: Installing HCAL Endcap and Muon CSCs in SX5
• 36 Muon CSCs successfully installed on YE-2,3. Avg. rate 6/day (planned 4/day). Cabling + commissioning.
• HE-1 complete; HE+ will be mounted in Q4 2003.
Large Hadron Collider (LHC)
• 14 TeV proton-proton collisions
• Luminosity: 10^34 cm^-2 s^-1
• 2835 bunches/beam, 10^11 protons/bunch
• Bunch crossing: every 7.5 m (25 ns), i.e. 4x10^7 Hz
• Proton collisions: 10^9 Hz
• Parton collisions → Higgs production: ~10,000 per day
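As a rough consistency check, these rates tie together with simple arithmetic. The sketch below uses only the slide's own numbers (25 ns spacing, 10^9 Hz collisions, 10^34 luminosity, ~10,000 Higgs/day); the pile-up and the ~12 pb Higgs cross-section are derived, illustrative values, not quoted figures:

```python
# Back-of-the-envelope check of the rates quoted above.
# Only the slide's numbers go in; pile-up and sigma_H come out.

DAY_S = 86_400

crossing_rate_hz = 1 / 25e-9                    # 25 ns spacing -> 4e7 Hz
collision_rate_hz = 1e9                         # quoted proton-collision rate
pileup = collision_rate_hz / crossing_rate_hz   # ~25 pp interactions per crossing

luminosity_cm2_s = 1e34                         # quoted design luminosity
higgs_rate_hz = 1e4 / DAY_S                     # ~10,000/day -> ~0.12 Hz
implied_sigma_pb = higgs_rate_hz / luminosity_cm2_s / 1e-36   # 1 pb = 1e-36 cm^2

print(f"bunch crossings: {crossing_rate_hz:.1e} Hz")
print(f"pile-up:         ~{pileup:.0f} interactions per crossing")
print(f"implied sigma_H: ~{implied_sigma_pb:.0f} pb")
```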
LHC Magnets
• 9 Tesla field
• Dipoles separated by 20 cm
• Cooled to superfluid liquid-helium temperatures
• 20 km of magnets
LHC Detectors
• CMS and ATLAS: pp, general purpose
• LHCb: B-physics, CP violation
• ALICE: heavy ions, quark-gluon plasma
LHC: CERN Laboratory in Geneva, Switzerland
CMS Detector
300-foot shaft
CMS Cavern (300 feet underground)
One of the four LHC detectors (CMS)
Online system: a multi-level trigger to filter out background and reduce data volume:
40 MHz (80 TB/sec)
→ Level 1: special hardware
75 kHz (75 GB/sec)
→ Level 2: embedded processors
5 kHz (5 GB/sec)
→ Level 3: PCs
100 Hz (100-1000 MB/sec)
→ Data processing: offline analysis, selection
Raw recording rate: 0.1-1 GB/sec; 3-8 PetaBytes/year
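A small sketch of what this cascade implies, using the rates above plus the ~1 MByte event size quoted on a later slide; the sustained recording rates and calendar-year live time are assumptions for illustration:

```python
# Rejection factors of the trigger cascade and the yearly data volume
# implied by the recording rate. Rates are from the slide; the ~1 MB
# event size appears later in the talk.

levels = [                       # (stage, output rate in Hz)
    ("beam crossings",     4.0e7),
    ("Level 1 (hardware)", 7.5e4),
    ("Level 2 (embedded)", 5.0e3),
    ("Level 3 (PC farm)",  1.0e2),
]
for (name, rate), (next_name, next_rate) in zip(levels, levels[1:]):
    print(f"{name} -> {next_name}: rejection x{rate / next_rate:,.0f}")

EVENT_MB = 1.0                   # ~1 MByte per event
print(f"Level 3 output: ~{levels[-1][1] * EVENT_MB:.0f} MB/sec")

SECONDS_PER_YEAR = 3.15e7        # full calendar year
for gb_per_s in (0.1, 0.25):     # assumed sustained recording rates
    pb = gb_per_s * SECONDS_PER_YEAR / 1e6
    print(f"{gb_per_s} GB/sec sustained ~ {pb:.0f} PB/year")
```

Sustained rates of roughly 0.1-0.25 GB/sec over a calendar year reproduce the slide's 3-8 PetaBytes/year.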
LHC Computing: Different from Previous Experiment Generations
Regional Center Hierarchy (Worldwide Data Grid)
• One bunch crossing per 25 nsec; each event is ~1 MByte in size
• Physicists work on analysis "channels"
• Processing power: ~200,000 of today's fastest PCs
Experiment → Online System: ~PBytes/sec
Tier 0+1: Offline Farm, CERN Computer Center: 100-1000 MBytes/sec from the online system
Tier 1: Regional Centers (France, FNAL, Italy, UK): ~2.4 Gbits/sec from CERN
Tier 2: Tier2 Centers, with physics data caches: ~0.6-2.5 Gbits/sec (~622 Mbits/sec links)
Tier 3: Institutes (~0.25 TIPS each): 100-1000 Mbits/sec
Tier 4: Workstations
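To make the link speeds concrete, a small sketch of bulk replication times; the dataset sizes and the 50% usable-bandwidth factor are illustrative assumptions, not figures from the slide:

```python
# How long bulk data replication takes over the tier links shown above.
# Dataset sizes and the efficiency factor are assumptions.

def transfer_days(size_tb: float, link_gbps: float, efficiency: float = 0.5) -> float:
    """Days to move size_tb terabytes over a link_gbps link when only
    `efficiency` of the nominal bandwidth is usable end to end."""
    bits = size_tb * 1e12 * 8
    return bits / (link_gbps * 1e9 * efficiency) / 86_400

# e.g. a hypothetical 1 PB reconstructed-data set to a Tier 1 center:
print(f"{transfer_days(1000, 2.4):.0f} days over a ~2.4 Gbits/sec link")
# and a hypothetical 100 TB analysis sample to a Tier 2 center:
print(f"{transfer_days(100, 0.622):.0f} days over a ~622 Mbits/sec link")
```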
Production BW Growth of Int'l HENP Network Links (US-CERN Example)
Rate of Progress >> Moore's Law
• 9.6 kbps Analog (1985)
• 64-256 kbps Digital (1989-1994) [X 7-27]
• 1.5 Mbps Shared (1990-3; IBM) [X 160]
• 2-4 Mbps (1996-1998) [X 200-400]
• 12-20 Mbps (1999-2000) [X 1.2k-2k]
• 155-310 Mbps (2001-2) [X 16k-32k]
• 622 Mbps (2002-3) [X 65k]
• 2.5 Gbps (2003-4) [X 250k]
• 10 Gbps (2005) [X 1M]
A factor of ~1M over the period 1985-2005 (a factor of ~5k during 1995-2005)
HENP has become a leading applications driver, and also a co-developer of global networks.
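The "Rate of Progress >> Moore's Law" claim can be checked from the factors above; the 1.5- and 2-year doubling times below are the conventional Moore's-law values, not figures from the slide:

```python
# Comparing the ~1e6 bandwidth growth over 1985-2005 with Moore's law.
# Doubling times are conventional assumptions.

bw_factor, years = 1e6, 20
bw_per_year = bw_factor ** (1 / years)            # ~2.0x per year

print(f"HENP links: x{bw_factor:,.0f} in {years} years (~{bw_per_year:.1f}x/year)")
for doubling_years in (1.5, 2.0):
    moore = 2 ** (years / doubling_years)
    print(f"Moore's law ({doubling_years}-yr doubling): x{moore:,.0f} in {years} years")
```

Even with the faster 1.5-year doubling, Moore's law yields only ~10^4 over 20 years, two orders of magnitude below the observed ~10^6; the ~2x/year pace also matches the roadmap's "~1000 times bandwidth growth per decade".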
HENP Major Links: Bandwidth Roadmap (Scenario) in Gbps
Year   Production             Experimental             Remarks
2001   0.155                  0.622-2.5                SONET/SDH
2002   0.622                  2.5                      SONET/SDH; DWDM; GigE Integration
2003   2.5                    10                       DWDM; 1 + 10 GigE Integration
2005   10                     2-4 X 10                 Switch; Provisioning
2007   2-4 X 10               ~10 X 10; 40 Gbps        1st Gen. Grids
2009   ~10 X 10 or 1-2 X 40   ~5 X 40 or ~20-50 X 10   40 Gbps Switching
2011   ~5 X 40 or ~20 X 10    ~25 X 40 or ~100 X 10    2nd Gen Grids; Terabit Networks
2013   ~Terabit               ~MultiTbps               ~Fill One Fiber
Continuing the Trend: ~1000 Times Bandwidth Growth Per Decade; We are Rapidly Learning to Use Multi-Gbps Networks Dynamically
History – One Large Research Site
Current traffic to ~400 Mbps; projections: 0.5 to 24 Tbps by ~2012.
Much of the traffic: SLAC ↔ IN2P3/RAL/INFN, via ESnet+France and Abilene+CERN.
Digital Divide Illustrated by Network Infrastructures: TERENA NREN Core Capacity
[Bar chart: per-country NREN core capacity in Gbps (scale 0-20), showing current core capacity and the expected increase in two years, for ~39 countries ranging from Ukraine, Moldova, Azerbaijan, Jordan, and Morocco at the low end through Germany, the Netherlands, Poland, Sweden, and the United Kingdom at the high end]
Core capacity goes up in large steps: 10 to 20 Gbps; 2.5 to 10 Gbps; 0.6-1 to 2.5 Gbps.
SE Europe, Mediterranean, FSU, Middle East: less progress, based on older technologies (below 0.15-1.0 Gbps). The Digital Divide will not be closed.
Source: TERENA
The Global Lambda Integrated Facility for Research and Education (GLIF)
• Virtual organization supporting persistent data-intensive scientific research and middleware development on "LambdaGrids"
• Grid applications "ride" on dynamically configured networks based on optical wavelengths
• Architecting an international LambdaGrid infrastructure