TRANSCRIPT

Higher Speed Study Group Call-For-Interest
IEEE 802.3 Working Group
San Diego, CA
July 18, 2006
(Higher Speed Study Group CFI, V 1.01)
Objective for this Meeting
• To measure the interest in starting a study group for Higher Speed Ethernet
• We don't need to
  – Fully explore the problem
  – Debate strengths and weaknesses of solutions
  – Choose any one solution
  – Create a PAR or five criteria
  – Create a standard or specification
• Anyone in the room may speak / vote
• RESPECT… give it, get it
Agenda
• Presentations
  – "Higher Speed Ethernet - Overview," Brad Booth
  – "The Need for Higher Speed Ethernet," Jeff Cain
  – "The Technical Viability of Higher Speed Ethernet," Petar Pepeljugoski
  – "Higher Speed Ethernet - Why Now," John D'Ambrosia
• Discussion
• Call for Interest
• Future Work
• Cabo Wabo Time!
Higher Speed Ethernet - Overview
Presented by Brad Booth, Quake Technologies
Value = Growth
• Metcalfe's Law
  – The value of a network is proportional to the square of the number of users (see the formula below)
• Permitted Ethernet to enter new markets
  – Metro, core, carrier
  – Access, consumer
  – Industrial, video, medical, aircraft
• Top-of-the-network requirements
  – Bandwidth is a commodity
  – Resiliency
  – Reduced operational costs (OpEx)
[Figure: increasing performance and bandwidth demands up the network hierarchy; the HSSG target market sits at the top of the network]
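The usual formal statement of Metcalfe's Law referenced above (standard background, not spelled out on the slide) counts the potential pairwise connections among n users:

```latex
V(n) \;\propto\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\approx\; \frac{n^{2}}{2}
```

so doubling the user base roughly quadruples the network's value.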
Managed Ethernet Switches
[Figure: managed Ethernet switch port shipments in millions, 1993-2010, broken out by speed (100 Mb/s, 1 Gb/s, 10 GE). The curves are annotated with Gig E, 1000BASE-T, CX4, 10GBASE-T, Backplane, and LRM milestones and with the projected HSSG timeframe. Source: Dell'Oro 2006]
Managed Ethernet Switches (continued)
[Figure: the same port-shipment chart, annotated "Increasing Bandwidth Aggregation." Source: Dell'Oro 2006]
802.3ad Link Aggregation (LAG)
• A temporary fix for increased bandwidth demand
• Increased complexity
  – Difficult to plan for capacity and traffic engineering
  – Harder to manage and troubleshoot multiple physical links behind a single logical interface
  – Cable and link management
• Uneven distribution of traffic
  – Limitations in the standard
  – Inefficient distribution of large flows
  – Load balancing requires packet inspection or other knowledge (see the sketch after this list)
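To make the uneven-distribution point concrete, here is a minimal sketch (mine, not from the presentation) of the hash-based distribution a LAG member-selection function typically uses. All frames of a flow must hash to the same member link to preserve frame ordering, so one large flow can never use more than a single link, however many members the LAG has. The MAC addresses and loads below are hypothetical.

```python
# Toy model of hash-based LAG frame distribution. 802.3ad leaves the
# distribution algorithm implementation-specific; hashing flow
# identifiers is one common choice. Because every frame of a flow maps
# to one member link (to keep frames in order), a single large flow
# saturates that link while other members stay underused.

import zlib
from collections import Counter

def pick_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Choose a LAG member link from flow identifiers (here, the MAC pair)."""
    return zlib.crc32(f"{src_mac}-{dst_mac}".encode()) % num_links

# Hypothetical offered load per flow, in Gb/s.
flows = {
    ("00:aa", "00:bb"): 9.0,  # one "elephant" flow
    ("00:aa", "00:cc"): 0.5,
    ("00:dd", "00:bb"): 0.5,
}

link_load = Counter()
for (src, dst), gbps in flows.items():
    link_load[pick_link(src, dst, num_links=4)] += gbps

# The 9 Gb/s flow lands on exactly one 10 Gb/s member link; the 4-link
# LAG's nominal 40 Gb/s of capacity cannot help that flow grow further.
print(dict(link_load))
```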
Market Drivers
• Bird's-eye view
  – Not targeted at the desktop
  – Multiple applications throughout the network
  – The cost model at aggregation points should be considered
• Possible reach targets
  – Backplane (1 m)
  – Very short reach (< 25 m)
  – Short reach (< 100 m)
  – Long reach (< 40 km)
  – Very long reach (> 40 km)
[Figure: market adopters timeline, 2006-2017, covering Internet eXchanges, ISP backbones, content providers, ISP aggregation, supercomputing / R&E, broadband aggregation, campus backbones, corporate backbones, and corporate data centers]
• Implication…
  – Possibility of more than one project
  – The study group may find multiple projects to initiate
"The Need for Higher Speed Ethernet"
Presented by Jeff Cain, Cisco Systems
Driving Bandwidth Requirements
• Consumers
  – Increasing penetration
  – Increasing bandwidth
• Content
  – Increasing bandwidth requirements
• Networking - pulling it all together
  – Service providers
  – Internet eXchanges
• Other
  – High Performance Computing
  – Data centers
  – Research & Development
Consumers and Broadband Access
• Applications need 50+ Mb/s per home
  – File sharing / peer-to-peer
  – IPTV / Video on Demand / High-Definition TV
  – Gaming
• Broadband access
  – 50 / 100 Mb/s typical in Japan and Korea
  – US / Europe: 1 to 30 Mb/s
    • Verizon fiber deployment for FiOS
    • Comcast to trial 100 Mb/s service in 2006 (DOCSIS 3.0)
    • Centillium Communications announces a VDSL2 offering with 100 Mb/s symmetric data rates
    • Utah's UTOPIA, covered in June's IEEE Spectrum
      – 100 Mb/s point-to-point network to a potential 2.5M customers
      – "Easily" upgradeable to 1 Gb/s
• IEEE 10GEPON Study Group
Content Providers
• Personalized content is a killer application
• Google, Yahoo!, MSN, and YouTube deliver high-bandwidth personalized content
  – Yahoo! (Source: Adam Bechtel, Yahoo!, "The Need for 100GE," DesignCon 2006 Management Forum Panel)
    • 10 GbE used in the data center, metro, and WAN
    • 6 x 10 GbE and 4 x 10 GbE LAG required Q1'06
    • Ethernet everywhere
  – Yahoo! Asia Pacific (Source: Chris Choi, Yahoo! Asia Pacific)
    • 40+ Gb/s aggregate bandwidth needed to support Major League Baseball streaming in Asia Pacific
  – YouTube (Source: Colin Corbett, YouTube, "Peering of Video," NANOG 37)
    • Personal video-sharing site with over 20 Gb/s of traffic (50 million videos viewed per day)
    • ~20% monthly traffic growth, all unicast traffic
• Online DVD rental initiatives by Netflix and Amazon
Video on Demand: Driving Bandwidth
• Comcast - customer profile (Source: Ian Cox, Cisco Systems / Comcast)
  – 10M digital subscribers - 47% of the customer base
  – 70% of digital customers use Video on Demand
  – Only 28% are getting High Definition programming
• Network profile
  – The driving bandwidth factor is Video on Demand unicast traffic
    • Standard Definition (SD): 3.5 Mb/s
    • High Definition (HD): 19 Mb/s
  – Regional networks service a metro area
  – One 10 GbE link can only deliver 2,500 SD streams (see the arithmetic below)
  – Sections of the regional network use up to 7 x 10 GbE links today
    • 1 to 2 bidirectional links for voice, broadcast TV, and Internet
    • 2 to 5 unidirectional links for Video on Demand
• Personalized content - killer app!
  – Growth potential for digital subscribers
  – Growth potential for HD programming
  – Bandwidth requirements of HD
[Used with permission from Comcast]
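The stream counts follow from simple division (my arithmetic; the slide states only the 2,500 figure). A 10 Gb/s link divided by the per-stream rates gives:

```latex
\frac{10\ \text{Gb/s}}{3.5\ \text{Mb/s per SD stream}} \approx 2857
\;\Rightarrow\; \sim 2500\ \text{streams after protocol overhead and headroom};
\qquad
\frac{10\ \text{Gb/s}}{19\ \text{Mb/s per HD stream}} \approx 526
```

so a shift to HD cuts a link's stream capacity by more than 5x.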
Service Providers
• High growth projected as broadband penetration and speed increase
  – Core links and aggregation-to-core links: n x 10 GbE LAG
  – Customer links: 10 GbE and n x 10 GbE LAG
• Current interfaces and bundling technologies will be challenged to meet future demand in the largest networks
• Level 3 (Source: Joe Lawrence, Level 3 Communications)
  – Today (4Q05)
    • Edge router connectivity ranges from N x GbE to N x 10 GbE, redundant
    • Backbone router connectivity can range up to 8 x 10 GbE
    • Most major routes operate at 2-4 x 10 GbE
    • One major corridor operates at 8 x 10 GbE
  – Future (2010)
    • Backbone-to-LAN aggregation demands grow to > 100 x 10 GbE
    • Most major routes projected to require 20-60 x 10 GbE
    • One major corridor projected to require over 100 x 10 GbE
Traffic Patterns at Major IXs
[Figure: monthly average peak traffic at three IXs in Japan - traffic in Gb/s (monthly average of the peak traffic in a day), 2000-2010, annotated "> 10x" with the completion of 10GbE (802.3ae) marked. Provided by Itsuro Morita, KDDI Labs; source: report by the Japanese Ministry of Internal Affairs and Communications]
[Figure: daily traffic at JPNAP in Japan. Used with permission from Internet Multifeed Co.]
[Figure: daily maximum Gb/s at JPIX in Japan. Source: Japan Internet Exchange Co. Ltd.]
[Figure: AMS-IX in Amsterdam - average traffic volume in TByte/day on a log scale, 1998-2012. Actual data Jan '98 to April '06; forecast May '06 to Dec '12 (see the compounding arithmetic below). The theoretical forecast assumes the historical 7% monthly growth rate observed from 2001 to April 2006; the average traffic growth rate since 2003 has been 6% per month. Source: "actual" data from AMS-IX]
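For scale (my arithmetic, not the slide's): a 7% monthly growth rate compounds to more than a doubling per year, which is what makes the forecast curve so steep:

```latex
1.07^{12} \approx 2.25 \ \text{(per year)}, \qquad 1.07^{36} \approx 11.4 \ \text{(over three years)}
```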
High Performance Computing
[Figure: average Gigaflops per second (Top500, '93-'05) on a log scale, 1994-2014, with markers for the GE and 10GE standards, GE deployment at 585 GF/s, 10GE deployment at 7,000 GF/s, and projected HSSG deployment at 50K to 110K GF/s. Source: Top500.org]
• Historical observations
  – 4 years between standard and deployment
  – GE captures 50% of the market in 3 years (2002-2005)
  – 10GE enters the Top500 in 2006 (included in the Gigabit Ethernet numbers)
  – 12x increase in average Gigaflops: 10x increase in Ethernet interconnect
• Forecast
  – 10x increase needed at 84,000 Gigaflops (2010)
  – There will always be early adopters.
Data Center
• Fabric consolidation
  – Converging LAN, storage, and interprocessor communication (IPC)
  – TOE/TCP acceleration for LAN traffic
  – iSCSI for the storage market
  – iWARP/RDMA for low-latency IPC
• Modular platforms
  – Shift to blade server deployments
  – Bandwidth aggregation at the switch blade
  – Multicore/multithreaded CPUs handle increased bandwidth availability
• Bandwidth hierarchy
  – Consolidation increases traffic bandwidth
  – Modular platforms increase uplink bandwidth requirements
[Figure: consolidated fabric - LAN, IPC, and storage traffic converge onto a 10 GbE fabric, with Higher Speed Ethernet uplinks]
Research and Development
• Network utilization, 2006
  – From Dec 2003 to June 2006: mean 1.5 Gb/s
  – Min peak traffic has reached 18.9 Gb/s
  – A minimum ratio of 5x min peak to mean average is needed
• Forecast, 2010 (see the check below)
  – Mean: 30 Gb/s
  – Min peak traffic forecast: 150 Gb/s
• Multiple projects could have a significant impact on transport projections over the next 5 to 10 years.
[Figure: ESnet daily ingress/egress traffic. Source: ESnet / LBNL; used with permission from ESnet]
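As a quick consistency check (mine, not the slide's), the 2010 forecast matches the stated 5x minimum peak-to-mean ratio exactly:

```latex
\frac{150\ \text{Gb/s (min peak)}}{30\ \text{Gb/s (mean)}} = 5
```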
The Ethernet Ecosystem
[Figure: research, education, and government facilities (high performance computing) on research networks; consumer broadband access on broadband access networks; corporate data centers and enterprise (high performance computing) on enterprise networks; and content providers on content networks - all meeting the Internet backbone networks at Internet eXchange and interconnection points]
Bandwidth and Growth Projections
[Figure: the Ethernet ecosystem, annotated with operator projections:]
• Level 3: 8 x 10 GbE LAG today; BW growth of 15x in 5 years (~70%/year)
• Internet Exchanges: up to 8 x 10 GbE LAG today; BW growth of 50-75% per year for the next 3-5 years
• Cisco: 10 GbE today; 40+ GbE (100 GbE preferred) in 5 years
• Cox: 10 GbE today; BW growth of 50-75% per year for the next 3-5 years
• Comcast: 4 x 10 GbE LAG today; 3x BW increase in 3 to 5 years
• Yahoo!: 4 x 10 GbE LAG today; BW doubling in < 12 months
• LLNL: 4 x 10 GbE LAG and 500 x 10 GbE ports today; 10x speed requirement in 5 years on deployed ports
• ESnet: 10 GbE today; 10 Gb/s on 20+ links 5 years from now; 5-10 locations will require more than 40 Gb/s
Conclusions
• Broadband penetration, access speeds, and high-bandwidth content are driving core infrastructure bandwidth
• Multiple applications need higher speed
  – Portals / content providers: bandwidth needs track broadband growth
  – Video on Demand: regional networks already in the tens of Gb/s
  – Service providers: 8 x 10 GbE links now, tens of 10 GbE links in the future
  – Internet eXchanges: aggregation points for Internet traffic
  – High Performance Computing: ≈12x improvement in processing drives a 10x jump in Ethernet interconnect
  – Data centers: fabric consolidation, server and storage consolidation, large clusters
  – Research & Development: 2010 forecast of 30 Gb/s mean traffic, 150 Gb/s min peak traffic
• 10 GbE with LAG is hitting scaling limits today
• Ethernet is the preferred technology in the end-to-end ecosystem
The Technical Viability of Higher Speed Ethernet
Presented by Petar Pepeljugoski, IBM Research
Introduction
• Marketing input for application spaces
  – "LAN" reach (tens to hundreds of meters)
  – "WAN" reach (hundreds of meters to tens of kilometers)
• Solution space
  – Faster is an option, but not the only one
  – "Higher" speed means a "fatter" pipe:
    • By speed
    • By number of wavelengths
    • By number of lanes
• Issues to consider
  – Architecture complexity
  – Component maturity
Example - Aggregation at the Physical Layer (APL)
[Figure: a single MAC and reconciliation sublayer bonded over 1 to "N" 10GBASE-R (or 10GBASE-W) PHYs. An nGMII fans out, via lane bonding, to N XGMII/PCS/PMA/PMD stacks, each with its own MDI and medium. A lane-bonding sketch follows.]
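A minimal sketch of the lane-bonding idea (illustrative only; the actual distribution and reassembly rules would be for the study group to define): a single MAC stream is striped round-robin across N lower-speed lanes and re-interleaved in order at the receiver.

```python
# Illustrative round-robin lane bonding for APL: one ordered stream of
# transmission units is striped across N lanes and reassembled in
# order. A toy model, not an 802.3-defined distribution algorithm.

from itertools import chain, zip_longest
from typing import List

def stripe(units: List[bytes], num_lanes: int) -> List[List[bytes]]:
    """Distribute units across lanes round-robin (unit i -> lane i mod N)."""
    lanes: List[List[bytes]] = [[] for _ in range(num_lanes)]
    for i, unit in enumerate(units):
        lanes[i % num_lanes].append(unit)
    return lanes

def reassemble(lanes: List[List[bytes]]) -> List[bytes]:
    """Re-interleave the lanes; round-robin striping preserves order."""
    pad = object()
    merged = chain.from_iterable(zip_longest(*lanes, fillvalue=pad))
    return [u for u in merged if u is not pad]

units = [bytes([i]) for i in range(10)]
lanes = stripe(units, num_lanes=4)     # e.g., 4 x 10G lanes under one MAC
assert reassemble(lanes) == units      # the original order survives
```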
Using APL to Create a Fatter Pipe
• Multi-channel PHY
  – Multiconductor cable, ribbon fiber
  – Parallel backplane channels
• Multi-wavelength (WDM) PHY
  – N wavelengths on a single fiber pair
• Multi-wavelength (DWDM) system
  – Single wavelength per module
  – External optical MUX/DEMUX
[Figure: three APL variants, each based on aggregation at the physical layer. (1) Multi-channel PHY: MAC/reconciliation over an nGMII to a PHY comprised of 1:N lanes (PCS/PMA/PMD) onto 1:N sets of conductors or fibers (multiconductor Cu cable or ribbon fiber). (2) (D)WDM PHY: MAC/reconciliation over a yGMII* to a 1:N multiwavelength PHY with an integrated optical MUX/DEMUX onto a single or dual fiber. (3) DWDM system: 1 to "N" 10GBASE-R PHYs, each with its own XGMII, on wavelengths λ1 … λN through an external optical MUX/DEMUX onto a single or dual fiber. (* y = 10·N)]
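In every variant the aggregate rate is simply the lane (or wavelength) count times the per-lane rate (my arithmetic, anticipating the demos that follow):

```latex
C = N \times M\ \text{Gb/s}, \qquad \text{e.g. } N = 10,\ M = 10 \;\Rightarrow\; C = 100\ \text{Gb/s}
```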
ASIC Technology Roadmap
• Typical 10 Gb/s MAC
  – 64-bit-wide data path
  – 156.25 MHz
  – 90 nm or 130 nm silicon
• MAC integration with other circuitry
• Implementation discussions suggest speed improvement is possible (worked numbers after this list)
  – By data path increase
  – By frequency increase
  – In 45-90 nm ASIC processes
[Figure: ASIC technology roadmap. Used with permission from Fujitsu]
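The quoted MAC geometry checks out, and scaling either dimension is the point of the data-path/frequency bullets. The 100 Gb/s combination below is my illustration, not a rate the slide commits to:

```latex
64\ \text{bits} \times 156.25\ \text{MHz} = 10\ \text{Gb/s};
\qquad
256\ \text{bits} \times 390.625\ \text{MHz} = 100\ \text{Gb/s}
```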
PMD Components
• PINs: bandwidths supporting speeds beyond 10 GBaud available
• Drivers and TIAs
  – Challenging for a serial solution
• Optical connectors
  – LC, SC, ST, FC/PC and variations for serial/WDM; MTP/MPO for parallel links
  – Other connector types for reduced cost?
• WDM muxes and demuxes, optical equalizers, etc.
  – Integrated photonics improves density and power consumption
• Sources: laser diodes
  – 850 nm 10 Gb/s VCSELs (singlets) widely available; arrays demonstrated
  – 1300 nm VCSELs up to 4 Gb/s; progress expected towards 10 Gb/s
  – 1300 nm and 1550 nm edge-emitting lasers in wide use at 10 Gb/s
  – Directly modulated laser diodes: speeds beyond 10 Gb/s demonstrated
  – External modulators integrated in CMOS and InP
• Optical amplifiers
• Equalization
  – Links utilizing equalization to be standardized (802.3ap, 802.3aq)
Integrated Photonics
• Optical equalizer implemented as a silica planar lightwave circuit (chip size 35 mm x 45 mm, with adjustable couplers and adjustable phase) [Doerr, used with permission from Lucent]
• DWDM muxes and demuxes (10-λ) implemented as photonic integrated circuits
• Silicon optical filters for DWDM multiplexing/demultiplexing
  – Electrically tunable, integrated with control circuitry
• Silicon 10 Gb/s modulators [R. Merel, used with permission from Luxtera]
  – Low loss, low power consumption
  – Excellent signal integrity
DWDM Example: (N) Channels x (M) Gb/s
Demo: 10 x 10 Gb/s DWDM
• Transmitter and receiver use photonic integrated circuits
  – Commercially deployed in carrier networks since 2004
[Figure: 10-channel DWDM spectrum, normalized power (dB) vs. wavelength (1526-1542 nm); 10-channel DWDM transmitter with 10-λ MUX, modulators, and lasers; 10-channel TIA. D. Perkins, used with permission from Infinera]
Multi-channel (Parallel) Example: (N) Ch. x (M) Gb/s
Demo: 120 Gb/s, 12 x 10 Gb/s
• For 40 Gb/s, commercially available today: SNAP12 (12 x 3.125 Gb/s)
• Extension to 12 x 10 Gb/s demonstrated in 2003
  – SiGe LDD, SiGe Rx, 10 Gb/s lasers
  – 12 channels running at BER < 10^-12, link length 316 m using OM3 fiber
  – 12-channel, 9.9-11.0 Gb/s/channel tester designed and built
[Figure: 12-channel TX and RX OE chips. D. Kuchta, used with permission from IBM]
Communications Industry Snapshot
• Established technology
  – Aggregation
    • VCAT and LCAS
    • 802.3ah-type aggregation
    • XAUI in 802.3ae
    • Inverse muxing over ATM
    • PPP Multilink
    • DOCSIS 3.0 channel bonding
  – ITU-T
    • DWDM wavelength plan
    • WDM component specs
    • "Black box" and "black link" DWDM specification methodologies
  – Ribbon fibers, connectors, and lane definitions
• Industry-related activities
  – SNAP12 and POP4 MSAs
  – InfiniBand (IB) standard for up to 96 Gb/s (120 GBaud; see the arithmetic after this list)
    • Variable lane count (1, 4, 8, 12) at 2, 4, and 8 Gb/s lane speed
  – Optical Internetworking Forum
    • CEI-25 project (20-25 Gb/s electrical signaling)
    • 100-160 Gb/s interfaces
  – OC-768 deployment
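The InfiniBand figures reconcile once 8b/10b line coding is taken into account (my arithmetic: each lane's 8 Gb/s of data rides on 10 GBaud of line rate):

```latex
12\ \text{lanes} \times 8\ \text{Gb/s} = 96\ \text{Gb/s};
\qquad
12\ \text{lanes} \times 10\ \text{GBaud} = 120\ \text{GBaud}
```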
Conclusions
• Aggregation at the Physical Layer (APL)
  – Lane bonding in multiple 802.3 clauses
  – Scalable 1-to-"N"-lane PHYs
• Building-block technologies
  – Existing PMD technology
  – Multi-wavelength devices demonstrated
  – Multi-channel devices demonstrated
  – Repackaging of existing 10 Gb/s optical technology
• Finer ASIC processes forecast to be available if required for the MAC and MAC clients
• The technology exists to enable Higher Speed Ethernet
Higher Speed Ethernet - Why Now?
Presented by John D'Ambrosia, Force10 Networks
The Need for Higher Speed
• The work of the past 25 years is paying off!
  – (Traditional) LAN deployment
  – Non-LAN deployment
  – Technology development
• Traffic is growing everywhere
  – The masses have more, and faster, ways to access the network
  – Higher-bandwidth content
  – New applications enabled
• Multiple applications need higher speed
Summary
• Applications are challenging today's solutions
• "Higher" speed is needed throughout the entire ecosystem
  – Needed by 2010 for multiple applications
• Past efforts took 3 to 4 years
  – 10GE
  – EFM
  – 10GBASE-T
• We need to begin the process and study the problem
Supporters
David Law, 3COM
Kam Patel, ADC
David Yanish, ADC
Adam Healey, Agere
Henk Steenman, AMS-IX
Howard Baumer, Broadcom
Kevin Brown, Broadcom
Wael Diab, Broadcom
Howard Frazier, Broadcom
Ali Ghiasi, Broadcom
Pat Thaler, Broadcom
Arthur Marris, Cadence
Alessandro Barbieri, Cisco
Hugh Barrass, Cisco
Mark Nowell, Cisco
Tom Lindsay, ClariPhy Communications
Menachem Abraham, Columbus Advisors
John Abbott, Corning
Steve Swanson, Corning
John Dallesasse, Emcore
Lane Patterson, Equinix
Arne Alping, Ericsson
Eddie Tsumura, Excelight Communications
Andy Moorwood, Extreme Networks
Jonathan King, Finisar
Joel Goergen, Force10 Networks
Pete Tomaszewski, Force10 Networks
Greg Chesson, Google
Petar Pepeljugoski, IBM Research
Larry Rubin, Independent
Drew Perkins, Infinera
Ilango Ganga, Intel
Gopal Hegde, Intel
Schelto van Doorn, Intel
Toshinori Ishii, Internet Multifeed Co.
Takejiro Takabayashi, Japan Internet Exchange Co. Ltd.
Wenbin Jiang, JDSU
Mike McConnell, KeyEye Communications
Itsuro Morita, KDDI Labs
Alan Flatman, LAN Technologies
Mike Bennett, LBNL
Joe Lawrence, Level3
Roger Merel, Luxtera
Chris Di Minico, MC Communications
Rolf McClellan, McClellan Consulting
Gourgen Oganessyan, Molex
Jimmy Sheffield, Newport Enterprises
Peter Schoenmaker, NTT America
David Martin, Nortel
Kirk Bovill, OCP
Robert Lingle, OFS
George Oulundsen, OFS
Rich Taborek, Onde Technology
Ed Cornejo, Opnext
Matt Traverso, Opnext
Mike Dudek, Picolight
Jack Jewell, Picolight
Brian Holden, PMC-Sierra
Dan Dove, ProCurve by HP
Brad Booth, Quake Technologies
Rami Kanama, Redfern Integrated Optics
John Monson, Scintera Networks
George Zimmerman, Solarflare
Rick Rabinovich, Spirent Communications
Shimon Muller, Sun
Stephan Korsback, Switchcore
Glen Kramer, Teknovus
Andre Szczepanek, Texas Instruments
Ralf-Peter Braun, T-Systems
Suveer Dhamejani, Tyco Electronics
Eric Lynskey, UNH-IOL
Bob Noseworthy, UNH-IOL
Martin Carroll, Verizon
Frank Chang, Vitesse
Kevin Daines, World Wide Packets
Tom Palkert, Xilinx
Adam Bechtel, Yahoo!
Straw Polls
Call-For-Interest
• Should a Study Group be formed for "Higher Speed Ethernet"?
  Y: 147   N: 9   A: 31
Participation
• I would participate in the "Higher Speed" Study Group in IEEE 802.3.
  Tally: 108
• My company would support participation in the "Higher Speed" Study Group in IEEE 802.3.
  Tally: 76
Future Work
• Ask 802.3 to form the Higher Speed SG on Thursday
• If approved
  – 802 EC informed of the Higher Speed SG on Friday
  – First HSSG meeting the week of the September 2006 IEEE 802.3 Interim
Thank You!
Contributors
• David Law, 3COM
• Henk Steenman, AMS-IX
• Howard Frazier, Broadcom
• Pat Thaler, Broadcom
• Hugh Barrass, Cisco
• Ian Cox, Cisco
• Andy Moorwood, Extreme Networks
• Dave Hawley, Extreme Networks
• Steve Garrison, Force10 Networks
• Joel Goergen, Force10 Networks
• Greg Hankins, Force10 Networks
• Petar Pepeljugoski, IBM Research
• Drew Perkins, Infinera
• Itsuro Morita, KDDI Labs
• Mike Bennett, LBNL
• Joe Lawrence, Level(3) Communications
• Marcus Duelk, Lucent
• Roger Merel, Luxtera
• Rolf McClellan, McClellan Consulting
• Brad Booth, Quake Technologies