Posted on 19-Dec-2015
© Copyright 2009 EMC Corporation. All rights reserved. / © Copyright 2009 Scalar Decisions Inc. Not for redistribution outside of the intended audience.
Toronto Vancouver Calgary Ottawa London Kitchener Guelph
Over 50 Employees across Canada
Certifications and Partnerships
Cisco Silver Partner, broad product expertise
DCNI (Data Center Networking Infrastructure)
UCS ATP (Unified Computing Solutions Advanced Technology Partner)
VMware Enterprise VIP Partner
Gold-Level VMware Authorized Consultants (VAC)
EMC Velocity Solution Provider
Technically led, specializing in advanced IT infrastructure
Transform your Data Center Fabric with Scalar Decisions
By engaging with our architect team for technology deep dives
By assessing your infrastructure and building a strategic plan
Existing architecture colliding with new paradigms
– Mass consolidation highlights I/O bottlenecks and process inefficiencies
– x64 virtualization may reduce physical footprint, but not management overhead
– Multiple slower, discrete fabrics (Ethernet, FC), storage arrays and complex cabling build-up
– Getting awfully hard to do more with less!

Emerging solutions to simplify and save
– Unified Fabric on lossless 10GigE
– Unified Storage systems with FCoE
– Large-memory Unified Compute server blades that are one with the Unified Fabric
– Distributed virtual switching for x64 hypervisors
Scalar Labs – Customer Demo Centre
Hassle-free access to the technologies you need!
21 vendors’ products on display with remote access
Product demonstrations and hands-on
Customer Proof-of-Concepts
Interoperability Testing
Access to direct vendor assistance as needed
Remote labs for F5 Authorized Training
Convenient downtown Toronto location
Events, tours and special requests
Fibre Channel over Ethernet (FCoE), iSCSI and the Converged Data Center
Joe Rabasca – Solutions Lead, EMC Corporation
Objectives
After this session you will be able to:
Understand FCoE and iSCSI and how they fit into existing storage and networking infrastructures.
Compare and contrast the structure and functionality of the FCoE and iSCSI protocol stacks.
Understand how FCoE and iSCSI solutions provide storage networking options for Ethernet, including 10 Gb Ethernet.
Rack Server Environment Today
Servers connect to LAN, NAS and iSCSI SAN with NICs
Servers connect to FC SAN with HBAs
Many environments today are still 1 Gigabit Ethernet
Multiple server adapters, multiple cables, power and cooling costs
– Storage is a separate network (including iSCSI)
[Figure: rack-mounted servers connecting via 1 Gigabit Ethernet NICs to the Ethernet LAN and iSCSI SAN, and via Fibre Channel HBAs to the Fibre Channel SAN and storage.]
Note: NAS will continue to be part of the solution. Everywhere that you see Ethernet or 10Gb Ethernet in this presentation, NAS can be considered part of the unified storage solution.
10Gb Ethernet allows for Converged Data Center
Maturation of 10 Gigabit Ethernet
– 10 Gigabit Ethernet allows replacement of n x 1Gb adapters with a much smaller number (start with 2) of 10Gb adapters
– Many storage applications require > 1Gb bandwidth

10 Gigabit Ethernet simplifies server, network and storage infrastructure
– Reduces the number of cables and server adapters
– Lowers capital expenditures and administrative costs
– Reduces server power and cooling costs
– Blade servers and server virtualization drive consolidated bandwidth

10 Gigabit Ethernet is the answer! iSCSI and FCoE both leverage this inflection point.

[Figure: a single 10 GbE wire carrying both LAN and SAN traffic.]
Why iSCSI?
[Figure: matching initiator and target protocol stacks (SCSI / iSCSI / TCP / IP / IPsec / Link) connected over an IP network.]

– Link: provides physical network capability (Layer 2 Ethernet, Cat 5, MAC, etc.)
– IP: provides IP routing (Layer 3) capability so packets can find their way through the network
– TCP: reliable data transport and delivery (TCP windows, ACKs, ordering, etc.)
– iSCSI: delivery of the iSCSI Protocol Data Unit (PDU) for SCSI functionality (initiator, target, data read/write, etc.)
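The layering above can be sketched as nested encapsulation. The header bytes below are placeholders, not real wire formats; only the nesting order (SCSI inside iSCSI inside TCP inside IP inside the link layer) is taken from the stack diagram.

```python
def encapsulate(payload: bytes, header: bytes) -> bytes:
    """Prepend one protocol layer's header to the payload it carries."""
    return header + payload

# A SCSI command (CDB) is what the initiator ultimately wants delivered.
scsi_cdb = b"SCSI-CDB"

# Wrap it layer by layer, innermost first, exactly as the stack shows.
frame = scsi_cdb
for layer in [b"iSCSI-PDU-hdr", b"TCP-hdr", b"IP-hdr", b"ETH-hdr"]:
    frame = encapsulate(frame, layer)

# The outermost bytes are the link layer; the innermost are the SCSI command.
assert frame.startswith(b"ETH-hdr")
assert frame.endswith(b"SCSI-CDB")
assert frame.index(b"IP-hdr") < frame.index(b"TCP-hdr") < frame.index(b"iSCSI-PDU-hdr")
```

Because everything above the link layer is ordinary TCP/IP, an iSCSI frame can cross any routed IP network, which is why iSCSI needs no special fabric.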
Why a New Option for FC Customers?
FC has a large and well managed install base
– Want a solution that is attractive for customers with FC expertise / investment
– Previous convergence options did not allow for incremental adoption

Requirement for a Data Center solution that can provide I/O consolidation
– 10 Gigabit Ethernet makes this option available

Leveraging Ethernet infrastructure and skill set has always been attractive

FCoE allows an Ethernet-based SAN to be introduced into the FC-based Data Center without breaking existing administrative tools and workflows
Protocol Comparisons

Each option carries SCSI from applications down to a base transport through a different encapsulation layer:

– iSCSI: SCSI → iSCSI → TCP → IP → Ethernet (block storage with TCP/IP)
– iFCP: SCSI → FC → iFCP → TCP → IP (FC management)
– FCIP: SCSI → FC → FCIP → TCP → IP (FC replication over IP)
– SRP: SCSI → SRP → InfiniBand (new transport and drivers; low latency, high bandwidth)
– FCoE: SCSI → FC → FCoE → Ethernet (FC over Ethernet, no TCP/IP)
FCoE Extends FC on a Single Network
[Figure: two server attachment options — (1) a Converged Network Adapter presenting separate network and FC drivers, or (2) a standard 10G NIC with an FCoE software stack — connected over lossless Ethernet links to a Converged Network Switch, which bridges the Ethernet network and the FC network / FC storage.]

Server sees storage traffic as FC

SAN sees host as FC
iSCSI and FCoE Framing

iSCSI is SCSI functionality transported using TCP/IP for delivery and routing in a standard Ethernet/IP environment
– TCP/IP and iSCSI require CPU processing

FCoE is FC frames encapsulated in Layer 2 Ethernet frames, designed to utilize a Lossless Ethernet environment
– Large maximum size of FC frames requires Ethernet Jumbo Frames
– No TCP, so a Lossless environment is required
– No IP routing

iSCSI frame: [Ethernet Header | IP | TCP | iSCSI | Data | CRC]
FCoE frame: [Ethernet Header | FCoE Header | FC Header | FC Payload | CRC | EOF | FCS] (FC Header through CRC is the encapsulated FC frame)
FCoE Frame Formats
Fields, in transmission order (32-bit rows):
– Destination MAC Address
– Source MAC Address
– IEEE 802.1Q Tag
– Ethertype = FCoE, Version, Reserved
– Reserved (3 rows)
– Reserved, SOF
– Encapsulated FC Frame (including FC-CRC)
– EOF, Reserved
– FCS

Ethernet frames give a 1:1 encapsulation of FC frames
– No segmenting FC frames across multiple Ethernet frames
– FCoE flow control is Ethernet based: BB Credit/R_RDY is replaced by the Pause/PFC mechanism

FC frames are large and require Jumbo frames
– Max FC payload size is 2112 bytes
– Max FCoE frame size is 2180 bytes

Also created an FCoE Initialization Protocol (FIP) for:
– Discovery
– Login
– Determining whether the MAC address is server provided (SPMA) or fabric provided (FPMA)
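The 2180-byte maximum can be reconstructed as simple arithmetic over the frame format. The individual field sizes below (e.g., the 14-byte FCoE encapsulation header) are assumptions drawn from the FC-BB-5 encapsulation rather than stated on the slide, but they reproduce the quoted total.

```python
# Worked arithmetic for the maximum FCoE frame size (2180 bytes).
ETH_HEADER   = 14    # destination MAC (6) + source MAC (6) + Ethertype (2)
VLAN_TAG     = 4     # IEEE 802.1Q tag
FCOE_HEADER  = 14    # version + reserved bits + SOF (112 bits total)
FC_HEADER    = 24    # Fibre Channel frame header
FC_PAYLOAD   = 2112  # maximum FC payload
FC_CRC       = 4     # FC CRC carried inside the encapsulated frame
FCOE_TRAILER = 4     # EOF (1 byte) + reserved (3 bytes)
ETH_FCS      = 4     # Ethernet frame check sequence

max_fcoe_frame = (ETH_HEADER + VLAN_TAG + FCOE_HEADER +
                  FC_HEADER + FC_PAYLOAD + FC_CRC +
                  FCOE_TRAILER + ETH_FCS)
assert max_fcoe_frame == 2180  # well above 1500, hence Jumbo frames
```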
Lossless Ethernet
Limit the environment only to the Data Center
– FCoE is Layer 2 only

IEEE 802.1 Data Center Bridging (DCB) is the standards task group

Converged Enhanced Ethernet (CEE) is an industry consensus term which covers three link level features:
– Priority Flow Control (PFC, IEEE 802.1Qbb)
– Enhanced Transmission Selection (ETS, IEEE 802.1Qaz)
– Data Center Bridging Exchange (DCBX, currently part of IEEE 802.1Qaz, leverages 802.1AB (LLDP))

Data Center Ethernet is a Cisco term for CEE plus additional functionality including Congestion Notification (IEEE 802.1Qau)

Enhanced Ethernet provides the lossless infrastructure which will enable FCoE and augment iSCSI storage traffic.
29© Copyright 2009 EMC Corporation. All rights reserved.
PAUSE and Priority Flow Control
PAUSE transforms Ethernet into a lossless fabric

Classical 802.3x PAUSE is rarely implemented since it stops all traffic

Priority Flow Control (PFC), formerly known as Per Priority PAUSE (PPP) or Class Based Flow Control
– PFC will be limited to the Data Center

A new PAUSE function that can halt traffic according to priority tag while allowing traffic at other priority levels to continue
– Creates lossless virtual lanes

Per priority link level flow control
– Only affects traffic that needs it
– Ability to enable it per priority
– Not simply 8 x 802.3x PAUSE

[Figure: PFC operating on a link between Switch A and Switch B.]
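As a sketch of the mechanism: a PFC frame per IEEE 802.1Qbb is a MAC control frame (Ethertype 0x8808, opcode 0x0101) carrying a priority-enable vector and eight per-priority pause timers. The field layout below follows the published spec; the MAC addresses and quanta values are illustrative.

```python
import struct

PFC_DEST = bytes.fromhex("0180c2000001")   # MAC control multicast address
MAC_CONTROL_ETHERTYPE = 0x8808
PFC_OPCODE = 0x0101

def build_pfc_frame(src_mac: bytes, pause_quanta: list[int]) -> bytes:
    """pause_quanta: 8 timer values, one per priority; 0 = do not pause."""
    assert len(pause_quanta) == 8
    # Bit i of the enable vector is set when priority i's timer is valid.
    enable_vector = sum(1 << i for i, q in enumerate(pause_quanta) if q > 0)
    frame = PFC_DEST + src_mac
    frame += struct.pack("!HHH", MAC_CONTROL_ETHERTYPE, PFC_OPCODE, enable_vector)
    frame += struct.pack("!8H", *pause_quanta)
    return frame

# Pause only priority 3 (e.g., the FCoE traffic class) while LAN traffic
# on the other priorities keeps flowing -- the "lossless virtual lane".
frame = build_pfc_frame(bytes.fromhex("001122334455"), [0, 0, 0, 0xFFFF, 0, 0, 0, 0])
assert len(frame) == 34  # 6+6 MACs, Ethertype, opcode, vector, 8 timers
```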
Enhanced Transmission Selection and Data Center Bridging Exchange Protocol (DCBX)
Enhanced Transmission Selection (ETS) provides a common management framework for bandwidth management

Allows configuration of HPC and storage traffic to have appropriately higher priority

When a given load in a class does not fully utilize its allocated bandwidth, ETS allows other traffic classes to use the available bandwidth

Maintains low latency treatment of certain traffic classes

[Figure: offered vs. realized traffic utilization on a 10 GE link across intervals t1–t3 for HPC (3G/s), storage (3G/s), and LAN (4–6G/s) traffic classes, showing unused bandwidth being reallocated to the LAN class.]

Data Center Bridging Exchange Protocol (DCBX) is responsible for configuration of link parameters for DCB functions and determines which devices support Enhanced Ethernet functions
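The sharing behaviour described above can be sketched as a simple allocation policy. This is an illustration of ETS semantics under assumed 30/30/40 class shares (matching the slide's figure), not the 802.1Qaz scheduler itself.

```python
def ets_allocate(link_gbps, shares, offered):
    """Realized per-class bandwidth under ETS-style sharing (illustrative)."""
    guarantees = {c: link_gbps * shares[c] for c in shares}
    # Each class first receives up to its configured guarantee.
    realized = {c: min(offered[c], guarantees[c]) for c in shares}
    spare = link_gbps - sum(realized.values())
    # Bandwidth a class does not use is handed to classes still offering more.
    for c in shares:
        extra = min(offered[c] - realized[c], spare)
        realized[c] += extra
        spare -= extra
    return realized

# 10 GE link split 30% HPC / 30% storage / 40% LAN.
shares = {"hpc": 0.3, "storage": 0.3, "lan": 0.4}
result = ets_allocate(10, shares, {"hpc": 2, "storage": 3, "lan": 6})
# LAN exceeds its 4G/s guarantee because HPC offers only 2G/s.
assert result == {"hpc": 2, "storage": 3, "lan": 5}
```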
40 & 100 Gigabit Ethernet
IEEE P802.3ba Task Force states that bandwidth requirements for computing and networking applications are growing at different rates, which necessitates two distinct data rates: 40 Gb/s and 100 Gb/s

IEEE target for completion of the 40 GbE and 100 GbE standard is 2010

40 GbE products shipping today support the existing fiber plant; the plan is for 100 GbE to also support 10m copper, 100m MMF (use OM4 for extended reach) and SMF

Cost of 40 GbE or 100 GbE is currently 5–10x that of 10 GbE
– Adoption will become more economically attractive at 2.5x, which will take a couple of years
Deployments - FCoE and iSCSI
FCoE
– FC expertise / install base
– FC management
– Layer 2 Ethernet; use FCIP for distance
– Standards in process

Common to both (Ethernet)
– Leverage Ethernet/IP expertise
– 10 Gigabit Ethernet
– Lossless Ethernet

iSCSI
– No FC expertise needed
– Supports distance connectivity (Layer 3 IP routing)
– Strong virtualization affinity
– Standards since 2003
iSCSI Deployment
iSCSI grew to > 10% of SAN market revenue in 2008 *

Many deployments are small environments, which replace DAS
– Strong affinity in SMB/commercial markets

Seeing strong growth of Unified Storage
– Supports iSCSI, FC, and NAS

iSCSI with 10 Gigabit Ethernet becoming available

[Figure: servers attached over Ethernet to an iSCSI SAN.]

* According to IDC, 3/09
FCoE Server Phase (Today)
FCoE with direct attach of server to a Converged Network Switch at top of rack or end of row

Tightly controlled solution

Server 10 GE adapters may be CNA or NIC

Storage is still a separate network

[Figure: rack mounted servers with 10 GbE CNAs (replacing FC HBAs and 1 Gb NICs) attach to a Converged Network Switch, which connects via Ethernet to the LAN and via FC attach to the Fibre Channel SAN and storage.]
FCoE Network Phase (2009 / 2010)
Converged Network Switches move out of the rack, from a tightly controlled environment into a unified network

Maintains existing LAN and SAN management

Overlapping domains may compel cultural adjustments

[Figure: rack mounted servers with 10 GbE CNAs attach to an Ethernet network (IP, FCoE) of Converged Network Switches, which connects to the Ethernet LAN and, via FC attach, to the Fibre Channel SAN and storage.]
Convergence at 10 Gigabit Ethernet
Two paths to a Converged Network
– iSCSI: purely Ethernet
– FCoE: allows for a mix of FC and Ethernet (or all Ethernet)

FC that you have today or buy tomorrow will plug into this in the future

Choose based on scalability, management, and skill set

[Figure: rack mounted servers with 10 GbE CNAs attach to a Converged Network Switch connecting the Ethernet LAN, the FC SAN, and iSCSI/FCoE storage.]
Time To Widespread Adoption
[Timeline, 1980–2010:]
– Ethernet: defined '73, standard '83, widespread '93
– Fibre Channel: defined '85, standard '94, widespread '03
– iSCSI: defined '00, standard '02, widespread '08
– 10 Gigabit Ethernet: standard '02, widespread '09?
– FCoE: defined '07, standard '09, widespread ??
Summary
A converged data center environment can be built using 10Gb Ethernet
– Ethernet enhancements are required for FCoE and will assist iSCSI

Choosing between FCoE and iSCSI will be based on the customer's existing infrastructure and skill set

10 Gigabit Ethernet solutions will take time to mature
– Active industry participation is creating standards that allow solutions to integrate into existing data centers
– FCoE and iSCSI will follow the Ethernet roadmap to 40 and 100 Gigabit in the future

The Converged Data Center allows Storage and Networking to leverage operational and capital efficiencies
Office of the CTO, EMC Corporation