TRANSCRIPT
Next-generation Data Center Core Switches
Introduction
Since its introduction in 2006, cloud computing has developed so rapidly that
almost all enterprise IT services have migrated, or are in the process of
migrating, to cloud-computing platforms.
Data centers are seeing an annual growth rate of over 40% in network devices.
Among all network devices in a data center, core switches are the key nodes in the
architecture of the entire cloud-computing network.
Reasons Behind the Development of Data Center Core Switches
The development of data center core switches can be attributed to many factors. The
primary driving force is the revolutionary transition from the client/server traffic
model to the server/server traffic model. During this transition, incast and multicast
traffic models have become more common on networks than the earlier unicast traffic models.
As more and more core enterprise services migrate to IT platforms, enterprises are
increasing their spending on IT system construction. The development of data center
technologies, such as large-scale server clusters, virtualization, and big data, also
brings higher requirements for data center networks.
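The incast pattern mentioned above (many servers answering one client at the same instant) can be illustrated with a back-of-the-envelope calculation: if the synchronized bursts arriving at a single egress port exceed that port's buffer, the excess is dropped. The sketch below is purely illustrative; the buffer and burst sizes are assumed values, not figures from this document.

```python
# Illustrative incast arithmetic: N synchronized senders each emit one
# burst toward the same receiver port. If the aggregate burst exceeds
# the port buffer, the excess is dropped (simplified; no TCP dynamics).

def incast_drops(n_senders: int, burst_bytes: int, buffer_bytes: int) -> int:
    """Bytes dropped when all bursts arrive at one egress port at once."""
    aggregate = n_senders * burst_bytes
    return max(0, aggregate - buffer_bytes)

# Assumed values: 64 KB bursts, 1 MB shared port buffer.
print(incast_drops(8, 64 * 1024, 1024 * 1024))   # 0: eight bursts fit
print(incast_drops(32, 64 * 1024, 1024 * 1024))  # 1048576: half the bytes dropped
```

This is why server/server traffic models put much harder requirements on core-switch buffering and throughput than the old client/server pattern did.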
Service (application) and technical (vendor) driving forces:
- Cloud computing: application virtualization (service side); network adaptation to virtualization (technical side)
- Scalability: fast expansion of data centers, both scale up and scale out (service side); innovation in high-performance hardware platforms plus control protocols (technical side)
- Multi-service integration: Ethernet, SAN, and high-speed computing (service side); Everything over Ethernet, requiring 10GE to 100GE access (technical side)
- High reliability: core services on IT platforms and cluster systems (service side); ISSU, microkernel, and distributed networking (technical side)
The preceding comparison shows that the key to the success of data center switches is to
combine service requirements with mature technologies. Huawei follows this
principle when developing its next-generation data center core switches.
Characteristics of Current Data Center Core Switches
Currently, the data center core switches of mainstream vendors have the following
characteristics:
1. High scalability
Usage scenario: Rapid expansion of data centers, with more attached servers, higher accessibility, and higher throughput.
Matching technology: Higher-performance hardware, meaning chips with higher processing capabilities, interfaces with higher speeds (40GE/100GE), and network devices with higher port densities.

Usage scenario: VM migration, which requires a scalable large L2 network.
Matching technology: New protocols and hardware components, namely the new L2 protocol TRILL (VLAN anywhere, but not everywhere) and L2 interconnection between data centers.

Usage scenario: More flexible network architecture scaling, with modular scaling for easier network deployment, maintenance, and management.
Matching technology: Pod-based network design, a new data center network design that fulfills scale-up and scale-out requirements.
2. Virtualization capabilities
Usage scenario: Simplifying the network topology and facilitating network maintenance.
Matching technology: Two-node (or multi-node) clusters. Two-node cluster technology and solutions are well developed among mainstream vendors; multi-node cluster technology is more difficult and still under development. Matrix expansion and virtual stacking technologies used on access switches can also establish clusters.

Usage scenario: Resource sharing and flexible resource allocation.
Matching technology: Virtual switches. Virtual switch technology speeds up network deployment and improves the utilization of resources such as equipment rooms, power supply systems, and line cards.

Usage scenario: Migration and detection of VMs.
Matching technology: 802.1BR, 802.1Qbg, and out-of-band NMS. All of these technologies are still under development, and major vendors have their own solutions; it is uncertain which will become predominant.
3. Support for multi-service integration and network convergence
Usage scenario: Complex network services, such as multi-tenant, mobile IP, and VPN.
Matching technology: New protocols and chips (VPN technologies, VXLAN/NVGRE, IPv6, and so on). New chips have higher processing capabilities and can handle complex services; mainstream vendors have only just begun developing such chips, and none are widely used yet.

Usage scenario: Integration of traditional services, such as firewall, network analysis, and load balancing.
Matching technology: Integrated service line cards. Complex services cannot be implemented in an ASIC, so they are integrated into a single device by using integrated service line cards.

Usage scenario: Network convergence, bringing traditional FC and HPC networks onto Ethernet.
Matching technology: Everything over Ethernet. New technologies, such as 10GE access and DCB, enable traffic from heterogeneous networks to be carried over Ethernet, for example FCoE and RDMAoE.
Deficiencies of Current Data Center Core Switches
Although data center core switches have made many breakthroughs in services and technologies, they still have the following deficiencies:
1. Insufficient network scalability
- At present, very few data center core switches provide scalability that supports network expansion over the next five to 10 years, because device architectures cannot keep pace with rapid network expansion.
- Server virtualization also creates great demand for L2 data switching. However, L2 networks and nodes do not scale well due to defects in L2 network design.
- Traditional L2 protocols, such as STP, only solve the problem of L2 loops; they do not address how to establish a large L2 network. There are still no mature solutions for L2 VM migration between data centers.
2. Separate network virtualization and service virtualization
- Services and applications become more flexible and dynamic after service virtualization.
- It is a big challenge for virtual networks to dynamically change configurations and deployments to adapt to changes in services and applications.
3. Insufficient openness to network behavior
- As network application environments become more complex, many customers want to customize their own network behaviors. However, customer networks have their own characteristics and requirements, and standard devices cannot meet the special requirements of all customers. To address this problem, some vendors have introduced the idea of using open standard interfaces to control network behavior.
- OpenFlow, OpenStack, and SDN technologies were developed following this idea. They expose network control details to external applications to support the customization of network behavior. Although these technologies still need improvement and are not yet widely used on network devices, they may be the way of the future.
- Beyond technologies that separate service control from forwarding, network behavior can also be customized through APIs provided by network devices or through middleware provided by an open platform. Whichever technology is used, the need for network behavior customization cannot be ignored.
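The control/forwarding separation described above can be pictured as a match-action table that an external controller populates and a switch consults per packet. The sketch below is a minimal illustration of the OpenFlow idea only; the class and method names are invented for this example and are not a real controller API.

```python
# Minimal sketch of the OpenFlow match-action idea: an external
# "controller" installs flow rules; the "switch" forwards packets by
# looking up the first matching rule. All names here are illustrative.

class FlowTable:
    def __init__(self):
        self.rules = []  # (match_fn, action) pairs, in priority order

    def install(self, match_fn, action):
        """Controller-side call: push a rule down to the switch."""
        self.rules.append((match_fn, action))

    def forward(self, packet: dict) -> str:
        """Switch-side lookup: the first matching rule wins."""
        for match_fn, action in self.rules:
            if match_fn(packet):
                return action
        return "send-to-controller"  # table miss

table = FlowTable()
table.install(lambda p: p.get("dst") == "10.0.0.5", "output:port3")
table.install(lambda p: p.get("vlan") == 100, "drop")

print(table.forward({"dst": "10.0.0.5"}))  # output:port3
print(table.forward({"vlan": 100}))        # drop
print(table.forward({"dst": "10.0.0.9"}))  # send-to-controller
```

The key point for customization is that the forwarding rules live outside the device's fixed firmware, so an external application can change network behavior without a new software release.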
4. Duplicate investment on core switches of data centers and campus networks
- Separate core switches for data centers and campus networks ensure network security. However, deploying two groups of core switches increases investment in physical devices and raises the costs of device management and maintenance.
- For customers, using the same hardware platform for data center and campus network services allows uniform service management and simplifies network deployment, operation, and maintenance. Through virtual switch technology, the data center and campus network cores can share the same physical devices.
- Technically, it is an inevitable trend for a data center and campus network to share the same group of core switches.
5. High electricity costs
- New core switches use more power than old ones. Traditional power supply systems in data center equipment rooms cannot meet the requirements of these new switches, and high power consumption results in high temperatures, which may degrade device reliability.
- New switches use more power because they use complex, high-frequency chips to provide complex services.
- It is therefore important to reduce the power consumption of data center core switches. Energy-saving core switches improve device reliability, save energy, and place lower requirements on data center equipment rooms.
Huawei’s Next-generation Data Center Core Switches
Huawei is dedicated to offering competitive data center core switches and data
center network solutions for customers around the world. Huawei has developed the
CloudEngine 12800 series, its next-generation data center core switches, which
provide the following features to meet service requirements of customers in the
cloud-computing era:
[CE12800 series models: CE12812, CE12808, and CE12804]
1. Scalable network: higher scalability
- As of 2012, the largest switching capacity of a data center core switch is 480 Gbps per slot, because the maximum capacity of a line card is 48 x 10GE. If the expected service life of a core switch is five years, the switch must support a bandwidth upgrade to at least eight times the current bandwidth; that is, the switching capacity must grow to about 4 Tbps per slot within five years.
- The Huawei CE12800 supports standard Layer 2 protocols, such as TRILL, used to build large Layer 2 networks. In addition, the Layer 2 TRILL network can seamlessly connect to standard IP networks, allowing customers to flexibly deploy services on a large network.
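The per-slot bandwidth figures above can be checked directly: a 48 x 10GE line card gives 480 Gbps per slot, and an eightfold upgrade over the five-year service life implies roughly 4 Tbps per slot. The short calculation below just restates the document's own arithmetic.

```python
# Per-slot capacity arithmetic from the text: 48 x 10GE ports today,
# and an 8x upgrade target over the switch's five-year service life.

ports_per_card = 48
port_speed_gbps = 10
upgrade_factor = 8

current_gbps = ports_per_card * port_speed_gbps     # 480 Gbps per slot today
target_tbps = current_gbps * upgrade_factor / 1000  # ~3.84 Tbps, i.e. about 4 Tbps

print(current_gbps)  # 480
print(target_tbps)   # 3.84
```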
2. Virtual network: close coupling of application and network
- In-band protocols between server network adapters and network devices can quickly detect the migration of a large number of VMs, and network devices can also respond to VM migration. However, this is not enough for cloud-computing applications.
- To respond to changes in virtualized services, many devices, including load balancing devices and WAN routers, need to adjust their configurations. The configuration adjustment may even involve multiple data centers.
- Therefore, network virtualization needs to be closely coupled with service changes through out-of-band network management interfaces. In-band protocols can combine with out-of-band network management interfaces to implement end-to-end virtualization.
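The in-band migration detection described above can be sketched as a table mapping each VM to the port it was last learned on; when a VM appears on a new port, the device knows a migration happened and can re-apply that VM's network profile. This models the spirit of protocols such as 802.1Qbg VDP only; the class and names below are invented for illustration.

```python
# Illustrative sketch of in-band VM-migration detection: the switch
# keeps a VM-to-port binding learned from in-band signalling (in the
# spirit of 802.1Qbg VDP) and reports when a VM appears on a new port.
from typing import Optional

class VmTracker:
    def __init__(self) -> None:
        self.bindings = {}  # vm_id -> port the VM was last seen on

    def learn(self, vm_id: str, port: str) -> Optional[str]:
        """Record the VM's current port; return the old port on migration."""
        old_port = self.bindings.get(vm_id)
        self.bindings[vm_id] = port
        if old_port is not None and old_port != port:
            return old_port  # caller re-applies the VM's network profile here
        return None

tracker = VmTracker()
print(tracker.learn("vm-42", "port1"))  # None: first association
print(tracker.learn("vm-42", "port7"))  # port1: the VM has migrated
```

As the text notes, this in-band view covers only the switch itself; coordinating load balancers, WAN routers, or other data centers still requires the out-of-band management interfaces.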
3. Open network: comprehensive customization capabilities
- Huawei supports OpenFlow, OpenStack, and SDN technologies.
- Huawei also provides open service platforms with standard APIs or middleware to overcome the defects of standard protocols. An open service platform allows customers to quickly deploy their own network services.
4. Resource-sharing: one group of core switches for data centers and campus
networks
- Huawei's CE12800 supports virtual switch technology, which can virtualize a single switch into a maximum of eight logical switches (the number will increase to 16 in the future). The excellent forwarding performance of the CE12800 provides an integrated service platform supporting both data center services and campus network services.
- Physically coupling the core nodes helps customers achieve uniform management and deployment of services and simplified operations and maintenance, which reduces capital and operating expenditures.
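The one-to-eight virtual switch split described above can be pictured as partitioning a chassis's physical ports among independent logical switches, each with its own configuration. The limit of eight follows the text; everything else in this sketch (class names, port counts) is an illustrative assumption, not the CE12800 implementation.

```python
# Sketch of virtual switch partitioning: one physical chassis divides
# its ports among up to MAX_VS independent virtual switches (8 today
# per the text, 16 planned). Purely illustrative data structures.

MAX_VS = 8

class Chassis:
    def __init__(self, n_ports: int):
        self.free_ports = set(range(n_ports))
        self.virtual_switches = {}  # name -> set of assigned ports

    def create_vs(self, name: str, ports: set) -> None:
        """Carve out a virtual switch from unassigned physical ports."""
        if len(self.virtual_switches) >= MAX_VS:
            raise ValueError("virtual switch limit reached")
        if not ports <= self.free_ports:
            raise ValueError("port already assigned to another virtual switch")
        self.free_ports -= ports
        self.virtual_switches[name] = ports

chassis = Chassis(n_ports=96)
chassis.create_vs("dc-core", set(range(0, 48)))       # data center services
chassis.create_vs("campus-core", set(range(48, 96)))  # campus services
print(sorted(chassis.virtual_switches))  # ['campus-core', 'dc-core']
```

The design point is that each partition behaves as a separate switch for management purposes while sharing one chassis, which is what lets a data center core and a campus core coexist on the same hardware.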
5. Environmentally-friendly design: build a green data center
- Huawei's next-generation data center switches use energy-saving ASIC chips and transceivers and offer the industry's most efficient digital power modules. The power consumption of key components can be adjusted based on traffic volume, which reduces power consumption per unit of transmission speed.
- Extending the service life of network devices is an effective way to protect the environment, because discarding a core switch results in far higher carbon emissions than its electricity costs. Huawei's next-generation data center switches support a capacity upgrade to at least eight times the current capacity. This scalability can support a service life of five to 10 years, saving resources and protecting customer investment.
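The traffic-based power adjustment claimed above can be made concrete as watts per carried gigabit: a component whose power scales partly with load consumes less at low utilization than a fixed-power one. The power model and all figures below are illustrative assumptions for the calculation, not CE12800 measurements.

```python
# Illustrative energy-proportionality arithmetic: a component whose
# power scales partly with traffic load draws less at low utilization
# than a fixed-power component. All numbers are assumed for the example.

def component_power_watts(idle_w: float, max_extra_w: float, load: float) -> float:
    """Power model: a fixed idle floor plus a traffic-proportional part."""
    return idle_w + max_extra_w * load

fixed = component_power_watts(300, 0, 1.0)            # fixed-power: 300 W always
adaptive_low = component_power_watts(150, 150, 0.2)   # adaptive, at 20% load
print(fixed)         # 300.0
print(adaptive_low)  # 180.0
```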
Huawei CE12800: Next-generation Data Center Core Switches
Huawei's CE12800 series is designed for cloud-computing data centers and
super-large virtualized data center networks. The CE12800 has high bandwidth
scalability and can upgrade bandwidth from 1 Tbps to 4 Tbps per slot.
Compared with other core switches, the CE12800 is highly competitive in terms of its
scalability, virtualization, openness, usability, and energy efficiency. The
CE12800 series offers core switches for future-oriented data centers.