7x24 Exchange – Carolinas Chapter: Software Defined Networking – The ‘Next Big Thing’


TRANSCRIPT

Page 1:

[Title slide graphic: converged nodes connected across a data center network fabric]

1

Carl Rumbolo CISSP, CDCDP, CDCEP

Page 2:

Software Defined Networking – The ‘Next Big Thing’

2

Software Defined Networking (SDN), also referred to as Software Defined Infrastructure, is a new model for building networks.

‘Traditional’ networks use a range of physical devices such as routers, switches, firewalls and other components to control and move network traffic and data.

SDN networks move these functions into software, allowing greater flexibility and ease of deployment. SDN separates the control of network traffic (the control plane) from the movement of data (the data plane).

Some hyperscalers are already doing this!
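As a minimal illustration of that control-plane / data-plane split (a sketch only – the topology, rule format and function names here are invented, not any particular controller's API):

# Control plane: a central controller computes forwarding policy from a global
# view of the network. (Hypothetical data structures, for illustration.)
def compute_flow_rules(topology):
    """Return per-switch forwarding rules derived from the controller's view."""
    rules = {}
    for switch, links in topology.items():
        # The controller decides which uplink carries each destination prefix.
        rules[switch] = {prefix: port for prefix, port in links.items()}
    return rules

# Data plane: the switch only matches packets against the rules pushed to it.
def forward(packet, switch_rules):
    return switch_rules.get(packet["dst_prefix"])  # None -> punt to controller / drop

topology = {"leaf1": {"10.1.0.0/16": "uplink1", "10.2.0.0/16": "uplink2"}}
rules = compute_flow_rules(topology)
print(forward({"dst_prefix": "10.1.0.0/16"}, rules["leaf1"]))  # -> uplink1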

Page 3:

Software Defined Network Infrastructure – What is It?

Policy Based Secure Trust Zone Segmentation

Fabric based Network Design

Location Independent Data Center Operation

[Diagram: Data Center A and Data Center B joined by a common Network Fabric; converged nodes grouped into Trust Zones A–F, with Zones E and F spanning both data centers]

3

So just what is this SDN stuff ….

Converged Nodes – Compute, Storage & Services

Page 4:

Software Defined Network Infrastructure – What is It?

• Compute, storage and network interfaces are physically converged then given a “virtual persona” through Software Definition.

• Converged Node personas include: Compute, Data Storage, and Service (Firewall, Load Balancer, etc.)

• High capacity/low latency non-blocking network with deterministic performance between all DC nodes.

• Multiple parallel paths to achieve scalability, reliability and serviceability

• Multi-service fabric supporting all communications (Access/Storage/Mgmt.) between Converged Nodes.

4

Fabric based networks & Converged Nodes

Fabric based Network Design

Converged Nodes – Compute, Storage & Services
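A minimal sketch of the ‘virtual persona’ idea above – identical converged nodes are given their role in software and can be repurposed on demand (the inventory structure and persona names are hypothetical, not a specific product's API):

# Each node is identical hardware; its role ("persona") is just data that the
# software-defined layer assigns and can change without touching the rack.
PERSONAS = {"compute", "storage", "service"}

inventory = {
    "node-01": {"persona": "compute"},
    "node-02": {"persona": "storage"},
    "node-03": {"persona": "service", "service_type": "load_balancer"},
}

def reassign(node, persona, **attrs):
    """Repurpose a node by changing its software-defined persona."""
    if persona not in PERSONAS:
        raise ValueError(f"unknown persona: {persona}")
    inventory[node] = {"persona": persona, **attrs}

# Example: turn a storage node into a firewall service node on demand.
reassign("node-02", "service", service_type="firewall")
print(inventory["node-02"])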

Page 5:

Software Defined Network Infrastructure – What is It?

• Through software definition, roles, access and controls can be assigned logically and do not need to be hardware based

• Policy-based grouping logically segments resources into secure Trust Zones based on business and security needs.

• Segmented Trust Zones enforce access policies within and between zones (see the sketch at the end of this slide).

• Segmented Trust Zones may reside within a Data Center (Zone A, Zone B, Zone C, Zone D) or span Data Centers (Zone E, Zone F).

• Non-blocking network design coupled with software defined configuration and management enables cloud ready applications to span multiple sites

• On demand elastic scaling within and across locations

• Ability to operate simultaneously across geographically distributed locations for availability and business continuity

5

Location Independent Data Centers and Policy Based Segmentation

Policy Based Secure Trust Zone Segmentation

Location Independent Data Center Operation
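A minimal sketch of the policy-based Trust Zone segmentation described above (the zone names match the diagram on slide 3; the policy structure and check function are invented for illustration, not a product's policy language):

# Trust Zones group workloads logically, independent of which data center or
# physical node they run on; policy decides what may flow between zones.
ZONES = {"A", "B", "C", "D", "E", "F"}

# Hypothetical policy: allowed (source zone, destination zone, TCP port) tuples.
allowed_flows = {
    ("A", "E", 443),   # Zone A may call services in Zone E over HTTPS
    ("E", "F", 5432),  # Zone E may reach databases in Zone F
}

def is_permitted(src_zone, dst_zone, port):
    """Enforce segmentation: intra-zone traffic allowed, cross-zone only if listed."""
    if src_zone == dst_zone:
        return True
    return (src_zone, dst_zone, port) in allowed_flows

print(is_permitted("A", "E", 443))  # True
print(is_permitted("A", "F", 22))   # False – not an allowed cross-zone flow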

Page 6:

Software Defined Infrastructure – Evolving Networks

Legacy (Traditional) Network

Layered access/distribution/core network – hierarchical structure with high oversubscription ratio between layers

Endpoints are limited by layer 3 (IP) network boundaries

Network over-subscription impact (an illustrative calculation follows at the end of this slide)

Physical location dependence for endpoints due to layer 3 requirements

Most services (firewall, load balancing) are physical and limited by layer 3 boundaries

Additional network devices required in order to service SAN in addition to production workloads.

Vertical physical and logical silos with little flexibility

Redundant and survivable but silos make it less resilient and flexible

6

The legacy – what exists today

North - South
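To make the over-subscription point concrete, here is an illustrative calculation for a layered design (the port counts and speeds below are assumptions for the example, not figures from the slides):

def oversubscription(down_ports, down_gbps, up_ports, up_gbps):
    """Ratio of traffic a layer can accept from below to what it can pass upward."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

access = oversubscription(48, 10, 4, 40)         # 48 x 10G server ports, 4 x 40G uplinks
distribution = oversubscription(16, 40, 4, 100)  # 16 x 40G down, 4 x 100G up

print(f"access layer:       {access:.1f}:1")                 # 3.0:1
print(f"distribution layer: {distribution:.1f}:1")           # 1.6:1
print(f"end to end:         {access * distribution:.1f}:1")  # worst case compounds to 4.8:1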

Page 7:

Spine-leaf network design – ‘flat network’

High capacity bandwidth with low latency / non-blocking network

“Any to Any” connectivity – physical location independence for endpoints (see the sketch at the end of this slide)

High density server farms no longer bound by layer 3 (IP) network boundaries

Services (firewalls, load balancers, etc.) are virtualized and located closer to the endpoint. Increases flexibility for placement of servers.

Virtual storage (vSAN) – reduces dependence on storage arrays (space savings) while reducing device counts & cabling plant (cost & space savings)

Self healing network – redundant, survivable and flexible

Software Defined Infrastructure – Evolving Networks

The Software Defined Network

7

Networking evolved for the future….

East - West
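To illustrate the “any to any” connectivity and multiple parallel paths on this slide: in a spine-leaf fabric every leaf connects to every spine, so any two endpoints on different leaves are exactly two fabric hops apart, with one equal-cost path per spine. A minimal sketch (switch counts are hypothetical):

# Every leaf uplinks to every spine, so leaf-to-leaf traffic always has
# len(spines) equal-cost two-hop paths to balance across (ECMP).
spines = [f"spine{i}" for i in range(1, 5)]   # 4 spine switches (assumed)
leaves = [f"leaf{i}" for i in range(1, 9)]    # 8 leaf switches (assumed)

def paths(src_leaf, dst_leaf):
    """All equal-cost leaf-to-leaf paths: exactly one two-hop path per spine."""
    assert src_leaf in leaves and dst_leaf in leaves
    return [[src_leaf, spine, dst_leaf] for spine in spines]

for p in paths("leaf1", "leaf7"):
    print(" -> ".join(p))
# 4 parallel two-hop paths; losing a spine removes one path while the rest keep forwarding.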

Page 8:

8

Software defined networks will create challenges (and opportunities) for data center operators and managers in managing space.

Space planning – SDN is a possibility for any data center, depending on requirements

SDN can be applied to both “greenfield” (new) and “brownfield” (existing) data centers

Obviously easier in a “greenfield” environment; ‘brownfield’ deployments will require more planning

Ongoing support requirements for legacy infrastructure – not everything ‘magically’ moves

Space to build out new infrastructure – SDN typically will need new network hardware

Server & compute platforms might migrate – or require new hardware

Virtualization trends driven by demand for ‘big data’ and ‘cloud’ services (Hadoop, IaaS, PaaS, etc.)

IaaS platforms - Exadata, Azure CPS and similar solutions

The ability to repurpose hardware for applications based on demand – potential to lower overall server physical density through higher compute density

Example – a virtual desktop farm supports different user populations (with different configurations) depending on workload / time of day. Reduces the number of physical hosts – but higher density of compute

Space planning decisions will be driven in part by other technology requirements – particularly structured cabling - more on this later

Data Center Impacts – Space Considerations

Page 9:

9

SDN – along with the continuing evolution in network and compute hardware – is going to have a significant impact on data center power & cooling

The good news - IT technology has been trending toward less power hungry hardware since 2008

Data Center Impacts – Power & Cooling

This chart shows the past and projected growth rate of total US data center energy use from 2000 until 2020. It also illustrates how much faster data center energy use would grow if the industry, hypothetically, did not make any further efficiency improvements after 2010. (Source: US Department of Energy, Lawrence Berkeley National Laboratory)

Page 10:

10

However – as compute and network platforms have become more efficient, they tend to be denser in both space & compute power

SDN will likely result in a smaller compute footprint – fewer physical servers, storage arrays, etc.

As the number of devices drops, overall power demand will decrease

But power density (and cooling requirements) will be significant within a smaller space

A typical legacy data center may have 10-15 physical servers per rack @ 4-6 kW per rack

In an SDN scenario, per-rack load may approach 24 kW per rack (15 Dell FC830 running VMware) – see the comparison sketched at the end of this slide

Data Center Impacts – Power & Cooling

So – smaller footprint, lower power demand, but the challenge is providing the power and cooling in a smaller space
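A back-of-the-envelope comparison using the figures above (the rack figures come from this slide; treating both racks as 15 servers is an assumption made for the per-server math):

# Rack power density: legacy vs. SDN/converged scenario, per the slide's figures.
legacy_servers_per_rack = 15
legacy_rack_kw = 6.0                  # top of the 4-6 kW range
sdn_servers_per_rack = 15             # e.g. dense converged nodes (FC830 example)
sdn_rack_kw = 24.0

print(f"legacy: {legacy_rack_kw / legacy_servers_per_rack * 1000:.0f} W per server, {legacy_rack_kw:.0f} kW per rack")
print(f"SDN:    {sdn_rack_kw / sdn_servers_per_rack * 1000:.0f} W per server, {sdn_rack_kw:.0f} kW per rack")
print(f"density increase: {sdn_rack_kw / legacy_rack_kw:.1f}x per rack")
# Density per rack roughly quadruples even though total facility power can drop
# as workloads consolidate onto fewer racks.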

Page 11:

11

A variety of options for meeting requirements – challenges and opportunities

Traditional hot / cold aisle solutions may not be sufficient (likely)

Retrofitting existing data centers may be challenging – particularly in an operational environment

Building out new infrastructure, containment solutions, etc., in a production environment

In-row or cabinet cooling solutions may work in some situations but are generally not scalable.

Cabinet chimney, hot or cold aisle containment solutions work well – but pose challenges, particularly when retrofitted into existing data centers.

Some proprietary IaaS solutions such as Exadata may not be compatible with all containment solutions

Retrofitting containment solutions in existing data centers may face limitations due to facility design

Liquid cooling – a different approach

Energy efficient – lower operational costs (may incur higher install costs)

Hard to retrofit at scale in existing data centers

Definitely should be considered in new construction

Data Center Impacts – Power & Cooling

Each data center environment will have unique challenges – no one solution fits all. Solving the challenge may require multiple solutions in the same facility.

Page 12:

The spine-leaf network architecture inherent in SDN, along with the demand for high capacity bandwidth, will fundamentally change the cabling plant requirements.

Network bandwidth requirements for 40G, 100G and beyond will drive optical fiber plant designs

Example: a spine-leaf design with 8 spine switches and 240 leaf switches would require 1920 ports (8 ports per leaf switch x the total number of leaf switches) – worked through in the sketch at the end of this slide

Spine – Leaf networks require dense fiber infrastructure; leaf switches require a physical network connection to every spine switch


Complex & dense cabling requires a re-think of cable conveyance – underfloor solutions become much more challenging

Spine – Leaf networks – a given that they will be optical fiber based, but which fiber?

Single-Mode – Obvious Choice, But… EXPENSIVE NETWORK OPTICS COSTS (3X OR MORE)

OM4 Multi-mode – Serviceable, With Challenges

Distance Limits (175-200m 40G / 150m 100G)

High fiber strand counts (8 fibers per 40/100G link)

OM5 SWDM4 – Something New – Emerging Technology & worth exploring

Longer reach (400m for 40G)

Reduced fiber count – 2 per 40/100G link (same as single-mode)

Backward compatible to OM4

Data Center Impacts – Network Cabling Infrastructure

12
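A quick check of the port math in the example above, extended to fiber strand counts for the media options just discussed (the strands-per-link figures come from this slide; the arithmetic itself is only illustrative):

# Spine-leaf cabling math: every leaf needs one link to every spine.
spines, leaves = 8, 240
links = spines * leaves                          # leaf-to-spine links (leaf-side ports)
print(f"links / leaf-side ports: {links}")       # 1920, matching the example

# Fiber strands required per media choice (strands per 40/100G link, per the slide).
strands_per_link = {"OM4 multi-mode (parallel optics)": 8, "OM5 SWDM4": 2, "Single-mode": 2}
for media, strands in strands_per_link.items():
    print(f"{media}: {links * strands} fiber strands")
# OM4 parallel optics needs 15,360 strands for this fabric; OM5 or single-mode needs
# 3,840 – a large difference in conveyance, pathway and patching capacity.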

Page 13:

Server connectivity requirements will change – this will impact the data center

Data Center Impacts – Compute Cabling Infrastructure

Server connectivity will migrate from 1G or 10G toward 25G solutions

Port density per compute device will be dependent on technology selection – vSAN may drive down total ports per device – but ports per device will be higher bandwidth

The capability of virtual SAN (vSAN) will reduce the overall number of connections – but require higher bandwidth

The type of connectivity chosen for compute nodes will be a significant driver for space planning & data center layout

“Top-of-Rack”, “Middle or End of Row”, “Center of Room” - all work but each has challenges

Accounting for infrastructure – cable conveyance & pathways, cable management in cabinets

Impact of cabling on airflow management - overhead conveyance in compute node areas almost a given

Are the airflow management / containment solutions chosen compatible with overhead cabling?

Some cabling choices will drive requirements for large conveyance – can that be supported in your data center?

13

Compute Device - Common Today

Network (IP): 2 x UTP, 10G
IP Storage: 2 x Fiber, 10G
SAN (FC): 2 x Fiber, 8G FC
Management: 1 x UTP, 1G

Compute Device - Future in SDN

Network (IP): 4 x Fiber, 10 or 25G
Management: 1 x UTP, 1G *

* Some SDN designs move management traffic onto the network
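A quick comparison based on the connection profiles above, assuming the 25G option for the SDN case (the 15-servers-per-rack figure is an assumption carried over from the earlier power example, not from this slide):

# Per-device cable counts and aggregate bandwidth, today vs. an SDN design,
# using the connection profiles listed on this slide.
today  = [("Network (IP)", 2, 10), ("IP Storage", 2, 10), ("SAN (FC)", 2, 8), ("Mgmt", 1, 1)]
future = [("Network (IP)", 4, 25), ("Mgmt", 1, 1)]

def summarize(profile, servers_per_rack=15):     # rack count is an assumption
    cables = sum(count for _, count, _ in profile)
    gbps = sum(count * speed for _, count, speed in profile)
    return cables, gbps, cables * servers_per_rack

for name, profile in (("today", today), ("SDN", future)):
    cables, gbps, per_rack = summarize(profile)
    print(f"{name}: {cables} cables/device, {gbps}G aggregate/device, {per_rack} cables/rack")
# Fewer, higher-bandwidth connections per device means less cabling bulk per rack,
# but every remaining link is fiber at higher speeds.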

Page 14:

Cabling plant choices will have a long term impact on data centers

Data Center Impacts – Compute Cabling Infrastructure

Is UTP a viable option for the long term?

10GBASE-T UTP – fairly common, straightforward installation, but…

Distance limited – 90 meters

Bulky – requires a lot of cable management (ex: 48 ports requires 2RU)

10G UTP ports are power hungry – more heat, lower per-port capacity on switches

Typically lower port capacity per switch – fewer ports per switch, more switches

No real viable upgrade path to 25G

Twin-Ax (InfiniBand) – Good solution but challenges

More common in storage solutions but a viable compute option

Currently supports a range of bandwidths up to 100G

Copper based twin-ax solutions are distance limited – even more so than UTP

Typically 10G SFP+CU is limited to 10 meters

Optical InfiniBand can reach 100 meters but is a direct attached solution

Cable management is a major issue – all connections are direct attached

Optical Fiber – Most likely the long term solution

Support for the full range of bandwidth requirements

Distance limits are not as much of a concern

Smallest footprint option (ex: 288 connections in 4RU of space)

When all factors are considered, it is the least expensive option

14

Page 15:

SDN will drive changes to operations, relationships and even job functions

Data Center Impacts – The Operational Side

Design challenges require greater partnerships between all parties – network, server, storage, data center and facilities

Everything ties together – operating in silos is no longer viable

Changes to one element will typically impact everyone else

In an SDN environment no one group does it ‘on their own’

Operationally, roles for technology partners will evolve - who does what? (The “Rice Bowl” issue)

Lines between software and hardware will blur as functions merge

Changes in technology organization & roles will require corresponding adjustments within data center operations

Who does what - implementation, cabling work, server provisioning

SDN represents CHANGE – and change makes people uncomfortable

Education - Training (formal & informal); the more people know, the easier it is

Communication - Not only internal to teams but cross-functionally

15
