BRKDCT-2868: VMware Integration

Page 1: BRKDCT-2868

VMware Integration

Page 2: BRKDCT-2868


Virtualization

[Diagram: side-by-side comparison of virtualization approaches on x86 CPUs - VMware and Microsoft run a modified, stripped-down OS with a hypervisor hosting Guest OS + App VMs (or a Host OS hosting VMs with a modified OS), while Xen ("paravirtualization") runs modified guest OSes on the hypervisor.]

Migration

VMotion, aka VM Migration, allows a VM to be relocated to different hardware without having to interrupt service. Downtime is on the order of a few milliseconds to a few minutes, not hours or days. It can be used to perform maintenance on a server and to shift workloads more efficiently.

[Diagram: two hosts running the VMware Virtualization Layer (hypervisor) over their CPUs, each with a Console OS; a VM (OS + App) moves from one host to the other.]

2 types of Migration:
VMotion Migration
Regular Migration

Page 3: BRKDCT-2868


VMware Architecture in a Nutshell

[Diagram: an ESX Server Host - Virtual Machines (OS + App) and the Console OS run on the VM Virtualization Layer, which runs on the Physical Hardware (CPU); the host attaches to a Production Network, a Mgmt Network, and a VM Kernel Network.]

VMware HA Clustering

[Diagram: three ESX hosts (ESX Host 1, 2, 3), each with its own CPU and hypervisor. Guest OS VMs App1-App5 are spread across the hosts; App1 and App2 reappear on ESX Host 3, illustrating VM restart on a surviving host after a host failure.]

Page 4: BRKDCT-2868


Application-level HA clustering (provided by MSCS, Veritas, etc.)

[Diagram: the same three-ESX-host layout; App1-App5 guests run across the hosts, with second instances of App1 and App2 on ESX Host 3 clustered at the application level.]

Agenda

VMware LAN Networking
vSwitch Basics
NIC Teaming
vSwitch vs LAN Switch
Cisco/VMware DC Designs
SAN Designs

Page 5: BRKDCT-2868


VMware Networking Components
Per ESX-server configuration

[Diagram: two VMs (VM_LUN_0005, VM_LUN_0007) connect through their vNICs to Virtual Ports on vSwitch0; the vSwitch's uplinks are the physical adapters vmnic0 and vmnic1 (VMNICs = uplinks).]

vNIC MAC Address

VM's MAC address is automatically generated
Mechanisms to avoid MAC collisions
VM's MAC address doesn't change with migration
VM's MAC addresses can be made static by modifying the configuration file, e.g.
/vmfs/volumes/46b9d79a-2de6e23e-929d-001b78bb5a2c/VM_LUN_0005/VM_LUN_0005.vmx

Automatically generated:
ethernet0.addressType = "vpx"
ethernet0.generatedAddress = "00:50:56:b0:5f:24"

Static (ethernetN.address = 00:50:56:XX:YY:ZZ):
ethernet0.addressType = "static"
ethernet0.address = "00:50:56:00:00:06"

Page 6: BRKDCT-2868


vSwitch Forwarding Characteristics

Forwarding based on MAC address (no learning): if traffic doesn't match a VM's MAC, it is sent out to the vmnic
VM-to-VM traffic stays local
vSwitches tag traffic with the 802.1q VLAN ID
vSwitches are 802.1q capable
vSwitches can create Etherchannels

vSwitch Creation

[Screenshots: vSwitch creation wizard. When creating the vSwitch you don't have to select a NIC, and the network label is just a name; the VM's vNICs later select the Port-Group by specifying the NETWORK LABEL.]
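The same steps can also be scripted from the ESX 3.x service console with esxcfg-vswitch; a minimal sketch, where the vSwitch name, port-group label, and vmnic are illustrative:

# Create a new vSwitch
esxcfg-vswitch -a vSwitch1
# Add a port-group; its label is what a VM's vNIC references
esxcfg-vswitch -A "VM Network 2" vSwitch1
# Attach a physical uplink to the vSwitch
esxcfg-vswitch -L vmnic1 vSwitch1
# Verify vSwitches, port-groups, and uplinks
esxcfg-vswitch -l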

Page 7: BRKDCT-2868


VM Port-Group vSwitch


VLANs - External Switch Tagging (EST)

VLAN tagging and stripping is done by the physical switch
No ESX configuration required, as the server is not tagging
The number of VLANs supported is limited to the number of physical NICs in the server

[Diagram: VM1, VM2, Service Console, and the VMkernel NIC attach to vSwitch A and vSwitch B in the ESX Server; each vSwitch uplinks through its own physical NIC to a physical switch port in VLAN 100 or VLAN 200.]
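With EST the upstream port carries a single untagged VLAN, so a plain access port is all the switch side needs; a minimal Cisco IOS sketch (interface and VLAN numbers are illustrative):

interface GigabitEthernet1/0/1
 description ESX uplink, EST - no tagging from the host
 switchport
 switchport mode access
 switchport access vlan 100
 spanning-tree portfast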

Page 8: BRKDCT-2868


VLANs - Virtual Switch Tagging (VST)

The vSwitch tags outgoing frames with the VLAN ID
The vSwitch strips any dot1Q tags before delivering to the VM
Physical NICs and switch ports operate as a trunk
The number of VLANs per guest is limited to the number of vNICs
No VTP or DTP - all static configuration; prune VLANs so ESX doesn't process broadcasts

[Diagram: VM1, VM2, Service Console, and VMkernel on vSwitch A; the physical NICs carry dot1Q trunks (VLAN 100, VLAN 200) to the physical switches.]
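As a sketch of a VST setup, the port-group VLAN is set on the vSwitch and the upstream port is trunked; names and numbers below are illustrative:

# ESX service console: tag an existing port-group with VLAN 100
esxcfg-vswitch -v 100 -p "VM Network" vSwitch0

! Matching Cisco IOS trunk on the upstream port
interface GigabitEthernet1/0/1
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 100,200
 switchport mode trunk
 switchport nonegotiate
 spanning-tree portfast trunk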

VLANs - Virtual Guest Tagging (VGT)

Port-group VLAN ID set to 4095
Tagging and stripping of VLAN IDs happens in the guest VM - requires an 802.1Q driver
Guest can send/receive any tagged VLAN frame
The number of VLANs per guest is not limited to the number of vNICs
VMware does not ship with the driver:
Windows: E1000
Linux: dot1q module

[Diagram: same trunked topology as VST, but dot1Q tagging is applied inside the VMs.]

Page 9: BRKDCT-2868


Agenda

VMware LAN Networking
vSwitch Basics
NIC Teaming
vSwitch vs LAN Switch
Cisco/VMware DC Designs
SAN Designs

Meaning of NIC Teaming in VMware (1)

[Diagram: an ESX Server Host with four NIC cards (vmnic0-vmnic3) serving as vSwitch uplinks. Teaming the vmnic uplinks is NIC Teaming; a VM with multiple vNICs is NOT NIC Teaming.]

Page 10: BRKDCT-2868


Meaning of NIC Teaming in VMware (2)

Teaming is configured at the vmnic level

[Screenshot: NIC teaming properties - the policy applies to the vmnic uplinks; this is NOT teaming of the VM's vNICs.]

Design Example: 2 NICs, VLAN 1 and 2, Active/Standby

[Diagram: ESX Server with vSwitch0 and uplinks vmnic0/vmnic1, each an 802.1q trunk for VLANs 1 and 2; Port-Group 1 (VLAN 2) carries VM1 and VM2, Port-Group 2 (VLAN 1) carries the Service Console.]

Page 11: BRKDCT-2868


Active/Standby per-Port-Group

[Diagram: vSwitch0 with uplinks VMNIC0 and VMNIC1 to switches CBS-left and CBS-right; Port-Group1 (VM5 .5, VM7 .7) and Port-Group2 (VM4 .4, VM6 .6) use opposite active/standby orders for the two uplinks.]

Port-Group Overrides vSwitch Global Configuration


Page 12: BRKDCT-2868


Active/Active

[Diagram: ESX server NIC cards vmnic0 and vmnic1 both active as uplinks for a single vSwitch port-group carrying VM1-VM5.]

Active/Active: IP-Based Load Balancing

Works with Channel-Group mode ON (port-channeling of vmnic0 and vmnic1)
LACP is not supported (see below):

9w0d: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/14, changed state to up
9w0d: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/13, changed state to up
9w0d: %EC-5-L3DONTBNDL2: Gi1/0/14 suspended: LACP currently not enabled on the remote port.
9w0d: %EC-5-L3DONTBNDL2: Gi1/0/13 suspended: LACP currently not enabled on the remote port.

[Diagram: vmnic0 and vmnic1 port-channeled to the upstream switch; VM1-VM4 on the port-group.]
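Because the vSwitch only does static EtherChannel, the upstream ports must be bundled unconditionally; a minimal Cisco IOS sketch (interface and channel numbers illustrative), paired with the port-group's load-balancing policy set to route based on IP hash on the ESX side:

interface range GigabitEthernet1/0/13 - 14
 description ESX uplinks teamed with IP-hash
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode on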

Page 13: BRKDCT-2868


Agenda

VMware LAN Networking
vSwitch Basics
NIC Teaming
vSwitch vs LAN Switch
Cisco/VMware DC Designs
SAN Designs

All Links Active, No Spanning-Tree: Is There a Loop?

[Diagram: vSwitch1 with four uplinks (NIC1-NIC4) to CBS-left and CBS-right; Port-Group1 (VM5 .5, VM7 .7) and Port-Group2 (VM4 .4, VM6 .6). All uplinks are active at once.]

Page 14: BRKDCT-2868


Broadcast/Multicast/Unknown Unicast Forwarding in Active/Active (1)

[Diagram: vSwitch0 in the ESX Server with uplinks vmnic0 and vmnic1, both 802.1q trunks for VLANs 1 and 2; VM1 and VM2 on Port-Group 1 (VLAN 2).]

Broadcast/Multicast/Unknown Unicast Forwarding in Active/Active (2)

[Diagram: the same topology drawn as an ESX Host whose vSwitch uplinks NIC1 and NIC2 both trunk VLANs 1 and 2, with VM1-VM3 attached.]

Page 15: BRKDCT-2868


Can the vSwitch Pass Traffic Through?

E.g., HSRP?

[Diagram: a vSwitch with uplinks NIC1 and NIC2 and local VMs VM1 and VM2; traffic arriving on one uplink would have to exit the other to pass through.]

Is This Design Possible?

[Diagram: Catalyst1 and Catalyst2, each connecting an 802.1q trunk (links 1 and 2) to VMNIC1 and VMNIC2 of the vSwitch on ESX server1; VM5 (.5) and VM7 (.7) attached.]

Page 16: BRKDCT-2868


vSwitch Security

Promiscuous Mode = Reject prevents a port from capturing traffic whose destination address is not the VM's address

MAC Address Change = Reject prevents the VM from modifying its vNIC address

Forged Transmits = Reject prevents the VM from sending out traffic with a different source MAC (e.g. NLB)

vSwitch vs LAN Switch

Similarly to a LAN switch:
Forwarding based on MAC address
VM-to-VM traffic stays local
vSwitches tag traffic with the 802.1q VLAN ID
vSwitches are 802.1q capable
vSwitches can create Etherchannels
Preemption configuration (similar to Flexlinks, but no delay preemption)

Differently from a LAN switch:
No learning
No Spanning-Tree protocol
No dynamic trunk negotiation (DTP)
No 802.3ad LACP
Two Etherchannels backing up each other is not possible
No SPAN/mirroring capabilities: traffic capturing is not the equivalent of SPAN
Port Security limited

Page 17: BRKDCT-2868


Agenda

VMware LAN Networking
vSwitch Basics
NIC Teaming
vSwitch vs LAN Switch
Cisco/VMware DC Designs
SAN Designs

vSwitch and NIC Teaming Best Practices

Q: Should I use multiple vSwitches or multiple Port-Groups to isolate traffic?
A: We didn't see any advantage in using multiple vSwitches; multiple Port-Groups with different VLANs give you enough flexibility to isolate servers.

Q: Which NIC Teaming configuration should I use?
A: Active/Active, Virtual Port-ID based.

Q: Should I use EST or VST?
A: Always use VST, i.e. assign the VLAN from the vSwitch.

Q: Can I use the native VLAN for VMs?
A: Yes you can, but to make it simple, don't. If you do, do not TAG VMs with the native VLAN.

Q: Do I have to attach all NICs in the team to the same switch or to different switches?
A: With Active/Active Virtual Port-ID based, it doesn't matter.

Q: Should I use Beaconing?
A: No.

Q: Should I use Rolling Failover (i.e. no preemption)?
A: No, the default is good; just enable trunkfast on the Cisco switch.

Page 18: BRKDCT-2868


Cisco Switchport Configuration

Make it a Trunk
Enable Trunkfast
Do not enable Port Security (see next slide)
Make sure that "teamed" NICs are in the same Layer 2 domain
Provide a redundant Layer 2 path
Typically trunked: SC, VMKernel, VM Production VLANs

Can the Native VLAN be used for VMs? Yes, but IF you do, you have 2 options:
Configure VLAN ID = 0 for the VMs that are going to use the native VLAN (preferred)
Configure "vlan dot1q tag native" on the 6k (not recommended)

interface GigabitEthernetX/X
 description <<** VM Port **>>
 no ip address
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk native vlan <id>
 switchport trunk allowed vlan xx,yy-zz
 switchport mode trunk
 switchport nonegotiate
 no cdp enable
 spanning-tree portfast trunk
!

Configuration with 2 NICs: SC, VMKernel, Production Share NICs

[Diagram: ESX Server with vSwitch0 and two uplinks (VMNIC1, VMNIC2), both 802.1q trunks carrying the Production VLANs, Service Console, and VM Kernel; HBA1/HBA2 for storage. VST throughout; global NIC teaming Active/Active. Port-Group1 (VM1, VM2): Active/Active; Port-Group2 (Service Console): Active/Standby vmnic1/vmnic2; Port-Group3 (VM Kernel): Active/Standby vmnic2/vmnic1.]

Page 19: BRKDCT-2868


Configuration with 2 NICs: Dedicated NIC for SC and VMKernel, Separate NIC for Production

[Diagram: same two-uplink trunked layout; VST throughout, global teaming Active/Standby. Port-Group1 (VM1, VM2): Active/Standby vmnic1/vmnic2; Port-Group2 (Service Console) and Port-Group3 (VM Kernel): Active/Standby vmnic2/vmnic1.]

Network Attachment (1)

[Diagram: Catalyst1 (root, Rapid PVST+) and Catalyst2 (secondary root) joined by an 802.1q trunk; ESX server1 and ESX server2 each uplink VMNIC1 and VMNIC2 (links 1-4) as 802.1q trunks carrying Production, SC, and VMKernel, one uplink to each Catalyst. Trunkfast and BPDU guard on the server-facing ports. No blocked port, no loop; all NICs are used and traffic is distributed on all links.]

Page 20: BRKDCT-2868


Network Attachment (2)

[Diagram: typical Spanning-Tree V-shape topology - root and secondary root (Rapid PVST+) interconnected, each ESX server's VMNIC1/VMNIC2 802.1q trunks (Production, SC, VMKernel) split across the two switches. Trunkfast and BPDU guard. All NICs are used and traffic is distributed on all links.]

Configuration with 4 NICs: Dedicated NICs for SC and VMKernel

Redundant Production: Port-Group 1 (VM1, VM2) teamed Active/Active on vmnic1/vmnic2 for the Production VLANs
Dedicated NIC for SC
Dedicated NIC for VMKernel

How good is this design? If the dedicated SC/VMKernel NICs fail, VMs become completely isolated:
Management access is lost and VC cannot control the ESX Host
iSCSI access is lost and VMotion can't run
If this is part of an HA Cluster, VMs are powered down
If using iSCSI, this is the worst possible failure, very complicated to recover from
If this is part of a DRS cluster, it prevents automatic migration

[Diagram: ESX Server with a vswitch, uplinks VMNIC1-VMNIC4, Service Console and VM Kernel each on their own dedicated NIC; HBA1/HBA2 for storage.]

Page 21: BRKDCT-2868


Configuration with 4 NICs: Redundant Production, SC, and VMKernel Connectivity

Redundant Production: Active/Active vmnic1/vmnic3 for the Production VLANs
"Dedicated NICs" for SC and VMKernel: SC Active/Standby vmnic2/vmnic4, VMKernel Active/Standby vmnic4/vmnic2
HA augmented by teaming across different NIC chipsets; all links used

If chipset 1 fails: SC swaps to vmnic4 (VC can still control the Host), Production traffic goes to vmnic3; Production and Management go through chipset 2
If chipset 2 fails: VMKernel swaps to vmnic2, Production traffic continues on vmnic1; Production and Management go through chipset 1

[Diagram: ESX Server with a vswitch, uplinks VMNIC1-VMNIC4 (two per chipset), Port-Group 1 VMs, Service Console, VM Kernel; HBA1/HBA2 for storage.]

Network Attachment (1)

[Diagram: Catalyst1 (root, Rapid PVST+) and Catalyst2 (secondary root) interconnected; no blocked port, no loop. Each ESX server splits its four uplinks (links 1-8) across the two switches: 802.1q Production trunks on one pair, 802.1q SC and VMKernel trunks on the other. Trunkfast and BPDU guard on the server-facing ports.]

Page 22: BRKDCT-2868


Network Attachment (2)

[Diagram: typical Spanning-Tree V-shape topology (root and secondary root, Rapid PVST+); each ESX server's 802.1q Production and SC/VMKernel uplinks (links 1-8) are split across Catalyst1 and Catalyst2. Trunkfast and BPDU guard.]

How About?

[Diagram: the same V-shape topology with an alternative distribution of the 802.1q Production and SC/VMKernel uplinks (links 1-8) across Catalyst1 and Catalyst2.]

Page 23: BRKDCT-2868


4 NICs with Etherchannel: "Clustered" Switches

[Diagram: the two upstream switches are clustered and act as one; each ESX server bundles its uplinks (links 1-8) into Etherchannels - 802.1q Production on one bundle, 802.1q SC/VMKernel on the other - spanning both physical switches.]

VMotion Migration Requirements


Page 24: BRKDCT-2868


VMKernel Network Can Be Routed

[Diagram: ESX Server Host with Virtual Machines attached to the Production Network, management attached to the Mgmt Network, and the VM Kernel Network reaching a remote VM Kernel network across a routed hop.]
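As a sketch of the routed case on ESX 3.x, the VMkernel has its own TCP/IP stack with its own default gateway, settable from the service console (port-group name and addresses illustrative):

# VMkernel interface for VMotion/iSCSI
esxcfg-vmknic -a -i 10.0.200.11 -n 255.255.255.0 "VMkernel"
# Default gateway for the VMkernel stack
esxcfg-route 10.0.200.1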

VMotion L2 Design

[Diagram: ESX Host 1 in Rack1 and ESX Host 2 in Rack10; each host carries the vmkernel on vSwitch0, the Service Console on vSwitch1, and VMs (VM4-VM6) on vSwitch2, over uplinks vmnic0-vmnic3. The vmkernel VLAN is extended at Layer 2 between the racks so VMs can be VMotioned between the hosts.]

Page 25: BRKDCT-2868


HA clustering (1)

EMC/Legato AAM based; the HA Agent runs in every host
Heartbeats are unicast UDP, port ~8042 (4 UDP ports opened)
Heartbeats run on the Service Console ONLY
When a failure occurs, the ESX Host pings the gateway (on the SERVICE CONSOLE ONLY) to verify network connectivity
If the ESX Host is isolated, it shuts down the VMs, thus releasing the locks on the SAN

Recommendations:
Have 2 Service Consoles on redundant paths
Avoid losing SAN access (e.g. via iSCSI)
Make sure you know beforehand if DRS is activated too!

Caveats:
Losing Production VLAN connectivity only ISOLATES the VMs (there's no equivalent of uplink tracking on the vswitch)
Solution: NIC TEAMING
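A second Service Console (per the recommendation above) can be added from the ESX 3.x service console CLI; a minimal sketch, with the port-group name and addresses illustrative:

# Port-group for the second Service Console, ideally on different uplinks
esxcfg-vswitch -A "Service Console 2" vSwitch1
# Second SC interface, so HA heartbeats survive the loss of one path
esxcfg-vswif -a vswif1 -p "Service Console 2" -i 10.0.2.12 -n 255.255.255.0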

HA clustering (2)

[Diagram: ESX1 and ESX2 Server Hosts sharing three networks - COS 10.0.2.0, iSCSI access/VMkernel 10.0.200.0, Production 10.0.100.0 (uplinks vmnic0); VM1 and VM2 from ESX1 restart on ESX2.]

Page 26: BRKDCT-2868


Agenda

VMware LAN Networking
vSwitch Basics
NIC Teaming
vSwitch vs LAN Switch
Cisco/VMware DC Designs
SAN Designs

Multiple ESX Servers—Shared Storage


Page 27: BRKDCT-2868


VMFS

VMFS is a high-performance cluster file system for Virtual Machines:
Stores the entire virtual machine state in a central location
Supports heterogeneous storage arrays
Adds more storage to a VMFS volume dynamically
Allows multiple ESX Servers to access the same virtual machine storage concurrently
Enables virtualization-based distributed infrastructure services such as VMotion, DRS, HA

[Diagram: multiple ESX Servers and their Virtual Machines sharing VMFS volumes (e.g. A.vmdk) on shared storage.]

The Storage Stack in VI3

[Diagram: ESX 1 and ESX 2 each run VM1-VM5 over virtual disks VD1-VD5; the ESX Storage Stack (VSCSI, Disklib, VMFS, LVM) aggregates physical volumes, provisions logical containers, selectively presents them to VMs, and provides services such as snapshots; LUN 1-3 are reached through a SAN switch.]

A clustered, host-based VM and filesystem - analogous to how VI3 virtualizes servers
Looks like a SAN to VMs: a network of LUNs presented to a network of VMs

Page 28: BRKDCT-2868


Standard Access of Virtual Disks on VMFS

[Diagram: ESX1-ESX3 hosting VM1-VM6, all on VMFS1 over LUN1.]

The LUN(s) are presented to an ESX Server cluster via standard LUN masking and zoning. VMFS is a clustered volume manager and filesystem that arbitrates access to the shared LUN.

Data is still protected so that only the right application has access; the point of control moves from the SAN to the vmkernel, but there is no loss of security.

ESX Server creates virtual machines (VMs), each with their own virtual disk(s). The virtual disks are really files on VMFS. Each VM has a virtual LSI SCSI adapter in its virtual HW model and sees its virtual disk(s) as local SCSI targets, whether the virtual disk files sit on local storage, iSCSI, or Fibre Channel. VMFS makes sure that only one VM is accessing a virtual disk at one time.

With VMotion, CPU state and memory are transferred from one host to another, but the virtual disks stay still; VMFS manages the transfer of access from the source to the destination ESX Server.

Three Layers of the Storage Stack

Virtual disks (VMDK) - what the Virtual Machine sees
Datastores / VMFS volumes (LUNs) - what the ESX Server manages
Physical disks - inside the Storage Array

Page 29: BRKDCT-2868


ESX Server View of SAN

Fibre Channel disk arrays appear as SCSI targets (devices), which may have one or more LUNs

On boot, ESX Server scans for all LUNs by sending an inquiry command to each possible target/LUN number

The Rescan command causes ESX Server to scan again, looking for added or removed targets/LUNs

ESX Server can send normal SCSI commands to any LUN, just like a local disk

ESX Server View of SAN (Cont.)

Built-in locking mechanism ensures multiple hosts can access the same disk on the SAN safely

VMFS-2 and VMFS-3 are distributed file systems that do the appropriate on-disk locking to allow many ESX Servers to access the same VMFS

Storage is a resource that must be monitored and managed to ensure the performance of VMs:
Leverage 3rd-party systems and storage management tools
Use VirtualCenter to monitor storage performance from the virtual infrastructure point of view

Page 30: BRKDCT-2868


Choices in Protocol

FC, iSCSI or NAS?
Best practice to leverage the existing infrastructure
Not to introduce too many changes all at once
Virtual environments can leverage all types
You can choose what fits best and even mix them
Common industry perceptions and trade-offs still apply in the virtual world
What works well for one does not work for all

Which Protocol to Choose?

Leverage the existing infrastructure when possible
Consider customer expertise and ability to learn
Consider the costs (dollars and performance)
What does the environment need in terms of throughput: size for aggregate throughput before capacity
What functionality is really needed for Virtual Machines:
VMotion, HA, DRS (works on both NAS and SAN)
VMware Consolidated Backup (VCB)
ESX boot from disk
Future scalability
DR requirements

Page 31: BRKDCT-2868


FC SAN - Considerations

Leverage multiple paths for high availability
Manually distribute I/O-intensive VMs on separate paths
Block access provides optimal performance for large, high-transactional-throughput workloads
Considered the industrial-strength backbone for most large enterprise environments
Requires expertise in the storage management team
Expensive price-per-port connectivity
Increasing to 10 Gb throughput (soon)

iSCSI - Considerations

Uses standard NAS infrastructure; best practice to:
Have a dedicated LAN/VLAN to isolate from other network traffic
Use GbE or faster network
Use multiple NICs or iSCSI HBAs
Use an iSCSI HBA for performance environments
Use the SW initiator for cost-sensitive environments

Supports all VI3 features: VMotion, DRS, HA
ESX boot from HW initiator only
VCB is in experimental support today - full support shortly

Page 32: BRKDCT-2868


NFS - Considerations

Has more protocol overhead but less FS overhead than VMFS, as the NAS FS lives on the NAS head
Simple to define in ESX by providing:
NFS server hostname or IP
NFS share
ESX local datastore name
No tuning required for ESX, as most settings are already defined:
No options for rsize or wsize
Version is v3, protocol is TCP
Max mount points = 8 by default; can be increased to a hard limit of 32
Supports almost all VI3 features except VCB
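Those three parameters map directly onto the ESX 3.x service console command for mounting an NFS datastore; a minimal sketch, with the filer hostname, export path, and datastore label illustrative:

# Mount an NFS export (v3 over TCP) as a datastore
esxcfg-nas -a -o nfs-filer.example.com -s /vol/vmstore nfs_datastore1
# List the configured NAS datastores
esxcfg-nas -l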

Summary of Features Supported

Protocol              VMotion, DRS & HA   VCB    ESX boot from disk
FC SAN                Yes                 Yes    Yes
iSCSI SAN (HW init)   Yes                 Soon   Yes
iSCSI SAN (SW init)   Yes                 Soon   No
NFS                   Yes                 No     No

Page 33: BRKDCT-2868


Choosing Disk Technologies

Traditional performance factors:
Capacity / price
Disk types (SCSI, FC, SATA/SAS)
Access time; IOPS; sustained transfer rate
Drive RPM to reduce rotational latency
Seek time
Reliability (MTBF)

VM performance is ultimately gated by IOPS density and storage space
IOPS density = number of read IOPS/GB; higher = better

The Choices One Needs to Consider

FS vs. Raw: VMFS vs. RDM (when to use)
NFS vs. Block: NAS vs. SAN (why use each)
iSCSI vs. FC: what is the trade-off?
Boot from SAN: sometimes needed for diskless servers
Recommended size of LUN: it depends on application needs…
File system vs. LUN snapshots (host or array vs. VMware VMFS snapshots) - which to pick?
Scalability (factors to consider): # of hosts, dynamic adding of capacity, practical vs. physical limits

Page 34: BRKDCT-2868


Trade Offs to Consider

Ease of provisioning
Ease of on-going management
Performance optimization
Scalability - head room to grow
Function of 3rd-party services:
Remote mirroring
Backups
Enterprise systems management
Skill level of the administration team
How many shared vs. isolated storage resources

Isolate vs. Consolidate Storage Resources

RDMs map a single LUN to one VM
One can also dedicate a single VMFS volume to one VM
When comparing VMFS to RDMs, those two configurations are what should be compared
The bigger question is how many VMs can share a single VMFS volume without contention causing pain
The answer is that it depends on many variables:
Number of VMs and their workload type
Number of ESX servers those VMs are spread across
Number of concurrent requests to the same disk sector/platter

Page 35: BRKDCT-2868


Isolate vs. Consolidate

[Diagram: isolated storage - poor utilization, islands of allocations, more management; consolidated storage - increased utilization, easier provisioning, less management.]

Where Have You Heard This Before

Remember the DAS-to-SAN migration and the convergence of LAN and NAS: all the same concerns have been raised before:
What if the workload of some causes problems for all?
How will we know who is taking the lion's share of a resource?
What if it does not work out?

"The Earth is flat!" "If man were meant to fly, he would have wings." Our biggest obstacle is conventional wisdom!

Page 36: BRKDCT-2868


VMFS vs. RDM - RDM Advantages

Virtual machine partitions are stored in the native guest OS file system format, facilitating "layered applications" that need this level of access

As there is only one virtual machine on a LUN, you get a much finer-grained characterization of the LUN and no I/O or SCSI reservation lock contention; the LUN can be designed for optimal performance

With "Virtual Compatibility" mode, virtual machines have many of the features of being on a VMFS, such as file locking to allow multiple access, and snapshots

VMFS vs. RDM - RDM Advantages (Cont.)

With "Physical Compatibility" mode, a virtual machine gains the capability to send almost all "low-level" SCSI commands to the target device, including command and control of a storage controller, e.g. through SAN management agents in the virtual machine

Dynamic name resolution: stores unique information about the LUN regardless of changes to its physical address due to hardware or path changes

Page 37: BRKDCT-2868


VMFS vs. RDM - RDM Disadvantages

Not available for block or RAID devices that do not report a SCSI serial number

No snapshots in "Physical Compatibility" mode; snapshots are only available in "Virtual Compatibility" mode

Can be very inefficient in that, unlike VMFS, only one VM can access an RDM

RDMs and Replication

RDM-mapped RAW LUNs can be replicated to the remote site

RDMs reference the RAW LUNs via the LUN number and the LUN ID

VMFS3 volumes on the remote site will have an unusable RDM configuration if either property changes:
Remove the old RDMs and recreate them
Must correlate the RDM entries to the correct RAW LUNs
Use the same RDM file name as the old one to avoid editing the .vmx file

Page 38: BRKDCT-2868


Storage - Type of Access

RAW:
RAW may give better performance
RAW means more LUNs - more provisioning time
Advanced features still work

VMFS (preferred method):
Leverage templates and quick provisioning
Fewer LUNs means you don't have to watch Heap
Scales better with Consolidated Backup

Storage - How Big Can I Go?

One big volume or individual volumes?
Will you be doing replication? More granular slices will help
High-performance applications? Individual volumes could help

With Virtual Infrastructure 3, the VMDKs, swap, config files, log files, and snapshots all live on VMFS

Page 39: BRKDCT-2868


What Is iSCSI?

A SCSI transport protocol, enabling access to storage devices over standard TCP/IP networks
Maps SCSI block-oriented storage over TCP/IP, similar to mapping SCSI over Fibre Channel
"Initiators", such as an iSCSI HBA in an ESX Server, send SCSI commands to "targets" located in iSCSI storage systems

[Diagram: block storage carried over an IP network.]

VMware iSCSI Overview

VMware added iSCSI as a supported option in VI3: block-level I/O over TCP/IP using the SCSI-3 protocol
Supports both hardware and software initiators
GigE NICs MUST be used for SW initiators (no 100 Mb NICs)
Supports iSCSI HBAs (HW init) and NICs (SW init only) today
Check the HCL for supported HW initiators and SW NICs

What we do not support in ESX 3.0.1:
10 GigE
Jumbo frames
Multi Connect Session (MCS)
TCP Offload Engine (TOE) cards
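As a sketch of bringing up the software initiator on ESX 3.x (the target IP is illustrative, and the software iSCSI adapter name, vmhba40 here, varies by ESX version; a VMkernel port and Service Console connectivity on the iSCSI network are also required):

# Enable the software iSCSI initiator
esxcfg-swiscsi -e
# Add a send-targets discovery address on the array
vmkiscsi-tool -D -a 10.0.200.50 vmhba40
# Rescan the adapter for new LUNs
esxcfg-rescan vmhba40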

Page 40: BRKDCT-2868


VMware ESX Storage Options

[Diagram: VMs on DAS (SCSI), on FC SAN, and on iSCSI/NFS storage.]

80%+ of the install base uses FC storage
iSCSI is popular in the SMB market
DAS is not popular because it prohibits VMotion

Virtual Servers Share a Physical HBA

A zone includes the physical HBA and the storage array
Access control is delegated to the storage array ("LUN masking and mapping"); it is based on the physical HBA pWWN and is the same for all VMs
The hypervisor is in charge of the mapping; errors may be disastrous

[Diagram: virtual servers behind the hypervisor share one physical HBA (pWWN-P): a single login on a single point-to-point FC connection through an MDS 9000 to the storage array (LUN mapping and masking); the zone and the FC name server see only pWWN-P.]

Page 41: BRKDCT-2868


NPIV Usage Examples

[Diagram: two NPIV uses - "Intelligent Pass-thru", where the switch becomes an HBA concentrator (its NP_Port logs into upstream F_Ports), and Virtual Machine Aggregation using an NPIV-enabled HBA.]

Raw Device Mapping

RDM allows direct read/write access to disk
Block mapping is still maintained within a VMFS file
Rarely used, but important for clustering (MSCS supported)
Used with NPIV environments

[Diagram: VM1 on a VMFS virtual disk; VM2's RDM mapping file lives on VMFS and points at a raw FC LUN.]

Page 42: BRKDCT-2868


Storage Multi-Pathing

No storage load balancing, strictly failover

Two modes of operation dictate behavior (Fixed and Most Recently Used):

Fixed mode:
Allows definition of preferred paths
If the preferred path fails, a secondary path is used
If the preferred path reappears, it will fail back

Most Recently Used:
If the current path fails, a secondary path is used
If the previous path reappears, the current path is still used

Supports both Active/Active and Active/Passive arrays
Auto-detects multiple paths
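For reference, the discovered paths, their state, and the policy in effect can be inspected from the ESX 3.x service console (the policy and preferred path are normally set per LUN in the VI Client):

# List every LUN with its paths, path states, and policy (Fixed or MRU)
esxcfg-mpath -l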

Q and A


Page 43: BRKDCT-2868


Recommended Reading

[Two slides of recommended-reading book covers.]

Page 44: BRKDCT-2868


Complete Your Online Session Evaluation

Give us your feedback and you could win fabulous prizes; winners are announced daily.

Receive 20 Passport points for each session evaluation you complete.

Complete your session evaluation online now (open a browser through our wireless network to access our portal) or visit one of the Internet stations throughout the Convention Center.

Don't forget to activate your Cisco Live virtual account for access to all session material on-demand and to return for our live virtual event in October 2008. Go to the Collaboration Zone in the World of Solutions or visit www.cisco-live.com.