vSphere on NAS Design Considerations

Posted on 14-May-2015

DESCRIPTION

This presentation discusses some design considerations for running VMware vSphere on NFS.

TRANSCRIPT

Design Considerations for vSphere on NFS

Discussing some design considerations for using vSphere with NFS

Scott Lowe, VCDX 39, vExpert, Author, Blogger, Geek

http://blog.scottlowe.org / Twitter: @scott_lowe

Before we start

• Get involved! Audience participation is encouraged and requested.

• If you use Twitter, feel free to tweet about this session (use hashtag #VMUG or handle @SeattleVMUG)

• I encourage you to take photos or videos of today's session and share them online

• This presentation will be made available online after the event

Your name is familiar...

Agenda

• Some NFS basics

• Some link aggregation basics

• NFS bandwidth

• Link redundancy

• NFS and iSCSI interaction

• Routed NFS access

• Other considerations

Some NFS Basics

• All versions of ESX/ESXi use NFSv3 over TCP

• NFSv3 uses a single TCP session for data transfer

• This single session originates from one VMkernel port and terminates at the NAS IP interface/export

• vSphere 5 adds support for DNS round robin but still uses a single TCP session and only resolves the DNS name once (see the sketch below)

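To make the single-session point concrete, here is a minimal Python sketch (the NAS hostname and addresses are hypothetical) of why DNS round robin cannot add per-datastore bandwidth: the client resolves the name once, pins to whichever address comes back first, and every subsequent I/O rides that one TCP session.

    import socket

    # Hypothetical round-robin DNS name for a NAS array with several
    # data interfaces; successive lookups may return the A records in
    # a different order.
    NAS_NAME = "nas.example.com"
    NFS_PORT = 2049

    # The name is resolved once, at mount time...
    answers = socket.getaddrinfo(NAS_NAME, NFS_PORT, socket.AF_INET, socket.SOCK_STREAM)
    target_ip = answers[0][4][0]  # ...and the client pins to the first answer

    # All I/O for the datastore then shares this single TCP session, so
    # round robin can spread different hosts or datastores across NAS
    # interfaces, but never widens the pipe for any one datastore.
    session = socket.create_connection((target_ip, NFS_PORT))
    print("all traffic for this datastore flows to", target_ip)
    session.close()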

Some Link Aggregation Basics

• Requires unique hash values to place flows on a link in the bundle

• Identical hash values will always result in the same link being selected (see the sketch below)

• Does provide link redundancy

• Doesn't increase per-flow bandwidth, only aggregate bandwidth

• Need special support to avoid a single point of failure (SPoF)

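A minimal sketch of the hash-placement behavior, assuming a simplified XOR-of-IPs hash (real switches hash on their configured mix of MAC/IP/port fields, but the placement logic is the same):

    import ipaddress

    def select_uplink(src_ip: str, dst_ip: str, num_links: int) -> int:
        # Simplified IP-hash: XOR the addresses, modulo the bundle size.
        src = int(ipaddress.ip_address(src_ip))
        dst = int(ipaddress.ip_address(dst_ip))
        return (src ^ dst) % num_links

    # One VMkernel port talking to one NAS interface: the tuple never
    # changes, so the hash never changes, so the link never changes.
    for _ in range(3):
        print(select_uplink("10.0.0.11", "10.0.0.50", 2))  # same link every time

    # A second NAS interface is a different tuple and may land on the
    # other link: aggregate bandwidth grows, per-flow bandwidth does not.
    print(select_uplink("10.0.0.11", "10.0.0.51", 2))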

NFS Bandwidth

• Can't use link aggregation to increase per-datastore bandwidth

• Can't use DNS round robin to increase per-datastore bandwidth

• Can't use multiple VMkernel NICs to increase per-datastore bandwidth

• Must move to a faster network transport (from 1Gb to 10Gb Ethernet, for example); see the worked numbers below

• That being said, most workloads are not bandwidth-constrained

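As a rough worked example of what the transport cap means (line-rate arithmetic only; TCP and NFS protocol overhead lowers real-world throughput):

    # The single TCP session means the link's line rate is the hard
    # ceiling for one datastore, no matter how many uplinks are bundled.
    def per_datastore_ceiling_mb_s(link_gbps: float) -> float:
        return link_gbps * 1000 / 8  # gigabits/s -> megabytes/s

    print(per_datastore_ceiling_mb_s(1))   # ~125 MB/s on 1GbE
    print(per_datastore_ceiling_mb_s(10))  # ~1250 MB/s on 10GbE: the only
                                           # way to raise the per-datastore cap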

Link Redundancy

• No concept of multipathing; link redundancy must be managed at the network layer

• No concept of multiple active "paths" per datastore

• Link aggregation helps but is not required

NFS and iSCSI Interaction

• iSCSI traffic is generally "pinned" to specific uplinks via port binding/multipathing configuration; not so for NFS traffic

• Traffic could "cross" uplinks under certain configurations

• Need to keep the two separated with:

• Per-port group failover configurations

• Separate vSwitches

• Separate IP subnets for iSCSI and NFS traffic (see the sketch below)

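A minimal sketch of why separate subnets keep each protocol on its intended VMkernel port (subnets and addresses are hypothetical): egress selection is effectively a route lookup, so a storage target can only ever match the interface whose subnet contains it.

    import ipaddress

    # Hypothetical VMkernel layout: one interface per storage protocol,
    # each on its own subnet.
    VMK_SUBNETS = {
        "vmk1 (iSCSI)": ipaddress.ip_network("192.168.10.0/24"),
        "vmk2 (NFS)": ipaddress.ip_network("192.168.20.0/24"),
    }

    def egress_interface(target_ip: str) -> str:
        target = ipaddress.ip_address(target_ip)
        for name, subnet in VMK_SUBNETS.items():
            if target in subnet:
                return name
        return "default gateway (traffic would be routed!)"

    print(egress_interface("192.168.10.50"))  # iSCSI target -> vmk1, never crosses
    print(egress_interface("192.168.20.50"))  # NFS target   -> vmk2, never crosses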

Routed NFS Access

• Supported as of vSphere 5.0 U1

• Be sure to use an FHRP (HSRP or VRRP) for gateway redundancy and apply QoS where needed

• Can't use IPv6 or vSphere Distributed Switch (VDS)

• Be sure latency won't be an issue (WAN routing not supported); see the probe sketch below

• More information available at http://blogs.vmware.com/vsphere/2012/06/vsphere-50-u1-now-supports-routed-nfs-storage-access.html
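
As a quick sanity check on the latency point, a throwaway probe like this (target address hypothetical) times a TCP handshake to the NFS port; keep the result LAN-class, since WAN-class latency and WAN routing are off the table:

    import socket
    import time

    def tcp_handshake_ms(host: str, port: int = 2049) -> float:
        # Times a TCP connect to the NFS port as a cheap latency proxy.
        start = time.perf_counter()
        socket.create_connection((host, port), timeout=2).close()
        return (time.perf_counter() - start) * 1000

    print(f"handshake took {tcp_handshake_ms('192.168.30.50'):.1f} ms")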

Other Considerations

• Thin-provisioned VMDKs: need the vendor's VAAI-NFS plugin to do thick-provisioned VMDKs

• Datastore sizing: SCSI locking is not an issue, but still need to consider:

• Underlying disk architectures/layout and IOPS requirements (see the sizing sketch below)

• Ability to meet RPO/RTO

• Jumbo frames: can be useful, but not necessarily required

• ESXi configuration recommendations: follow vendor-provided recommended practices

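As a back-of-the-envelope illustration of the sizing point (the per-spindle IOPS figures and per-VM workload below are rough rules of thumb, not vendor data):

    # Without SCSI locking as a limit, the practical ceiling for an NFS
    # datastore is the IOPS the underlying disk layout can deliver.
    IOPS_PER_DISK = {"7.2k SATA": 80, "10k SAS": 120, "15k SAS": 180}

    def backend_iops(disk_type: str, num_disks: int) -> int:
        return IOPS_PER_DISK[disk_type] * num_disks

    def max_vms(disk_type: str, num_disks: int, iops_per_vm: int) -> int:
        return backend_iops(disk_type, num_disks) // iops_per_vm

    # e.g., 24 x 10k SAS spindles behind the export, VMs averaging 50 IOPS:
    print(backend_iops("10k SAS", 24))  # 2880 IOPS available
    print(max_vms("10k SAS", 24, 50))   # ~57 such VMs before the spindles saturate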

Questions & Answers

One more thing...

Coming to VMworld?

• If you're coming to VMworld (and you should be!), consider bringing your spouse/partner with you!

• Spousetivities will be offering planned, organized activities for spouses/partners/friends traveling with VMworld conference attendees

• See http://spousetivities.com for more information

Thank you!
