Physical to Virtual: Leveraging VMware's new vSphere at the Faculty of Medicine
Jean-Ray Arseneau, Supervisor Analyst – Information Technology
Faculty of Medicine
Alternative titles (from colleagues)...
• Physical to Virtual: He’s only been here a few months and we already don’t recognize the infrastructure
• Physical to Virtual: How to turn an existing quarter of a million dollar infrastructure into a doorstop
• Physical to Virtual: You want to do what now?
Original Project Goals
• The original project goal was to have a backup infrastructure for our corporate servers in case of failure;
• Servers were 5+ years old and hardware was beginning to fail;
• Originally wanted to P2V (Physical-to-Virtual) the servers on a nightly basis in order to restore them in a virtual infrastructure in case of hardware failure (emergency scenario only);
• Needed a quick way to restore the entire server state in the case of hardware/software malfunction or disaster;
• Wanted to implement our own "in-house" server backup solution, either to tape or to disk.
Like any Systems Analyst....
• I wasn't satisfied...
• The project needed more "oomph";
• This was the perfect time to migrate our corporate servers to a virtual infrastructure;
• VMware's technology is proven and used in enterprise environments such as:
– Citigroup (US banking enterprise)
– City of Ottawa
– Government of Canada
• Perfect opportunity to consolidate our servers, lower our power/cooling footprint, make space in our datacenter, facilitate management of our servers and, best of all, lower costs.
• Given everything listed above, backing up server states will be seamless and will integrate well into the environment without creating too many headaches.
Pre-Virtualization: Servers
                          Physical
Corporate Servers         20
Physical Servers          20 (spread across 2 BladeCenters)
CPU Capacity              240 GHz
Memory Capacity           80 GB
Max CPU Consumption       10 GHz
Max Memory Consumption    48 GB
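The gap between capacity and consumption in the table above is the whole case for consolidation. A quick back-of-the-envelope check, using only the figures from the table:

```python
# Peak utilization of the physical fleet, from the table above.
cpu_capacity_ghz = 240.0
cpu_peak_ghz = 10.0
mem_capacity_gb = 80.0
mem_peak_gb = 48.0

cpu_util = cpu_peak_ghz / cpu_capacity_ghz  # ~0.04: barely 4% of CPU ever used
mem_util = mem_peak_gb / mem_capacity_gb    # 0.60: memory is the real constraint

print(f"Peak CPU utilization:    {cpu_util:.1%}")
print(f"Peak memory utilization: {mem_util:.1%}")
```

At roughly 4% peak CPU utilization, 20 physical servers were mostly idle, which is why the virtual design later in this deck sizes for memory first.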
Pre-Virtualization: SAN
                             Physical
Storage subsystems           3
Total available SAN storage  11 TB
Provisioned storage          8.8 TB
Used storage                 3.8 TB
Room for expansion           2 TB
Planning our new infrastructure
• How many ESXi hosts (servers running VMware's hypervisor OS) do we need vs. how many should we have?
– S.R. Hadden quote: http://www.youtube.com/watch?v=8-1n1BliRQ8
– How much CPU/RAM should each of these servers have?
– Do we plan for future expansion?
• How do we restructure our SAN? Do we restructure our SAN at all?
– Do we stick with RDMs (raw LUNs), move solely to VMDKs, or use a combination of both?
• What kind of networking do we want? How many "pipes" are going to be fed into each ESX host? Do we have enough bandwidth?
Planning our new infrastructure (con’t)
• What type of VMware licenses do we require? What features do we want in our infrastructure?
• What will we be migrating over to the virtual datacenter? Do we move researchers as well?
If you use SPF1, expect to get burned. No Single Point of Failure
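The "how many hosts do we need vs. should we have" question above can be sketched as a simple N+1 admission check: enough hosts to carry peak load even with one host down. The per-host figures below are back-solved from the comparison tables later in this deck (44.8 GHz and 128 GB across 2 hosts); the helper function is illustrative, not a VMware tool:

```python
# Illustrative N+1 sizing check. Per-host specs assume 2 identical hosts
# totalling 44.8 GHz / 128 GB, as in the comparison table later in the deck.
import math

host_cpu_ghz = 22.4
host_mem_gb = 64.0

peak_cpu_ghz = 10.0  # peak consumption from the pre-virtualization figures
peak_mem_gb = 48.0

def hosts_needed(peak, per_host, failover=1):
    """Smallest N such that (N - failover) hosts still cover the peak."""
    return math.ceil(peak / per_host) + failover

n = max(hosts_needed(peak_cpu_ghz, host_cpu_ghz),
        hosts_needed(peak_mem_gb, host_mem_gb))
print(f"Minimum hosts (N+1, no single point of failure): {n}")
```

Both CPU and memory peaks fit on a single host of this size, so two hosts satisfy the no-single-point-of-failure rule — which matches the infrastructure actually built.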
“Virtual” Datacenter Advantages
• Central point of management for deploying and managing corporate servers
• Deploy Windows or Linux servers from templates in 20-30 minutes instead of hours
• Quickly setup environments for various testing scenarios (great for technicians)
• “Snapshot” servers to easily recover the state of a server prior to doing any major software upgrades
• Provide high availability and dynamic resource scheduling on your servers (more later...)
• Easily add more resources to those servers requiring more (in the case of Windows 2008+, hot-add memory and virtual CPUs)
“Virtual” Datacenter Advantages (con’t)
• Managing SAN storage becomes easier (if using VMDK instead of RDM)
• Prioritize which servers get more "shares"/"slices" of the pie in case the ESX host gets bogged down*
• Perform maintenance/upgrades/configuration changes on physical ESX hosts while providing zero downtime to your clients
• Redundant systems in addition to High Availability and Dynamic Resource Scheduling provide increased availability to your clients
• Virtual hardware doesn’t fail
VMware vSphere Features (that make our lives easier...)
• High Availability (HA)
– Provides the guest/server OS with a "heartbeat". If the heartbeat is lost, the VM is automatically restarted (and on a different host if the ESX host itself fails). This happens within 30 seconds of the failure.
• Dynamic Resource Scheduling (DRS)
– Allows a system administrator to schedule resource policies to balance the load on a cluster of ESX servers.
– Can "power down" (sleep) unused ESX servers during non-peak periods and automatically "power up" these servers during a spike in usage.
– Allows you to set affinity rules for VMs that must be kept separate or together (ie: MSCS/SPF... DFS example)
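The HA behaviour described above — restart a VM once its heartbeat goes silent for the timeout window — can be sketched as a small simulation. This is an illustration of the concept, not VMware's implementation; the VM name is made up:

```python
# Toy simulation of HA heartbeat monitoring (not VMware's actual code).
import time

HEARTBEAT_TIMEOUT = 30.0  # seconds of silence before HA reacts

class VM:
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()
        self.running = True

    def beat(self):
        """Guest tools would call this periodically while healthy."""
        self.last_heartbeat = time.monotonic()

def ha_check(vm, now=None):
    """Restart the VM if its heartbeat is older than the timeout."""
    now = time.monotonic() if now is None else now
    if now - vm.last_heartbeat > HEARTBEAT_TIMEOUT:
        vm.running = False        # failure detected
        vm.last_heartbeat = now   # restart: heartbeat resumes
        vm.running = True
        return f"{vm.name}: heartbeat lost, VM restarted"
    return f"{vm.name}: healthy"

vm = VM("fileserver01")           # hypothetical VM
print(ha_check(vm))                                # healthy
print(ha_check(vm, now=vm.last_heartbeat + 31))    # simulated 31s of silence
```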
VMware vSphere Features (con’t)
• Dynamic Resource Scheduling (DRS) (con't...)
– Allows you to set power-on policies and power-on order when restarting VMs after cluster failures.
– Automatically shuffles VMs (using vMotion) across the cluster to compensate for required resources.
• vMotion and Storage vMotion
– vMotion: “Live”/”Hot” migrate a VM off one ESX host and onto another ESX host without disrupting the server/application being migrated
– Storage vMotion: “Live”/”Hot” migrate a VM’s disk file (.vmdk) from one storage LUN to another storage LUN*
VMware vSphere Features (con’t)
• Alert Management
– vSphere can monitor various virtual and physical aspects of the virtual datacenter and notify the appropriate personnel via e-mail/SNMP:
• VM CPU/memory usage
• Host CPU/memory usage
• SAN usage/connectivity
• Migration/DRS issues, automatic migration events
• Physical components: memory, CPU, battery, power supply, fans, hard disks
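The alarm logic behind these notifications boils down to threshold checks per metric. A minimal sketch, in the spirit of vSphere's usage alarms — the metric names and threshold values here are made up for illustration:

```python
# Illustrative alarm evaluation; thresholds and metric names are assumptions,
# not vSphere's defaults.
THRESHOLDS = {
    "host.cpu.usage":    {"warn": 0.75, "alert": 0.90},
    "host.memory.usage": {"warn": 0.80, "alert": 0.95},
    "datastore.usage":   {"warn": 0.75, "alert": 0.85},
}

def evaluate(metric, value):
    """Map a 0..1 usage reading to an alarm state."""
    t = THRESHOLDS[metric]
    if value >= t["alert"]:
        return "alert"  # e.g. e-mail + SNMP trap
    if value >= t["warn"]:
        return "warn"   # e.g. e-mail only
    return "ok"

print(evaluate("datastore.usage", 0.80))  # warn
print(evaluate("host.cpu.usage", 0.50))   # ok
```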
Physical vs. Virtual Comparison: Servers
                     Physical                              Virtual     Notes
Corporate Servers    20                                    20
Physical Servers     20 (spread across 2 BladeCenters)     2
CPU Capacity         240 GHz                               44.8 GHz
Memory Capacity      80 GB                                 128 GB
CPU Consumption      <10 GHz                               ~5-8 GHz    Spread across 32 VMs
Memory Consumption   <48 GB                                ~43-48 GB   Spread across 32 VMs
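The comparison works out to a 10:1 host consolidation. Quick arithmetic from the table above:

```python
# Consolidation arithmetic, figures taken from the comparison table.
physical_hosts = 20
virtual_hosts = 2
vms = 32

ratio = physical_hosts / virtual_hosts  # 10:1 host consolidation
vms_per_host = vms / virtual_hosts      # 16 VMs per ESXi host

# CPU capacity shrank from 240 GHz to 44.8 GHz, yet peak demand (~8 GHz) still fits:
headroom = (44.8 - 8.0) / 44.8

print(f"Host consolidation: {ratio:.0f}:1, {vms_per_host:.0f} VMs/host")
print(f"CPU headroom at peak: {headroom:.0%}")
```

Even after shedding more than 80% of raw CPU capacity, the cluster keeps roughly 82% headroom at peak — the idle capacity of the physical fleet was simply eliminated.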
Physical vs. Virtual Comparison: SAN
                              Physical   Virtual   Notes
Storage subsystems            3          3
Total available SAN storage   11 TB      11 TB     Growing to 16 TB "soon"
Provisioned storage           8.8 TB     8.8 TB
Used storage                  3.8 TB     3.8 TB
Room for expansion            2 TB       7.2 TB*   *Space is utilized on the SAN on an "as-needed" basis using thin-provisioned disks
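The jump in "room for expansion" comes entirely from thin provisioning: thick disks reserve everything provisioned up front, thin disks only consume what is actually written. The arithmetic, using the table's figures (the physical table rounds 2.2 TB down to 2 TB):

```python
# Why thin provisioning changes "room for expansion" (figures from the table).
total_tb = 11.0
provisioned_tb = 8.8
used_tb = 3.8

thick_room = total_tb - provisioned_tb  # thick disks reserve all provisioned space
thin_room = total_tb - used_tb          # thin disks only consume written blocks

print(f"Thick-provisioned free space: {thick_room:.1f} TB")
print(f"Thin-provisioned free space:  {thin_room:.1f} TB")
```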
The Infrastructure At Work...
• Sept 14, 2009 – AM
– Experienced an ESXi host failure – lost one major file server, the primary web server* and about 8 other servers
– All servers were rebooted and back online within 2 minutes, minimizing client interruptions
• Almost weekly:
– E-mails from the system indicating disk usage, resource usage, errors, etc.
• Deploy vApp – specifically vCMA (mixed with other apps)
– For true admins who are always on the go.
– http://screencast.com/t/OfHCsgNx
• vMotion demo (time permitting...)
Current Project Status
• Virtual "datacenter" fully implemented
• SAN is 75% re-carved for the new infrastructure, recovering terabytes of unused space
• Backup solutions are currently being tested – not all software supports vSphere or ESXi 4.0 yet, so more testing is still to come.
– Moving from raw LUNs to VMDKs permits us to easily back up and restore a server state very quickly, or to send it offsite to a DR site (future project?).
– VMware currently offers a backup solution – still buggy, but included in vSphere as the "VMware Data Recovery" vApp – which allows backup to a LUN or to CIFS (onsite/offsite). Promising; we will have to continue evaluating it.
Questions...?