Measuring Nexsan Performance and Compatibility in Virtualized Environments

openBench Labs Analysis: Data Center Converged SAN Infrastructure

Author: Jack Fegreus, Ph.D., Managing Director, openBench Labs
http://www.openBench.com
September 15, 2010

Jack Fegreus is Managing Director of openBench Labs and consults through Ridgetop Research. He also contributes to InfoStor, Virtual Strategy Magazine, and Open Magazine, and serves as CTO of Strategic Communications. Previously he was Editor in Chief of Open Magazine, Data Storage, BackOffice CTO, Client/Server Today, and Digital Review. Jack also served as a consultant to Demax Software and was IT Director at Riley Stoker Corp. Jack holds a Ph.D. in Mathematics and worked on the application of computers to symbolic logic.

Table of Contents

Executive Summary
VOE Test Scenario
SASBeast Performance Spectrum
Customer Value

Executive Summary

The mission of IT is to get the right information to the right people in time to create value or mitigate risk. With this in mind, the growing use of digital archiving, rich media in corporate applications, and Virtual Operating Environments (VOEs), such as VMware® vSphere™, is driving double-digit growth in the volume of data stored. That has made data storage the cornerstone of IT strategic plans for reducing capital expense (CapEx) and operating expense (OpEx) resource costs.

To meet the needs of IT at small to medium enterprise (SME) sites, Nexsan has evolved its line of SASBeast™ storage arrays around a highly flexible software architecture that can be integrated to the point of transparency in a Windows Server 2008 R2 environment. These Nexsan arrays can support a full hierarchy of SSD, SAS, and SATA drives in complex SAN fabrics that utilize both Fibre Channel and iSCSI paths. For SME sites, a SASBeast can provide multiple storage targets that support a wide range of application-specific requirements stemming from Service Level Agreements (SLAs).

The robust architecture of the Nexsan SASBeast provides IT with a single platform that can satisfy a wide range of storage metrics with respect to access (IOPS), throughput (MB per second), or capacity (price per GB). Nonetheless, helping IT contain traditional CapEx provisioning costs through the deployment of hierarchical storage resources is only the starting point for the business value proposition of the SASBeast. Through tight integration of the Nexsan Management Console with the Microsoft Management Console (MMC) and Virtual Disk Services (VDS), a Nexsan SASBeast presents IT administrators with a unified SAN management suite to cost-effectively manage the reliability, availability, and scalability (RAS) of multiple petabytes of data. This is particularly important for OpEx costs, which typically are 30 to 40 times greater than CapEx costs over the life of a storage array.

“For an SME site to successfully implement a VOE, Nexsan provides a storage infrastructure that is capable of efficiently supporting the characteristic I/O patterns that distinguish a VOE host server to deliver optimal performance.”

openBench Labs Test Briefing: Nexsan SASBeast® Enterprise Storage Array

1) Enhance Administrator Productivity: An embedded Web-based utility enables the management of multiple storage arrays from one interface, which can be integrated with the Microsoft Management Console and the Virtual Disk Service to provide a complete single-pane-of-glass storage management interface.

2) Maximize Density and Reliability with Hierarchical Storage: A 4U chassis supports any mix of 42 SSD, SAS, and SATA drives mounted vertically—front-to-front and back-to-back—to cancel rotational vibrations, reduce head positioning errors, optimize thermal operations, and extend drive life.

3) Maximize Energy Savings: AutoMAID® (Massive Array of Idle Disks) technology automates the placing of drives in a hierarchy of idle states to conserve energy, while maintaining near-instantaneous access to data.

4) Maximize I/O Performance: Dual active-active RAID controllers support 42 simultaneously active drives:

Iometer Streaming I/O Benchmark: Total full-duplex throughput reached 1GB per second, while simultaneously streaming 128KB reads and 128KB writes using three SAS- and one SATA-based RAID-5 volumes.

Iometer I/O Operations Benchmark: 4KB reads and writes (80/20 percent mix) averaged 2,330 IOPS on a SAS RAID-5 volume and scaled to 4,380 IOPS with a second volume.
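The streaming numbers cited here come from Iometer. As a rough, toy analogue of what such a sequential benchmark measures, the sketch below times 128KB block reads against a scratch file. The file size, path, and block size are arbitrary choices for illustration; this is not Iometer itself:

```python
import os, tempfile, time

BLOCK = 128 * 1024          # 128KB requests, as in the Iometer streaming test
TOTAL = 8 * 1024 * 1024     # a small 8MB file keeps this sketch fast

# Create a scratch file, then read it back sequentially.
path = os.path.join(tempfile.mkdtemp(), "stream.dat")
with open(path, "wb") as f:
    f.write(os.urandom(TOTAL))

start = time.perf_counter()
read = 0
with open(path, "rb") as f:
    while chunk := f.read(BLOCK):
        read += len(chunk)
elapsed = time.perf_counter() - start

print(f"read {read // 1024}KB in {elapsed:.4f}s "
      f"({read / elapsed / 1e6:.0f} MB/sec)")
```

A real benchmark would also vary queue depth and worker count, which is how the report's multi-worker full-duplex figures were produced.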

Nexsan’s simplified storage management and task automation is particularly important when implementing a VOE, which introduces a complete virtualization scheme involving servers, storage, and networks. VOE virtualization with multiple levels of abstraction can complicate important IT administration functions. Reflecting these problems, IDG, in a recent survey of CIOs implementing server virtualization, reported that the percentage of CIOs citing an increase in the complexity of datacenter management jumped from 47 percent at the end of 2008 to 67 percent at the end of 2009.

Virtualization difficulties are often exacerbated by multiple incompatible advanced point solutions, which often come as extra-cost options of storage products. These powerful proprietary features are particularly problematic for IT at SME sites. Features designed to resolve complex issues encountered in large datacenters frequently only introduce incompatibilities among interdependent resources and limit the benefits that SME sites can garner from a VOE, which on its own provides IT with sufficiently robust and easy-to-use solutions to deal with the intricacies of hypervisor architecture.

For an SME site to successfully implement a VOE, Nexsan provides a storage infrastructure that is capable of efficiently supporting the characteristic I/O patterns that distinguish a VOE host server. With such a foundation in place, IT is free to use the comprehensive virtualization features of their VOE to provision resources for VMs, commission and decommission VM applications, and migrate VMs among multiple hosts in real time to meet changing resource demands. What’s more, advanced third-party applications designed for a VOE are far more likely to recognize VOE solutions for such issues as thin provisioning than hardware-specific solutions.

To meet the exacting demands of multiple IT environments, including that of a VOE, a Nexsan SASBeast provides IT with a storage resource fully optimized for reliability and performance. Each physical unit features design innovations to extend the lifespan of installed disk drives. The design of the SASBeast also promotes infrastructure scale-out, as each additional unit also adds controllers and ports to maintain performance.

More importantly, the scaling-out of a storage infrastructure with SASBeast units has a minimal impact on IT overhead, which is the key driver of OpEx costs. Each SASBeast comes with an embedded Web-enabled Graphical User Interface (GUI), dubbed NexScan®, which allows IT to provision a single subsystem with a hierarchy of drive types. Furthermore, NexScan simplifies administrator tasks in a Windows Server environment through tight integration of its management software with MMC and VDS for end-to-end storage management. With NexScan, an administrator can provision a logical disk on any SASBeast, export it to a Windows Server, and provision the server with that logical disk from a single interface.

VOE Test Scenario

“While working with a range of Windows tools for administrators, such as Server Manager and Storage Manager for SANs, we were able to directly configure and manage storage LUNs for both the FC and the iSCSI fabrics without having to open up a separate Nexsan GUI.”

I/O RANDOMIZATION IN A VOE

With server virtualization rated as one of the best ways to optimize resource utilization and minimize the costs of IT operations, many sites run eight or more server VMs on each host in a production VOE. As a result, VOE host servers must be able to deliver higher I/O throughput loads via fewer physical connections.

In stark contrast to a VOE, traditional core-driven SAN fabrics are characterized by a few storage devices with connections that fan out over multiple physical servers. Each server generates a modest I/O stream, and multiple servers seldom access the same data simultaneously. From a business software perspective, the I/O requirements for a VM are the same as those for a physical server. From an IT administrator’s perspective, however, I/O requirements for VOE hosts are dramatically different. In a VOE, a small number of hosts share a small number of large datastores, while the hosts aggregate and randomize all of the I/O from multiple VMs.
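The randomization effect described above can be illustrated with a small simulation: each hypothetical VM issues a perfectly sequential stream of logical block addresses, yet once the host interleaves the streams onto one physical path, essentially no back-to-back requests on the wire remain sequential. The VM count and stream lengths are arbitrary:

```python
from itertools import cycle

# Each simulated VM issues a purely sequential stream of block addresses.
vm_streams = {vm: iter(range(vm * 100_000, vm * 100_000 + 250))
              for vm in range(8)}   # eight VMs, a typical VOE host load

# The host interleaves the streams round-robin onto one physical path.
merged = []
for vm in cycle(range(8)):
    try:
        merged.append(next(vm_streams[vm]))
    except StopIteration:
        break

# Fraction of wire requests that are sequential to their predecessor.
seq = sum(1 for a, b in zip(merged, merged[1:]) if b == a + 1)
print(f"{seq / (len(merged) - 1):.0%} of merged requests are sequential")
# → 0% of merged requests are sequential
```

Eight well-behaved sequential workloads thus present the array with what is effectively a random access pattern, which is why the report biases cache and drive choices toward random I/O for VOE hosts.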

Elevated I/O stress also impacts the I/O requirements of VOE support servers. In particular, servers used for backup should be capable of handling the logical disks associated with multiple hosted VMs in parallel.



At the heart of our vSphere 4.1 VOE test environment, we ran a mix of twelve server and workstation VMs. To provide a typical SME infrastructure, we utilized a Nexsan SASBeast, along with a hybrid SAN topology that featured a 4Gbps FC fabric and a 1GbE iSCSI fabric. With the price convergence of 8Gbps and 4Gbps FC HBAs, SASBeast systems are now shipping with a new generation of 8Gbps FC ports.

While working with a range of Windows tools for administrators, such as Server Manager and Storage Manager for SANs, we were able to directly configure and manage storage LUNs for both the FC and the iSCSI fabrics without having to open up a separate Nexsan GUI. What’s more, the Nexsan software, which treats the virtualization of storage over FC and iSCSI fabrics as simple transport options, made it very easy to switch back and forth between fabric connections.

While physically setting up the SASBeast, a number of design elements that enhance reliability stood out. Storage reliability is especially important in a VOE, as impaired array processing for a rebuild, or worse the loss of an array, cascades from one host server down to multiple VMs.

NEXSAN ISCSI & FC CONVERGENCE

Among the ways that Nexsan simplifies SAN management is through the convergence of iSCSI and Fibre Channel devices. Nexsan treats these connections as two simple transport options. Within the Nexsan GUI, we readily shared volumes that were used as VOE datastores by ESXi hosts and a Windows server that was used to run backup software. More importantly, the SASBeast facilitated our ability to switch between a local Disaster Recovery (DR) scenario, in which both the VOE host and the Windows server connected to the datastore volume over the FC fabric, and a remote DR scenario, in which the Windows server connected to the datastore via our iSCSI fabric.

To extend the service life of disk drives, the SASBeast positions disks vertically in opposing order—alternating between front-to-front and then back-to-back. This layout dampens the natural rotational vibrations generated by each drive. Mounting all of the drives in parallel tends to amplify these vibrations and induce head positioning errors on reads and writes. Head positioning errors are particularly problematic in a VOE, which is characterized by random small-block I/O requests. In such an environment, data access time plays a greater role than transfer time in I/O service.

That vertical disk layout also helps create a positive-pressure air flow inside the unit. High-efficiency waste-heat transfer in a storage chassis is dependent on molecular contact as air flows over external drive surfaces. As a result, air pressure is just as important as air flow for proper cooling.

To facilitate testing, our Nexsan SASBeast was provisioned with twenty-eight 15K rpm SAS drives and fourteen 2TB SATA drives. To configure internal arrays and provide external access to target volumes, our SASBeast was set up with two controllers, which could be used to create both SAS and SATA arrays and which featured a pair of FC and a pair of iSCSI ports per controller. Also embedded in the unit was version 1.5.4 of the Nexsan management software, which we tightly integrated with MMC on all of our Windows-based servers. Using this storage infrastructure, we were able to provide our VOE and physical server environments with storage hierarchies designed to meet robust sets of application-specific SLA metrics.

In particular, we configured three RAID arrays on the SASBeast: two arrays utilized SAS drives and one array utilized SATA drives. For optimal IOPS performance, we created a 7.2TB RAID-5 SAS array on controller 0. Then, on controller 1, we created a 6.6TB RAID-6 SAS array for higher availability.

VOE CONSOLIDATION

We implemented our vSphere™ 4.1 VOE on a quad-processor HP ProLiant® DL580 server running the VMware ESXi™ 4.1 hypervisor. This server hosted 12 VMs running a mix of operating systems, which included Windows Server® 2008, Windows Server 2003, SUSE Linux Enterprise Server 11, and Windows 7. Within our VOE, we set up a storage hierarchy that was undergirded by three central datastores, one created on each of the three arrays set up on the Nexsan SASBeast.

To manage VM backups, we installed Veeam Backup & Replication v4.1 on a quad-core Dell® PowerEdge® 1900 server, which ran Windows Server 2008 R2 and shared access to each datastore mapped to the VOE host. We tested datastore access over both our FC fabric, which represented a local DR scenario, and over our iSCSI fabric, which represented a remote DR scenario. In addition, we mapped another RAID-5 SATA volume to the Dell PowerEdge server to store backup images of VMs.

The number of VMs that typically run on a VOE host, along with the automated movement of those VMs among hosts as a means to balance processing loads, puts a premium on storage resources with low I/O latency. What’s more, increasing the VM density on a host also serves to further randomize I/O requests as the host consolidates multiple data streams from multiple VMs.

To handle the randomized I/O patterns of a VOE host with multiple VMs, the Nexsan SASBeast provides a highly flexible storage infrastructure. In addition to being able to provision a SASBeast with a wide range of disk drive types, administrators have an equally broad choice of options from which to configure access policies for internal arrays and external volumes.

Using a Nexsan SASBeast with dual controllers, IT administrators are free to assign any RAID array that is created to either controller. Independent of array ownership, administrators set up how logical volumes created on those arrays are accessed over Fibre Channel and iSCSI SAN fabrics. In particular, a SASBeast with two controllers and two FC ports on each controller can present four distinct paths to each volume exposed.

Without any external constraints, four distinct paths to a storage device will create what appear to be four independent devices on the client system. That presents an intolerable situation to most operating systems, which assume exclusive ownership of a device. To resolve this problem, the simple solution is to use an active-passive scheme for ports and controllers that enables only one path at a time. That solution, however, precludes load balancing and link aggregation.

Nexsan provides IT administrators with a number of options for sophisticated load balancing via multipath I/O (MPIO). The range of options for each unit is set within the Nexsan GUI. Volume access can be restricted to a simple non-redundant controller setup or can be allowed to utilize all ports and all LUNs (APAL): the latter configuration provides the greatest flexibility and protection and is the only configuration to support iSCSI failover.

NEXSAN VOE OPTIMIZATION

To maximize performance in our VOE, we biased I/O caching on the SASBeast for random access. As a host consolidates the I/O streams of multiple VMs, sequential I/Os are interleaved, with the result that random access I/O becomes a key characteristic of a VOE host. We also set up an I/O multipathing scheme on the Nexsan SASBeast that allowed us to map any array volume to one or all Fibre Channel and iSCSI ports.

ASYMMETRIC MPIO

To provide a high-performance scale-out storage architecture, each SASBeast supports two internal controllers that are each capable of supporting two external FC ports and two external iSCSI ports. When a RAID array is created, it is assigned a master controller to service the array. If the SASBeast is placed in APAL mode, IT administrators can map any volume to all of the FC and iSCSI ports as a load balancing and failover scheme. In this situation, I/O requests directed to the other controller incur the added overhead needed to switch control of the array.

To garner the best I/O performance in a high-throughput, low-latency environment, a host must be able to implement a sophisticated load balancing scheme that distinguishes the two ports on the controller servicing the volume from the two ports on the other controller. The key is to avoid the overhead of switching controllers.

To meet this challenge, Nexsan implements Asymmetric Logical Unit Access (ALUA) when exporting target volumes. The Nexsan device identifies the paths that are active and optimized (i.e., paths that connect to a port on the controller servicing the device) and the paths that are active but not optimized. Nonetheless, for this sophisticated MPIO mechanism to work, it must be recognized by the host operating system that is using the SASBeast as a target.

Both Windows Server 2008, which uses a sophisticated MPIO driver module from Nexsan, and the vSphere 4.1 hypervisors, ESXi 4.1 and ESX 4.1, recognize the Nexsan SASBeast as an ALUA target. As a result, IT administrators can set an MPIO policy on any host running one of these operating systems that takes advantage of knowing which SAN paths connect to the controller servicing a logical drive.

VSPHERE 4.1 ASYMMETRIC MPIO DISCOVERY

When either an ESX or an ESXi 4.1 hypervisor discovers a volume exposed by the Nexsan SASBeast, it defaults to making only one path active for I/O. We changed this default to round-robin access. When this change was made, the new drivers in ESXi 4.1 did a SCSI inquiry on the FC volume and discovered that the Nexsan was an ALUA device. As a result, the hypervisor set up the four paths to the servicing controller as active optimized and the four paths to the other controller as non-optimized. Using ESX or ESXi 4.0, all eight paths would be set as equally active for I/O.

On Windows Server 2008, the base asymmetric access policy is dubbed Round Robin with Subset. This policy transmits I/O requests only to ports on the servicing controller. Should the servicing controller fail, Nexsan passes array servicing to the other controller in the SASBeast, and the host computer automatically starts sending I/O requests to the active ports on the new servicing controller.
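The Round Robin with Subset behavior can be sketched as a tiny path-selection model. The class, port names, and failover method below are illustrative inventions, not Nexsan's or Microsoft's actual implementation; the point is only that rotation happens over the optimized subset until a failover swaps the subsets:

```python
from itertools import cycle
from dataclasses import dataclass

@dataclass
class AluaTarget:
    """Toy model of ALUA path selection: round-robin only over the
    active/optimized subset (ports on the servicing controller)."""
    optimized: list      # paths to the controller that owns the array
    non_optimized: list  # paths to the partner controller

    def __post_init__(self):
        self._rr = cycle(self.optimized)

    def next_path(self):
        return next(self._rr)

    def failover(self):
        # The partner controller takes over the array, so its paths
        # become the active/optimized subset.
        self.optimized, self.non_optimized = self.non_optimized, self.optimized
        self._rr = cycle(self.optimized)

lun = AluaTarget(optimized=["C0-FC0", "C0-FC1"],
                 non_optimized=["C1-FC0", "C1-FC1"])
print([lun.next_path() for _ in range(4)])
# → ['C0-FC0', 'C0-FC1', 'C0-FC0', 'C0-FC1']  (controller 0 ports only)
lun.failover()
print(lun.next_path())
# → C1-FC0  (I/O now flows to the new servicing controller)
```

This is exactly the distinction the report draws between ESXi 4.0, which rotated over all eight paths, and ESXi 4.1, which restricts rotation to the optimized four.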

To understand how the driver changes in the new VMware hypervisors impact host I/O throughput, we monitored FC data traffic at the switch ports connected to the Nexsan SASBeast and the VOE host. We tested I/O throughput by migrating a server VM from a datastore created on the SAS RAID-6 array, which was serviced by controller 1, to a datastore created on the SAS RAID-5 array, which was serviced by controller 0. We repeated this test twice: once with the VOE host running ESXi 4.1 and once with the host running ESXi 4.0 Update 2.

Running either ESXi 4.0 or ESXi 4.1, the host server balanced all read and write requests over all of its FC ports; however, the I/O response patterns on the SASBeast were dramatically different for the two hypervisors. ESXi 4.1 transmitted I/O requests exclusively to the FC ports on the controller servicing a target volume. In particular, when the VOE host was running ESXi 4.1, the host directed reads only to controller 1, which serviced the SAS RAID-6 volume, and writes only to controller 0, which serviced the SAS RAID-5 volume. In contrast, when the host was running ESXi 4.0, read and write data requests were transmitted equally to all of the FC ports on both of the SASBeast controllers.

VSPHERE ALUA PERFORMANCE

Upgrading to vSphere 4.1 from vSphere 4.0 boosted I/O throughput by 20% for VMs resident on a datastore imported from the SASBeast. More importantly for IT OpEx costs, the gains in I/O throughput required only a simple change in the MPIO policy for each datastore imported from the SASBeast.

With all I/O requests directed equally across all FC ports under ESXi 4.0, throughput at each undistinguished port was highly variable as I/O requests arrived for both disks serviced by that controller and disks serviced by the other controller. As a result, I/O throughput averaged about 200MB per second.

On the other hand, with our VOE host running ESXi 4.1, I/O requests for a logical disk from the SASBeast were directed to, and balanced over, only the FC ports on the controller servicing that disk. In this situation, full-duplex reads and writes averaged 240MB per second as we migrated our VM from one datastore to another. For IT operations, I/O throughput under ESXi 4.1 for a VM accessing a SASBeast disk volume reached levels of performance—particularly with respect to IOPS—comparable to that of a physical server.


SASBeast Performance Spectrum

SCALE-UP AND SCALE-OUT I/O THROUGHPUT

We began our I/O tests by assessing the performance of logical disks from the Nexsan SASBeast on a physical server, which was running Windows Server 2008 R2. For these tests we created and imported a set of volumes from each of the three arrays that we had initially created.

We used Iometer to generate all I/O test workloads on our disk volumes. To assess streaming sequential throughput, we used large block reads and writes, which are typically used by backup, data mining, and online analytical processing (OLAP) applications. All of these datacenter-class applications need to stream large amounts of server-based data rapidly to be effective. As a result, we initially focused our attention on using the SASBeast in a 4Gbps Fibre Channel SAN.

We began our benchmark testing by reading data using large block I/O requests over two FC connections. Maximum I/O throughput varied among our three logical volumes by only 10 percent. During these tests, the fastest reads were measured at 554MB per second using volumes created on our RAID-5 SAS array. What’s more, the aggregate read throughput for all targets using two active 4Gbps ports exceeded the wire speed capability of a single 4Gbps FC port.

While we consistently measured the lowest I/O throughput on reads and writes using SAS RAID-6 volumes, the difference on writes between a SAS RAID-5 volume and a SAS RAID-6 volume was only about 7 percent—400MB per second versus 372MB per second. Using the Nexsan SASBeast, the cost of the added security provided by an extra parity bit, which allows two drives in an array to fail while the array continues processing I/O requests, is very minimal. This is particularly important for IT sites supporting mission-critical applications that require maximum availability and high-throughput performance.

“Using four Iometer worker processes—two reading and one writing on three RAID-5 SAS volumes and one writing on a RAID-5 SATA volume—we measured total full-duplex throughput from the SASBeast at 1GB per second.”

Fibre Channel Sequential Access I/O Throughput
Windows Server 2008 R2 — Round Robin with Subset MPIO on a 4Gbps FC SAN
(Read and write throughput: Iometer benchmark, 128KB blocks. Application throughput: Veeam Backup & Replication 4.1, parallel backup of four VMs.)

RAID & Disk Type   Read Throughput   Write Throughput   Application Throughput
RAID-5 SAS         554 MB/sec        400 MB/sec         245 MB/sec reading VM data
RAID-6 SAS         505 MB/sec        372 MB/sec
RAID-5 SATA        522 MB/sec        430 MB/sec         245 MB/sec writing backup image

A RAID-6 array provides an important safety net when rebuilding after a drive fails. Since a RAID-6 array can withstand the loss of two drives, the array can be automatically rebuilt with a hot-spare drive without risking total data loss should an unrecoverable bit error occur during the rebuild process. On the other hand, a backup of a degraded RAID-5 array should be run before attempting a rebuild. If an unrecoverable bit error occurs while rebuilding a degraded RAID-5 array, the rebuild will fail and data stored on the array will be lost.
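The rebuild-risk argument can be made concrete with a back-of-the-envelope calculation. The unrecoverable bit-error rate of 1 in 10^14 bits is an assumed, era-typical spec for SATA drives, and the 14-drive set size is also an assumption; neither figure comes from the test itself:

```python
# Probability of hitting at least one unrecoverable read error (URE)
# while rebuilding a degraded RAID-5 array: every surviving drive must
# be read end to end.
BER = 1e-14                 # assumed unrecoverable errors per bit read
drive_bits = 2e12 * 8       # one 2TB drive (matching the test drives), in bits
surviving_drives = 13       # assumed 14-drive RAID-5 set minus the failed drive

bits_read = drive_bits * surviving_drives
p_failure = 1 - (1 - BER) ** bits_read
print(f"chance of a URE during the rebuild: {p_failure:.0%}")
```

Under these assumptions the odds of a URE during a full rebuild approach nine in ten, which is why the report recommends backing up a degraded RAID-5 array before rebuilding, and why RAID-6's second parity stripe matters for large SATA sets.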

When performing writes, the variation in throughput between the disk volumes reached 15 percent. Interestingly, it was SATA RAID-5 volumes that consistently provided the best streaming performance for large-block writes. In particular, using 128KB writes to a SATA RAID-5 volume, throughput averaged 430MB per second. Given the low-cost and high-capacity advantages provided by 2TB SATA drives, the addition of exceptional write throughput makes the SASBeast an exceptional asset for Disk-to-Disk (D2D) backup operations and other disaster recovery functions.

To assess the upper I/O throughput limits of our Nexsan SASBeast for D2D and other I/O-intense applications, we used Iometer with multiple streaming read and write processes in order to scale total throughput. Using four Iometer worker processes—two reading and one writing on three RAID-5 SAS volumes and one writing on a RAID-5 SATA volume—we measured total full-duplex throughput from the SASBeast at 1GB per second.

ISCSI NICHE

On the other hand, streaming throughput on a 1GbE iSCSI fabric has a hard limit of 120MB per second on each connection. What’s more, to approach the upper end of that comparatively limited level of performance, IT must pay close attention to the selection of equipment. Most low-end switches, and even some Ethernet NICs, that are typically found at SMB sites do not support jumbo Ethernet frames or port trunking, which are important functions for maximizing iSCSI throughput. Finally, it is also important to isolate iSCSI data traffic from normal LAN traffic.
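The 120MB-per-second ceiling, and the value of jumbo frames, follow from simple wire arithmetic. The sketch below uses standard Ethernet, IP, and TCP header sizes and deliberately ignores iSCSI PDU overhead, so the results are best-case approximations:

```python
# Why 1GbE iSCSI tops out near 120MB/sec, and why jumbo frames help:
# compute payload efficiency per Ethernet frame at two MTU sizes.
WIRE = 125e6                       # 1Gbps = 125 MB/sec of raw bits
FRAME_OVERHEAD = 14 + 4 + 8 + 12   # Ethernet header, FCS, preamble, interframe gap
IP_TCP_HEADERS = 20 + 20           # minimum IPv4 + TCP headers inside the MTU

for mtu in (1500, 9000):
    wire_bytes = mtu + FRAME_OVERHEAD      # what each frame costs on the wire
    payload = mtu - IP_TCP_HEADERS         # what the application actually gets
    eff = payload / wire_bytes
    print(f"{mtu}-byte MTU: {eff:.1%} efficient, "
          f"~{WIRE * eff / 1e6:.0f} MB/sec best case")
```

Per-packet protocol processing, not just header bytes, is the other cost that jumbo frames and iSCSI HBAs reduce: a 9,000-byte MTU means roughly one-sixth as many frames per megabyte.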

For iSCSI testing, we utilized jumbo Ethernet frames—9,000 bytes rather than 1,500 bytes—with QLogic iSCSI HBAs, which offload iSCSI protocol processing and optimize throughput of large data packets. Our throughput results paralleled our FC fabric results: streaming throughput differed by about 2 to 5 percent among logical volumes created on SAS and SATA arrays. Once again, the highest read throughput was measured on SAS-based volumes and the highest write throughput was measured on SATA-based volumes.

iSCSI Sequential Access I/O Throughput
Windows Server 2008 R2 — Jumbo frames, iSCSI HBAs, and Round Robin MPIO on a 1GbE iSCSI SAN
(Read and write throughput: Iometer benchmark, 128KB blocks. Application throughput: Veeam Backup & Replication 4.1, 4 VM backups in parallel.)

RAID & Disk Type   Read Throughput                          Write Throughput                         Application Throughput
RAID-5 SAS         82 MB/sec (1 HBA), 146 MB/sec (2 HBAs)   83 MB/sec (1 HBA), 146 MB/sec (2 HBAs)
RAID-5 SATA        80 MB/sec (1 HBA), 145 MB/sec (2 HBAs)   85 MB/sec (1 HBA), 150 MB/sec (2 HBAs)   136 MB/sec writing backup image

PUSHING IOPS

In addition to streaming throughput, there is also a need to satisfy small random I/O requests. On the server side, applications built on Oracle or SQL Server must be able to handle large numbers of I/O operations that transfer small amounts of data using small block sizes from a multitude of dispersed locations on a disk. Commercial applications that rely on transaction processing (TP) include such staples as SAP and Microsoft Exchange. More importantly, TP applications seldom exhibit steady-state characteristics.

Typical TP loads for database-driven applications in an SMB environment average several hundred IOPS. These applications, however, often experience heavy processing spikes, such as at the end of a financial reporting period, when loads can rise by an order of magnitude to several thousand IOPS. That variability makes systems running TP applications among the most difficult for IT to consolidate and among the most ideal targets for virtualization. A well-managed VOE is capable of automatically marshaling the resources needed to support peak processing demands.

We fully expected to sustain our highest IOPS loads on SAS RAID-5 volumes and were not disappointed. In these tests, we used a mix of 80 percent reads and 20 percent writes. In addition, we limited the I/O request load with the restriction that the average I/O request response time had to be less than 30ms.
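It is worth noting how little raw bandwidth these small-block loads consume, which is why IOPS, not MB per second, is the binding metric here. Converting the measured 4KB IOPS figures to bandwidth:

```python
# Bandwidth actually moved by the measured small-block loads: even the
# best IOPS results consume only a few MB/sec, far below what a 4Gbps
# FC link can carry.
results = {                     # measured IOPS from the 80/20 Iometer runs
    "SAS RAID-5 (4KB)":  (2_330, 4 * 1024),
    "SAS RAID-6 (4KB)":  (1_970, 4 * 1024),
    "SATA RAID-5 (4KB)": (1_160, 4 * 1024),
}
for name, (iops, block) in results.items():
    print(f"{name}: {iops * block / 1e6:.1f} MB/sec")
# Top result: 2,330 IOPS x 4KB is under 10 MB/sec of actual data movement.
```

The link is therefore nearly idle in bandwidth terms; the drives' ability to position their heads fast enough under the 30ms response-time cap is what limits these tests.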

Using 4KB I/O requests—the size used by MS Exchange—we sustained 2,330 IOPS on a SAS RAID-5 volume, 1,970 IOPS on a SAS RAID-6 volume, and 1,160 IOPS on a SATA RAID-5 volume. Next, we switched our top-performing RAID-5 SAS volume from the FC to the iSCSI fabric and repeated the test. While performance dropped to 1,910 IOPS, it was still on a par with the FC results of our RAID-6 SAS volume and above the level that Microsoft suggests for supporting 2,000 mailboxes with MS Exchange.

Random Access Throughput
Windows Server 2008 — Iometer (80% Reads and 20% Writes), 30ms average access time
(Reference column: MS Exchange heavy use, 75% reads, 4KB I/O, 2,000 mailboxes.)

RAID & Disk Type   4Gbps FC, 1 logical disk             4Gbps FC, 2 logical disks            1GbE iSCSI, 1 logical disk           MS Exchange reference
RAID-5 SAS         2,330 IOPS (4KB), 2,280 IOPS (8KB)   4,318 IOPS (4KB), 4,190 IOPS (8KB)   1,910 IOPS (4KB), 1,825 IOPS (8KB)   1,500 IOPS
RAID-6 SAS         1,970 IOPS (4KB), 1,915 IOPS (8KB)                                        1,350 IOPS (4KB), 1,275 IOPS (8KB)
RAID-5 SATA        1,165 IOPS (4KB), 1,120 IOPS (8KB)                                        795 IOPS (4KB), 755 IOPS (8KB)

Next we ran our database-centric Iometer tests with 8KB I/O requests. In these tests, we doubled the amount of data being transferred; however, this only marginally affected the number of IOPS processed. With 8KB transactions, which typify I/O access with Oracle and SQL Server, we sustained 2,280 IOPS on a SAS RAID-5 volume, 1,915 IOPS on a SAS RAID-6 volume, and 1,120 IOPS on a SATA RAID-5 volume. Once again, when we connected our SAS RAID-5 volume over our iSCSI fabric, we measured a 20% drop in performance to 1,825 IOPS, which is more than sufficient to handle peak loads on most database-driven SME applications.
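The small effect of doubling the request size follows from a simple service-time model: positioning dominates transfer for small blocks. The 6ms positioning time and 100MB-per-second media rate below are assumed, illustrative single-spindle figures for a 15K rpm drive, not measurements from the SASBeast, and the array's caching and striping are ignored:

```python
# Why doubling the request size from 4KB to 8KB barely moved IOPS:
# per-request service time is dominated by head positioning, not transfer.
POSITION_MS = 6.0           # assumed seek + rotational latency per request
TRANSFER_MB_PER_S = 100.0   # assumed sustained media transfer rate

for kb in (4, 8):
    transfer_ms = kb / 1024 / TRANSFER_MB_PER_S * 1000
    service_ms = POSITION_MS + transfer_ms
    print(f"{kb}KB request: {service_ms:.3f}ms service time, "
          f"~{1000 / service_ms:.0f} IOPS per spindle")
```

Doubling the block adds well under 0.1ms of transfer time to a roughly 6ms request, so per-spindle IOPS changes by only about 1 percent, mirroring the small 4KB-to-8KB deltas measured above.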

To test transaction-processing scalability in a datacenter environment, we added another RAID-5 SAS volume on our second SASBeast controller. By using two volumes on our FC fabric, we increased IOPS performance by 85% for both 4KB and 8KB I/O requests. In our two-volume tests with SAS RAID-5 volumes, we sustained 4,318 IOPS and 4,190 IOPS with an average response time of less than 30ms on a mix of 80 percent reads and 20 percent writes.
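The scaling claim can be checked directly from the one-volume and two-volume figures reported above:

```python
# Scaling from one to two RAID-5 SAS volumes on the FC fabric (figures as reported above).
one_vol = {"4KB": 2330, "8KB": 2280}
two_vol = {"4KB": 4318, "8KB": 4190}

gains = {size: two_vol[size] / one_vol[size] - 1 for size in one_vol}
for size, gain in gains.items():
    print(f"{size}: {gain:.0%} more IOPS with a second volume")
# Roughly 85% (4KB) and 84% (8KB): close to, though short of, perfectly linear scaling.
```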

STRETCHING I/O IN A VOE

Within our VOE, we employed a test scenario using volumes created on the same Nexsan RAID arrays that we tested with the physical Windows server. To test I/O throughput, we used Iometer on a VM running Windows Server 2008. Given the I/O randomization that takes place as a VOE host consolidates the I/O requests from multiple VMs, we were not surprised to measure sequential I/O throughput at a level that was 20% lower than the level measured on a similarly configured physical server.

Nonetheless, at 420MB to 436MB per second for reads and 342MB to 380MB per second for writes, the levels of streaming throughput that we measured were 60 to 70 percent greater than the streaming I/O throughput levels observed when running high-end applications, such as backup, data mining, and video editing, on physical servers and dedicated workstations. As a result, IT should have no problem supporting streaming applications on server VMs or using packaged VM appliances with storage resources underpinned by a SASBeast.

VM I/O Throughput Metrics: Windows Server 2008 VM on an ESXi 4.1 host, 4Gbps FC SAN

| VM Datastore RAID & Disk Type | Sequential Throughput (streaming 128KB blocks) | Random Access (1 logical disk, 80% reads, 30ms average access time) | MS Exchange, heavy use (75% reads, 4KB I/O, 2,000 mailboxes) |
|---|---|---|---|
| RAID-5 SAS | 436 MB/sec reads / 377 MB/sec writes | 2,380 IOPS (4KB) / 2,011 IOPS (8KB) | 1,500 IOPS |
| RAID-6 SAS | 427 MB/sec reads / 342 MB/sec writes | 2,325 IOPS (4KB) / 1,948 IOPS (8KB) | |
| RAID-5 SATA | 420 MB/sec reads / 380 MB/sec writes | | |

On the other hand, we sustained IOPS levels for random-access I/O on RAID-5 and RAID-6 SAS volumes that differed by only 2 to 3 percent from the levels sustained on a physical server. These results are important for VM deployment of mission-critical database-driven applications, such as SAP. What's more, the ability to sustain 2,380 IOPS using 4KB I/O requests affirms the viability of deploying MS Exchange on a VM.

APPLICATIONS & THE BEAST

The real value of these synthetic benchmark tests with Iometer rests in the ability to use the results as a means of predicting the performance of applications. To put our synthetic benchmark results into perspective, we next examined full-duplex streaming throughput for a high-end IT administrative application: VOE backup.

What makes a VOE backup process a bellwether application for streaming I/O throughput is the representation of VM logical disk volumes as single disk files on the host computer. This encapsulation of VM data files into a single container file makes image-level backups faster than traditional file-level backups and enhances VM restoration. Virtual disks can be restored as whole images, or individual files can be restored from within the backup image.

More importantly, VMFS treats the files representing VM virtual disks, dubbed .vmdk files, analogously to CD images. Host datastores typically contain only a small number of these files, each of which can be accessed by only one VM process at a time. It is up to the OS of the VM to handle file sharing for the data files encapsulated within the .vmdk file.

This file-locking scheme allows vSphere hosts to readily share datastore volumes on a SAN. The ability to share datastores among hosts greatly simplifies the implementation of vMotion, which moves VMs from one host to another for load balancing. With shared datastores, there is no need to transfer data, which makes moving a VM much easier. Before shutting down a VM, its state must be saved. The VM can then be restarted and brought to the saved state on a new host.

Sharing datastores over a SAN is also very important for optimizing VM backups. For our VOE backup scenario, we utilized Veeam Backup & Replication 4.1 on a Windows server that shared access to all datastores belonging to hosts in our vSphere VOE.

Every VM backup process starts with the backup application sending a snapshot command to the host server to initiate a VM snapshot. In the snapshot process, the VM host server creates a point-in-time copy of the VM's virtual disk. The host server then freezes the vmdk file associated with that virtual disk and returns a list of disk blocks for that vmdk file to the backup application. The backup application then uses that block list to read the VM snapshot data residing in the VMFS datastore.
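The snapshot-driven flow just described can be sketched in miniature. Everything below is illustrative: the function names (snapshot_vm, freeze_vmdk, read_blocks) are hypothetical stand-ins for vStorage API operations, not actual VMware calls, and the "blocks" are toy data.

```python
# Illustrative sketch of the image-level backup flow described above.
# All names are hypothetical stand-ins, NOT real vSphere/vStorage API calls.

def snapshot_vm(host, vm):
    """Ask the host to create a point-in-time copy of the VM's virtual disk."""
    return {"vm": vm, "vmdk": f"{vm}.vmdk"}

def freeze_vmdk(host, snapshot):
    """Host freezes the vmdk and returns the list of disk blocks backing it."""
    return [f"block-{i}" for i in range(4)]  # toy block list

def read_blocks(datastore, block_list):
    """Backup application reads snapshot data straight from the VMFS datastore."""
    return [f"data({b})" for b in block_list]

def backup_vm(host, datastore, vm):
    snap = snapshot_vm(host, vm)           # 1. initiate the VM snapshot
    blocks = freeze_vmdk(host, snap)       # 2. freeze the vmdk, get its block list
    return read_blocks(datastore, blocks)  # 3. read snapshot data from the datastore

image = backup_vm("esxi-host", "vOperations", "mail-vm")
print(len(image), "blocks captured")
```

The key design point the sketch highlights is step 3: because the backup server reads the frozen blocks directly from the shared datastore, no data passes through the VM or its host during the copy.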


To implement the fastest and most efficient backup process, IT must ensure that all VM data will be retrieved directly from the VMFS datastores using the vStorage API. That means the Windows server running Veeam Backup & Replication must be directly connected to the VOE datastore. In other words, the Windows server must share each of the datastores used by VOE hosts.

Through integration with VDS, the Nexsan SASBeast makes the configuration and management of shared logical volumes an easy task for IT administrators. In addition to the proprietary Nexsan software, IT administrators can use Storage Manager for SANs to create new LUNs and manage existing ones on the SASBeast. While Storage Manager for SANs provided less fine-grained control when configuring LUNs, wizards automated end-to-end storage provisioning, from the creation of a logical volume on the SASBeast, to connecting that volume over either the FC or iSCSI fabric, to formatting that volume for use on our Windows server. As a result, the Nexsan hardware and software provided an infrastructure that enabled the rapid setup of an optimized environment for Veeam Backup & Replication.

To minimize backup windows in a vSphere VOE, Veeam Backup & Replication uses vStorage APIs to directly back up files belonging to a VM without first making a local copy. What's more, Veeam Backup & Replication recognizes disks with VMFS thin provisioning to avoid backing up what is in effect empty space. In addition, the Veeam software accelerates the processing of incremental and differential backups by leveraging Changed Block Tracking within VMFS. As a result, we were able to leverage the VOE awareness of advanced options within our software solution to back up four VMs at 245MB per second over our FC fabric and 136MB per second over our iSCSI fabric.

SIMPLIFIED VOE DATASTORE SHARING

Reflecting the interoperability of the Nexsan SASBeast, we were able to use Storage Manager for SANs, within the Server Manager tool, to create and manage volumes, such as the vOperations datastore used by our ESXi host. In particular, we were able to drill down on the vOperations volume and map it to our Windows Server 2008 R2 system via either our FC fabric or our iSCSI fabric.

Application Throughput: VOE Full Backup (RAID-5 SAS datastore, RAID-5 SATA backup repository)
ESXi 4.1 host, Windows Server 2008 R2, Veeam Backup & Replication 4.1

| Datastore Access | 4 VMs Processed Sequentially (optimal compression, deduplication) | 4 VMs Processed in Parallel (optimal compression) |
|---|---|---|
| FC SAN fabric | 164 MB/sec | 245 MB/sec |
| iSCSI SAN fabric | 97 MB/sec | 136 MB/sec |
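To put those backup rates in practical terms, a simple calculation estimates the backup window each fabric implies. The 500GB working set is our own illustrative assumption, not a figure from the tests:

```python
# Estimated wall-clock time to back up a hypothetical 500GB of VM data
# at the parallel-processing rates measured above.
DATA_GB = 500  # illustrative working set; not a figure from the tests

rates_mb_s = {"FC SAN fabric": 245, "iSCSI SAN fabric": 136}

minutes = {fabric: DATA_GB * 1024 / rate / 60 for fabric, rate in rates_mb_s.items()}
for fabric, m in minutes.items():
    print(f"{fabric}: ~{m:.0f} minutes")
# FC completes in roughly 35 minutes versus roughly an hour for iSCSI.
```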

GREEN CONSOLIDATION IN A VOE

Equally important for IT, the Nexsan SASBeast automatically generated significant savings in power and cooling costs throughout our testing. A key feature of the Nexsan management suite provides up to three power-saving modes. These modes are applied array-wide; however, the modes available to an array depend upon the disk drives used in the array. For example, the rotational speed of SAS drives cannot be slowed.

Once the disks enter a power-saving mode, they can be automatically restored to full speed, with only the first I/O delayed when the array is accessed. More importantly, over the period that we ran extensive tests of sequential and random data access, the power savings for each of our three disk arrays was remarkably uniform. The bottom line over our testing regime was an average power savings of 52 percent. Even more importantly, this savings was garnered over a period that saw each of the 42 disks in our SASBeast average 1,500,000 reads and 750,000 writes. Also of note for the mechanical design of the SASBeast: over our entire test there was not one I/O retry or media error.
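The 95 million total I/Os cited elsewhere in this report squares with the per-disk workload given here, as a quick cross-check shows:

```python
# Cross-check: total I/Os issued across the SASBeast during testing,
# from the per-disk averages reported above.
disks = 42
reads_per_disk = 1_500_000
writes_per_disk = 750_000

total = disks * (reads_per_disk + writes_per_disk)
print(f"{total:,} I/Os")  # 94,500,000 -- i.e., roughly the 95 million cited
```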


AUTOMAID SAVINGS

Throughout our testing, we configured the AutoMAID feature to seek power savings aggressively. AutoMAID provided fairly uniform power savings across all arrays, amounting to just over 50 percent of expected power consumption.


MEETING SLA METRICS

As companies struggle to achieve maximum efficiency, the top-of-mind issue for all corporate decision makers is how to reduce the cost of IT operations. Universally, the leading solutions center on resource utilization, consolidation, and virtualization. These strategies, however, can exacerbate the impact of a plethora of IT storage costs, from failed disk drives to excessive administrator overhead. As resources are consolidated and virtualized, the risk of catastrophic disaster increases as the number of virtual systems grows and the number of physical devices underpinning those systems dwindles.

While reducing OpEx and CapEx costs is the critical driver in justifying the acquisition of storage resources, those resources must first and foremost meet the performance metrics needed by end-user organizations and frequently codified in SLAs set up between IT and Line of Business (LoB) divisions. These metrics address all of the end user's needs for successfully completing LoB tasks. Typically these requirements translate into data throughput (MB per second) and data response (average access time) metrics.

In terms of common SLA metrics, our benchmarks for a single SASBeast reached levels of performance that should easily meet the requirements of most applications. What's more, the scale-out-oriented architecture of the SASBeast presents an extensible infrastructure that can meet even the requirements of many High Performance Computing (HPC) applications. With a single logical disk from a RAID-5 base array, we were able to drive read throughput well over 500MB per second and write throughput well over 400MB per second. This is double the throughput rate needed for HDTV editing. With a single SASBeast and four logical drives, we scaled total full-duplex throughput to 1GB per second.

Customer Value

“While Nexsan’s storage management software provides a number of important features to enhance IT productivity, the most important feature for lowering OpEx costs in a complex SAN topology is tight integration with VDS on Windows.”


NEXSAN SASBEAST FEATURE BENEFITS

1) Application-centric Storage: Storage volumes can be created from a hierarchy of drives to meet multiple application-specific metrics.

2) I/O Retries Minimized: During our testing we executed a total of 95 million read and write I/Os without a single retry.

3) Automatic Power Savings: AutoMAID technology can be set to place drives in a hierarchy of idle states to conserve energy, while delaying only the first I/O request upon returning to a normal state.

4) High Streaming Throughput: Running with SAS- and SATA-based volumes, application performance mirrored benchmark performance as backups of multiple VMs streamed total full-duplex data at upwards of 500MB per second.

5) Linear Scaling of IOPS in a VOE: Using random 4KB and 8KB I/O requests, typical of Exchange Server and SQL Server, VMs sustained IOPS rates for both I/O sizes that differed by less than 2.5 percent on a SAS RAID-5 volume and scaled by over 80% with the addition of a second target volume.


We measured equally impressive IOPS rates for random-access, small-block (4KB and 8KB) I/O requests. With one logical disk, we sustained more than 2,000 IOPS for both 4KB and 8KB I/O requests, which scaled to over 4,000 IOPS with two logical disks. To put this into perspective, Microsoft recommends a storage infrastructure that can sustain 1,500 IOPS for an MS Exchange installation supporting 2,000 active mailboxes.
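Microsoft's guideline of 1,500 IOPS for 2,000 mailboxes works out to 0.75 IOPS per mailbox; inverting that ratio against our measured 4KB results gives a rough, and deliberately simplistic, estimate of mailbox headroom (a real sizing exercise would also account for capacity, latency, and growth):

```python
# Rough mailbox-capacity estimate from the cited guideline of
# 1,500 IOPS per 2,000 active mailboxes (0.75 IOPS per mailbox).
IOPS_PER_MAILBOX = 1500 / 2000  # 0.75

measured = {"1 logical disk (4KB)": 2330, "2 logical disks (4KB)": 4318}

mailboxes = {config: int(iops / IOPS_PER_MAILBOX) for config, iops in measured.items()}
for config, count in mailboxes.items():
    print(f"{config}: headroom for ~{count:,} mailboxes")
# One volume supports roughly 3,100 mailboxes; two volumes roughly 5,700.
```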

BUILDING IN RELIABILITY

In addition to performance specifications, storage reliability, expressed as guaranteed uptime, is another important component of SLA requirements. Starting with the mechanical design of the chassis and moving through to the embedded management software, Nexsan's SASBeast provides a storage platform that promotes robust reliability and performance while presenting IT administrators with a storage resource that is easy to configure and manage.

In particular, the design of the SASBeast chassis proactively seeks to maximize the life span of disks by minimizing vibration and maximizing cooling. For IT operations, especially with respect to SLAs, that design helps ensure that storage performance guarantees concerning I/O throughput and data access will not be undermined by data access errors induced by the physical environment. By extending disk drive life cycles, there will be fewer drive failures for IT to resolve and fewer periods of degraded performance as an array is rebuilt with a new drive. During our testing of a SASBeast, openBench Labs generated a total of 95 million read and write requests without the occurrence of a single read or write retry.

SIMPLIFIED MANAGEMENT DRIVES SAVINGS

While Nexsan's storage management software provides a number of important features to enhance IT productivity, the most important feature for lowering OpEx costs in a complex SAN topology is tight integration with VDS on Windows. For IT administrators, VDS integration makes the Nexsan storage management GUI available from a number of the standard tools on a Windows server. In particular, IT administrators are able to use Storage Manager for SANs to implement full end-to-end storage provisioning. By invoking just one tool, an administrator can configure a Nexsan array, create and map a logical volume to a host server, and then format that volume on the host.

Nexsan also leverages very sophisticated SAN constructs to simplify administrative tasks. All storage systems with multiple controllers need to handle the dual issues of array ownership (active service processors) and SAN load balancing (active ports). Through Nexsan's implementation of Asymmetric Logical Unit Access (ALUA), host systems with advanced MPIO software can access a SASBeast and discern the subtle difference between an active port and an active service processor. As a result, an IT administrator is able to map a LUN to each FC port on each controller and allow the server's MPIO software to optimize FC port aggregation and controller failover. Using this scheme, openBench Labs was able to scale streaming reads and writes to four drives at upwards of 1GB per second.
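The arrangement just described, one LUN mapped to each FC port with MPIO spreading the load, can be illustrated with a toy round-robin path selector. This is a conceptual sketch only; real MPIO stacks also weigh path health, queue depth, and ALUA target port group state when choosing a path.

```python
from itertools import cycle

# Toy round-robin MPIO path selector: I/O requests alternate across the
# active (ALUA-optimized) FC ports of the owning controller, while the
# other controller's ports are held in reserve for failover.
class RoundRobinMPIO:
    def __init__(self, active_ports, standby_ports):
        self._active = cycle(active_ports)  # optimized paths (owning controller)
        self.standby = list(standby_ports)  # non-optimized paths for failover

    def next_path(self):
        return next(self._active)

mpio = RoundRobinMPIO(active_ports=["fc0", "fc1"], standby_ports=["fc2", "fc3"])
issued = [mpio.next_path() for _ in range(4)]
print(issued)  # alternates fc0, fc1, fc0, fc1 across the active ports
```

The design payoff is the one the text notes: because every port carries traffic for some LUN, aggregate fabric bandwidth scales with port count while failover remains automatic.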


REVOLUTIONARY GREEN SAVINGS

In addition to providing an innovative solution for reliability and performance, Nexsan's AutoMAID power management scheme automatically reduces SASBeast power consumption, which is a significant OpEx cost at large sites. Using three levels of automated power-saving algorithms, the SASBeast eliminates any need for administrator intervention when it comes to the green IT issues of power savings and cooling.

Nexsan automatically reduces the power consumed by idle disk drives via a feature dubbed AutoMAID™. AutoMAID works on a per-disk basis, but within the context of a RAID set, to provide multiple levels of power savings, from parking heads to slowing rotational speed, to further contain OpEx costs. While testing the SASBeast, openBench Labs garnered a 52% power savings on our arrays.
