
openBench Labs

Executive Briefing:

Driving Enterprise Performance with SAN Neutrality for Mid-Market Cloud Computing

Nexsan E-Series Performance & Functionality

March 20, 2012

ROADMAP TO THE VIRTUAL ENTERPRISE

With the introduction of Nexsan E-Series™ arrays, IT at mid-market sites can now satisfy their storage needs with arrays provisioned with a hierarchy of SSD, SAS, and SATA disk drives and configured to support multiple interconnections, including direct SAS attachment, Fibre Channel (FC), and iSCSI. More importantly, these mid-market arrays are tightly integrated with key business operating systems, including Windows Server, UNIX, Linux, and Mac OS, and easily meet the high-level performance requirements of enterprise-class applications with respect to data access (IOPS), throughput (MB per second), and capacity (price per GB) metrics.

With storage a primary driver of IT costs, managing storage resources, which costs far more per TB than acquiring storage, is no less a concern for IT than the performance of storage resources. The bottom line for controlling IT budgets rests much more on controlling operating expense (OpEx), which dominates Total Cost of Ownership (TCO) by an order of magnitude, than on capital expense (CapEx).

Among the new technologies introduced by Nexsan to reduce the TCO of E-Series arrays, Active Drawer Technology™ allows an IT administrator to pull out a single drawer of drives—typically to replace a known failed drive—while keeping the system online. The biggest contribution to lowering TCO, however, comes from Nexsan’s tight integration with server operating systems and hypervisors, which simplifies storage management.


UNDER TEST: ADVANCED CLOUD STORAGE

Nexsan E18 Enterprise Storage Array

1) Enhance Administrator Productivity: All local and remote E-Series arrays can be managed from a Web browser, Microsoft’s Windows-based storage management tools, or the Nexsan Storage Manager client.

2) Maximize Density and Reliability with Hierarchical Storage: The 2U E18 chassis supports any mix of SSD, SAS, and SATA drives mounted in a counter-rotating scheme to cancel vibrations, reduce head positioning errors, optimize thermal operations, and extend drive life.

3) Maximize Energy Savings: AutoMAID® (Massive Array of Idle Disks) technology automatically places drives into lower power states after a user-defined period of inactivity, which is useful for archive and backup-to-disk scenarios.

4) Maximize I/O Performance: Dual active-active RAID controllers support All Port All LUN (APAL) access enhanced by Asymmetric Logical Unit Access (ALUA), which biases I/O to the controller servicing a LUN.

Iometer Streaming I/O Benchmark: While streaming large-block (128KB) I/O using a single logical disk created on a SAS-based RAID-5 volume, throughput exceeded 1,500MB per second on reads and 980MB per second on writes on an FC SAN.

Iometer I/O Operations Benchmark: Random-access 8KB reads constrained by a 10ms average access time sustained 14,775 IOPS on a SAS-based volume and scaled to over 65,000 IOPS on an SSD-based volume.


NEXSAN Information Kit
• Corona-Norco case study
• E-Series data and specification sheets
• Storage Switzerland white paper

SUMMARY

To preserve management uniformity for an entire SAN fabric, all Nexsan E-Series storage arrays are VDS-compliant, which enables IT to harness a wide range of Microsoft and third-party storage management tools, including the Nexsan Storage Manager. Tight integration of the Nexsan Storage Manager with the Microsoft Management Console (MMC) and Virtual Disk Service (VDS) provides extended property sheets for logical disks on a Windows server. Drilling down on a logical disk within the MMC reveals the internal Nexsan names of both the LUN and the array on which the LUN was created. This level of detail is rarely matched by the often-outdated spreadsheets that many IT operations teams rely on.

What’s more, IT can manage Nexsan disk arrays, which are frequently configured at mid-market sites with external ports for both Fibre Channel (FC) and iSCSI SANs, using any VDS-compliant software. IT operations can utilize Microsoft’s Storage Manager for SANs and members of Microsoft’s System Center family of systems software just as easily as the Nexsan provisioning software. Specifically, we were able to treat FC and iSCSI SAN fabrics as simple transport options for LUNs, which simplifies integrating storage LUNs created on Nexsan arrays among SAN fabrics.


WINDOWS INTEGRATION

Nexsan’s VDS integration with Windows allows administrators to configure and manage multiple storage resources using the Windows Storage Manager for SANs or Nexsan’s array management utility. Within a single end-to-end context, administrators can configure a volume on an array, map the volume to a host, and then format a logical disk on the host server. For provisioning on iSCSI and FC fabrics, Storage Manager for SANs listed a Nexsan array as units on both fabrics to deal with differences in provisioning wizards. We also used vCenter to create virtual HBAs to gain end-to-end visibility for VMs on our SAN. Once we provisioned a VM with a virtual HBA, we were able to use Nexsan Storage Manager to independently manage storage access rights and to zone the VM within the SAN fabric.
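
To make that end-to-end flow concrete, the Python sketch below models the three provisioning steps. The NexsanArray class, its methods, and the WWPN value are illustrative stand-ins, not a real Nexsan or Microsoft VDS API.

    from dataclasses import dataclass, field

    @dataclass
    class Lun:
        name: str
        size_gb: int
        raid_level: int
        mapped_hosts: list = field(default_factory=list)

    class NexsanArray:
        """Illustrative stand-in for an array fronted by a VDS hardware provider."""
        def __init__(self, name):
            self.name = name
            self.luns = {}

        def create_lun(self, name, size_gb, raid_level=5):
            lun = Lun(name, size_gb, raid_level)
            self.luns[name] = lun
            return lun

        def map_lun(self, lun, host_wwpn):
            lun.mapped_hosts.append(host_wwpn)

    def provision(array, host_wwpn, name, size_gb):
        # One tool, three steps: create the volume on the array, map it to
        # the host, then format the logical disk on the host (elided here).
        lun = array.create_lun(name, size_gb)
        array.map_lun(lun, host_wwpn)
        return lun

    e18 = NexsanArray("E18-01")
    lun = provision(e18, "21:00:00:24:ff:30:aa:01", "SQL-Data", 500)
    print(f"{lun.name}: {lun.size_gb}GB RAID-{lun.raid_level} -> {lun.mapped_hosts}")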

VOE SUPPORT ADVANTAGES

Nexsan’s simplified storage management and task automation is particularly important when implementing a Virtual Operating Environment (VOE), such as vSphere 5. A VOE introduces multiple levels of abstraction that complicate important IT administration functions. At mid-market sites, VOE management problems are often exacerbated when advanced point solutions, which are typically packaged as extra-cost storage options, introduce incompatibilities among interdependent resources.

To facilitate successful VOE implementations at mid-market sites, Nexsan provides a storage infrastructure that can leverage all native VOE solutions dealing with intricate hypervisor architecture. In addition, Nexsan arrays efficiently support the characteristic I/O patterns that distinguish VOE host servers. As a result, Nexsan infrastructure frees IT to use the comprehensive virtualization features of their VOE to provision resources for VMs, commission and decommission VM applications, and migrate VMs among multiple hosts in real time to meet changing resource demands.

The increasing number of VMs running on VOE hosts, along with automated movement of VMs to balance host loads, serves to randomize I/O requests as a VOE host consolidates the data streams of multiple VMs. To handle these randomized I/O patterns, Nexsan provides IT administrators with a number of options for sophisticated load balancing via multipath I/O (MPIO). Specifically, each E-Series array can allow hosts to utilize all ports and all LUNs (APAL), a configuration that provides IT administrators with the greatest flexibility and support for iSCSI failover.

While APAL mode provides IT administrators with configuration flexibility, it can also introduce a performance issue. When a RAID array is created, it is assigned a master controller to service the array. If an IT administrator maps every volume to all of the FC and iSCSI ports as a load balancing and failover scheme, I/O requests not directed to the master controller incur the added overhead needed to switch control of the array.


VSPHERE ASYMMETRIC MPIO DISCOVERY

When our vSphere host discovered a volume exposed by an E18 array, the hypervisor identified whether the LUN was SSD-based and whether it was configured as an ALUA device. For our configuration, which featured two FC ports on each E18 controller and two FC ports on the host, the hypervisor set up the four paths to the servicing controller as “active optimized” and the four paths to the other controller as “non-optimized.”

To avoid the overhead of switching controllers and garner the best I/O performance, vSphere hosts and Windows servers implement a sophisticated load balancing scheme that distinguishes ports on the controller servicing a logical volume from ports on the other controller. To complement this scheme, Nexsan implements Asymmetric Logical Unit Access (ALUA) when exporting target volumes. Each E-Series array distinguishes paths that are active and optimized (i.e., paths that connect to a port on the controller servicing the device) from paths that are active but not optimized. As a result, IT administrators can set an MPIO policy on any vSphere host or Windows server to automatically favor connections to the controller servicing a logical drive.
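
The minimal Python sketch below illustrates the selection logic such an MPIO policy applies. The Path class and port names are illustrative, though the active/optimized and active/non-optimized states mirror the standard ALUA access states the array reports.

    from itertools import cycle

    class Path:
        def __init__(self, host_port, array_port, alua_state):
            self.host_port = host_port
            self.array_port = array_port
            # ALUA access state reported by the array for this path
            self.alua_state = alua_state

    def usable_paths(paths):
        # Favor ports on the controller that services the LUN; fall back to
        # the non-optimized controller only if every optimized path is gone.
        optimized = [p for p in paths if p.alua_state == "active/optimized"]
        return optimized or [p for p in paths if p.alua_state.startswith("active")]

    # Two host HBA ports x two ports per controller = four paths per
    # controller, matching the discovery described in the sidebar above.
    paths = (
        [Path(f"hba{h}", f"ctlA-p{p}", "active/optimized")
         for h in (0, 1) for p in (0, 1)] +
        [Path(f"hba{h}", f"ctlB-p{p}", "active/non-optimized")
         for h in (0, 1) for p in (0, 1)]
    )

    rr = cycle(usable_paths(paths))    # round-robin over optimized paths only
    for _ in range(4):
        p = next(rr)
        print(f"I/O via {p.host_port} -> {p.array_port}")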

For a vSphere environment, Nexsan E-Series arrays support APIs for HBA and SAN virtualization, including N_Port ID Virtualization (NPIV). NPIV was originally designed to provide a means to zone VMs at an FC switch, which is all too frequently a convoluted process. With support for NPIV within ESX and ESXi hosts, administrators can create virtual FC HBAs for VMs resident on a host server provisioned with physical FC HBAs.


VMWARE SAN INTEGRATION

Using vCenter, we were able to view the SAN fabric relationships underpinning a VM, including the Nexsan array exporting the LUN used for the VMFS datastore supporting the VM. In addition, we were able to provision the VM with virtual FC HBAs, including virtual node and port WWN IDs, with which we identified the VM as a host within the Nexsan Storage Manager. As a result, we were able to independently manage storage access rights for the VM.

In practice, a unique identity and World Wide Port Name (WWN), which represents a virtual HBA installed on a VM, provides the means to identify and track SAN traffic associated with the VM.

Without support for virtual HBAs, end-to-end SAN management necessarily stops at the host server. There is no other way to identify or manage a VM within a SAN. With support for NPIV, Nexsan arrays extend SAN fabric management from a VM all the way to the storage resource used in provisioning the VM.
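
As an illustration of why a virtual WWPN makes a VM visible to fabric management, here is a hedged Python sketch. The WWPN prefix and the FabricZone class are hypothetical, standing in for switch-side zoning and array-side host mapping.

    import secrets

    def new_virtual_wwpn():
        # WWPNs are 8 bytes; the 2f:ee vendor prefix here is made up.
        return "2f:ee:" + ":".join(f"{b:02x}" for b in secrets.token_bytes(6))

    class FabricZone:
        """Stand-in for an FC switch zone keyed on WWPN membership."""
        def __init__(self):
            self.members = set()

        def add(self, wwpn):
            self.members.add(wwpn)

    vm_wwpn = new_virtual_wwpn()    # virtual HBA identity carried by the VM
    zone = FabricZone()
    zone.add(vm_wwpn)               # zone the VM itself, not its physical host
    print(f"VM zoned by virtual WWPN {vm_wwpn}: {vm_wwpn in zone.members}")

Because the virtual WWPN travels with the VM rather than with the physical HBA, zoning and access rights keyed on it survive VM migration between hosts.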

E18 PERFORMANCE SPECTRUM

We began our testing of the Nexsan E18 array by running synthetic benchmarks that stressed sequential data throughput using large-block (128KB) I/O requests. We used Iometer to generate sequential large-block reads and writes, which are characteristic of backup, data mining, and online analytical processing (OLAP) applications.

In addition, the growing use of customer-facing video data has introduced a new class of video content creation applications—dubbed Non-Linear Editing (NLE)—which stream large I/O blocks sequentially. More importantly, NLE software depends entirely on strict minimum storage performance to prevent frame drop.


SSD STREAMING HD VIDEO EDITING BENCHMARK

As a real-world test of I/O streaming, we ran an NLE content creation scenario in which we simultaneously read and wrote 36,000 frames of HD video using the Panasonic 1080i60 HD video format. Using the Panasonic format, our video file consumed 16GB, required 100Mbps throughput for playback, and imposed a minimum 450Mbps throughput heuristic on professional NLE systems. Using a hybrid storage strategy, NLE functions were performed on volumes provisioned on a single SSD-based array in an E18, while video distribution utilized volumes from a SATA array. We launched multiple NLE processes to generate simultaneous streams of reads and writes and sustained read throughput of 242MB per second and write throughput of 212MB per second on each of two user processes performing reads and writes in opposing order.

As a result, NLE storage configurations typically feature both SSD and HDD arrays to ensure that multiple editing streams will scale within a user pool while continuing to meet strict performance and space requirements.
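
A quick back-of-envelope check in Python ties the sidebar’s numbers together: at 1080i60’s 30 frames per second and the format’s 100Mbps playback rate, 36,000 frames come out close to the quoted 16GB file size.

    frames = 36_000
    fps = 30                      # 1080i60: 60 interlaced fields/s = 30 frames/s
    duration_s = frames / fps     # 1,200s = 20 minutes of video

    playback_mbps = 100           # playback rate, megabits per second
    file_gb = playback_mbps * 1e6 * duration_s / 8 / 1e9
    print(f"{duration_s / 60:.0f} min of video, ~{file_gb:.0f}GB on disk")  # ~15GB

    # The 450Mbps minimum-throughput heuristic covers 4-5 playback-rate streams.
    print(f"streams covered by 450Mbps: {450 / playback_mbps:.1f}")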

To assess performance capabilities with respect to data throughput and data access, we set up four E18 arrays: two arrays were configured with external FC connections and two arrays were configured for a 10GbE iSCSI SAN. In addition, to test direct SAS attachment, a configuration favored in High Performance Computing (HPC) applications, we set up an E60 array. In our direct SAS attachment scenario, we utilized two external 24Gbps SAS connections, each of which featured four 6Gbps lanes for data traffic.

In order to maximize I/O performance during our tests, we used two Dell R610 and two Dell R710 PowerEdge servers running Windows Server 2008 R2. For FC connections, we installed QLogic QLE 2562 HBAs, and for 10GbE iSCSI traffic, we installed Intel X520 HBAs.

More importantly, the Dell R610 and R710 PowerEdge servers used in testing the Nexsan E-Series storage arrays were built on the Tylersburg and Westmere chipsets. The Tylersburg I/O Hub provides PCI Express connectivity to a CPU. The chipset implements the Intel QuickPath Interconnect (QPI), a point-to-point processor interconnect patterned on the crossbar architecture used in mainframes and supercomputers. QPI replaces the older Front Side Bus (FSB) of Xeon-based systems with a switch that provides separate parallel lanes that enable the CPU to transmit and receive data at the same time. Current QPI implementations are capable of sustaining up to 6.4 gigatransfers per second (GT/s).
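
As a rough sanity check on what 6.4 GT/s means for bandwidth, QPI carries 2 bytes of payload per transfer in each direction, and the link is full duplex:

    # QPI bandwidth at 6.4 GT/s: 2 bytes of payload per transfer per
    # direction on a full-duplex link.
    gt_per_s = 6.4e9
    bytes_per_transfer = 2
    per_direction_gb_s = gt_per_s * bytes_per_transfer / 1e9
    print(f"{per_direction_gb_s:.1f} GB/s per direction, "
          f"{2 * per_direction_gb_s:.1f} GB/s total")   # 12.8 and 25.6 GB/s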

Servers capable of sustaining a high IOPS rate are essential for leveraging Nexsan E-Series arrays provisioned with solid-state drives (SSDs), which utilize enterprise-grade SLC NAND flash memory. With QPI architecture, both the I/O throughput and IOPS performance of a server scale dramatically higher. While the performance capabilities of QPI-based servers in combination with Nexsan E-Series arrays are especially important in a VI environment, a single user process is simply incapable of reaching the throughput or IOPS potential of an E18 array.

A single instance of any application cannot generate enough I/O to maximize command queue or cache performance. When working with HDD-based arrays, a full command queue, which can be freely reordered, is essential for minimizing rotational latency. Even when working with SSD-based logical volumes, such as in our NLE content creation scenario, the key to maximizing the value of an E-Series array is the presence of multiple independent user processes.

We began our examination of E18 performance with an assessment of sequential read and write throughput. For these tests, we used Iometer to create synthetic I/O workloads that artificially generated enough asynchronous large-block read and write requests to keep all command queues full throughout the test of a logical disk.
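
The POSIX Python sketch below shows the shape of such a measurement. A real harness like Iometer keeps a deep queue of overlapped asynchronous requests, which this simplified synchronous loop only approximates; the device path is a placeholder for the logical disk under test.

    import os
    import time

    BLOCK = 128 * 1024          # 128KB requests, matching the tests above
    DURATION_S = 10

    def stream_read(path):
        # Iometer keeps many overlapped requests in flight; this loop only
        # sketches the sequential access pattern and the measurement itself.
        fd = os.open(path, os.O_RDONLY)
        size = os.lseek(fd, 0, os.SEEK_END)
        offset, done = 0, 0
        start = time.monotonic()
        while time.monotonic() - start < DURATION_S:
            buf = os.pread(fd, BLOCK, offset)     # sequential 128KB read
            done += len(buf)
            offset = (offset + BLOCK) % size      # wrap at end of device
        elapsed = time.monotonic() - start
        os.close(fd)
        print(f"{done / elapsed / 1e6:.0f} MB/s sequential read")

    # stream_read("/dev/sdX")    # point at the logical disk under test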


We started our assessment with an examination of sequential throughput using logical disks provisioned on a single array within an E18. On each E-Series system, we used all of the standard default settings for general-purpose I/O in APAL mode. We set no specialized cache performance settings to favor streaming or random I/O and did not disable any fault tolerance mechanisms, such as cache mirroring, to minimize overhead. Using a standard production configuration, each logical disk had two optimal paths that connected the two ports on its master controller with each of the HBA SAN fabric ports on the test server.

This scenario created six baseline tests: one for each combination of disk type (SSD, SAS, and SATA) and SAN type (8Gbps FC and 10GbE iSCSI). In addition, we created a seventh test for direct-attached storage (DAS) using two 24Gbps SAS connections (each cable had four 6Gbps lanes) to an E60 array, which was populated with SATA drives.

Using logical drives provisioned from one array and one controller on an FC SAN fabric, we consistently measured read throughput of around 1,500MB per second with SAS-based logical volumes and 1,400MB per second with SATA-based volumes. Running the same tests on an iSCSI fabric, which was set for standard-size rather than jumbo IP packets, produced similar results that were typically within 90% of the throughput measured on the FC SAN fabric.

While SSD drives are typically associated with dramatic acceleration of random-access I/O transactions, we observed important differences in sequential throughput scalability using SSD-based volumes in our NLE application scenario. The characteristics of sequential throughput with SSD-based volumes have important ramifications for application scaling.


Nexsan E18 Sequential I/O Performance
Iometer 128KB Read and Write Requests

SAN Fabric,       Sequential Reads       Sequential Reads        Sequential Writes
Disk Type         1 LUN, 1 Controller    2 LUNs, 2 Controllers   1 LUN, 1 Controller

FC SAN, SSD*      1,036 MB/sec           1,932 MB/sec            580 MB/sec
iSCSI SAN, SSD*   922 MB/sec             1,719 MB/sec            516 MB/sec
FC SAN, SAS       1,525 MB/sec           2,176 MB/sec            980 MB/sec
iSCSI SAN, SAS    1,355 MB/sec           1,935 MB/sec            872 MB/sec
FC SAN, SATA      1,405 MB/sec           2,164 MB/sec            1,004 MB/sec
iSCSI SAN, SATA   1,219 MB/sec           1,849 MB/sec            854 MB/sec
SAS DAS, SATA**   1,450 MB/sec           3,015 MB/sec            1,012 MB/sec

*Using current production firmware. Pending beta firmware eliminates double cache flushes.
**E60 array

For a command queue depth of 4 or fewer, logical volumes backed by an SSD-based array provided a distinct advantage in sequential throughput. We also measured less throughput variability with respect to command queue length for SSD-based logical volumes. Less performance variability makes it easier for IT to plan, scale, and support applications that require specific levels of service, such as video creation.

On the other hand, as the command queue filled to 8 or more outstanding requests, streaming I/O throughput for both SAS- and SATA-based logical volumes surpassed that of SSD-based volumes with the current E-Series firmware. A new beta version of the E-Series controller firmware, however, eliminates the duplicate issuing of SSD cache flush commands that are now handled by firmware in all qualified SSD drives. In I/O access tests, this change in overhead doubled our measurements for sustained IOPS rates.

Using multiple logical volumes associated with arrays on each controller in the E18, sequential throughput scaled across all drive types over both FC and iSCSI SANs. We regularly measured total sequential throughput for all scenarios within a tight range from 1,800MB per second to 2,200MB per second with no specialized tuning for SAN fabrics or data stream characteristics.


NEXSAN SAS STREAMING I/O

Using a QLogic FC switch and Nexsan management software, we measured the highest sequential throughput with multiple SAS drives, as read throughput reached 2,164MB per second using volumes from both controllers.

The only exception within our sequential throughput tests centered on our direct-attached SAS connection, with its significantly higher bandwidth than either 8Gbps FC or 10GbE iSCSI. Using multiple SATA-based arrays, we were able to stream sequential reads at 3,015MB per second.

This level of performance is four to five times greater than what applications that rely on data streaming will typically generate. This headroom on the E18 is essential for scaling multiple processes.

For backup in particular, when performing 128KB writes, a SATA RAID-5 volume sustained an average streaming throughput of 1,005MB per second. Given the low-cost and high-capacity advantages provided by 2TB SATA drives, this level of write throughput makes the E18 an exceptional asset for Disk-to-Disk (D2D) backup. Moreover, Backup Exec 2012 and Veeam Backup & Replication v6 both utilize pools of backup proxy servers to run multiple VM data protection processes in parallel, which is completely dependent on the sequential throughput headroom that an E18 provides.
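
A back-of-envelope calculation suggests why that headroom matters: with one backup chain generating roughly 500MB per second of aggregate traffic (see the sidebar below) against roughly 2,200MB per second of measured two-controller streaming throughput, about four chains can run in parallel before the array saturates. The figures in this Python snippet are drawn from the measurements above.

    # Measured figures from the tests above: ~2,200MB/s of two-controller
    # streaming throughput vs. ~500MB/s of aggregate traffic per backup chain.
    array_mb_s = 2_200
    per_chain_mb_s = 500
    print(f"parallel backup chains supported: {array_mb_s // per_chain_mb_s}")  # 4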

In addition to streaming data in large blocks, there is also a need to satisfy small discrete I/O requests.

LOAD AGGREGATION IN VM BACKUP PROCESSES

Launching a SAN-based VM backup from a central backup server that delegated work to a proxy server placed an aggregate data throughput load from the central backup server, the vSphere 5 host, and the proxy server of around 500MB per second on a Nexsan E18. As a result, the extended I/O headroom of the E18 was essential for increasing the number of backup proxy servers working in parallel.


Server applications built on databases, such as Oracle and SQL Server, generate large numbers of discrete I/O operations that transfer a small amount of data at a time. Commercial applications that rely on transaction processing (TP) include such staples as SAP and Microsoft Exchange.

TP applications seldom exhibit steady-state characteristics. In a typical mid-market environment, TP loads for applications such as SAP average only several hundred IOPS. Nonetheless, these applications often experience heavy processing spikes, such as at the end of a financial period, when the TP load can reach several thousand IOPS. That variability makes TP applications ideal targets for virtualization, since a well-managed VI can be configured to automatically marshal resources and position VMs on hosts to support peak processing demands.

To simulate real-world database performance, we generated random 8KB reads. We also constrained our results with a global requirement that the average I/O response time be less than 10ms. A high IOPS load without a restriction on average response time creates highly unrealistic results. More importantly, to stress the Nexsan array, we had to make sure that we were timing only I/O requests and responses on the server. This necessitated generating thousands of block addresses outside of the timing loop using our oblLoad benchmark.
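
While oblLoad itself is an openBench Labs tool, a minimal Python analogue of the measurement looks like the sketch below: aligned block addresses are generated before the timing loop, and the run is judged against the 10ms mean response time gate. The device path is a placeholder.

    import os
    import random
    import time

    BLOCK = 8 * 1024            # 8KB requests, as in the TP tests
    REQUESTS = 50_000
    MAX_AVG_MS = 10.0           # global response-time constraint

    def random_read_iops(path):
        fd = os.open(path, os.O_RDONLY)
        size = os.lseek(fd, 0, os.SEEK_END)
        # Generate aligned block addresses outside the timing loop, so only
        # I/O requests and responses are measured.
        offsets = [random.randrange(0, size - BLOCK) // BLOCK * BLOCK
                   for _ in range(REQUESTS)]
        start = time.monotonic()
        for off in offsets:
            os.pread(fd, BLOCK, off)
        elapsed = time.monotonic() - start
        os.close(fd)
        avg_ms = elapsed / REQUESTS * 1_000
        verdict = "OK" if avg_ms < MAX_AVG_MS else "exceeds constraint"
        print(f"{REQUESTS / elapsed:,.0f} IOPS, {avg_ms:.2f}ms avg ({verdict})")

    # random_read_iops("/dev/sdX")   # logical volume under test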


NEXSAN TRANSACTION PROCESSING

To analyze the IOPS performance of an E18 array, we ran our oblLoad benchmark while observing the E18 with the Nexsan management console. Key metrics for this benchmark include the initial performance of a single worker process, which is indicative of the best performance that a single user can expect, and the total throughput for multiple processes constrained by a fixed average access time (5ms in our tests). A single oblLoad process on a single logical SATA-based volume typically sustained about 400 IOPS, while an SSD-based volume sustained over 4,000 IOPS using current firmware. With multiple processes, the SATA-based volume sustained a maximum of 9,000 IOPS, while the SSD-based volume was able to sustain over 33,000 IOPS with an average access time under 2ms.

IOPS benchmarking of the Nexsan E18 was characterized by the greatest variations in I/O performance that we measured. We conducted all of our IOPS tests on an FC SAN using a single 1TB logical volume to minimize cache effects from both the server and the large controller caches on the Nexsan E18.

Not surprisingly, we measured the lowest sustained IOPS rate with a SATA-based logical volume at 9,000 IOPS. A SAS-based array with 15,000 RPM disks sustained 14,775 IOPS, which was about 64% greater than the transaction load sustained by the SATA-based array. Testing an SSD-based volume demonstrated how a small amount of overhead can have a dramatic effect on IOPS performance. With the current production software, we were able to sustain 33,000 IOPS with an SSD-based volume. With new firmware that no longer issues cache flush commands to qualified SSD drives, all of which manage cache flushes with their own embedded firmware, we were able to sustain 67,500 IOPS.

IOPS FOR CLOUDS

To test a real-world IT scenario, we configured a vSphere 5 environment with SAS-based logical volumes supporting datastores for VMs, high-capacity SATA drives with high streaming write throughput to support VM backup using Veeam Backup & Replication v6, and solid-state drive (SSD) volumes to accelerate read and write IOPS in workloads that utilize random I/O requests.

In our vSphere environment, we configured a modest VM with 2 CPUs and 4GB of memory. That VM proved capable of supporting an Exchange email server that serviced 400 user mailboxes under heavy usage—defined as processing one message per mailbox per second.

Using Veeam Backup & Replication, we were able to completely restore our Exchange server VM by running the VM directly from a backup file. There was no need to provision a datastore, rehydrate compressed or deduplicated data, or even restore VMFS disk files. In just 19 seconds, we were able to start and publish the VM.

At the start of a Veeam Instant Recovery, pointers to the files contained in the backup file were set up in a specific directory on the Veeam Backup server. That directory was exported via NFS to a vSphere host as read-only files representing the VM. As a result, it was necessary to redirect writes for the VM to store data changes.


Nexsan E18 IOPS
oblLoad 8KB Random Read Requests

SAN Fabric,    Single User Sustained IOPS      Maximum Sustained IOPS
Disk Type      1 LUN, 1 Controller,            1 LUN, 1 Controller,
               10ms max average access time    10ms max average access time

FC SAN, SSD    4,400 IOPS* / 9,000 IOPS        33,000 IOPS* / 67,500 IOPS
FC SAN, SAS    590 IOPS                        14,775 IOPS
FC SAN, SATA   395 IOPS                        9,000 IOPS

*Using current production firmware. Pending beta firmware eliminates double cache flushes.

To test the ability to improve email transaction performance, openBench Labs recovered the Exchange server VM twice: first with new data redirected to a VM snapshot on a SAS-based volume and then to a snapshot on an SSD-based volume.

By using a small SSD-based volume to store data changes, the IOPS performance of the recovered VM surged past the performance of the original VM, which used SAS-based disk volumes exclusively. With the snapshot located on a SAS-based volume, IOPS performance dropped about 50% during the initial recovery phase and then dropped to about 25% during the data migration and consolidation phase. Using an SSD-based volume for the redirected data, the IOPS performance of our Exchange server tripled to 1,200 email messages per second. Finally, during a consolidation with Storage vMotion, performance remained at a level that was still 150% greater than the IOPS performance of the initial configuration.


SSD IOPS ENHANCEMENT

We used a VM running Exchange to test booting a VM directly from a backup file. In five seconds, a directory on our Veeam Backup server was populated with pointers to the contents of the VM backup file and exported to a vSphere host as read-only files in an NFS datastore. To capture new data, Veeam added a disk snapshot and CBT files to a directory on an SSD-based datastore. With data updates to our backup file redirected to a VM snapshot on an SSD-based datastore, IOPS tripled during the first phase of our recovery test, and the VM supported 50% higher throughput than our production configuration during a Storage vMotion consolidation.

Specifically, we were able to support 600 email transactions per second during the restore process.

MEETING SLA METRICS

As companies struggle to achieve maximum efficiency, the top-of-mind issue for all corporate decision makers is how to reduce the cost of IT operations. Universally, the leading solutions center on resource utilization, consolidation, and virtualization. These strategies, however, can exacerbate the impact of a plethora of IT storage costs, from failed disk drives to excessive administrator overhead. As resources are consolidated and virtualized, the risk of catastrophic disaster increases as the number of virtual systems grows and the number of physical devices underpinning those systems dwindles.

While reducing OpEx and CapEx costs is a critical driver in justifying the acquisition of storage resources, those resources must first and foremost meet the performance metrics needed by end-user organizations and frequently codified in SLAs set up between IT and Line of Business (LoB) divisions. These metrics address all of the end user’s needs for successfully completing LoB tasks. Typically, these requirements translate into data throughput (MB per second) and data response (average access time) metrics.

In terms of common SLA metrics, our benchmarks for a single E18 array reached levels of performance that should easily meet the requirements of most applications.

Using logical volumes provisioned from a single SAS-based RAID-5 array, we were able to drive read throughput well over 1,500MB per second and write throughput up to 1,000MB per second.

We measured equally impressive IOPS rates for random-access small-block I/O requests. Using a SAS-based datastore, we were able to run Exchange on a VM with 400 active mailboxes that processed 1 message per second per mailbox. With one logical SSD-based disk, we sustained more than 67,000 IOPS for 8KB reads while maintaining an average access time of less than 5ms.

BUILDING IN RELIABILITY

In addition to performance specifications, storage reliability, expressed in guaranteed uptime, is another important component of SLA requirements. Starting with the mechanical design of the chassis and moving through to the embedded disk array software, the Nexsan E18 provides a storage platform that promotes robust reliability and performance.


Nexsan E18 Feature Benefits

1) Application-centric Storage: Storage volumes can be created from SAS, SATA, and SSD drives to meet multiple application-specific requirements.

2) I/O Retries Minimized: During our testing, we executed a total of 95 million read and write I/Os without a single retry.

3) Automatic Power Savings: AutoMAID technology can be set to place drives into lower power states to conserve energy, which is very useful for backup-to-disk and archive workloads.

4) High Streaming Throughput: Running with SAS- and SATA-based volumes, benchmark performance demonstrated significant application scaling potential as backups of multiple VMs streamed total full-duplex data at upwards of 500MB per second.

5) High IOPS Loads in a VOE: Using random 8KB read requests—typical of SQL Server and Oracle—we sustained 14,775 IOPS on SAS drives and easily supported an Exchange mail server on a VM configured with 400 mailboxes that processed 1 email transaction per mailbox per second, a workload biased toward writes.

The E18 also presents IT administrators with a storage resource that is easy to configure and manage.

The design of the E18 chassis proactively seeks to maximize the life span of disks by minimizing vibration and maximizing cooling. For IT operations—especially with respect to SLAs—that design helps ensure that storage performance guarantees concerning I/O throughput and data access will not be negatively impacted by data access errors induced by the physical environment. By extending disk drive life cycles, there will be fewer drive failures for IT to resolve and fewer periods of degraded performance as an array is rebuilt with a new drive. During our testing of an E18, openBench Labs generated a total of 95 million read and write requests without the occurrence of a single read or write retry.

SIMPLIFIED MANAGEMENT DELIVERS SAVINGS

While the Nexsan storage management software provides a number of important features to enhance IT productivity, the most important feature for lowering OpEx costs in a complex SAN topology is tight integration with VDS on Windows. VDS integration enables administrators to use Microsoft’s Storage Manager for SANs to implement full end-to-end storage provisioning of servers, fabric, and disk arrays. By invoking just one tool, an administrator can configure a Nexsan array, create and map a logical volume to a host server, and then format that volume for the host.

Nexsan also leverages sophisticated SAN constructs to simplify administrative tasks. All storage systems with multiple controllers need to handle the dual issues of array ownership (active service processors) and SAN load balancing (active ports). Through the Nexsan implementation of Asymmetric Logical Unit Access (ALUA), host systems with advanced MPIO software can access an E18 and discern the subtle difference between an active port and an active service processor. As a result, an IT administrator is able to map a LUN to each FC or iSCSI port on each controller and allow the server’s MPIO software to optimize port aggregation and controller failover.


Jack Fegreus is Managing Director of openBench Labs and consults through Ridgetop Research. He also contributes to InfoStor, Virtual Strategy Magazine, and Open Magazine, and serves as CTO of Strategic Communications. Previously he was Editor in Chief of Open Magazine, Data Storage, BackOffice CTO, Client/Server Today, and Digital Review. Jack also served as a consultant to Demax Software and was IT Director at Riley Stoker Corp. Jack holds a Ph.D. in Mathematics and worked on the application of computers to symbolic logic.