NCE Little Book of Data Storage


Upload: jerram

Post on 21-Feb-2016



Page 1: NCE Little Book of Data Storage

THE LITTLE BOOK OF

DATA STORAGE

Page 2: NCE Little Book of Data Storage


NCE Company Background

Over 30 years of experience in IT has served the privately-owned NCE business well and established us as one of the leading names in providing, integrating and maintaining technology in the Corporate Data Centre. Although this book is focused on Storage, an area in which NCE continues to enjoy great success, our service skills go way beyond that aspect alone. We maintain a wide variety of Data Centre products, with servers and switches amongst our service portfolio.

Our engineers fall into two camps, with skilled personnel located at our Dedicated Repair Centres in both Europe and North America, and multi-skilled personnel providing field service (on-site) capabilities through our regional support hubs in both of the aforementioned territories.

We work with a variety of “layers” of the IT industry too, providing service delivery for manufacturers & OEMs (as well as selling their technology), distribution, resellers, system integrators, service providers, third-party maintenance companies and end-users. Our customers tell us that they like our technical strengths along with our impartiality and independence; we hope that if you aren’t already an NCE customer you will become one once you’ve read through this book!

NIC Network Interface Card

nm Nanometer (Fibre Channel)

OEM Original Equipment Manufacturer

P2V Physical to Virtual

PEP Part Exchange Program

RAID Redundant Array of Independent Disks

ROI Return on Investment

RPM Revolutions per minute

RPO Recovery Point Objective

RTO Recovery Time Objective

SaaS Storage as a Service

SAN Storage Area Network

SAS Serial Attached SCSI

SATA Serial Advanced Technology Attachment

SCSI Small Computer Systems Interface

SFP Small Form-Factor Pluggable

SLA Service Level Agreement

SLC Single Level Cell (SSD)

SLS Shared Library Services

SMB Server Message Block

SSD Solid State Drive

TB Terabyte

TLC Triple Level Cell

UDO Ultra Density Optical

VADP vStorage API for Data Protection

VCB VMware Consolidated Backup

VCP VMware Certified Professional

VSS Volume Snapshot Service

VTL Virtual Tape Library

WAFL Write Anywhere File Layout

WEEE Waste Electrical and Electronic Equipment

WORM Write Once Read Many

A special thanks to everyone that has played a part in helping to put this Book together: Maddy for proof-reading it, Phil for laying it out so perfectly, the Solutions Sales Team and all at NCE for keeping the wheels turning whilst I shut myself away in a room to put this onto paper, and my family who have the patience (and provide me with coffee & refreshments) to allow me to do this in what would typically be “family time”. Not to mention those of you in the industry who continue to educate me about storage on a daily basis and provide the content for this publication! Incidentally, a message for my son who says that being an author and writing a book makes you rich – with knowledge, yes, that’s the truth of it…

Page 3: NCE Little Book of Data Storage

The Little Book of Data Storage - 10th Edition

Introduction 4

Understanding I/O 6

SSD - What’s the difference? 8

SSD – reliability? 10

SSD Summary Table 12

NCE - Where to find us 13

The SSD race: the runners and riders 14

Vendor Feature: Pure Storage 16

Hybrids 18

Vendor Feature: Dot Hill 20

Auto-Tiering 22

Software – Virtualisation 23

Vendor Feature: DataCore 24

Customer Case Study: Education 26

Storage Connectivity 28

Vendor Feature: Qlogic 30

Storage Interface Guide 32

Hard Disk Drive (HDD) 34

The HDD Natural Disaster 36

RAID Levels 38

Vendor Feature: Nexsan – by Imation 40

Roadmaps for HDD 42

Buyers Beware: Near-Line (NL) SAS - The Sheep in Wolf’s Clothing 45

NCE in the Healthcare Sector 46

Tape Storage 48

Continuous Data Protection (CDP) 51

Vendor feature: FalconStor 52

RIP “The Backup Administrator” 54

Vendor feature: WD Arkeia 56

Who’s who - Data Protection 58

Vendor Feature: Overland Storage 59

Customer Case Study: Local Government 60

Cloud Storage 62

Vendor Feature: SGi 64

Big Data 65

Glossary of Terms 66

NCE Computer Group recognises all trademarks and company logos. All products and their respective specifications adhere to those outlined by the manufacturer/developer and are correct at the time of publication. NCE will not accept any responsibility for any of the deliverables that fail to meet with the manufacturer/developer specification.

Page 4: NCE Little Book of Data Storage


Introduction

During a recent meeting with someone who is responsible for the provision, management and delivery of the storage infrastructure of a well-known household name, I heard him say “I like managing storage because I like a challenge”. This phrase, I felt, underpins the reality of the strategic game that is fought on a daily basis. Essentially you have two key elements: the user who (and I should include myself in this envelope!) consumes storage in a quite indiscriminate fashion – fuelled by the applications that encourage them to do so – and the ultimate storage target that the data has to reside upon. As nine previous editions and over 50,000 copies of this publication can testify, this isn’t a new challenge - far from it. But the technology and terminology have evolved (and continue to do so), and we have found that many of those in the real world (tasked with managing, provisioning and delivering storage) need a guide to help them sift through the vendor jargon and acronyms; that’s what this Little Book is here to do.

My working world is saturated with the latest and greatest messages on how storage has been reinvented and how someone has found the cure to all of our


Welcome to the 10th edition of the Little Book of Data Storage.

Page 5: NCE Little Book of Data Storage


storage problems. However, with many, many years of working with the “real world” of end-users, I have a somewhat cynical eye that looks for proprietary aspects, cost premiums and limitations before I take the cure seriously. Once in a while there is a breakthrough that does deliver on its promises, and since the last edition of this publication was printed a few of these have emerged; but for every one of them there are a good dozen or so that simply faded away.

The Tenth Edition of the Little Book of Data Storage is moving into the new generation, with electronic versions available and the addition of social media sites featured for those vendors that have adopted the likes of Twitter, LinkedIn and Facebook as messaging and communication vehicles. As ever, ideas on what should (and should not) be included in the book are always welcome, so please do send these across. It is you, the readers, who have helped to shape this book and continue to do so.

To coin an industry phrase, this publication is a “snapshot” of the world of Data Storage, one that I hope will provide you with concise information on why, who and what may be of relevance to you, and one that hopefully will encourage you to contact the agnostic and independent company behind this publication – NCE.

John Greenwood
Solution Sales Director - NCE
Author of the Little Book of Data Storage
Winner of Storage Magazine “Contribution to Industry” Award

@bookofstorage


Page 6: NCE Little Book of Data Storage


Understanding I/O

An alarming trend has started to emerge in the storage industry, as the traditional approach of simply buying storage to match the capacity required is no longer sufficient. The truth is that all storage is not the same and, arguably, capacity is no longer the most significant factor in the equation. However, when it comes to scoping your requirement, the capacity point is something that you can make a realistic and educated guess at, largely because the tools are in place to capture and simplify this aspect and always have been.

If only the same could be said for I/O – a term used to measure performance. I/O (Input/Output) has parachuted in and is suddenly something that every storage vendor dazzles you with when you are looking to buy storage. The irony is that not many of the aforementioned vendors have the tools or ability to actually provide you with an idea of what your I/O profile is today, let alone what it will be tomorrow. Therefore they are effectively shifting the risk into your corner and banking their “Get out of jail Free” card in case what you buy doesn’t deliver the required performance (I/O operations per Second or “IOPS”). We have seen this happen all too frequently in the past year or two and we are often called in after the event to rescue the situation.

So, who can provide a clear and accurate representation of your I/O profile? That’s not an easy question to answer as there are many different layers from which the information can be extracted. You could say the storage is an obvious place to start. If you take a disk shelf and populate it with drives there’s a good chance that you can associate an I/O expectation with each drive on that shelf. However, you then apply a RAID level to the disk shelf to provide resilience. Depending on the vendor and product, you may then find that some of the disk is reserved for

Page 7: NCE Little Book of Data Storage


snapshots or to store the file system. You then have connectivity to the outside world, which (in certain cases) can limit the performance of the disk that sits behind it. It may have some cache on the controller that boosts the I/O. It may have a file system that presents the storage to the outside world through its unique gateway.

Then there’s the network through which the data travels. Bandwidth may be throttled at certain times of the day or night, there may be spikes in performance when applications put a surge in demand onto the storage, or it may simply be a network that is already being pushed to its limit. And then there’s the server environment, with poorly scoped or overloaded virtualisation hosts, ill-configured databases and resource-hungry applications. Oh, let’s not forget the users – those with high expectations and little patience (I count myself as one of them!).

Take all of these factors into the equation and you can see the challenge. In truth, although the storage vendors want to achieve the “single pane of glass” that captures and presents all of this useful information in a consolidated and easy to understand format, it isn’t something that they have yet achieved. Subsequently, scoping storage with I/O has become aligned with specific areas – Virtual Desktop Infrastructure or “VDI” (given that it is known to offload the performance demands onto the storage) and the demands of Database Administrators (DBAs) being perhaps the most apparent of the set.

Essentially, having an appreciation of the potential performance capability of your existing storage, its utilisation and any latency in it is a good starting point when you are looking to refresh the environment. That information alone will provide a good foundation on which to build, and ultimately a starting point from which you can counter the vendor’s “how many IOPS do you need?” challenge with an educated answer. It probably won’t surprise you when I state that this is where NCE can help, with a vendor-independent assessment of your I/O profile showing the peaks and troughs of your environment without loading the information to suit a specific product/technology.
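To make the IOPS conversation concrete, here is a minimal back-of-the-envelope sketch (in Python) of how front-end IOPS can be estimated from drive count, RAID write penalty and read/write mix. The per-drive figures and penalties are rough industry approximations for illustration, not measurements of any specific product.

```python
# Rough effective-IOPS estimate for a RAID group (illustrative figures only).

# Typical random IOPS per drive (approximate; real vendor figures vary widely).
DRIVE_IOPS = {"7.2k NL-SAS": 75, "10k SAS": 125, "15k SAS": 175, "SSD": 5000}

# Back-end I/Os generated per front-end write for common RAID levels.
RAID_WRITE_PENALTY = {"RAID 0": 1, "RAID 1": 2, "RAID 5": 4, "RAID 6": 6}

def effective_iops(drive_type, drives, raid_level, read_fraction):
    """Front-end IOPS a RAID group can sustain at a given read/write mix."""
    raw = DRIVE_IOPS[drive_type] * drives          # aggregate back-end IOPS
    penalty = RAID_WRITE_PENALTY[raid_level]
    write_fraction = 1.0 - read_fraction
    # Each front-end read costs 1 back-end I/O; each write costs 'penalty'.
    return raw / (read_fraction + write_fraction * penalty)

# Example: 8 x 10k SAS drives in RAID 5 at a 70/30 read/write mix.
print(round(effective_iops("10k SAS", 8, "RAID 5", 0.7)))  # -> 526
```

Note how the write penalty nearly halves the usable IOPS of this group; this is exactly the kind of factor that a capacity-only purchase ignores.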

Page 8: NCE Little Book of Data Storage


SSD - What’s the difference?

Our industry loves an acronym, and the aptly named SSD (Solid State Drive) market hasn’t let us down as it has raced in with SLC (Single-Level Cell; 1 bit per cell), MLC (Multi-Level Cell; 2 bits per cell), eMLC (enhanced Multi-Level Cell; 2 bits per cell) and TLC (Triple-Level Cell; 3 bits per cell). I’d hazard a guess that many readers of this book hadn’t known what these stood for until now!

SLC

SLC is the luxury item in the SSD portfolio, offering the highest performance with the highest reliability but (unsurprisingly) accompanied by the highest cost.

MLC

MLC sits somewhere in the middle with regard to performance and reliability (when compared to its peers) but is a far more affordable variant, and at the time of writing it is the market-leading SSD “flavour”.

Page 9: NCE Little Book of Data Storage


eMLC

eMLC is essentially a variant of MLC that has been optimised for “endurance”: according to the manufacturers, it uses the best/premium 10% of the silicon chips to make up a solid state drive. This means that the drive can sustain more PE (Program/Erase) cycles, making it more robust than the MLC offering, albeit with a cost premium associated with it.

TLC

TLC is currently viewed as the new kid on the block and promises to offer higher capacity at a lower cost. However, its reliability is a question that remains unanswered until TLC has a customer base using it for Enterprise Storage purposes, one that can give a “real world” gauge of the true situation.

TLC has the ability to store 3 bits of information (8 possible values) per cell instead of the 2 bits (4 possible values) per cell provided by the aforementioned MLC and eMLC. However, in turn, this means that the cells are used more and there is less voltage fault tolerance. Applying voltage to the entire cell multiple times even though just one bit of information is being changed (depending on the bit in question) can slow down the write speed and causes more wear in general – so there is a trade-off in the increased cell capability.
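The bits-per-cell arithmetic above can be shown in a couple of lines of Python: each extra bit doubles the number of voltage states the cell must reliably distinguish, which is precisely where the endurance trade-off comes from.

```python
# Bits per cell vs. the voltage states a NAND cell must distinguish.
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    states = 2 ** bits  # 2 states for SLC, 4 for MLC, 8 for TLC
    print(f"{name}: {bits} bit(s) per cell -> {states} distinguishable states")
```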

Page 10: NCE Little Book of Data Storage


SSD – reliability?

In some ways it is unfair to compare HDD to SSD but, looking to the future, it is inevitable that SSD represents the biggest threat to the HDD market and, whilst the two complement each other currently, the boundary between them (be it drawn by cost, capacity or performance) will become less defined. However, there are two specific areas where comparisons can be made, and it appears that these are the key areas that make prospective buyers of SSD technology nervous – namely reliability and longevity/lifecycle.

Given that end-user surveys continue to tell us that the most important factor in any purchase of storage is reliability (typically followed by price and performance), this aspect has to be taken seriously; simply glossing over it isn’t an option and, in view of our strengths in servicing storage at NCE, it’s a factor that we are very wary of too.

When we started to look into SSD as a storage technology it became very apparent, very quickly, from the slides that we were seeing and the order in which they were being presented, that the reliability and longevity/lifecycle of SSD was a question that every SSD vendor wanted to answer even before it was asked. Ironically it was a question that we weren’t that concerned about, but as they all offered us an answer it became all the more intriguing.

Essentially the benchmark is the established hard disk drive (HDD) technology, and the emphasis is on the “mechanical” and “spinning” aspects of this when referred to by the SSD vendors. SSD is, in contrast, a fixed (non-moving) storage medium based on either “enterprise-grade” or “consumer-grade” NAND flash memory that was traditionally designed for digital cameras - and is doubtlessly used by us all. Does that make SSD more reliable than a HDD? In our opinion - no. HDDs can


Page 11: NCE Little Book of Data Storage


be repaired; SSDs are replaced – a very expensive approach by comparison.

We were astonished that some of the SSD vendors would not offer a support contract with their products, stating that they were so reliable the need for a support contract was simply irrelevant. That tune has changed as prospective customers have responded to say that, even though the vendor may be confident and committed to the “never fails” approach, the customers are not. They have to provide an SLA for their business, and the vendor should, in turn, be prepared to provide this peace of mind.

From a hard disk drive perspective, bit errors are consistent and typically adjacent to each other on a disk platter; in the case of SSD degradation, bit errors are more random across cells and are typically associated with usage and time. Bit error correction is addressed in a very different fashion when comparing an SSD with an HDD, and this can be a key factor in the longevity of the SSD solution.

“Wear levelling” is a term that has been adopted by the SSD market; it attempts to track wear and movement of data across segments by arranging data so that erasures and re-writes are distributed evenly. Be wary of any solution featuring NAND flash memory that doesn’t include wear levelling; it is a fairly critical factor in the reliability equation. There are two key types of wear levelling. The first is pretty much the de-facto standard in SSD, namely “static” – which provides longer life expectancy but slower performance. The alternative is “dynamic”, which flips this formula and provides better performance but shorter life expectancy, and is typically found in USB flash-based drives.

Having talked to someone far more technical than I am on this subject, someone I trust implicitly, he suggests that the reliability of SSD itself is not so much the issue in the world of Enterprise Storage – it is more a case of how the wrapper (Array) around the SSD manages the SSD technology, with a complementary tool/engine (such as the use of NV-RAM) typically playing a big part in this equation.
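As a sketch of the wear-levelling idea described above (illustrative only – a real flash translation layer also handles valid-page tracking, garbage collection and migration of static data), the allocator below always directs the next write to the least-worn block, so erase cycles spread evenly:

```python
import heapq

class WearLeveller:
    """Toy wear-levelling allocator: always write to the block with the
    fewest erase cycles so wear is distributed evenly across the media."""

    def __init__(self, num_blocks):
        # Min-heap of (erase_count, block_id) pairs.
        self.heap = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.heap)

    def write(self):
        erases, block = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (erases + 1, block))  # block worn once more
        return block

wl = WearLeveller(4)
writes = [wl.write() for _ in range(8)]
# After 8 writes across 4 blocks, each block has been erased exactly twice.
print({b: writes.count(b) for b in range(4)})  # -> {0: 2, 1: 2, 2: 2, 3: 2}
```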

Page 12: NCE Little Book of Data Storage


SSD Capacity/Type/Cost Summary Table

The storage market has continually strived to increase capacity and density whilst reducing power consumption, but the challenge in SSD is that both performance and reliability (or “endurance” as it tends to be tagged in the SSD arena) degrade as you increase the number of bits per cell. On the one hand, TLC appears to provide SSD with the opportunity to be seen as a more price-competitive animal when compared to the HDD market in which it will, inevitably, compete. On the other hand, the concerns over reliability may limit its adoption.

|                      | SLC                     | MLC                                      | eMLC                                 | TLC                      |
| Capacity points (GB) | 100, 200, 350, 400, 700 | 30, 60, 80, 100, 180, 200, 250, 300, 400 | 50, 75, 100, 150, 200, 300, 400, 500 | 120, 250, 500, 750, 1000 |
| Cost                 | High                    | Middle                                   | Low                                  | Lower                    |
| Performance          | Fast                    | Middle                                   | Middle                               | Middle                   |
| Approximate cycles   | 100,000                 | 1,000                                    | 10,000                               | 1,000                    |
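The cycle counts in the table translate into a rough lifetime-writes figure. A common approximation (a sketch only, ignoring over-provisioning and assuming a given write-amplification factor) is capacity × P/E cycles ÷ write amplification:

```python
def lifetime_writes_tb(capacity_gb, pe_cycles, write_amplification=1.0):
    """Very rough total terabytes writable before cell wear-out."""
    return capacity_gb * pe_cycles / write_amplification / 1000  # GB -> TB

# Using the approximate cycle counts from the table above:
print(lifetime_writes_tb(200, 100_000))  # a 200 GB SLC drive -> 20000.0 TB
print(lifetime_writes_tb(250, 1_000))    # a 250 GB TLC drive -> 250.0 TB
```

The two orders of magnitude between SLC and TLC endurance are exactly why TLC’s price advantage comes with the reliability question raised above.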

The evolution of storage technology continues to amaze us all. The truth is that what was once stored (and in some cases still is) on a hard drive the size of your hand, can now be stored on a flash memory card the size of your fingernail. That is the direction which the consumer market is taking: how quickly the business market will move to this model remains to be seen.

Regardless of the type of SSD, it is fair to say that in an environment requiring random read performance (databases such as Oracle or SQL, with their “random” demands, being good examples of this) the technology will blow HDD out of the water. This is where developing a complete understanding of your environment and the storage demands it has can help to justify investment in SSD – but only if you have the ability to then associate the SSD investment with the specific application or area that demands it.

Page 13: NCE Little Book of Data Storage


NCE - Where to find us

When we tell people about our passion for data storage we are often told “You should get out more!”, so we’ve taken this advice and as a result you’ll find us in a variety of places across the calendar year. Here are a few examples of the events that we attend and organise, plus some of the publications you’ll see us featuring in:

NCE Customer Karting Evening
IP Expo
Storage Awards
Storage Publications
VMware Forum
NCE Fast-Track Event

Page 14: NCE Little Book of Data Storage


The SSD race: the runners and riders

In certain circles, the story is that flash/solid state drive (SSD) technology has become a key player in the data storage market; there is an element of truth to this. However, the uptake isn’t perhaps as significant as the hype would suggest at this point in time, with cost per GB being the key objection when compared to the more traditional hard disk drive (HDD) alternative.

There is no question that SSD is ridiculously faster than HDD technology; that’s an impossible argument for the HDD market to win (ironically the only counter being to put flash cache as a “performance buffer” in front of the spinning disk, which isn’t really a straight technology fight!). But the performance versus capacity criteria, and the associated cost premium, are why many of the SSD-only warriors are yet to capture the imagination (or budgets) of the mainstream IT estates. Performance-hungry applications are, however, rich pickings for the SSD Array market, where I/O is worth every penny and capacity is an afterthought.

Bringing things back to layman’s terms, there are two specific categories into which you can group the SSD Array vendors: those who focus on performance, and those who focus on achieving a balance between cost and performance. In parallel to this you have the vendors that offer cards with Flash/SSD, which provide the acceleration and performance features to boost or buffer the storage traffic.

Page 15: NCE Little Book of Data Storage


[Logo chart: the SSD runners and riders, grouped into “Arrays” and “Cards”, with two vendors noted as “an EMC Company”]

Page 16: NCE Little Book of Data Storage


Vendor Feature: Pure Storage

Founded: 2009
Headquarters: Mountain View, California
Portfolio: All Flash Arrays (AFA)

There is a key formula which, when applied to “new” vendors in the storage industry, helps to establish whether they have a rock-solid business proposition and the technology to accompany it. There are well-respected, visionary people in our industry who have a strong reputation and track record for joining vendors that then experience great success. When such people gravitate towards a specific vendor, it is an indicator that there is something worthy of consideration on the horizon. Pure Storage have assembled a team that meets this criterion. Effectively, these people have done the “due diligence” on the company for you.

Ironically, Pure Storage were already on the NCE radar long before they moved into the European market, having caught our attention at an event in North America a few years ago. They have been establishing themselves as one of the leading names in the Flash/SSD Array sector, and the crisp, direct and sharp brand and messaging from Pure Storage (something that is encapsulated by the five-minute demo that features on their website) replicates that of a certain multi-national corporation based in Cupertino - only 10 minutes down the road from the Pure Storage Headquarters in Mountain View, California.

Page 17: NCE Little Book of Data Storage


Backed by funding from a number of well-respected technology VCs (Greylock Partners, Sutter Hill Ventures, Redpoint Ventures, & Index Ventures), strategic investor IQT and a flash memory manufacturer (Samsung Electronics) - coupled with a Management Team that have worked for Symantec, 3PAR, NetApp and Yahoo! - Pure Storage have the strong foundation required to build upon.

Pure Storage produce an All Flash Array (AFA) – with the emphasis on the word “Array”, as opposed to some of the Appliances that are available – which benefits from inline data reduction (combining compression, global de-duplication, pattern removal and thin-provisioning) to deliver performance and capacity at a price point that will rival that of spinning-disk based solutions. Pure Storage use consumer-grade MLC SSD, but the “FlashCare” technology within the Purity Software stack provides detailed integrity checking and monitoring of wear-levelling on these drives. This, coupled with the layer of DRAM in the controllers for frequently accessed metadata, provides an ideal combination.

Connectivity is offered through 8Gb Fibre Channel and/or 10GbE iSCSI, with InfiniBand used within the controllers at the back-end to aggregate performance. The product, and the Purity Software that manages it, has been designed solely for SSD (and the flash technology contained within the SSD), meaning that it is at the start of the SSD technology curve rather than being in the middle or near the end of it.
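To illustrate what inline data reduction means (a toy sketch only – the actual Purity pipeline combining compression, global de-duplication and pattern removal is far more sophisticated), deduplicating identical blocks by hash and then compressing the unique ones can multiply effective capacity:

```python
import hashlib
import zlib

def reduction_ratio(blocks):
    """Logical bytes stored per physical byte after dedupe + compression."""
    unique = {hashlib.sha256(b).digest(): b for b in blocks}   # global dedupe
    physical = sum(len(zlib.compress(b)) for b in unique.values())
    logical = sum(len(b) for b in blocks)
    return logical / physical

# Four 4 KiB blocks, two of them identical and all highly compressible.
data = [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"C" * 4096]
print(f"Reduction ratio: {reduction_ratio(data):.1f}:1")
```

Real-world ratios depend heavily on the data; repetitive virtual-machine images reduce dramatically, while pre-compressed or encrypted data barely reduces at all.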

Page 18: NCE Little Book of Data Storage


Hybrids

Different vendors approach flash/SSD technology in different ways, and it is essential to understand how they use the technology and where it sits before you can start comparing their respective solutions. Putting flash/SSD technology behind a storage controller that has been designed to push and pull data from a spinning disk can introduce a bottleneck that limits the true performance of flash/SSD, especially when the controller is shared with a spinning disk environment and not dedicated to flash/SSD.

However, if you have a complementary software layer that benefits from Automated Storage Tiering (AST), it is possible that the differentiation of data, and the performance it demands, is a good fit for the underlying Hybrid storage hardware. This is an emerging market in our industry. The added benefit of this model is in the cost vs capacity argument, with hybrids able to offer the capacity of “traditional” hard disk drives combined with the blistering speed of solid state disk at a more realistic price point (albeit with a performance trade-off) when compared to the All Flash Array (AFA).

For independent guidance and advice, contact NCE: www.nceeurope.com

Page 19: NCE Little Book of Data Storage


Page 20: NCE Little Book of Data Storage


Vendor Feature: Dot Hill

Founded: 1984
Headquarters: Longmont, Colorado
Portfolio: Storage Arrays, Appliances and complementary software

Dot Hill are one of the best-kept secrets in the storage industry – until now! Many of you reading this book will probably have used, or even be using, a Dot Hill product, but owing to it being provided through one of their many OEM agreements (which I won’t specify for fear of breaching a non-disclosure agreement) you won’t know the name Dot Hill independently.

With over 500,000 storage systems deployed worldwide, Dot Hill understand storage, and from this backdrop has come a technology platform (the AssuredSAN Pro 5000) that has been designed to deliver an all-in-one, real-time, automated tiered storage solution. Given the Dot Hill pedigree (including economies of scale enjoyed through the branded and OEM sales) and the underlying storage hardware, it is relatively easy for Dot Hill to achieve the reliability, build quality and integration. The challenge is in the RealStor management software layer and the ability to deliver the feature set – thin-provisioning, remote replication, snapshotting, storage pooling and auto-tiering – which could differentiate it from other appliances proclaiming to do this.

Page 21: NCE Little Book of Data Storage


Some of this was already in the Dot Hill skillset, as the Dot Hill branded AssuredSAN arrays have been offered with Data Management Software (DMS) options featuring AssuredRemote, AssuredSnap and Assured Copy capabilities for some time. Thus the experience and technology were already in place in these respects. However, the licence to architect the additional software features from a blank sheet of paper is the true jewel in the crown of the AssuredSAN Pro 5000: there were no pre-existing limitations or code restricting what could be done.

The RealTier aspect of the RealStor management software layer dynamically responds to user data demands by moving ‘hot’ data to an SSD tier. It monitors data ‘hot spots’ and automatically moves them to faster media to leverage the capabilities of SSD and SAS drives. In addition, it manages the less active data and moves that down the storage food-chain to ensure the optimum cost and storage efficiency is achieved. This is smart storage, provided and supported by an established vendor. Please contact NCE for more information about the AssuredSAN Pro 5000.

Page 22: NCE Little Book of Data Storage


Auto-Tiering

This is a term that is easily misunderstood. To be fair, this misunderstanding has often been generated by those who have used the phrase to present their products as Auto-Tiered although, if you stick to the true meaning of the term, they are actually not. So, what exactly is “Auto-Tiering”?

Tiered Storage is something that was always going to become more prevalent in a market that is seeing the phrase “consolidation” applied to all aspects of IT. The idea of combining storage that provided performance, capacity and density into a single platform is one that has traditionally been limited by the reluctance of vendors to unify the technology. However, the emergence, development and acceptance of virtualised storage has allowed this architecture to evolve.

Credit where credit is due (and let’s not forget that this was covered in the previous edition of this Book!), Compellent (now owned by Dell) – with their Fluid Data Architecture, were pioneers in this sector. By actively mapping and categorising storage blocks, they could dynamically direct the block onto the respective storage layer – with a choice of SSD (high performance, low capacity), SAS HDD (mid performance, mid capacity) and SATA HDD (low performance, high capacity) available in their hardware solution.

To some this was the holy grail of storage, and many have gone back to the drawing board to try and reproduce this concept for their storage offerings. Some of them are the same vendors who have previously presented their so-called “Auto-Tiered” solutions, simply using the ability to integrate a layer of high-performance flash/SSD as a buffer – something which has traditionally been delivered by cache at the controller layer. This is not tiering the storage; it is buffering the I/O, and it is limited by the ability to accommodate the required capacity.

Let’s rewind to a previous section of this book that covers the idea of a Hybrid array and what these can incorporate from a hardware perspective. Not wishing to confuse matters even more, some of these Hybrid arrays provide Auto-Tiering features and, as such, represent a complete Auto-Tiered solution. They are however brand and hardware locked which can carry a cost penalty and be dependent on device or platform certification when looking to scale the capacity or performance in the future.
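A toy sketch of the auto-tiering concept described above (the thresholds and block names are invented for the example, not any vendor’s actual policy): a periodic pass re-places each block on a tier according to how hot it has recently been, promoting busy blocks to SSD and demoting cold ones down the food-chain to SATA.

```python
def retier(access_counts, hot=100, warm=10):
    """Place each block on SSD, SAS or SATA based on its recent access count."""
    placement = {}
    for block, hits in access_counts.items():
        if hits >= hot:
            placement[block] = "SSD"    # high performance, low capacity
        elif hits >= warm:
            placement[block] = "SAS"    # mid performance, mid capacity
        else:
            placement[block] = "SATA"   # low performance, high capacity
    return placement

counts = {"blk0": 500, "blk1": 42, "blk2": 3}
print(retier(counts))  # -> {'blk0': 'SSD', 'blk1': 'SAS', 'blk2': 'SATA'}
```

Note that this moves whole blocks between tiers based on observed access patterns, which is the key difference from simply buffering recent I/O in a flash cache.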


Software – Virtualisation

Hands up! Who knows what Virtualisation is?

Hmmm, yes, it seems that a lot more of you have become familiar with this since we first made mention of it back in a previous edition of the Little Book! The new generation has great confidence in something that seemed like black magic a decade ago. The terminology that virtualisation brought to the party is something we’ve all become accustomed to: thin-provisioning, failover, snapshot, virtual machines, hypervisor, replication, mirroring etc. are all terms on which we are rarely challenged when using them in face-to-face meetings or on conference calls. The bar has therefore been raised, and additional (new) features are what the captive audience is waiting for.

No doubt you will be aware that we currently have three dominant players on the server hypervisor layer – namely Citrix (with XenServer), Microsoft (with Hyper-V) and VMware (with vSphere), all with differentiating strengths and stories to tell. We’ll skate over the detail on this as this book is about storage as opposed to server virtualisation.

From a storage hypervisor perspective, the choice seems limited. The challenge is that most vendors, largely to save themselves a lot of work, offer a self-certified hardware appliance, which takes away the dynamic (and, they would argue, the certification) of using the wide variety of storage hardware that is available today (something this publication has hopefully helped to increase your awareness of!). This appliance has a software engine pre-installed (typically running on a Linux kernel) which offers the storage hypervisor capabilities in association with the underlying storage hardware.

In some cases this is the perfect solution for the customer (and we recognise this) as, being a pre-configured and singly supported product, it can meet the specific deliverables that you are looking to achieve. In other cases, a solution that protects the storage investment already made, while providing the flexibility to add capacity, resilience and performance into the equation, can be preferable. It’s not a case of one size fitting all requirements, and this is where NCE can help to position the options that are available.


Vendor Feature: DataCore

Founded: 1998  Headquarters: Fort Lauderdale, Florida  Portfolio: SANsymphony-V Storage Virtualisation Software, the Storage Hypervisor

Having worked with DataCore for over ten years, we at NCE have seen their portfolio strengthen and their market share grow significantly as a result. The company now has over 8,000 customers worldwide, with many of these using multi-node configurations benefiting from the DataCore synchronous and asynchronous mirroring and replication features.

The key differentiator for DataCore when compared to the appliance-based alternatives is that the company is focused on producing software – software that is both vendor-independent and storage self-sufficient. This business model offers huge flexibility and scalability, and one key characteristic of the DataCore customer base is longevity: many have been customers for years, scaling the product as and when their environment has demanded an increase in capacity or performance.

SANsymphony-V was launched in 2011 and represented a complete overhaul of the DataCore software stack, with SANmelody previously representing the entry-point product and SANsymphony representing the enterprise-class solution. Having spoken to many customers that invested in and looked at the predecessors, the general consensus since SANsymphony-V came to market is that the new generation has “leapt forward enormously” and “leads the way to the future of storage” (real customers’ words, not those of a marketing machine).

Perhaps the jewel in the crown of SANsymphony-V is the ability that it has to monitor I/O behaviour, determining frequency of use, and then dynamically moving storage blocks to the appropriate Storage Tier – be that SSD, SAS HDD, SATA HDD or even out to the Cloud: true auto-tiering. The product also has one of the best storage reporting and monitoring engines in the business; it produces real-time performance charts and graphs, heat-maps and trends – something that represents huge value to anyone tasked with managing storage.

With DataCore SANsymphony-V being a storage hypervisor, features such as thin-provisioning, replication, load balancing, advanced site recovery and centralised management are integral to the product – a product that continues to capture the attention of those tasked with storage consolidation and cost efficiency.


Customer Case Study: Education

One of our many long-standing customers is one of the biggest colleges in the UK (with over 1,000 staff and in excess of 20,000 students). Meeting the ongoing IT demands of both the staff and the students is the task of the Infrastructure Support Team at the College.

The storage and virtualisation architecture has been designed in close collaboration with NCE, and scalability and flexibility are key elements of the NCE solution. At the heart of the storage area network (SAN) is a storage hypervisor, whose ability to synchronously and asynchronously mirror blocks of storage both locally and over distance using IP links provides full storage resilience between their multiple campuses. The underlying storage in the SAN also benefits from auto-tiering, with a mixture of high-performance, low-capacity SAS (15k rpm) drive technology and low-performance, high-capacity SATA (7.2k rpm) drive technology in their arrays providing a perfect combination for this. Connectivity is provided through a mix of fibre channel and iSCSI protocols.

NCE have also provided data availability and guaranteed uptime in the virtual server environment at the College with a solution that ensures business continuity and disaster recovery for VMware in a multi-tenanted environment. The College stipulated that one specific feature – automated recovery testing (providing peace of mind on the College’s key mission-critical virtual servers) – was included, and NCE met this objective at a realistic price point where others were unable to offer a solution.



We have also addressed the College’s backup needs with a solution providing features including deduplication, support for both physical and virtual server backup, granular database and application backup and recovery and a visual dashboard summarising the status of the jobs across the whole campus. NCE also provided and integrated the scalable disk (based on SATA HDD technology) and tape automation (based on LTO) hardware.

Our customer base features a large number of schools, colleges and universities – an audience faced with different challenges, with budgets, timelines and deliverables topping their priority lists. We also see high demand from this sector for our warranty extension service, providing ongoing support for server and storage hardware that the manufacturers no longer support, coupled with our services to migrate data to new storage when required. The College featured in this specific Case Study has also benefited from these services.


For independent guidance and advice, contact NCE: www.nceeurope.com


Storage Connectivity

Data can travel to storage devices in different ways. Probably the best comparison is that of the road network that we travel upon. Some would say that the toll road of storage is Fibre Channel as, typically, it carries a premium but (as with most toll roads!) there’s less traffic on it and you have a smoother journey. The main alternative is iSCSI, which travels along the established Ethernet highway – perhaps more representative of the road that most people use, with occasional traffic congestion and bottlenecks, but not costing anywhere near as much to use as the toll road.

One consideration in the analogy above is that your transport means needs to be allowed on the road in order to travel upon it. Some storage supports only one or the other protocol, so in some cases you have to travel on the toll road and you don’t have the choice to take the more affordable option. Other storage offers you the choice.


And then there’s a new concept that those stuck in the queue on the Ethernet highway are watching and wondering if they should invest in it: Fibre Channel over Ethernet (FCoE). Perhaps this should be seen as the car pool lane as it travels on the Ethernet highway, but you have to have the qualifying vehicle to use it.

Which is the best for you? This is largely dependent on which storage you will be using to achieve this connectivity; the last thing you want to do is to introduce a performance bottleneck into the equation.

At the back-end (behind the scenes, as it were) of the storage infrastructure you’ll also find that, in addition to the aforementioned protocols, a few others come into the equation. SAS (Serial Attached SCSI), with 6Gbit/s interfaces now featuring on storage technology, has aided performance to the device. High Performance Computing (HPC) has also seen a surge in the use of InfiniBand, with serial links operating at one of five data rates: single data rate (SDR), double data rate (DDR), quad data rate (QDR), fourteen data rate (FDR) and enhanced data rate (EDR). These provide our industry with the reason to adopt five more acronyms!

Common Storage Interfaces and Gigabit per second ratings:

SAS         iSCSI/Ethernet   Fibre Channel
3Gbit/s     1Gbit/s          4Gbit/s
6Gbit/s     10Gbit/s         8Gbit/s
12Gbit/s*   40Gbit/s*        16Gbit/s
            100Gbit/s*       32Gbit/s

*Denotes future release of technology at time of Little Book publication
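A word of caution when reading such tables: the line rate is not the payload rate. As a rough, back-of-envelope sketch (the helper below is illustrative, not any official tool), you can estimate usable throughput by allowing for the link’s encoding overhead – the slower interfaces here (e.g. 8Gbit/s Fibre Channel, 6Gbit/s SAS) use 8b/10b encoding (10 wire bits per data byte), while 10Gbit/s Ethernet uses the more efficient 64b/66b scheme:

```python
def usable_mb_per_s(line_rate_gbit, encoding="8b/10b"):
    """Approximate usable MB/s for a given interface line rate.

    8b/10b links carry 10 wire bits per data byte; 64b/66b links
    carry 8 * 66/64 = 8.25 wire bits per data byte.
    """
    wire_bits_per_byte = 10.0 if encoding == "8b/10b" else 8.25
    return line_rate_gbit * 1000.0 / wire_bits_per_byte

# 8Gbit/s Fibre Channel: ~800 MB/s of payload
print(round(usable_mb_per_s(8)))                  # -> 800
# 10Gbit/s Ethernet (64b/66b), before TCP/iSCSI overhead
print(round(usable_mb_per_s(10, "64b/66b")))      # -> 1212
```

Protocol overhead (TCP/IP for iSCSI, framing for FC) reduces these figures further, which is one reason the “toll road” comparison above holds in practice.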


Vendor Feature: Qlogic

Founded: 1992  Headquarters: Aliso Viejo, California  Portfolio: High performance storage connectivity products

“Play to your strengths” – it’s a phrase that applies to all walks of life. Qlogic have taken this expression and applied it to a new technology added to their established portfolio: the FabricCache 10000 Series Adapters.

It won’t have escaped your notice, if you’ve read this far into the Little Book, that optimising performance is a huge consideration and challenge for those of you responsible for managing the storage infrastructure. In parallel, addressing this without exceeding the already over-stretched budget allocated for the purpose is an unenviable task. Thankfully, in the Qlogic FabricCache 10000 Series Adapters (AKA “Mount Rainier”), there is a quick-fix (and affordable) solution.

More often than not, the performance demands of a specific server or application make the overall demands on the storage architecture appear far worse than, in truth, they actually are. This is a solution that allows the I/O culprits (once identified) to be isolated and tackled efficiently. It perfectly complements a SAN environment that is experiencing latency issues.


So what is it exactly? It’s an acceleration tool (using a combination of SSD, DDR3 and nvSRAM) which amalgamates the Qlogic strengths and pedigree in the Host Bus Adapter (HBA) and Storage Router markets. It appears as a normal Qlogic 8Gb Fibre Channel HBA (meaning that no additional drivers are required), albeit commanding 2 x PCIe ports within the server as opposed to the one that this would typically need, and provides an aggregated read booster (that’s my way of expressing it rather than using any official terminology!). On the occasions when you implement multiple FabricCache Adapters you can cluster them (up to 8 at the time of writing – 20 in the future) to provide a transparent cache with heartbeat verification across the estate.

Other approaches to this (I/O cards) typically shift the processing demands back onto the CPU in the server. But the FabricCache solution doesn’t add overhead to the server into which it is integrated, as it addresses the workload itself. I/O-intense applications like VDI, Oracle RAC, SQL and Exchange will visibly benefit from the use of FabricCache technology.


Storage Interface Guide

SCSI connections

50 Pin Centronics (SCSI 1)

50 Pin High Density (SCSI 2)

68 Pin High Density (SCSI 3)

68 Pin Very High Density VHDCI (SCSI 5)

80 Pin SCA

“Serial Attachment Interface” for SAS & SATA Connectivity

USB (Universal Serial Bus) connections

USB Type-A connection Mini USB-A

USB Type-B connection Mini USB-B

Micro-A Micro-B


Fibre connection

SC Connector

ESCON Connector

ST Connector

FDDI Connector

LC Connector

SFP/SFP+ Connector

For SCSI, SAS or Fibre Channel cables, terminators, GBICs or any other consumables, please don’t hesitate to contact NCE.


Hard Disk Drive (HDD)

Contrary to the rumours, customers are still buying HDDs, and the hard disk is still very much the storage media of choice, representing an excellent cost/capacity formula. It dominates the (largest) middle tier of the storage environment in any datacentre and certainly isn’t about to give up that title anytime soon.

Let’s not forget that the technology has been around for over fifty years, so it’s fair to say that it has pedigree in the storage arena. This $30bn market encompasses a wide variety of data storage purposes, with many of us using HDDs on a daily basis when we record or play back TV, save our progress on a games console or key in our destination on our satellite navigation system. Unquestionably, the aforementioned flash drive technology has encroached on the HDD market, with portable devices such as MP3 players and phones embracing its smaller size and lack of moving parts – perfectly suited to the mobile arena. But economies of scale with mass production and consumer demand, along with the capacity vs cost formula, have, at this point, left the Hard Disk Drive the dominant player.

If you open up a hard disk drive – and believe me, we do this a lot at NCE – the first thing that you’ll see is the circular “platter” onto which the data is written and from which it is read; this is supported by a spindle at the centre onto which the platter is loaded. The platter will spin at a speed measured in revolutions per minute (rpm) – typically between 5,400rpm and 15,000rpm on current drive technology. Inside the drive there is also the spindle motor and an actuator. The actuator (consisting of a permanent magnet and a coil) passes current through the coil and creates movement, depending on the direction of the current. Power connectors, jumper blocks and connectivity interfaces then feature on the back of the drive.

I’m trying not to show my age here, but the simple comparison to make is with a traditional record player (I know that this will isolate some of our readers, who will have to Google what I am referring to at this point!), with the vinyl record being the platter and the stylus being the head – although the HDD’s heads do not touch the surface; they fly just above it. Also, it isn’t one continuous track like a record, but thousands of circular tracks on each surface, made up of sectors (blocks). That comparison massively dumbs down a Hard Disk Drive, but it works for me…

So how exactly is our data stored on this mechanical contraption!? I’ll try and keep it simple (and physics wasn’t something I excelled at, believe me!): essentially it’s a magnetic recording process. Changes in the direction of magnetic patterns and sequences are encoded to form binary data bits. The head plays a key role in this process by translating what is written to and read from the platters that are being spun at high speeds. Platters are usually made from a non-magnetic material, for example aluminium or glass, and are coated in a narrow layer of magnetic material.

All of this sounds very meticulous with little margin for error, especially when you consider the size of a Hard Disk Drive. And, thankfully, the technology incorporates exactly that – margin for error, in the form of the Error Correction Codes (ECC) that feature to allow for bad sectors on a platter. The use of this information can be vital in foreseeing any potential drive failures that can become apparent through excessive wear, drive contamination or simply poor manufacturing and quality.
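The principle behind that margin for error can be shown with the simplest possible redundancy scheme. Real drive ECC uses far stronger codes (Reed-Solomon and similar) that can *correct* errors, not just detect them, so treat this as a toy sketch of the underlying idea – a stored check value lets a later read reveal that the medium has degraded:

```python
def even_parity_bit(byte):
    """Parity bit for a data byte: 1 if the byte has an odd number of 1s."""
    return bin(byte).count("1") % 2

def read_ok(byte, stored_parity):
    """Detect a single-bit error by recomputing the parity on read."""
    return even_parity_bit(byte) == stored_parity

data = 0b10110100                # four 1-bits -> parity bit is 0
p = even_parity_bit(data)

corrupted = data ^ 0b00000100    # one bit flipped by a weak sector
assert read_ok(data, p)          # clean read passes the check
assert not read_ok(corrupted, p) # single-bit error is detected
```

Tracking how often such checks fire (and how often sectors need remapping) is exactly the kind of information that helps foresee a failing drive.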


The HDD Natural Disaster

The day when Hard Disk Drive manufacturers realised that the term “Disaster Recovery” applied to them too…

In October 2011, floods in Thailand left the vast majority of HDD manufacturers simply unable to manufacture or deliver any hard disk drives. Prices rocketed as supplies ran out, and the disk storage industry realised how dependent it had become on one region of the world. Global news organisations focused their technology correspondents on a story that appeared pretty much overnight. I found myself sitting in front of my customers trying to give the impression that I understood how long HDDs would be in short supply and when things would return to normal.

The annoyance was that those with hard disk drives sitting in their warehouses were scaremongering the industry, saying that it would be “at least twelve to eighteen months” before drives would be available from manufacturing outlets again – ironically adding further inflated value to the stock that they held. They would happily show you pictures of divers trying to recover key manufacturing machinery, and of facilities completely submerged by the floods. Representatives of the manufacturers themselves were, unfortunately but understandably, reluctant to say anything until they had a full understanding of what sort of expectation they could set. The irony was that, for an industry that recognises a vast amount of revenue providing disaster recovery solutions and products for customers, and emphasising that a single point of failure is a bad approach, we didn’t appear to have practised what we preached. Effectively, the dependency on HDD manufacture (with all its associated components) in Thailand was our very own single point of failure.

Had SSD been at a point where it could have seized the opportunity and stepped up to the plate from a price, capacity and volume perspective, this would have seen a sizeable shift towards that media. It wasn’t, and an opportunity was missed. However, the episode has opened the eyes of those in the storage industry to their dependency on one storage type (hard disk drives) and, arguably, given complementary and competing technologies (such as SSD) the shot in the arm that was needed. Relying on one single storage type can be a risk in itself. It took the floods in Thailand for us to realise this.

For more information and pricing contact NCE

www.nceeurope.com


RAID Levels

I’ve faced some challenges in my life, but trying to make RAID levels an interesting subject has to be up there with the best of them. So, ride with me on this one and we’ll get through it together! You never know, between us we may find this knowledge useful somewhere down the line…

So, let’s focus on the term itself: RAID, meaning Redundant Array of Independent (or, previously, Inexpensive) Disks. The key word in the whole phrase is “Redundant”, as it implies that a failure can occur and the disk array will still remain operational. Although we know that RAID 50 is a RAID level offered in the storage industry, I am thankful to say that there aren’t fifty RAID levels to be covered in this section.

In truth there are only a few that are typically used or offered by RAID manufacturers today, and some manufacturers have their own exclusive RAID levels (let’s use NetApp as an example in the HDD environment, with RAID-DP, or Pure Storage in the SSD environment, with RAID-3D). Here’s a snapshot of what each conventional and non-exclusive RAID level provides:

RAID Level      Main Feature                       Parity
RAID-0          Block-level striping               No
RAID-1          Mirroring                          No
RAID-2          Bit-level striping                 Dedicated parity (on a single drive)
RAID-3          Byte-level striping                Dedicated parity (on a single drive)
RAID-4          Block-level striping               Dedicated parity (on a single drive)
RAID-5          Block-level striping               Distributed parity (can tolerate one drive failure)
RAID-6          Block-level striping               Double distributed parity (can tolerate two drive failures)
RAID-10 (1+0)   Mirroring + block-level striping   No
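The “redundant” part of the parity-based levels comes down to a simple XOR: in RAID-5, the parity block is the byte-wise XOR of the data blocks in a stripe, so any single lost block can be rebuilt from the survivors. A minimal sketch (illustrative only – real controllers work across whole drives, with rotating parity placement):

```python
from functools import reduce

def parity(blocks):
    """RAID-5 style parity: byte-wise XOR of all data blocks in a stripe."""
    return bytes(reduce(lambda a, b: a ^ b, byte_group)
                 for byte_group in zip(*blocks))

def rebuild(surviving_blocks, parity_block):
    """Recover a single failed block: XOR the survivors with the parity."""
    return parity(surviving_blocks + [parity_block])

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data blocks in one stripe
p = parity(data)

# Lose the middle block, then rebuild it from the rest plus the parity.
recovered = rebuild([data[0], data[2]], p)
assert recovered == b"BBBB"
```

The same XOR trick explains the capacity cost: one drive’s worth of space per RAID-5 group goes to parity (two for RAID-6’s double parity), while RAID-1/10 pay half the raw capacity for their mirrors.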


Vendor Feature: Nexsan – by Imation

Founded: 1999  Headquarters: Oakdale, Minnesota  Portfolio: High density storage technology

Since the last edition of this book was published, an evolution has taken place within Nexsan with respect to both the technology and the company, so please allow me to bring you up to speed.

Firstly, let’s focus on the product portfolio. Storage density has always played a major part in the Nexsan success story, and the arrival of the E-Series – featuring the 2U 18-bay E18, the 4U 48-bay E48 and the 4U (albeit deeper than the E48) 60-bay E60 – has taken the Nexsan Storage Array architecture into the next generation of storage. Those of you with Beasts and Boys will be pleased to know that they are still supported by NCE, and will remain so as the technology moves forward. The E-Series benefits from active drawer technology, allowing the array to remain operational whilst drives are added or replaced. It also supports expansion, with additional SAS ports on the E-Series controllers allowing E18X, E48X or E60X chassis to be added to their X-less equivalents. These variants of the E-Series are based on 3.5” (LFF) drive technology. Nexsan have also announced the E32, in essence the same architecture as the E18 mentioned above, but supporting 2.5” (SFF) drive technology and the increased density, performance and future capacity benefits that it offers.

Nexsan have also moved into the hybrid/appliance arena with their NST-Series. This technology provides a unified storage architecture that simultaneously supports Fibre Channel, iSCSI, NFS, CIFS, SMB and FTP protocols. The FASTier acceleration technology on the NST uses DRAM and SSDs to provide high-performance caching, increasing the perceived performance of the underlying SAS and SATA HDD technology as a result. It is worth emphasising that, even though the NST architecture supports SSD, SAS and SATA, it does not offer Auto-Tiering.

From a company perspective 2013 heralded a new name above the door for our friends at Nexsan, with established data storage giant Imation paying over $100M for their company. Imation brought the extended reach and capabilities of their billion dollar business to further strengthen the Nexsan brand and reputation globally.

If you have an interest in any of the products from the Nexsan by Imation portfolio, please contact NCE to discuss them further.


Roadmaps for HDD

Gaining access to this information (and, more importantly, trusting the accuracy of what you are then told) is perhaps one of the biggest challenges in the storage industry. We tend to believe what we’ve seen and what is actually published openly, rather than the vapourware that can feature heavily on corporate slide decks. On that basis, here’s what we can categorically say exists and will exist from an HDD perspective:

Drive Capacity   2.5” SFF SATA   2.5” SFF SAS   3.5” LFF SATA   3.5” LFF SAS
146GB            –               15,000 rpm     –               –
300GB            –               10,000 rpm     –               15,000 rpm
450GB            –               10,000 rpm     –               15,000 rpm
500GB            7,200 rpm       –              –               –
600GB            –               10,000 rpm     –               15,000 rpm
900GB            –               10,000 rpm     –               10,000 rpm
1TB              7,200 rpm       –              7,200 rpm       –
1.2TB            –               –              –               –
1.5TB            5,400 rpm       –              –               –
2TB              5,400 rpm       –              7,200 rpm       –
3TB              –               –              7,200 rpm       –
4TB              –               –              7,200 rpm       –

Please see the article entitled “Buyers Beware: Near-Line (NL) SAS – The Sheep in Wolf’s Clothing” for details on the truth behind the Near-Line (NL) SAS drive.

You will note that there are two key variants of hard disk drive – the 2.5” Small Form Factor (SFF) drive and the 3.5” Large Form Factor (LFF) drive. There has been a significant shift towards the SFF drive in the past few years as capacities have increased and server and storage array manufacturers have integrated the smaller drives into their portfolios. The LFF drive continues to have the capacity edge, and subsequently we have customers that are using a mix of the two, with SFF drives/arrays serving their mid-tier performance requirements (typically with the real SAS drives – see the “Sheep in Wolf’s Clothing” article for details) and LFF drives/arrays serving their low-tier capacity requirements (typically with the SATA drives).

It is also worth mentioning that the actual spin speed (rpm) of the drive when comparing the SFF with the LFF technology does not equate directly to a drive being that percentage faster. By having more real estate to cover, the large form factor drive (although spinning at 15,000 rpm, in the example of the 600GB capacity point) can be rivalled for performance by the small form factor 10,000 rpm variant as it has a smaller platter to address.
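One component of that performance picture is easy to quantify: average rotational latency, the time for half a revolution before the requested sector arrives under the head. A quick back-of-envelope calculation (seek time and transfer rate are separate factors, so this is only part of the story):

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency = time for half a revolution, in ms."""
    return (60_000.0 / rpm) / 2   # 60,000 ms per minute of rotation

for rpm in (15_000, 10_000, 7_200, 5_400):
    print(f"{rpm:>6} rpm: {avg_rotational_latency_ms(rpm):.2f} ms")
# 15,000 rpm -> 2.00 ms; 10,000 rpm -> 3.00 ms;
#  7,200 rpm -> 4.17 ms;  5,400 rpm -> 5.56 ms
```

So a 15,000 rpm drive halves the rotational wait of a 7,200 rpm drive – but, as the paragraph above notes, the smaller platter of an SFF 10,000 rpm drive can claw much of that difference back on seeks.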


Buyers Beware: Near-Line (NL) SAS – The Sheep in Wolf’s Clothing

A question for you: when is a SAS drive not a SAS drive?

When it’s a Near-Line (NL) SAS drive appears to be the answer, as we have met many, many customers who have perceived the “Near-Line” SAS drive to be something it is not – the high-performance alternative to SATA. The perception is that a Near-Line SAS drive is exactly the same as an HDD manufactured with a SAS interface, delivering the 15k or 10k rpm high spin speed and performance expected. But the truth is that it’s a slower-spin-speed SATA drive (typically 7.2k rpm) with a SAS connector. Ironically, it’s far cheaper than a proper SAS drive but typically more expensive than a SATA drive. The danger is that the user perceives it to be the former rather than a dressed-up variant of the latter.

The real SAS drives are a better-built and faster product, so when you buy the Near-Line equivalent thinking you are getting SAS at SATA pricing, it seems too good to be true – and it is. The words Near-Line, or the letters NL, are something you need to be very wary of, especially if you are looking to put in a Tiered Storage architecture. Using Near-Line SAS and SATA drives could, in essence, mean you are using exactly the same spin speed and drive type, and discovering this after you’ve spent your budget on what appeared to be the bargain of the day could be a very painful experience. The promotion of Near-Line SAS as SAS itself can be very misleading, but hopefully, by reading this, you will get what you expect to get and be aware of this marketing-driven confusion.


NCE in the Healthcare Sector

NCE have a large number of customers who fall under the National Health Service (NHS) banner in the UK, and we have worked with them on a wide variety of projects encompassing a whole host of storage challenges.

The current government directive has seen two key elements of the IT service emerge: CSUs (Commissioning Support Units) and those responsible for the Acute Trusts (typically a hospital). CSUs provide services to what was once known as a PCT (Primary Care Trust) but has now become a CCG (Clinical Commissioning Group). Critical to their success is IT service delivery and support.

Elements of the PCT legacy from an IT perspective have been moved to both the CSU and the Acute Trust – a split of infrastructure – and this transition has been the responsibility of those managing the IT estate (the consolidated Data Centre). In a number of cases, NCE’s skills have been brought to bear to help with this process.

Some of you may have heard the term PACS (Picture Archiving & Communication System) in association with medical imaging, and this hasn’t escaped the attention of NCE either! Medical imaging now offers increased quality and capabilities across X-ray, Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) technology, and the retention of these high-resolution images once generated has triggered increasing demands on the storage environment. Over the next year or so, the supplier contracts that were put in place for the PACS, Radiology Information System (RIS) and/or image archive systems are due for renewal, and this in itself has triggered demand for PACS localisation at the Acute Trust level. NCE have worked, and continue to work, closely with a number of Acute Trusts to put in place scalable and capable solutions for this application.

The Healthcare market unquestionably competes with the storage industry in the world of acronyms, and another initiative – the drive for EPR (Electronic Patient Records) within the NHS – is also putting huge demands on the IT infrastructure and its associated storage. Depending on where the Acute Trust is with regard to integration and consolidation, the ability to link the EPR with other systems (such as PACS) can mean storage of data is duplicated, often residing on completely different hardware vendors’ storage platforms and technologies. Equally, many Patient Administration Systems (PAS) have yet to be migrated over to the EPR framework. The GP (General Practitioner, or family doctor in old money!) may also work from a system that is independent of the EPR. In essence, the government’s objective of joining up the work within the NHS clearly needs to apply to the data too, and the help of an independent storage specialist (step forward, NCE!) can be key to doing this efficiently and affordably, retaining data security and compliance throughout, as patient records are not something that can be compromised.

In the era of league tables, with central government looking to score every Acute Trust, CSU and CCG on its performance, one of the most significant I/O-demanding elements of Healthcare is Informatics – collecting, managing, reporting and sharing the daily (and in some cases hourly) activity of the organisation. This Data Warehousing process provides the business intelligence against which, ultimately, the regulators and government judge (and provide funding) – therefore it is critical to every aspect of the NHS and sits at the top of the priority list for each and every organisation. The data is typically collated in a lump (a huge flat file), so it is an I/O-intensive application. Some of the technology covered in this book is perfectly aligned to this and has already been implemented for this very purpose.

Let’s not forget that the NHS IT function also has to provide those who work tirelessly to deliver one of the best healthcare systems in the world with access to day-to-day services such as email, documents and spreadsheets – and NCE have provided many storage area networks, backup and recovery solutions and petabytes of disk storage to serve this purpose too.

Page 48: NCE Little Book of Data Storage

48

Tape Storage

It’s hard to believe, but the first edition of this Little Book majored on tape storage technology. AIT, DDS (DAT), 8mm (Exabyte), DLT, S-DLT, DTF and VXA were tape formats that many chose as their backup storage media – trusting the nightly backup as their “get out of jail free” card in the event of data loss.

How times have changed! Ironically, that very first Little Book, on pages 20 & 21, covered a new (and unknown) technology in the tape storage world: Linear Tape Open (LTO). It’s a fairly controversial statement to say that LTO is the only remaining tape format. But for those that continue to use tape (and there are more of you than perhaps the media perceive, believe me!), it is the format of choice and has a huge market share (estimated at more than 90%). So, a pat on the back for me there I think; it appears that my storage crystal ball was working perfectly, even back then!

We now find ourselves with the sixth generation of the format, rather aptly named “LTO6”. With the ability to store 2.5TB of native (uncompressed) capacity on a single cartridge, it makes the first generation with its measly 100GB of capacity seem like a portable USB key in comparison. From a throughput perspective, LTO6 drives are available with either a 6Gbit SAS or 8Gbit Fibre Channel interface, with both variants able to deliver native speeds of 160MB/sec.
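To put those figures in perspective, here is a quick back-of-the-envelope calculation (a sketch using only the capacity and throughput numbers quoted above; the decimal-unit convention is an assumption, albeit the one tape vendors normally use):

```python
# How long does a cartridge take to fill when streaming at its native
# (uncompressed) rate? Figures from the text: LTO6 = 2.5TB at 160MB/sec,
# LTO1 = 100GB at 15MB/sec.

def fill_time_hours(capacity_tb: float, rate_mb_per_sec: float) -> float:
    """Hours to write a full cartridge at the native transfer rate."""
    capacity_mb = capacity_tb * 1_000_000  # decimal units, as vendors quote
    return capacity_mb / rate_mb_per_sec / 3600

print(f"LTO6: {fill_time_hours(2.5, 160):.1f} hours")   # roughly 4.3 hours
print(f"LTO1: {fill_time_hours(0.1, 15):.1f} hours")    # roughly 1.9 hours
```

In other words, capacity has grown 25-fold while a full-cartridge write has only roughly doubled in duration - the per-generation speed increases have largely kept pace.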

The “open” aspect of LTO remains, with three main manufacturers of LTO drive technology honouring their initial promise to stand by the format: HP, IBM and Seagate – albeit with the Seagate tape business now part of the Quantum portfolio. Others OEM the drive from one of the above and integrate it into their products, so if you own an LTO drive you’ll find that it was manufactured by one of these three. The media is also manufactured by three companies who have the license to produce it - Fujifilm, Maxell and Sony.

Tape doesn’t serve the purpose it once did in most businesses today. The idea of using it as a backup target has largely been displaced by the efficiencies (and eroded cost base) of high capacity HDD technology. However, it hasn’t died off because of this; arguably it has had a rebirth in a layer that was historically dominated by optical storage technology – something that fell at the hurdles of price per GB and future generations (i.e. how far can the capacity go).


LTO generation specifications and Sony media part numbers:

LTO1 – 100GB Native Capacity, 15MB/sec (54GB/hr) uncompressed data throughput – Sony Part Number: LTX100G
LTO2 – 200GB Native Capacity, 35MB/sec (126GB/hr) uncompressed data throughput – LTX200G
LTO3 – 400GB Native Capacity, 80MB/sec (288GB/hr) uncompressed data throughput – LTX400G
LTO3 WORM variant available – same specification as above – LTX400W
LTO4 – 800GB Native Capacity, 120MB/sec (432GB/hr) uncompressed data throughput – LTX800G
LTO4 WORM variant available – same specification as above – LTX800W
LTO5 – 1.5TB Native Capacity, 140MB/sec (504GB/hr) uncompressed data throughput, with LTFS – LTX1500G
LTO5 WORM variant available – same specification as above – LTX1500W
LTO6 – 2.5TB Native Capacity, 160MB/sec (576GB/hr) uncompressed data throughput, with LTFS – LTX2500G
LTO Universal Cleaning Cartridge for LTO Drives – LTXCLN
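The MB/sec and GB/hr columns above are two views of the same figure, and it is easy to confirm they agree. A small sketch (the dictionary of values is simply transcribed from the table):

```python
# Sanity-check the quoted GB/hr figures: MB/sec x 3600 seconds / 1000 MB-per-GB.
generations = {            # name: (native MB/sec, quoted GB/hr)
    "LTO1": (15, 54),  "LTO2": (35, 126), "LTO3": (80, 288),
    "LTO4": (120, 432), "LTO5": (140, 504), "LTO6": (160, 576),
}
for name, (mb_sec, gb_hr) in generations.items():
    assert mb_sec * 3600 / 1000 == gb_hr, name
print("all throughput figures are internally consistent")
```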


Having recognised that being an Archive storage target was the future, with long-term retention and compliance at the heart of this market, LTO technology evolved.

The LTFS (Linear Tape File System) aspect of LTO technology, introduced with the LTO5 generation of the format, has overcome the inherent label that tape is a dumb piece of media which simply streams data in a sequential fashion. Intelligence has been introduced: Dual-Partitioning of the media now means that an index and metadata (with block IDs and pointers) are held at the beginning of the tape, which point to and provide faster access to the content files that are contained on the latter partition of the tape. When loaded into a drive, LTFS means that the cartridge will appear as a storage device through a browser and its contents will be accessible in a drag and drop fashion. It is perhaps best to compare this with a USB drive when plugged into a USB port.
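The index-plus-data idea can be sketched in a few lines. This is purely illustrative (the class and its layout are inventions for the example, not the LTFS on-tape format): a small metadata partition is read first, and each lookup then needs only one seek into the sequential data partition.

```python
# Illustrative model of the LTFS dual-partition idea: partition 0 holds the
# index (names, offsets, lengths); partition 1 holds file content appended
# sequentially, just as tape always has.

class LtfsLikeTape:
    def __init__(self):
        self.index = {}    # "partition 0": filename -> (offset, length)
        self.data = b""    # "partition 1": sequential content

    def write(self, name: str, content: bytes) -> None:
        self.index[name] = (len(self.data), len(content))
        self.data += content            # data is still written sequentially

    def read(self, name: str) -> bytes:
        offset, length = self.index[name]   # one index lookup, one seek
        return self.data[offset:offset + length]

tape = LtfsLikeTape()
tape.write("scan001.dcm", b"patient-image-bytes")
tape.write("scan002.dcm", b"more-bytes")
print(sorted(tape.index))   # the browsable "drag and drop" directory listing
```

Reading the small index partition up front is what lets the cartridge present itself as a browsable volume instead of an opaque stream.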

This improved efficiency, coupled with the low cost per TB of the media and low energy consumption when compared with spinning disk, plus the portability of tape cartridges, explains why LTO technology continues to have a significant part to play in the storage industry. And, given that the LTO consortium have delivered six generations of this technology on time, there’s no reason to doubt that LTO7 and LTO8 – which are openly listed on the roadmap on the lto.org website – will feature in future editions of this Book.


Continuous Data Protection (CDP)

Television today is fantastic. Technology has leapt forward. You can start the content from a specific time and pause and rewind it. The reason for this? Storage has evolved, even in your living room. Gone are the video tapes and in their place you find HDD technology providing the random access and search capability - delivered through an easy-to-use software layer and GUI - that makes our life easier. So, why can’t the same idea be provided for data? The good news is that it can.

Continuous Data Protection (CDP) takes the idea of backup and recovery technology and adds increased granularity and flexibility to the whole process. It is important to stress that CDP is complementary to a backup and not a replacement for it. Fundamentally, CDP is based on periodic snapshot technology, automatically capturing any change that is made at a file or block level to data and storing it to a separate storage location.

To qualify whether you need a CDP solution, the questions asked typically relate to your RTO (Recovery Time Objective) and RPO (Recovery Point Objective). If you don’t really have a definitive answer to either of these, then the chances are that CDP technology isn’t for you. The operating system and application layer can also aid CDP, providing gateways or resources that snapshot (such as VSS in a Windows environment) and can be leveraged by the CDP technology to offer the seamless rollback capability.
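The "rewind to a point in time" behaviour described above can be sketched very simply. This is a toy model (the class and timestamps are invented for illustration, not any vendor's implementation): timestamped snapshots are journalled, and a rollback returns the newest snapshot at or before the requested recovery point.

```python
# Minimal sketch of the CDP rollback idea: keep timestamped snapshots and
# restore the latest one at or before a requested recovery point (the RPO
# in action).
import bisect

class CdpJournal:
    def __init__(self):
        self.times, self.snaps = [], []

    def snapshot(self, t: float, state: dict) -> None:
        self.times.append(t)
        self.snaps.append(dict(state))   # copy the state at capture time

    def rollback(self, recovery_point: float) -> dict:
        i = bisect.bisect_right(self.times, recovery_point) - 1
        if i < 0:
            raise ValueError("no snapshot at or before that point")
        return dict(self.snaps[i])

j = CdpJournal()
j.snapshot(10, {"report.doc": "v1"})
j.snapshot(20, {"report.doc": "v2 (corrupted)"})
print(j.rollback(15))   # -> {'report.doc': 'v1'} - the pre-corruption state
```

The gap between snapshot timestamps is effectively your RPO: the finer the capture interval, the less data a rollback can lose.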


Vendor feature: FalconStor
Founded: 2000
Headquarters: Melville, New York
Portfolio: Data Protection and Storage virtualisation software

Transparency and flexibility are two key attributes that are integral to a good CDP solution. Some applications and storage appliances offer the ability to snapshot, mirror/replicate and rollback from within their technology stack, but most are littered with caveats when offering this.

Using an independent software solution to deliver this feature set has cemented FalconStor’s position as a leading name in CDP. Bringing it back to basics, what this technology offers is the ability to rewind your data to a point in time (Recovery Point) – the perfect solution if a corruption has occurred or someone has inadvertently over-written or deleted a file. It is a heterogeneous software application that works with blocks of data and as a result doesn’t care whether the data is on a physical or virtual machine, resides on a specific hardware platform or storage type, or is connected over a specific protocol. It delivers business continuity.

Supporting up to 1,000 snapshots per LUN, the frequency and granularity provide you with the ability to offer the users complete peace of mind, knowing that you can deliver point-in-time recovery. The out-of-band approach to snapshotting means that this process does not interfere with the primary storage path and overcomes the bandwidth overhead challenge that the competition typically put in the datasheet small print.
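It is worth noting what a 1,000-snapshot ceiling means in practice: the retention window is simply the snapshot limit multiplied by the capture interval. A quick sketch (the 15-minute and hourly intervals are example figures, not FalconStor defaults):

```python
# With up to 1,000 snapshots per LUN (figure from the text), how far back
# can you roll? limit x interval, expressed in days.

def retention_days(max_snaps: int, interval_minutes: float) -> float:
    return max_snaps * interval_minutes / (60 * 24)

print(f"{retention_days(1000, 15):.1f} days")   # 15-min snapshots -> ~10.4 days
print(f"{retention_days(1000, 60):.1f} days")   # hourly snapshots -> ~41.7 days
```

So the finer the recovery granularity you offer users, the shorter the window it covers - a trade-off worth agreeing before deployment.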

Complemented by the patented (and industry respected) RecoverTrac technology, the CDP solution from FalconStor also provides an automated disaster recovery tool that offers a P2P, P2V, V2V or V2P server and application failover capability to complement the localised granular data rollback expected from a CDP solution.


MicroScan technology from FalconStor maps, identifies, and transmits only unique disk drive sectors (512 bytes), providing WAN optimisation for replication. Most other replication solutions transmit entire blocks or pages, whereas MicroScan can reduce network traffic by as much as 95% through this process. Adaptive replication automatically switches between continuous and periodic data transmission in the event of a temporary link outage or throughput degradation. This process queues data for subsequent transmission, while preserving write-order integrity.
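The sector-level idea is easy to illustrate. The sketch below is not FalconStor's algorithm - just a generic demonstration, under the assumption of a simple byte-for-byte comparison, of why shipping 512-byte sectors moves far less data than shipping whole blocks:

```python
# Illustrative sector-level replication: split the volume into 512-byte
# sectors and ship only the sectors that differ from the replica's copy.
SECTOR = 512

def changed_sectors(primary: bytes, replica: bytes):
    """Yield (sector_number, new_content) for each sector that differs."""
    for n in range(0, max(len(primary), len(replica)), SECTOR):
        p, r = primary[n:n + SECTOR], replica[n:n + SECTOR]
        if p != r:
            yield n // SECTOR, p

replica = bytes(2048)              # 4 sectors, in sync with the primary
primary = bytearray(replica)
primary[600] = 0xFF                # one byte changes, inside sector 1
delta = list(changed_sectors(bytes(primary), replica))
print([n for n, _ in delta])       # -> [1]: only 512 of 2048 bytes move
```

A one-byte change moves one sector (512 bytes) rather than the whole 2KB region - the same leverage MicroScan claims at WAN scale.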

Like what you read? Contact NCE for further information on FalconStor CDP.


RIP “The Backup Administrator”

Backup is a word that I have written in previous editions of this book more times than I’d care to mention, and this new edition is no exception. However, although it is still as important in my opinion, the word backup in isolation has become secondary to a word that used to be the poor relative to it – namely Recovery. One can’t survive without the other.

The challenge is that the success of the backup process is judged by the ability to be able to recover the data and the expectation and capability of recovery has leapt forward considerably. It’s no secret that the primary reasons for this are improvements in the storage media used as the backup target. Compared to legacy tape, the disk with its random and fast access capabilities raises the bar and even tape itself is now evolving with the arrival of LTFS (covered elsewhere in the book).

So let’s dive a little deeper into the way that these more efficient and faster storage means have shifted the emphasis from backup to recovery. Backup software used to be a fairly regimental animal, tasked with moving data from the disk where it was locally stored out to a tape drive where it could be taken off-site. If a recovery of this data was ever required, it would retain information on where exactly (i.e. on which tape) the data could be found and thus fulfil that purpose. Essentially it did what it was told to do and worked shifts when everyone else had gone home at night. It wasn’t the best job in the IT estate, but it paid the bills and took all night to complete as it chugged away in the corner doing what it had to do.


Then along came the straight-out-of-college kid, and with him he brought new ideas and new ways to address this. He introduced a layer of low cost disk into the backup process, a staging area to have things readied (buffered) in preparation for the backup process. That made the process of backing up far more efficient, and what took a whole night to do now only took a few hours. Equally, because the data was already staged and the backup process wasn’t impacting the front end (users and applications), it could be done during the business day.

So our overnight shift worker had to change his ways to overcome this, but he still felt that he held the key to the recovery door; after all, if anyone needed their data back he still had to use his legacy backup software and tapes to get it for them. Not so, as typically the recovery of files and data that is demanded by the users and applications is from a very narrow window (typically within the past few days or at worst two weeks). This data was available and retained in the staged disk layer and therefore the need to recover from tape or the off-site backup was minimal. By offering the users and DBAs the ability to browse this layer and do their own data and file recovery, our man was struggling to maintain any control of what was once his value to the business.

He could feel a little back-stabbed by his backup software, as this had not moved with the times; he accepted that this disk staging (D2D2T) methodology was something that could have seen backup software gathering dust on a shelf with those floppy disks and software manuals if it hadn’t played ball. The truth is that the role of a “Backup Administrator” is one that you rarely hear of in the modern day IT environment, and understanding tape rotation schedules like Grandfather/Father/Son or Tower of Hanoi is of little value to our industry anymore.


Vendor feature: WD Arkeia Network Backup
Founded: 1996
Headquarters: Carlsbad, California
Portfolio: Network Backup Software and appliances

Given the previous article positioning how Backup Software has had to evolve, it is appropriate that we provide an example of this with an established product in this market with which NCE has worked for well over a decade – Arkeia Network Backup. The average IT estate today consists of a whole myriad of operating systems, applications (and versions of these), data sets, physical and virtual servers, storage types and connectivity protocols. Subsequently, finding a solution that can encompass and support such an environment isn’t such an easy thing to achieve. Thankfully, in Arkeia Software, this challenge is one that can be met.

Arkeia supports over two hundred platforms including all of the more mainstream and recognised ones plus the likes of Netware, BSD, AIX, HP-UX and some of the more obscure Linux kernels, applications including Groupwise and Lotus Domino, databases including DB2 and PostgreSQL plus LDAP and RHEV. This demonstrates

Supported Platform Categories

• Apple MacOS® X

• FreeBSD

• HP-UX ®

• HP-Compaq-Digital Tru64 ®

• IBM AIX ®

• Linux (Generic) glibc

• Linux Debian ®

• Linux Mandriva Enterprise Server ®

• Linux Mandriva Corp. Server ®

• Linux Novell SLES ®

• Linux Novell Suse ®

• Linux Fedora ®

• Linux Redhat Enterprise Linux ®

• Linux Slackware

• Linux Ubuntu®

• Linux Novell OES ®

• Linux Yellowdog

• NetBSD

• Novell Netware ®

• OpenBSD ®

• SCO UnixWare, OpenServer ®

• SGI IRIX

• SUN Solaris

• VMware vSphere, ESX/ESXi ®

• Windows® 98/XP/Vista/7

• Windows Server NT4, 2000/2003/2008


the huge flexibility and capabilities of the technology that make it a perfect fit for business backup. The product is also “command-line friendly”, providing increased flexibility to those of you that like to script and make the solution more bespoke to your own requirements.

The product also has a patented “progressive deduplication” engine, and with deduplication (and how vendors do it) being such a key feature in every backup software vendor’s portfolio, the Arkeia approach is leading the way. It is flexible – a word you are probably starting to realise represents pretty much everything to do with Arkeia. It provides both source-side and target-side deduplication, or a mixture of the two! It is in-line (rather than post-processed) block-level deduplication and uses the patented sliding-window algorithm combined with “progressive matching” to ensure that optimum backup efficiency is achieved.
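To show what block deduplication buys you in general terms, here is a greatly simplified sketch. It is not Arkeia's patented sliding-window algorithm - just generic fixed-block, source-side dedup with invented names (`backup`, `store`, a 1KB block size) to make the idea concrete:

```python
# Simplified source-side block deduplication: hash each block, transmit only
# blocks the target has never seen, and send references for the rest.
import hashlib

BLOCK = 1024
store = {}                          # target side: digest -> stored block

def backup(stream: bytes):
    """Return (block references, bytes actually transmitted)."""
    sent, refs = 0, []
    for i in range(0, len(stream), BLOCK):
        block = stream[i:i + BLOCK]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = block   # "transmit" the previously unseen block
            sent += len(block)
        refs.append(digest)         # always record the reference
    return refs, sent

data = b"A" * 4096                  # four identical 1KB blocks
refs, sent = backup(data)
print(sent)    # first run: 1024 - four duplicate blocks dedupe to one
refs2, sent2 = backup(data)
print(sent2)   # second (unchanged) run: 0 - everything is already stored
```

Real engines refine this with variable, sliding-window block boundaries so that an insertion near the start of a file does not shift - and so re-transmit - every block after it.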

There are three ways to use Arkeia: as a software-only product, as a virtual appliance, or on a physical appliance. The latter is underpinned by the RA4300, RA5300 and RA6300 products from WD. It is worth highlighting that in early 2013 Arkeia Software were acquired by the Western Digital Corporation (WD), and as a result the Arkeia message, pedigree and bandwidth have increased massively now that they are part of the multi-billion dollar WD business.


Vendor Feature: Overland Storage
Founded: 1980
Headquarters: San Jose, California
Portfolio: Tape & Disk based storage hardware solutions

As one of the most established names in the Tape Automation market, Overland have long been associated with the NEO Library range, although product manufactured by Overland also gained significant market share through a number of OEM agreements, such as the HP StorageWorks badged DLT & SDLT MSL5026 and MSL5052, and the LTO-based MSL6030 and MSL6060 variants. However, as with all Tape Automation manufacturers still in existence, Overland have identified that manufacturing Tape Automation alone isn’t a sustainable business in the current climate.

The NEO branding lives on with the NEO 100S, 200S, 400S, 2000e, 4000e and 8000e continuing to provide reliable and scalable LTO based Tape Automation under the Overland banner, but in parallel to this Overland now have a solid HDD based portfolio too.

In 2008, Overland Storage acquired the well-known Snap Server disk-based product line from Adaptec, and it is from this platform that the SnapServer DX Series, SnapScale X2 and SnapSAN S-Series have evolved, combining cost efficiency with reliability and scalability. NCE have a long-standing relationship with Overland Storage and offer the complete Disk & Tape technology stack as part of our portfolio.


Customer Case Study: Local Government

NCE work with a wide variety of vertical markets. One such example is a local council who we have worked with for many years on their storage and virtualisation requirements. Here’s a summary of the work that we have done with them to date:

The IT Manager at the Council recalls how he and his team opened up the relationship with NCE. “Our initial requirement was for a Backup Solution. NCE came to discuss what we were looking to do and put together a solution that took into account our capacity, scalability and budgetary requirements.” NCE’s skilled team scoped and implemented a solution featuring disk to disk backup (staging), an automated tape library based on LTO tape technology and backup software to address all of the Council’s operating systems and applications.

Having gained confidence and trust in NCE, the company were then approached about another storage-related challenge that required guidance and industry knowledge. “We had been asked to look into putting in place a Disaster Recovery Plan, something that encompassed storage, so I spoke to NCE,” said the IT Manager. “They suggested that there was an evolving and maturing technology set that might help to achieve this but not be price prohibitive – music to my ears!” That technology was Virtualisation, both at the server and storage level.

Investment was made in the virtualised architecture and within a matter of weeks the flexibility and speed of deployment and ability to respond to requests for additional servers and storage were realised by the Council’s IT team.

“We soon realised that the flexibility of these Hypervisors both at the Storage Layer and the Server Layer offered us more than we’d first anticipated”


“Virtualisation was initially something that we felt could and would offer a good failover mechanism in the event of an incident preventing us gaining access to the main server room, but even something as trivial as a workman hitting a gas main on the road outside our building could trigger this scenario. We soon realised that the flexibility of these hypervisors both at the storage layer and the server layer offered us more than we’d first anticipated. Before long we thought long and hard about every new physical server we were about to buy and decided whether making it a VM was a more practical solution.”

However, the Council did experience a problem as the use of VMs in this new model continued to grow. “We saw a gradual decrease in the performance of our SAN (Storage Area Network) over a few months and users started to ask what was happening.” They contacted NCE for help. “The demands of the virtual server estate had simply outweighed the capabilities of the back-end storage (SATA based) which was initially scoped a few years beforehand.”

The solution came in the form of Arrays with higher performance SAS drives connected through the Fibre Channel Switches. “In fairness to the SAN, it couldn’t cope with what we were asking of it. Thankfully, the vendor-agnostic aspect of our storage hypervisor allowed us to choose a Disk Architecture that could deliver the performance we required and take into account our future demands. Our longstanding relationship with NCE has meant that we are confident that we have a trusted partner for storage and virtualisation”.

For independent guidance and advice, contact NCE: www.nceeurope.com


Cloud Storage

Clouds were gathering in the storage and virtualisation sectors when the last edition of this book was written, and the concept is now something that we all accept as a reality – especially the home user. I for one have used Dropbox to aid with the sharing of content with our publisher on this very book, so it would be completely false of me to say that the cloud doesn’t feature in my world! The “Public Cloud” is something that is attractive to me as a home user, challenged with storing (and sharing) the digital content that I have collated and continue to grow on a daily basis. I have the choice to decide what content I want to put in the Cloud and what I don’t. The same rules apply to business.

However, adoption at a business level isn’t at the rate some forecasted (excuse the pun). The demise of a British-based Managed Service Provider providing Cloud Services to the corporate market in early 2013 resulted in customers being asked to “pay £40,000 just to keep the lights on” (in the off-site Data Centre where their data was hosted). This has added to the nervousness of an already uncertain prospect list.

Will the Cloud play a role in the IT estate? Without doubt, yes. The Cloud can be carved into two sections – the “Private Cloud” and the “Public Cloud”. The idea of a consolidated and centralised resource from which you can access your data anywhere (the “Public Cloud”) and on any device (such as a phone, tablet or laptop - BYOD) makes perfect sense.

However, in business terms, the preference is for the “Private Cloud”, where the business retains ownership of the data and the resource. Essentially, no matter which way you look at it, the company’s data is the lifeblood of the business, and losing sight of it is something that does not sit comfortably with many businesses.


My interpretation is that some businesses classify some data as non-business critical and thus don’t want all of the implications associated with retaining it in their storage environment - cost, power, management, replication, backup etc. In these cases, the Cloud (with certain compliance and security policies associated) represents a perfectly viable place to put this data. The challenge appears to be identifying where this data is in any business and having the confidence to say that moving it out of the company’s IT estate won’t compromise the business in any way. Few are prepared (at this point) to make such a bold call. On that basis, it’s not so much the case of the Public Cloud providers being able to deliver the required service; it’s more a case of the owners of the data that they’d like to host knowing what it is and where it is….

Without question, having data either replicated to or residing at a co-located facility makes sense: it moves all of the eggs out of one basket by having a remote site poised to take control in the event of the primary site experiencing an issue. However, it is important to maintain an understanding and appreciation of what you need and what you’ll get when entering into conversations about this approach.

The “aaS” (careful what you read there!) tag has been somewhat over-marketed, with Software as a Service (SaaS), Infrastructure as a Service (IaaS), Storage as a Service (SaaS), Platform as a Service (PaaS) and Backup as a Service (BaaS) confusing the customer to a whole new level (Confusion as a Service?!). The Cloud is integral to all of these “services”, so simply implying that the Cloud may be able to help you could expose you to all of this terminology.

“The ‘Public Cloud’ is something that is attractive to me as a home user”


SGi
Founded: 2009
Headquarters: Fremont, California
Portfolio: High Performance Computing (HPC) and Big Data solutions

If you are looking to find a Big Data solution there’s a certain calibre that you’ll be looking for in a storage vendor. Ideally they need a track record as an established name in high performance computing and representation that their Big Data portfolio isn’t just a marketing tag-line. We recognise that at NCE and, as a result, are working closely with SGi to deliver solutions for this environment.

SGi has a heritage in this area, having deployed some of the largest Hadoop installations in the world (built on software from Cloudera) and also the largest single Hadoop cluster; so their suitability for this is unquestionable. I have stood in Data Centres where racks of SGi equipment have been deployed for this very purpose so I can personally endorse that the technology is used in the “real world”.

Perhaps the SGi marketing department got a little carried away when they named one of their Big Data solutions the “DataRaptor”; the tag-line used when it is coupled with the MarkLogic database – “Eating Big Data for Lunch” – suggests that the solution digests, as opposed to ingests, the content, but we’ll let that one go!

The SGi UV 2000 architecture supports 64TB databases residing in memory - faster than flash - and is capable of ingesting 4TB/sec of incoming data. Behind this can reside PBs of storage capacity, safely stored under the Data Migration Facility (DMF) software that can create active archives using the ArcFiniti technology - perfect for Big Data demands.


Big Data

Here’s the latest industry term that nobody seems to have a clear definition of, and that the marketing departments of some storage vendors are scattering all over their corporate literature. Our good friends with a well-known website (the online dictionary that we all turn to when requiring clarification) define it as “a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications”, and this leans towards certain applications and environments with this type of digital content (post production, broadcast, engineering, government, research, exploration, analytics etc).

The ingestion of this data, which is typically collated from a variety of sources and is typically unstructured, can be the primary challenge. Unification of all of this data onto a single storage platform/brand typically underpins the message that many storage vendors are trying to represent.

It is in the context of processing this Big Data that you’ll hear a term called Hadoop mentioned. Hadoop is a free programming framework that has been developed by the Apache Software Foundation for this very purpose. It aggregates the performance of petabytes of data using a distributed file system in a “super-clustered” approach. This idea has been adopted in storage, and the term “scale out” has been coined to reflect this in a storage environment that aggregates performance over multiple heads/gateways.
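The scale-out pattern Hadoop popularised can be shown in miniature. This toy sketch (in-process Python, with invented shard data - real Hadoop distributes the map phase across cluster nodes over a distributed file system) counts log entries MapReduce-style: each "node" maps over its local shard, and the partial results are reduced into one answer:

```python
# Toy MapReduce: map independently over each data shard, then merge
# (reduce) the partial results - the essence of the scale-out approach.
from collections import Counter
from functools import reduce

def map_phase(shard: str) -> Counter:
    """Runs on each 'node' against its local shard of the data."""
    return Counter(shard.split())

def reduce_phase(a: Counter, b: Counter) -> Counter:
    """Merge two partial tallies into one."""
    return a + b

shards = ["error warn error", "warn info", "error info info"]
totals = reduce(reduce_phase, map(map_phase, shards))
print(totals["error"])   # -> 3, aggregated across all three shards
```

Because each map runs against local data, adding nodes adds both capacity and throughput - which is exactly why the "scale out" label stuck in storage.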

Some vendors had already carved out their technology stack and had a lucky strike when the idea of a scale up (additional clustered capacity) and scale out (additional clustered performance) architecture rode into town on the Big Data bandwagon; others have had to adapt (and in some cases are still adapting) to meet this objective. From an NCE perspective, we have solutions within our portfolio that are perfectly suited for this purpose, so please don’t hesitate to ask us if you count yourself as being in the “Big Data” envelope. By the way, it’s funny how you don’t hear anyone referring to “Little Data” (yet?!) – maybe that will be in the next issue!


Glossary of Terms

AIT Advanced Intelligent Tape

AFA All Flash Array

API Application Programming Interface

ATA Advanced Technology Attachment

BYOD Bring Your Own Device

CAS Content Addressed Storage

CDP Continuous Data Protection

CIFS Common Internet File System

CNA Converged Network Adapter

CoD Capacity on Demand

CPU Central Processing Unit

D2D2T Disk to Disk to Tape

DAS Direct Attached Storage

DAT Digital Audio Tape

DBA Database Administrator

DLT Digital Linear Tape

DR Disaster Recovery

DSD Dynamically Shared Devices

ECC Error Correcting Code

eMLC enhanced Multi-Level Cell (SSD)

FCoE Fibre Channel over Ethernet

FTP File Transfer Protocol

GBE Gigabit Ethernet

GBIC Gigabit Interface Converter

HBA Host Bus Adapter

HDD Hard Disk Drive

IDE Integrated Drive Electronics

IP Internet Protocol

IPO Initial Public Offering (Share issue)

ISV Independent Software Vendor

JBOD Just a Bunch of Disks

LRM Library Resource Module

LTO Linear Tape Open

LUN Logical Unit Number

LVD Low Voltage Differential

MEM Memory Expansion Module

MLC Multi Level Cell (SSD)

NAND Negated AND (Flash)

NAS Network Attached Storage

NCE National Customer Engineering

NFS Network File System


NIC Network Interface Card

nm Nanometer (Fibre Channel)

OEM Original Equipment Manufacturer

P2V Physical to Virtual

PEP Part Exchange Program

RAID Redundant Array of Independent Disks

ROI Return on Investment

RPM Revolutions per minute

RPO Recovery Point Objective

RTO Recovery Time Objective

SaaS Storage as a Service

SAN Storage Area Network

SAS Serial Attached SCSI

SATA Serial Advanced Technology Attachment

SCSI Small Computer Systems Interface

SFP Small Form-Factor Pluggable

SLA Service Level Agreement

SLC Single Level Cell (SSD)

SLS Shared Library Services

SMB Server Message Block

SSD Solid State Drive

TB Terabyte

TLC Triple Level Cell

UDO Ultra Density Optical

VADP vStorage API for Data Protection

VCB VMware Consolidated Backup

VCP VMware Certified Professional

VSS Volume Snapshot Service

VTL Virtual Tape Library

WAFL Write Anywhere File Layout

WEEE Waste Electrical and Electronic Equipment

WORM Write Once Read Many

A special thanks to everyone that has played a part in helping to put this Book together: Maddy for proof-reading it, Phil for laying it out so perfectly, the Solutions Sales Team and all at NCE for keeping the wheels turning whilst I shut myself away in a room to put this onto paper, and my family who have the patience (and provide me with coffee & refreshments) to allow me to do this in what would typically be “family time”. Not to mention those of you in the industry who continue to educate me about storage on a daily basis and provide the content for this publication! Incidentally, a message for my son who says that being an author and writing a book makes you rich – with knowledge, yes, that’s the truth of it……


6 Stanier Road, Calne, Wiltshire, SN11 9PX

t: +44 (0)1249 813666f: +44 (0)1249 813777

e: [email protected]

1866 Friendship Drive,El Cajon, California CA 92020

t: +1 619 212 3000f: +1 619 596 2881

e: [email protected]

The Little Book of Data Storage - 10th Edition

INTERNATIONAL CERTIFICATION: ISO 9001 AND 14001 REGISTERED FIRM

@nceeurope

www.linkedin.com/company/nce-computer-group

NCE Computer Group Europe

The pocket-sized storage search engine
