
Rev. 1.0.1 / 2012-10-10

White Paper

HDD & RAID
Basic information for the assessment of hard drive durability as well as on the functionality and suitability of redundant storage systems

English

www.dallmeier.com

Table of Contents

1 Abstract
2 HDD Durability
  2.1 AFR, MTBF and MTTF
    2.1.1 Definition
      2.1.1.1 AFR
      2.1.1.2 MTBF
      2.1.1.3 MTTF
    2.1.2 Relation
    2.1.3 Interpretation
    2.1.4 Practical Relevance
      2.1.4.1 Failure Rate - A Google Inc. Study
      2.1.4.2 Failure Time - A Carnegie Mellon University Study
      2.1.4.3 Failure Rate - A Dallmeier electronic Analysis
    2.1.5 Conclusion
  2.2 SMART
    2.2.1 Definition
    2.2.2 Interpretation
    2.2.3 Practical Relevance
    2.2.4 Conclusion
3 Storage Technologies
  3.1 JBOD
    3.1.1 Capacity and Costs
    3.1.2 Safety and Rebuild
    3.1.3 Conclusion
  3.2 RAID 1
    3.2.1 Capacity and Costs
    3.2.2 Safety and Rebuild
    3.2.3 Conclusion
  3.3 RAID 5
    3.3.1 Capacity and Costs
    3.3.2 Safety and Rebuild
    3.3.3 Conclusion
  3.4 RAID 6
    3.4.1 Capacity and Costs
    3.4.2 Safety and Rebuild
    3.4.3 Conclusion
4 Recommendations
5 References


1 Abstract

Hard disk drives (HDDs) are storage media that write and read binary data on rotating magnetic disks using a sophisticated mechanism [1]. They represent a central component of all digital recording systems and are used for the continuous storage of audio and video data. Despite the highest quality standards, the natural wear of HDDs must be considered, which is further increased by usually uninterrupted operation (24/7).

A RAID system is a storage technology that combines multiple HDDs into a single logical unit in which all data is stored redundantly [2]. This redundancy allows for the failure and replacement of individual HDDs without data loss. In digital recording systems, RAID systems are used to absorb the natural wear and failure of hard disk drives.

Even advanced RAID systems in conjunction with highest-quality HDDs cannot guarantee complete data security. This conclusion is based on the limitations of RAID technology [3] and the lifetime of hard disk drives observed in practice [4]. The common view that a RAID system provides the same security as a backup is not applicable [5].

This document explains various terms that are relevant in assessing the reliability of hard disk drives. Their actual practical relevance is then derived on the basis of two highly regarded studies by Google Inc. [4] and Carnegie Mellon University [6]. Based on the latter, the functioning of various RAID systems and their advantages and limitations are considered.


2 HDD Durability

Usually, various figures (AFR, MTBF, MTTF) from the respective manufacturer's data sheets are used for assessing the reliability of hard disk drives. Moreover, common methods (SMART) for early failure detection are considered. As shown in the following, these theoretical considerations are of limited use for the assessment and planning of storage systems. They must be expanded to include observations and experiences from practice.

2.1 AFR, MTBF and MTTF

AFR, MTBF and MTTF are the most common manufacturer information on the reliability of hard disk drives. They are based on experience, estimates and projections. Therefore, they cannot be interpreted as absolute figures, but only as expectation or probability values.

2.1.1 Definition

2.1.1.1 AFR

The AFR (Annualized Failure Rate) is the percentage of a certain population of hard disks that is expected to fail, extrapolated to one year based on expectation values [6] page 3.

2.1.1.2 MTBF

The MTBF (Mean Time Between Failures) specifies the expected operating time between two consecutive failures of a device type in hours (definition according to IEC 60050 (191)) [7].

The MTBF considers the life cycle of a device that fails repeatedly, is repaired and returned to service again.

Fig. 1 MTBF - Time between failures (the device alternates between up and down states, separated by repairs)

This consideration can also be related to a failure without repair, as is typically assumed for hard disks. In this case, the average operating time until the failure of the device is considered.


2.1.1.3 MTTF

The MTTF (Mean Time To Failure) specifies the expected operating time until the failure of a device type in hours (definition according to IEC 60050 (191)) [8].

Fig. 2 MTTF - Time to failure (the device is up until a single, final failure)

The expressions MTBF and MTTF are often used synonymously in terms of drives. Even the backronym "mean time before failure" has become customary for MTBF [7].

Seagate Technology LLC, for example, estimates the MTBF of its hard drives as the number of operating hours per year divided by the projected failure rate AFR [9] page 1. This view is based on a failure without repair. MTTF would, thus, be the correct term.

2.1.2 Relation

AFR, MTBF and MTTF are values based on experience, estimates and projections. They can be set based on a singular view or calculated in relation and using various estimation methods (e.g. Weibull, WeiBayes) [9] page 1.

Simplified, and sufficient for the purposes of this document, the MTTF is the number of operating hours per year divided by the projected AFR [6] page 3.

2.1.3 Interpretation

Most manufacturers, like Seagate, specify the reliability of their hard drives by providing AFR and MTTF. As shown in the following, these values are always much better than the observations in practice.

The differences result from different definitions of a failure. A hard drive that is replaced by a customer due to peculiar behaviour can be considered fully functional by the manufacturer. Seagate, for example, notes that 43% of the returned hard drives are functional [6] page 2. Values between 15% and even 60% can be proven for other manufacturers [4] page 3.

In addition, the environmental conditions in practice and in the manufacturer-specific tests usually differ. In particular, higher temperature and humidity as well as increased utilization due to continuous read-write operations result in much higher failure rates in practice [6] page 2.

A useful interpretation of AFR and MTTF therefore only succeeds by considering the manufacturers' procedures. As Adrian Kingsley-Hughes [10] notes in his remark, the difference between observed and specified MTTFs can be found in their determination.

Simplified, the MTTF can therefore be calculated as follows:

MTTF = ([test period] × [number of HDDs]) / [number of failed HDDs]


With a test period of 1,000 hours (approx. 41 days), 1,000 HDDs and one failed HDD this results in:

MTTF = (1,000 hours × 1,000 HDDs) / 1 HDD = 1,000,000 hours

The projected annual failure rate AFR results from the reciprocal value:

AFR = ([number of failures] / ([MTTF] / [8,760 hours per year])) × 100%
AFR = (1 failure / ([1,000,000 hours] / [8,760 hours per year])) × 100%
AFR = (1 failure / 114.16 years) × 100%
AFR ≈ 0.88%

Conversely, one could calculate the MTTF based on an estimated AFR of 0.86%:

MTTF = (1,000 HDDs × 8,760 hours) / (1,000 HDDs × 0.86%)
MTTF = 8,760,000 HDD-hours / 8.6 HDDs
MTTF = 1,018,604 ≈ 1,000,000 hours (114 years)

What does an MTTF of 1,000,000 hours (114 years) express?

The assumption that an HDD can be operated for 114 years is absolutely wrong! This value indicates that the first failure would have to be expected after 1,000 hours (approx. 42 days) of operation if 1,000 hard drives were put into operation simultaneously.

Looking at it another way, one could say that 114 hard drives can be operated for a year and only one failure would have to be expected.
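As a quick check, the two relations above can be sketched in a few lines of Python (the function names are illustrative, not part of the paper):

```python
HOURS_PER_YEAR = 8760

def mttf_hours(test_hours, num_drives, failed_drives):
    # MTTF = (test period x number of HDDs) / number of failed HDDs
    return test_hours * num_drives / failed_drives

def afr_percent(mttf):
    # Projected annual failure rate: one year's operating hours
    # as a share of the MTTF.
    return HOURS_PER_YEAR / mttf * 100

mttf = mttf_hours(1_000, 1_000, 1)   # 1,000,000 hours
afr = afr_percent(mttf)              # approx. 0.88 %
```

The sketch makes the pitfall visible: the huge MTTF is a population statistic, not a per-drive lifetime.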

2.1.4 Practical Relevance

MTTF and AFR are projected expectation values and thus represent a consistent failure behaviour. Observations in practice, however, show that a relatively large proportion of HDDs does not fail as expected, but much sooner or later.

2.1.4.1 Failure Rate - A Google Inc. Study

In their detailed study on HDD failure trends, Pinheiro, Weber and Barroso (Google Inc.) evaluated more than 100,000 HDDs over a period of 9 months. The test group comprises HDDs with 80 to 400 GB, which were used in various systems of Google Inc. [4] page 3.

The authors note a trend towards uniformity with respect to the hard drive models in par-ticular age groups. This might influence the absolute AFR slightly, but does not change the observable trend.

The hard disks were divided into groups according to their age. The hard drives that failed at the corresponding age were then put into relation with the corresponding group [4] page 4.


Fig. 3 AFR and age groups (Google Inc.); x-axis: age in months (3 to 60), y-axis: AFR in % (0 to 10)

The failure rate was between 1.7% for HDDs that failed at an age of 1 year and 8.6% for HDDs that failed at an age of 3 years. The observed AFRs are consistently much higher than the values provided by the manufacturer.

Another interesting aspect is that a relatively high proportion of the HDDs failed very early: at 3 months (2.3%) or 6 months (approx. 1.9%). This result already shows the phenomenon of hard drives' "infant mortality", which is discussed in the following.

2.1.4.2 Failure Time - A Carnegie Mellon University Study

Schroeder and Gibson of Carnegie Mellon University found similar results in their study "Disk failures in the real world". They evaluated the data of about 100,000 hard drives that were used in several large-scale systems [6] page 1.

They also found a large deviation of the manufacturer's information (0.58% to 0.88%) from the observed failure rates (approx. 3% on average, up to 13% for single systems). The average failure rate of all hard drives was 3.4 times higher than the highest specified AFR of 0.88% [6] page 7. The failure rate was 6 times higher for systems with an operating time of less than 3 years and even 30 times higher for systems with an operating time of 5-8 years.

The authors assessed that one-dimensional values such as MTTF and AFR do not reproduce the observations. Therefore, they focused on a more detailed analysis of the temporal distribution of failures.

First, they pointed to the generally accepted theory of the bathtub curve. This curve shows the theoretical failure rates of hardware products throughout the product life cycle [6] page 8 and could permit the prediction of the failure trend of hard drives in large-scale systems.


Fig. 4 Bathtub curve (theoretical model): early failure, useful life, wear-out failure

According to this graph, an increased failure rate would be observable for hard disk drives in the first year, followed by a period with a failure rate at a constant and low level. Towards the end of the product life cycle, wear-out has a strong effect, which would again lead to a rapidly increasing failure rate.

This theoretical consideration was only partly confirmed by the trend in practice. The following graph of the monthly distribution of failures in one of the evaluated systems shows a relatively sharp delimitation of early failures (infant mortality).

Fig. 5 Distribution of failures; x-axis: age in months (10 to 60), y-axis: AFR in % (0 to 18)

It is, however, striking that the failure rate in the middle years does not settle at a relatively low value. The AFR begins to rise early and relatively constantly. This observation suggests that wear-out has an early impact and that the failure rate increases linearly up to the end of the product life cycle [6] page 9.


2.1.4.3 Failure Rate - A Dallmeier electronic Analysis

The findings of the Google Inc. study were also confirmed by an internal evaluation of Dallmeier electronic. Here, the failures of hard disks that had been put on the market between January 2000 and December 2005 (74,000 units) were evaluated. First, the monthly failures for Wavelet and MPEG systems were identified.

Fig. 6 Hard disk failures, start of 2000 to end of 2005 (Wavelet and MPEG systems); x-axis: months (12 to 60), y-axis: failures in % (0 to 2.0)

In contrast to the average MTTF of 550,000 hours specified by the manufacturer, an actually observed mean value of 220,000 hours (87% Wavelet with 247,000 hours, 13% MPEG with 111,000 hours) was calculated. This corresponds to an actually observed AFR of about 3.9%.

Based on this analysis, the availability of a system with 1,250 MPEG channels (on DIS-2 modules with a total of 2,500 hard disks) was considered. In this respect, 200 disk failures per year would mathematically be expected. For the replacement of a DIS-2 module with a faulty HDD, a maximum of 2 minutes was assumed. The availability results from:

Availability = ([total operating time - repair time] / [total operating time]) × 100%
Availability = ([8,760 hours × 60 minutes × 1,250 channels - 200 × 2 minutes] / [8,760 hours × 60 minutes × 1,250 channels]) × 100%
Availability = ([657,000,000 - 400] / 657,000,000) × 100%
Availability = 99.99994%

Referred to one channel of this system, an availability of 99.99994% results in an annual uptime of 525,599.68 minutes and a downtime of only 19.2 seconds (0.32 minutes).

How favourable this value is can be determined by using an example with an availability of only 99.5%. In this case, one would have to expect an uptime of only 522,972 minutes and a downtime of already 2,628 minutes (43.8 hours).
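The availability calculation above can be reproduced with a short Python sketch (variable names are illustrative, not from the paper):

```python
def availability_percent(total_minutes, downtime_minutes):
    # Availability = (total operating time - repair time) / total time
    return (total_minutes - downtime_minutes) / total_minutes * 100

MIN_PER_YEAR = 8760 * 60
total = MIN_PER_YEAR * 1250        # 657,000,000 channel-minutes per year
down = 200 * 2                     # 200 module swaps x 2 minutes each

a = availability_percent(total, down)        # approx. 99.99994 %
downtime_s_per_channel = down / 1250 * 60    # approx. 19.2 seconds per channel
```

The short repair time of the DIS-2 module (2 minutes) is what keeps the availability high despite 200 expected failures per year.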

As this example also illustrates, the MTTF specified by the manufacturers is substantially higher than the observed MTTF. However, a system with mature technology and intelligent design can be implemented with high availability.


2.1.5 Conclusion

The failure rate specified by the manufacturers with MTTF/MTBF or AFR is usually too low. A conservative planning should account for an on average at least 3 times higher AFR.

MTTF/MTBF provide no information on the distribution of failures. A conservative planning should always bear in mind infant mortality in the first months of operation and increased wear-out failures towards the end of the product's life cycle.

The low and constant failure rate in the middle years of operation implied by the theoretical bathtub curve cannot be confirmed in practice. A conservative planning should consider a linearly increasing failure rate due to wear-out, which already starts in the middle of the product's life cycle.

2.2 SMART

2.2.1 Definition

SMART (Self-Monitoring, Analysis and Reporting Technology) is an industry standard that is built into almost all hard disks. SMART enables the permanent monitoring of important parameters and, thus, early detection of an impending hard disk failure [11].

As a function, SMART must be enabled for each hard disk drive in the BIOS. The provided SMART values are evaluated by software that is installed in addition to the operating system. The software can display warnings when manufacturer-specific thresholds of individual parameters are exceeded. After prolonged usage, expected failures can be predicted as well [11].

2.2.2 Interpretation

SMART provides values for many parameters, but Pinheiro, Weber and Barroso considered only four as significant for a failure prediction in their study for Google Inc. [4] page 6 ff.

Scan Errors
Hard disks constantly test the surface of the magnetic discs and count the detected errors with a background function. A high number of errors is an indicator of a defective surface and, thus, of reduced reliability.

Reallocation Counts
When the hard disk finds a bad sector on the magnetic disk (during a read/write process or with a background function), the corresponding sector number is assigned to a new sector from the sector reserve. A high number of new assignments is an indicator of the wear of the magnetic disks.

Offline Reallocations
Offline Reallocations are a subset of the above described Reallocation Counts. Only new assignments of bad sectors that are found by a background function are counted. Bad sectors and assignments that are discovered during a read/write process are not considered.


Probational Counts
Hard disks can put suspicious sectors "on probation" until they either fail permanently and are reassigned or continue to work without a problem. A high number of probations can be regarded as a weak error indicator.

When and if the values of a parameter trigger an alert of the monitoring software depends on the software and on the specifications of the manufacturer. To illustrate this complex analysis, the greatly simplified and reduced example of a 250 GB SATA drive is used in the following [11].

The columns of the example are: Parameter, Value (normalized current measured value), Worst (worst value so far), Threshold (warning limit; the value should stay greater), Raw Value (actual measured value) and Remark.

Parameter            Value  Worst  Threshold  Raw Value  Remark
Reallocation Counts  100    100    005        55         55 sectors have been replaced with reserve sectors due to failure. The drive estimates that there is no problem (the value is still 100).
Seek Errors          100    100    067        0          Currently no read/write errors have occurred.

The normalized measured Value is counted down and triggers a warning upon reaching the Threshold limit. Although this example shows that 55 sectors have already been reallocated, the hard disk drive is still considered to be absolutely fine.
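The counting-down mechanism described above can be sketched as a minimal check (a strong simplification; real SMART evaluation is vendor- and software-specific, and the function name is illustrative):

```python
def smart_warning(value, threshold):
    # A normalized SMART value starts high (e.g. 100) and counts
    # down; a warning is due once it reaches the vendor threshold.
    return value <= threshold

# Table example: 55 sectors are already reallocated, yet the
# normalized value (100) is far above the threshold (5), so the
# drive still reports itself as healthy and no warning is raised.
healthy_looking = smart_warning(100, 5)   # False: no warning
```

This is exactly why a drive can accumulate substantial damage before any SMART alert appears.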

The failure detection by the SMART function of the hard disk drive is independent of the evaluation of the values by the SMART software, but decisive for a reliable failure forecast. If the detection does not work reliably, SMART may not be used as the sole tool for failure prediction of hard disk drives.

2.2.3 Practical Relevance

In their study for Google Inc., the authors evaluated SMART log files of more than 100,000 HDDs [4] page 6 ff. Nevertheless, they were not able to develop a meaningful statistical model for failure prediction [4] page 10.

Hereinafter, the possibility to create a simpler prediction model solely on the basis of SMART parameters was considered. But the analysis of the corresponding SMART values revealed that they were not able to achieve a sufficient accuracy.

Of all failed hard disk drives, 56% showed no detected error regarding all four strong SMART parameters. A forecast on this basis could therefore never predict more than half of the failures. Taking all other SMART parameters into consideration, 36% of the failed hard disk drives showed absolutely no errors (with no parameters!).


2.2.4 Conclusion

The conclusion of the authors is clear: "We conclude that it is unlikely that SMART data alone can be effectively used to build models that predict failures of individual drives." [4] page 10. SMART data alone cannot be used to predict the failure of single hard disk drives.

High values of a single parameter may cause an unnecessary exchange and therefore costs. Sudden failures without prior notification could lead to data loss due to a lack of backups. This can lead to doubts about the reliability of the overall system, though actually only the SMART function failed.

As an alternative, conservative maintenance based on the findings in section 2.1 remains. For systems with multiple hard disk drives, some protection may be achieved by using RAID systems, as described in the following.


3 Storage Technologies

Standard recording systems usually have one or several hard disk drives (JBOD) and are not able to compensate for the failure of an HDD. High-quality recording systems redundantly store the audio and video data on several HDDs using a special storage technology (RAID) and are usually able to compensate for the failure of an HDD without data loss. No matter what storage technology is used, it must always be taken into account that such a system can never be a substitute for a backup [5].

3.1 JBOD

JBOD (Just a Bunch Of Disks) refers to an undetermined number of hard disk drives (array) that are connected to a system (computer, recording system). They can be used by the operating system as individual drives or combined into one single logical drive [12].

Fig. 7 JBOD with data (A1 to Ax) distributed across HDD 1 to HDD 3

Because a JBOD lacks any redundancy, the expression is often used to distinguish a regular system from a RAID system.

3.1.1 Capacity and Costs

The net capacity of a JBOD array is as large as the sum of the capacities of the single hard disk drives. The net capacity thus corresponds to the total capacity of the system. A system of 8 hard disk drives with 2 TB each has a net capacity of 16 TB. A JBOD system is the most economical system.

3.1.2 Safety and Rebuild

The behaviour when a hard disk fails varies between the JBOD systems of different manufacturers. Dallmeier JBOD recording systems have the advantage of continuing the recording when an HDD fails. The recordings on the remaining disks can be evaluated and secured as before.

3.1.3 Conclusion

JBOD is a simple and very cost-efficient storage system. But when a single hard disk drive fails, its recordings are lost.


3.2 RAID 1

A RAID 1 system consists of a combination of at least two hard disk drives (RAID array). The same data is simultaneously stored on all disks (mirroring). A RAID 1 system offers full redundancy [13].

Fig. 8 RAID 1 with mirrored data (A1 to A4) on HDD 1 and HDD 2

3.2.1 Capacity and Costs

The net capacity of a RAID 1 array is as large as the smallest hard disk drive. The total capacity of a system is ideally halved by mirroring the data, while the storage costs are doubled. A system of 8 hard disk drives with 2 TB each has a net capacity of 8 TB.

3.2.2 Safety and Rebuild

If one of the mirrored HDDs fails, the recording is continued on the remaining HDD. After replacing the failed hard disk drive, a rebuild process is started and the data is mirrored to the new HDD.

The failure of the intact HDD during the replacement or rebuild of the defective HDD inevitably leads to the loss of the data (if not mirrored on more than 2 HDDs). Since only a few HDDs are involved in a RAID 1, the probability of a simultaneous failure is relatively low, but cannot be excluded.

3.2.3 Conclusion

RAID 1 is a simple and relatively robust storage subsystem. The storage costs are relatively high, since the total capacity is always halved.


3.3 RAID 5

A RAID 5 system consists of an array of at least three hard disk drives. The data is distributed and stored across all hard disk drives. In addition, parity data is generated and also distributed and stored.

Fig. 9 RAID 5 with distributed data (e.g. A1 to A3) and parity data (e.g. Ap) across HDD 1 to HDD 4

If a hard disk drive fails, the parity data allows for the reconstruction of the lost data in combination with the remaining data [14].

3.3.1 Capacity and Costs

The capacity of a RAID 5 array can be calculated as follows:

(number of hard drives - 1) × (capacity of the smallest hard drive)

A RAID 5 array of 8 hard disk drives with 2 TB each has a net capacity of: (8 - 1) × 2 TB = 14 TB

If a spare HDD is used (see below), the formula must be adjusted: (number of hard drives - 2) × (capacity of the smallest hard drive)

Unlike RAID 1, a RAID 5 system offers a better utilization of the total capacity of a system. Thus, redundant data storage can be realized at relatively low cost.

3.3.2 Safety and Rebuild

If an HDD fails, the parity data allows for the reconstruction of the lost data on the replacement HDD in combination with the remaining data. The rebuild process starts automatically if a replacement HDD (spare HDD) is already integrated in the system. If this is not the case, it is started after the replacement of the defective HDD.

If a further hard disk drive fails during the replacement or rebuild of the defective HDD, the rebuild process cannot be completed. This will lead to the loss of all data.

A RAID 5 usually consists of several HDDs. The failure probability of a further HDD increases proportionally with their number. It must also be noted that a rebuild may take several hours to days when using high-capacity HDDs. The critical period is relatively long.


In addition to the failure of another hard disk, an unrecoverable read error (URE) can also cause the failure of a rebuild process. If a single fraction of the remaining or parity data cannot be read, it cannot be rebuilt and the process usually stops.

The URE rate is the mean value for a hard disk drive model (not for a single drive) stated by the manufacturers. A typical value of 10^-14 per bit means that one unrecoverable read error occurs during the processing of 100,000,000,000,000 bits (approx. 12 TB).

Even with smaller RAID 5 systems (e.g. RAID 5 with 3 × 500 GB hard drives), the consideration of a URE rate of 10^-14 alone leads to a statistical failure of the rebuild process in 8% of the cases [17]. If larger HDDs are used, the occurrence of a URE is much more likely.

During the rebuild of a RAID array with 7 × 1 TB hard disk drives, the content of 6 HDDs (6 TB) has to be read. With a URE rate of 10^-14, the failure of the rebuild process would have to be expected in about 50% of the cases [15].
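Assuming independent bit errors, the probability of at least one URE during a rebuild can be sketched in Python (an illustrative model, not from the paper). It reproduces the 8% figure for the small array; the round 50% quoted for the 6 TB read corresponds to the simpler linear estimate 4.8 × 10^13 bits / 10^14 ≈ 48%, while the independence model gives a somewhat lower value:

```python
def rebuild_ure_probability(read_bytes, ure_rate_per_bit=1e-14):
    # Probability of at least one unrecoverable read error while
    # reading `read_bytes` during a rebuild, assuming independent
    # bit errors at the given per-bit rate.
    bits = read_bytes * 8
    return 1.0 - (1.0 - ure_rate_per_bit) ** bits

# RAID 5 of 3 x 500 GB: the rebuild reads the remaining 1 TB.
p_small = rebuild_ure_probability(1e12)   # approx. 0.08 (8 %)
# 7 x 1 TB array: 6 TB must be read.
p_large = rebuild_ure_probability(6e12)   # roughly 0.4
```

Either way, the message is the same: the larger the array, the more likely a single URE aborts the rebuild.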

3.3.3 Conclusion

RAID 5 is a storage technology that allows for redundant data management at relatively low cost. But the risk of data loss is relatively high.

3.4 RAID 6

A RAID 6 system consists of an array of at least four hard disk drives. The data is distributed and stored on all hard drives. In the same way as with RAID 5, parity data is generated, distributed and stored, but in this case twice.

Fig. 10 RAID 6 with distributed data (e.g. A1 to A3) and double parity data (e.g. Ap and Aq) across HDD 1 to HDD 5

The double parity data allows a RAID 6 to compensate for the failure of up to two hard disk drives [16].

3.4.1 Capacity and Costs

The capacity of a RAID 6 array can be calculated as follows:

(number of hard drives - 2) × (capacity of the smallest hard drive)

A RAID 6 array of 8 hard disk drives with 2 TB each has a net capacity of: (8 - 2) × 2 TB = 12 TB
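The capacity rules stated for JBOD, RAID 1, RAID 5 and RAID 6 can be collected in one small Python sketch (the function and its name are illustrative; the RAID 1 halving follows this paper's mirrored-pairs convention):

```python
def net_capacity_tb(level, num_drives, drive_tb, spare=0):
    # Net capacity rules as quoted in this paper; all members are
    # assumed to have the size of the smallest drive, and spare
    # drives hold no data.
    data_drives = num_drives - spare
    if level == "jbod":
        return data_drives * drive_tb          # sum of all drives
    if level == "raid1":
        return data_drives * drive_tb / 2      # mirroring halves capacity
    if level == "raid5":
        return (data_drives - 1) * drive_tb    # one drive's worth of parity
    if level == "raid6":
        return (data_drives - 2) * drive_tb    # two drives' worth of parity
    raise ValueError(f"unknown level: {level}")

# 8 x 2 TB: JBOD 16 TB, RAID 1 8 TB, RAID 5 14 TB, RAID 6 12 TB,
# RAID 5 with one spare 12 TB (the same as RAID 6).
```

The last comparison anticipates the point made below: RAID 6 and RAID 5 with spare HDD end up with identical net capacity.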


Unlike RAID 5 without a spare HDD, a RAID 6 system does not utilize the total capacity of the system quite as well. Nevertheless, redundant data storage can be realized at relatively low cost. However, when comparing a RAID 6 system with a RAID 5 system with spare HDD (most frequently used in practice), the capacity consideration has to be modified. In this case, both systems have an identical net capacity (12 TB in the chosen example) and can be realized with the same storage costs.

3.4.2 Safety and Rebuild

Generally, the problem of disk failures or unrecoverable read errors during the rebuild must be taken into account for a RAID 6 system as well. But the big advantage of a RAID 6 is its tolerance of two failures.

If an HDD fails, the parity data allows for the reconstruction of the lost data on the replacement HDD in combination with the remaining data. If a further hard disk drive fails during the replacement or rebuild of the defective HDD, this will not lead to the loss of the data. Simply put: the second set of parity data now allows for the reconstruction of the lost data on a second replacement HDD.

As with RAID 5 systems, the probability of another HDD failure increases with the number of HDDs and the duration of the rebuild, which can take longer due to the RAID 6 dual parity calculation.

For all RAID systems, the duration of the rebuild depends on a variety of factors. Crucial aspects are, of course, the number and capacity of the hard disks. Considering recording systems with comparable equipment, it also depends on the type of recording (SD or HD cameras, permanent or event-driven) and whether the recording is continued or discontinued.

A test series of Dallmeier electronic with comparable IPS systems at full capacity showed a rebuild duration of approx. 2 hours per TByte for RAID 5 as well as for RAID 6 systems. Though hardly relevant in practice, a slightly longer rebuild duration can nevertheless be found for RAID 6 systems. As a rule of thumb, 25% to 35% can be expected. Despite the slightly longer rebuild duration, RAID 6 has the distinct advantage of tolerating two hard disk failures. The risk of losing all data during a longer rebuild is thus much lower than with RAID 5.
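The rule of thumb above can be sketched as a small estimator (illustrative only; a RAID 6 surcharge of 30% is assumed here, within the quoted 25-35% range, and the function name is not from the paper):

```python
def rebuild_hours(capacity_tb, raid6=False, hours_per_tb=2.0):
    # Rule of thumb from the test series above: approx. 2 hours
    # per TB, with RAID 6 taking roughly 30 % longer (assumed
    # midpoint of the quoted 25-35 % range).
    t = capacity_tb * hours_per_tb
    return t * 1.3 if raid6 else t

raid5_time = rebuild_hours(2.0)              # 4.0 hours for a 2 TB drive
raid6_time = rebuild_hours(2.0, raid6=True)  # approx. 5.2 hours
```

Such an estimate helps when weighing the longer critical window of a RAID 6 rebuild against its tolerance of a second failure.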

3.4.3 Conclusion

RAID 6 is a safer storage technology that allows for redundant data management at still relatively low cost. Compared to a RAID 5 system, the risk of data loss is considerably lower. In general, RAID 6 can be considered the superior storage system.


4 Recommendations

Planning

1. Note that JBOD systems are inexpensive, but offer no failure protection for individual hard disk drives.

2. RAID 1 systems are simple and relatively robust, but cause high storage costs.

3. RAID 5 systems cause lower storage costs than RAID 1 systems and tolerate the failure of one hard disk drive.

4. Note that a RAID 6 system causes the same storage costs as a RAID 5 system with spare HDD, but tolerates the failure of two hard disk drives.

5. RAID 6 currently is the superior storage system and offers maximum safety at a reasonable cost.

6. Note that no RAID system provides the same security as a backup of crucial recordings.

Maintenance

1. Plan for a failure rate at least three times higher than specified by the manufacturers.

2. Take the lifetime of the hard disk drives specified by the manufacturers into account, and replace still functioning hard disk drives early enough.

3. Consider a linearly increasing failure rate which already starts in the middle of the product life cycle of the hard drives.

4. Keep in mind that failures increase in the first operating months and towards the end of the product life cycle of the hard drives.

5. Note that SMART data are not suitable for the failure prediction of individual hard disk drives.
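The first maintenance recommendation can be made concrete for spare-part planning. The snippet below is illustrative only; `vendor_afr` stands for whatever annualized failure rate the drive manufacturer specifies, and the example figures are assumptions, not measured values:

```python
def expected_failures_per_year(num_drives, vendor_afr, safety_factor=3.0):
    """Expected drive failures per year, scaling the vendor's AFR by the
    'at least three times higher' rule of thumb from recommendation 1."""
    return num_drives * vendor_afr * safety_factor

# Example: 100 drives with a specified AFR of 0.9 %:
print(expected_failures_per_year(100, 0.009))  # plan for ~2.7 failures/year
```

Stocking spares against the scaled figure rather than the data-sheet AFR keeps the array out of prolonged degraded operation when the optimistic vendor number is exceeded.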


5 References

[1] Various authors, Hard disk drive, http://en.wikipedia.org/wiki/Hard_disk_drive (2012.09.03)

[2] Various authors, RAID, http://en.wikipedia.org/wiki/RAID (2012.09.03)

[3] Various authors, RAID / Data backup, http://en.wikipedia.org/wiki/RAID (2012.09.03)

[4] Eduardo Pinheiro, Wolf-Dietrich Weber, Luiz André Barroso (Google Inc.), Failure Trends in a Large Disk Drive Population, in Proceedings of the 5th USENIX Conference on File and Storage Technologies (FAST'07), February 2007

[5] Christopher Negus, Thomas Weeks, The Mythos of RAID Backups, in Linux Troubleshooting Bible, page 100, Wiley Publishing Inc., 2004

[6] Bianca Schroeder, Garth A. Gibson (Computer Science Department, Carnegie Mellon University), Age-dependent replacement rates, in Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you? (5th USENIX Conference on File and Storage Technologies, San Jose, CA), February 2007

[7] Various authors, Mean Time Between Failures, http://de.wikipedia.org/wiki/Mean_Time_Between_Failures (2012.08.16)

[8] Various authors, Mean Time To Failures, http://de.wikipedia.org/wiki/MTTF (2012.08.16)

[9] Gerry Cole (Seagate Personal Storage Group), Estimating Drive Reliability in Desktop Computers, Longmont, Colorado, November 2000

[10] Adrian Kingsley-Hughes, Making sense of "mean time to failure" (MTTF), http://www.zdnet.com/blog/hardware/making-sense-of-mean-time-to-failure-mttf/310 (2012.08.20)

[11] Various authors, Self-Monitoring, Analysis and Reporting Technology, http://de.wikipedia.org/wiki/Self-Monitoring,_Analysis_and_Reporting_Technology (2012.08.22)

[12] Various authors, RAID / JBOD, http://de.wikipedia.org/wiki/RAID (2012.08.28)


[13] Various authors, RAID / RAID 1: Mirroring – Spiegelung, http://de.wikipedia.org/wiki/RAID (2012.08.16)

[14] Various authors, RAID / RAID 5: Leistung + Parität, Block-Level Striping mit verteilter Paritätsinformation, http://de.wikipedia.org/wiki/RAID (2012.08.16)

[15] Robin Harris, Why RAID 5 stops working in 2009, http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162 (2012.08.31)

[16] Various authors, RAID / RAID 6: Block-Level Striping mit doppelter verteilter Paritätsinformation, http://de.wikipedia.org/wiki/RAID (2012.08.16)

[17] Various authors, RAID / Statistische Fehlerrate bei großen Festplatten, http://de.wikipedia.org/wiki/RAID (2012.08.31)


Specifications subject to change without notice. Errors and misprints excepted. © 2012 Dallmeier electronic

Dallmeier electronic GmbH & Co.KG
Cranachweg 1
93051 Regensburg, Germany

Tel.: +49 (0) 941 87 00-0
Fax: +49 (0) 941 87 00-180
www.dallmeier.com