Features • Functions
and
How StorByte Eco • Flash™ Technology
Mitigates SSD “Write Cliff”
4 • 8 • 16 Terabyte
Flash Drives™
CAEN Engineering, Inc. 337 Highway 7 N, Oxford, MS 38655 714-456-0800 www.caeneng.com [email protected]
SBT-TE117-WP
White Paper: How the Hydra Technology Mitigates SSD Write Cliff www.StorByte.com
Introduction
In this White Paper we will introduce the reader to the SB-SA117-BR “Hydra” 16-port chip.
We will illustrate how we utilize our Hydra Technology to aggregate up to 16 Flash Memory
modules and report the aggregate capacity to the host. During use, SSDs experience a
condition that reduces performance as the drive fills with data. This condition,
known as “Write Cliff,” can significantly impact the performance of the SSD.
This White Paper will provide the reader with information on Hydra, the causes of Write Cliff, and
why Hydra can completely eliminate Write Cliff in over 99% of all use cases.
SBT-TE117-WP Rev 1.0
2016-2017 StorByte, Inc. All rights reserved. StorByte and the StorByte logo are trademarks of StorByte, Inc.
Table of Contents
Introduction 1
What is Hydra 2
SSD Form Factor Drive Example 3
What is Write Cliff 4
Why does Write Cliff Occur 5
NAND Garbage Collection High Level Overview 5
How StorByte’s Hydra Mitigates Write Cliff 7
Benchmarking 9
Test Setup 9
Drive Pre-Conditioning 12
Results 13
Single SSD MB/s vs Percent LBA Space Used: 13
Summary 17
Contact Information 17
About StorByte 17
Disclaimer 18
What is “Hydra” and Eco • Flash? -
The StorByte Eco • Flash drive is based on what is called “Hydra”. The Hydra device is an
integrated circuit solution which abstracts a number of SSD memory modules from the
host system. A key advantage of the Hydra technology is the ability to utilize commodity-based
Flash Memory modules while maximizing the advantage of a parallel set of RAID-based write
algorithms, providing significant performance advantages and industry-leading reliability
modeling. The StorByte Hydra ASIC abstracts the individual modules so that the host sees them as a single Flash storage device. The Hydra ASIC currently supports up to 16 independent memory modules.
No special software drivers are required for the host to communicate with the drive pack.
The Hydra is implemented using StorByte's proprietary technology to achieve higher capacity as
well as greater performance than the individual drives would provide. The Hydra implementation also allows for the elimination of Write Cliff effects.
Hydra can also be cascaded to create even larger multi-TB volumes for applications such
as “cold” data storage, completely eliminating Write Cliff effects on the volume. Write Cliff is an
SSD “garbage collection” phenomenon that severely impacts performance.
These impacts start to be seen as early as 20% fill on some SSDs and SSD RAID systems, but
more typically at 40% fill. The point at which these effects begin to impact performance depends
on the SSD controller type, firmware version, capacity, and the IO types used to fill the Logical
Blocks.
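The aggregation idea can be pictured as a simple sum of member capacities reported to the host as one device. The sketch below is illustrative only; the constant and function names are hypothetical and do not reflect Hydra's actual ASIC interface.

```python
# Illustrative sketch of the aggregation idea: a Hydra-style device
# reports the summed capacity of its member modules as one drive.
# Names here are hypothetical, not Hydra's real API.
MAX_MODULES = 16  # a single Hydra tier supports up to 16 modules

def aggregate_capacity_tb(module_capacities_tb):
    """Capacity the host would see for a pack of member modules (TB)."""
    if len(module_capacities_tb) > MAX_MODULES:
        raise ValueError("one Hydra tier aggregates at most 16 modules")
    return sum(module_capacities_tb)

# Sixteen 1 TB modules present as a single 16 TB device to the host.
print(aggregate_capacity_tb([1.0] * 16))  # -> 16.0
```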
Some of the primary advantages of the Hydra based technology:
1. 16x drive capacity per HBA slot.
2. RAID-based performance with individual/single Flash targets, amplifying overall
throughput and IOPS.
3. Up to 4x performance increase (IOPS and throughput) over each single memory module/drive. The performance advantage is especially pronounced when the drive is full or nearly full of user data.
4. Write Cliff effects are eliminated in over 99% of all use cases.
5. SSD Write Amplification is reduced, as the total host traffic from an SSD perspective is
reduced by 3/4 with Hydra.
6. Drive power management options with up to 75% power reduction per drive.
7. Significantly cooler individual drive operating temperatures.
8. On average, the Eco • Flash design provides a usable 15 times extended drive life based on the patented capabilities of the Hydra code and algorithms.
SSD Form Factor Drive Example -
In Figure 1 below, you see an example of a multi-port Eco • Flash design being used to aggregate
the capacity of sixteen NAND SSDs. In this example Hydra will report the drive size to the host as
the aggregated capacity. This can be done in form factor or out of form factor depending on the
application requirements. This allows the Eco • Flash architecture to provide all of the other
advantages of the Hydra technology while building larger, definable volumes than might
otherwise be possible based on available NAND.
Figure 1 - Eco • Flash / Hydra NAND memory module architecture (SBS 2 • 16): 16 drives / 256 individual NAND memory modules, commodity cost abstracted, active/active on all ports and LUNs, hot-swap enterprise-class architecture, 262 TB raw in 2U
What is a Write Cliff?
The term “Write Cliff” refers to a dramatic drop-off in performance that occurs in SSDs as the
drive is filled with data. As illustrated in Figure 4, a fresh, out of the box SSD will exhibit excellent
performance, most likely in the 400-500 MB/s range.
As the drive is filled with data, however, this performance begins to significantly degrade.
At 10% fill, the drive has a throughput of 475 MB/s. At 20% fill we are already
starting to see some performance degradation. By the time the drive reaches 70% fill,
throughput has dropped by 40% to ~285 MB/s, and at 90% fill it is down to ~75 MB/s.
This rapid fall-off in performance is referred to as the “Write Cliff.”
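The arithmetic behind the figures quoted above can be checked directly; each drop is taken relative to the ~475 MB/s reading at 10% fill:

```python
# Percentage drop of each quoted measurement relative to the
# ~475 MB/s throughput observed at 10% fill.
BASELINE_MBPS = 475.0

def drop_percent(current_mbps):
    return round(100.0 * (BASELINE_MBPS - current_mbps) / BASELINE_MBPS)

print(drop_percent(285))  # -> 40 (the ~40% drop seen by 70% fill)
print(drop_percent(75))   # -> 84 (the drop seen at 90% fill)
```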
Figure 4 - Single Drive Write Cliff: 128 kB random write MB/s vs. percent full
Why Does the Write Cliff Occur?
The primary reason that Write Cliff occurs is the requirement for the Garbage Collection
algorithm to run and “recycle” locations within the NAND memory, handling the data erasure and
movement required for the operation of the SSD. The exact point at which the Write Cliff starts to
appear, and its extent, depend on several drive-specific items: the specific SSD's controller,
the efficiency of the garbage collection algorithm, whether compression is used by the drive, and
the specific NAND, to name a few. Since NAND Garbage Collection has the primary impact, it
is worthwhile to spend a couple of paragraphs discussing how Garbage Collection works and
why it has to work harder as the drive achieves a higher percent of fill.
NAND Garbage Collection High Level Overview -
NAND Flash devices are composed of one or more NAND Die. A NAND Die is often referred to as
a LUN (logical unit number). A LUN is composed of many erasable segments called Blocks
(or Erase Blocks). Each Block is composed of individually programmable segments called Pages.
A Page is the smallest addressable unit for read and program operations. Blocks must be erased
before the pages which make up the block can be programmed. After a block is erased, the pages
within the block can be programmed once and read many times. The pages within a Block must
be programmed sequentially from lowest order page (page offset 0) to highest order page (page
offset pages_per_block - 1). Page sizes vary from 4kB to 16kB in size, historically increasing in
size with newer NAND technology nodes. Block sizes vary from 128 pages to 1024 pages. LUN
sizes can vary between ~1000 blocks to ~4000 blocks.
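Putting the geometry above together gives a quick back-of-envelope capacity calculation. The specific values below are examples chosen inside the stated ranges, not the dimensions of any particular part:

```python
# Back-of-envelope NAND capacity from the geometry described above.
# Example values only, chosen within the stated ranges.
page_kb = 16           # pages run 4 kB to 16 kB
pages_per_block = 512  # blocks run 128 to 1024 pages
blocks_per_lun = 2048  # LUNs run ~1000 to ~4000 blocks

block_mb = page_kb * pages_per_block / 1024   # erase-block size in MB
lun_gb = block_mb * blocks_per_lun / 1024     # die (LUN) size in GB
print(block_mb, lun_gb)  # -> 8.0 16.0
```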
Due to the nature of NAND pages being physically restricted to a single program operation per
block erasure, coupled with the fact that the erasable unit is made up of many programmable
units, it is necessary to implement a Flash Translation Layer (FTL) to abstract the NAND physical
space from a logically addressable address space. This allows a high level application to
interface to the NAND array at a logical level as opposed to knowing the physical details of the
underlying media.
A typical implementation of this FTL is made up of two major functions: logical to physical
address translation (LPT), and garbage collection (GC). It is the responsibility of the LPT layer to
allow a high level application to address the NAND array in a logical manner, while the underlying
GC layer manages the recycling of data blocks.
A page can be in one of four states at any given time: “valid”, “clean”, “dirty”, and “bad.” Valid
means that the page contains an up to date logical segment of data. Clean means that the page is
ready to be programmed with data (i.e. freshly erased). Dirty means that the page contains an out
of date logical segment of data. Bad means that the page is inoperable.
When a host application requests data from the NAND array, LPT must find the corresponding
“valid” page(s) using a mapping table and return that data. When a host application wants to
store new data, LPT must find a “clean” page, program it with the new data, change that page’s
status to “valid”, and add the page’s location to the mapping table. When a host application wants
to overwrite existing data, LPT must find a “clean” page, program the “clean” page with new
data, find the corresponding “valid” page, mark that page “dirty”, and finally update the mapping
table with the new location. If any page fails to program, LPT must mark that page as “bad”
(it won’t ever be used again).
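The read, write, and overwrite behavior just described can be sketched as a small teaching model. Pages are tracked in a flat state list and the mapping table maps logical pages to physical ones; this is an illustrative model under our own naming, not actual SSD firmware (the failed-program-to-“bad” path is omitted for brevity):

```python
# Minimal sketch of the LPT behavior described above (teaching model).
VALID, CLEAN, DIRTY, BAD = "valid", "clean", "dirty", "bad"

class LPT:
    def __init__(self, num_pages):
        self.state = [CLEAN] * num_pages   # per-physical-page state
        self.data = [None] * num_pages
        self.map = {}                      # logical page -> physical page

    def _find_clean(self):
        for p, s in enumerate(self.state):
            if s == CLEAN:
                return p
        raise RuntimeError("no clean pages; garbage collection needed")

    def read(self, logical):
        return self.data[self.map[logical]]

    def write(self, logical, payload):
        p = self._find_clean()
        old = self.map.get(logical)        # overwrite: remember old copy
        self.data[p] = payload
        self.state[p] = VALID
        self.map[logical] = p
        if old is not None:
            self.state[old] = DIRTY        # stale copy awaits GC

lpt = LPT(num_pages=4)
lpt.write(0, "a")
lpt.write(0, "b")              # overwriting leaves the old page dirty
print(lpt.read(0), lpt.state)  # -> b ['dirty', 'valid', 'clean', 'clean']
```

Note that the overwrite consumed a second physical page even though the host rewrote only one logical page; this is exactly why GC must later reclaim the dirty copy.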
It is the responsibility of the GC layer to search for, as well as to produce, blocks full of dirty
pages to erase. During this process, erased blocks become clean pages for LPT to work with.
Oftentimes, a block will be partially dirty (some pages valid, some dirty) and cannot be erased
without losing application data. The GC layer must read physical pages from the partially dirty
block and copy the valid pages to a different area on the NAND array which has a "clean" page
status. Once all the "valid" physical pages have been read from a given block and moved to an
alternative block and the original page statuses updated to "dirty" the block can be erased by the
GC layer. Once the block has been erased, all pages can be marked with "clean" status and used
for new page programs. This process is referred to as block recycling, or block merging. If any
block fails to erase, all the pages within the block are marked “bad.”
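The block-recycling step can be sketched in the same spirit: valid pages in a partially dirty block are copied to a clean destination, the source copies become dirty, and the whole source block is erased back to clean. This is an illustrative model, not a real GC implementation:

```python
# Sketch of block recycling: copy survivors out, then erase the block.
VALID, CLEAN, DIRTY = "valid", "clean", "dirty"

def recycle(source_block, dest_block):
    """Move valid pages out of source_block, then erase it to clean."""
    moved = 0
    for i, (state, data) in enumerate(source_block):
        if state == VALID:
            dest_block.append((VALID, data))   # copy the survivor
            source_block[i] = (DIRTY, None)    # old copy is now stale
            moved += 1
    # the whole block is now dirty, so it is safe to erase
    for i in range(len(source_block)):
        source_block[i] = (CLEAN, None)
    return moved

src = [(VALID, "a"), (DIRTY, None), (VALID, "b"), (DIRTY, None)]
dst = []
print(recycle(src, dst), dst, src[0])
# two valid pages are moved; src is entirely clean afterwards
```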
As the logical space of an SSD becomes more and more occupied with data, the number of
“valid”, “dirty”, and “bad” pages grows while the number of “clean” pages shrinks. Eventually,
the average block within the NAND array will become “dirty” in a proportion equal to the
over-provisioning percentage of the drive. This is significant because when this begins to occur,
the share of NAND activity originated by the host decreases, and the remaining activity
consists of garbage collection. When this occurs, host performance decreases and the
average power consumed per host IO increases.
This decrease in performance is drastic as the LBA percentage reaches 100%.
This abrupt drop in performance is often referred to as the “Write Cliff.”
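A common first-order model, stated here as our own simplifying assumption rather than StorByte data, makes the role of over-provisioning concrete: if the average recycled block is dirty in proportion to the over-provisioning fraction p, each recycled block yields p clean pages while 1-p valid pages must be rewritten, giving a write amplification of roughly 1/p.

```python
# First-order steady-state model (an assumption, not measured data):
# write amplification ~ 1 / over-provisioning fraction.
def write_amplification(op_fraction):
    if not 0 < op_fraction <= 1:
        raise ValueError("over-provisioning fraction must be in (0, 1]")
    return 1.0 / op_fraction

for p in (0.07, 0.28):
    print(p, round(write_amplification(p), 1))
# 7% OP gives WA ~14.3; raising OP to 28% cuts WA to ~3.6
```

Under this model, anything that effectively raises the clean-page reserve, whether physical over-provisioning or a lighter per-drive workload, directly reduces the GC work competing with host IO.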
How StorByte’s Hydra Mitigates Write Cliff -
The DRACO (Data Remapping and Command Optimization) engine is a hardware data path within
Hydra which maintains up to a 4x (per tier of Hydra) performance increase over a single drive,
limited only by the specific interface protocol. By distributing the host’s workload across all
member SSDs, Hydra’s DRACO engine is able to eliminate or mitigate the Write Cliff phenomenon.
The distribution of workload results in smaller and/or fewer host-originated IOs on each member
drive’s NAND array when compared to a single drive without Hydra. This allows each member
drive more cycles to run the garbage collection process, which increases the average
percentage of “clean” pages, leading to an increase in member drive write performance; on top of
this, Hydra provides up to a 4x performance boost to the host application.
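The load-distribution idea can be sketched with a simple round-robin policy: each member drive sees only a fraction of the host stream, leaving idle cycles for its own garbage collection. The round-robin scheme below is purely illustrative; Hydra's actual scheduling is proprietary.

```python
# Illustrative fan-out of host IOs across member drives (round-robin;
# Hydra's real scheduling policy is proprietary and not modeled here).
def distribute(host_ios, num_members):
    members = [[] for _ in range(num_members)]
    for i, io in enumerate(host_ios):
        members[i % num_members].append(io)
    return members

ios = list(range(32))
packs = distribute(ios, 8)
print([len(p) for p in packs])  # each member handles 1/8 of the stream
```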
Additionally, Hydra can be configured to virtually increase the member drives’ over-provisioning, leading to a higher “clean” page percentage, and therefore a performance increase.
Figure 5 shows a comparison of performance vs. percent capacity written for both a single drive
and for a Hydra based array of multiples of the same drive tested in the single drive case. As you
can see, the single drive experiences performance degradation as the drive begins to fill.
The Hydra based array continues at maximum performance for a much longer period of time, and
when degradation ultimately begins to occur, the steady-state performance is much better than in
the single drive case.
Figure 5 - Hydra Drive Write Cliff vs. Single SSD: 128 kB random write MB/s vs. percent full for a single drive and Hydra 2x, 4x, 8x, and 16x drive packs
StorByte’s Eco • Flash - Based on a 16x Model -
The Eco • Flash Hydra offers a systems architect two ways to eliminate the Write Cliff entirely:
1. Utilizing Hydra’s capability to virtualize the over-provisioning of the drive system.
2. Cascading Hydra chips, which allows further workload distribution among more
member SSDs.
The StorByte Eco • Flash architecture is based on a Hydra 16x utilization model. Each individual
Eco • Flash drive incorporates 16 memory modules.
In addition, each StorByte Eco • Flash based storage system maximizes the opportunity for
efficiency by implementing the multi-Hydra cascading feature. Based on this implementation,
StorByte’s Eco • Flash RAID / storage systems eliminate Write Cliff in over 99% of all use cases.
Figure 6 - Eco • Flash vs. Single SSD: 128 kB random write MB/s vs. percent full for a single drive and an Eco • Flash 16x Hydra drive pack
Benchmarking -
Test Setup
In order to perform the drive benchmarking and Write Cliff experiments, we utilized the
IPB-SA118A-BD and IPB-SA149A-BD reference boards. The power cables were removed
in the images below for clarity of setup. Tests were run using an off-the-shelf
Intel Z87 mini-ITX host and two separate off-the-shelf SSD models. The SSD used in the 1:4 and
1:8 testing was an MLC NAND device. The SSD used in the 1:16 testing was a TLC NAND device.
Figure 7 - Lab Test Setup: 1:4 drive
using a StorByte reference board
In order to achieve a 1:8 drive configuration, we added a multi-port Hydra board in front of two
4-port Hydra boards. Shown below:
Figure 8 - Lab Test Setup: 1:8 drive cascade
using StorByte reference boards
In order to achieve a 1:16 drive configuration, we added a multi-port Hydra board in front of four
4-port Hydra boards. Since the StorByte Hydra devices are not standard port multipliers,
we are able to cascade devices in multiple tiers in order to achieve 1:n device scaling.
Shown below:
Figure 9 - Lab Test Setup: 1:16 drive cascade
using StorByte reference boards
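Because the cascade is tiered rather than a flat port multiplier, the reachable drive count is the product of the per-tier fan-outs: the 1:8 setup used a 2-way front tier over 4-port boards, and the 1:16 setup a 4-way front tier over four 4-port boards. A quick sketch of that arithmetic:

```python
# Drive count of a tiered cascade = product of per-tier fan-outs.
def drives_reachable(fanouts_per_tier):
    total = 1
    for fanout in fanouts_per_tier:
        total *= fanout
    return total

print(drives_reachable([2, 4]))  # -> 8  (the 1:8 cascade above)
print(drives_reachable([4, 4]))  # -> 16 (the 1:16 cascade above)
```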
Drive Preconditioning -
An automated benchmarking system was used in order to produce reproducible results which do not rely on human oversight during testing.
1. The firmware of the drive is updated via the Linux hdparm utility (if applicable to the drive).
2. A secure erase is sent using the Linux hdparm utility.
3. The SATA cable is moved from the Linux host to the Windows host.
4. A .bat script is then called and completes several steps:
a. Using a Windows hdparm utility, the identify output of multiple devices is checked
to find the location of the drive that is to be used with the Windows dd utility.
While parsing the location, the max size of the drive is saved for later.
b. The script then calculates the 10%-of-LBA-space value for the drive and converts
that calculated value to a count to be used in the Windows dd utility, based on a
bs=64M option.
c. The Windows dd utility is then called and uses data from its own built-in /dev/random¹.
d. Once the dd is finished, an IOmeter configuration file (.icf) is generated that
limits the max size of the drive that can be tested in IOmeter.
e. IOmeter is then started from the command line, meaning that when it finishes, it will
close, allowing the script to continue.
f. After IOmeter finishes, the test repeats steps b through e, but with an incrementing
prefill value. The options typically used are “increments of 10%,” “increments of
10% until 50%, then 5%,” and “increments of 10% until 50%, then 1%.” Result naming
is handled by the script to keep the results separate for each prefill value.
5. Once the script gets to 100%, the drive is done being tested.
¹ It is critically important to use random data for this testing to ensure accurate results
when a drive may utilize an internal compression engine.
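The prefill bookkeeping in steps 4b and 4f can be sketched as follows. Only the arithmetic is modeled; the actual hdparm/dd/IOmeter invocations are omitted, and the 512 GB drive size is just an example:

```python
# Sketch of the script's prefill arithmetic: convert a target prefill
# percentage into a dd block count at bs=64M, and walk the
# "10% until 50%, then 5%" schedule described above.
BS_BYTES = 64 * 1024 * 1024  # dd bs=64M

def dd_count(drive_bytes, prefill_percent):
    return int(drive_bytes * prefill_percent / 100 // BS_BYTES)

def prefill_schedule(coarse=10, fine=5, switch_at=50):
    pct, out = 0, []
    while pct < 100:
        pct += coarse if pct < switch_at else fine
        out.append(min(pct, 100))
    return out

drive = 512 * 10**9  # example drive size: 512 GB
print(prefill_schedule())   # 10..50 in 10% steps, then 55..100 in 5%
print(dd_count(drive, 10))  # 64 MB blocks of random data for 10% fill
```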
Results - 01 / Single SSD - 4kB - 128kB random IO
The legends on the graphs below show the percentage of the LBA space which has been written on the
member drive (how full the drive is). It is clear that with a single SSD, random write
performance drops quickly as the drive becomes more and more full. This trend holds from
4 kB IO sizes to 128 kB IO sizes.
Once the drive is 100% full, performance drops off by nearly 60%.
Single SSD MB/s vs Percent LBA Space Used:
Figure 10 - Single MLC SATA SSD MB/s vs. Percent LBA Space Used
(4kB-128kB random IO)
Results - 02 / Hydra 8x model / 4kB - 128kB random IO
In Figure 11, you will see the consistent performance resulting from the Hydra based volume.
This volume uses one Eco • Flash drive cascading eight of the same SSD modules used to
generate Figure 10.
Figure 11 - Hydra 8x SSD Cascade MB/s vs. Percent LBA Space Used (4kB - 128kB random IO)
Results - 03 / 4k random write Single Drive vs. 8x Hydra Cascade
In Figure 12 below, you will see the consistent random write performance of the
Hydra based volume as compared to a single SSD. The single MLC SSD's performance is relatively
stable until hitting 90% full, when its 4k random write IOPS drop by 60%.
The Hydra 8x MLC cascade exhibits no performance drop under these full conditions.
Figure 12 - 4kB Random Write IOPS comparison using the identical memory module
between one Eco • Flash drive utilizing a Hydra 8x Cascade model
and the same single NAND module across capacity fill
Results - 04 / 4k random write Single Drive vs. 16x Hydra Cascade
In Figure 13 below, you will see the consistent random write performance of the
Hydra 1:16 cascade based volume as compared to a single TLC NAND SSD.
The single SSD's performance begins to drop and become inconsistent at only 30% capacity.
At capacities greater than 95%, the performance of the single TLC NAND SSD drops by 90%!
The Hydra 1:16 cascade based SSD volume maintains consistent performance, deviating by only 10% across the capacity range, using the same member SSD as the single drive.
Figure 13 - 4kB Random Write IOPS comparison between Hydra 16x SATA SSD Cascade
and single TLC NAND SSD across capacity fill
Summary -
In summary, as the percentage of used capacity increases on an SSD, write performance
decreases due to an increase in internal garbage collection activity. This phenomenon is called
the "write cliff." We explained the underlying mechanisms by which the SSD write cliff
can be attributed to SSD controller garbage collection/recycling, and how this phenomenon can be
controlled or mitigated by using the StorByte Hydra based Eco • Flash Technology.
For more information on the Eco • Flash / Hydra technology please contact us at:
www.caeneng.com
+1 (714) 456-0888
337 HIGHWAY 7 N, OXFORD, MS 38655 USA
About StorByte -
StorByte is a Washington, DC based, globally distributed design and manufacturing company
specializing in the data storage and data management industry. Born of necessity,
recognizing the industry challenge to support an industrial-class, 100% write-on-write,
affordable product, StorByte engineered and produced the Eco • Flash drive architecture.
Today StorByte has leveraged its industrial-grade, enterprise-class, patented, code-based
feature set to provide an abstracted, intelligent approach to flash memory management.
Utilizing commodity-based memory modules, the StorByte Eco • Flash architecture provides the
industry's first bare-box price point while exceeding enterprise-class speeds, feature
capabilities, and reliability ratings, with extended feature sets including definable power
management and extending the life of the drives by over an order of magnitude.
In addition to the Eco • Flash family of drives, our customers are provided with the flexibility to
select from a portfolio of cost-correct, industry-defining chassis designs supporting open
standards implementations, including multi-tier, multi-location data management services.
Whether you are designing a new installation or want to integrate the StorByte family of
products into an existing architecture, the StorByte product portfolio will provide the peace of
mind to complete your objective simply, reliably, and affordably.
Disclaimer -
Information provided in this document is solely for the use and selection of StorByte Products.
Information disclosed hereunder is provided "AS IS." StorByte reserves the right to make changes
without notice to any product, product specification, capability, and/or design described
herein to improve reliability, functionality, or design. Information in this document is subject to
change without notice. You may not reproduce, modify, distribute, or publicly display
any information herein without prior written consent.
Flex Frame Design Flexibility: 4 • 8 • 16 Terabyte Drives
SBS 1 • 8: 1U, 8 drives
SBS 2 • 16: 2U, 16 drives
SBS 4 • 48: 4U, 48 drives