Single Controller 4/8TB SSD
HakJune Oh, Director, Product Development
MOSAID Technologies Inc.
Flash Memory Summit 2013, Santa Clara, CA
Agenda
• Introduction
• How to make a high-capacity SSD
• 8TB HLSSD with Single Controller
• HLSSD vs. Conventional SSD
• Summary
HLNAND Introduction
• High Scalability – no practical limit on the number of NAND dies per channel with the ring architecture; enables a 16TB SSD in a 3.5” form factor
• Excellent Throughput – point-to-point connections provide excellent signal integrity; 1600MB/s is achievable
• Low Power – single controller, low-voltage IO, un-terminated bus, data truncation, and hierarchical MCPs; 50% lower power than a conventional SSD
• Reduced System Cost – single controller, small footprint, reduced networking infrastructure; lower data-center TCO
Scalability · Low Power · Performance
SSD Capacity
SSD Capacity = (# of Channels) × (# of Ways per Channel) × (NAND Die Density)
• Flash controllers with 16 or 32 channels are too complex, too big, and consume too much power
• The number of ways per channel is limited to 4–8 NAND dies because of RC loading in high-speed DDR NAND
• NAND die density is now 128Gb in 1ynm MLC
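The capacity equation above can be checked with a few lines of arithmetic (a sketch; the function name and the 1TB = 1024GB convention are ours, not the slide's):

```python
def ssd_capacity_tb(channels: int, ways_per_channel: int, die_density_gbit: int) -> float:
    """SSD capacity = channels x ways/channel x die density (Gbit -> TB)."""
    total_gbit = channels * ways_per_channel * die_density_gbit
    return total_gbit / 8 / 1024  # Gbit -> GByte -> TByte

# A conventional 8-channel controller limited to 8 ways of 128Gb MLC dies
# tops out at 1TB raw capacity:
print(ssd_capacity_tb(8, 8, 128))  # 1.0
```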
How to Make a High-Capacity SSD (Method 1)
• Make an SSD controller with 16 or 32 channels
– Chip size becomes too big → low yield
– High signal-lead count → signal-integrity problems
– Active and standby power consumption increase
– The number of ways per channel is still limited to 4–8
– Channel write bandwidth is limited by the number of ways per channel
How to Make a High-Capacity SSD (Method 2)
• Make a bridge-based SSD: PCIe-to-SAS/SATA RAID-on-Chip (RoC) + SAS/SATA SSDs
– SAS/SATA protocol overhead
– Medium IOPS, with a possible RoC bottleneck
– Higher power, plus the added RoC power
[Diagram: a RAID-on-Chip fronting multiple SATA controllers]
HLNAND Flash
[Diagram: HLNAND MCP – an HLNAND interface chip with x8 HyperLink In/Out ports and x8 internal buses connecting four banks (Bank 0–3), each a stack of four 64Gb/128Gb NAND LUNs (LUN 0–3), 16 dies total]
• DDR-533/667/800
• Up to 16-die stack
• Up to 128Gb per NAND die
• Up to 2Tb per package
• Supports SLC, MLC, eMLC, and TLC in 2x, 1x, and 1y nm process technologies
HLNAND Controller
• Dual-core with hardware accelerator
• 28nm process technology
• SATA3 & PCIe Gen 2 x4, NVMe
• 8-channel HyperLink
• Up to 16TB capacity
• Enhanced 2-dimensional data randomization
• Enhanced BCH ECC, up to 80 bits per 2KB
• Read refresh by bit-error monitoring
• End-to-end data protection
• Full-disk encryption with AES-128/256 ECB/CBC/CTR/XTS
• Smart power management for low- and peak-power control
• Package: 17mm × 17mm FCFBGA
HLNAND Architecture
• Point-to-point daisy-chain ring-topology channels
• Synchronous DDR signaling (DDR-266 to DDR-1600)
• Packet-protocol command & addressing
• Up to 128 HLNAND MCPs* per channel
[Diagram: HLSSD controller with a host interface (SATA3 or PCIe) driving eight HyperLink ring channels (CH-0 … CH-7), each a daisy chain of HLNAND devices]
* Maximum available number of NAND dies per channel = 2048 (based on 16-die stacks)
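The footnote's 2048-die ceiling is simply the per-channel MCP limit times the stack height (a quick check, assuming the slide's 16-die stacks):

```python
mcps_per_channel = 128  # maximum HLNAND MCPs on one HyperLink ring channel
dies_per_mcp = 16       # 16-die stack per MCP package
print(mcps_per_channel * dies_per_mcp)  # 2048 NAND dies per channel
```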
HLSSD
• Interface: native SATA3 & PCIe Gen 2 x4 (NVMe)
• Form factors: 2.5”, 3.5”, SFF-8639, FHHL PCIe, FHFL PCIe
• Capacities: 256GB – 16TB
• Customized firmware for targeted use cases (workloads)
• Up to 560MB/s R/W and up to 100K 4K random R/W IOPS over SATA3
• Up to 1.8GB/s R/W and up to 320K 4K random R/W IOPS over PCIe x4
[Photo: HLSSD in the SFF-8639 form factor]
HLSSD Scalability
• 128 HLNAND MCPs (64 MCPs per side)
• 8 channels
• 16 MCPs per channel
• 8TB with 8-die-stacked HLNAND MCPs (64Gb NAND dies)
• 16TB with 8-die-stacked HLNAND MCPs (128Gb NAND dies)
• PCIe Gen 2.1, x4 lanes, NVMe
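The 8TB and 16TB figures follow from the configuration above (a sketch using the same raw-capacity arithmetic; the helper name is ours):

```python
def hlssd_capacity_tb(mcps: int, dies_per_mcp: int, die_gbit: int) -> float:
    """Raw capacity in TB for a given MCP count, stack height and die density."""
    return mcps * dies_per_mcp * die_gbit / 8 / 1024  # Gbit -> GB -> TB

print(hlssd_capacity_tb(128, 8, 64))   # 8.0  -> the 8TB build
print(hlssd_capacity_tb(128, 8, 128))  # 16.0 -> the 16TB build
```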
HLSSD Scalability (Cont’d)
[Diagram: scaling from 1TB to 8TB – each channel (CH-0 … CH-7) addresses device addresses DA-0 … DA-7]
Power Comparison (Bridge-based 8TB SSD)
• 1 host PCIe-SAS/SATA RAID-on-Chip (RoC), plus
• 8 × 1TB SSDs, each having:
– 1 SAS/SATA controller and 8 channels, each with one 128GB package (8 NAND dies)
– 400MB/s channel bandwidth
– Vcc = 3.3V core, Vccq = 1.8V IO
– On-die termination
– 8-NAND-die load: DQ pad 4pF typ., input pad 4pF typ., PCB trace 1pF
– 24 controller pins per channel: 8 DQ, 8 CE#, 8 controls (CLE, ALE, WE#, RE, RE#, DQS, DQS#, WP#)
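The capacitance and voltage figures above translate into IO switching power via the first-order CMOS relation P = C·V²·f. A rough sketch comparing the multi-drop bus with an HLNAND point-to-point hop; the 0.5 activity factor and per-line treatment are illustrative assumptions, and ODT (termination) power is deliberately not modeled:

```python
# First-order CMOS IO switching power: P = C * V^2 * f_toggle per line.
# Pad/trace capacitances and IO voltages come from the slides; activity
# factor and transfer rate are assumptions. ODT power is NOT included.

def io_power_mw(c_pf: float, v_io: float, f_mt_s: float, lines: int,
                activity: float = 0.5) -> float:
    return c_pf * 1e-12 * v_io**2 * f_mt_s * 1e6 * lines * activity * 1e3

# Bridge-based multi-drop channel: each DQ line sees 8 NAND DQ pads (4pF
# each) plus 1pF of PCB trace, switching at Vccq = 1.8V, 400MT/s, x8 wide.
multi_drop = io_power_mw(8 * 4 + 1, 1.8, 400, lines=8)

# HLNAND point-to-point hop: one 4pF input pad plus 1pF trace at 1.2V IO.
point_to_point = io_power_mw(4 + 1, 1.2, 400, lines=8)

print(f"{multi_drop:.0f} mW vs {point_to_point:.0f} mW per channel")
```

Under these assumptions the multi-drop bus burns roughly an order of magnitude more IO switching power per channel, consistent with the low-power claim; real figures depend on data patterns, termination, and driver design.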
[Diagram: bridge-based 8TB SSD – a RAID-on-Chip aggregating 8 × 1TB SATA SSDs (SAS/SATA-0 … SAS/SATA-7); each 1TB SSD has 8 channels (CH0–CH7) with per-channel ECC, each channel loaded with a 128GB DDR-400 8-die NAND package of 128Gb dies]
Power Comparison (Native 8TB HLSSD)
• 8 channels, each having:
– 8 × 128GB HLNAND MCPs
– 400MB/s bandwidth
– 1.2V IO
• 24 controller pins per channel: 8 data in, 8 data out, 4 clocks (differential in/out), 4 controls
• Bridge-chip load: input pad 4pF, output pad 4pF, PCB trace 1pF
[Diagram: native 8TB HLSSD – a single controller with 8 channels (CH0–CH7), per-channel ECC, each channel a chain of 128GB DDR-400 8-die-stack HLNAND packages built from 128Gb dies]
8TB SSD Read Power (SATA3 BW Saturation)
[Stacked bar chart, 0–8000 mW: read-power breakdown at SATA3 bandwidth saturation for the conventional 8TB SSD vs. the 8TB HLSSD – components include NAND core read power, internal NAND IO power, interface-chip read power, ODT power, channel read IO power, other-NAND standby power, active SSD controller core/IO power, active DRAM power, standby controller power, and SATA3 switch IO and PHY/core power. Totals: 7.1W (conventional) vs. 2.9W (HLSSD)]
• The HLSSD consumes 60% less read power
• The conventional SSD spends significant power on its SATA channels
• The HLSSD achieves better power efficiency with reduced controller complexity
* Source: 2012 Flash Memory Summit, C. Lee and MOSAID’s Internal Data
8TB SSD Write Power (SATA3 BW Saturation)
[Stacked bar chart, 0–12000 mW: write-power breakdown at SATA3 bandwidth saturation for the conventional 8TB SSD vs. the 8TB HLSSD – components include NAND core write power, internal NAND IO power, interface-chip write power, ODT power, channel write IO power, other-NAND standby power, active SSD controller core/IO power, active DRAM power, standby controller power, and SATA3 switch IO and PHY/core power. Totals: 10.4W (conventional) vs. 6.1W (HLSSD)]
• The HLSSD consumes 40% less write power
• The conventional SSD spends significant power on its SATA channels
• The NAND core program operation dominates overall write power
* Source: 2012 Flash Memory Summit, C. Lee and MOSAID’s Internal Data
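The headline savings on this and the preceding read-power slide follow directly from the chart totals (totals read off the charts; the slides round the percentages):

```python
# Chart totals in mW: conventional 8TB SSD vs. 8TB HLSSD, read and write.
read_conv, read_hl = 7100, 2900
write_conv, write_hl = 10400, 6100

read_saving = (read_conv - read_hl) / read_conv      # ~0.59, shown as "60% less"
write_saving = (write_conv - write_hl) / write_conv  # ~0.41, shown as "40% less"
print(f"read: {read_saving:.0%} less, write: {write_saving:.0%} less")
```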
Summary
• Highly scalable HLNAND technology delivers affordable terabyte-class SSDs
• Native PCIe-controller-based HLSSDs deliver the best $/IOPS, IOPS/W, and IOPS/GB
• A fast, large storage pool minimizes the number of storage tiers – a full-flash array, or no need for more than two tiers of storage
See HLSSD at Booth 714
Scalability · Low Power · Performance