
QCT Ceph Solution – Design Consideration and Reference Architecture
Gary Lee, AVP, QCT

2

• Industry Trend and Customer Needs

• Ceph Architecture

• Technology

• Ceph Reference Architecture and QCT Solution

• Test Result

• QCT/Red Hat Ceph Whitepaper

AGENDA

3

Industry Trend and Customer Needs

4

5

• Structured Data -> Unstructured/Structured Data

• Data -> Big Data, Fast Data

• Data Processing -> Data Modeling -> Data Science

• IT -> DT

• Monolithic -> Microservice

Industry Trend

6

• Scalable Size
• Variable Type
• Data Longevity
• Distributed Location
• Versatile Workload
• Affordable Price
• Available Service
• Continuous Innovation
• Consistent Management
• Neutral Vendor

Customer Needs

7

Ceph Architecture

8

Ceph Storage Cluster (diagram): a scale-out cluster of nodes connected by a cluster network, each node running Ceph on Linux with its own CPU, memory, SSD, HDD, and NIC.

• Unified Storage: Object, Block, File
• Scale-out Cluster
• Open Source Software
• Open Commodity Hardware

9

End-to-end Data Path (diagram): an app or service on the Ceph client issues block I/O through RBD, object I/O through RADOSGW, or file I/O through Ceph FS; requests cross the public network into RADOS (with its cluster network), then pass through each OSD's file system I/O down to disk I/O.
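The object half of this path can also be driven directly from an application through librados. Below is a minimal sketch using the Python rados bindings; the conffile path, pool name, and object name are placeholder assumptions, and the pool is assumed to already exist.

```python
# Minimal librados sketch: one object written and read back over the path above
# (client -> public network -> RADOS -> OSD). Pool and object names are placeholders.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()                          # contact the monitors, fetch cluster/OSD maps
try:
    ioctx = cluster.open_ioctx('rbd')      # I/O context bound to an existing pool
    try:
        ioctx.write_full('demo-object', b'hello ceph')   # object write
        print(ioctx.read('demo-object'))                 # object read
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```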

10

Ceph Software Architecture (diagram): clients reach the Ceph monitors and N Ceph OSD nodes (QCT RCT or RCC) over a public network (e.g. 10GbE or 40GbE), while the OSD nodes use a separate cluster network (e.g. 10GbE or 40GbE) for replication and recovery traffic.

Ceph Hardware Architecture

12

Technology

13

• 2x Intel E5-2600 CPU
• 16x DDR4 Memory
• 12x 3.5" SAS/SATA HDD
• 4x SATA SSD + PCIe M.2
• 1x SATADOM
• 1x 1G/10G NIC
• BMC with 1G NIC
• 1x PCIe x8 Mezz Card
• 1x PCIe x8 SAS Controller
• 1U

QCT Ceph Storage Server D51PH-1ULH

14

• Mono/Dual Node
• 2x Intel E5-2600 CPU
• 16x DDR4 Memory
• 78x or 2x 35x SSD/HDD
• 1x 1G/10G NIC
• BMC with 1G NIC
• 1x PCIe x8 SAS Controller
• 1x PCIe x8 HHHL Card
• 1x PCIe x16 FHHL Card
• 4U

QCT Ceph Storage Server T21P-4U

15

• 1x Intel Xeon D SoC CPU
• 4x DDR4 Memory
• 12x SAS/SATA HDD
• 4x SATA SSD
• 2x SATA SSD for OS
• 1x 1G/10G NIC
• BMC with 1G NIC
• 1x PCIe x8 Mezz Card
• 1x PCIe x8 SAS Controller
• 1U

QCT Ceph Storage Server SD1Q-1ULH

16

• Standalone, without EC

• Standalone, with EC

• Hyper-converged, without EC

• High Core vs. High Frequency

• Sizing rule of thumb: 1x OSD ~ 0.3-0.5 CPU core + 2 GB RAM (see the sizing sketch after this slide)

CPU/Memory
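As a worked example of the rule of thumb above, the sketch below sizes CPU and RAM for a 12-OSD node (the D51PH-1ULH drive count); the helper itself is illustrative, only the per-OSD figures come from the slide.

```python
# OSD-node sizing from the rule of thumb: 1 OSD ~ 0.3-0.5 CPU core + 2 GB RAM.
def size_osd_node(num_osds, cores_per_osd=(0.3, 0.5), ram_gb_per_osd=2):
    core_range = (num_osds * cores_per_osd[0], num_osds * cores_per_osd[1])
    ram_gb = num_osds * ram_gb_per_osd
    return core_range, ram_gb

(core_min, core_max), ram_gb = size_osd_node(12)   # e.g. one 12-HDD 1U node
print(f"Ceph OSDs need about {core_min:.1f}-{core_max:.1f} cores and {ram_gb} GB RAM")
# -> about 3.6-6.0 cores and 24 GB RAM, matching the 12 x 2 GB figure in the test setup
```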

17

• SSD roles: Journal, Tier, File System Cache, Client Cache

• Journal device ratios (see the sketch below):
  – HDD : SSD (SATA/SAS) = 4~5 : 1
  – HDD : NVMe = 12~18 : 1

SSD/NVMe
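Applying these ratios to the drive counts used elsewhere in this deck gives the journal-device counts below; the ceiling-based rounding is an assumption of the sketch.

```python
import math

# Journal devices needed for a given HDD count, using the ratios above:
# one SATA/SAS SSD per ~4-5 HDDs, one NVMe device per ~12-18 HDDs.
def journal_devices(num_hdds, hdds_per_journal):
    return math.ceil(num_hdds / hdds_per_journal)

print(journal_devices(12, 4))    # 3 SATA/SAS SSDs for a 12-HDD node (12 HDD + 3 SSD config)
print(journal_devices(35, 18))   # 2 NVMe devices for a 35-HDD node (35 HDD + 2 PCIe SSD config)
```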

18

• 2x NVMe ~40Gb

• 4x NVMe ~100Gb

• 2x SATA SSD ~10Gb

• 1x SAS SSD ~10Gb

• (20~25)x HDD ~10Gb

• ~100x HDD ~40Gb

NIC: 10G/40G -> 25G/100G (see the bandwidth check below)
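These pairings follow from matching aggregate device throughput to link bandwidth. The rough check below reproduces them; the per-device sustained-throughput figures are assumptions for illustration, not numbers from the deck.

```python
# Rough check of NIC sizing: how much of a link a set of devices can fill.
# Per-device sustained throughput values below are assumptions, not from the deck.
MB_PER_GBIT = 1000 / 8          # ~125 MB/s per Gbit/s of link speed

def link_fill(devices, mb_per_device, link_gbit):
    aggregate = devices * mb_per_device          # MB/s the devices can deliver
    capacity = link_gbit * MB_PER_GBIT           # MB/s the link can carry
    return aggregate / capacity

# Assumed: ~55 MB/s per HDD of mixed Ceph traffic, ~500 MB/s per SATA SSD, ~2500 MB/s per NVMe.
print(f"{link_fill(25, 55, 10):.2f}")    # ~1.1 -> 20-25 HDDs roughly saturate 10 Gb
print(f"{link_fill(2, 500, 10):.2f}")    # ~0.8 -> 2 SATA SSDs sit close to a 10 Gb link
print(f"{link_fill(2, 2500, 40):.2f}")   # ~1.0 -> 2 NVMe devices need ~40 Gb
print(f"{link_fill(4, 2500, 100):.2f}")  # ~0.8 -> 4 NVMe devices call for 100 Gb
```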

19

• CPU Offload through RDMA/iWARP

• Erasure Coding Offload

• Allocate computation across different silicon areas

NIC: I/O Offloading

20

• Object Replication
  – 1 primary + 2 replicas (or more)
  – CRUSH allocation ruleset

• Erasure Coding
  – [k+m], e.g. 4+2, 8+3
  – Better data efficiency

• Usable capacity ratio: k/(k+m) for erasure coding vs. 1/(1+replicas) for replication (worked example below)

Erasure Coding vs. Replication
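The usable-capacity ratios work out as in the sketch below; the 560 TB raw-chassis figure is taken from the RCC-400 description later in this deck.

```python
# Usable fraction of raw capacity: k/(k+m) for erasure coding, 1/(1+replicas) for replication.
def ec_efficiency(k, m):
    return k / (k + m)

def replica_efficiency(extra_replicas):
    return 1 / (1 + extra_replicas)

print(f"EC 4+2: {ec_efficiency(4, 2):.0%} usable")      # 67%
print(f"EC 8+3: {ec_efficiency(8, 3):.0%} usable")      # 73%
print(f"3x rep: {replica_efficiency(2):.0%} usable")    # 33% (1 primary + 2 replicas)

# On a 560 TB raw chassis, EC 4:2 keeps ~373 TB usable vs. ~187 TB with 3x replication.
print(560 * ec_efficiency(4, 2), 560 * replica_efficiency(2))
```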

21

Size: Small / Medium / Large

• Throughput: transfer bandwidth, sequential R/W
• Capacity: cost/capacity, scalability
• IOPS: IOPS per 4K block, random R/W
• Latency: random R/W
• Open points in the matrix: hyper-converged (?), desktop virtualization, Hadoop (?)

Workload and Configuration

22

Red Hat Ceph

23

• Intel ISA-L

• Intel SPDK

• Intel CAS

• Mellanox Accelio Library

Vendor-specific Value-added Software

24

Ceph Reference Architecture and QCT Solution

• Trade-off among Technologies

• Scalable in Architecture

• Optimized for Workload

• Affordable as Expected

Design Principle

26

1. Needs for scale-out storage

2. Target workload

3. Access method

4. Storage capacity

5. Data protection methods

6. Fault domain risk tolerance

Design Considerations

27

Storage Workload (chart): workloads mapped against IOPS and MB/sec, ranging from transactional databases (OLTP) and data warehouses (OLAP) to big data, scientific/HPC block transfer, and audio/video streaming.

QCT QxStor Red Hat Ceph Storage Edition Portfolio: workload-driven, integrated software/hardware solution (capacity check sketched below)

Throughput optimized:
• SMALL (500TB*): QxStor RCT-200, 16x D51PH-1ULH (16U); per node 12x 8TB HDDs, 3x SSDs, 1x dual-port 10GbE; 3x replica
• MEDIUM (>1PB*): QxStor RCT-400, 6x T21P-4U/Dual (24U); per chassis 2x 35x 8TB HDDs, 2x 2x PCIe SSDs, 2x single-port 40GbE; 3x replica
• LARGE (>2PB*): QxStor RCT-400, 11x T21P-4U/Dual (44U); per chassis 2x 35x 8TB HDDs, 2x 2x PCIe SSDs, 2x single-port 40GbE; 3x replica

Cost/Capacity optimized:
• QxStor RCC-400, Nx T21P-4U/Dual; per chassis 2x 35x 8TB HDDs, 0x SSDs, 2x dual-port 10GbE; erasure coding 4:2

IOPS optimized: future direction for Small and Medium, NA for Large

* Usable storage capacity
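As a sanity check on the capacity tiers above, usable capacity can be derived from node count, drives per node, drive size, and protection overhead. A sketch follows; the helper is illustrative and the figures are taken from the table and the erasure-coding slide.

```python
# Approximate usable capacity (TB) for the QxStor configurations above.
def usable_tb(nodes, hdds_per_node, hdd_tb, usable_fraction):
    return nodes * hdds_per_node * hdd_tb * usable_fraction

THREE_REPLICA = 1 / 3     # 3x replication keeps one third of raw capacity
EC_4_2 = 4 / (4 + 2)      # erasure coding 4:2 keeps two thirds

print(usable_tb(16, 12, 8, THREE_REPLICA))       # RCT-200: 16 nodes x 12x 8TB    -> ~512 TB (SMALL, ~500TB)
print(usable_tb(6 * 2, 35, 8, THREE_REPLICA))    # RCT-400: 6 dual-node chassis   -> ~1120 TB (MEDIUM, >1PB)
print(usable_tb(11 * 2, 35, 8, THREE_REPLICA))   # RCT-400: 11 dual-node chassis  -> ~2053 TB (LARGE, >2PB)
print(usable_tb(2, 35, 8, EC_4_2))               # RCC-400: 560 TB raw per chassis -> ~373 TB usable
```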

Use case:

• Throughput-Optimized (RCT-200, RCT-400): block or object storage; 3x replication; video, audio, and image repositories and streaming media
  – RCT-200: densest 1U Ceph building block; best reliability with a smaller failure domain
  – RCT-400: scale at high scale (2x 280TB); obtain best throughput and density at once
• Cost/Capacity-Optimized (RCC-400): highest density, 560TB raw capacity per chassis, with greatest price/performance; typically object storage; erasure coding common for maximizing usable capacity; object archive

QCT QxStor Red Hat Ceph Storage Edition: co-engineered with the Red Hat Storage team to provide an optimized Ceph solution

30

Ceph Solution Deployment Using QCT QPT Bare Metal Provision Tool

31

Ceph Solution Deployment Using QCT QPT Bare Metal Provision Tool

32

QCT Solution Value Proposition

• Workload-driven

• Hardware/software pre-validated, pre-optimized, and pre-integrated

• Up and running in minutes

• Balance between production (stable) and innovation (upstream)

33

Test Result

Test topology (diagram): 10 client nodes (S2B) and 5 Ceph nodes (S2PH) connected by 10Gb links over a public network and a cluster network.

General Configuration
• 5 Ceph nodes (S2PH), each with 2x 10Gb links
• 10 client nodes (S2B), each with 2x 10Gb links
• Public network: balanced bandwidth between client nodes and Ceph nodes
• Cluster network: offloads traffic from the public network to improve performance

Option 1 (w/o SSD)
a. 12 OSDs per Ceph storage node
b. S2PH with 2x E5-2660
c. RAM: 128 GB

Option 2 (w/ SSD)
a. 12 OSDs / 3 SSDs per Ceph storage node
b. S2PH with 2x E5-2660
c. RAM: 12 (OSDs) x 2 GB = 24 GB

Testing Configuration (Throughput-Optimized)

Test topology (diagram): 8 client nodes (S2S) with 10Gb links and 2 Ceph nodes (S2P) with 40Gb links, connected over the public network.

General Configuration
• 2 Ceph nodes (S2P), each with 2x 10Gb links
• 8 client nodes (S2S), each with 2x 10Gb links
• Public network: balanced bandwidth between client nodes and Ceph nodes
• Cluster network: offloads traffic from the public network to improve performance

Option 1 (w/o SSD)
a. 35 OSDs per Ceph storage node
b. S2P with 2x E5-2660
c. RAM: 128 GB

Option 2 (w/ SSD)
a. 35 OSDs / 2 PCIe SSDs per Ceph storage node
b. S2P with 2x E5-2660
c. RAM: 128 GB

Testing Configuration (Capacity-Optimized)

36

Level            Component   Test Suite
Raw I/O          Disk        FIO
Network I/O      Network     iperf
Object API I/O   librados    radosbench
Object I/O       RGW         COSBench
Block I/O        RBD         librbdfio

CBT (Ceph Benchmarking Tool)
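CBT orchestrates these suites across the cluster; the snippet below is not CBT itself, just an illustration of driving two of the listed tools by hand. The scratch-file path, pool name, and runtimes are placeholders.

```python
# Hand-run equivalents of two benchmark levels from the table (illustrative only):
# raw disk I/O with fio, and object-API I/O with "rados bench".
import subprocess

# Raw I/O: 60 s of sequential 4M writes to a scratch file (path and size are placeholders).
subprocess.run([
    "fio", "--name=raw-write", "--filename=/tmp/cbt-scratch", "--size=1G",
    "--rw=write", "--bs=4M", "--runtime=60", "--time_based",
], check=True)

# Object API I/O: 60 s of write benchmarking against a test pool (pool name is a placeholder).
subprocess.run(["rados", "bench", "-p", "testpool", "60", "write"], check=True)
```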

37

Linear Scale Out

38

Linear Scale Up

39

Price, in terms of Performance

40

Price, in terms of Capacity

41

Protection Scheme

42

Cluster Network

43

QCT/Red Hat Ceph Whitepaper

44

http://www.qct.io/account/download/download?order_download_id=1022&dtype=Reference%20Architecture

QCT/Red Hat Ceph Solution Brief

https://www.redhat.com/en/files/resources/st-performance-sizing-guide-ceph-qct-inc0347490.pdf

http://www.qct.io/Solution/Software-Defined-Infrastructure/Storage-Virtualization/QCT-and-Red-Hat-Ceph-Storage-p365c225c226c230

QCT/Red Hat Ceph Reference Architecture

46

• The Red Hat Ceph Storage Test Drive lab in the QCT Solution Center provides a free hands-on experience: you can explore the features and simplicity of the product in real time.

• Concepts: Ceph feature and functional tests

• Lab exercises: Ceph basics; Ceph management (Calamari/CLI); Ceph object/block access

QCT to Offer TryCeph (Test Drive) Later

47

Remote access to QCT cloud solution centers
• Easy to test, anytime and anywhere
• No facilities or logistics needed
• Configurations: RCT-200 and the newest QCT solutions

QCT to Offer TryCeph (Test Drive) Later

48

• Ceph is an open architecture

• QCT, Red Hat and Intel collaborate to provide a
  – Workload-driven,
  – Pre-integrated,
  – Comprehensively tested, and
  – Well-optimized solution

• Red Hat: Open Software/Support Pioneer; Intel: Open Silicon/Technology Innovator; QCT: Open System/Solution Provider

• Together we provide the best

CONCLUSION

www.QuantaQCT.com

Thank you!

50

www.QCT.io


Looking for an innovative cloud solution?

Come to QCT, who else?
