Memory & Storage Day
New generations of storage excellence are emerging…
Frank Ober, Business Title


Page 1: Memory & Storage Day New generations of storage excellence ... · Memory & Storage Day ... 1 Source: Enterprise Strategy Group. Software and workloads used in performance tests may

Memory & Storage Day

New generations of storage excellence are emerging…
Frank Ober, Business Title

Page 2:

Legal Disclaimer

All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit www.intel.com/benchmarks.

Cost reduction scenarios described are intended as examples of how a given Intel-based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction.

Intel does not control or audit third-party data. You should review this content, consult other sources, and confirm whether referenced data are accurate.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

2

Page 3:

agenda

3

Page 4:

Design now for the future of storage performance

[Chart] IOPS/TB for working data (K-IOPS per TB capacity), 70/30 read/write random mixed workload, 2008 → Today → 2024→: HDDs, SATA SSDs, NVMe NAND SSDs, and Intel® Optane™ DC SSDs.

Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.

4

Page 5:

Aerospike Certification Test (ACT) Results

[Chart] ACT failure rate @ 300K TPS, Alderstream vs. current Intel® SSDs (lower is better). X-axis: device read latency (microseconds): 32, 64, 128, 256, 512, 1024, 2048. Devices: Intel® SSD DC P4610, Intel® Optane™ SSD DC P4800X, Alderstream (future product). Alderstream shows a lower fail rate vs. 3D NAND.

See Appendix A for complete system configurations. For more complete information about performance and benchmark results, visit www.intel.com/benchmarks. Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.

[Chart] Maximum TPS at <5% ACT failure rate (higher is better): Intel® SSD DC P4610 (300K), Intel® Optane™ SSD DC P4800X (435K), Alderstream (future product).

5

Page 6:

Early AlderStream RocksDB performance

See slide 91 for complete system configurations. For more complete information about performance and benchmark results, visit www.intel.com/benchmarks. Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.

[Chart] RocksDB Random Performance1 (96GB DRAM, 300GB database), 90% Get / 10% Put: Gets/sec (higher performance is better) and P99 latency (lower is better) vs. number of threads (4, 6, 8, 12, 16, 20) for Alderstream (future product), Intel® Optane™ SSD DC P4800X, and Intel® SSD DC P4610.

Alderstream builds substantially on the gains seen with the first-generation Intel® Optane™ SSD DC P4800X, delivering big RocksDB gains.

6

Page 7:

Activate data center storage

[Diagram] Three application-layer approaches:

• Accelerating (EX: Ceph, MySQL, MS-SQL): store data about data. Admin data (metadata, logging, temp, journal) goes to an Intel® Optane™ SSD, while the actual data (the full A–Z data store) stays on NAND.

• Caching (EX: VMware vSAN, Microsoft Azure Stack HCI, Cisco HyperFlex): temporarily copy or hold the hottest data. A write cache (write copies) and a read cache (read copies) sit on Optane in front of the all-data store.

• Tiering (EX: Dell EMC PowerMax, IBM Spectrum Scale, Nutanix): intelligently store the hottest data. Intelligent data placement keeps hot data on Optane and warm data on NAND.

Intel® Optane™ SSD DC P4800X: 1.5TB capacity used for ratio illustrative purposes. Intel 3D NAND SSD: 8TB capacity used for ratio illustrative purposes.

7

Page 8:

Memory & Storage Day

Ceph (Nautilus) performance
and the future of Ceph

Page 9:

Re-architecting Ceph block storage

[Diagram] Two 2nd-generation Ceph node configurations. In each, an Intel® Optane™ DC SSD holds the metadata (RocksDB) and write-ahead log (WAL) partition and provides CAS caching of OSDs [optional], while capacity storage holds the Object Storage Daemon (OSD) partition(s):

• Capacity storage on Intel® 3D NAND DC SSDs (6 drives shown)

• Capacity storage on Intel® QLC 3D NAND SSDs (6 drives shown)

9
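The partition layout above can be sketched with ceph-volume, which lets BlueStore place its RocksDB metadata and WAL on a separate fast device. This is a minimal sketch, not the deck's tested procedure; the device names are placeholders for your hardware (capacity SSD as data, Optane partitions for DB/WAL).

```shell
# Create a BlueStore OSD whose data lives on a capacity SSD while
# the RocksDB metadata and write-ahead log live on Optane partitions.
# Device paths are hypothetical examples; adjust for your system.
ceph-volume lvm create --bluestore \
    --data /dev/sdb \
    --block.db /dev/nvme0n1p1 \
    --block.wal /dev/nvme0n1p2
```

Repeating this per capacity drive, with one Optane SSD carved into DB/WAL partitions for several OSDs, matches the node layout in the diagram.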

Page 10:

The Future of Ceph is Crimson

10

Ceph / Seastar and Crimson OSD: https://www.youtube.com/watch?v=X4Pz-dqrwi8

Keynote from Sage Weil on Ceph: https://www.youtube.com/watch?v=MVJ2eFMBVSI

Docs for Crimson Ceph: https://docs.ceph.com/docs/master/dev/crimson/

The future of a high-performance OSD: https://static.sched.com/hosted_files/cephalocon2019/7f/2019-cephalocon-crimson-osd.pdf

Page 11:

Memory & Storage Day

VMware Architecture with Optane
Expanding memory, accelerating storage

Page 12:

BEST PRACTICES

Multiple disk groups create smaller failure domains.

Multiple disk groups provide better performance.

Implement balanced configurations – e.g., do not configure hosts with different numbers of disks per host and/or different numbers of disk groups.

More disk groups means higher throughput.

[Diagram] vSphere vSAN: a single HBA serving five disk groups, each containing SSDs.

12

Page 13:

Intel® Optane™ SSD – VMware vSAN & Dell EMC VxRail

[Chart] 100% sequential 64KB writes: throughput (MB/sec, 0 to 3,000) over time (sec, 0 to 4,250) for Dell EMC VxRail* with NAND SSDs.

For more complete information about performance and benchmark results, visit www.intel.com/benchmarks. 1 Source: Enterprise Strategy Group. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Configuration details available from Enterprise Strategy Group: https://www.esg-global.com/validation/esg-technical-validation-dell-emc-vxrail-with-intel-xeon-scalable-processors-and-intel-optane-ssds. Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate.

Throughput drops 39% during SAS cache aggressive write buffer destage

13

Page 14:

On-Line Transaction Processing (OLTP) Workload – HammerDB

[Chart] Dell EMC VxRail with NAND SSDs vs. Dell EMC VxRail with Intel® Optane™ SSD DC P4800X: new orders per minute 1,378,798 vs. 2,215,489 (+61%); transactions per minute 6,342,419 vs. 10,190,789.

Intel® Optane™ SSD – VMware vSAN & Dell EMC VxRail

14

For more complete information about performance and benchmark results, visit www.intel.com/benchmarks. 1 Source: Enterprise Strategy Group. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Configuration details available from Enterprise Strategy Group: https://www.esg-global.com/validation/esg-technical-validation-dell-emc-vxrail-with-intel-xeon-scalable-processors-and-intel-optane-ssds. Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate.

Page 15:

Memory & Storage Day

Vhost and SPDK
Break through virtual machine bottlenecks

Page 16:

Virtual Machine Acceleration (vhost)

• Provides dynamic block device provisioning

• Decreases guest latency and I/O overhead

• Works with KVM/QEMU

• Increases VM density by reducing infrastructure tax

• Supports direct-attached and disaggregated usage models

Build a better local drive to break through VM bottlenecks

Intel Datacenter SSDs · NVMe over Fabrics · Intel® QLC 3D NAND SSD

16
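As a rough sketch of the flow described above, an SPDK vhost target can expose an NVMe SSD to a QEMU/KVM guest as a vhost-user-blk device. This is illustrative only: RPC names vary between SPDK releases, and the paths, core mask, and PCIe address below are placeholder assumptions.

```shell
# Start the SPDK vhost target (socket dir /var/tmp, cores 0-1).
./build/bin/vhost -S /var/tmp -m 0x3 &

# Claim a local NVMe SSD as an SPDK block device (address is an example).
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0

# Expose namespace 1 of that SSD as a vhost-user-blk controller.
./scripts/rpc.py vhost_create_blk_controller vhost.0 Nvme0n1

# Guest side: QEMU attaches via the vhost-user socket, e.g.
#   -chardev socket,id=char0,path=/var/tmp/vhost.0
#   -device vhost-user-blk-pci,chardev=char0
```

Because the I/O path bypasses the host kernel block layer, the guest sees lower latency and less per-I/O overhead, which is the "infrastructure tax" reduction the slide refers to.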

Page 17:

Software

Intel does software RAID
Let us show you how

Page 18:

Intel® Virtual RAID on CPU (VROC) for effective Optane RAID

18

Intel® VROC allows Intel® Optane™ DC SSDs to be connected directly to the CPU, enabling RAID configurations without HBA performance interference.

[Diagram] Legacy processor: traditional RAID HBA on an x8 PCIe uplink (potential bottleneck). Intel® Xeon® Scalable processor: Intel® Optane™ DC SSDs direct-attached, with an x4 PCIe uplink per SSD.

Break through hardware bottlenecks with direct-connect virtual RAID

Page 19:

RAID0: scale performance

RAID1: protect data

RAID10: do both

[Chart] Intel® Optane™ SSD DC P4800X RAID Performance1: IOPS (0 to 2,000,000) by configuration: 1-drive pass-thru, 3-drive RAID0, 2-drive RAID1, 4-drive RAID10.

RESOURCES

• Intel VROC Product Brief: www.intel.com/vroc

• Intel VROC User Guides: Intel VROC User Guide

• General VROC Support Doc: Intel VROC Support Page

For more complete information about performance and benchmark results, visit www.intel.com/benchmarks. 1. Source – Intel. See configurations in Appendix C.

19

Workloads shown: 100% writes, 70/30 mixed R/W, 100% reads.
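Under the hood, Intel VROC is managed with standard mdadm using IMSM metadata: first a container is created, then a RAID volume inside it. The following is a hedged sketch of that workflow; the device names and volume names are placeholder assumptions, and your drives must sit on VROC-enabled CPU lanes.

```shell
# Create an IMSM container spanning four NVMe (e.g. Optane) SSDs.
# Device paths are examples; substitute your own.
mdadm -C /dev/md/imsm0 -e imsm -n 4 /dev/nvme[0-3]n1

# Create a 4-drive RAID10 volume inside the container
# (use -l 0 for RAID0 scaling or -l 1 with -n 2 for RAID1 protection).
mdadm -C /dev/md/vol0 /dev/md/imsm0 -n 4 -l 10

# Watch the array assemble/resync.
cat /proc/mdstat
```

Because the SSDs are attached directly to CPU PCIe lanes, no HBA sits in the data path, which is the point of the diagram on the previous slide.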

Page 20:

Memory & Storage Day

Demos and Questions…

Page 21:

Appendix A – Aerospike Certification Test results

System configuration: Tested September 9, 2019. Source – Intel. Server model: 2x 2nd Generation Intel® Xeon® Scalable Processor Gold 6254 @ 3.10 GHz, 36 cores, hyper-threading enabled, Intel® Server Board S2600WFT, 256GB installed DDR4 (8 x 32 GB DIMMs @ 2666 MHz). Operating system: CentOS Linux release 7.6.1810 (Core), kernel 5.2.11-1.el7.elrepo.x86_64. The Aerospike Certification Tool (ACT) is an open source database simulation tool that uses a 66/33 read/write mix and qualifies whether a drive can service read requests of different sizes in under 1 millisecond; no more than 5% of reads may exceed 1 millisecond. ACT version 5.2 used. Data reviewed by Aerospike Inc. https://www.aerospike.com/docs/operations/plan/ssd/ssd_certification.html

21

Page 22:

Appendix B – RocksDB test results

System configuration: Tested September 19, 2019. Source – Intel. Server model: 2x 2nd Generation Intel® Xeon® Scalable Processor Gold 6254 @ 3.10 GHz, 36 cores, hyper-threading disabled, Intel® Server Board S2600WFT, 192GB installed DDR4 (12 x 16 GB DIMMs @ 2133 MHz). Operating system: CentOS Linux release 7.6.1810 (Core), kernel 5.2.11-1.el7.elrepo.x86_64. RocksDB v5.9.2 using the embedded db_bench benchmarking tool with a uniform random 90:10 read/write workload.

22

Page 23:

Appendix C – Performance, protection, or both with Intel® VROC

System configuration: Intel® Server Board S2600WFT family, Intel® Xeon® Series Processors, 24 cores @ 2.1GHz, RAM 192GB, BIOS release 7/09/2018, BIOS version: SE5C620.86B.00.01.0014.070920180847

OS: RedHat* Linux 7.5, kernel 3.10.0-862.20.2.el7.x86_64, mdadm v4.0 (2018-04-03). Intel build: Intel VROC 6.0 release, Intel® VROC Pre-OS version 5.4.0.1039, 4x Intel® SSD DC P4800X Series 1.5TB, drive firmware: E2010435, Retimer

BIOS settings: Hyper-threading enabled, Package C-State set to C6 (non-retention state) and processor C6 enabled, P-States set to default, SpeedStep and Turbo enabled

Workload generator: FIO 3.13, RANDOM: workers 8, IO depth 128, no filesystem, CPU affinitized

Performance results are based on testing as of June 19, 2019 and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure.
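The workload generator parameters above can be expressed as an fio jobfile. This is a reconstruction sketch, not the exact file Intel used: the target device path is a placeholder, and runtime/mix values beyond those stated (8 workers, IO depth 128, raw device, random I/O) are assumptions.

```ini
; Sketch of a jobfile approximating the Appendix C workload.
; /dev/md/vol0 is a hypothetical VROC RAID volume.
[global]
ioengine=libaio
direct=1
numjobs=8
iodepth=128
bs=4k
rw=randrw
rwmixread=70
time_based
runtime=300

[vroc-random]
filename=/dev/md/vol0
```

Changing `rwmixread` to 0 or 100 reproduces the 100%-write and 100%-read points from the chart on slide 19.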

23

Page 24:

backup

24

Page 25:

Open CAS
Tools to easily solve caching needs

25

Page 26:

Open CAS Linux

26

For more information: https://github.com/Open-CAS/ocf

Accelerates storage performance by caching specified data classes.

Features and Benefits

• Smarter caching with I/O classification

• Incredible performance

• Optimized for Intel® Optane™ DC SSDs

• Improved application SLA

• Enterprise validation and support available

• Robust roadmap

• No application changes

Use Case Examples

• Application Acceleration

• Virtualization

• SDS Acceleration

Page 27:

27

How Does Open CAS Linux Work?

• Installed as a loadable kernel module

• Configuration via a user-space admin tool

• Deployed at the block layer

• Built on the Open CAS Framework (OCF) caching library as the policy engine

• Caches “hot data” on a fast SSD

• Operating modes: Write-Through, Write-Back, Write-Around2

[Diagram] Application → Virtual File System → logical file system (ext3, ext4, xfs)1 → block layer, where Open CAS Linux (built on the Open CAS Framework) sits above the device driver and the physical devices: a cache miss goes to the HDD, a cache hit to the SSD. The application and admin tool run in user space; everything from the VFS down is kernel space.

1. ext3 supports up to 16TB volume sizes. 2. See the Open CAS Project website at https://github.com/Open-CAS/ocf for supported modes.

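The user-space admin tool mentioned above is casadm. A minimal pairing of a fast cache device with a slower core device might look like the following sketch; the device paths are placeholder assumptions, and the cache mode should match your durability needs.

```shell
# Start cache instance 1 on an Optane SSD in write-back mode
# (-c wt for write-through, -c wa for write-around).
casadm -S -i 1 -d /dev/nvme0n1 -c wb

# Attach the slower backing (core) device to that cache instance.
casadm -A -i 1 -d /dev/sdb

# List caches and cores; I/O should now target the exported
# cached device (e.g. /dev/cas1-1) instead of /dev/sdb.
casadm -L
```

Because the caching happens at the block layer, applications and filesystems on top need no changes, as the features slide notes.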

Page 28:

Open CAS Linux with Intel® Optane™ Technology1

28

[Chart] Results (all figures are “up to”):

• Application Acceleration: 1.8X faster completion time with Hadoop; 3X more transactions per second with MySQL

• Virtual Machine Acceleration: 4.1X QEMU VM performance

• Distributed Storage Acceleration: 3.1X kernel block storage and 3.4X SPDK, with lower latency and more IOPS; 1.6X improved QoS at P99 with Ceph

1. Source – Intel. See System Configurations in Appendix D

Use Open CAS to achieve breakthrough storage system caching performance

Page 29:

SWAP AS MEMORY EXTENDER
Get lower-cost virtual memory

29

Page 30:

Linux Swap questionnaire

30

1. How much memory is installed on your servers? What is the daily average utilization of it?

2. What increase in 4K read/write latency for memory accesses, when memory is augmented by swap on Optane SSDs, can you tolerate in your target infrastructure? Example: “I can tolerate a P99 latency of xx us (microseconds) when I run a test.”

3. What performance characteristics of a swapping system do you consider and measure for use in production?

4. What is your memory overcommit target (DRAM:OPTANE)?

5. Have you done any endurance calculations for your targeted deployment?

6. What environment(s) do you target for memory oversubscription?

7. What cost ratio makes you consider Optane SSD for memory oversubscription?

8. Do you differentiate any characteristics in your memory or storage offerings (i.e. service levels) to end users and what are they?

Page 31:

Linux SWAP as a low cost MEMORY EXTENDER

31

[Chart] pmbench scalability with mixed read/write (r=50): pages accessed per second (0 to 1,400,000) vs. number of threads (1, 2, 4, 8, 16, 32), comparing P4800X as swap with P4510 as swap. Higher is better.

Solution Blueprint available at https://github.com/fxober/LinuxSwap

For more complete information about performance and benchmark results, visit www.intel.com/benchmarks. 1. Source – Intel. Test configuration available in the Solution Blueprint at https://github.com/fxober/LinuxSwap/blob/master/Optane-SSD-Linux-Swap-Solutions-Blueprint-340608-001US.pdf
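Enabling an SSD partition as swap is standard Linux administration; the Solution Blueprint covers tuning in depth. The following is a minimal sketch with a placeholder device name; the priority value is an arbitrary example.

```shell
# Format an Optane SSD partition as swap (device path is an example).
mkswap /dev/nvme0n1p1

# Enable it with a higher priority than any existing swap space.
swapon -p 10 /dev/nvme0n1p1

# Confirm the new swap device and its priority.
swapon --show

# To persist across reboots, add a line like this to /etc/fstab:
#   /dev/nvme0n1p1  none  swap  sw,pri=10  0  0
```

With low-latency swap in place, the DRAM:Optane overcommit ratio from the questionnaire becomes a tunable cost lever rather than a hard memory ceiling.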

Page 32:

32

Appendix D – Open CAS Linux

• Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit www.intel.com/benchmarks. Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.

• 1. MySQL: Source: Intel Tested. System configuration/test details: Server model: Intel® Wolf Pass S2600WFT (R2208WFTZS); MB: H48104-710; CPU: Dual Intel® Xeon® Gold 6154 CPU @ 3.00GHz, 18C/36T, 10.4GT/s, 24.75 MB L3 Cache, Turbo, HT (150W) DDR4-2666; Mem: 24x16GB RDIMM (384GB), 2133MT/s, DDR4-17000; NICs: Embedded Intel® X722 10GbE LAN; BIOS Version: SE5C620.86B.00.01.0014.070920180847; OS Version: CentOS 7.5; Kernel Version: 3.10.0-862.11.6.el7.x86_64; S4510 baseline config: 1x Intel® SSD DC S4510 960GB; S4510 / Intel® Optane™ P4800X / Intel® CAS config: 1x Intel® SSD DC S4510 960GB and 1x Intel® Optane SSD DC P4800X 375GB for caching; Workload: Sysbench 1.0.16 using MySQL 8.0.12 as database, size=100G, test block size 16kb, time-based one-hour OLTP read-write conditioning (block size 128kb), time-based 20-min test, Pareto random distribution (pareto-h constant = 0.8, meaning 80/20 distribution). S4510 baseline system obtained 639 TPS and 50.07 ms latency on average. S4510 / Intel® Optane™ P4800X / Intel® CAS system obtained 1953 TPS and 16.39 ms on average. Performance results are based on testing as of February 19, 2019 and may not reflect the publicly available security updates. See configuration disclosure for details. No product can be absolutely secure.

• 2. FIO: Source: Intel Tested. System configuration: Server model: Intel Wolf Pass S2600WFT (R2208WFTZS); MB: H48104-710; CPU: Dual Intel(R) Xeon(R) Platinum 8160T CPU @ 2.10GHz, 24C/48T, 10.4GT/s, 33 MB L3 Cache, Turbo, HT (150W) DDR4-2666; Mem: 4x32GB RDIMM (128GB), 2400MT/s, DDR4-19200; NICs: Intel 10-Gigabit x540-AT2 (Rev. 01) and Embedded Intel X722 10GbE LAN; TLC config: 4x Intel SSD DC P4510 8TB in 4-disk RAID5 (4DR5) for capacity storage; Optane+QLC config: 4x Intel SSD DC P4320 8TB in 4DR5 for capacity storage and 1x Intel Optane SSD DC P4800X 375GB for caching; Intel CAS software setup: Version 03.05.00.07200700, P4800X used for write-back caching, cache size is 1.875% of the 760GB fio workload (~14GB). Workload: FIO, size=760GiB, 3000 files (individual file size ~256MiB), block size 4kb, time-based 20-min test (20-min runtime, 30-second ramp time), zipf random distribution (theta = 1.1), 4K 70/30 rw mix test, 4K 100% random read test, 4K 100% random write test (3 trials each).

• 3. SPDK: Source: Intel tested. System configuration: Server model: SYS-6029U-TR4T; MB: X11DPU; CPU: Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz, 28C/56T, 38.5 MB L3 Cache, Turbo, HT (205W); Mem: 8x32GB Hynix HMA84GR7AFR4N-VK DIMMs (256GB), DDR4-2666; NICs: 4x Embedded Intel X710/X557 10GbE LAN; BIOS Version: 1.10; Operating System: Red Hat Enterprise Linux Server release 7.4; Kernel Version: 4.20.12-1.el7. TLC config: 3x Intel SSD DC P4510 (8TB) in RAID5; QLC Config: 2x Intel® Optane™ SSD DC P4800X (375GB) in RAID0 for caching, 3x Intel SSD DC P4320 (7.68TB) in RAID5 for backend storage; Software Configuration: SPDK Version 19.04 beta, OCF Version 19.03 beta, FIO Version 3.3. P4800X RAID0 used for write-back caching, cache size is 3% of the 1500GB fio workload ( ~45GB). Workload: FIO, 3 trials after one single 2 hr ramp time, each trial with: size=1500GiB, block size 16KB, zipf random distribution (theta = 1.1) , random readwrites, 70/30 rw mix, 8 IO depth, 4 jobs.

Page 33:

33

Appendix D (2) – Open CAS Linux

• 4. QEMU: Source: Intel tested. System configuration: Server model: SYS-6029U-TR4T; MB: X11DPU; CPU: Intel® Xeon® Platinum 8180 CPU @ 2.50GHz, 28C/56T, 38.5 MB L3 Cache, Turbo, HT (205W); Mem: 8x32GB Hynix HMA84GR7AFR4N-VK DIMMs (256GB), DDR4-2666; NICs: 4x Embedded Intel X710/X557 10GbE LAN; BIOS Version: 1.10; Host Operating System: Red Hat Enterprise Linux Server release 7.5; Kernel Version: 3.10.0-862.11.6.el7.x86_64; Guest Operating System: CentOS Generic Cloud release 7.5; Kernel Version: 3.10.0-862.3.2.el7.x86_64; QEMU release v2.1.2; FIO release v3.1; TLC config: (1) Intel® SSD DC P4510 8TB with each VM given a 15GiB partition from this drive; QLC config: (1) Intel® SSD DC P4320 7.68TB and (1) Intel® Optane™ SSD DC P4800X 1.5TB with each VM given a 15GiB partition from the P4320 and a 12GiB partition from the P4800X; Intel CAS software setup: Version 03.07.00, Intel CAS for QEMU patch version 3.7.0, P4800X partition used for caching, P4320 partition used as a core drive, caching mode was set to write-through; Workload: FIO running inside guest VM, result is the average of 10 trials on preconditioned drives and after cache is warmed, each trial with: 15GiB test size, block size 4KB, time-based 50-min test with 10-min ramp runtime, uniform random distribution, random readwrites, 70/30 rw mix, 1 I/O depth, 1 job; Cache warming procedure: 100% random reads from full drive capacity. Performance results are based on testing as of February 20, 2019 and may not reflect the publicly available security updates. See configuration disclosure for details. No product can be absolutely secure.

• 5. Hadoop: Source: Intel tested. 4. QEMU (slide 8): Source: Intel tested. System configuration: Server model: SYS-6029U-TR4T; MB: X11DPU; CPU: Intel® Xeon® Platinum 8180 CPU @ 2.50GHz, 28C/56T, 38.5 MB L3 Cache, Turbo, HT (205W); Mem: 8x32GB Hynix HMA84GR7AFR4N-VK DIMMs (256GB), DDR4-2666; NICs: 4x Embedded Intel X710/X557 10GbE LAN; BIOS Version: 1.10; Host Operating System: Red Hat Enterprise Linux Server release 7.5; Kernel Version: 3.10.0-862.11.6.el7.x86_64; Guest Operating System: CentOS Generic Cloud release 7.5; Kernel Version: 3.10.0-862.3.2.el7.x86_64; QEMU release v2.1.2; FIO release v3.1; TLC config: (1) Intel® SSD DC P4510 8TB with each VM given a 15GiB partition from this drive; QLC config: (1) Intel® SSD DC P4320 7.68TB and (1) Intel® Optane™ SSD DC P4800X 1.5TB with each VM given a 15GiB partition from the P4320 and a 12GiB partition from the P4800X; Intel CAS software setup: Version 03.07.00, Intel CAS for QEMU patch version 3.7.0, P4800X partition used for caching, P4320 partition used as a core drive, caching mode was set to write-through; Workload: FIO running inside guest VM, result is the average of 10 trials on preconditioned drives and after cache is warmed, each trial with: 15GiB test size, block size 4KB, time-based 50-min test with 10-min ramp runtime, uniform random distribution, random readwrites, 70/30 rw mix, 1 I/O depth, 1 job; Cache warming procedure: 100% random reads from full drive capacity. Performance results are based on testing as of February 20, 2019 and may not reflect the publicly available security updates. See configuration disclosure for details. No product can be absolutely secure.

• 6. Ceph: Source: Intel tested. : System configuration: 10-Node Ceph Cluster: 5x OSD, 1x Mon/Client, 4x Client. 5x OSD, 5x Client/Mon Nodes: Supermicro 6029U-TR4T-OTO-58, CPU’s: 2x Intel® Xeon® Platinum 8180 Processor @ 2.5GHz (SkyLake 28 cores with 36MB L3 cache), Memory and Network: OSD: 64GB DDR4-2666 ECC, Client/Mon: 256GB, Intel® SSD DC S3700 (Boot drive, 200GB), 2x Ethernet Controller XL710 for 40GbE QSFP+ (rev 02). Disk drives per node: All Flash Cache Config: 3x S4500 3.84TB SATA SSD, RocksDB, WAL, and CAS OSD caching on 1x P4800x 750GB NVMe, All Flash Config: Collocated RocksDB and WAL, 3x S4500 3.84TB SATA SSD. RBD’s: 50x 170GB RBD, XFS, libaio engine, 70/30 RandRW, FIO workload. Clear PageCache, dentries and inodes prior to workload. Software: RHEL 7.5 Updated, FIO v3.8 No Zipf, Intel CAS 3.6.1, Ceph Luminous v12.2.7 Bluestore, cluster fill to 30%, Replica = 2; Ceph RocksDB size: 20GB, WAL 2GB, Cache size: 625GB; Num jobs=1; Block Size = 4k; I/O Depth = 16; Performance was scaled linearly to estimate results for IOPS targets. Tests performed on Spectre-Meltdown vulnerability-compliant systems. System Cost based on publicly available list prices for storage, CPU, memory, chassis as of September 11, 2018. Networking switches/cabling costs not considered. Operating Expenses calculated over 3 years factoring in: System Power is sum of the system TDP (CPU TDP and 90/10 read/write active power for SSD as shown at ark.intel.com). A 1.2 (20% inefficiency) Power Usage Effectiveness (PUE) multiplier is applied across total cluster wattage. $0.12 KW/hour price is applied over 3 year 24/7/365 usage. Footprint is estimated cost of solution rack space. $96/sq ft/yr cost is applied with each rack using 25 sq ft. One rack has maximum 24 KW power limit, up to 42U available rack height. Full and partial racks incur same footprint cost. 
Cluster Size - a target performance metric is chosen based on example customer requirements, and per system performance is applied to estimate number of servers to meet requirement. 100% performance scaling assumed unless otherwise noted. Cost reduction scenarios described are intended as examples of how a given Intel- based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction. Performance results are based on testing as of August 20, 2018 and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure.

• Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit www.intel.com/benchmarks.

• Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Check with your system manufacturer or retailer, or learn more at intel.com.

• Intel, the Intel logo, Intel Inside, Intel Optane, Intel Xeon are trademarks of Intel Corporation in the U.S. and/or other countries.

• © Intel Corporation. All rights reserved.