
RED HAT CEPH STORAGE ROADMAP

Cesar Pinto
Account Manager, Red Hat Norway
[email protected]

THE RED HAT STORAGE MISSION

To offer a unified, open software-defined storage portfolio that delivers a range of data services for next-generation workloads, thereby accelerating the transition to modern IT infrastructures.

THE RED HAT STORAGE PORTFOLIO

TODAY'S PORTFOLIO: OPTIMIZED POINT SOLUTIONS

[Portfolio diagram: Ceph management and Gluster management layered over Ceph data services and Gluster data services, all delivered as open source software, in contrast to proprietary software.]

OVERVIEW: RED HAT CEPH STORAGE

Powerful distributed storage for the cloud and beyond

● Shared-nothing, scale-out architecture provides durability and adapts to changing demands
● Self-managing and self-healing features reduce operational overhead
● Standards-based interfaces and full APIs ease integration with applications and systems
● Supported by the experts at Red Hat

Built from the ground up as a next-generation storage system, based on years of research and suitable for powering infrastructure platforms. Highly tunable, extensible, and configurable, it offers mature interfaces for block and object storage for the enterprise.

TARGET USE CASES

OpenStack
● Cinder, Glance & Nova
● Object storage for tenant apps

Object storage for applications
● S3-compatible API

Customer Highlight: Cisco
Cisco uses Red Hat Ceph Storage to deliver storage for next-generation cloud services.
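
To make the S3-compatible API concrete: any stock S3 client or SDK can point at a Ceph Object Gateway endpoint instead of Amazon. A minimal sketch using the AWS CLI, where the endpoint, bucket name, and credentials are all illustrative:

    # Upload and list an object through the gateway's S3-compatible API
    aws s3 cp ./report.csv s3://demo-bucket/report.csv --endpoint-url http://rgw.example.com
    aws s3 ls s3://demo-bucket --endpoint-url http://rgw.example.com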

RED HAT STORAGE: FUTURE WORKLOADS

USE CASES: TODAY AND FUTURE

ANALYTICS

CURRENT USE CASES

Big Data analytics
● Storage plug-in for Hortonworks Data Platform

TARGET USE CASES

Big Data analytics
● Persistent back-end for Spark

Machine data analytics
● Online cold storage for IT operations data with Splunk
● Storage for ELK, Solr

USE CASES: TODAY AND FUTURE

OPENSTACK

CURRENT USE CASES

Virtual machine storage
● Virtual machine volume storage with Cinder, Nova, and Glance (configuration sketch below)

TARGET USE CASES

Database storage
● Storage for relational databases with Trove

Object storage for tenant applications
● Swift-compatible storage for cloud applications

Storage back-end for Manila
● Shared file system-as-a-service for tenants
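
The Cinder integration above is typically wired up as an RBD back-end section in cinder.conf. A minimal sketch, assuming a volumes pool and a cinder cephx user already exist; the section name, pool, and secret UUID are illustrative:

    [rbd-ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337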

USE CASES: TODAY AND FUTURE

ENTERPRISE SHARING

CURRENT USE CASES

Scale-out file store
● Storage for active archives, media streaming, content repositories, VM images, and general-purpose file shares

TARGET USE CASES

Compliant archives
● Scalable, cost-effective storage for compliance and regulatory needs

Enterprise file sync and share
● Storage for Dropbox-style enterprise shared folders

File services for containers
● File storage services for containers and pods

USE CASES: TODAY AND FUTURE

CLOUD STORAGE

CURRENT USE CASES

S3-based object storage for apps
● Cost-effective, S3-compatible, on-premise object store

TARGET USE CASES

Enterprise sync and share
● Storage for shared folders (object backend)
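
Provisioning such an on-premise object store starts with an Object Gateway user; the generated access and secret keys then go into the application's S3 client. A sketch with an illustrative uid:

    # Create a gateway user; the output includes S3 access and secret keys
    radosgw-admin user create --uid=appuser --display-name="Application user"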

USE CASES: TODAY AND FUTURE

ENTERPRISE VIRTUALIZATION

CURRENT USE CASES

Conventional virtualization storage
● Integrated storage for Red Hat Enterprise Virtualization (with separate compute and storage clusters)

TARGET USE CASES

Hyper-converged architectures
● Compute and storage combined on the same cluster (rather than the separate clusters used today)

RED HAT CEPH STORAGE ROADMAP DETAIL

ROADMAP: RED HAT CEPH STORAGE

TODAY (v1.2), based on Ceph Firefly
MGMT: ● Off-line installer ● GUI management
CORE: ● Erasure coding ● Cache tiering ● RADOS read-affinity
OBJECT: ● User and bucket quotas

6-9 MONTHS (“Stockwell”), based on Ceph Hammer
MGMT: ● Foreman/Puppet installer ● CLI :: Calamari API parity ● Multi-user and multi-cluster
CORE: ● OSD with SSD optimization ● More robust rebalancing ● Improved repair process ● Local and pyramid erasure codes
BLOCK: ● Improved read IOPS ● Faster booting from clones
OBJECT: ● S3 object versioning ● Bucket sharding

FUTURE (“Tufnell” and beyond)
MGMT: ● New UI ● Alerting
CORE: ● Performance consistency ● Guided repair ● New backing store
BLOCK: ● iSCSI ● Mirroring
OBJECT: ● NFS ● Active/active multi-site

DETAIL: RED HAT CEPH STORAGE V1.2

These features were introduced in the most recent release of Red Hat Ceph Storage, and are now supported by Red Hat.

Off-line installer (MGMT)
All required dependencies are now included within a local package repository, allowing deployment to non-Internet-connected storage nodes.

GUI management (MGMT)
Administrators can now perform basic cluster administration tasks through Calamari, the Ceph visual interface.

Erasure coding (CORE)
Erasure-coded storage back-ends are now available, providing durability with lower capacity requirements than traditional, replicated back-ends.

Cache tiering (CORE)
A cache tier pool can now be designated as a writeback or read cache for an underlying storage pool in order to provide cost-effective performance.

RADOS read-affinity (CORE)
Clients can be configured to read objects from the closest replica, increasing performance and reducing network strain.

User and bucket quotas (OBJECT)
The Ceph Object Gateway now supports and enforces quotas for users and buckets.
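
These features are all driven from the command line. A minimal sketch of exercising erasure coding, cache tiering, and quotas; pool, profile, and user names are illustrative, and the radosgw-admin flags are as they appeared in releases of this era:

    # Create an erasure-code profile (4 data + 2 coding chunks) and an
    # erasure-coded pool that uses it
    ceph osd erasure-code-profile set demo-profile k=4 m=2
    ceph osd pool create ecpool 128 128 erasure demo-profile

    # Front the erasure-coded pool with a replicated pool acting as a
    # writeback cache tier
    ceph osd pool create hotpool 128
    ceph osd tier add ecpool hotpool
    ceph osd tier cache-mode hotpool writeback
    ceph osd tier set-overlay ecpool hotpool
    ceph osd pool set hotpool hit_set_type bloom

    # Enforce a per-user quota in the Ceph Object Gateway
    radosgw-admin quota set --quota-scope=user --uid=appuser --max-objects=10000 --max-size-kb=1048576
    radosgw-admin quota enable --quota-scope=user --uid=appuser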

DETAIL: RED HAT CEPH STORAGE “STOCKWELL”

These projects are currently active in the Ceph development community. They may be available and supported by Red Hat once they reach the necessary level of maturity.

Foreman/Puppet installer (MGMT)
Support for deployment of new Ceph clusters using Foreman and provided Puppet modules.

CLI :: Calamari API parity (MGMT)
Improvements to the Calamari API and command-line interface that enable administrators to perform the same set of operations through each.

Multi-user and multi-cluster (MGMT)
Support in the Calamari interface for multiple administrator accounts and multiple deployed clusters.

OSD with SSD optimization (CORE)
Performance improvements for both read and write operations, especially applicable to configurations that include all-flash cache tiers.

More robust rebalancing (CORE)
Improved rebalancing that prioritizes repair of degraded data over rebalancing of sub-optimally placed data, plus optimized data placement and improved utilization reporting and management that deliver better distribution of data.

Local/pyramid erasure codes (CORE)
Inclusion of locally stored parity (within a rack or data center) that reduces the network bandwidth required to repair degraded data.
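
The upstream erasure-code plugin interface gives a feel for the idea: a locality parameter groups chunks so that a single failed OSD can be repaired from nearby chunks instead of pulling data across the network. A sketch using the upstream lrc plugin's k/m/l parameters; names and values are illustrative, and (k+m) must be divisible by l:

    # LRC profile: 4 data chunks, 2 coding chunks, local parity every 3 chunks
    ceph osd erasure-code-profile set lrc-profile plugin=lrc k=4 m=2 l=3
    ceph osd pool create lrcpool 128 128 erasure lrc-profile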

DETAIL: RED HAT CEPH STORAGE “STOCKWELL”

These projects are currently active in the Ceph development community. They may be available and supported by Red Hat once they reach the necessary level of maturity.

Improved read IOPS (BLOCK)
Introduction of allocation hints, which reduce file system fragmentation over time and ensure IOPS performance throughout the life of a block volume.

Faster booting from clones (BLOCK)
Addition of copy-on-read functionality to improve initial and subsequent read performance for cloned volumes.
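
For context, cloned volumes are created from protected snapshots, and copy-on-read is a client-side option (rbd_clone_copy_on_read) layered on that workflow. A minimal sketch with illustrative pool and image names:

    # Enable copy-on-read on the client (ceph.conf, [client] section):
    #   rbd clone copy on read = true

    # Snapshot a golden image, protect the snapshot, and clone it for a new VM
    rbd snap create rbd/golden@base
    rbd snap protect rbd/golden@base
    rbd clone rbd/golden@base rbd/vm-disk-1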

S3 object versioning (OBJECT)
New versioning of objects that helps users avoid unintended overwrites and deletions and allows them to archive objects and retrieve previous versions.
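
Because the feature rides the S3 API, any stock S3 client can drive it. A sketch using the AWS CLI against an illustrative gateway endpoint and bucket:

    # Turn on versioning for a bucket in the Ceph Object Gateway
    aws s3api put-bucket-versioning --endpoint-url http://rgw.example.com \
        --bucket demo-bucket --versioning-configuration Status=Enabled

    # List versions, then fetch a specific older version of an object
    aws s3api list-object-versions --endpoint-url http://rgw.example.com --bucket demo-bucket
    aws s3api get-object --endpoint-url http://rgw.example.com --bucket demo-bucket \
        --key report.csv --version-id <version-id> report-old.csv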

Bucket sharding (OBJECT)
Sharding of buckets in the Ceph Object Gateway to improve metadata operations on buckets with a large number of objects.
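
Index sharding for newly created buckets is controlled by a gateway configuration option; rgw_override_bucket_index_max_shards is the upstream option name, while the section name and value below are illustrative:

    # ceph.conf on the Object Gateway node
    [client.radosgw.gateway]
    rgw override bucket index max shards = 16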

DETAIL: RED HAT CEPH STORAGE “TUFNELL”

These projects are currently active in the Ceph development community. They may be available and supported by Red Hat once they reach the necessary level of maturity.

New UI (MGMT)
A new user interface with improved sorting and visibility of critical data.

Alerting (MGMT)
Introduction of alerting features that notify administrators of critical issues via email or SMS.

Performance consistency (CORE)
More intelligent scrubbing policies and improved peering logic to reduce the impact of common operations on overall cluster performance.

Guided repair (CORE)
More information about objects will be provided to help administrators perform repair operations on corrupted data.

New backing store (CORE, Tech Preview)
New back-end for OSDs to provide performance benefits on existing and modern drives (SSD, K/V).

DETAIL: RED HAT CEPH STORAGE “TUFNELL”

These projects are currently active in the Ceph development community. They may be available and supported by Red Hat once they reach the necessary level of maturity.

iSCSI (BLOCK)
Introduction of a highly available iSCSI interface for the Ceph Block Device, allowing integration with legacy systems.

Mirroring (BLOCK)
Capabilities for managing virtual block devices in multiple regions, maintaining consistency through automated mirroring of incremental changes.

NFS (OBJECT)
Access to objects stored in the Ceph Object Gateway via standard Network File System (NFS) endpoints, providing storage for legacy systems and applications.

Active/Active multi-site (OBJECT)
Support for deployment of the Ceph Object Gateway across multiple sites in an active/active configuration (in addition to the currently available active/passive configuration).
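
None of these features ship in the release described here, but the mirroring design has a concrete upstream shape: journaled images replicated between clusters by an rbd-mirror daemon. A purely illustrative sketch of the upstream commands, not the supported product workflow:

    # Enable mirroring for every journaled image in a pool
    rbd mirror pool enable rbd pool

    # Mirroring rides the image journal, so enable journaling per image
    rbd feature enable rbd/vm-disk-1 journaling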

THANK YOU