Red Hat TUG Utrecht - Storage Update June 2015
TRANSCRIPT
Storage Update June 2015 - Red Hat Software Defined Storage
Marcel Hergaarden, Sr. Solution Architect, Red Hat EMEA
Storage update June 2015 | Technical User Group Utrecht Marcel Hergaarden
AGENDA
● Technical User Group June 2015: Storage Update
● Current portfolio: Ceph and Gluster
● Red Hat Gluster Storage basics, features and roadmap
● Red Hat Ceph Storage basics, features and roadmap
● Modern workloads and Scale-out storage
Introduction Red Hat Software Defined Storage
Gluster: mostly used for file-based storage purposes; can also be used as a virtualization store or as object storage.
Ceph: positioned as the de facto storage platform for OpenStack; offers block- and object-based storage.
Today's portfolio: Ceph and Gluster storage
The Basics
A Gluster volume is built from bricks (e.g. Brick #1 and Brick #2), each an exported directory on a server running the RHS operating system.
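Gluster places files across bricks by hashing the file name (the DHT "elastic hash" translator), so no central metadata server is needed. A minimal sketch of the idea, with two hypothetical bricks; real DHT assigns a hash range to each brick rather than taking a simple modulo:

```python
import hashlib

def pick_brick(filename, bricks):
    # Hash the file name and map the digest onto one of the bricks.
    # Toy version of Gluster's elastic-hash placement: any client
    # computes the same answer without consulting a metadata server.
    digest = int(hashlib.md5(filename.encode()).hexdigest(), 16)
    return bricks[digest % len(bricks)]

bricks = ["server1:/brick1", "server2:/brick2"]
print(pick_brick("report.pdf", bricks))
```

Because placement is computed rather than looked up, a client can locate any file directly, which is what lets the pool scale out without a metadata bottleneck.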
The Basics
Clients access the volume over protocols such as SMB 2.0; each node runs the RHS operating system.
Features: Gluster Volume Snapshot
Features: Geo-Replication
Features: Built-in Nagios Monitoring
Features: Hortonworks HDP Plugin
What is new? New functionality in 3.0.4
● 3-Way Replication and JBOD support
Current use cases
● Analytics
- Cold Storage for Splunk Analytics Workloads
- Hadoop Compatible File System for running Hadoop Analytics
● File Store & Sharing
- Digital multi-media (video, audio, pictures) serving
- Large File Store (using NFS, SMB, FUSE or Swift access), with or without DR using async Geo-Replication
- Active archive and near-line storage
- ownCloud File Sync 'n' Share
● Backup & Archive
- Backup target for CommVault Simpana
● Live virtual machine image store for RHEV
Roadmap: new features to be expected
● Key Features
- Cache Tiering
- Erasure Coding
- Bit Rot detection
- Snapshot Scheduling
- Backup hooks
● Protocols
- NFSv4 (multi-headed)
- SMB 3 (subset of features)
Roadmap: product enhancements
● Performance improvement
- Small files
- Rebalance
● Security
- SELinux: Enforcing Mode
● Management
- Device Management, Dashboard, Snapshots, Geo-Replication
Want to try out roadmap features today? Community Edition: www.gluster.org
● Gluster.org
Stay informed about all developments and new features.
Read all about the latest technologies being used.
Download Gluster for free
The Basics
● Commodity Server Hardware
Standard x86-64 servers are typically used
● Two major “node” types
- Monitor nodes (maintain and provide the cluster map)
- OSD nodes (nodes that provide the storage OSDs)
● Graphical management
- Calamari host
The Basics
● Monitor nodes
Monitors are always deployed in an odd number; either 3 or 5 monitors are typically used.
● Paxos protocol
Monitor nodes always work by consensus: they need to agree with each other, using Paxos.

Paxos is a family of protocols for solving consensus in a network of unreliable processors. Consensus is the process of agreeing on one result among a group of participants (source: Wikipedia).
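The odd monitor count follows directly from Paxos's majority requirement. A small illustrative sketch (not Ceph code) of why a fourth monitor buys no extra failure tolerance over three:

```python
def have_quorum(alive, total):
    # Paxos can only make progress while a strict majority
    # of the monitors can talk to each other.
    return alive > total // 2

def failures_tolerated(total):
    # Largest number of monitors that may fail with quorum intact.
    return total - (total // 2 + 1)

# 3 monitors tolerate 1 failure; 4 still tolerate only 1; 5 tolerate 2.
# That is why odd counts (3 or 5) are the usual deployment.
print(failures_tolerated(3), failures_tolerated(4), failures_tolerated(5))
```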
The Basics: Paxos protocol
source: http://the-paper-trail.org/blog/consensus-protocols-paxos/
The Basics: Monitors and the cluster map
The Basics: OSD - Object Storage Daemon
The Basics: RADOS
Reliable Autonomic Distributed Object Store

RADOS is an object storage service, able to scale to thousands of hardware devices by running Ceph software on each individual node.

RADOS is an integral part of the Ceph distributed storage system.
The Basics: LIBRADOS and other access methods

LIBRADOS
Library allowing apps to directly access RADOS, with support for C, C++, Python, Ruby and PHP

RADOSGW
Bucket-based REST gateway, compatible with both S3 and Swift

RBD
Reliable, fully distributed block device with a Linux kernel client and a QEMU/KVM driver
The Basics: Placement Group

Placement Groups (PGs) aggregate a series of objects into a group, and map the group to a series of OSDs.
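The object-to-PG step is a plain hash of the object name folded onto the pool's PG count. The sketch below is illustrative, assuming CRC32 and 128 PGs; Ceph uses its own hash function and a "stable mod", but the shape is the same:

```python
import zlib

def object_to_pg(object_name, pg_num):
    # Hash the object name, then fold the hash onto one of pg_num
    # placement groups. CRUSH later maps the PG to concrete OSDs.
    return zlib.crc32(object_name.encode()) % pg_num

pg_num = 128
pg = object_to_pg("rbd_data.1234.0000000000000000", pg_num)
print(f"object maps to PG {pg}")
```

Grouping objects into PGs keeps the placement problem tractable: data movement on cluster changes is tracked per PG, not per individual object.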
The Basics: CRUSH algorithm

CRUSH: Controlled Replication Under Scalable Hashing
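CRUSH computes, rather than looks up, which OSDs hold a PG's replicas. A toy stand-in for the idea (`crush_like_select` is a hypothetical helper, not the real algorithm: real CRUSH also honours device weights and a failure-domain hierarchy so replicas land in different hosts or racks):

```python
import hashlib

def crush_like_select(pg_id, osds, replicas):
    # Rank every OSD by a hash of (pg_id, osd) and take the top N.
    # Deterministic: every client and daemon derives the identical
    # placement from the cluster map, so no lookup table is needed.
    def straw(osd):
        return hashlib.sha1(f"{pg_id}:{osd}".encode()).hexdigest()
    return sorted(osds, key=straw)[:replicas]

osds = ["osd.0", "osd.1", "osd.2", "osd.3", "osd.4"]
print(crush_like_select(42, osds, 3))
```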
The Basics: CRUSH algorithm
The Basics: Ceph disk pool
Pools
The Basics: Ceph disk pool
The Basics: Summary overview
Features: Tiering modes

There are two main scenarios:
- Writeback Mode
- Read-only Mode
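In writeback mode the fast tier absorbs both reads and writes and flushes dirty objects down to the base tier later; in read-only mode writes go straight to the base tier and the cache only serves reads. A dictionary-based sketch of the writeback behaviour (conceptual only; Ceph's tiering agent drives promotion and flushing by policy):

```python
class WritebackCacheTier:
    def __init__(self):
        self.cache = {}      # fast tier (e.g. SSD-backed pool)
        self.base = {}       # slow base tier (e.g. HDD-backed pool)
        self.dirty = set()   # objects not yet flushed to the base tier

    def write(self, key, value):
        # Writes are absorbed by the fast tier and flushed later.
        self.cache[key] = value
        self.dirty.add(key)

    def read(self, key):
        # Cache miss: promote the object from the base tier.
        if key not in self.cache:
            self.cache[key] = self.base[key]
        return self.cache[key]

    def flush(self):
        # Background step: push dirty objects down to the base tier.
        for key in self.dirty:
            self.base[key] = self.cache[key]
        self.dirty.clear()
```

The trade-off is visible in the sketch: writes complete at fast-tier speed, but an object can exist only in the cache until a flush runs.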
Features: Snapshots

Ceph Snapshotting
● Instantly created
● Read-only copies
● Don't use additional space (until data changes)
● Do not change
● Support incremental snapshots
● Data is read from the original data
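The "no additional space until data changes" point is classic copy-on-write. A toy model of the behaviour listed above (illustrative only, not Ceph's on-disk format):

```python
class CowImage:
    def __init__(self):
        self.live = {}    # block number -> current data
        self.snaps = {}   # snapshot name -> snapshot state

    def snapshot(self, name):
        # Instant and space-free: remember which blocks existed,
        # but copy nothing.
        self.snaps[name] = {"blocks": set(self.live), "preserved": {}}

    def write(self, block, data):
        # Copy-on-write: preserve the old contents the first time a
        # block is overwritten after a snapshot was taken.
        for snap in self.snaps.values():
            if block in snap["blocks"] and block not in snap["preserved"]:
                snap["preserved"][block] = self.live[block]
        self.live[block] = data

    def read_snap(self, name, block):
        snap = self.snaps[name]
        if block not in snap["blocks"]:
            return None   # block did not exist at snapshot time
        # Unchanged blocks are still read from the original data.
        return snap["preserved"].get(block, self.live.get(block))
```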
Features: Snapshots

Ceph Cloning
Clone creation:
- create snapshot
- protect snapshot
- clone snapshot

Clone behaviour: like any other RBD image
- Read from it
- Write to it
- Clone it
- Resize it
Summary: Ceph storage
● Distributed storage solution
● Resilient against outages
● Decentralized structure
● Internally, Ceph uses RADOS, which converts data into objects and stores them in a distributed way
● Data access in multiple ways
- Linux block device driver
- QEMU/KVM
- Native RADOS using librados
- RADOS Gateway
Modern Workloads: Traditional storage is inflexible
Using traditional, proprietary storage appliances:
- Capacity is purchased one large appliance at a time
- Requires large investment and disruptive “forklift” upgrades
- Hardware is specialized and expensive
- Cost and capacity do not scale evenly
Modern Workloads: Storage should scale with compute
Using standard, readily-available servers:
- Capacity can be purchased incrementally
- Saves money and increases efficiency
- One vendor can provide both storage and compute nodes
- Increases purchasing power
- Storage can be deployed by cloud builders