RED HAT STORAGE 
LIBERATE YOUR INFORMATION 
Marcel Hergaarden 
Solution Architect, Red Hat 
Tuesday, October 28, 2014
Agenda 
● Red Hat Storage and Inktank Ceph 
● The Software-Defined Storage concept 
● Setup Hierarchy 
● Storage Topology Types 
● Storage for OpenStack 
● RHS 3.0 New Features 
● Inktank Ceph introduction
Inktank Ceph 
● April 2014: Red Hat acquires Inktank, the company behind Ceph
Future Red Hat Storage: 2 Flavours 
● Red Hat Storage – Gluster edition 
Mostly used for file-based storage purposes 
Can also be used as a virtualization store or as object storage 
● Red Hat Storage – Ceph edition 
Positioned as the de facto storage platform for OpenStack 
Ceph offers block- and object-based access
Red Hat Storage Positioning 
RED HAT STORAGE (Gluster): 
● FILE: Best scale-out NAS 
● OBJECT: SWIFT-based file + object access 
● BLOCK: Through API only (libqemu) 
CEPH: 
● FILE: Not yet available 
● OBJECT: Best object store 
● BLOCK: Kernel-supported & exposed
What does Software-Defined Storage mean? 
● RHS is a software solution, not an appliance with disks
Open Software-Defined Storage 
● Stable scale-out storage platform 
● Runs On-premise, in Private- and in Public Cloud 
Converged Compute and Storage 
Red Hat Storage Server: Software-Defined Storage Platform 
[Diagram: one continuous storage platform with a scale-out storage architecture, providing persistent data stores across physical (standard x86 systems, scale-out NAS solutions), virtual (including idle or legacy resources), and cloud (EBS volumes) deployments.]
Increase Data, Application and Infrastructure Agility 
[Diagram: enterprise mobility, cloud applications, big data workloads, and enterprise applications all consume file services, block I/O, and open object APIs from converged compute and storage on the open, software-defined storage platform (with Inktank Ceph Enterprise data services), backed by a scale-out storage architecture of persistent data stores across physical (standard x86 systems, scale-out NAS), virtual (idle or legacy resources), and cloud (EBS) resources.]
SOFTWARE-DEFINED DATACENTER 
Cornerstone of the modern data center fabric: 
● Software-defined / -based COMPUTE (virtualization) 
● Software-defined / -based STORAGE 
● Software-defined / -based NETWORKING 
● Software-defined / -based ENVIRONMENTAL (facilities)
Red Hat Storage design philosophy 
● Runs on x86 commodity hardware systems 
● Deployment agnostic (on-premise, virtual, cloud) 
● Presents capacity as a single storage namespace 
● Elastic storage pool – grows or shrinks online as needed 
● Linear scaling – both scale-up and scale-out 
● Assumes commodity components will fail, and tolerates those hardware failures
Scale-out Software-Defined Architecture 
[Diagram: many servers behind a single global namespace. Each server contributes CPU, memory, network, and directly attached disks (1TB, 4TB). Scale up by adding capacity within a node; scale out linearly by adding nodes to grow performance, capacity, and availability. Deploys on RHEL-supported servers and directly connected storage.]
New data problems 
VOLUME 
VARIETY 
SCALE 
PORTABILITY
Business data growth estimates in 2014 
[Chart: business data growth estimates for 2013–2014 (roughly 50% rising toward 100%), driven by standard growth plus virtualization, mobile computing, big data, social networks, the Internet of Things, and cloud computing.]
What happens in an Internet minute?
The Challenge 
[Chart, 2010–2020: exponential growth of data against a flat IT storage budget.] 
Existing systems don't scale and are not built or optimized for unstructured data. 
Cost and complexity keep increasing. 
The need: invest in new platforms ahead of time.
Red Hat Storage Setup Topology 
[Diagram: two nodes, each running the RHS operating system and exporting one brick (Brick #1, Brick #2).]
Red Hat Storage Setup Topology 
[Diagram: clients reach the nodes over SMB 2.0; each node runs the RHS operating system.]
Red Hat Storage: Distributed Volume 
[Diagram: a distributed volume – FILE 1, FILE 2, and FILE 3 are spread across bricks server1:/exp1 and server2:/exp1, all reached through a single mount point. Each file lives on exactly one brick.]
Red Hat Storage: Replicated Volume 
[Diagram: a replicated volume – FILE 1 and FILE 2 are each written to both bricks server1:/exp1 and server2:/exp1 behind one mount point, giving a full copy on every brick.]
Red Hat Storage: Distributed Replicated Volume 
[Diagram: a distributed replicated volume – Replicated Volume 0 mirrors brick exp1 (server1) and brick exp2 (server2); Replicated Volume 1 mirrors brick exp3 (server3) and brick exp4 (server4). The distributed volume spreads FILE 1 and FILE 2 across the two replica pairs behind one mount point.]
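Conceptually, each topology above is one gluster volume create invocation; the brick layout picks the type. A minimal sketch driven from Python (hostnames and brick paths are illustrative, and the three volumes are alternatives – a brick can belong to only one volume):

```python
import subprocess

def gluster(*args):
    """Run a gluster CLI command, raising if it fails."""
    subprocess.run(["gluster", *args], check=True)

# Distributed: each file lands on exactly one of the bricks.
gluster("volume", "create", "dist-vol",
        "server1:/exp1", "server2:/exp1")

# Replicated: every file is mirrored on both bricks.
gluster("volume", "create", "repl-vol", "replica", "2",
        "server1:/exp1", "server2:/exp1")

# Distributed replicated: bricks pair up in listed order
# (exp1+exp2, exp3+exp4); files are distributed across the pairs.
gluster("volume", "create", "dist-repl-vol", "replica", "2",
        "server1:/exp1", "server2:/exp2",
        "server3:/exp3", "server4:/exp4")

gluster("volume", "start", "dist-repl-vol")
```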
Featured use-cases 
● Scalable storage library: into the petabyte scale 
● VM store for RHEV (Red Hat Enterprise Virtualization) 
● Target store for backup and archiving (CommVault) 
● Storage infrastructure for OpenStack: Cinder, Glance & Swift 
● Storage for file services and/or data archives 
● Storage for (very) large files, including Big Data purposes 
● Storage for multimedia purposes 
● Windows support: file services and Active Directory
Target store for CommVault Simpana
CommVault Simpana: data streams benefits
Red Hat Storage inside Openstack
Red Hat Storage 3.0
New key features in Red Hat Storage 3.0 
Enhanced Data Protection 
● Snapshots of Gluster volumes 
● Consistent point-in-time copies of data 
● Helps improve the disaster recovery use case: 
● Create multiple consistent point-in-time copies during the day 
● Roll back within minutes to the last snapshot after a virus attack, admin error, etc. 
● Doesn't replace backup/recovery but enhances it (see the sketch below)
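In RHS 3.0 the snapshot workflow is exposed through the gluster CLI. A minimal sketch, assuming a running volume named vol0 (names are illustrative):

```python
import subprocess

def gluster(*args):
    subprocess.run(["gluster", *args], check=True)

# Take a consistent point-in-time copy of the volume.
gluster("snapshot", "create", "snap-monday", "vol0")

# Show what is available to roll back to.
gluster("snapshot", "list", "vol0")

# Roll back after e.g. a virus attack or admin error:
# the volume must be stopped before restoring.
gluster("volume", "stop", "vol0")
gluster("snapshot", "restore", "snap-monday")
gluster("volume", "start", "vol0")
```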
New key features in Red Hat Storage 3.0 
Cluster Monitoring 
● Nagios-based RHS cluster health and performance information 
● 3 different deployment options: 
● Standalone Nagios web frontend 
● Agent-only, for integration into existing Nagios environments 
● As an RHS Console plugin
Other Important enhancements in RHS 3.0 
Deep Hadoop Integration 
An HDFS-compatible filesystem eliminates the overhead of data movement
Flexibility at each phase of your processing pipeline 
Data scientists, programmers, business analysts 
[Diagram: Load → Pre-process (if necessary) → Analyze → Post-process (if necessary) → Export, all on Red Hat Storage over commodity hardware. Data is loaded from any source; pre- and post-processing use any Linux tool or application (grep, sed, awk, find, python, etc.) via POSIX; analysis runs Apache Hadoop (MapReduce/Pig/Hive/HBase, etc.) via HDFS; results are exported to any source.]
Other Important enhancements in RHS 3.0 
Enhanced Capacity 
● Up to 60 disks per RHS node => lower TCO 
● Up to ~205TB net usable capacity per node 
● Cluster size up to 120 nodes (was 64 nodes) 
Maintainability 
● Non-disruptive upgrades 
Introduction of new package delivery options 
● Red Hat Storage Starter Pack SKU
Other Important enhancements in RHS 3.0 
Brick resource changes 
SSD disks as bricks 
● SSDs are now officially supported for use as a brick component 
SAN resources 
● SAN disk resources may be used as bricks (architecture review required)
Red Hat Storage Gluster edition console 
Simplified management 
● Intuitive user interface 
● Manages massive scale-out 
● Installation and configuration 
● Volume management 
● On-premise and public cloud 
● Integrates with RHEV-M
Inktank Ceph Enterprise 1.2
Key Themes in Inktank Ceph Enterprise v1.2 
Enterprise Readiness 
● RADOS Management 
● User Quotas 
● RHEL 7 support 
Lower TCO 
● Erasure Coding (see the pool sketch below)
Performance 
● Primary OSD Affinity 
● Cache Tiering 
● Key/Value OSD-backend
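Erasure coding is where the lower TCO comes from: instead of full replicas, RADOS stores k data chunks plus m coding chunks. A minimal sketch of creating an erasure-coded pool with the ceph CLI (profile name, pool name, PG count, and the k/m values are illustrative):

```python
import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

# k=4 data chunks + m=2 coding chunks: survives the loss of any
# two OSDs at 1.5x raw overhead, versus 3x for triple replication.
ceph("osd", "erasure-code-profile", "set", "ec-4-2", "k=4", "m=2")

# An erasure-coded pool with 128 placement groups using that profile.
ceph("osd", "pool", "create", "ecpool", "128", "128", "erasure", "ec-4-2")
```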
Intro to Ceph Storage
Ceph RADOS 
Reliable Autonomic Distributed Object Store 
RADOS 
A software-based, reliable, autonomous, distributed object store comprised of 
self-healing, self-managing, intelligent storage nodes and lightweight monitors
Ceph LIBRADOS 
Library to access RADOS 
LIBRADOS 
A library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP) 
RADOS 
A software-based, reliable, autonomous, distributed object store comprised of 
self-healing, self-managing, intelligent storage nodes and lightweight monitors
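Seen from an application, LIBRADOS access is a few lines. A minimal sketch with the Python binding (pool name and payload are illustrative; assumes a reachable cluster and a local ceph.conf plus keyring):

```python
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    # An I/O context is bound to a single pool.
    ioctx = cluster.open_ioctx("rbd")
    ioctx.write_full("greeting", b"Hello RADOS")  # store an object
    print(ioctx.read("greeting"))                 # read it back
    ioctx.remove_object("greeting")
    ioctx.close()
finally:
    cluster.shutdown()
```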
Ceph Unified Storage 
APP → RGW | HOST/VM → RBD | CLIENT → CEPHFS 
RGW 
A web services gateway for object storage, compatible with S3 and Swift 
RBD 
A reliable, fully-distributed block device with cloud platform integration 
CEPHFS 
A distributed file system with POSIX semantics and scale-out metadata management 
LIBRADOS 
A library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP) 
RADOS 
A software-based, reliable, autonomous, distributed object store comprised of 
self-healing, self-managing, intelligent storage nodes and lightweight monitors
Ceph Object Storage Daemons 
[Diagram: four OSDs, each running on top of a local filesystem (btrfs, xfs, or ext4) on its own disk, alongside three monitors (M).]
Ceph RADOS cluster 
[Diagram: an application talking directly to a RADOS cluster of storage nodes and five monitors (M).]
Ceph RADOS Components 
OSDs: 
● 10s to 10,000s in a cluster 
● One per disk (or one per SSD, RAID group…) 
● Serve stored objects to clients 
● Intelligently peer for replication & recovery 
Monitors: 
● Maintain cluster membership and state 
● Provide consensus for distributed decision-making 
● Small, odd number 
● Do not serve stored objects to clients
Ceph CRUSH algorithm 
Dynamic Data Placement 
CRUSH: 
● Pseudo-random placement algorithm 
● Fast calculation, no lookup table 
● Repeatable, deterministic 
● Statistically uniform distribution 
● Stable mapping – limited data migration on change 
● Rule-based configuration – infrastructure topology aware 
● Adjustable replication and weighting (toy sketch below)
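The real CRUSH algorithm is more elaborate (bucket types, hierarchy rules, weights), but its core property – deterministic, lookup-free placement from a hash, with limited remapping when the cluster changes – can be illustrated with a toy rendezvous-hashing sketch (not CRUSH itself; the OSD list and replica count are illustrative):

```python
import hashlib

def place(obj, osds, replicas=3):
    """Toy stand-in for CRUSH: rank OSDs by a hash of (object, osd)
    and keep the top `replicas`. No lookup table is consulted, every
    client computes the same answer, and adding or removing one OSD
    only remaps a limited share of objects."""
    def score(osd):
        digest = hashlib.md5(f"{obj}:{osd}".encode()).hexdigest()
        return int(digest, 16)
    return sorted(osds, key=score, reverse=True)[:replicas]

osds = [f"osd.{i}" for i in range(8)]
print(place("my-object", osds))               # deterministic placement
print(place("my-object", osds + ["osd.8"]))   # limited migration on change
```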
Ceph Unified Storage 
● OBJECT STORAGE – equivalent to Amazon S3 
● BLOCK STORAGE – equivalent to Amazon EBS 
● FILE SYSTEM – not yet enterprise supported
Ceph with OpenStack 
[Diagram: OpenStack's Keystone and Swift APIs land on the Ceph Object Gateway (RGW), while the Cinder, Glance, and Nova APIs reach the Ceph Block Device (RBD) through the hypervisor (Qemu/KVM); both paths sit on the Ceph storage cluster (RADOS).]
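Underneath Cinder and Glance, provisioning comes down to creating RBD images. A minimal sketch with the Python bindings (pool and image names are illustrative; assumes the same cluster access as the LIBRADOS example):

```python
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("volumes")  # e.g. the pool Cinder is pointed at

# Create a 10 GiB image, the shape of a freshly provisioned volume.
rbd.RBD().create(ioctx, "vol-0001", 10 * 1024**3)

# Qemu/KVM would attach the image to a guest; from Python we can
# read and write it directly through librbd.
image = rbd.Image(ioctx, "vol-0001")
image.write(b"hello block device", 0)
image.close()

ioctx.close()
cluster.shutdown()
```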
Ceph as Cloud Storage 
[Diagram: a web application with four app servers, each speaking S3/Swift to a pair of Ceph Object Gateways (RGW) in front of the Ceph storage cluster (RADOS).]
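To the app servers, RGW is simply another S3 endpoint. A minimal sketch with the classic boto library (endpoint hostname and credentials are placeholders for your own gateway's values):

```python
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id="<access-key>",
    aws_secret_access_key="<secret-key>",
    host="rgw.example.com",  # your RGW endpoint, not Amazon's
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket("demo")
key = bucket.new_key("hello.txt")
key.set_contents_from_string("Hello from RGW")
print(key.get_contents_as_string())
```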
Ceph Cloud Storage including DR 
[Diagram: two sites for disaster recovery – a web application and app server with a Ceph Object Gateway (RGW) and Ceph storage cluster in US-EAST, mirrored by the same stack in EU-WEST.]
Ceph Web Scale Applications 
[Diagram: a web application with four app servers, each speaking the native protocol (librados) directly to the Ceph storage cluster (RADOS) – no gateway in the path.]
Ceph Cold Storage 
[Diagram: the application writes to a replicated cache pool layered in front of an erasure-coded backing pool, both inside one Ceph storage cluster.]
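Wiring a replicated cache pool in front of an erasure-coded backing pool takes a few ceph CLI steps. A minimal sketch (pool names and PG counts are illustrative; reuses the ec-4-2 profile from the erasure-coding sketch earlier):

```python
import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

# Erasure-coded pool for cold data, replicated pool for hot data.
ceph("osd", "pool", "create", "cold", "128", "128", "erasure", "ec-4-2")
ceph("osd", "pool", "create", "hot", "128")

# Stack "hot" as a writeback cache tier in front of "cold" and
# route client I/O through the cache.
ceph("osd", "tier", "add", "cold", "hot")
ceph("osd", "tier", "cache-mode", "hot", "writeback")
ceph("osd", "tier", "set-overlay", "cold", "hot")
```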
Ceph management: Calamari
Hands-on Red Hat Storage workshop 
Red Hat Storage testing on Amazon Web Services 
(AWS) 
https://engage.redhat.com/aws-test-drive-201308271223
“...A DISRUPTIVE AND 
UNSTOPPABLE FORCE.” 
–IDC REPORT 
THANK YOU
