RED HAT CEPH STORAGE:
PAST, PRESENT AND FUTURE
Neil Levine
June 25, 2016
AGENDA
Red Hat Storage Overview
Past
Retrospective on Inktank acquisition
Red Hat Ceph Storage 1.2
Present
Red Hat Ceph Storage 1.3
RHEL-OSP with 1.3
Future
Red Hat Ceph Storage 2.0
OpenStack and Containers
OPEN, SOFTWARE-DEFINED STORAGE
Open Software-Defined Storage is a fundamental reimagining of how storage infrastructure works. It provides substantial economic and operational advantages, and it has quickly proven itself across a growing number of use cases.
TODAY: Cloud Infrastructure
EMERGING: Cloud Native Apps | Analytics | Hyper-Convergence | Containers
FUTURE: ???
A RISING TIDE
“By 2020, between 70-80% of unstructured data will be held on
lower-cost storage managed by SDS environments.”
“By 2019, 70% of existing storage array products
will also be available as software only versions”
“By 2016, server-based storage solutions will lower
storage hardware costs by 50% or more.”
Sources: Gartner, “IT Leaders Can Benefit From Disruptive Innovation in the Storage Industry”; Gartner, “Innovation Insight: Separating Hype From Hope for Software-Defined Storage”
Market size is projected to increase approximately 20%
year-over-year between 2015 and 2019.
SDS-P MARKET SIZE BY SEGMENT
(Segments: Block Storage, File Storage, Object Storage, Hyperconverged)

2013: $457B
2014: $592B
2015: $706B
2016: $859B
2017: $1,029B
2018: $1,195B
2019: $1,349B

Source: IDC
Software-Defined Storage is leading a shift in the
global storage industry, with far-reaching effects.
THE RED HAT STORAGE PORTFOLIO

OPEN SOURCE SOFTWARE
Ceph management | Gluster management
Ceph data services | Gluster data services
STANDARD HARDWARE

● Share-nothing, scale-out architecture provides durability and adapts to changing demands
● Self-managing and self-healing features reduce operational overhead
● Standards-based interfaces and full APIs ease integration with applications and systems
● Supported by the experts at Red Hat
RED HAT CEPH STORAGE
Powerful distributed storage for the cloud and beyond

Built from the ground up as a next-generation storage system, based on years of research and suitable for powering infrastructure platforms. Highly tunable, extensible, and configurable, with policy-based control and no single point of failure, it offers mature interfaces for block and object storage for the enterprise.

TARGET USE CASES
Cloud Infrastructure
● VM storage with OpenStack Cinder, Glance & Nova
● Object storage for tenant apps
Rich Media and Archival
● S3-compatible object storage

Customer Highlight: Cisco
Cisco uses Red Hat Ceph Storage to deliver storage for next-generation cloud services.
FOCUSED SET OF USE CASES
CLOUD INFRASTRUCTURE: Virtual machine storage with OpenStack | Object storage for tenant applications
RICH MEDIA AND ARCHIVAL: Cost-effective storage for rich media streaming | Active archives
ANALYTICS: Big Data analytics with Hadoop | Machine data analytics with Splunk
SYNC AND SHARE: File sync and share with ownCloud
ENTERPRISE VIRTUALIZATION: Storage for conventional virtualization with RHEV
PAST
TIMELINE
May 14: Inktank acquisition & Ceph Firefly released
Jul 14: Inktank Ceph Enterprise v1.2
Mar 15: Ceph Hammer released
Jun 15: Red Hat Ceph Storage v1.3
DETAIL: RED HAT CEPH STORAGE V1.2

[MGMT] Off-line installer: All required dependencies are now included within a local package repository, allowing deployment to non-Internet-connected storage nodes.
[MGMT] GUI management: Administrators can now perform basic cluster administration tasks through Calamari, the Ceph visual interface.
[CORE] Erasure coding: Erasure-coded storage back-ends are now available, providing durability with lower capacity requirements than traditional, replicated back-ends.
[CORE] Cache tiering: A cache tier pool can now be designated as a writeback or read cache for an underlying storage pool in order to provide cost-effective performance (see the sketch below).
[CORE] RADOS read-affinity: Clients can be configured to read objects from the closest replica, increasing performance and reducing network strain.
[OBJECT] User and bucket quotas: The Ceph Object Gateway now supports and enforces quotas for users and buckets.

These features were introduced in version 1.2 of Red Hat Ceph Storage and have been supported by Red Hat since July 2014.
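For concreteness, here is a minimal sketch of enabling the erasure coding and cache tiering features above with the standard ceph CLI, driven from Python. The pool names and placement-group counts are illustrative assumptions, not values from this presentation.

```python
# Hedged sketch: an erasure-coded base pool fronted by a replicated
# writeback cache tier. Pool names and PG counts are assumptions.
import subprocess

def ceph(*args):
    """Run a ceph CLI command, raising CalledProcessError on failure."""
    subprocess.check_call(["ceph"] + list(args))

# Erasure-coded back-end pool (the v1.2 erasure coding feature).
ceph("osd", "pool", "create", "ecpool", "128", "128", "erasure")

# Replicated pool to serve as the cache tier (the v1.2 cache tiering feature).
ceph("osd", "pool", "create", "cachepool", "128", "128")

# Attach the cache tier in writeback mode and route client I/O through it.
ceph("osd", "tier", "add", "ecpool", "cachepool")
ceph("osd", "tier", "cache-mode", "cachepool", "writeback")
ceph("osd", "tier", "set-overlay", "ecpool", "cachepool")
```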
DELIVERING RED HAT CEPH STORAGE
Before | During | After

DELIVERING RED HAT CEPH STORAGE
Bugzilla → Fork → Package → Doc → Test
A GENUINE RED HAT PRODUCT
CEPH SUCCESSES
PRESENT
RED HAT CEPH STORAGE 1.3
GA Today
Based on Ceph Hammer (0.94)
Core Themes
Robustness at Scale
Operational Efficiency
Performance
Red Hat Ceph Storage 1.3 contains improved logic and
algorithms that allow it to do the “right thing” for users with
multi-petabyte clusters where hardware failure is normal:
ROBUSTNESS AT SCALE
Improved self-management for large clusters
● Improved automatic rebalancing logic, which prioritizes degraded over misplaced objects
● Rebalancing operations can be temporarily disabled so they don’t impact performance (see the sketch after this list)
● Time-scheduled scrubbing, to avoid disruption during peak times
● Sharding of object buckets to avoid hot spots
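As a rough illustration of the last two controls, the sketch below pauses rebalancing traffic and confines scrubbing to an off-peak window. The flag and option names are standard Ceph CLI; the specific hours and the choice of flags are assumptions made for illustration.

```python
# Hedged sketch: pause backfill/recovery during peak hours and restrict
# scrubbing to a nightly window. The hour values are assumptions.
import subprocess

def ceph(*args):
    subprocess.check_call(["ceph"] + list(args))

# Temporarily stop rebalancing traffic (undo with "unset").
ceph("osd", "set", "nobackfill")
ceph("osd", "set", "norecover")

# Confine scrubbing to 22:00-06:00 across all OSDs.
ceph("tell", "osd.*", "injectargs",
     "--osd_scrub_begin_hour=22", "--osd_scrub_end_hour=6")

# ...after the peak window, resume normal rebalancing:
ceph("osd", "unset", "nobackfill")
ceph("osd", "unset", "norecover")
```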
Ceph is a distributed system with lots of moving parts.
Red Hat Ceph Storage 1.3 introduces features to help
manage storage more efficiently.
OPERATIONAL EFFICIENCY
Making administration tasks easier
● Calamari now supports multiple users and clusters
● CRUSH management via Calamari API allows
programmatic adjustment of placement policies
● Lightweight, embedded Civetweb server eases
deployment of the Ceph Object Gateway
● Faster Ceph Block Device operations: resize, delete, and flatten are quicker, and export parallelism makes backups faster (see the sketch below)
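To make the block-device improvements concrete, here is a minimal sketch using the python-rbd bindings; the pool and image names are made-up placeholders.

```python
# Hedged sketch: resize and flatten a Ceph Block Device image via the
# python-rbd bindings. Pool and image names are placeholder assumptions.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")            # pool name: assumption
    try:
        image = rbd.Image(ioctx, "vm-disk-01")   # image name: assumption
        try:
            image.resize(20 * 1024 ** 3)  # grow the image to 20 GiB
            image.flatten()               # decouple a clone from its parent
                                          # (assumes the image is a clone)
        finally:
            image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```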
CEPH WITH SANDISK INFINIFLASH
A number of performance tweaks improve the speed of
Red Hat Ceph Storage 1.3 and increase I/O consistency:
PERFORMANCE
Speedier, more efficient distributed storage
● Optimizations for flash storage devices increase Ceph’s topline speed
● Read-ahead caching accelerates virtual machine booting in OpenStack (see the sketch below)
● Allocation hinting reduces XFS fragmentation to avoid performance degradation over time
● Cache hinting preserves the cache’s advantages and improves performance
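Read-ahead is a client-side setting; here is a hedged sketch of what enabling it for an RBD client might look like through python-rados configuration overrides. The byte values are assumptions, not tuning guidance.

```python
# Hedged sketch: open a client connection with RBD read-ahead options set
# explicitly. The numeric values are placeholder assumptions.
import rados

cluster = rados.Rados(
    conffile="/etc/ceph/ceph.conf",
    conf={
        # Begin read-ahead after this many sequential reads.
        "rbd_readahead_trigger_requests": "10",
        # Read ahead at most 4 MiB per read-ahead request.
        "rbd_readahead_max_bytes": str(4 * 1024 ** 2),
        # Stop read-ahead after 50 MiB, once the guest OS cache is warm.
        "rbd_readahead_disable_after_bytes": str(50 * 1024 ** 2),
    },
)
cluster.connect()
# ... open an ioctx and RBD images as usual ...
cluster.shutdown()
```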
OTHER FEATURES
S3 Object Expiration (see the sketch below)
Swift Storage Policies
IPv6 Support
Local/Pyramid Codes
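S3 object expiration is driven through the standard S3 lifecycle API, so any S3 client can set it. A minimal sketch with boto3 follows; the endpoint, credentials, and bucket name are made-up placeholders.

```python
# Hedged sketch: set a lifecycle expiration rule on a Ceph Object Gateway
# bucket via the S3 API. Endpoint, credentials, and names are assumptions.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",  # assumed gateway endpoint
    aws_access_key_id="ACCESS_KEY",              # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

s3.put_bucket_lifecycle_configuration(
    Bucket="logs",                               # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-old-objects",
            "Prefix": "tmp/",                    # apply to tmp/ objects only
            "Status": "Enabled",
            "Expiration": {"Days": 30},          # delete after 30 days
        }],
    },
)
```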
RED HAT CEPH STORAGE 1.3.z
SELinux Support
Satellite Integration
Puppet-based Installer (Tech Preview)
RHEL OPENSTACK PLATFORM w/CEPH
Jun 14: RHEL-OSP 5 (Icehouse) with ICE 1.2
Feb 15: RHEL-OSP 6 (Juno) with RHCS 1.2.3 & 1.3.0
Jul 16: RHEL-OSP 7 (Kilo) with RHCS 1.3.0
RHEL OPENSTACK PLATFORM w/CEPH
RHEL-OSP 5: Integrated SKU | Integrated Installer (Client)
RHEL-OSP 6: Ephemeral Volumes
RHEL-OSP 7: Integrated Installer (Client and Server) | Image Conversion
FUTURE
DETAIL: RED HAT CEPH STORAGE “TUFNELL”

[CORE] Performance Consistency: More intelligent scrubbing policies and improved peering logic to reduce the impact of common operations on overall cluster performance.
[CORE] Guided Repair: More information about objects will be provided to help administrators perform repair operations on corrupted data.
[CORE] New Backing Store (Tech Preview): A new back end for OSDs to provide performance benefits on existing and modern drives (SSD, K/V).
[MGMT] New UI: A new user interface with improved sorting and visibility of critical data.
[MGMT] Alerting: Introduction of alerting features that notify administrators of critical issues via email or SMS.

These projects are currently active in the Ceph development community. They may be available and supported by Red Hat once they reach the necessary level of maturity.
DETAIL: RED HAT CEPH STORAGE “TUFNELL”

[BLOCK] iSCSI: Introduction of a highly available iSCSI interface for the Ceph Block Device, allowing integration with legacy systems.
[BLOCK] Mirroring: Capabilities for managing virtual block devices in multiple regions, maintaining consistency through automated mirroring of incremental changes.
[OBJECT] NFS: Access to objects stored in the Ceph Object Gateway via standard Network File System (NFS) endpoints, providing storage for legacy systems and applications.
[OBJECT] Active/Active Multi-Site: Support for deployment of the Ceph Object Gateway across multiple sites in an active/active configuration (in addition to the currently available active/passive configuration).

These projects are currently active in the Ceph development community. They may be available and supported by Red Hat once they reach the necessary level of maturity.
RHEL OPENSTACK PLATFORM w/CEPH
RHEL-OSP 8: QoS | Live Migration | Disaster Recovery
Containers: RBD Driver for Kubernetes | S3 Backend for OpenShift