
Red Hat Storage Server Roadmap & Integration With OpenStack



"Red Hat Storage Server is an open, software-defined storage product for private, public, and hybrid cloud environments, based on the open source GlusterFS project, a distributed scale-out file system technology.

In this session, you’ll:

Hear about the near- and medium-term Red Hat Storage Server roadmap.
Get deep insight into its integration roadmap with Red Hat Enterprise Linux OpenStack Platform and its feature roadmap for running big data analytics workloads.
Have an opportunity to share your perspectives with senior business and technical leaders from the Red Hat Storage team to help shape the future of Red Hat Storage Server."



  1. Red Hat Storage Server: Roadmap & integration with OpenStack. Presenters: Sayan Saha, Sr. Manager, Storage Product Management, Red Hat; Vijay Bellur, Architect, Gluster, Red Hat.
  2. How many hours of video are uploaded to YouTube every hour?
  3. Use Case: File Store for Big Data Analytics
     Profile: leading automobile manufacturer; stores & analyzes sensor data for next-generation automobiles; expected data growth of 200 TB per week; 5 PB total data store.
     Pain points: ability to scale rapidly with demand; cost-effective scaling.
     Solution & outcome: selected Red Hat Storage Server to scale cost-effectively with demand; leverages SMB & the native client for rapid data ingest.
  4. Use Case: Content Store for Video Production
     Profile: leading "video production as a service" provider; ingests video from various sources, transcodes into 9 formats, distributes via web servers.
     Pain points: unsupported storage platform (using community Gluster); the storage platform is key to their principal business process; could not afford to be on an unsupported platform.
     Solution & outcome: adopted Red Hat Storage Server to provide a cloud-based video platform that enables media processing, distribution & IP streaming; simplifies delivery of content for TV and content providers.
  5. Use Case: Disaster Recovery Using Replicated Storage
     Profile: public transportation provider; file store for data received from sensors along the subway line that monitor trains; replicated copy needed in a nearby data center for continuous availability of the monitoring service.
     Pain points: stuck on an end-of-life proprietary clustered file system; unreliable, with weak replication capabilities; wanted to reuse existing commodity hardware.
     Solution & outcome: adopted Red Hat Storage Server to provide a reliable monitoring solution for the subway system; access from both Windows & Linux machines; secondary usage as a document store.
  6. Red Hat Storage Roadmap Strategy
     Create the best open software-defined storage for file serving.
     Create the best storage provider for RHELOSP.
     Continue foundational work for Big Data storage.
  7. Use Cases & Workloads
     Use case: content cloud (storage for unstructured data). Workload: storing & accessing files with write-infrequently, read-infrequently/many/never I/O patterns. Example workloads: ownCloud, Pydio, backup target for CommVault, document & file store.
     Use case: storage provider for OpenStack. Workload: scale-out block & object storage. Example workloads: storage back-end for Cinder, Nova & Glance.
     Use case: storage for Big Data analytics. Workload: log analytics, Big Data batch analytics & big-science analytics. Example workloads: Splunk, Hadoop MapReduce, Illumina.
  8. Hardware Advances Helping the Cause
  9. GlusterFS Upstream Innovation Pipeline
  10. Red Hat Storage Technology Stack; libgfapi (not public)
  11. GlusterFS Upstream Roadmap
     GlusterFS 3.5 (GA within a few days): distributed geo-replication; quota scalability; readdir-ahead translator; file snapshots for virtual machine image files; libgfapi support for NFS-Ganesha; brick failure detection; encryption at rest (experimental); on-wire compression translator (experimental).
     GlusterFS 3.6 (June 2014): volume snapshots; AFRv2; data classification; SSL support; dispersed volumes (erasure coding); heterogeneous brick support; trash translator; better peer identification; RDMA improvements.
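As a rough illustration of two of the 3.5-era features above, quota and distributed geo-replication are driven from the gluster CLI. A minimal sketch against a live cluster; the volume, directory, and host names are made up:

```shell
# Enable quota on a volume and cap one directory at 10 GB
gluster volume quota myvol enable
gluster volume quota myvol limit-usage /projects 10GB

# Create and start distributed geo-replication to a remote slave volume
gluster volume geo-replication myvol slavehost::slavevol create push-pem
gluster volume geo-replication myvol slavehost::slavevol start
gluster volume geo-replication myvol slavehost::slavevol status
```

These commands assume a configured trusted storage pool and a reachable slave host; they are an ops sketch, not a complete setup procedure.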
  12. GlusterFS 3.7 Predictions
     Scalability for Big Data.
     Content cloud enhancements: sharding & multi-protocol compatibility.
     Improvements for OpenStack.
     Data protection improvements: geo-replicated snapshots, bit-rot detection.
     Support for Btrfs features.
     SSD/flash leverage.
  13. Best open software-defined storage for file serving
  14. Looking Back…
     Red Hat Storage Server 2.0: launched June 2012; 6 numbered updates released; VM image store, performance & stability; EOL June 2014.
     Red Hat Storage Server 2.1: launched September 2013; 2 numbered updates released so far; quota, geo-replication, management console, SMB 2.0; current shipping release.
  15. Red Hat Storage Server Roadmap Summary
     3.0 (Denali), mid-year 2014. Theme: data protection & storage management. RHEL 6 & GlusterFS 3.6; volume snapshots with user serviceability; monitoring using Nagios; Hadoop plug-in; 60 drives per server (up from 36); non-disruptive upgrade from previous major version; catalog/ID-based logging.
     3.1 (Everglades), 1H CY2015. Theme: TCO reduction. RHEL 6 & GlusterFS 3.7; 3-way replication; SSD support (SSDs as bricks, SSDs for tiering); snapshot enhancements; support for RAID-less hardware configurations; NFSv4 full support (tentative).
  16. Zooming In on RHSS 3.0 (Denali)
     Official version: Red Hat Storage Server 3.
     Theme: data protection & storage management.
     Based on GlusterFS 3.6.
     Denali releases on RHEL 6 mainline, not EUS.
     Underlying file system: XFS.
     Underlying volume management: dm-thinp.
  17. Snapshots
     Point-in-time copy of a GlusterFS volume.
     Create, list, status, info, restore & delete.
     Support a maximum of 256 snapshots per volume.
     A snapshot can be taken on one volume at a time.
     Snapshot names need to be cluster-wide unique.
     Management via CLI only.
     User-serviceable snapshots are in scope but may need some more time to stabilize.
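The lifecycle operations listed above map onto the gluster snapshot CLI roughly as follows; a sketch against a live cluster, with made-up volume and snapshot names:

```shell
# Take and inspect a point-in-time snapshot of a volume
gluster snapshot create snap1 myvol
gluster snapshot list myvol
gluster snapshot info snap1

# Restoring rolls the volume back; the volume must be stopped first
gluster volume stop myvol
gluster snapshot restore snap1
gluster volume start myvol

# Delete a snapshot that is no longer needed
gluster snapshot delete snap1
```

Snapshot names must be unique across the cluster, so a naming scheme that encodes volume and timestamp is a common choice.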
  18. Scope of Monitoring
     Monitor RHSS logical entities: cluster, volume, brick, node.
     Monitor physical entities: CPU, disk, network.
     Alerting when critical components fail (SNMP).
     Reporting: historical record of outages, events, notifications.
     Trending: to enable capacity planning.
  19. Monitoring Using Nagios: Supported Use Cases
     Use case 1: user has no existing monitoring infrastructure in place or does not use Nagios.
     Use case 2: user already has Nagios infrastructure in place; use plugins only.
     Use case 3: usage in conjunction with Red Hat Storage Console.
  20. Red Hat Storage as an Add-On: RPM-Based Delivery
     Considering this packaging option to:
     Comply with corporate governance & security requirements.
     Better support storage co-resident applications.
     Enable embedded use cases.
     Ease usage for channels & partners.
  21. RHSS as an Add-On (diagram: RHSS delivered as a standalone ISO versus RHSS layered as GlusterFS + other RPMs + XFS on top of RHEL on each node)
  22. Zooming In on RHSS 3.1 (Everglades)
     3-way replication.
     SSD support: bricks using SSDs; tiering using RHEL's dm-cache.
     Snapshot enhancements: consistency groups; snapshot scheduling.
     Support for RAID-less hardware configurations.
     NFSv4 full support.
  23. Everglades Pipeline…
     Bit-rot detection & restoration.
     Erasure coding using the disperse translator.
     Multi-protocol support for NFS & SMB.
     pNFS server-side support.
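The TCO argument for erasure coding over 3-way replication comes down to raw-capacity overhead. A back-of-the-envelope sketch; the 4+2 dispersed geometry is an illustrative choice, not a stated product default:

```python
def raw_tb_needed(usable_tb, data_fragments, redundancy_fragments):
    """Raw capacity needed to store `usable_tb` of user data.

    3-way replication is the special case of 1 data copy plus 2
    redundant copies; a dispersed (erasure-coded) volume instead
    splits each file into data + redundancy fragments.
    """
    return usable_tb * (data_fragments + redundancy_fragments) / data_fragments

usable = 100  # TB of user data

replica3 = raw_tb_needed(usable, 1, 2)      # 3-way replication: 3.0x overhead
disperse_4_2 = raw_tb_needed(usable, 4, 2)  # 4+2 dispersed: 1.5x overhead

# Both layouts survive the loss of 2 bricks, but the dispersed
# volume needs half the raw capacity of 3-way replication.
```

The trade-off is that reads and writes on a dispersed volume touch more bricks and pay an encode/decode cost, which is part of why both options stay on the roadmap.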
  24. The path to best storage for OpenStack
  25. Best Storage Provider for OpenStack
     Goal: create the best storage offering for RHELOSP.
     What is needed? Focus on requirements and a feature set specific to OpenStack's storage substrate; match the product delivery model and life-cycle requirements to the expectations of RHELOSP adopters; a clear offering for the market that tracks the RHELOSP roadmap.
     Plan: create a new product family exclusively targeted at OpenStack's storage use cases.
  26. New Product Family: Red Hat Storage Server for RHELOSP
     Principal product line: Red Hat Storage.
     Product family: Red Hat Storage Server for RHELOSP.
     Target use case: OpenStack storage provider.
     Delivery model: RPM-only & CDN.
     Delivery vehicle: layered product on RHELOSP.
     SKU: provides access to RHSS RPMs only.
     Pricing: aligned with RHELOSP's pricing model.
     Roadmap: OpenStack storage & management feature set.
  27. Content
     Initially derived from Red Hat Storage Server: start with the same core package set.
     Packages added/removed as needed.
     Provisioning, configuration management & monitoring will be fully aligned with RHELOSP's roadmap & capabilities.
  28. Red Hat Storage Server for RHELOSP Roadmap
     April 2014: deploy & configure with Foreman & Puppet.
     Post-Icehouse: Cinder enhancements (backup-restore, migrate, per-project user quotas); co-residency of storage & compute; native Swift.
     Juno (OpenStack-m): deployment & configuration for the undercloud; File-as-a-Service (Manila).
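For context, the GlusterFS Cinder back-end of this era was wired up through cinder.conf plus a shares file. A minimal configuration sketch; the host and volume names are placeholders:

```ini
; /etc/cinder/cinder.conf (relevant lines only)
[DEFAULT]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares
glusterfs_mount_point_base = /var/lib/cinder/volumes
```

The shares file then lists one Gluster share per line, e.g. `glusterhost:/cindervol`, and the driver mounts each share under the configured mount-point base.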
  29. RHELOSP + RHSS Co-Residency: Logical View (diagram: RHELOSP nodes 1..N, each running Nova VMs 0..N, RHELOSP Cinder & local storage, with RHSS for RHELOSP layered across the nodes' local storage)
  30. What's up with Big Data Storage?
  31. Benefits of Using RHSS for Hadoop Analytics
     NFS & FUSE support for data ingestion.
     No single point of failure.
     POSIX compliance.
     Co-location of compute & data.
     Multiple-volume support: ability to partition data across multiple namespaces.
     DR capabilities.
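Pointing Hadoop at Gluster instead of HDFS is done in core-site.xml via the glusterfs-hadoop plugin. A sketch only; property names vary across plugin versions, and the mount-point property in particular is an assumption here:

```xml
<!-- core-site.xml: use a Gluster volume as the Hadoop file system.
     Sketch based on the upstream glusterfs-hadoop plugin; check the
     plugin version's documentation for the exact property set. -->
<configuration>
  <property>
    <name>fs.glusterfs.impl</name>
    <value>org.apache.hadoop.fs.glusterfs.GlusterFileSystem</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>glusterfs:///</value>
  </property>
  <property>
    <!-- Assumed property: FUSE mount point of the RHSS volume on each node -->
    <name>fs.glusterfs.mount</name>
    <value>/mnt/glusterfs</value>
  </property>
</configuration>
```

Because the volume is POSIX-compliant and FUSE-mounted on every node, existing data can be analyzed in place, which is the in-place analytics capability promoted on the next slide's roadmap.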
  32. Hadoop Plug-In Roadmap
     Feb 2014: high-touch beta released on RHSS 2.1. Single volume in a cluster with one brick per server; supported for HDP 2.0.6 and its 8 services: Pig, Hive, HBase, Sqoop, Flume, Oozie, Mahout & ZooKeeper.
     RHSS 3.0 (Denali): promote the Hadoop plug-in to GA status. Support multiple volumes per cluster running Hadoop; run in-place analytics on existing data in RHSS volumes; support Hadoop distro HDP 2.1 and its new services like Tez & Storm.
  33. Summary
     Focus: file serving on commodity hardware.
     New product family for RHELOSP storage.
     Foundational work for Big Data workloads.
  34. Check Out Other Red Hat Storage Activities at the Summit
     Enter the raffle to win a $500 gift card or a trip to LegoLand! Entry cards are available in all storage sessions; the more you attend, the more chances you have to win.
     Talk to storage experts at the Red Hat booth (#211): infrastructure; Infrastructure-as-a-Service.
     Storage Partner Solutions booth (#605).
     Upstream Gluster projects: Developer Lounge.
  35. Thank you. Please fill out the feedback forms.