
New use cases for Ceph, beyond OpenStack, Luis Rico


Published on

Ceph Day Galicia
April 4th 2018, Santiago de Compostela ES
Luis Rico, Red Hat



  1. Luis Rico – luis.rico@redhat.com – AMTEGA - CDTIC – 4 April 2018 – Santiago de Compostela – New use cases for Ceph, beyond OpenStack
  2. Agenda • Intro to Ceph • Ceph as the best unified storage for OpenStack • New use cases for Ceph. Free virtual training on Red Hat Ceph Storage: https://red.ht/storage-testdrive
  3. RED HAT CEPH STORAGE
  4. OPEN SOFTWARE DEFINED STORAGE • Contributions from Intel, SanDisk, CERN, and Yahoo • Presenting Ceph Days in cities around the world and quarterly virtual Ceph Developer Summit events • Over 11M downloads in the last 12 months • Increased development velocity, authorship, and discussion have resulted in rapid feature expansion (community activity: 97 authors/mo, 2,453 commits/mo, 260 posters/mo, up from 33 authors/mo, 97 commits/mo, 138 posters/mo)
  5. RED HAT CEPH STORAGE • Distributed, enterprise-grade object storage, proven at web scale • Open source, massively scalable, software-defined, based on Ceph • Flexible, scale-out architecture on clustered standard hardware • Single, efficient, unified storage platform (object, block, file) • User-driven storage lifecycle management with 100% API coverage • S3-compatible object API • Designed for modern workloads like cloud infrastructure and data lakes
  6. DIFFERENT KINDS OF STORAGE • BLOCK STORAGE: Physical storage media appears to computers as a series of sequential blocks of a uniform size. • FILE STORAGE: File systems allow users to organize the data stored in those blocks using hierarchical folders and files. • OBJECT STORAGE: Object stores distribute data algorithmically throughout a cluster of storage media, without a rigid structure.
  7. RED HAT CEPH STORAGE ARCHITECTURAL COMPONENTS • RADOS: a software-based, reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes and lightweight monitors • LIBRADOS: a library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby) • RBD: a reliable, fully distributed block device with cloud platform integration • RGW: a web services gateway for object storage, compatible with S3 and Swift • CEPHFS: a distributed file system with POSIX semantics & scale-out metadata
  8. BUSINESS BENEFITS • OPEN SOURCE: no proprietary lock-in, with a large commercial ecosystem and broad community • PEACE OF MIND: over a decade of active development, proven in production and backed by Red Hat • LOWER COST: more economical than traditional NAS/SAN, particularly at petabyte scale
  9. TECHNICAL BENEFITS • Massive scalability to support petabytes of data • No single point of failure, for maximum uptime • Self-manages and self-heals to reduce maintenance • Data distributed dynamically among servers and disks
  10. DETAILED TECHNICAL ARCHITECTURE
  11. PLACEMENT GROUPS Placement Groups (PGs) are shards or fragments of a logical object pool that place objects as a group into OSDs.
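The object-to-PG step can be sketched in a few lines of Python. This is a simplified model for illustration, not Ceph's actual implementation (Ceph uses the rjenkins hash and a "stable mod" variant; MD5 stands in here):

```python
import hashlib

def object_to_pg(pool_id: int, object_name: str, pg_num: int) -> str:
    """Map an object name to a placement group (simplified model).

    Ceph hashes the object name and reduces it modulo the pool's pg_num;
    MD5 stands in for Ceph's rjenkins hash in this sketch.
    """
    h = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    return f"{pool_id}.{h % pg_num:x}"   # PG ids look like "<pool>.<hex>"

# The same name always lands in the same PG, so no central lookup
# table is needed, and objects spread evenly across the pool's PGs.
print(object_to_pg(1, "vm-disk-0001", pg_num=128))
print(object_to_pg(1, "vm-disk-0002", pg_num=128))
```

Grouping objects into PGs (rather than placing each object individually) keeps the placement metadata small: the cluster only tracks where each of the pool's pg_num groups lives, not where every object lives.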
  12. CRUSH OVERVIEW CRUSH (Controlled Replication Under Scalable Hashing) • Controlled, scalable, decentralized placement of replicated data • The CRUSH algorithm determines how to store and retrieve data by computing data storage locations • CRUSH requires a map of your cluster, and uses the CRUSH map to pseudo-randomly store and retrieve data in OSDs with a uniform distribution of data across the cluster
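The key property above, that any client can compute placements from the map alone, can be illustrated with rendezvous hashing, which is similar in spirit to CRUSH's straw2 buckets but is not the real CRUSH algorithm:

```python
import hashlib

def crush_like_place(pg_id: str, osds: list[str], replicas: int = 3) -> list[str]:
    """Pick OSDs for a PG via deterministic pseudo-random 'straw' draws.

    Each OSD draws a hash-based straw for this PG and the longest straws
    win. Like CRUSH, every client computes the same mapping from the map
    alone, with no lookup service. (Rendezvous hashing stands in for the
    real CRUSH algorithm in this sketch.)
    """
    def straw(osd: str) -> int:
        return int(hashlib.sha256(f"{pg_id}/{osd}".encode()).hexdigest(), 16)
    return sorted(osds, key=straw, reverse=True)[:replicas]

osds = [f"osd.{i}" for i in range(8)]
acting = crush_like_place("1.2a", osds)
print(acting)   # identical on every client, every time
```

A useful side effect this sketch shares with CRUSH: adding an OSD changes each PG's straw draws independently, so only a fraction of PGs move, which is what makes rebalancing incremental rather than a full reshuffle.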
  13. HOW DOES IT WORK? How a client WRITES to a replica-3 pool
  14. HOW DOES IT WORK? How a client READS from a replica-3 pool
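The two flows on these slides can be sketched together: the client computes the PG's acting set itself, sends the write to the primary OSD, the primary replicates to the secondaries and acknowledges only once all replicas are durable, and reads are served by the primary. This is a toy model of the flow with a made-up placement function, not Ceph's wire protocol:

```python
# Toy model of a replica-3 pool: each OSD is a dict, and the first OSD
# in a PG's acting set is the primary.
cluster = {f"osd.{i}": {} for i in range(6)}

def acting_set(obj: str) -> list[str]:
    # Stand-in for hash -> PG -> CRUSH: a deterministic pick of 3 OSDs.
    start = sum(obj.encode()) % 6
    return [f"osd.{(start + k) % 6}" for k in range(3)]

def write(obj: str, data: bytes) -> None:
    primary, *secondaries = acting_set(obj)
    cluster[primary][obj] = data              # 1. client sends to the primary
    for osd in secondaries:                   # 2. primary fans out to replicas
        cluster[osd][obj] = data
    # 3. primary acks the client only once every replica is durable

def read(obj: str) -> bytes:
    return cluster[acting_set(obj)[0]][obj]   # reads are served by the primary

write("img-1", b"hello")
print(read("img-1"))
```

Routing both reads and writes through the primary is what gives a replicated pool strong consistency: a read can never observe a write that some replica has not yet acknowledged.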
  15. CORE PRODUCT FEATURES • EFFICIENCY: standard servers and disks; erasure coding for a reduced footprint; thin provisioning; traditional and containerized deployment, including CSDs • SCALABILITY: multi-petabyte support; hundreds of nodes; CRUSH algorithm for placement/rebalancing; no single point of failure • PERFORMANCE: server-side journaling; BlueStore (updated tech preview) • APIs & PROTOCOLS: S3, Swift, Apache Hadoop S3A filesystem client; Cinder block storage; native API protocols; NFS v3, v4; iSCSI • SECURITY: integrated on-premise monitoring dashboard; RGW SSL support; pool-level authentication; Active Directory, LDAP, Keystone v3; at-rest encryption with keys held on separate hosts • DATA SERVICES: global clusters for S3/Swift storage; disaster recovery for block and object storage; snapshots, cloning, and copy-on-write (Features in red on the original slide are specific to Red Hat Ceph Storage 3, based on Luminous)
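The erasure-coding footprint reduction mentioned above comes from storing k data chunks plus m coding chunks instead of full copies. A minimal k=2, m=1 sketch with XOR parity (real Ceph uses Reed-Solomon codes via the jerasure/ISA-L plugins, not plain XOR):

```python
def encode(data: bytes) -> tuple[bytes, bytes, bytes]:
    """Split into 2 data chunks + 1 XOR parity chunk (k=2, m=1)."""
    if len(data) % 2:
        data += b"\x00"                       # pad to an even length
    a, b = data[: len(data) // 2], data[len(data) // 2 :]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return a, b, parity

def recover_first(b: bytes, parity: bytes) -> bytes:
    """Rebuild a lost first chunk from the two survivors."""
    return bytes(x ^ y for x, y in zip(b, parity))

a, b, p = encode(b"ceph-object!")
assert recover_first(b, p) == a               # chunk 'a' lost, then rebuilt
# Raw overhead here is 1.5x, versus 3x for a replica-3 pool; the price
# is CPU work on writes and during recovery.
```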
  16. MONITORING CLUSTERS WITH MORE PRECISION ● Red Hat Ceph Storage dashboard, based on the upstream 'cephmetrics' project ● New web interface adds ease of use and insight into Ceph cluster activity ● 14 dashboards to monitor health and troubleshoot issues ● Detailed graphical view of usage data for the cluster or its components
  17. TARGET USE CASES • Private Cloud: enterprise deployments growing for test & dev and production application deployments; FSI, retail, and technology sectors. • Archive & Backup: object storage as a replacement for tape and expensive dedicated appliances; hybrid cloud compatibility is critical. • NFVi (new): OpenStack with Ceph is the dominant reference platform for next-generation telco networks; global demand for Ceph both standalone and hyperconverged. • Enterprise Virtualization (new): legacy protocol support lets legacy VM storage be managed on the same platform as modern private cloud storage. • Big Data (new): object storage providing a common data lake for multiple analytics applications, for greater efficiency and better business insights.
  18. CEPH FOR OPENSTACK
  19. COMPLETE OPENSTACK STORAGE • Deeply integrated, with modular architecture and components for ephemeral & persistent storage: Nova, Cinder, Manila, Glance, Keystone, Ceilometer, Swift (Diagram: OpenStack's Keystone, Swift, Glance, Cinder, Nova, and Manila APIs backed by Red Hat Ceph Storage through the hypervisor's LibRBD, the Ceph Object Gateway, and CephFS)
  20. ADVANTAGES FOR OPENSTACK USERS • Instantaneous booting of 1 or 100s of VMs • Instant backups via seamless data migration between Glance, Cinder, and Nova • Multi-site replication for disaster recovery or archiving
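Instantaneous booting works because an RBD clone is copy-on-write: each VM disk starts as a thin overlay on a protected snapshot of the Glance image, and only the blocks a VM actually writes get materialized. A toy sketch of the overlay idea, using dict-backed block maps rather than librbd:

```python
class CowClone:
    """Copy-on-write overlay: reads fall through to the parent snapshot,
    writes land in the clone's own (initially empty) block map."""
    def __init__(self, parent: dict[int, bytes]):
        self.parent = parent       # read-only snapshot (e.g. a Glance image)
        self.own: dict[int, bytes] = {}

    def read(self, block: int) -> bytes:
        return self.own.get(block, self.parent.get(block, b"\x00"))

    def write(self, block: int, data: bytes) -> None:
        self.own[block] = data     # the parent is never modified

golden = {0: b"MBR", 1: b"kernel"}            # snapshot of a base image
vms = [CowClone(golden) for _ in range(100)]  # 100 "instant" boots: no copying
vms[0].write(1, b"patched")
assert vms[0].read(1) == b"patched" and vms[1].read(1) == b"kernel"
```

Creating a clone is O(1) regardless of image size, which is why booting 100 VMs is as fast as booting one.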
  21. OVERWHELMINGLY PREFERRED FOR OPENSTACK (Source: OpenStack User Survey, October 2016)
  22. SPECIAL INTEGRATION WITH RED HAT OPENSTACK PLATFORM DIRECTOR • Automated object and block deployment • Automated upgrades from Red Hat Ceph Storage 1.3 • Support for existing Ceph clusters • OpenStack Manila file deployment as a composable controller service via the integrated CephFS driver • Co-location of Red Hat OpenStack Platform and Red Hat Ceph Storage
  23. PRODUBAN DELIVERS MODERN CLOUD SERVICES CHALLENGE: Produban wanted to create a private cloud platform to provide cloud services across Grupo Santander's businesses, aiming to increase its agility and reduce costs. PRODUCTS AND SERVICES USED: RESULTS: • Created a reliable, production-ready, and controlled IaaS environment, while reducing Produban's technology footprint and costs • Built a standardized and efficient IaaS environment with consistent management and deployment across its hybrid cloud services • Gained a single, efficient platform to support the demanding storage needs of its OpenStack-based cloud • Increased agility and reduced time-to-market for different services, including big data analytics
  24. NEW USE CASES FOR CEPH
  25. OBJECT STORAGE FOCUS RGW, Ceph's object storage interface: • Support for authentication using Active Directory, LDAP & OpenStack Keystone v3 • Greater compatibility with the Amazon S3 and OpenStack Swift object storage APIs • AWS v4 signatures, object versioning, bulk deletes • NFS gateway for bulk import and export of object data
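The AWS v4 signature support mentioned above means RGW validates the same HMAC-SHA256 signing chain that S3 clients compute. The key-derivation step of that chain looks like this; the derivation order follows the AWS Signature Version 4 specification, while the credential and string-to-sign values here are made up for illustration:

```python
import hashlib
import hmac

def sigv4_signing_key(secret: str, date: str, region: str, service: str) -> bytes:
    """Derive the AWS Signature Version 4 signing key (HMAC-SHA256 chain)."""
    def h(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()
    k_date = h(("AWS4" + secret).encode(), date)       # scoped to the day...
    k_region = h(k_date, region)                       # ...the region...
    k_service = h(k_region, service)                   # ...and the service
    return h(k_service, "aws4_request")

# Hypothetical credentials for an RGW endpoint; RGW derives the same key
# server-side from its stored secret to check the request's signature.
key = sigv4_signing_key("EXAMPLE-SECRET", "20180404", "us-east-1", "s3")
signature = hmac.new(key, b"string-to-sign", hashlib.sha256).hexdigest()
print(signature)   # 64 hex characters; must match the server's computation
```

Because the derived key is scoped to a single day, region, and service, a leaked signing key is far less damaging than a leaked long-term secret.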
  26. MULTISITE CAPABILITIES Global object storage clusters with a single namespace • Enables deployment of clusters across multiple geographic locations • Clusters synchronize, allowing users to read from or write to the closest one. Multi-site replication for block devices • Replicates virtual block devices across regions for disaster recovery and archival (Diagram: storage clusters in US-East and US-West synchronizing)
  27. INSURANCE COMPANY IN SPAIN CHALLENGE: Wanted to replace an existing IBM-based document management solution, very expensive to maintain, with a new solution able to scale in a more cost-effective way. PRODUCTS AND SERVICES USED: RESULTS: • With a leading global system integrator, provided a new document management system based on open source components • Reduced annual costs by 80% • The storage platform is based on Ceph as object storage • Uses the multi-site active-active capability for high availability and workload distribution
  28. COMPATIBILITY WITH HADOOP S3A FILESYSTEM CLIENT (Diagram: data ingested through the Hadoop S3A client reaches RGW's S3 API and lands in RADOS, where it is also accessible as objects and as files via the RGW NFS gateway)
  29. ELASTIC COMPUTE AND STORAGE FOR BIG DATA Analytics vendors focus on analytics; Red Hat focuses on infrastructure. Analytics vendors provide the analytics software, while Red Hat provides the infrastructure software: an OpenStack or OpenShift compute pool on top of a shared data lake on Ceph object storage.
  30. THANK YOU plus.google.com/+RedHat • linkedin.com/company/red-hat • youtube.com/user/RedHatVideos • facebook.com/redhatinc • twitter.com/RedHat
