
6_OPEN17_SUSE Enterprise Storage 4

The evolution in storage.
Why an open source initiative like Ceph found its way into the enterprise storage world. Traditional storage solutions are expensive, and you will probably need a forklift to get them into your datacenter. Meanwhile, demand for storage capacity keeps growing as you adopt new technologies: IoT, video for marketing and surveillance (now in 4K), expanding user data driven by BYOD adoption, and increasing backup requirements.

This demand created the opportunity for Ceph, a scale-out software-defined storage solution, driven by one of the best open source communities worldwide. Standardize on industry-standard servers and grow your storage estate at YOUR rate.

In this session we will introduce you to the enterprise adoption of Ceph, give you a technical deep dive into Ceph, and show how erasure coding improves your level of data protection.


  1. 1. 1 SUSE: We Adapt. You Succeed. Tom D’Hont, Sales Engineer, tom.dhont@suse.com
  2. 2. 2 25+ Years of Open Source Engineering Experience 2/3+ Of the Fortune Global 100 use SUSE Linux Enterprise 10 Awards in 2016 for SUSE Enterprise Storage 1st Enterprise Linux Distribution 1st Enterprise OpenStack Distribution 50%+ Development Engineers 1.4B Annual Revenue Top 15 Worldwide System infrastructure Software Vendor +8% SUSE Growth vs. Other Linux in 2015*
  3. 3. 3 Recent Tech News. Open source projects: growing contribution to many key communities; joined the Cloud Foundry Foundation board; founding member of new projects (Zero Outage). Partnerships: Fujitsu and SUSE – mission-critical, hybrid cloud, OpenStack and container orchestration; Intel – HPC Orchestrator stack; HPE – preferred Linux and SUSE OpenStack Cloud; SaltStack – expanded partnership for automated management
  4. 4. 4 A Broad and Connected Ecosystem 13,500+ Certified Hardware Systems 1800 Technology Partners 8500+ Certified Applications 5000+ Partner Ecosystem Members 3200 Service Providers & System Integrators 600 Training Partners
  5. 5. 5 How SUSE Adds Value: The Open, Open Source Company • Enterprise-quality innovation and service • Affordable mission-critical performance • Open, flexible, optimized for mixed IT. Customer benefits: • Increased agility • Reclaimed budget • Reduced risk • Protected investment
  6. 6. 6 YaST Collaboration and Contribution
  7. 7. 7 SUSE Software-Defined Infrastructure: An Open, Flexible Infrastructure Approach. Application delivery: custom micro-service applications (Kubernetes / Magnum); Platform as a Service: Cloud Foundry; containers: SUSE CaaS Platform; private cloud / IaaS: SUSE OpenStack Cloud; public cloud: SUSE Cloud Service Provider Program. Software-defined everything: storage (SUSE Enterprise Storage), networking (SDN and NFV), virtualization (KVM, Xen, VMware, Hyper-V, z/VM), operating system (SUSE Linux Enterprise Server). Physical infrastructure: servers, switches, storage. Management: operations, monitoring and patching (SUSE Manager, openATTIC); cluster deployment (Crowbar, Salt); orchestration (Heat, Kubernetes)
  8. 8. SUSE Enterprise Storage Software-defined Storage
  9. 9. 9 Challenges of Traditional Enterprise Storage: expensive ($); difficult to scale and manage data growth; not easily extended to the software-defined data center
  10. 10. 10 Enterprise Data Capacity Utilization Tier 0 Ultra High Performance Tier 1 High-value, Online Transaction Processing (OLTP), Revenue Generating Tier 2 Backup/Recovery, Reference Data, Bulk Data Tier 3 Object, Archive, Compliance Archive, Long-term Retention
  11. 11. 11 $ SUSE Enterprise Storage Enterprise Class Storage Using Commodity Servers and Disk Drives Latest hardware Reduce Capital Expense Hardware flexibility
  12. 12. 12 SUSE Enterprise Storage Unlimited Scalability with Self-managing Technology Block Object File Increase capacity and performance by simply adding new storage or storage nodes to the cluster. Monitor Nodes Management Node Storage Nodes
  13. 13. 13 SUSE Enterprise Storage: Powered by Ceph. Architecture: client servers (Windows, Linux, Unix) and applications reach the cluster through block devices (RBD, iSCSI), object storage (S3, Swift) and a file interface (CephFS*), all layered on RADOS, the common object store. Storage servers run the OSDs; monitor nodes (MONs) oversee the cluster over the network. Ceph community: 782 code developers (22 core, 53 regular, 705 casual); 160,015,454 total downloads; 21,264,047 unique downloads
  14. 14. 14 SUSE Enterprise Storage 4: Engineered to Reduce Storage Frustrations. Share of respondents reporting each frustration (significant / moderate / not a frustration; total frustrated): overall cost 36% / 45% / 20% (80%); performance concerns 28% / 46% / 26% (74%); complex / highly fragmented 27% / 44% / 29% (71%); inability to support innovation or drive value 19% / 49% / 32% (68%); lack of agility – can’t support changes in the business environment 22% / 46% / 32% (68%); lack of scalability – can’t effectively grow with the business 20% / 45% / 34% (66%); being tied into legacy vendors 19% / 46% / 34% (66%); difficult to manage 22% / 43% / 35% (66%). *1,202 senior IT decision makers across 11 countries completed an online survey in July/August 2016
  15. 15. 15 SUSE Enterprise Storage: Enable Transformation. Support today’s investment; adapt to the future. Legacy data center (Gartner Mode 1 – traditional): network, compute and storage silos; traditional protocols (Fibre Channel, iSCSI, CIFS/SMB, NFS); process driven; slow to respond. This is where you probably are today. Software-defined data center (Gartner Mode 2 – software defined): software-defined everything; agile infrastructure supporting a DevOps model; business driven. This is where you need to get to
  16. 16. 16 Use Cases. Video surveillance: security surveillance; red light / traffic cameras; license plate readers; body cameras for law enforcement; military/government visual reconnaissance. Virtual machine storage (low and mid I/O performance for major hypervisor platforms): KVM – native RBD; Hyper-V – iSCSI; VMware – iSCSI. Bulk storage: SharePoint data; medical records; medical images (X-rays, MRIs, CAT scans); financial records. Data archive (long-term storage and backup): HPC; log retention; tax documents; revenue reports
  17. 17. 17 SUSE Enterprise Storage Fit in the Backup Architecture. How does SUSE Enterprise Storage fit? It replaces dedupe appliances and disk arrays, and it augments tape libraries by keeping more data online cost-effectively. Application servers → backup server → dedupe appliance or disk array (replace) / tape library (augment)
  18. 18. 18 SUSE Enterprise Storage 4 Major Features • Unified block, object and file with production-ready CephFS filesystem • Expanded hardware-platform choice with support for 64-bit ARM • Asynchronous replication for block storage and multisite object replication • Enhanced ease of management with SUSE openATTIC • Enhanced cluster orchestration using Salt • Early access to NFS Ganesha support and NFS access to S3 buckets
  19. 19. 19 http://tinyurl.com/hdz8ywu
  20. 20. 20 SUSE Enterprise Storage 4 – openATTIC, the SUSE advanced graphical user interface
  21. 21. 21 SUSE Enterprise Storage 4 Major Feature Summary
  22. 22. 22 SUSE Enterprise Storage Roadmap 2016 2017 2018 V6 V7 V8 Confidential—For Internal Use Only. Information is forward looking and subject to change at any time. SUSE Enterprise Storage 3 SUSE Enterprise Storage 4 SUSE Enterprise Storage 5 Built On • Ceph Jewel release • SLES 12 SP1 (Server) Manageability • Initial Salt integration (tech preview) Interoperability • CephFS (Tech Preview) • AArch64 (Tech Preview) Availability • Multisite object replication (Tech Preview) • Asynch block mirroring (Tech Preview) Built On • Ceph Jewel release • SLES 12 SP 2 (Server) Manageability • SES openATTIC management • Initial Salt integration Interoperability • AArch64 • CephFS (production use cases) • NFS Ganesha (Tech Preview) • NFS access to S3 buckets (Tech Preview) • CIFS Samba (Tech Preview) • RDMA/Infiniband (Tech Preview) Availability • Multisite object replication • Asynchronous block mirroring Built On • Ceph Luminous release • SLES 12 SP 3 (Server) Manageability • SES openATTIC management phase 2 • SUSE Manager integration Interoperability • NFS Ganesha • NFS access to S3 buckets • CIFS Samba (Tech Preview) • Fibre Channel (Tech Preview) • RDMA/Infiniband • Support for containers Availability • Asynchronous block mirroring • Erasure coded block pool Efficiency • BlueStore back-end • Data compression • Quality of Service (Tech Preview)
  23. 23. 23 Example of SUSE Enterprise Storage Partners. HPE (OEM): Apollo 4200, ProLiant DL380/DL360. Thomas-Krenn: SES appliance. Integrated products via resellers
  24. 24. SUSE Enterprise Storage 4 Technical
  25. 25. 25 x86 server storage node: each Object Storage Daemon (OSD) sits on a file system (XFS) or BlueStore, backed by a physical disk or other persistent storage
  26. 26. 26 Monitor Node (MON): the brains of the cluster. Tracks cluster membership (up, down, in, out); distributed decision making; not in the performance path; does not serve stored objects to clients
  27. 27. 27 RADOS Cluster: Reliable Autonomous Distributed Object Store, comprised of self-healing, self-managing, intelligent storage nodes plus monitors (MONs)
  28. 28. 28 Access to the RADOS cluster: LIBRADOS – library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby and PHP); RADOSGW – bucket-based REST gateway, compatible with S3 and Swift; RBD – reliable and fully distributed block device, with a Linux kernel client and QEMU/KVM driver (host / VM); CephFS – POSIX-compliant distributed file system, with a Linux kernel client and support for FUSE
  29. 29. 29 Pools. Replicated pool: full copies of each object; very high durability; 3x (200% overhead); quicker recovery. Erasure-coded pool: one copy plus parity; cost-effective durability; 1.4x (40% overhead); more expensive recovery
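The overhead figures on this slide follow directly from the pool geometry: a replicated pool stores one full copy per replica, while an erasure-coded pool stores k data chunks plus m coding chunks per object. A minimal sketch of that arithmetic (my own illustration, not SUSE or Ceph code):

```python
def replicated_overhead(size):
    """Raw-to-usable ratio and % overhead for a replicated pool with `size` full copies."""
    ratio = size                            # raw bytes stored per usable byte
    return ratio, round((ratio - 1) * 100)  # overhead in percent

def erasure_overhead(k, m):
    """Raw-to-usable ratio and % overhead for an erasure-coded pool (k data + m coding chunks)."""
    ratio = (k + m) / k
    return ratio, round((ratio - 1) * 100)

print(replicated_overhead(3))   # (3, 200): 3x raw, 200% overhead
print(erasure_overhead(5, 2))   # (1.4, 40): 1.4x raw, 40% overhead
```

This reproduces the slide's numbers: 3 replicas cost 200% overhead, while a k=5, m=2 erasure-coded pool survives the same two device losses at only 40% overhead.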
  30. 30. 30 Placement Groups DC 1 DC 2 DC 3 PG A: DC 1 PG B: DC 2 + DC 3 PG C: All
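Placement groups are the indirection layer shown above: Ceph hashes each object name into a placement group (PG), then maps the PG to a set of OSDs via the CRUSH algorithm. Real CRUSH is far more elaborate (hierarchical failure domains, weights); this toy sketch is my own simplification and only illustrates the two-step object → PG → OSDs mapping:

```python
import hashlib

PG_COUNT = 8               # placement groups in the pool (tiny, for illustration)
OSDS = [0, 1, 2, 3, 4, 5]  # OSD ids in the cluster
REPLICAS = 3

def object_to_pg(name: str) -> int:
    """Step 1: hash the object name into a placement group."""
    h = int.from_bytes(hashlib.md5(name.encode()).digest()[:4], "big")
    return h % PG_COUNT

def pg_to_osds(pg: int) -> list[int]:
    """Step 2: choose REPLICAS distinct OSDs for the PG (stand-in for CRUSH)."""
    return [OSDS[(pg + i) % len(OSDS)] for i in range(REPLICAS)]

pg = object_to_pg("my-object")
print(pg, pg_to_osds(pg))
```

The key property this preserves is that placement is computed, not looked up: any client that knows the cluster map can derive the same PG and OSD set without asking a central metadata server.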
  31. 31. 31 Cache Tiered Pools CEPH Storage Cluster Backing Pool (Erasure Coded) Cache Pool (Replicated on SSD) Application
  32. 32. SUSE Enterprise Storage 4 Case study • 4 DC campus environment • 480TB • 40-80 € / TB / Year • CIFS / NFS
  33. 33. Existing Landscape Robocopy Rsync
  34. 34. Proposed Landscape. [HP DL360 Gen9] CIFS/NFS gateway / management node: 2x E5-2630v3, 4x 16GB PC4-2133, 1x dual 120GB SSD M.2, 2x 10GbE-T, 500W redundant PSU. [HP DL160 Gen9] monitoring node: 1x E5-2603v3, 1x 8GB PC4-2133, 1x dual 120GB SSD M.2, 1x 10GbE-T, 550W PSU. [HP Apollo 4200 Gen9] OSD node: 2x E5-2630v3, 6x 32GB PC4-2133, 1x dual 120GB SSD M.2, 2x 400GB SSD, 1x 800GB SSD, 12x 8TB HDD, 2x 10GbE-T, 800W redundant PSU. Start with 480 TB net and extend in steps of 68.6 TB: 1 OSD node = 12x 8 TB = 96 TB raw; 7 OSD nodes = 84x 8 TB = 672 TB raw (96x7); erasure code k=5, m=2 yields 480 TB net (672/7x5); each additional OSD node adds 68.6 TB net (480/7). (Diagram: eight OSD nodes, three DL160 monitoring nodes and one DL360 CIFS/NFS gateway / management node on a 10GbE CIFS/NFS network; 480 TB net plus spare capacity.)
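The case study's capacity figures can be checked directly: each Apollo 4200 node contributes 12 × 8 TB raw, and an erasure-coded pool with k=5, m=2 keeps k/(k+m) of raw capacity usable. A quick check (my own arithmetic, matching the slide's figures):

```python
DISKS_PER_NODE = 12
DISK_TB = 8
K, M = 5, 2  # erasure coding profile from the case study

raw_per_node = DISKS_PER_NODE * DISK_TB          # 96 TB raw per OSD node
raw_total = 7 * raw_per_node                     # 672 TB raw across 7 nodes
net_total = raw_total * K / (K + M)              # 480 TB net
net_per_extra_node = raw_per_node * K / (K + M)  # ~68.6 TB net per added node

print(raw_per_node, raw_total, net_total, round(net_per_extra_node, 1))
```

The eighth node in the diagram holds the spare capacity needed so the cluster can rebuild lost fragments after a full node failure without falling below the 480 TB net target.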
  35. 35. Proposed Landscape: Erasure Coding. Think of it as software RAID for an object: the object is broken up into k data fragments and given m parity ("durability") fragments; k=5, m=2 is comparable to RAID 6. (Diagram: the same eight OSD nodes, monitoring nodes and gateway / management node on the 10GbE CIFS/NFS network; an object split into fragments D1–D5 plus parity P1–P2, spread across the nodes, with spare capacity used to rebuild fragments after a node failure.)
  36. 36. SUSE Enterprise Storage: extend your scale-out storage to improve resilience. 1 DC: k=5, m=2; 40% overhead; failure protection: 2 OSDs. 2 DCs: k=5, m=5; 100% overhead; failure protection: 5 OSDs or 1 datacenter. 4 DCs: k=8, m=8; 100% overhead; failure protection: 8 OSDs or 2 datacenters.
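The three configurations above trade overhead for larger failure domains: any m fragments can be lost, so m OSD failures are always survivable, and when the k+m fragments are spread evenly across d datacenters, losing a whole datacenter costs (k+m)/d fragments. A sketch of that reasoning (my own simplification, assuming an even fragment spread):

```python
def ec_profile(k, m, datacenters):
    """Overhead and failure tolerance for an erasure-coded pool spread over datacenters."""
    overhead_pct = round(m / k * 100)       # parity cost relative to data
    osd_failures = m                        # any m fragments may be lost
    frags_per_dc = (k + m) // datacenters   # fragments landing in each DC
    dc_failures = m // frags_per_dc         # whole datacenters that may fail
    return overhead_pct, osd_failures, dc_failures

print(ec_profile(5, 2, 1))  # (40, 2, 0):  40% overhead, 2 OSDs
print(ec_profile(5, 5, 2))  # (100, 5, 1): 100% overhead, 5 OSDs or 1 DC
print(ec_profile(8, 8, 4))  # (100, 8, 2): 100% overhead, 8 OSDs or 2 DCs
```

Note the design point the slide is making: going from k=5, m=5 to k=8, m=8 keeps overhead at 100% but doubles the number of datacenters that can fail, because the fragments are spread more thinly.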
  37. 37. HPE Apollo 4200 + SUSE Enterprise Storage Price evolution
  38. 38. 39 SUSECON 2017 Prague, Czechia September 25-29, 2017 Save the date: Why attend SUSECON 2017? • Technical training from the source – from engineers who create the solutions • 100 hours of hands-on training • Inspiring keynotes from industry leaders • Unparalleled access to SUSE executives and technical staff • Complimentary certification exams on Linux, OpenStack cloud and Ceph storage • More fun than you should have at a tech conference!
  39. 39. Q&A Thank you for participating Tom D’Hont Sales Engineer tom.dhont@suse.com
  40. 40. SUSE Enterprise Storage 4 Pricing
  41. 41. 42 SUSE Enterprise Storage Pricing Base Configuration - $10000 (Priority Subscription) SUSE Enterprise Storage and limited use of SUSE Linux Enterprise Server to provide: • 4 storage OSD nodes (1-2 sockets) • 6 infrastructure nodes Expansion Node - $2300 (Priority Subscription) SUSE Enterprise Storage and limited use of SUSE Linux Enterprise Server to provide: • 1 SES storage OSD node (1-2 sockets) • 1 SES infrastructure node • 1, 3 and 5 Year SKU available
  42. 42. 43 SUSE Enterprise Storage Minimum Configuration. 4 SES OSD storage nodes • 10 Gb Ethernet (2 networks bonded to multiple switches) • 32 OSDs per storage cluster • OSD journal can reside on the OSD disk • Dedicated OS disk per OSD storage node • 1 GB RAM per TB raw OSD capacity for each OSD storage node • 1.5 GHz per OSD for each OSD storage node • Monitor nodes, gateway nodes and metadata server nodes can reside on SES OSD storage nodes: • 3 SES monitor nodes (requires SSD for dedicated OS drive) • iSCSI gateway, object gateway or metadata server nodes require redundant deployment • iSCSI gateway, object gateway or metadata server require an incremental 4 GB RAM and 4 cores. Separate management node • 4 GB RAM, 4 cores, 1 TB capacity. https://www.suse.com/documentation/ses-3/book_storage_admin/data/cha_ceph_sysreq.html
  43. 43. 44 Minimum Recommended Configuration (Production). 7 SES OSD storage nodes (no single node exceeds ~15% of total capacity) • 10 Gb Ethernet (4 physical networks bonded to multiple switches) • 56+ OSDs per storage cluster • RAID 1 OS disks for each OSD storage node • SSDs for journal • 6:1 ratio SSD journal to OSD • 1.5 GB RAM per TB raw OSD capacity for each OSD storage node • 2 GHz per OSD for each OSD storage node. Dedicated physical nodes for infrastructure nodes: • 3 SES monitors; 4 GB RAM, 4-core processor, RAID 1 SSDs for disk • 1 SES management node; 4 GB RAM, 4-core processor, RAID 1 SSDs for disk • Redundant physical deployment of gateway nodes or metadata server nodes: • SES object gateway nodes; 32 GB RAM, 8-core processor, RAID 1 SSDs for disk • SES iSCSI gateway nodes; 16 GB RAM, 4-core processor, RAID 1 SSDs for disk • SES metadata server nodes (one active / one hot standby); 32 GB RAM, 8-core processor, RAID 1 SSDs for disk. https://www.suse.com/documentation/ses-3/book_storage_admin/data/cha_ceph_sysreq.html
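The RAM and CPU rules of thumb above are easy to turn into a sizing helper. Using the production guidance (1.5 GB RAM per TB raw, 2 GHz per OSD), an OSD node like the Apollo 4200 in the earlier case study (12 × 8 TB disks) works out as follows (my own helper, not a SUSE tool):

```python
def osd_node_sizing(osd_count, disk_tb, ram_gb_per_tb=1.5, ghz_per_osd=2.0):
    """Minimum RAM and aggregate CPU for one OSD storage node,
    per the production rules of thumb on this slide."""
    raw_tb = osd_count * disk_tb        # raw capacity of the node
    ram_gb = raw_tb * ram_gb_per_tb     # 1.5 GB RAM per TB raw (production)
    cpu_ghz = osd_count * ghz_per_osd   # 2 GHz per OSD (production)
    return raw_tb, ram_gb, cpu_ghz

print(osd_node_sizing(12, 8))  # (96, 144.0, 24.0): 96 TB raw needs 144 GB RAM, 24 GHz
```

For the minimum (non-production) configuration you would instead pass `ram_gb_per_tb=1.0` and `ghz_per_osd=1.5`, matching the previous slide.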
  44. 44. 45 Unpublished Work of SUSE LLC. All Rights Reserved. This work is an unpublished work and contains confidential, proprietary and trade secret information of SUSE LLC. Access to this work is restricted to SUSE employees who have a need to know to perform tasks within the scope of their assignments. No part of this work may be practiced, performed, copied, distributed, revised, modified, translated, abridged, condensed, expanded, collected, or adapted without the prior written consent of SUSE. Any use or exploitation of this work without authorization could subject the perpetrator to criminal and civil liability. General Disclaimer: This document is not to be construed as a promise by any participating company to develop, deliver, or market a product. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. SUSE makes no representations or warranties with respect to the contents of this document, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. The development, release, and timing of features or functionality described for SUSE products remains at the sole discretion of SUSE. Further, SUSE reserves the right to revise this document and to make changes to its content, at any time, without obligation to notify any person or entity of such revisions or changes. All SUSE marks referenced in this presentation are trademarks or registered trademarks of Novell, Inc. in the United States and other countries. All third-party trademarks are the property of their respective owners.
