
Red Hat Storage Day Boston - Supermicro Super Storage


Paul McLeod, Sr. Product Manager, Supermicro



  1. Confidential © 2016 Supermicro | Supermicro SuperStorage | Red Hat Storage Day Boston, November 3, 2016 | Paul McLeod, Sr. Product Manager, PaulM@Supermicro.com
  3. Supermicro Total Solution Evolution: Subsystem Innovation, 1993-2003 (motherboard & building-blocks expertise; period of vertical integration); Server/Storage Innovation, IPO 2007, 2004-2013 (product-line growth & system optimization; Twin & GPU architectures breakthrough); Total Solution Innovation, 2014-2016 (software & total solution optimization; service & global expansion)
  4. SuperStorage Product Evolution: the Supermicro product portfolio expanding from compute to storage with the emergence of Software-Defined Storage. Custom storage software -> storage appliance -> Software-Defined Storage
  5. Comprehensive Portfolio for Software-Defined Storage. Throughput/latency performance: SAS 3.0 (12Gb/s), NVDIMM, all-flash NVMe. Capacity/density: up to 90 hot-swap bays in 4U, Simply Double, top-loading servers, double-sided servers. Scale-up scalability.
  6. All Flash Summary:
     SYS-1028U-TN10RT+: 10 hot-swap 2.5” NVMe drives; 40 PCIe lanes to 10 NVMe drives; performance: up to 7M IOPS (4K random read); 1000W Titanium-level high-efficiency digital power supply; dual-port 10GBase-T.
     SYS-2028U-TN24R4T+: 24 hot-swap 2.5” NVMe drives; 32 PCIe lanes to 24 NVMe drives; performance: up to 5.6M IOPS (4K random read); 1600W Titanium-level high-efficiency digital power supply; quad-port 10GBase-T; 2 rear hot-swap 2.5” SATA drives.
     SSG-2028R-NR48N: 48 hot-swap 2.5” NVMe drives; 32 PCIe lanes to 48 NVMe drives; performance: 5.6M IOPS (4K random read); 1620W Titanium-level high-efficiency digital power supply; SIOM; 2 rear hot-swap 2.5” SATA drives.
  7. x10-Series Storage Server, 2U/24: SSG-6028R-E1CR24 N/L.
     PROCESSOR: dual Intel Xeon E5-2600 v3/v4, Socket R. CHIPSET: Intel C612 Express. MEMORY: 24 DIMMs; up to 3TB ECC 3DS LRDIMM; 768GB ECC RDIMM available. EXPANSION: 2x PCI-E 3.0 x16 and 1x PCI-E 3.0 x8. EXTERNAL I/O: SIOM support for flexible networking options; 2x USB 3.0 ports. DRIVE BAYS: 24x hot-swap 3.5” SAS3/SATA3 drive bays; 2x 2.5” rear hot-swap drive bays. STORAGE CONTROLLER: LSI SAS3108 HW RAID (N-series) or 3008 IT mode (L-series). POWER SUPPLY: 1620W redundant, 80 PLUS Titanium. ADDITIONAL: IPMI 2.0 with dedicated LAN port & KVM; 4-pin PWM fan speed control; thermal and voltage monitoring.
     KEY FEATURES: dual Intel E5-2600 v3/v4 (Socket R); 24x hot-swap 3.5” SAS3/SATA3 bays (12Gb/s); IT-mode LSI 3008 (L-series) / 3108 HW RAID (N-series); 24x DDR4 DIMM slots; SIOM for flexible networking options; IPMI 2.0 (dedicated LAN) with virtual media/KVM over LAN; cable-arm for hot-swap access of the second drive row.
     APPLICATIONS: storage appliance; file servers; virtual tape library; active archive and backup.
  8. Software Defined Solutions: https://www.supermicro.com/solutions/storage_ceph.cfm
  9. Solution Objectives: generate well-balanced solutions without compromising availability, performance, or cost; derive configurations from empirical data gathered during testing; provide reference architectures to prime opportunities and accelerate sales and deployment.
  10. Red Hat Ceph Reference Architectures: http://www.redhat.com/en/resources/red-hat-ceph-storage-clusters-supermicro-storage-servers and https://www.redhat.com/en/resources/mysql-databases-ceph-storage-reference-architecture
  11. Workload Optimization (where Ceph and Gluster are used). Workload IO profiles:
     IOPS-optimized: examples: MySQL, MariaDB, PostgreSQL (e.g. Medallia). IO characteristics: high IOPS/GB, low-latency IO, small random IO, 70/30 read/write mix. Hardware: compute servers, high core:drive ratio, 10GbE, NVMe SSD.
     Throughput-optimized: examples: digital media serving, server virtualization (OpenStack Cinder) (e.g. Acquia, Bloomberg, Target, Walmart, Yahoo!, Facebook, Intuit). IO characteristics: high MB/s per GB, latency consistency, large sequential IO, balanced read/write. Hardware: storage servers, balanced core:drive ratio, 10GbE -> 40GbE, HDD -> SSD.
     Capacity/Archive: examples: digital media archive, object archive, big-data archive (e.g. Yahoo!, CERN). IO characteristics: low cost/GB, high write volume per hour, sequential IO, 90/10 write/read mix. Hardware: dense storage servers, low core:drive ratio, 40GbE, HDD.
  12. Supermicro: Time to Value. Input: workload requirements: IOPS per GB (provisioned IOPS), throughput (MB per second), capacity (price per capacity). Output: rack/cluster storage sizing and guidance, or turn-key cluster solutions, from public cloud to on-premise. Workload-defined; building-blocks structure; deterministic performance; best TCO. Speeds and feeds.
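The input-to-output sizing flow on this slide can be sketched as a toy function that maps the three workload inputs to the Ready Node families named later in the deck. The threshold values below are illustrative assumptions, not Supermicro guidance.

```python
# Toy sizing sketch: map workload requirements to a Supermicro Ceph
# Ready Node family. The numeric cut-offs are illustrative assumptions.

def recommend_node(iops_per_gb: float, throughput_mbps: float,
                   capacity_tb: float) -> str:
    """Pick a node family from the deck's portfolio (thresholds are guesses)."""
    if iops_per_gb > 1.0:
        # IOPS-optimized workloads (MySQL-style): flash-heavy MicroCloud nodes
        return "SYS-5038MR-OSD006P"
    if throughput_mbps > 1000:
        # Throughput-optimized: HDD with NVMe journal; pick density by capacity
        return "SSG-6028R-OSD072P" if capacity_tb <= 72 else "SSG-6048R-OSD216P"
    # Capacity/archive workloads: dense HDD nodes
    return "SSG-6048R-OSD360P"

# A MySQL-style workload lands on the flash-heavy node
print(recommend_node(iops_per_gb=2.0, throughput_mbps=100, capacity_tb=5))
```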
  13. Architectural Considerations: fundamentally different designs. Traditional Ceph workload: 50-300+ TB per server; magnetic media (HDD); low CPU-core:OSD ratio; 10GbE -> 40GbE. MySQL Ceph workload: < 10 TB per server; flash (SSD -> NVMe); high CPU-core:OSD ratio; 10GbE.
  14. Supermicro/Red Hat: Ceph IOPS testing setup.
     OSD storage servers: 5x SuperStorage SSG-6028R-OSDXXX; dual Intel Xeon E5-2650 v3 (10 cores each); 32GB of 1866 MHz DDR3 ECC SDRAM DIMMs; 2x 80GB boot drives; 4x 800GB Intel DC D3700 (hot-swap U.2 NVMe); 1x dual-port 10GbE network adaptor (AOC-STGN-i2S); 8x Seagate 6TB 7200 RPM SAS (ST6000NM0034); Mellanox networking installed but not used in the test.
     Clients: 3x SuperServer 2U Twin² (12 nodes); dual Intel Xeon E5-2670 v2; 64GB SDRAM DIMMs.
     Storage server software: Red Hat Ceph Storage 1.3 on Red Hat Enterprise Linux 7.1.
     Topology: 5x OSD nodes, 12x client nodes, and monitor nodes on shared 10G SFP+ networking.
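For scale, the flash capacity of the OSD tier above works out as follows. The 2x replication factor is an assumption (flash pools in the MySQL-on-Ceph reference architecture are commonly run at 2x); the slide does not state it.

```python
# Raw and usable flash capacity of the 5-node test cluster above.
osd_nodes = 5        # SSG-6028R-OSDXXX servers
nvme_per_node = 4    # Intel DC D3700 drives per server
nvme_size_gb = 800
replication = 2      # assumed replica count for the flash pool (not on the slide)

raw_gb = osd_nodes * nvme_per_node * nvme_size_gb
usable_gb = raw_gb // replication
print(raw_gb, usable_gb)   # 16 TB raw flash, 8 TB usable at 2x replication
```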
  15. Considering core-to-flash ratio: IOPS/GB near capacity-full (core and NVMe counts based on a 4-node cluster):
     80 cores, 8 NVMe (87% capacity): 18 IOPS/GB (100% write), 34 IOPS/GB (70/30 RW)
     40 cores, 4 NVMe (87% capacity): 18 IOPS/GB (100% write), 34 IOPS/GB (70/30 RW)
     80 cores, 4 NVMe (87% capacity): 19 IOPS/GB (100% write), 36 IOPS/GB (70/30 RW)
     80 cores, 12 NVMe (84% capacity): 6 IOPS/GB (100% write), 8 IOPS/GB (70/30 RW)
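The IOPS/GB metric used to compare these configurations is simply measured cluster IOPS normalized by provisioned capacity. A minimal helper, with purely hypothetical example numbers:

```python
def iops_per_gb(measured_iops: float, provisioned_gb: float) -> float:
    """Normalize measured cluster IOPS by provisioned capacity in GB."""
    return measured_iops / provisioned_gb

# Hypothetical example: 280k 4K random IOPS over 8,000 GB provisioned
print(iops_per_gb(280_000, 8_000))  # 35.0
```

Normalizing per provisioned GB is what makes flash-heavy and flash-light configurations directly comparable in charts like this one.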
  16. MicroCloud for Ceph/MySQL: optimal performance. 8x nodes in a 3U chassis, model SYS-5038MR-OSD006P. Per-node configuration: 1x Intel Xeon E5-2630 v4 CPU; 32GB memory; 1x 800GB Intel P3700 NVMe; 1x single-port 10G SFP+. https://www.redhat.com/en/resources/mysql-databases-ceph-storage-reference-architecture
  17. Supermicro’s Ceph Ready Nodes (X10 models):
     SSG-6018R-MON2 (monitor node): dual Intel Xeon E5-2630 v3, 64GB; 1x 800GB PCIe flash / NVMe.
     SYS-5038MR-OSD006P (IOPS per GB / provisioned IOPS): 8x nodes, each with a single Intel Xeon E5-2630 v4 and 32GB; 1x NVMe per node.
     SSG-6028R-OSD072P (throughput, MB/s): single Intel Xeon E5-2620 v3, 64GB; 12x 6TB HDD + 1x NVMe (12+1).
     SSG-6048R-OSD216P (throughput, MB/s): dual Intel Xeon E5-2630 v3, 128GB; 36x 6TB HDD + 2x NVMe (36+2).
     SSG-6048R-OSD360P (capacity, cost per GB): dual Intel Xeon E5-2690 v3, 256GB; 60x 6TB HDD + 12x SSD (60+12).
  18. Supermicro’s Red Hat Storage Ready Nodes, for Ceph and Gluster: monitor node; IOPS-per-GB optimized; throughput optimized; capacity optimized (high density, general purpose, backup & archive).
  19. Supermicro’s Total Solutions: from ready hardware to total solution, with flexibility across the range. Ready Nodes and Ready Racks (ready hardware): validated hardware; bare metal; flexible hardware service options. Turn-key Racks and Turn-key Clusters (turn-key cluster-level solutions): validated hardware; Red Hat Storage subscription; 24x7, 4-hour service.
  20. Q&A. Thank you.
  21. Backup
  22. Supermicro FatTwin and SuperStorage systems power BlackMesh managed services.
     Challenges: deliver the best customer experience and the highest degree of security to users in the public and private sectors; find an infrastructure vendor that meets all requirements, particularly with regard to TCO and short product-fulfillment lead times.
     Solutions: a 100% Supermicro server farm with over 500 systems deployed in three top-rated Tier 4 data centers (Las Vegas; Reston, VA; Toronto, ON), with 1.5 PB total capacity; high-density, high-performance FatTwin (SYS-F617R2-R72+) and SuperStorage (SSG-6027R-E1R12N) servers; RHEL, Red Hat Ceph, OpenStack, OpenShift, and Ansible all deployed on Supermicro.
     Results: exponential growth and continuous uptime; gaining more clients, most significantly federal agencies that require the highest degree of network security and reliability.
     “By leveraging Supermicro hardware, BlackMesh is able to maintain its goal of unlimited, on-demand support while producing cost-effective savings.” (Jason Ford, CTO of BlackMesh)
  23. Effect of Ceph cluster loading on IOPS/GB (optimal vs. full):
     14% capacity: 78 IOPS/GB (100% write), 134 IOPS/GB (70/30 RW)
     36% capacity: 37 IOPS/GB (100% write), 72 IOPS/GB (70/30 RW)
     72% capacity: 25 IOPS/GB (100% write), 37 IOPS/GB (70/30 RW)
     87% capacity: 19 IOPS/GB (100% write), 36 IOPS/GB (70/30 RW)
  24. Optimal scale-out IOPS per GB: 4-node cluster comparison across 80 cores / 4x NVMe, 40 cores / 4x NVMe, 80 cores / 8x NVMe, and 80 cores / 12x NVMe configurations.
