Michael Greene
VICE PRESIDENT, SOFTWARE AND SERVICES GROUP, INTEL
GENERAL MANAGER, SYSTEM TECHNOLOGIES AND OPTIMIZATION
@greene1of5
June 10, 2017
*Other names and brands may be claimed as the property of others.
From now until 2020, the size of the digital
universe will roughly double every two years*
Information Growth
What we do with data is changing; traditional
storage infrastructure does not solve
tomorrow’s problems
Complexity
Shift of IT services to cloud computing
and next-generation platforms
Cloud
Emergence of flash storage, new
storage media and software-defined
environments
New Technologies
Trends driving
the need for
Storage
Modernization
2
Source: IDC – The Digital Universe of Opportunities: Rich Data and the Increasing Value of the Internet of Things - April 2014
Enterprise IT Storage End-User Pain Points
SURGING CAPACITY IS THE PRIMARY CHALLENGE AND THE MAJOR DRIVER OF STORAGE NEEDS
Source: 451 Research, Voice of the Enterprise: Storage Q4 2015
Data/Capacity
Inadequate Performance
Licensing Cost & Maintenance Cost
Disaster Recovery
Multiple Storage Silos
Typical end-user storage pain points
Costs
Provisioning And Configuration
Performance
& Capabilities
Data Silos
3
Intel’s role in storage
Advance the Industry
Open Source & Standards
Build an Open Ecosystem
Intel® Storage Builders
End-user solutions
Cloud, Enterprise
Intel Technology Leadership
Storage Optimized Platforms
Intel® Xeon® E5-2600 v4 Platform
Intel® Xeon® Processor D-1500 Platform
Ethernet Controllers: 10/40/100GbE
Intel® SSDs for DC & Cloud
Storage Optimized Software
Intel® Intelligent Storage Acceleration Library
Intel® Storage Performance Development Kit
Intel® Cache Acceleration Software
VSM, COSBench, CeTune
SSD & Non-Volatile Memory
Interfaces: SATA, NVMe (PCIe)
Form Factors: 2.5”, M.2, U.2, PCIe AIC
New Technologies: 3D NAND, Intel® Optane
Cloud & Enterprise partner storage
solution architectures; 80+ partners
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific
computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in
fully evaluating your contemplated purchases, including the performance of that product when combined with other products.
Helping customers to enable cloud storage
Next-gen solution architectures
Intel solution architects have deep
expertise in Ceph for low-cost,
high-performance usage
4
5
Ceph* in PRC
Ceph* is very important in PRC
– Redevelopment based on the upstream code
– More companies move to Open Source storage solutions
Intel/Red Hat held three Ceph* Days in Beijing and Shanghai
– 1000+ attendees from 500+ companies
– A vibrant community and ecosystem
Growing number of PRC code contributors:
– Alibaba*, China Mobile*, Chinac*, eBay*, H3C*, Istuary*, Kylin
Cloud*, LETV*, Tencent*, UMCloud*, UnitedStack*, XSKY*,
ZTE*
Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
6
Ceph* at Intel – Our 2017 Ceph Focus Areas
Optimize for Intel® Platforms, Flash and Networking
• Hardware offloads through QAT & SoCs
• 3D XPoint™ enabling
• IA optimized storage libraries to reduce latency (ISA-L, SPDK)
Performance Profiling, Analysis and Community Contributions
Ceph* Enterprise readiness and Hardening
POCs
End Customer POCs
Go to Market
Enable IA-optimized, Ceph-based storage solutions
Intel® Storage
Acceleration Library
(Intel® ISA-L)
Intel® Storage Performance
Development Kit (Intel® SPDK)
Intel® Cache Acceleration
Software (Intel® CAS)
Virtual Storage Manager; CeTune Ceph Profiler
7
Ceph* performance trend with SSD
18.5x per-node performance improvement in Ceph all-flash array!
Refer to pages 14–17 for detailed configurations.
[Bar chart: Ceph 4K RW per-node performance optimization history]
– Ceph versions: 0.80.1 → 0.86 → 0.86 + jemalloc → 10.0.5 BlueStore → 12.0.0 + NUMA opt.
– Per-node throughput (IOPS): 588.25 → 3,673 → 13,573.75 → 57,093.4 → 68,000
– Stage-over-stage gains: 6.2x, 3.7x, 4.21x, 1.19x (an additional 1.05x label appears on the chart)
– Hardware configurations: 4x SNB_UP + 3x S3700 + 10x HDD; 4x IVB_DP + 6x S3700; 5x HSW_DP + 1x P3700 + 4x S3510; 5x BDW_DP + 1x P3700 + 4x P3520
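The chart's multipliers can be reproduced from the per-node throughput numbers on the slide. A quick sketch (note the headline 18.5x appears to be measured against the first all-SSD configuration, Ceph 0.86, rather than the HDD baseline):

```python
# Sanity-check of the speedup figures quoted on the chart, computed from the
# per-node 4K RW throughput numbers shown on the slide (IOPS).
stages = [
    ("0.80.1",               588.25),
    ("0.86",                3673.0),
    ("0.86 + jemalloc",    13573.75),
    ("10.0.5 BlueStore",   57093.4),
    ("12.0.0 + NUMA opt.", 68000.0),
]

# Stage-over-stage gains (should match the 6.2x / 3.7x / 4.21x / 1.19x labels).
gains = [b / a for (_, a), (_, b) in zip(stages, stages[1:])]
print([round(g, 2) for g in gains])  # [6.24, 3.7, 4.21, 1.19]

# The headline 18.5x compares the latest config against the first all-SSD one.
print(round(stages[-1][1] / stages[1][1], 1))  # 18.5
```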
8
The 1st Optane Ceph All-Flash Array Cluster!
Refer to page 17 for detailed configuration.
Intel® Optane™ + TLC 3D NAND
2.8M IOPS for 4K random read with extremely low latency!
0.9 ms average latency, 2.25 ms 99.99% tail latency
2.25x performance improvement compared with P3700 + 4x P3520 on HSW_DP
20x reduction in 99.99% tail latency compared with P3700
All-flash demo at
OpenStack
Summit Boston!
For details, check out the poster chat during Ceph Day
Call to
action
• Participate in the open source community and
in different storage projects
• Try our tools – and give us feedback
• CeTune: https://github.com/01org/CeTune
• Virtual Storage Manager: https://01.org/virtual-storage-manager
• COSBench: https://github.com/intel-cloud/cosbench
• Optimize Ceph* for efficient SDS solutions!
9
Have a productive Ceph Day* Beijing
A big thank you to:
Speakers from Intel, Red Hat*, QCT*, XSKY*, Inspur*, Alibaba*, ZTE*, China Mobile*…
Ceph.com and ceph.org.cn for the support
Your participation.
10
Welcome!
Legal notices
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this
document.
Intel disclaims all express and implied warranties, including without limitation, the implied warranties of
merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of
performance, course of dealing, or usage in trade.
This document contains information on products, services and/or processes in development. All information provided
here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule,
specifications and roadmaps.
The products and services described may contain defects or errors known as errata which may cause deviations from
published specifications. Current characterized errata are available on request.
Intel, the Intel logo, 3D XPoint, and Optane are trademarks of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others
© 2017 Intel Corporation.
11
Backup
13
14
Ceph* All Flash SATA configuration
- IVB (E5-2680 v2) + 6x S3700
COMPUTE NODE
2 nodes with Intel® Xeon® processor X5570 @
2.93GHz, 128GB mem
1 node with Intel Xeon processor E5-2680
@ 2.8GHz, 56GB mem
STORAGE NODE
Intel Xeon processor E5-2680 v2
32GB Memory
1xSSD for OS
6x 200GB Intel® SSD DC S3700
2 OSD instances per drive
WORKLOADS
• Fio with librbd
• 20x 30GB volumes per client
• 4 test cases: 4K random read & write; 64K
sequential read & write
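The fio/librbd workload described above might look roughly like the following job file. This is a sketch only: the pool name, volume name, queue depth, and runtime are illustrative assumptions, not the original test parameters.

```ini
; Hypothetical fio job approximating one of the described test cases
; (4K random write against an RBD volume via librbd).
[global]
ioengine=rbd          ; drive I/O through librbd
clientname=admin
pool=rbd              ; assumed pool name
direct=1
time_based=1
runtime=300           ; assumed duration

[4k-randwrite]
rbdname=vol0          ; assumed 30 GB test volume
rw=randwrite
bs=4k
iodepth=16            ; assumed queue depth
```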
[Diagram: 4 fio client nodes (1x 10Gb NIC each) connected to 4 storage nodes CEPH1–CEPH4 (2x 10Gb NIC each; CEPH1 also runs MON; OSD1…OSD12 per node)]
Test Environment
Ceph* All Flash SATA configuration
- HSW (E5-2699 v3) + P3700 + S3510
[Diagram: 5 fio client nodes (1x 10Gb NIC each) connected to 5 storage nodes CEPH1–CEPH5 (2x 10Gb NIC each; CEPH1 also runs MON; OSD1…OSD8 per node)]
5x Client Node
Intel® Xeon® processor E5-2699 v3 @
2.3GHz, 64GB memory
10Gb NIC
5x Storage Node
Intel Xeon processor E5-2699 v3 @
2.3GHz, 64GB memory
1x Intel® DC P3700 800G SSD for
Journal (U.2)
4x 1.6TB Intel® SSD DC S3510 as data
drives
2 OSDs per S3510 SSD
Workloads
• Fio with librbd
• 20x 30GB volumes per client
• 4 test cases: 4K random read & write;
64K sequential read & write
Test Environment
16
Ceph* All Flash 3D NAND configuration
- HSW (E5-2699 v3) + P3700 + P3520
5x Client Node
Intel® Xeon® processor E5-2699 v3
@ 2.3GHz, 64GB mem
10Gb NIC
5x Storage Node
Intel Xeon processor E5-2699 v3 @
2.3 GHz
128GB Memory
1x 400G SSD for OS
1x Intel® DC P3700 800G SSD for
journal (U.2)
4x 2.0TB Intel® SSD DC P3520 as data
drives
2 OSD instances on each P3520 SSD
Test Environment
[Diagram: 5 fio client nodes (1x 10Gb NIC each) connected to 5 storage nodes CEPH1–CEPH5 (2x 10Gb NIC each; CEPH1 also runs MON; OSD1…OSD8 per node)]
Workloads
• Fio with librbd
• 20x 30GB volumes per client
• 4 test cases: 4K random read &
write; 64K sequential read & write
17
Ceph* All Flash Optane configuration
- BDW (E5-2699 v4) + Optane + P4500
8x Client Node
• Intel® Xeon® processor E5-2699 v4 @ 2.3GHz,
64GB mem
• 1x X710 40Gb NIC
8x Storage Node
• Intel Xeon processor E5-2699 v4 @ 2.3 GHz
• 256GB Memory
• 1x 400G SSD for OS
• 1x Intel® DC P4800X 375G SSD as WAL and RocksDB
• 8x 2.0TB Intel® SSD DC P4500 as data drives
• 2 OSD instances on each P4500 SSD
• Ceph 12.0.0 with Ubuntu 14.04
Test Environment
[Diagram: 8 fio client nodes (1x 40Gb NIC each) connected to 8 storage nodes CEPH1–CEPH8 (2x 40Gb NIC each; CEPH1 also runs MON; OSD1…OSD16 per node)]
Workloads
• Fio with librbd
• 20x 30GB volumes per client
• 4 test cases: 4K random read & write; 64K
sequential read & write

Ceph Day Beijing - Storage Modernization with Intel & Ceph
