Using Recently Published Ceph Reference Architectures to Select Your Ceph Configuration
Daniel Ferber
Open Source Software Defined Storage Technologist, Intel Storage Group
May 25, 2016, Ceph Days Portland, Oregon
Legal notices
Copyright © 2016 Intel Corporation.
All rights reserved. Intel, the Intel Logo, Xeon, Intel Inside, and Intel Atom are trademarks of Intel Corporation in the U.S.
and/or other countries.
*Other names and brands may be claimed as the property of others.
FTC Optimization Notice
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to
Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not
guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel.
Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not
specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference
Guides for more information regarding the specific instruction sets covered by this notice. Notice revision #20110804
The cost reduction scenarios described in this document are intended to enable you to get a better understanding of how
the purchase of a given Intel product, combined with a number of situation-specific variables, might affect your future cost
and savings. Nothing in this document should be interpreted as either a promise of or contract for a given level of costs.
Welcome to Intel Hillsboro
and Agenda for First Half of this Talk
• Inventory of Published Reference Architectures from Red Hat and SUSE
• Walk through highlights of a soon-to-be-published Intel and Red Hat Ceph
Reference Architecture paper
• Introduce an Intel all-NVMe Ceph configuration benchmark for MySQL
• Show examples of Ceph solutions
Jack Zhang, Intel SSD Architect, will then present the second half of this talk
What Are the Key Components of a Reference Architecture?
• Starts with workload (use case) and points to one or more resulting
recommended configurations
• Configurations should be recipes that one can purchase and build
• Key related elements should be recommended
• Replication versus EC, media types for storage, failure domains
• Ideally, performance data and tunings are supplied for the configurations
Tour of Existing Reference Architectures
Available Reference Architectures (recipes)
• http://www.redhat.com/en/files/resources/en-rhst-cephstorage-supermicro-INC0270868_v2_0715.pdf
• http://www.qct.io/account/download/download?order_download_id=1065&dtype=Reference%20Architecture
• https://www.redhat.com/en/resources/red-hat-ceph-storage-hardware-configuration-guide
• https://www.percona.com/resources/videos/accelerating-ceph-database-workloads-all-pcie-ssd-cluster
• https://www.percona.com/resources/videos/mysql-cloud-head-head-performance-lab
• http://h20195.www2.hpe.com/v2/GetDocument.aspx?docname=4aa6-3911enw
• https://intelassetlibrary.tagcmd.com/#assets/gallery/11492083
A Brief Look at 3
of the Reference Architecture Documents
QCT Ceph Performance and Sizing Guide
• Target audience: Mid-size to large cloud and enterprise customers
• Showcases Intel based QCT solutions for multiple customer workloads
• Introduces a three tier configuration and solution model:
• IOPS Optimized, Throughput Optimized, Capacity Optimized
• Specifies orderable QCT solutions based on the above classifications
• Shows actual Ceph performance observed for the configurations
• Purchase fully configured solutions
per above model from QCT
• Red Hat Ceph Storage Pre-Installed
• Red Hat Ceph Storage support included
• Datasheets and white papers at
www.qct.io
*Other names and brands may be claimed as the property of others.
Supermicro Performance and Sizing Guide
• Target audience: Mid-size to large cloud and enterprise customers
• Showcases Intel based Supermicro solutions for multiple customer workloads
• Introduces a three tier configuration and solution model:
• IOPS Optimized, Throughput Optimized, Capacity Optimized
• Specifies orderable Supermicro solutions based on the above classifications
• Shows actual Ceph performance observed for the configurations
• Purchase fully configured solutions
per above model from Supermicro
• Red Hat Ceph Storage Pre-Installed
• Red Hat Ceph Storage support included
• Datasheets and white papers at
supermicro.com
*Other names and brands may be claimed as the property of others.
Intel Solutions for Ceph Deployments
• Target audience: Mid-size to large cloud and enterprise customers
• Showcases Intel based solutions for multiple customer workloads
• Uses the three tier configuration and solution model:
• IOPS Optimized, Throughput Optimized, Capacity Optimized
• Contains Intel configurations and performance data
• Contains a Yahoo case study
• Contains specific use case examples
• Adds a Good, Better, Best model for
all SSD Ceph configurations
• Adds configuration and performance data for Intel® Cache Acceleration Software (CAS)
• Overviews CeTune and VSM tools
• Datasheets and white papers at
intelassetlibrary.tagcmd.com/#assets/gallery/11492083
*Other names and brands may be claimed as the property of others.
Quick Look at 3 Tables Inside the Intel
and Red Hat Reference Architecture Document
(to be published soon)
Generic Red Hat Ceph Reference Architecture Preview
https://www.redhat.com/en/resources/red-hat-ceph-storage-hardware-configuration-guide
*Other names and brands may be claimed as the property of others.
• IOPS-optimized config is all NVMe SSD
  – Typically block with replication; allows database workloads
  – Journals are NVMe
  – BlueStore, when supported, will increase performance
• Throughput-optimized is a balanced config
  – HDD storage with SSD journals (an illustrative ceph.conf sketch follows this list)
  – Block or object, with replication
• Capacity-optimized is typically all-HDD storage
  – Object and EC
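As a purely illustrative sketch (FileStore-era layout; device paths and sizes are assumptions, not recommendations from the paper), the throughput-optimized pattern of HDD data drives with SSD/NVMe journals might be reflected in ceph.conf roughly as follows:

```ini
; Illustrative only -- HDD data drives with journals on an SSD/NVMe partition (FileStore era).
; Paths and sizes are hypothetical; real deployments usually set this up at provisioning
; time (e.g. "ceph-disk prepare <data-device> <journal-device>") rather than by hand.
[osd]
osd journal size = 10240                          ; 10 GB journal per OSD (assumed value)
osd journal = /dev/disk/by-partlabel/journal-$id  ; hypothetical per-OSD SSD/NVMe partition
```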
Intel and Red Hat Ceph Reference Architecture Preview
https://www.redhat.com/en/resources/red-hat-ceph-storage-hardware-configuration-guide
*Other names and brands may be claimed as the property of others.
• IOPS-optimized Ceph clusters are typically in the TB range
• Throughput clusters will likely move to 2.5-inch enclosures and all-SSD over time
• Capacity-optimized clusters are likely to favor 3.5-inch HDD storage
Intel and Red Hat Ceph Reference Architecture Preview
*Other names and brands may be claimed as the property of others.
• Specific recommended Intel processor and SSD models are now specified
• Intel processor recommendations depend on how many OSDs are used
Intel and Red Hat Ceph Reference Architecture
*Other names and brands may be claimed as the property of others.
• Recommendations for specific Intel SSDs and journals, with two options
• Specific Intel processor recommendations, depending on how many OSDs
Intel and Red Hat Ceph Reference Architecture
*Other names and brands may be claimed as the property of others.
• No SSDs for the capacity model
• Specific Intel processor recommendations are the same as for the previous throughput config, and are based on the number of OSDs
Intel all-NVMe SSD
Ceph Reference Architecture
Presented by Intel at Percona Live 2016
*Other names and brands may be claimed as the property of others.
An “All-NVMe” high-density Ceph Cluster Configuration
5-node all-NVMe Ceph cluster (Supermicro 1028U-TN10RT+ servers):
• Dual-Xeon E5-2699 v4 @ 2.2 GHz, 44 cores w/ HT, 128 GB DDR4 per node
• CentOS 7.2 (kernel 3.10-327), Ceph v10.1.2, BlueStore, async
• Cluster network: 2x 10GbE
• Each node hosts CephOSD1–CephOSD16 across NVMe1–NVMe4
10x client systems + 1x Ceph MON:
• Dual-socket Xeon E5-2699 v3 @ 2.3 GHz, 36 cores w/ HT, 128 GB DDR4
• Public network: 2x 10GbE
Workloads (Docker containers, krbd):
• DB containers (MySQL DB servers): 16 vCPUs, 32 GB memory, 200 GB RBD volume, 100 GB MySQL dataset, InnoDB buffer cache 25 GB (25%)
• Client containers (Sysbench clients, FIO): 16 vCPUs, 32 GB RAM
• FIO 2.8, Sysbench 0.5; two test sets (FIO and Sysbench/MySQL) – an illustrative fio job sketch follows the performance notice below
*Other names and brands may be claimed as the property of others.
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Any difference in system hardware or software design or configuration may affect actual performance. See configuration slides in backup for details on software configuration and test benchmark parameters.
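For orientation only, a 4K random-read fio job of the kind referred to above might look like the sketch below. This is a hedged illustration: the pool and image names, runtime, and queue depth are made up, and it uses fio's librbd engine; the krbd tests in the diagram would instead use a normal block-device job (e.g. ioengine=libaio with filename= pointing at the mapped /dev/rbdX device).

```ini
; Illustrative fio job (fio 2.x, librbd engine) -- not the presenters' exact job file
[global]
ioengine=rbd
clientname=admin
pool=rbd              ; hypothetical pool name
rbdname=fio-test      ; hypothetical RBD image
rw=randread
bs=4k
time_based=1
runtime=300

[4k-randread-qd16]
iodepth=16            ; assumed queue depth
```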
Tunings for the All-NVMe Ceph Cluster
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Any difference in system hardware or
software design or configuration may affect actual performance. See configuration slides in backup for details on software configuration and test benchmark
parameters.
• TCMalloc with 128 MB thread cache (or use JEMalloc)
• Disable debug
• Disable auth
(An illustrative ceph.conf sketch for these tunings follows.)
*Other names and brands may be claimed as the property of others.
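These bullets map onto configuration roughly as sketched below. This is an assumption-laden illustration (the subsystem list and values are not the presenters' exact settings); the TCMalloc thread cache is an environment variable for the daemons, and switching to JEMalloc is a build/link-time choice rather than a ceph.conf option.

```ini
; Illustrative ceph.conf fragment -- not the benchmark's exact tuning file
[global]
; "Disable auth"
auth_cluster_required = none
auth_service_required = none
auth_client_required  = none

; "Disable debug" -- silence the chattiest subsystems (partial list)
debug_ms = 0/0
debug_osd = 0/0
debug_filestore = 0/0
debug_auth = 0/0
```

The 128 MB TCMalloc thread cache is typically set in the daemons' environment, e.g. TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 in /etc/sysconfig/ceph on RHEL/CentOS-style installs.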
All NVMe Flash Ceph Storage – Summary
• Intel NVMe Flash storage works for low-latency workloads
• Ceph makes a compelling case for database workloads
• 1.4 million random read IOPS is achievable in 5U with ~1 ms latency today
• Sysbench MySQL OLTP performance was good: 400K 70/30% OLTP QPS @ ~50 ms average
• Using Xeon E5 v4 standard high-volume servers and Intel NVMe SSDs, one can now deploy a high-performance Ceph cluster for database workloads
• Recipe and tunings for this solution: www.percona.com/live/data-performance-conference-2016/content/accelerating-ceph-database-workloads-all-pcie-ssd-cluster
Software and workloads used in performance tests may have been optimized for performance only on Intel
microprocessors. Any difference in system hardware or software design or configuration may affect actual performance.
See configuration slides in backup for details on software configuration and test benchmark parameters. *Other names and brands may be claimed as the property of others.
Ceph Solutions Available in Addition to the QCT, Supermicro, and HP Solutions Already Mentioned
Thomas Krenn SUSE Enterprise Storage
https://www.thomas-krenn.com/en/products/storage-systems/suse-enterprise-storage.html
*Other names and brands may be claimed as the property of others.
Fujitsu Intel-Based Ceph Appliance
http://www.fujitsu.com/global/products/computing/storage/eternus-cd/s2/ *Other names and brands may be claimed as the property of others.
Ceph Reference Architectures
Summary
Ceph Reference Architectures Summary
• The community has a growing number of good reference architectures
• Some point to specific hardware, others are generic
• Different workloads are catered for
• Some of the documents contain performance and tuning information
• Commercial professional services and software support are available
• Intel will continue to work with its ISV and hardware systems partners on
reference architectures
• And continue Intel’s Ceph development focused on Ceph performance
Next – a Focus on NVM Technologies for Today's and Tomorrow's Ceph
Jack Zhang, Enterprise Architect, Intel Corporation
May 2016
Solid State Drive (SSD) for Ceph today
Three Ceph* Configurations and data
Performance – Motivations for SSD on Ceph
Storage providers are struggling to achieve the required high performance
• There is a growing trend for cloud providers to adopt SSDs
  – CSPs who want to build an EBS-like service for their OpenStack-based public/private cloud
• Strong demand to run enterprise applications
  – OLTP workloads running on Ceph
  – A high-performance, multi-purpose Ceph cluster is a key advantage
  – Performance is still an important factor
• SSD prices continue to decrease
Three Configurations for a Ceph Storage Node
• Standard/Good: NVMe/PCIe SSD for journal + caching, HDDs as OSD data drives
  Example: 1x Intel P3700 1.6TB as journal + Intel iCAS caching software + 12 HDDs
• Better (best TCO, as of today): NVMe/PCIe SSD as journal + high-capacity SATA SSDs as data drives
  Example: 1x Intel P3700 800GB + 4x Intel S3510 1.6TB
• Best performance: all NVMe/PCIe SSDs
  Example: 4x Intel P3700 2TB SSDs
Ceph storage node – Good
• CPU: Intel(R) Xeon(R) CPU E5-2650 v3
• Memory: 64 GB
• NIC: 10GbE
• Disks: 1x 1.6TB P3700 + 12x 4TB HDDs (1:12 ratio); P3700 as journal and cache
• Caching software: Intel iCAS 3.0 (option: RSTe/MD4.3)
Ceph storage node – Better
• CPU: Intel(R) Xeon(R) CPU E5-2690
• Memory: 128 GB
• NIC: Dual 10GbE
• Disks: 1x 800GB P3700 + 4x S3510 1.6TB
Ceph storage node – Best
• CPU: Intel(R) Xeon(R) CPU E5-2699 v3
• Memory: >= 128 GB
• NIC: 2x 40GbE, 4x dual 10GbE
• Disks: 4x P3700 2TB
Intel® NVMe SSD + CAS Accelerates Ceph (Yahoo! case study)
CHALLENGE:
• How can I use Ceph as a scalable storage solution? (High latency and low throughput due to erasure coding, double writes, and huge numbers of small files)
• Using over-provisioning to address performance is costly.
SOLUTION (Intel® CAS):
• Intel® NVMe SSD – consistently amazing
• Intel® CAS 3.0 feature – hinting
• Intel® CAS 3.0 fine-tuned for Yahoo! – cache metadata
YAHOO! PERFORMANCE GOAL: 2x throughput, 1/2 latency
COST REDUCTION:
• CapEx savings (over-provisioning); OpEx savings (power, space, cooling…)
• Improved scalability planning (performance and predictability)
(Diagram: User → Web Server → Client → Ceph Cluster)
Yahoo! (Ceph obj) - Results
Test Environment
5x Client Nodes
• Intel® Xeon™ processor E5-2699 v3 @ 2.3 GHz, 64 GB memory
• 10Gb NIC
5x Storage Nodes
• Intel® Xeon™ processor E5-2699 v3 @ 2.3 GHz, 128 GB memory
• 1x 1TB HDD for OS
• 1x Intel® DC P3700 800GB SSD for journal (U.2)
• 4x 1.6TB Intel® SSD DC S3510 as data drives
• 2 OSD instances on each S3510 SSD
Topology: FIO clients on CLIENT 1–5 (1x 10Gb NIC each) drive storage nodes CEPH1–CEPH5 (8 OSDs each; MON on CEPH1) over a 2x 10Gb NIC network.
Note: refer to the backup slides for the detailed test configuration (hardware, Ceph, and testing scripts).
All-SSD, Better Configuration – NVMe/PCIe + SATA SSDs
• 1.08M IOPS for 4K random read and 144K IOPS for 4K random write, with tunings and optimizations
Random read performance (RBD scale test, latency vs. IOPS):
• 1.08M 4K random read IOPS @ 3.4 ms
• 500K 8K random read IOPS @ 8.8 ms
• 300K 16K random read IOPS @ 10 ms
• 63K 64K random read IOPS @ 40 ms
Random write performance (RBD scale test, latency vs. IOPS):
• 144K 4K random write IOPS @ 4.3 ms
• 132K 8K random write IOPS @ 4.1 ms
• 88K 16K random write IOPS @ 2.7 ms
• 23K 64K random write IOPS @ 2.6 ms
Excellent random read performance and acceptable random write performance
All-SSD (NVMe + SATA SSD) configuration – breaking a million IOPS
Comparison: SSD Cluster vs. HDD Cluster
• Both clusters have journals on PCIe/NVMe SSD
• 4K random write: need ~58x the HDD cluster (~2,320 HDDs) to get the same performance
• 4K random read: need ~175x the HDD cluster (~7,024 HDDs) to get the same performance
All-SSD Ceph provides the best TCO (both CapEx and OpEx): not only performance, but also space, power, failure rate, etc.
Client nodes: 5 nodes with Intel® Xeon™ processor E5-2699 v3 @ 2.30 GHz, 64 GB memory; OS: Ubuntu Trusty
Storage nodes: 5 nodes with Intel® Xeon™ processor E5-2699 v3 @ 2.30 GHz, 128 GB memory; Ceph version: 9.2.0; OS: Ubuntu Trusty
• 1x P3700 SSD for journal per node
Cluster difference:
• SSD cluster: 4x S3510 1.6TB as OSDs per node
• HDD cluster: 14x SATA 7200 RPM HDDs as OSDs per node
Performance Observations and Tunings
LevelDB became the bottleneck
• Single-threaded LevelDB pushed one core to 100% utilization
Omap overhead
• Among the 3800+ threads, on average ~47 threads are running: ~10 pipe threads and ~9 OSD op threads; most OSD op threads are asleep (top -H)
• OSD op threads are waiting for the FileStore throttle to be released
• Disabling omap operations speeds up release of the FileStore throttle, which puts more OSD op threads into the running state (on average ~105 threads running)
• Throughput improved by 63%
High CPU consumption
• 70% CPU utilization of two high-end Xeon E5 v3 processors (36 cores) with 4x S3510
• perf showed that the most CPU-intensive functions are malloc, free, and other system calls
*Bypass omap: ignore object_map->set_keys in FileStore::_omap_setkeys; for tests only
Performance Tunings
• Up to 16x performance improvement for 4K random read; peak throughput 1.08M IOPS
• Up to 7.6x performance improvement for 4K random write; 140K IOPS

Tuning     | 4K Random Read                    | 4K Random Write
Default    | Single OSD                        | Single OSD
Tuning-1   | 2 OSD instances per SSD           | 2 OSD instances per SSD
Tuning-2   | Tuning-1 + debug = 0              | Tuning-1 + debug = 0
Tuning-3   | Tuning-2 + jemalloc               | Tuning-2 + op_tracker off, fd cache tuning
Tuning-4   | Tuning-3 + read_ahead_size = 16   | Tuning-3 + jemalloc
Tuning-5   | Tuning-4 + osd_op_thread = 32     | Tuning-4 + RocksDB to store omap
Tuning-6   | Tuning-5 + rbd_op_thread = 4      | N/A

(Chart: normalized 4K random read/write performance at each tuning level; an illustrative ceph.conf sketch of these options follows.)
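For orientation, several of the named tunings correspond to ceph.conf options along the lines of the sketch below (a hedged illustration using Hammer/Infernalis-era option names; running two OSD instances per SSD and linking against jemalloc are deployment/build choices, not ceph.conf settings, and the exact values here are assumptions):

```ini
; Illustrative only -- not the exact tuning file behind these results
[global]
debug_ms = 0/0                       ; "debug = 0"
debug_osd = 0/0

[osd]
osd_op_threads = 32                  ; "osd_op_thread = 32"
osd_enable_op_tracker = false        ; "op_tracker off"
filestore_fd_cache_size = 10240      ; "tuning fd cache" (assumed value)
filestore_omap_backend = rocksdb     ; "RocksDB to store omap"

[client]
rbd_op_threads = 4                   ; "rbd_op_thread = 4"
```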
Ceph Storage Cluster: Best Configuration – All-NVMe SSDs
• OSD system config: Intel Xeon E5-2699 v3 2x @ 2.30 GHz, 72 cores w/ HT, 96GB, cache 46080KB, 128GB DDR4
• Each system with 4x P3700 800GB NVMe, partitioned into 4 OSDs each, 16 OSDs total per node
• FIO client systems: Intel Xeon E5-2699 v3 2x @ 2.30 GHz, 72 cores w/ HT, 96GB, cache 46080KB, 128GB DDR4
• Ceph v0.94.3 Hammer release, CentOS 7.1, 3.10-229 kernel, linked with JEMalloc 3.6
• CBT used for testing and data acquisition
• Single 10GbE network for client & replication data transfer, replication factor 2
Topology: 5x SuperMicro 1028U OSD nodes (each hosting CephOSD1–CephOSD16 across NVMe1–NVMe4) on the 10 Gbps Ceph network (192.168.142.0/24); FIO RBD clients plus a CBT/Zabbix monitoring node run on two FatTwin chassis (each 4x dual-socket Xeon E5 v3).
Intel Xeon E5 v3 18 Core CPUs
Intel P3700 NVMe PCI-e Flash
Easily serviceable NVMe Drives
First Ceph cluster to break 1 million 4K random IOPS, with ~1 ms response time
IO-depth scaling, latency vs. IOPS (100% 4K random read, 100% 4K random write, 70/30% 4K random OLTP):
• 1M 100% 4K random read IOPS @ ~1.1 ms
• 400K 70/30% (OLTP) 4K random IOPS @ ~3 ms
• 171K 100% 4K random write IOPS @ 6 ms
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Any difference in system hardware or
software design or configuration may affect actual performance. See configuration slides in backup for details on software configuration and test benchmark
parameters.
All-NVMe Configuration – Million IOPS, Low Latency
• OSD System Config: Intel Xeon E5-2699 v3 2x@ 2.30 GHz, 72 cores w/ HT, 96GB, Cache 46080KB, 128GB DDR4
• Each system with 4x P3700 800GB NVMe, partitioned into 4 OSD’s each, 16 OSD’s total per node
• FIO Client Systems: Intel Xeon E5-2699 v3 2x@ 2.30 GHz, 72 cores w/ HT, 96GB, Cache 46080KB, 128GB DDR4
• Ceph v0.94.3 Hammer Release, CentOS 7.1, 3.10-229 Kernel, Linked with JEMalloc 3.6
• CBT used for testing and data acquisition
• Single 10GbE network for client & replication data transfer, Replication factor 2
3D NAND and 3D XPoint™ for Ceph tomorrow
NAND Flash and 3D XPoint™ Technology for
Ceph Tomorrow
• 3D MLC and TLC NAND: the building block enabling expansion of SSDs into HDD segments
• 3D XPoint™: building blocks for ultra-high-performance storage & memory
3D XPoint™ TECHNOLOGY
In pursuit of large memory capacity … word (cache line) access … immediately available …
• Crosspoint structure: selectors allow dense packing and individual access to bits
• Large memory capacity: crosspoint & scalable – memory layers can be stacked in a 3D manner
• NVM breakthrough material advances: compatible switch and memory cell materials
• Immediately available high performance: cell and array architecture that can switch states 1000x faster than NAND
Memory/storage hierarchy (latency and size of data, relative to SRAM):
• SRAM: latency 1X, size of data 1X
• DRAM: latency ~10X, size of data ~100X
• 3D XPoint™ memory media: latency ~100X, size of data ~1,000X
• NAND SSD: latency ~100,000X, size of data ~1,000X
• HDD: latency ~10 millionX, size of data ~10,000X
Technology claims are based on comparisons of latency, density and write cycling metrics amongst memory technologies recorded on published specifications of in-market memory products against internal Intel specifications.
3D XPoint™ TECHNOLOGY
Breaks the Memory/Storage Barrier
Intel® Optane™ (prototype) vs Intel® SSD DC P3700 Series at QD=1
Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.
Server configuration: 2x Intel® Xeon® E5-2690 v3; NAND-based NVM Express* (NVMe) SSD: Intel P3700 800 GB; 3D XPoint-based SSD: Optane NVMe; OS: Red Hat* 7.1.
3D XPoint™ Opportunities – BlueStore Backend
Three usages for a PMEM device:
• Backend of BlueStore: raw PMEM block device, or a file on a DAX-enabled FS
• Backend of RocksDB: raw PMEM block device, or a file on a DAX-enabled FS
• Backend of RocksDB's WAL: raw PMEM block device, or a file on a DAX-enabled FS
Two methods for accessing PMEM devices:
• libpmemblk
• mmap + libpmemlib
(Diagram: BlueStore stores data directly on a PMEMDevice, with metadata in RocksDB running on BlueFS over PMEMDevices; the PMEM devices are accessed either through the libpmemblk API or via mmap + load/store (libpmemlib) on files in a DAX-enabled file system.)
3D NAND – a Cost-Effective Ceph Solution
An enterprise-class, highly reliable, feature-rich, and cost-effective AFA solution
• NVMe SSD is today's SSD, and 3D NAND or TLC SSD is today's HDD
  – NVMe as journal, high-capacity SATA SSD or 3D NAND SSD as the data store
  – Provides high performance and high capacity in a more cost-effective solution
  – 1M 4K random read IOPS delivered by 5 Ceph nodes
  – Cost effective: it would take ~1000 HDD Ceph nodes (10K HDDs) to deliver the same throughput
  – High capacity: 100TB in 5 nodes
• With special software optimization on the FileStore and BlueStore backends
(Diagram: two example Ceph nodes – one with 1x P3700 M.2 800GB journal plus 4x S3510 1.6TB SATA SSD data drives; one with P3700 / 3D XPoint™ SSDs for the journal plus 5x P3520 4TB 3D NAND data drives.)
Legal Notices and Disclaimers
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer.
No computer system can be absolutely secure.
Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to
evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.
Cost reduction scenarios described are intended as examples of how a given Intel-based product, in the specified circumstances and configurations, may affect future costs and provide cost
savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction.
This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the
latest forecast, schedule, specifications and roadmaps.
Statements in this document that refer to Intel’s plans and expectations for the quarter, the year, and the future, are forward-looking statements that involve a number of risks and uncertainties. A detailed
discussion of the factors that could affect Intel’s results and plans is included in Intel’s SEC filings, including the annual report on Form 10-K.
The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate.
Intel, Xeon and the Intel logo are trademarks of Intel Corporation in the United States and other countries.
*Other names and brands may be claimed as the property of others.
© 2015 Intel Corporation.
Legal Information: Benchmark and Performance Claims
Disclaimers
Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. Performance tests, such as
SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors
may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including
the performance of that product when combined with other products.
Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual
performance. Consult other sources of information to evaluate performance as you consider your purchase.
Test and System Configurations: See Back up for details.
For more complete information about performance and benchmark results, visit http://www.intel.com/performance.
Risk Factors
The above statements and any others in this document that refer to plans and expectations for the first quarter, the year and the future are forward-looking statements that involve a
number of risks and uncertainties. Words such as "anticipates," "expects," "intends," "plans," "believes," "seeks," "estimates," "may," "will," "should" and their variations identify
forward-looking statements. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking statements. Many factors could affect
Intel's actual results, and variances from Intel's current expectations regarding such factors could cause actual results to differ materially from those expressed in these forward-
looking statements. Intel presently considers the following to be important factors that could cause actual results to differ materially from the company's expectations. Demand for
Intel’s products is highly variable and could differ from expectations due to factors including changes in the business and economic conditions; consumer confidence or income levels;
customer acceptance of Intel’s and competitors’ products; competitive and pricing pressures, including actions taken by competitors; supply constraints and other disruptions affecting
customers; changes in customer order patterns including order cancellations; and changes in the level of inventory at customers. Intel’s gross margin percentage could vary
significantly from expectations based on capacity utilization; variations in inventory valuation, including variations related to the timing of qualifying products for sale; changes in
revenue levels; segment product mix; the timing and execution of the manufacturing ramp and associated costs; excess or obsolete inventory; changes in unit costs; defects or
disruptions in the supply of materials or resources; and product manufacturing quality/yields. Variations in gross margin may also be caused by the timing of Intel product
introductions and related expenses, including marketing expenses, and Intel’s ability to respond quickly to technological developments and to introduce new features into existing
products, which may result in restructuring and asset impairment charges. Intel's results could be affected by adverse economic, social, political and physical/infrastructure conditions
in countries where Intel, its customers or its suppliers operate, including military conflict and other security risks, natural disasters, infrastructure disruptions, health concerns and
fluctuations in currency exchange rates. Results may also be affected by the formal or informal imposition by countries of new or revised export and/or import and doing-business
regulations, which could be changed without prior notice. Intel operates in highly competitive industries and its operations have high costs that are either fixed or difficult to reduce in
the short term. The amount, timing and execution of Intel’s stock repurchase program and dividend program could be affected by changes in Intel’s priorities for the use of cash, such
as operational spending, capital spending, acquisitions, and as a result of changes to Intel’s cash flows and changes in tax laws. Product defects or errata (deviations from published
specifications) may adversely impact our expenses, revenues and reputation. Intel’s results could be affected by litigation or regulatory matters involving intellectual property,
stockholder, consumer, antitrust, disclosure and other issues. An unfavorable ruling could include monetary damages or an injunction prohibiting Intel from manufacturing or selling
one or more products, precluding particular business practices, impacting Intel’s ability to design its products, or requiring other remedies such as compulsory licensing of intellectual
property. Intel’s results may be affected by the timing of closing of acquisitions, divestitures and other significant transactions. A detailed discussion of these and other factors that
could affect Intel’s results is included in Intel’s SEC filings, including the company’s most recent reports on Form 10-Q, Form 10-K and earnings release.
Rev. 1/15/15
Ceph Benchmark/Deployment/Tuning Tool: CeTune
• CeTune controller
  – Reads configuration files and controls the process to deploy, benchmark, and analyze the collected data
• CeTune workers
  – Controlled by the CeTune controller; act as workload generators and system metrics collectors
Intel® Virtual Storage Manager
• Management framework = consistent configuration
• Operator-friendly interface for management & monitoring
(Diagram: VSM Agents run on each Ceph node – cluster servers (OSDs/Monitor/MDS) and client servers – and communicate with the VSM Controller (controller node server) over SSH/sockets; the operator reaches VSM over HTTP through the data center firewall, and the controller integrates with the OpenStack Nova controller server(s) via API.)
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
 
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdfFIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdfFIDO Alliance
 
Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...Product School
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
 

Recently uploaded (20)

UiPath Test Automation using UiPath Test Suite series, part 2
UiPath Test Automation using UiPath Test Suite series, part 2UiPath Test Automation using UiPath Test Suite series, part 2
UiPath Test Automation using UiPath Test Suite series, part 2
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
 
"Impact of front-end architecture on development cost", Viktor Turskyi
"Impact of front-end architecture on development cost", Viktor Turskyi"Impact of front-end architecture on development cost", Viktor Turskyi
"Impact of front-end architecture on development cost", Viktor Turskyi
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
 
Accelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish CachingAccelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish Caching
 
The Future of Platform Engineering
The Future of Platform EngineeringThe Future of Platform Engineering
The Future of Platform Engineering
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
 
JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and Grafana
 
НАДІЯ ФЕДЮШКО БАЦ «Професійне зростання QA спеціаліста»
НАДІЯ ФЕДЮШКО БАЦ  «Професійне зростання QA спеціаліста»НАДІЯ ФЕДЮШКО БАЦ  «Професійне зростання QA спеціаліста»
НАДІЯ ФЕДЮШКО БАЦ «Професійне зростання QA спеціаліста»
 
UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3
 
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
 
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
 
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...
 
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMsTo Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
 
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
 
Neuro-symbolic is not enough, we need neuro-*semantic*
Neuro-symbolic is not enough, we need neuro-*semantic*Neuro-symbolic is not enough, we need neuro-*semantic*
Neuro-symbolic is not enough, we need neuro-*semantic*
 
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdfFIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
 
Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
 

Using Recently Published Ceph Reference Architectures to Select Your Ceph Configuration

  • 1. Using Recently Published Ceph Reference Architectures to Select Your Ceph Daniel Ferber Open Source Software Defined Storage Technologist, Intel Storage Group May 25, 2016 Ceph Days Portland, Oregon
  • 2. 2
  • 3. Legal notices Copyright © 2016 Intel Corporation. All rights reserved. Intel, the Intel Logo, Xeon, Intel Inside, and Intel Atom are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. FTC Optimization Notice Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice revision #20110804 The cost reduction scenarios described in this document are intended to enable you to get a better understanding of how the purchase of a given Intel product, combined with a number of situation-specific variables, might affect your future cost and savings. Nothing in this document should be interpreted as either a promise of or contract for a given level of costs. 3
  • 4. Welcome to Intel Hillsboro and Agenda for First Half of this Talk • Inventory of Published Referenced Architectures from Red Hat and SUSE • Walk through highlights of a soon to be published Intel and Red Hat Ceph Reference Architecture paper • Introduce an Intel all-NVMe Ceph configuration benchmark for MySQL • Show examples of Ceph solutions Then Jack Zhang, Intel SSD Architect – second half of this presentation
  • 5. What Are Reference Architecture Key Components • Starts with workload (use case) and points to one or more resulting recommended configurations • Configurations should be recipes that one can purchase and build • Key related elements should be recommended • Replication versus EC, media types for storage, failure domains • Ideally, performance data and tunings are supplied for the configurations
  • 6. Tour of Existing Reference Architectures
  • 7. Available Reference Architectures (recipes) *Other names and brands may be claimed as the property of others.
  • 8. Available Reference Architectures (recipes) • http://www.redhat.com/en/files/resources/en-rhst-cephstorage-supermicro-INC0270868_v2_0715.pdf • http://www.qct.io/account/download/download?order_download_id=1065&dtype=Reference%20Architecture • https://www.redhat.com/en/resources/red-hat-ceph-storage-hardware-configuration-guide • https://www.percona.com/resources/videos/accelerating-ceph-database-workloads-all-pcie-ssd-cluster • https://www.percona.com/resources/videos/mysql-cloud-head-head-performance-lab • http://h20195.www2.hpe.com/v2/GetDocument.aspx?docname=4aa6-3911enw • https://intelassetlibrary.tagcmd.com/#assets/gallery/11492083
  • 9. A Brief Look at 3 of the Reference Architecture Documents
  • 10. QCT CEPH performance and sizing guide • Target audience: Mid-size to large cloud and enterprise customers • Showcases Intel based QCT solutions for multiple customer workloads • Introduces a three tier configuration and solution model: • IOPS Optimized, Throughput Optimized, Capacity Optimized • Specifies specific and orderable QCT solutions based on above classifications • Shows actual Ceph performance observed for the configurations • Purchase fully configured solutions per above model from QCT • Red Hat Ceph Storage Pre-Installed • Red Hat Ceph Storage support included • Datasheets and white papers at www.qct.io *Other names and brands may be claimed as the property of others.
  • 11. Supermicro performance and sizing guide • Target audience: Mid-size to large cloud and enterprise customers • Showcases Intel based Supermicro solutions for multiple customer workloads • Introduces a three tier configuration and solution model: • IOPS Optimized, Throughput Optimized, Capacity Optimized • Specifies specific and orderable Supermicro solutions based on above classifications • Shows actual Ceph performance observed for the configurations • Purchase fully configured solutions per above model from Supermicro • Red Hat Ceph Storage Pre-Installed • Red Hat Ceph Storage support included • Datasheets and white papers at supermicro.com *Other names and brands may be claimed as the property of others.
  • 12. Intel solutions for Ceph deployments • Target audience: Mid-size to large cloud and enterprise customers • Showcases Intel based solutions for multiple customer workloads • Uses the three tier configuration and solution model: • IOPS Optimized, Throughput Optimized, Capacity Optimized • Contains Intel configurations and performance data • Contains a Yahoo case study • Contains specific use case examples • Adds a Good, Better, Best model for all-SSD Ceph configurations • Adds configuration and performance data for Intel® Cache Acceleration Software • Overviews CeTune and VSM tools • Datasheets and white papers at intelassetlibrary.tagcmd.com/#assets/gallery/11492083 *Other names and brands may be claimed as the property of others.
  • 13. Quick Look at 3 Tables Inside the Intel and Red Hat Reference Architecture Document (to be published soon)
  • 14. Generic Red Hat Ceph Reference Architecture Preview https://www.redhat.com/en/resources/red-hat-ceph-storage-hardware-configuration-guide *Other names and brands may be claimed as the property of others. • IOPS-optimized config is all NVMe SSD • Typically block with replication; allows database work • Journals are NVMe • BlueStore, when supported, will increase performance • Throughput-optimized is a balanced config • HDD storage with SSD journals • Block or object, with replication • Capacity-optimized is typically all-HDD storage • Object and EC
  • 15. Intel and Red Hat Ceph Reference Architecture Preview https://www.redhat.com/en/resources/red-hat-ceph-storage-hardware-configuration-guide *Other names and brands may be claimed as the property of others. • IOPS-optimized Ceph clusters are typically in the TB ranges • Throughput-optimized clusters will likely move to 2.5-inch enclosures and all-SSD over time • Capacity-optimized clusters are likely to favor 3.5-inch drives for HDD storage
  • 16. Intel and Red Hat Ceph Reference Architecture Preview *Other names and brands may be claimed as the property of others. • Specific recommended Intel processor and SSD models are now specified • Intel processor recommendations depend on how many OSDs are used
  • 17. Intel and Red Hat Ceph Reference Architecture *Other names and brands may be claimed as the property of others. • Recommendations for specific Intel SSDs and journals, with two options • Specific Intel processor recommendations, depending on how many OSDs
  • 18. Intel and Red Hat Ceph Reference Architecture *Other names and brands may be claimed as the property of others. • No SSDs for the capacity model • Specific Intel processor recommendations are the same as for the previous throughput configuration, and are based on the number of OSDs
  • 19. Intel all-NVMe SSD Ceph Reference Architecture Presented by Intel at Percona Live 2016
  • 20. An "All-NVMe" high-density Ceph cluster configuration: a 5-node all-NVMe Ceph cluster built on Supermicro 1028U-TN10RT+ servers (dual Xeon E5-2699 v4 @ 2.2 GHz, 44 cores with HT, 128 GB DDR4; CentOS 7.2, kernel 3.10-327, Ceph v10.1.2 with BlueStore and async messenger; 2x 10GbE cluster network), driven by 10 client systems plus 1 Ceph MON (dual-socket Xeon E5-2699 v3 @ 2.3 GHz, 36 cores with HT, 128 GB DDR4; 2x 10GbE public network). Each storage node runs 16 Ceph OSDs across its four NVMe drives. MySQL DB server containers on krbd: 16 vCPUs, 32 GB memory, 200 GB RBD volume, 100 GB MySQL dataset, 25 GB (25%) InnoDB buffer cache. Client containers: 16 vCPUs, 32 GB RAM, running FIO 2.8 and Sysbench 0.5 across the two test sets. *Other names and brands may be claimed as the property of others. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Any difference in system hardware or software design or configuration may affect actual performance. See configuration slides in backup for details on software configuration and test benchmark parameters.
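As a rough illustration of the FIO side of this setup, the sketch below shells out to fio with the librbd ioengine for a 4K random-read run; the pool name, image name, client name, queue depth, and runtime are placeholder assumptions, not the exact parameters behind the published results (those are in the backup slides).

    import subprocess

    # Minimal sketch: 4K random-read fio run against an RBD image via the rbd
    # ioengine. Pool/image/client names, iodepth, and runtime are illustrative
    # assumptions; the RBD image is expected to already exist.
    cmd = [
        "fio",
        "--name=rbd-4k-randread",
        "--ioengine=rbd",
        "--clientname=admin",     # cephx user (assumes client.admin)
        "--pool=rbd",             # assumed pool name
        "--rbdname=fiotest",      # assumed pre-created RBD image
        "--rw=randread",
        "--bs=4k",
        "--iodepth=16",
        "--numjobs=1",
        "--time_based",
        "--runtime=300",
        "--group_reporting",
    ]
    subprocess.run(cmd, check=True)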
  • 21. Tunings for the all-NVMe Ceph cluster • TCMalloc with a 128 MB thread cache (or use JEMalloc) • Disable debug logging • Disable auth. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Any difference in system hardware or software design or configuration may affect actual performance. See configuration slides in backup for details on software configuration and test benchmark parameters.
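A hedged sketch of how the tunings named on slide 21 might be expressed is shown below; the option names follow Hammer/Jewel-era Ceph and should be verified against the release actually deployed, and disabling cephx auth is only appropriate on isolated benchmark clusters.

    # Sketch only: emit a ceph.conf fragment reflecting the slide's tunings
    # (debug logging off, cephx auth off). Option names are assumptions based
    # on Hammer/Jewel-era Ceph; check them against your release before use.
    tunings = {
        "auth_cluster_required": "none",   # benchmark clusters only
        "auth_service_required": "none",
        "auth_client_required": "none",
        "debug_osd": "0/0",                # silence debug logging
        "debug_ms": "0/0",
        "debug_filestore": "0/0",
    }

    with open("ceph-tuning-fragment.conf", "w") as f:
        f.write("[global]\n")
        for key, value in tunings.items():
            f.write("{} = {}\n".format(key, value))

    # The 128 MB TCMalloc thread cache is typically an environment variable for
    # the OSD processes, e.g. TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728
    # in /etc/sysconfig/ceph (or the OSDs are linked with JEMalloc instead).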
  • 22. All-NVMe flash Ceph storage – summary • Intel NVMe flash storage works for low-latency workloads • Ceph makes a compelling case for database workloads • 1.4 million random read IOPS is achievable in 5U with ~1 ms latency today • Sysbench MySQL OLTP performance was good at 400K 70/30% OLTP QPS @ ~50 ms average • Using standard high-volume Xeon E5 v4 servers and Intel NVMe SSDs, one can now deploy a high-performance Ceph cluster for database workloads • Recipe and tunings for this solution: www.percona.com/live/data-performance-conference-2016/content/accelerating-ceph-database-workloads-all-pcie-ssd-cluster *Other names and brands may be claimed as the property of others. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Any difference in system hardware or software design or configuration may affect actual performance. See configuration slides in backup for details on software configuration and test benchmark parameters.
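For the Sysbench MySQL OLTP side referenced in this summary, a run comparable in spirit could be launched as in the sketch below; the oltp.lua path, connection details, table size, and thread count are illustrative assumptions, since the exact benchmark parameters live in the deck's backup slides.

    import subprocess

    # Sketch of a sysbench 0.5 OLTP run against a MySQL instance backed by an
    # RBD volume. Paths, credentials, and sizing are illustrative assumptions.
    cmd = [
        "sysbench",
        "--test=/usr/share/doc/sysbench/tests/db/oltp.lua",  # assumed lua path
        "--mysql-host=127.0.0.1",
        "--mysql-user=sbtest",
        "--mysql-password=sbtest",
        "--mysql-db=sbtest",
        "--oltp-table-size=10000000",
        "--num-threads=16",
        "--max-time=300",
        "--max-requests=0",   # run for the full duration, not a request count
        "run",
    ]
    subprocess.run(cmd, check=True)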
  • 23. Ceph Solutions Available in addition to the QCT, Supermicro, and HP Solutions Already Mentioned
  • 24. Thomas Krenn SUSE Enterprise Storage https://www.thomas-krenn.com/en/products/storage-systems/suse-enterprise-storage.html *Other names and brands may be claimed as the property of others.
  • 25. Fujitsu Intel-Based Ceph Appliance http://www.fujitsu.com/global/products/computing/storage/eternus-cd/s2/ *Other names and brands may be claimed as the property of others.
  • 27. Ceph Reference Architectures Summary • The community has a growing number of good reference architectures • Some point to specific hardware, others are generic • Different workloads are catered for • Some of the documents contain performance and tuning information • Commercial professional services and software support are available • Intel will continue to work with its ISV and hardware systems partners on reference architectures • And continue Intel's Ceph development focus on Ceph performance
  • 28. Next – a focus on NVM technologies for today's and tomorrow's Ceph. Jack Zhang, Enterprise Architect, Intel Corporation, May 2016
  • 29. Solid State Drives (SSDs) for Ceph today: three Ceph* configurations and data
  • 30. The performance motivation for SSDs in Ceph • Storage providers are struggling to achieve the required high performance • There is a growing trend for cloud providers to adopt SSDs, e.g. CSPs who want to build an EBS-like service for their OpenStack-based public/private clouds • There is strong demand to run enterprise applications, with OLTP workloads running on Ceph • A high-performance, multi-purpose Ceph cluster is a key advantage • Performance is still an important factor, and SSD prices continue to decrease
  • 31. Three configurations for a Ceph storage node: • Standard/good – NVMe/PCIe SSD for journal + caching, HDDs as OSD data drives. Example: 1x Intel P3700 1.6TB as journal + Intel iCAS caching software + 12 HDDs. Node: Intel Xeon E5-2650 v3, 64 GB memory, 10GbE NIC, 1x 1.6TB P3700 + 12x 4TB HDDs (1:12 ratio), P3700 as journal and cache, caching software Intel iCAS 3.0 (option: RSTe/MD 4.3). • Better (best TCO today) – NVMe/PCIe SSD as journal + high-capacity SATA SSDs as data drives. Example: 1x Intel P3700 800GB + 4x Intel S3510 1.6TB. Node: Intel Xeon E5-2690, 128 GB memory, dual 10GbE NIC, 1x 800GB P3700 + 4x S3510 1.6TB. • Best performance – all NVMe/PCIe SSDs. Example: 4x Intel P3700 2TB. Node: Intel Xeon E5-2699 v3, >= 128 GB memory, 2x 40GbE or 4x dual 10GbE NIC, 4x P3700 2TB.
  • 32. Intel® NVMe SSD + CAS accelerates Ceph for Yahoo! CHALLENGE: • How can Ceph be used as a scalable storage solution? (High latency and low throughput due to erasure coding, double writes, and a huge number of small files) • Using over-provisioning to address performance is costly. Intel® CAS SOLUTION: • Intel® NVMe SSD – consistently amazing • Intel® CAS 3.0 hinting feature • Intel® CAS 3.0 fine-tuned for Yahoo! – cache metadata. PERFORMANCE GOAL: 2x throughput, 1/2 latency. COST REDUCTION: • CapEx savings (less over-provisioning) and OpEx savings (power, space, cooling) • Improved scalability planning (performance and predictability).
  • 33. Yahoo! (Ceph object) – results
  • 34. Test environment – all-SSD "better" configuration (NVMe/PCIe + SATA SSDs) • 5x client nodes: Intel® Xeon™ processor E5-2699 v3 @ 2.3 GHz, 64 GB memory, 10Gb NIC, running FIO • 5x storage nodes: Intel® Xeon™ processor E5-2699 v3 @ 2.3 GHz, 128 GB memory, 1x 1TB HDD for OS, 1x Intel® SSD DC P3700 800GB (U.2) for journal, 4x 1.6TB Intel® SSD DC S3510 as data drives, 2 OSD instances per S3510 SSD (8 OSDs per node), 2x 10Gb NIC; one storage node also hosts the Ceph MON. Note: refer to backup for the detailed test configuration for hardware, Ceph, and testing scripts.
  • 35. All-SSD (NVMe + SATA SSD) configuration – breaking a million IOPS: 1.08M IOPS for 4K random read and 144K IOPS for 4K random write with tunings and optimizations (RBD scale test). Random read: 1.08M 4K IOPS @ 3.4 ms, 500K 8K IOPS @ 8.8 ms, 300K 16K IOPS @ 10 ms, 63K 64K IOPS @ 40 ms. Random write: 144K 4K IOPS @ 4.3 ms, 132K 8K IOPS @ 4.1 ms, 88K 16K IOPS @ 2.7 ms, 23K 64K IOPS @ 2.6 ms. Excellent random read performance and acceptable random write performance.
  • 36. Comparison: SSD cluster vs. HDD cluster • Both clusters journal on PCIe/NVMe SSD • For 4K random write, an HDD cluster ~58x the size (~2,320 HDDs) is needed to match the SSD cluster's performance • For 4K random read, an HDD cluster ~175x the size (~7,024 HDDs) is needed • All-SSD Ceph provides the best TCO (both CapEx and OpEx) – not only performance, but also space, power, failure rate, etc. Test setup: client nodes – 5 nodes with Intel® Xeon™ processor E5-2699 v3 @ 2.30 GHz, 64 GB memory, Ubuntu Trusty; storage nodes – 5 nodes with Intel® Xeon™ processor E5-2699 v3 @ 2.30 GHz, 128 GB memory, Ceph 9.2.0, Ubuntu Trusty, 1x P3700 SSD for journal per node. Cluster difference: SSD cluster – 4x S3510 1.6TB for OSDs per node; HDD cluster – 14x SATA 7200 RPM HDDs as OSDs per node.
  • 37. Performance observations and tunings • LevelDB became the bottleneck: single-threaded LevelDB pushed one core to 100% utilization • Omap overhead: among the 3,800+ threads, on average only ~47 were running – ~10 pipe threads and ~9 OSD op threads – with most OSD op threads asleep (top -H); OSD op threads were waiting for the filestore throttle to be released • Disabling omap operations* speeds up release of the filestore throttle, putting more OSD op threads in the running state (~105 running on average) and improving throughput by 63% • High CPU consumption: ~70% utilization of two high-end Xeon E5 v3 processors (36 cores) with 4 S3510s; perf showed that the most CPU-intensive functions were malloc, free, and other system calls. *Bypass omap: ignore object_map->set_keys in FileStore::_omap_setkeys, for tests only.
  • 38. Performance tunings • Up to 16x performance improvement for 4K random read, peak throughput 1.08M IOPS • Up to 7.6x performance improvement for 4K random write, 140K IOPS. 4K random read tunings: Default = single OSD; Tuning-1 = 2 OSD instances per SSD; Tuning-2 = Tuning-1 + debug=0; Tuning-3 = Tuning-2 + jemalloc; Tuning-4 = Tuning-3 + read_ahead_size=16; Tuning-5 = Tuning-4 + osd_op_thread=32; Tuning-6 = Tuning-5 + rbd_op_thread=4. 4K random write tunings: Default = single OSD; Tuning-1 = 2 OSD instances per SSD; Tuning-2 = Tuning-1 + debug=0; Tuning-3 = Tuning-2 + op_tracker off, tuned fd cache; Tuning-4 = Tuning-3 + jemalloc; Tuning-5 = Tuning-4 + RocksDB to store omap; Tuning-6 = N/A.
  • 39. Ceph storage cluster best configuration – all NVMe SSDs • 5x SuperMicro 1028U OSD nodes, each with 2x Intel Xeon E5-2699 v3 @ 2.30 GHz (72 cores with HT, 46,080 KB cache) and 128 GB DDR4 • Each node has 4x Intel P3700 800GB NVMe drives, each partitioned into 4 OSDs, for 16 OSDs per node; easily serviceable NVMe drives • FIO RBD client systems (FatTwin, 4x dual-socket Xeon E5 v3): 2x Intel Xeon E5-2699 v3 @ 2.30 GHz, 128 GB DDR4 • Ceph v0.94.3 Hammer release, CentOS 7.1, 3.10-229 kernel, linked with JEMalloc 3.6 • CBT used for testing and data acquisition, with Zabbix for monitoring • Single 10GbE Ceph network (192.168.142.0/24) for client and replication data transfer, replication factor 2.
  • 40. All-NVMe configuration – million IOPS at low latency: the first Ceph cluster to break 1 million 4K random IOPS at ~1 ms response time. IOdepth scaling (latency vs. IOPS): 1M 100% 4K random read IOPS @ ~1.1 ms; 171K 100% 4K random write IOPS @ 6 ms; 400K 70/30% (OLTP) 4K random IOPS @ ~3 ms. Configuration as on the previous slide: 5 OSD nodes with 2x Intel Xeon E5-2699 v3 @ 2.30 GHz and 128 GB DDR4, 4x P3700 800GB NVMe per node partitioned into 16 OSDs, Ceph v0.94.3 Hammer, CentOS 7.1, 3.10-229 kernel, JEMalloc 3.6, CBT for testing and data acquisition, single 10GbE network for client and replication traffic, replication factor 2. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Any difference in system hardware or software design or configuration may affect actual performance. See configuration slides in backup for details on software configuration and test benchmark parameters.
  • 41. 3D NAND and 3D XPoint™ for Ceph tomorrow
  • 42. NAND flash and 3D XPoint™ technology for Ceph tomorrow • 3D MLC and TLC NAND: the building block enabling expansion of SSDs into HDD segments • 3D XPoint™: the building block for ultra-high-performance storage and memory
  • 43. 3D XPoint™ technology – in pursuit of large memory capacity, word (cache line) access, immediately available • Crosspoint structure: selectors allow dense packing and individual access to bits • Large memory capacity: crosspoint and scalable memory layers can be stacked in a 3D manner • NVM breakthrough material advances: compatible switch and memory cell materials • High performance: cell and array architecture that can switch states 1000x faster than NAND
  • 44. 3D XPoint™ technology breaks the memory/storage barrier. Approximate latency and data size relative to SRAM (latency 1X, size of data 1X): DRAM ~10X latency, ~100X size of data; 3D XPoint™ memory media ~100X latency, ~1,000X size of data; NAND SSD ~100,000X latency, ~1,000X size of data; HDD ~10 million X latency, ~10,000X size of data. Technology claims are based on comparisons of latency, density and write cycling metrics amongst memory technologies recorded on published specifications of in-market memory products against internal Intel specifications.
  • 45. Intel® Optane™ (prototype) vs. Intel® SSD DC P3700 Series at QD=1. Server configuration: 2x Intel® Xeon® E5-2690 v3; NAND-based NVM Express* (NVMe) SSD: Intel P3700 800 GB; 3D XPoint™-based SSD: Optane NVMe; OS: Red Hat* 7.1. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.
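The QD=1 comparison above is essentially a single-outstanding-I/O latency test; a hedged way to reproduce that shape of measurement on any NVMe device is sketched below (the device path is an assumption, the run is read-only, and raw-device access normally requires root).

    import subprocess

    # Sketch: 4K random-read latency at queue depth 1 against a raw NVMe device.
    # /dev/nvme0n1 is an assumed device path; reads only, data is not modified.
    cmd = [
        "fio",
        "--name=qd1-4k-randread",
        "--filename=/dev/nvme0n1",
        "--ioengine=libaio",
        "--direct=1",
        "--rw=randread",
        "--bs=4k",
        "--iodepth=1",
        "--numjobs=1",
        "--time_based",
        "--runtime=60",
    ]
    subprocess.run(cmd, check=True)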
  • 46. 3D XPoint™ opportunities – BlueStore backend. Three usages for a PMEM device: • Backend of BlueStore: raw PMEM block device or a file on a DAX-enabled FS • Backend of RocksDB: raw PMEM block device or a file on a DAX-enabled FS • Backend of RocksDB's WAL: raw PMEM block device or a file on a DAX-enabled FS. Two methods for accessing PMEM devices: • libpmemblk • mmap + libpmemlib. (Architecture diagram: BlueStore data accessed via libpmemblk, RocksDB/BlueFS metadata via libpmemlib or mmap load/store, against PMEMDevices or files on a DAX-enabled file system.)
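As a rough sketch of the "mmap + load/store" access path listed on slide 46, the snippet below memory-maps a file on what is assumed to be a DAX-enabled mount (/mnt/pmem is a placeholder path); a real BlueStore/RocksDB integration would go through libpmemblk/libpmemlib rather than Python, so this only illustrates the access model.

    import mmap
    import os

    # Sketch: map a file on an (assumed) DAX-enabled file system and perform
    # simple load/store accesses through the mapping. /mnt/pmem is a placeholder.
    path = "/mnt/pmem/demo.dat"
    size = 4096

    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
    os.ftruncate(fd, size)            # make sure the file is large enough to map

    mm = mmap.mmap(fd, size)          # MAP_SHARED read/write mapping by default
    mm[0:11] = b"hello, pmem"         # store through the mapping
    print(mm[0:11])                   # load back the bytes just written
    mm.close()
    os.close(fd)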
  • 47. 3D NAND – a cost-effective Ceph solution: an enterprise-class, highly reliable, feature-rich, and cost-effective AFA solution • NVMe SSD is today's SSD, and 3D NAND or TLC SSD is today's HDD – NVMe as journal, high-capacity SATA SSD or 3D NAND SSD as the data store – providing high performance and high capacity in a more cost-effective solution – 1M 4K random read IOPS delivered by 5 Ceph nodes – cost effective: ~1,000 HDD Ceph nodes (10K HDDs) would be needed to deliver the same throughput – high capacity: 100 TB in 5 nodes • With special software optimization on the filestore and bluestore backends. (Example nodes: today – 4x S3510 1.6TB + P3700 M.2 800GB per node; tomorrow – 4x P3520 4TB 3D NAND + P3700 and 3D XPoint™ SSDs per node.)
  • 48. 48 Legal Notices and Disclaimers Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer. No computer system can be absolutely secure. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance. Cost reduction scenarios described are intended as examples of how a given Intel-based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction. This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps. Statements in this document that refer to Intel’s plans and expectations for the quarter, the year, and the future, are forward-looking statements that involve a number of risks and uncertainties. A detailed discussion of the factors that could affect Intel’s results and plans is included in Intel’s SEC filings, including the annual report on Form 10-K. The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document. Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate. Intel, Xeon and the Intel logo are trademarks of Intel Corporation in the United States and other countries. *Other names and brands may be claimed as the property of others. © 2015 Intel Corporation.
  • 49. 49 Legal Information: Benchmark and Performance Claims Disclaimers Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. Test and System Configurations: See Back up for details. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.
  • 50. Risk Factors The above statements and any others in this document that refer to plans and expectations for the first quarter, the year and the future are forward-looking statements that involve a number of risks and uncertainties. Words such as "anticipates," "expects," "intends," "plans," "believes," "seeks," "estimates," "may," "will," "should" and their variations identify forward-looking statements. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking statements. Many factors could affect Intel's actual results, and variances from Intel's current expectations regarding such factors could cause actual results to differ materially from those expressed in these forward- looking statements. Intel presently considers the following to be important factors that could cause actual results to differ materially from the company's expectations. Demand for Intel’s products is highly variable and could differ from expectations due to factors including changes in the business and economic conditions; consumer confidence or income levels; customer acceptance of Intel’s and competitors’ products; competitive and pricing pressures, including actions taken by competitors; supply constraints and other disruptions affecting customers; changes in customer order patterns including order cancellations; and changes in the level of inventory at customers. Intel’s gross margin percentage could vary significantly from expectations based on capacity utilization; variations in inventory valuation, including variations related to the timing of qualifying products for sale; changes in revenue levels; segment product mix; the timing and execution of the manufacturing ramp and associated costs; excess or obsolete inventory; changes in unit costs; defects or disruptions in the supply of materials or resources; and product manufacturing quality/yields. Variations in gross margin may also be caused by the timing of Intel product introductions and related expenses, including marketing expenses, and Intel’s ability to respond quickly to technological developments and to introduce new features into existing products, which may result in restructuring and asset impairment charges. Intel's results could be affected by adverse economic, social, political and physical/infrastructure conditions in countries where Intel, its customers or its suppliers operate, including military conflict and other security risks, natural disasters, infrastructure disruptions, health concerns and fluctuations in currency exchange rates. Results may also be affected by the formal or informal imposition by countries of new or revised export and/or import and doing-business regulations, which could be changed without prior notice. Intel operates in highly competitive industries and its operations have high costs that are either fixed or difficult to reduce in the short term. The amount, timing and execution of Intel’s stock repurchase program and dividend program could be affected by changes in Intel’s priorities for the use of cash, such as operational spending, capital spending, acquisitions, and as a result of changes to Intel’s cash flows and changes in tax laws. Product defects or errata (deviations from published specifications) may adversely impact our expenses, revenues and reputation. Intel’s results could be affected by litigation or regulatory matters involving intellectual property, stockholder, consumer, antitrust, disclosure and other issues. 
An unfavorable ruling could include monetary damages or an injunction prohibiting Intel from manufacturing or selling one or more products, precluding particular business practices, impacting Intel’s ability to design its products, or requiring other remedies such as compulsory licensing of intellectual property. Intel’s results may be affected by the timing of closing of acquisitions, divestitures and other significant transactions. A detailed discussion of these and other factors that could affect Intel’s results is included in Intel’s SEC filings, including the company’s most recent reports on Form 10-Q, Form 10-K and earnings release. Rev. 1/15/15 50
  • 51.
  • 52. Ceph benchmark/deployment/tuning tool: CeTune • CeTune controller – reads configuration files and controls the process to deploy, benchmark, and analyze the collected data • CeTune workers – controlled by the CeTune controller as workload generators and system metrics collectors
  • 53. Intel® Virtual Storage Manager (VSM) • Management framework = consistent configuration • Operator-friendly interface for management and monitoring. (Architecture diagram: a VSM agent on each Ceph node (OSDs/Monitor/MDS) and a VSM controller on a controller node server, connecting operators, OpenStack Nova controller servers, and client servers over SSH, HTTP, and socket APIs across the data center firewall.)