4. Welcome to Intel Hillsboro and Agenda for the First Half of this Talk
• Inventory of published Reference Architectures from Red Hat and SUSE
• Walk through highlights of a soon-to-be-published Intel and Red Hat Ceph Reference Architecture paper
• Introduce an Intel all-NVMe Ceph configuration benchmark for MySQL
• Show examples of Ceph solutions
Then Jack Zhang, Intel SSD Architect, presents the second half of this talk.
5. What Are the Key Components of a Reference Architecture?
• Starts with a workload (use case) and points to one or more resulting recommended configurations
• Configurations should be recipes that one can purchase and build
• Key related elements should be recommended
• Replication versus EC, media types for storage, failure domains
• Ideally, performance data and tunings are supplied for the configurations
9. A Brief Look at 3 of the Reference Architecture Documents
10. QCT Ceph Performance and Sizing Guide
• Target audience: Mid-size to large cloud and enterprise customers
• Showcases Intel based QCT solutions for multiple customer workloads
• Introduces a three tier configuration and solution model:
• IOPS Optimized, Throughput Optimized, Capacity Optimized
• Lists specific, orderable QCT solutions based on the above classifications
• Shows actual Ceph performance observed for the configurations
• Purchase fully configured solutions per the above model from QCT
• Red Hat Ceph Storage Pre-Installed
• Red Hat Ceph Storage support included
• Datasheets and white papers at www.qct.io
* Other names and brands may be claimed as the property of others.
11. Supermicro Performance and Sizing Guide
• Target audience: Mid-size to large cloud and enterprise customers
• Showcases Intel based Supermicro solutions for multiple customer workloads
• Introduces a three tier configuration and solution model:
• IOPS Optimized, Throughput Optimized, Capacity Optimized
• Lists specific, orderable Supermicro solutions based on the above classifications
• Shows actual Ceph performance observed for the configurations
• Purchase fully configured solutions per the above model from Supermicro
• Red Hat Ceph Storage Pre-Installed
• Red Hat Ceph Storage support included
• Datasheets and white papers at supermicro.com
* Other names and brands may be claimed as the property of others.
12. Intel Solutions for Ceph Deployments
• Target audience: Mid-size to large cloud and enterprise customers
• Showcases Intel based solutions for multiple customer workloads
• Uses the three tier configuration and solution model:
• IOPS Optimized, Throughput Optimized, Capacity Optimized
• Contains Intel configurations and performance data
• Contains a Yahoo case study
• Contains specific use case examples
• Adds a Good, Better, Best model for all-SSD Ceph configurations
• Adds configuration and performance data for Intel Cache Acceleration Software (Intel CAS)
• Overviews CeTune and VSM tools
• Datasheets and white papers at intelassetlibrary.tagcmd.com/#assets/gallery/11492083
* Other names and brands may be claimed as the property of others.
13. Quick Look at 3 Tables Inside the Intel
and Red Hat Reference Architecture Document
(to be published soon)
14. Generic Red Hat Ceph Reference Architecture Preview
https://www.redhat.com/en/resources/red-hat-ceph-storage-hardware-configuration-guide
*Other names and brands may be claimed as the property of others.
• IOPS-optimized config is all NVMe SSD
  • Typically block with replication; allows database workloads
  • Journals are NVMe
  • BlueStore, when supported, will increase performance
• Throughput-optimized is a balanced config
  • HDD storage with SSD journals
  • Block or object, with replication
• Capacity-optimized is typically all HDD storage
  • Object and EC
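As a rough illustration of the three-tier model above, the sketch below encodes the tier properties from this slide and picks a tier from two workload traits. The tier contents come from the slide; the selector function and its rules are illustrative assumptions, not part of the Red Hat reference architecture.

# Hedged sketch of the three-tier model; selection rules are assumptions.
TIERS = {
    "iops_optimized":       {"media": "all NVMe SSD", "protection": "replication", "access": "block"},
    "throughput_optimized": {"media": "HDD data + SSD journals", "protection": "replication", "access": "block or object"},
    "capacity_optimized":   {"media": "all HDD", "protection": "erasure coding", "access": "object"},
}

def pick_tier(latency_sensitive: bool, mostly_cold_objects: bool) -> str:
    """Toy selector: databases -> IOPS tier, cold object stores -> capacity tier."""
    if latency_sensitive:
        return "iops_optimized"
    if mostly_cold_objects:
        return "capacity_optimized"
    return "throughput_optimized"

print(pick_tier(latency_sensitive=True, mostly_cold_objects=False))  # iops_optimized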
15. Intel and Red Hat Ceph Reference Architecture Preview
https://www.redhat.com/en/resources/red-hat-ceph-storage-hardware-configuration-guide
*Other names and brands may be claimed as the property of others.
• IOPS-optimized Ceph clusters are typically in the TB ranges
• Throughput clusters will likely move to 2.5" enclosures and all SSD over time
• Capacity-optimized clusters are likely to favor 3.5" for HDD storage
16. Intel and Red Hat Ceph Reference Architecture Preview
*Other names and brands may be claimed as the property of others.
• Specific Intel processor and SSD models are now recommended
• Intel processor recommendations depend on how many OSDs are used
17. Intel and Red Hat Ceph Reference Architecture
*Other names and brands may be claimed as the property of others.
• Recommendations for specific Intel SSDs and journals, with two options
• Specific Intel processor recommendations, depending on how many OSDs
18. Intel and Red Hat Ceph Reference Architecture
*Other names and brands may be claimed as the property of others.
• No SSDs for the capacity model
• Specific Intel processor recommendations are the same as for the previous throughput config, and are based on the number of OSDs
20. An "All-NVMe" High-Density Ceph Cluster Configuration
• 5-node all-NVMe Ceph cluster (Supermicro 1028U-TN10RT+)
  • Dual Xeon E5-2699 v4 @ 2.2 GHz, 44 cores with HT, 128 GB DDR4
  • CentOS 7.2, kernel 3.10-327, Ceph v10.1.2, BlueStore, async messenger
  • 4x NVMe SSDs per node, 16 Ceph OSDs per node (4 OSDs per NVMe SSD)
  • Cluster network: 2x 10GbE
• 10x client systems + 1x Ceph MON
  • Dual-socket Xeon E5-2699 v3 @ 2.3 GHz, 36 cores with HT, 128 GB DDR4
  • Public network: 2x 10GbE
• Client containers: Docker1/Docker2 (krbd) run MySQL DB servers; Docker3/Docker4 run Sysbench clients; FIO also used (Test-set 1 and Test-set 2)
  • DB containers: 16 vCPUs, 32 GB memory, 200 GB RBD volume, 100 GB MySQL dataset, InnoDB buffer cache 25 GB (25%)
  • Client containers: 16 vCPUs, 32 GB RAM
  • FIO 2.8, Sysbench 0.5
*Other names and brands may be claimed as the property of others.
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Any difference in system hardware or software design or configuration may affect actual performance. See configuration slides in backup for details on software configuration and test benchmark parameters.
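To make the Sysbench side of this setup concrete, here is a hedged sketch of how one OLTP run against a MySQL DB container might be launched. The hostname, credentials, table size, thread count and the Lua script path are placeholders/assumptions; the actual Sysbench 0.5 options used in these tests are in the backup configuration slides.

# Hedged sketch: launching a Sysbench 0.5 OLTP run against a MySQL container.
import subprocess

cmd = [
    "sysbench",
    "--test=/usr/share/doc/sysbench/tests/db/oltp.lua",  # typical sysbench 0.5 Lua OLTP test path (assumption)
    "--mysql-host=mysql-docker1",      # placeholder container hostname
    "--mysql-user=sbtest",
    "--mysql-password=sbtest",
    "--oltp-table-size=10000000",      # sized toward a ~100GB-class dataset (assumption)
    "--num-threads=16",                # one thread per container vCPU (assumption)
    "--max-time=600",
    "--max-requests=0",
    "run",
]
subprocess.run(cmd, check=True)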
21. Tunings for the All-NVMe Ceph Cluster
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Any difference in system hardware or software design or configuration may affect actual performance. See configuration slides in backup for details on software configuration and test benchmark parameters.
• TCMalloc with 128MB thread cache
• or use JEMalloc
• Disable debug
• Disable auth
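A minimal sketch of how the tunings listed above might be expressed follows. The Ceph option names shown are common debug/auth settings, but the exact keys and values used in the Intel tests are in the backup slides; the TCMalloc cache size is normally set in the OSD's environment rather than in ceph.conf.

# Hedged sketch: emitting a ceph.conf fragment for the tunings listed above.
import configparser

CEPH_TUNINGS = {
    "global": {
        "debug_ms": "0/0",                 # "disable debug"
        "debug_osd": "0/0",
        "debug_filestore": "0/0",
        "auth_cluster_required": "none",   # "disable auth" (test clusters only)
        "auth_service_required": "none",
        "auth_client_required": "none",
    },
}

conf = configparser.ConfigParser()
conf.read_dict(CEPH_TUNINGS)
with open("ceph.conf.tuning-snippet", "w") as f:
    conf.write(f)

# The 128MB TCMalloc thread cache is typically set via the OSD environment,
# e.g. TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 in /etc/sysconfig/ceph,
# or the OSDs are run with JEMalloc preloaded instead.
print(open("ceph.conf.tuning-snippet").read())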
22. All NVMe Flash Ceph Storage – Summary
• Intel NVMe Flash storage works for low latency workloads
• Ceph makes a compelling case for database workloads
• 1.4 million random read IOPS is achievable in 5U with ~1ms latency today.
• Sysbench MySQL OLTP performance numbers were good at 400K 70/30% OLTP QPS @ ~50 ms avg
• Using Xeon E5 v4 standard high-volume servers and Intel NVMe SSDs, one
can now deploy a high performance Ceph cluster for database workloads
• Recipe and tunings for this solution are here:
www.percona.com/live/data-performance-conference-2016/content/accelerating-ceph-database-workloads-all-pcie-ssd-cluster
Software and workloads used in performance tests may have been optimized for performance only on Intel
microprocessors. Any difference in system hardware or software design or configuration may affect actual performance.
See configuration slides in backup for details on software configuration and test benchmark parameters. *Other names and brands may be claimed as the property of others.
24. Thomas Krenn SUSE Enterprise Storage
https://www.thomas-krenn.com/en/products/storage-systems/suse-enterprise-storage.html
*Other names and brands may be claimed as the property of others.
25. Fujitsu Intel-Based Ceph Appliance
http://www.fujitsu.com/global/products/computing/storage/eternus-cd/s2/ *Other names and brands may be claimed as the property of others.
27. Ceph Reference Architectures Summary
• The community has a growing number of good reference architectures
• Some point to specific hardware, others are generic
• Different workloads are catered for
• Some of the documents contain performance and tuning information
• Commercial options are available for professional services and software support
• Intel will continue to work with its ISV and hardware systems partners on
reference architectures
• And continue Intel’s Ceph development focused on Ceph performance
28. Next – a Focus on NVM Technologies for Today's and Tomorrow's Ceph
Jack Zhang, Enterprise Architect, Intel Corporation
May 2016
30. The Performance – Motivations for SSD on Ceph
• Storage providers are struggling to achieve the required high performance
• There is a growing trend for cloud providers to adopt SSDs
  – CSPs who want to build an EBS-like service for their OpenStack-based public/private cloud
• Strong demand to run enterprise applications
  – OLTP workloads running on Ceph
• A high-performance, multi-purpose Ceph cluster is a key advantage
• Performance is still an important factor
• SSD prices continue to decrease
31. Three Configurations for Ceph Storage Node
• Standard/Good: NVMe/PCIe SSD for journal + caching, HDDs as OSD data drives
  • Example: 1x Intel P3700 1.6TB as journal + Intel iCAS caching software + 12 HDDs
• Better (best TCO, as of today): NVMe/PCIe SSD as journal + high-capacity SATA SSDs as data drives
  • Example: 1x Intel P3700 800GB + 4x Intel S3510 1.6TB
• Best performance: all NVMe/PCIe SSDs
  • Example: 4x Intel P3700 2TB SSDs

Ceph storage node – Good
  CPU: Intel(R) Xeon(R) CPU E5-2650 v3
  Memory: 64 GB
  NIC: 10GbE
  Disks: 1x 1.6TB P3700 + 12x 4TB HDDs (1:12 ratio); P3700 as journal and caching
  Caching software: Intel iCAS 3.0 (option: RSTe/MD4.3)
Ceph storage node – Better
  CPU: Intel(R) Xeon(R) CPU E5-2690
  Memory: 128 GB
  NIC: Dual 10GbE
  Disks: 1x 800GB P3700 + 4x S3510 1.6TB
Ceph storage node – Best
  CPU: Intel(R) Xeon(R) CPU E5-2699 v3
  Memory: >= 128 GB
  NIC: 2x 40GbE, 4x dual 10GbE
  Disks: 4x P3700 2TB
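As a quick worked example, the per-node raw capacity implied by the three configurations above can be computed directly from the drive counts on this slide; the 3x replication divisor used for a "usable" figure is an assumption for illustration only.

# Worked example: per-node raw capacity for the Good/Better/Best nodes above.
configs = {
    "Good":   {"data_drives": 12, "drive_tb": 4.0},  # 12x 4TB HDD (P3700 journal)
    "Better": {"data_drives": 4,  "drive_tb": 1.6},  # 4x S3510 1.6TB (P3700 journal)
    "Best":   {"data_drives": 4,  "drive_tb": 2.0},  # 4x P3700 2TB
}

for name, c in configs.items():
    raw = c["data_drives"] * c["drive_tb"]
    usable_3x_repl = raw / 3   # assumption: 3x replication, not stated on the slide
    print(f"{name:6s}: {c['data_drives']} data drives, {raw:5.1f} TB raw, "
          f"~{usable_3x_repl:4.1f} TB usable at 3x replication")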
32. Intel® NVMe SSD + CAS Accelerates Ceph
CHALLENGE:
• How can I use Ceph for a scalable storage solution? (High latency and low throughput due to erasure coding, writing twice, and a huge number of small files)
• Using over-provisioning to address performance is costly.
SOLUTION:
• Intel® NVMe SSD – consistently amazing
• Intel® CAS 3.0 feature – hinting
• Intel® CAS 3.0 fine-tuned for Yahoo! – cache metadata
YAHOO! PERFORMANCE GOAL: 2x throughput, 1/2 latency
COST REDUCTION:
• CapEx savings (over-provisioning); OpEx savings (power, space, cooling…)
• Improved scalability planning (performance and predictability)
[Diagram: User → Web Server → Client → Ceph Cluster]
34. Test Environment – All SSD, "Better" Configuration (NVMe/PCIe + SATA SSDs)
• 5x client nodes (CLIENT 1–5)
  • Intel® Xeon™ processor E5-2699 v3 @ 2.3 GHz, 64 GB memory
  • 1x 10Gb NIC
  • 2x FIO instances per client
• 5x storage nodes (CEPH1–CEPH5), 8 OSDs each; CEPH1 also hosts the MON
  • Intel® Xeon™ processor E5-2699 v3 @ 2.3 GHz, 128 GB memory
  • 2x 10Gb NIC
  • 1x 1TB HDD for OS
  • 1x Intel® DC P3700 800GB SSD (U.2) for journal
  • 4x 1.6TB Intel® SSD DC S3510 as data drives
  • 2 OSD instances on each S3510 SSD
Note: Refer to backup for detailed test configuration for hardware, Ceph and testing scripts.
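A hedged sketch of the kind of fio job the client-side FIO processes might run against RBD in this environment follows. The pool name, image name and queue depth are placeholders; the real job files are part of the testing scripts referenced in the backup slides.

# Hedged sketch: generating a fio job file for a 4K random read test on RBD.
JOB = """\
[global]
ioengine=rbd
clientname=admin
pool=rbd
invalidate=0
direct=1
time_based=1
runtime=300
group_reporting=1

[4k-randread]
rbdname=fio_test_img
rw=randread
bs=4k
iodepth=32
"""

with open("rbd-4k-randread.fio", "w") as f:
    f.write(JOB)
# Then, on each client: fio rbd-4k-randread.fio
print(JOB)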
35. All SSD (NVMe + SATA SSD) Configuration – Break Million IOPS
• 1.08M IOPS for 4K random read, 144K IOPS for 4K random write, with tunings and optimizations
• Random read performance (RBD scaling test, latency vs. IOPS):
  • 1.08M 4K random read IOPS @ 3.4 ms
  • 500K 8K random read IOPS @ 8.8 ms
  • 300K 16K random read IOPS @ 10 ms
  • 63K 64K random read IOPS @ 40 ms
• Random write performance (RBD scaling test, latency vs. IOPS):
  • 144K 4K random write IOPS @ 4.3 ms
  • 132K 8K random write IOPS @ 4.1 ms
  • 88K 16K random write IOPS @ 2.7 ms
  • 23K 64K random write IOPS @ 2.6 ms
• Excellent random read performance and acceptable random write performance
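The latency-vs-IOPS points summarized above are the kind of data fio emits in its JSON output. Below is a hedged sketch of extracting them; the field names follow the fio 2.x JSON layout (completion latency in microseconds), so verify them against the fio version actually used before relying on this.

# Hedged sketch: pulling IOPS and mean completion latency out of fio JSON output.
import json

def read_points(path, direction="read"):
    with open(path) as f:
        data = json.load(f)
    points = []
    for job in data["jobs"]:
        iops = job[direction]["iops"]
        clat_ms = job[direction]["clat"]["mean"] / 1000.0  # fio 2.x reports usec
        points.append((job["jobname"], iops, clat_ms))
    return points

for name, iops, lat in read_points("fio-output.json"):
    print(f"{name}: {iops:,.0f} IOPS at {lat:.2f} ms mean latency")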
36. Comparison: SSD Cluster vs. HDD Cluster
• Both clusters journal on PCIe/NVMe SSD
• 4K random write: ~58x the HDD cluster (~2,320 HDDs) is needed to reach the same performance
• 4K random read: ~175x the HDD cluster (~7,024 HDDs) is needed to reach the same performance
• All-SSD Ceph provides the best TCO (both CapEx and OpEx): not only performance, but also space, power, failure rate, etc.
Configurations:
• Client nodes: 5 nodes with Intel® Xeon™ processor E5-2699 v3 @ 2.30 GHz, 64 GB memory; OS: Ubuntu Trusty
• Storage nodes: 5 nodes with Intel® Xeon™ processor E5-2699 v3 @ 2.30 GHz, 128 GB memory; Ceph version 9.2.0; OS: Ubuntu Trusty; 1x P3700 SSD for journal per node
• Cluster difference:
  • SSD cluster: 4x S3510 1.6TB for OSDs per node
  • HDD cluster: 14x SATA 7200 RPM HDDs as OSDs per node
37. Performance Observations and Tunings
• LevelDB became the bottleneck
  • Single-threaded LevelDB pushed one core to 100% utilization
• Omap overhead
  • Among the 3,800+ threads, on average only ~47 threads were running: ~10 pipe threads and ~9 OSD op threads; most OSD op threads were asleep (top -H)
  • OSD op threads were waiting for the FileStore throttle to be released
  • Disabling omap operations* speeds up release of the FileStore throttle, which puts more OSD op threads in the running state (average ~105 threads running); throughput improved 63%
• High CPU consumption
  • ~70% CPU utilization of two high-end Xeon E5 v3 processors (36 cores) with 4 S3510s
  • perf showed that most of the CPU-intensive functions are malloc, free and other system calls
*Bypass omap: ignore object_map->set_keys in FileStore::_omap_setkeys, for tests only
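The thread-state observation above ("top -H", ~47 of 3,800+ threads running) can also be reproduced programmatically. Here is a hedged sketch that counts thread states for the ceph-osd processes via /proc; discovering the PIDs with pgrep is an assumption about the host setup.

# Hedged sketch: counting runnable vs. sleeping threads of ceph-osd processes.
import glob
import subprocess
from collections import Counter

def thread_states(pid: str) -> Counter:
    """Count thread states (R=running, S=sleeping, D=disk wait) for one process."""
    states = Counter()
    for stat_path in glob.glob(f"/proc/{pid}/task/*/stat"):
        try:
            with open(stat_path) as f:
                # State is the first field after the parenthesized command name.
                states[f.read().rsplit(")", 1)[1].split()[0]] += 1
        except OSError:
            pass  # thread exited while we were reading
    return states

pids = subprocess.run(["pgrep", "ceph-osd"],
                      capture_output=True, text=True).stdout.split()
total = sum((thread_states(p) for p in pids), Counter())
print(dict(total))  # e.g. {'S': 3750, 'R': 47, ...}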
38. The Performance Tunings
• Up to 16x performance improvement for 4K random read; peak throughput 1.08M IOPS
• Up to 7.6x performance improvement for 4K random write; 140K IOPS

           4K Random Read Tunings           4K Random Write Tunings
Default    Single OSD                       Single OSD
Tuning-1   2 OSD instances per SSD          2 OSD instances per SSD
Tuning-2   Tuning-1 + debug=0               Tuning-1 + debug=0
Tuning-3   Tuning-2 + jemalloc              Tuning-2 + op_tracker off, fd cache tuning
Tuning-4   Tuning-3 + read_ahead_size=16    Tuning-3 + jemalloc
Tuning-5   Tuning-4 + osd_op_thread=32      Tuning-4 + RocksDB to store omap
Tuning-6   Tuning-5 + rbd_op_thread=4       N/A

[Chart: normalized 4K random read/write throughput at each tuning step]
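Below is a hedged translation of the tuning shorthand above into ceph.conf-style options. The option names are approximations of Hammer/Jewel-era settings rather than the exact keys used in these tests; jemalloc is a build/preload choice and "2 OSD instances per SSD" is a deployment decision, so neither appears as a config key.

# Hedged mapping of the tuning ladder above to approximate ceph.conf options.
READ_TUNINGS = {
    "debug_osd": "0/0",                  # Tuning-2: debug=0
    # Tuning-3: run OSDs with jemalloc (preload/build choice, not a conf key)
    "osd_op_threads": 32,                # Tuning-5
    "rbd_op_threads": 4,                 # Tuning-6 (client side, librbd)
}
WRITE_TUNINGS = {
    "debug_osd": "0/0",                  # Tuning-2
    "osd_enable_op_tracker": "false",    # Tuning-3: op_tracker off
    "filestore_fd_cache_size": 64,       # Tuning-3: fd cache tuning (value assumed)
    "filestore_omap_backend": "rocksdb", # Tuning-5: RocksDB to store omap
}

for label, section in (("read", READ_TUNINGS), ("write", WRITE_TUNINGS)):
    print(f"# {label} tunings")
    for key, value in section.items():
        print(f"{key} = {value}")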
40. All NVMe Configuration – Million IOPS, Low Latency
• First Ceph cluster to break 1 million 4K random IOPS, ~1 ms response time
• IO-depth scaling, latency vs. IOPS:
  • 1M 100% 4K random read IOPS @ ~1.1 ms
  • 171K 100% 4K random write IOPS @ 6 ms
  • 400K 70/30% (OLTP) 4K random IOPS @ ~3 ms
Configuration:
• OSD system config: Intel Xeon E5-2699 v3 2x @ 2.30 GHz, 72 cores w/ HT, 96GB, Cache 46080KB, 128GB DDR4
• Each system with 4x P3700 800GB NVMe, partitioned into 4 OSDs each, 16 OSDs total per node
• FIO client systems: Intel Xeon E5-2699 v3 2x @ 2.30 GHz, 72 cores w/ HT, 96GB, Cache 46080KB, 128GB DDR4
• Ceph v0.94.3 Hammer release, CentOS 7.1, 3.10-229 kernel, linked with JEMalloc 3.6
• CBT used for testing and data acquisition
• Single 10GbE network for client & replication data transfer; replication factor 2
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Any difference in system hardware or software design or configuration may affect actual performance. See configuration slides in backup for details on software configuration and test benchmark parameters.
42. NAND Flash and 3D XPoint™ Technology for Ceph Tomorrow
• 3D MLC and TLC NAND: building block enabling expansion of SSDs into HDD segments
• 3D XPoint™: building blocks for ultra-high-performance storage & memory
43. 3D XPoint™ TECHNOLOGY
In Pursuit of Large Memory Capacity … Word Access … Immediately Available …
• Crosspoint structure (word/cache-line access): selectors allow dense packing and individual access to bits
• Large memory capacity: crosspoint & scalable; memory layers can be stacked in a 3D manner
• NVM breakthrough material advances: compatible switch and memory cell materials
• Immediately available high performance: cell and array architecture that can switch states 1000x faster than NAND
44. 3D XPoint™ TECHNOLOGY – Breaks the Memory Storage Barrier
Relative latency and size of data (SRAM = 1X):
  SRAM                        Latency: 1X             Size of Data: 1X
  DRAM                        Latency: ~10X           Size of Data: ~100X
  3D XPoint™ Memory Media     Latency: ~100X          Size of Data: ~1,000X
  NAND SSD                    Latency: ~100,000X      Size of Data: ~1,000X
  HDD                         Latency: ~10 MillionX   Size of Data: ~10,000X
Technology claims are based on comparisons of latency, density and write cycling metrics amongst memory technologies recorded on published specifications of in-market memory products against internal Intel specifications.
45. Intel® Optane™ (prototype) vs. Intel® SSD DC P3700 Series at QD=1
Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.
Server configuration: 2x Intel® Xeon® E5 2690 v3; NVM Express* (NVMe) NAND-based SSD: Intel P3700 800 GB; 3D XPoint™-based SSD: Optane NVMe; OS: Red Hat* 7.1
46. 3D XPoint™ Opportunities – BlueStore Backend
Three usages for a PMEM device:
• Backend of BlueStore: raw PMEM block device or file on a DAX-enabled FS
• Backend of RocksDB: raw PMEM block device or file on a DAX-enabled FS
• Backend of RocksDB's WAL: raw PMEM block device or file on a DAX-enabled FS
Two methods for accessing PMEM devices (see the sketch below):
• libpmemblk
• mmap + libpmemlib
[Diagram: BlueStore data and RocksDB/BlueFS metadata each sit on a PMEMDevice, reached either through the libpmemblk API or via mmap load/store on a file in a DAX-enabled file system (libpmemlib)]
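The second access method listed above, mmap plus direct load/store on a DAX file, can be illustrated with a minimal sketch. The mount point and file name below are assumptions; a real BlueStore or RocksDB backend would use the PMDK libraries (libpmemblk/libpmem) in C rather than Python.

# Minimal sketch of the mmap + load/store path on a DAX-mounted file system.
import mmap
import os

PATH = "/mnt/pmem0/bluestore-block"   # hypothetical file on a DAX-mounted fs
SIZE = 16 * 1024 * 1024               # 16 MiB, just for the sketch

fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)
with mmap.mmap(fd, SIZE) as m:
    m[0:10] = b"hello pmem"           # store: bytes land directly in the mapped file
    print(bytes(m[0:10]))             # load them back
os.close(fd)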
47. 3D NAND - Ceph Cost-Effective Solution
• Enterprise-class, highly reliable, feature-rich, and cost-effective AFA solution
• NVMe SSD is today's SSD, and 3D NAND or TLC SSD is today's HDD
  – NVMe as journal, high-capacity SATA SSD or 3D NAND SSD as data store
  – Provides high performance and high capacity in a more cost-effective solution
  – 1M 4K random read IOPS delivered by 5 Ceph nodes
  – Cost effective: ~1,000 HDD Ceph nodes (10K HDDs) would be needed to deliver the same throughput
  – High capacity: 100 TB in 5 nodes
• With special software optimization on the FileStore and BlueStore backends
Example nodes (from the diagram):
• Ceph node (today): 1x P3700 M.2 800GB journal + 4x S3510 1.6TB data drives
• Ceph node (3D NAND): P3700 & 3D XPoint™ SSDs for journal + 5x P3520 4TB 3D NAND data drives
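The 100 TB claim above follows directly from the 3D NAND node diagram, as this small check shows; usable capacity after replication or EC overhead would of course be lower.

# Worked check of the "100 TB in 5 nodes" capacity claim.
nodes = 5
p3520_per_node = 5    # read off the node diagram above
p3520_tb = 4

raw_tb = nodes * p3520_per_node * p3520_tb
print(f"{nodes} nodes x {p3520_per_node} x {p3520_tb} TB = {raw_tb} TB raw")  # 100 TB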
49. Legal Information: Benchmark and Performance Claims
Disclaimers
Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. Performance tests, such as
SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors
may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including
the performance of that product when combined with other products.
Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual
performance. Consult other sources of information to evaluate performance as you consider your purchase.
Test and System Configurations: See Back up for details.
For more complete information about performance and benchmark results, visit http://www.intel.com/performance.
50. Risk Factors
The above statements and any others in this document that refer to plans and expectations for the first quarter, the year and the future are forward-looking statements that involve a
number of risks and uncertainties. Words such as "anticipates," "expects," "intends," "plans," "believes," "seeks," "estimates," "may," "will," "should" and their variations identify
forward-looking statements. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking statements. Many factors could affect
Intel's actual results, and variances from Intel's current expectations regarding such factors could cause actual results to differ materially from those expressed in these forward-
looking statements. Intel presently considers the following to be important factors that could cause actual results to differ materially from the company's expectations. Demand for
Intel’s products is highly variable and could differ from expectations due to factors including changes in the business and economic conditions; consumer confidence or income levels;
customer acceptance of Intel’s and competitors’ products; competitive and pricing pressures, including actions taken by competitors; supply constraints and other disruptions affecting
customers; changes in customer order patterns including order cancellations; and changes in the level of inventory at customers. Intel’s gross margin percentage could vary
significantly from expectations based on capacity utilization; variations in inventory valuation, including variations related to the timing of qualifying products for sale; changes in
revenue levels; segment product mix; the timing and execution of the manufacturing ramp and associated costs; excess or obsolete inventory; changes in unit costs; defects or
disruptions in the supply of materials or resources; and product manufacturing quality/yields. Variations in gross margin may also be caused by the timing of Intel product
introductions and related expenses, including marketing expenses, and Intel’s ability to respond quickly to technological developments and to introduce new features into existing
products, which may result in restructuring and asset impairment charges. Intel's results could be affected by adverse economic, social, political and physical/infrastructure conditions
in countries where Intel, its customers or its suppliers operate, including military conflict and other security risks, natural disasters, infrastructure disruptions, health concerns and
fluctuations in currency exchange rates. Results may also be affected by the formal or informal imposition by countries of new or revised export and/or import and doing-business
regulations, which could be changed without prior notice. Intel operates in highly competitive industries and its operations have high costs that are either fixed or difficult to reduce in
the short term. The amount, timing and execution of Intel’s stock repurchase program and dividend program could be affected by changes in Intel’s priorities for the use of cash, such
as operational spending, capital spending, acquisitions, and as a result of changes to Intel’s cash flows and changes in tax laws. Product defects or errata (deviations from published
specifications) may adversely impact our expenses, revenues and reputation. Intel’s results could be affected by litigation or regulatory matters involving intellectual property,
stockholder, consumer, antitrust, disclosure and other issues. An unfavorable ruling could include monetary damages or an injunction prohibiting Intel from manufacturing or selling
one or more products, precluding particular business practices, impacting Intel’s ability to design its products, or requiring other remedies such as compulsory licensing of intellectual
property. Intel’s results may be affected by the timing of closing of acquisitions, divestitures and other significant transactions. A detailed discussion of these and other factors that
could affect Intel’s results is included in Intel’s SEC filings, including the company’s most recent reports on Form 10-Q, Form 10-K and earnings release.
Rev. 1/15/15
51.
52. Ceph Benchmark/Deployment/Tuning Tool: CeTune
• CeTune controller – reads configuration files and controls the process to deploy, benchmark and analyze the collected data
• CeTune workers – controlled by CeTune as workload generators and system metrics collectors
53. Intel® Virtual Storage Manager (VSM)
• Management framework = consistent configuration
• Operator-friendly interface for management & monitoring
[Diagram: a VSM agent runs on each Ceph node (OSD/Monitor/MDS cluster servers) and reports to the VSM controller on a controller node server; the controller also connects to client servers and to OpenStack Nova controller server(s) via its API; the operator reaches VSM over the Internet through the data center firewall (SSH/HTTP/socket connections)]