CEPH on 64-bit ARM with X-Gene®
Kumar Sankaran
Associate Vice President,
Software and Platform Engineering
CEPH Day, March 30, 2016
Applied Micro: A Global Presence
Santa Clara, CA (HQ); San Diego, CA; Austin, TX; Raleigh, NC; Andover, MA; Ottawa, Canada
Copenhagen; Paris; Manchester; Munich
Pune and Bangalore, India; Vietnam; Shenzhen, Shanghai, and Beijing, China; Tokyo
Founded in 1978
Headquarters: Santa Clara, CA, USA
NASDAQ: Ticker AMCC
~500 Employees Worldwide
X-Gene®
Industry's First 64-bit ARM Server-on-Chip
CPU Performance
 Strong Single Thread Performance
Power
 Low Power Envelope
 High Perf/Watt
IO Integration
 Integrated 10G NICs
 PCIe Gen 3
 SATA Gen 3
 Programmable Offloads
DRAM Memory
 Low latency
 Multiple channels
 Large Memory Addressability
X-Gene Driving ARM in Data Centers

Milestones:
– October 2009: Dedicated processor team begins 64-bit development
– March 2010: World's 1st ARMv8 Server Architecture Licensee
– April 2012: X-Gene FPGA release enables ecosystem partners
– March 2013: X-Gene silicon samples to customers
– Spring 2014: X-Gene 2 samples to customers
– October 2014: HP's m400 product launch; X-Gene 2 wins ARM TechCon "Best in Show"
– November 2015: X-Gene 3 announced; X-Tend announced

Target customer types: OEM, ODM, Hyperscale, HPC
Target deployments: Networking, Storage, Memcache, Enterprise Search, Mainstream Web Tier
X-Gene® Market Segments

Web Tier
 Web Serving/Proxy: Apache, NGINX, HAProxy
 Web Apps/Hosting: Drupal, WordPress, Rails
 Web Caching: Memcached, Redis
 Databases: MySQL, MongoDB, Cassandra, PostgreSQL

Cold Storage/Big Data/Data Analytics
 Cold Storage: CEPH, GlusterFS, OpenStack Swift/Cinder
 Big Data: Hadoop, MapReduce
 Data Analytics: Lucene, Elasticsearch, Spark

High Performance Computing
 CPU + GPU combination for HPC workloads
X-Gene® Features

X-Gene® Server-on-Chip
 Multi-core 64-bit ARMv8-A CPUs
 Three-level cache hierarchy (L1/L2/L3)
 Out-of-order execution
 HW virtualization
 Coherent network

Memory subsystem
 Multiple DDR3/DDR4 channels

Boot/Power Management
 UEFI with SBBR compliance
 ACPI compliant

Connectivity
 PCIe 3.0, SATA 3.0, USB 3.0
 Integrated 1GbE/10GbE
 UART, I2C, SPI, MDIO, GPIO
X-Gene® Evaluation Kits Available Today
APM X-C1 (Mustang) Platform
APM X-C2 (Merlin) Platform
X-Gene® Production Platforms
Multiple SKUs from Leading OEM and ODM Partners
CEPH Storage Server Based on X-Gene 1
 Microserver featuring the Applied Micro X-Gene 1 "Storm" 64-bit ARM 2.4GHz, 8-core SoC
– Memory: DDR3 UDIMMs, 1600MHz, ECC
– Integrated I/O: 10GbE, PCIe Gen3, SATA Gen3, programmable offloads
– Sub-40W TDP
 Two customer-defined workloads
– Cold-archival (one motherboard) or Hadoop storage (two motherboards)
– Unique 1U form factor in a cost-effective, monolithic chassis to maximize density
 14 internal SATA storage devices: 12 x 3.5” hot-plug, up to 2 x 2.5” boot devices
 Optional JBOD building-block capability for increasingly dense storage needs
 Redundant hot-plug power supplies
 Flexible I/O capability, including integrated 10GbE SFP+ support
Wistron X5 OCP CEPH Server
QS Compute Tray: 8 APM X-Gene 1 ARM64 compute nodes; each node has 8 DIMMs and one 128GB M.2 SSD; 2 QSFP+ connectors from the OCP mezzanine card support 40GbE
DS Storage Tray: 1 APM X-Gene 1 compute node; Marvell storage controller, Aspeed AST1250 BMC; 10 x 3TB 3.5” SATA 6Gb/s HDDs; 1 SFP+ connector supports a 10GbE network
Software: Open-source Linux, Ubuntu 14.04 LTS
Power Supply: 3+1 redundant power supply design, max power 4800W
Sampling Now
Applied Micro and Red Hat Partnership for CEPH
• Red Hat is working in partnership with Applied Micro
• World's first full Ceph reference design on 64-bit ARM, using the Applied Micro CEPH ARMv8 server
• Completing the first Performance and Configuration Guide
CEPH Network Hardware & Software
• X-Gene 1 based Cold Storage Solution
– 4-node cluster of CEPH servers
– 12 SATA HDDs per node
– ~40TB data storage
– Marvell 4-port PCIe SATA expander
• Platform Software
– Operating System: RHELSA 7.2, CentOS 7.2
– BIOS: Tianocore UEFI
• Cluster Software
– Apache v2.4.7, Python v2.7.6
– Ceph “Jewel” early access release
– Ceph Cluster Monitors, Client Load generators
– Ceph Benchmark Toolkit (Open Source tool for Ceph testing/benchmarks)
– FIO, RADOS Bench, other standardized tools for common test results
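
To make the toolkit concrete, here is a minimal write-throughput sketch using the python-rados bindings (the scripted equivalent of a small RADOS Bench run); the conffile path, pool name, object size, and object count are illustrative assumptions rather than values from this test setup.

# bench_sketch.py - minimal rados-bench-style write timing (python-rados).
# Pool name, object size, and object count are illustrative assumptions.
import time
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')      # assumes the default 'rbd' pool exists

payload = b'x' * (4 * 1024 * 1024)     # 4 MiB objects, as rados bench uses
count = 64

start = time.time()
for i in range(count):
    ioctx.write_full('bench_obj_%d' % i, payload)
elapsed = time.time() - start
print('wrote %d objects: %.1f MB/s' % (count, count * 4 / elapsed))

for i in range(count):                 # clean up the test objects
    ioctx.remove_object('bench_obj_%d' % i)
ioctx.close()
cluster.shutdown()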
CEPH Hardware Topology
[Diagram: four servers under test, each attached to a 10GbE/1GbE switch; an HP Moonshot system acting as the CEPH cluster monitor/load generator connects to the same switch over 1GbE.]
Cluster Network Topology
• Ceph Cluster Network – backend network for data replication between OSD daemons
• Ceph Public Network – frontend network for the ceph-client to cluster control path
• External Network – management interface, software installation, gateway

Node1, Node2, Node3, and Node4 each attach to:
– Ceph Public Network: 192.168.3.x
– Ceph Cluster Network: 192.168.4.x
– External Network: 10.58.12.x
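
In ceph.conf terms, this split maps onto the public network and cluster network options; a sketch of the relevant fragment follows, where the /24 prefix lengths are an assumption rather than something stated in the deck.

# /etc/ceph/ceph.conf (fragment) - sketch matching the subnets above;
# the /24 prefix lengths are assumed.
[global]
public network = 192.168.3.0/24
cluster network = 192.168.4.0/24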
CEPH Feature Summary

Object Storage
 Unlimited object size
 Integrates with OpenStack Keystone
 Multi-tenant
 S3/Swift API
 Usage statistics

Block Storage
 Thin provisioning
 Copy-on-write
 Integrates with KVM/QEMU/libvirt
 Linux kernel support
 Snapshots/cloning

File System
 Mainline Linux kernel support
 Auto-balancing metadata servers
 POSIX compliant
 OpenStack Cinder
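
The block-storage features above (thin provisioning, snapshots, copy-on-write clones) can be exercised directly from the python-rbd bindings; a minimal sketch, with the pool and image names as illustrative assumptions:

# rbd_features_sketch.py - thin image, snapshot, copy-on-write clone.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')          # pool name is an assumption

r = rbd.RBD()
# Images are thin-provisioned: the 10 GiB is allocated lazily on write.
# Layering is the feature that enables copy-on-write clones.
r.create(ioctx, 'demo-img', 10 * 1024 ** 3,
         old_format=False, features=rbd.RBD_FEATURE_LAYERING)

img = rbd.Image(ioctx, 'demo-img')
img.create_snap('base')        # point-in-time snapshot
img.protect_snap('base')       # snapshots must be protected before cloning
img.close()

# Copy-on-write clone that shares unmodified data with the snapshot.
r.clone(ioctx, 'demo-img', 'base', ioctx, 'demo-clone',
        features=rbd.RBD_FEATURE_LAYERING)

ioctx.close()
cluster.shutdown()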
CEPH Software Stack
Interfaces: S3/Swift, Host/Hypervisor, iSCSI, CIFS/NFS, SDK
Storage: Object Storage, Block Storage, File System
Clusters: Monitors, Object Storage Daemons (OSDs)
CEPH Software Architecture
[Stack diagram: Applied Micro CEPH storage server using X-Gene 1, booting Tianocore UEFI; Ceph OSDs on EXT4 over RADOS, alongside Ceph configuration, NTP, and SSH services.]
CEPH Components
 Ceph OSD
 Daemon that manages and stores cluster data
 One OSD per disk, 48 in total across the cluster
 EXT4 file system on each OSD
 Ceph MON (M)
 Monitor that checks the health of the cluster and OSD status
Image courtesy: Inktank
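
The monitors can also be queried programmatically; here is a sketch of the library equivalent of `ceph -s`, using python-rados, with the conffile path as an assumption:

# health_sketch.py - ask the MONs for cluster health and OSD status.
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

cmd = json.dumps({'prefix': 'status', 'format': 'json'})
ret, outbuf, errstr = cluster.mon_command(cmd, b'')
status = json.loads(outbuf)

print('health: %s' % status['health'])   # overall cluster health
print('osdmap: %s' % status['osdmap'])   # up/in counts for the OSDs
cluster.shutdown()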
CEPH Cluster Access
– librados interface
• ceph-admin
• rados client
– RADOS gateway
• Amazon S3 compatible
Image courtesy: Inktank
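
Because the RADOS gateway speaks the S3 protocol, stock S3 clients work against it unmodified; a sketch using the boto library, where the endpoint host and credentials are placeholders for a locally deployed gateway:

# s3_sketch.py - talk to the RADOS gateway with a stock S3 client (boto).
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',        # placeholder credentials
    aws_secret_access_key='SECRET_KEY',
    host='rgw.example.local', port=7480,   # placeholder endpoint; 7480 is
    is_secure=False,                       # the Jewel civetweb default port
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('demo-bucket')
key = bucket.new_key('hello.txt')
key.set_contents_from_string('hello from the RADOS gateway')
print(key.get_contents_as_string())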
Librados Library
1. LIBRADOS – a client library for direct access of data in a RADOS cluster
2. Ceph Object Gateway – RESTful client access
3. Ceph Block Device – host/VM client access
4. Ceph File System – POSIX fs access via a native client
RADOS – a reliable, autonomous, distributed object store
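
Path 1 above is the thinnest: a client links librados and reads and writes objects directly. A minimal sketch with the python-rados bindings, where the pool and object names are illustrative:

# librados_sketch.py - direct object access through LIBRADOS.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')              # pool name is an assumption

ioctx.write_full('hello_object', b'hello from librados')
print(ioctx.read('hello_object'))              # -> b'hello from librados'
print(ioctx.get_stats()['num_objects'])        # pool-level statistics

ioctx.remove_object('hello_object')
ioctx.close()
cluster.shutdown()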
CEPH Performance Benchmarking – Seq Read/Write
CEPH Performance Benchmarking – Random Read/Write
CEPH Upstream Status
• There is enough community/upstream and customer interest to begin the path to a supported build of Ceph for ARM
• Step 1: RHELSA (RHEL Server for ARM) is currently released as a Developer Tech Preview in RHEL 7.2
• Step 2: The first Ceph-on-ARM builds will be available in the Jewel release in May 2016; Ceph on ARM becomes available upstream!
• Step 3: Community, developers, and customers start making use of Ceph on ARM
Conclusion
• The goal of the Ceph Performance and Configuration Guide is to offer guidance to users wishing to utilize APM ARM servers
• The guide will describe the performance that can be expected for the various tested workloads, with guidance on the configurations that were tested for repeatable results in the field
• Block use cases for the RADOS Block Device (RBD) at various block sizes, with results for each (e.g., 4K random)
– RBD via the Linux kernel; persistent back-end storage for virtual machines and containers; KVM, QEMU, libvirt
• Object use cases for Swift/S3-compatible Ceph objects (RADOS benchmark) for sequential read/write object workloads
– Digital media, content delivery networks, archive storage, cloud object storage services