Using Ceph with OpenNebula 
John Spray 
john.spray@redhat.com
Agenda 
● What is it? 
● Architecture 
● Integration with OpenNebula 
● What's new? 
2 OpenNebulaConf 2014 Berlin
What is Ceph? 
3 OpenNebulaConf 2014 Berlin
What is Ceph? 
● Highly available resilient data store 
● Free Software (LGPL) 
● 10 years since inception 
● Flexible object, block and filesystem interfaces 
● Especially popular in private clouds as a VM image service and as an S3-compatible object storage service. 
4 OpenNebulaConf 2014 Berlin
Interfaces to storage 
OBJECT STORAGE (RGW): S3 & Swift, Multi-tenant, Keystone, Geo-Replication, Native API 
BLOCK STORAGE (RBD): Snapshots, Clones, OpenStack, Linux Kernel, iSCSI 
FILE SYSTEM (CephFS): POSIX, Linux Kernel, CIFS/NFS, HDFS, Distributed Metadata 
5 OpenNebulaConf 2014 Berlin 
Ceph Architecture 
6 OpenNebulaConf 2014 Berlin
Architectural Components 
APP HOST/VM CLIENT 
RGW: A web services gateway for object storage, compatible with S3 and Swift 
RBD: A reliable, fully-distributed block device with cloud platform integration 
CEPHFS: A distributed file system with POSIX semantics and scale-out metadata management 
LIBRADOS: A library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP) 
RADOS: A software-based, reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes and lightweight monitors 
7 OpenNebulaConf 2014 Berlin
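For a concrete feel of the RADOS layer described above, here is a minimal sketch using the rados CLI that ships with Ceph (the pool name and file paths are arbitrary examples, not from the talk): 
ceph osd pool create demo 64                # a pool with 64 placement groups 
rados -p demo put greeting /etc/hostname    # store a file's contents as object "greeting" 
rados -p demo ls                            # list objects in the pool 
rados -p demo get greeting /tmp/greeting    # read the object back into a file 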
Object Storage Daemons 
[Diagram: each OSD daemon runs on top of a local filesystem (btrfs, xfs or ext4) on its own disk; monitors (M) run alongside] 
8 OpenNebulaConf 2014 Berlin
RADOS Components 
OSDs: 
 10s to 10000s in a cluster 
 One per disk (or one per SSD, RAID group…) 
 Serve stored objects to clients 
 Intelligently peer for replication & recovery 
Monitors: 
 Maintain cluster membership and state 
 Provide consensus for distributed decision-making 
 Small, odd number 
 These do not serve stored objects to clients 
9 OpenNebulaConf 2014 Berlin
Rados Cluster 
[Diagram: an APPLICATION talks directly to the RADOS CLUSTER of monitors (M) and OSDs] 
10 OpenNebulaConf 2014 Berlin
Where do objects live? 
[Diagram: an APPLICATION holding an OBJECT asks which node in the cluster it should live on (??)] 
11 OpenNebulaConf 2014 Berlin 
A Metadata Server? 
[Diagram: (1) the APPLICATION asks a metadata server where the OBJECT lives, (2) then contacts the returned location] 
12 OpenNebulaConf 2014 Berlin 
Calculated placement 
[Diagram: the APPLICATION applies a placement function F that maps objects onto static ranges A-G, H-N, O-T, U-Z] 
13 OpenNebulaConf 2014 Berlin 
Even better: CRUSH 
14 OpenNebulaConf 2014 Berlin 
[Diagram: CRUSH maps the OBJECT directly onto OSDs in the RADOS CLUSTER]
CRUSH is a quick calculation 
15 OpenNebulaConf 2014 Berlin 
[Diagram: the client computes the OBJECT's placement in the RADOS CLUSTER itself, with no lookup]
CRUSH: Dynamic data placement 
CRUSH: 
 Pseudo-random placement algorithm 
 Fast calculation, no lookup 
 Repeatable, deterministic 
 Statistically uniform distribution 
 Stable mapping 
 Limited data migration on change 
 Rule-based configuration (example rule after this slide) 
 Infrastructure topology aware 
 Adjustable replication 
 Weighting 
16 OpenNebulaConf 2014 Berlin
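As an illustration of rule-based, topology-aware placement, here is a sketch in the decompiled CRUSH map syntax of this era (fetch the map with ceph osd getcrushmap and decompile it with crushtool -d; the rule and bucket names are examples): 
rule replicated_ruleset { 
    ruleset 0 
    type replicated 
    min_size 1 
    max_size 10 
    step take default                     # start from the root of the hierarchy 
    step chooseleaf firstn 0 type host    # pick one OSD from each of N distinct hosts 
    step emit 
} 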
Architectural Components 
APP HOST/VM CLIENT 
RGW: A web services gateway for object storage, compatible with S3 and Swift 
RBD: A reliable, fully-distributed block device with cloud platform integration 
CEPHFS: A distributed file system with POSIX semantics and scale-out metadata management 
LIBRADOS: A library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP) 
RADOS: A software-based, reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes and lightweight monitors 
17 OpenNebulaConf 2014 Berlin
RBD: Virtual disks in Ceph 
18 OpenNebulaConf 2014 Berlin 
RADOS BLOCK DEVICE: 
 Storage of disk images in RADOS 
 Decouples VMs from host 
 Images are striped across the cluster (pool) 
 Snapshots 
 Copy-on-write clones (see the rbd sketch after this slide) 
 Support in: 
 Mainline Linux Kernel (2.6.39+) 
 Qemu/KVM 
 OpenStack, CloudStack, OpenNebula, Proxmox
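A rough sketch of the snapshot and clone features listed above (pool and image names are invented; sizes are given in MB, as the rbd CLI of this era expects): 
rbd create --size 10240 one/base-image           # a 10 GB image in pool "one" 
rbd snap create one/base-image@golden            # point-in-time snapshot 
rbd snap protect one/base-image@golden           # snapshots must be protected before cloning 
rbd clone one/base-image@golden one/vm-0-disk-0  # thin, copy-on-write clone for a new VM 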
Storing virtual disks 
[Diagram: the VM's virtual disk is accessed by the HYPERVISOR through LIBRBD, which stores it in the RADOS CLUSTER] 
19 OpenNebulaConf 2014 Berlin
Using Ceph with OpenNebula 
20 OpenNebulaConf 2014 Berlin
Storage in OpenNebula deployments 
OpenNebula Cloud Architecture Survey 2014 (http://c12g.com/resources/survey/) 
21 OpenNebulaConf 2014 Berlin
RBD and libvirt/qemu 
● librbd (user space) client integration with libvirt/qemu 
● Support for live migration, thin clones 
● Get recent versions! 
● Directly supported in OpenNebula since 4.0 with the Ceph Datastore (wraps the `rbd` CLI; example template after this slide) 
More info online: 
http://ceph.com/docs/master/rbd/libvirt/ 
http://docs.opennebula.org/4.10/administration/storage/ceph_ds.html 
22 OpenNebulaConf 2014 Berlin
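As a minimal sketch of registering a Ceph Datastore (the pool name, monitor hosts, libvirt secret UUID and bridge host below are placeholders; the OpenNebula docs linked above are authoritative), a template file such as ceph.ds might contain: 
NAME        = ceph_ds 
DS_MAD      = ceph 
TM_MAD      = ceph 
DISK_TYPE   = RBD 
POOL_NAME   = one 
CEPH_HOST   = "mon1:6789 mon2:6789" 
CEPH_USER   = libvirt 
CEPH_SECRET = "<libvirt secret UUID>" 
BRIDGE_LIST = "ceph-frontend-1" 
and is then registered with: onedatastore create ceph.ds 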
Other hypervisors 
● OpenNebula is flexible, so can we also use Ceph with non-libvirt/qemu hypervisors? 
● Kernel RBD: can present RBD images in /dev/ on the hypervisor host for software unaware of librbd (sketch after this slide) 
● Docker: can exploit RBD volumes with a local filesystem for use as data volumes – maybe CephFS in future...? 
● For unsupported hypervisors, can adapt to Ceph using e.g. iSCSI for RBD, or NFS for CephFS (but test re-exports carefully!) 
23 OpenNebulaConf 2014 Berlin
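A sketch of the kernel RBD route mentioned above (pool and image names are examples): 
rbd create --size 20480 one/plain-vm-disk   # 20 GB image 
rbd map one/plain-vm-disk                   # kernel client exposes it as /dev/rbd* 
rbd showmapped                              # list mapped images and their device nodes 
rbd unmap /dev/rbd0                         # release the device (path as reported by showmapped) 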
Choosing hardware 
Testing/benchmarking/expert advice is needed, but there 
are general guidelines: 
● Prefer many cheap nodes to few expensive nodes (10 is better than 3) 
● Include small but fast SSDs for OSD journals 
● Don't simply buy the biggest drives: consider the IOPS/capacity ratio 
● Provision network and I/O capacity sufficient for your workload plus recovery bandwidth from node failure. 
24 OpenNebulaConf 2014 Berlin
What's new? 
25 OpenNebulaConf 2014 Berlin
Ceph releases 
● Ceph 0.80 firefly (May 2014) 
– Cache tiering & erasure coding (command sketch after this slide) 
– Key/val OSD backends 
– OSD primary affinity 
● Ceph 0.87 giant (October 2014) 
– RBD cache enabled by default 
– Performance improvements 
– Locally recoverable erasure codes 
● Ceph x.xx hammer (2015) 
26 OpenNebulaConf 2014 Berlin
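A hedged sketch of the firefly-era erasure coding and cache tiering commands (pool names, PG counts and the k/m profile are arbitrary examples): 
ceph osd erasure-code-profile set ec42 k=4 m=2    # 4 data + 2 coding chunks 
ceph osd pool create ecpool 128 128 erasure ec42  # erasure-coded base pool 
ceph osd pool create cachepool 128                # replicated pool to act as cache 
ceph osd tier add ecpool cachepool                # attach the cache tier 
ceph osd tier cache-mode cachepool writeback 
ceph osd tier set-overlay ecpool cachepool        # clients are redirected through the cache 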
Additional components 
● Ceph FS – scale-out POSIX filesystem service, 
currently being stabilized 
● Calamari – monitoring dashboard for Ceph 
● ceph-deploy – easy SSH-based deployment tool 
● Puppet, Chef modules 
27 OpenNebulaConf 2014 Berlin
Get involved 
Evaluate the latest releases: 
http://ceph.com/resources/downloads/ 
Mailing list, IRC: 
http://ceph.com/resources/mailing-list-irc/ 
Bugs: 
http://tracker.ceph.com/projects/ceph/issues 
Online developer summits: 
https://wiki.ceph.com/Planning/CDS 
28 OpenNebulaConf 2014 Berlin
Questions? 
29 OpenNebulaConf 2014 Berlin
30 OpenNebulaConf 2014 Berlin
Spare slides 
31 OpenNebulaConf 2014 Berlin
32 OpenNebulaConf 2014 Berlin
Ceph FS 
33 OpenNebulaConf 2014 Berlin
CephFS architecture 
● Dynamically balanced scale-out metadata 
● Inherit flexibility/scalability of RADOS for data 
● POSIX compatibility 
● Beyond POSIX: Subtree snapshots, recursive statistics 
Weil, Sage A., et al. "Ceph: A scalable, high-performance distributed file 
system." Proceedings of the 7th symposium on Operating systems 
design and implementation. USENIX Association, 2006. 
http://ceph.com/papers/weil-ceph-osdi06.pdf 
34 OpenNebulaConf 2014 Berlin
Components 
● Client: kernel, fuse, libcephfs 
● Server: MDS daemon 
● Storage: RADOS cluster (mons & OSDs) 
35 OpenNebulaConf 2014 Berlin
Components 
Linux host 
ceph.ko 
metadata 01 data 
10 
M M 
M 
Ceph server daemons 
36 OpenNebulaConf 2014 Berlin
From application to disk 
Application 
ceph-fuse libcephfs Kernel client 
ceph-mds 
Client network protocol 
RADOS 
Disk 
37 OpenNebulaConf 2014 Berlin
Scaling out FS metadata 
● Options for distributing metadata? 
– by static subvolume 
– by path hash 
– by dynamic subtree 
● Consider performance, ease of implementation 
38 OpenNebulaConf 2014 Berlin
Dynamic subtree placement 
39 OpenNebulaConf 2014 Berlin
Dynamic subtree placement 
● Locality: get the dentries in a dir from one MDS 
● Support read-heavy workloads by replicating non-authoritative copies (cached with capabilities, just like clients do) 
● In practice, work at the directory fragment level in order to handle large dirs 
40 OpenNebulaConf 2014 Berlin
Data placement 
● Stripe file contents across RADOS objects 
● get full RADOS cluster bandwidth from clients 
● fairly tolerant of object losses: reads return zeros 
● Control striping with layout vxattrs (example after this slide) 
● layouts also select between multiple data pools 
● Deletion is a special case: client deletions mark files 'stray'; the RADOS delete ops are sent by the MDS 
41 OpenNebulaConf 2014 Berlin
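A sketch of the layout vxattrs mentioned above (pool and path names are invented; an extra data pool must be registered with the filesystem before it can be used in a layout): 
getfattr -n ceph.file.layout /mnt/ceph/somefile                   # show stripe_unit, stripe_count, object_size, pool 
setfattr -n ceph.dir.layout.stripe_count -v 4 /mnt/ceph/bigfiles  # wider striping for new files in this dir 
ceph mds add_data_pool fs_data_ssd                                # register an additional data pool 
setfattr -n ceph.dir.layout.pool -v fs_data_ssd /mnt/ceph/fast    # new files here go to that pool 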
Clients 
● Two implementations: 
● ceph-fuse/libcephfs 
● kclient 
● Interplay with the VFS page cache; efficiency is harder with FUSE (extraneous stats etc.) 
● Client performance matters for single-client workloads 
● A slow client can hold up others if it's hogging metadata locks: include clients in troubleshooting 
42 OpenNebulaConf 2014 Berlin
Journaling and caching in MDS 
● Metadata ops are initially journaled to a striped journal "file" in the metadata pool. 
– I/O latency on metadata ops is the sum of network latency and journal commit latency. 
– Metadata remains pinned in the in-memory cache until expired from the journal. 
43 OpenNebulaConf 2014 Berlin
Journaling and caching in MDS 
● In some workloads we expect almost all metadata to stay in cache; in others it's more of a stream. 
● Control cache size with mds_cache_size (ceph.conf sketch after this slide) 
● Cache eviction relies on client cooperation 
● MDS journal replay not only recovers data but also warms up the cache. Use standby replay to keep that cache warm. 
44 OpenNebulaConf 2014 Berlin
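A minimal ceph.conf sketch for the two points above (the cache size value is illustrative, not a recommendation): 
[mds] 
    mds cache size = 500000      # upper bound on cached inodes 
    mds standby replay = true    # standby MDS tails the active journal, keeping its cache warm 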
Lookup by inode 
● Sometimes we need inode → path mapping: 
● Hard links 
● NFS handles 
● Costly to store this: mitigate by piggybacking paths (backtraces) onto data objects 
● Con: storing metadata to data pool 
● Con: extra IOs to set backtraces 
● Pro: disaster recovery from data pool 
● Future: improve backtrace writing latency 
45 OpenNebulaConf 2014 Berlin
CephFS in practice 
ceph-deploy mds create myserver 
ceph osd pool create fs_data 64 
ceph osd pool create fs_metadata 64 
ceph fs new myfs fs_metadata fs_data 
mount -t ceph x.x.x.x:6789:/ /mnt/ceph -o name=admin,secret=... 
46 OpenNebulaConf 2014 Berlin
Managing CephFS clients 
● New in giant: see hostnames of connected clients 
● Client eviction is sometimes important: 
● Skip the wait during reconnect phase on MDS restart 
● Allow others to access files locked by crashed client 
● Use OpTracker to inspect ongoing operations 
47 OpenNebulaConf 2014 Berlin
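A sketch of the corresponding admin-socket commands (the MDS name and client id are placeholders): 
ceph daemon mds.myserver session ls                 # connected clients; giant adds hostname metadata 
ceph daemon mds.myserver dump_ops_in_flight         # OpTracker view of ongoing operations 
ceph daemon mds.myserver session evict <client-id>  # forcibly evict a stale or misbehaving client 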
CephFS tips 
● Choose MDS servers with lots of RAM 
● Investigate clients when diagnosing stuck/slow access 
● Use recent Ceph and recent kernel 
● Use a conservative configuration: 
● Single active MDS, plus one standby 
● Dedicated MDS server 
● Kernel client 
● No snapshots, no inline data 
48 OpenNebulaConf 2014 Berlin
Towards a production-ready CephFS 
● Focus on resilience: 
1. Don't corrupt things 
2. Stay up 
3. Handle the corner cases 
4. When something is wrong, tell me 
5. Provide the tools to diagnose and fix problems 
● Achieve this first within a conservative single-MDS 
configuration 
49 OpenNebulaConf 2014 Berlin
Giant->Hammer timeframe 
● Initial online fsck (a.k.a. forward scrub) 
● Online diagnostics (`session ls`, MDS health alerts) 
● Journal resilience & tools (cephfs-journal-tool) 
● flock in the FUSE client 
● Initial soft quota support 
● General resilience: full OSDs, full metadata cache 
50 OpenNebulaConf 2014 Berlin
FSCK and repair 
● Recover from damage: 
● Loss of data objects (which files are damaged?) 
● Loss of metadata objects (what subtree is damaged?) 
● Continuous verification: 
● Are recursive stats consistent? 
● Does metadata on disk match cache? 
● Does file size metadata match data on disk? 
● Repair: 
● Automatic where possible 
● Manual tools to enable support 
51 OpenNebulaConf 2014 Berlin
Client management 
● Current eviction is not 100% safe against rogue clients 
● Update to client protocol to wait for OSD blacklist 
● Client metadata 
● Initially domain name, mount point 
● Extension to other identifiers? 
52 OpenNebulaConf 2014 Berlin
Online diagnostics 
● Bugs exposed relate to failures of one client to release resources for another client: “my filesystem is frozen”. Introduce new health messages: 
● “client xyz is failing to respond to cache pressure” 
● “client xyz is ignoring capability release messages” 
● Add client metadata to allow us to give domain names instead of IP addrs in messages. 
● Opaque behavior in the face of dead clients. Introduce `session ls`: 
● Which clients does the MDS think are stale? 
● Identify clients to evict with `session evict` 
53 OpenNebulaConf 2014 Berlin
Journal resilience 
● A bad journal prevents MDS recovery: “my MDS crashes on startup”. Causes include: 
● Data loss 
● Software bugs 
● Updated on-disk format to make recovery from damage easier 
● New tool: cephfs-journal-tool 
● Inspect the journal, search/filter 
● Chop out unwanted entries/regions 
54 OpenNebulaConf 2014 Berlin
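A sketch of typical cephfs-journal-tool usage (export a backup before anything destructive; file names are examples): 
cephfs-journal-tool journal inspect                 # check journal integrity 
cephfs-journal-tool journal export backup.bin       # save a copy first 
cephfs-journal-tool event recover_dentries summary  # salvage metadata from journal events 
cephfs-journal-tool journal reset                   # last resort: discard the damaged journal 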
Handling resource limits 
● Write a test, see what breaks! 
● Full MDS cache: 
● Require some free memory to make progress 
● Require client cooperation to unpin cache objects 
● Anticipate tuning required for cache behaviour: what should we evict? 
● Full OSD cluster 
● Require explicit handling to abort with -ENOSPC 
● MDS → RADOS flow control: 
● Contention between I/O to flush cache and I/O to journal 
55 OpenNebulaConf 2014 Berlin
Test, QA, bug fixes 
● The answer to “Is CephFS production ready?” 
● teuthology test framework: 
● Long running/thrashing test 
● Third party FS correctness tests 
● Python functional tests 
● We dogfood CephFS internally 
● Various kclient fixes discovered 
● Motivation for new health monitoring metrics 
● Third party testing is extremely valuable 
56 OpenNebulaConf 2014 Berlin
What's next? 
● You tell us! 
● Recent survey highlighted: 
● FSCK hardening 
● Multi-MDS hardening 
● Quota support 
● Which use cases will community test with? 
● General purpose 
● Backup 
● Hadoop 
57 OpenNebulaConf 2014 Berlin
Reporting bugs 
● Does the most recent development release or kernel fix your issue? 
● What is your configuration? MDS config, Ceph version, client version, kclient or fuse 
● What is your workload? 
● Can you reproduce with debug logging enabled? 
http://ceph.com/resources/mailing-list-irc/ 
http://tracker.ceph.com/projects/ceph/issues 
http://ceph.com/docs/master/rados/troubleshooting/log-and-debug/ 
58 OpenNebulaConf 2014 Berlin
Future 
● Ceph Developer Summit: 
● When: 8 October 
● Where: online 
● Post-Hammer work: 
● Recent survey highlighted multi-MDS, quota support 
● Testing with clustered Samba/NFS? 
59 OpenNebulaConf 2014 Berlin

More Related Content

What's hot

OpenNebulaconf2017US: Multi-Site Hyperconverged OpenNebula with DRBD9
OpenNebulaconf2017US: Multi-Site Hyperconverged OpenNebula with DRBD9OpenNebulaconf2017US: Multi-Site Hyperconverged OpenNebula with DRBD9
OpenNebulaconf2017US: Multi-Site Hyperconverged OpenNebula with DRBD9OpenNebula Project
 
TechDay - Toronto 2016 - Hyperconvergence and OpenNebula
TechDay - Toronto 2016 - Hyperconvergence and OpenNebulaTechDay - Toronto 2016 - Hyperconvergence and OpenNebula
TechDay - Toronto 2016 - Hyperconvergence and OpenNebulaOpenNebula Project
 
Ceph Tech Talk: Ceph at DigitalOcean
Ceph Tech Talk: Ceph at DigitalOceanCeph Tech Talk: Ceph at DigitalOcean
Ceph Tech Talk: Ceph at DigitalOceanCeph Community
 
Ceph, Now and Later: Our Plan for Open Unified Cloud Storage
Ceph, Now and Later: Our Plan for Open Unified Cloud StorageCeph, Now and Later: Our Plan for Open Unified Cloud Storage
Ceph, Now and Later: Our Plan for Open Unified Cloud StorageSage Weil
 
Ceph - A distributed storage system
Ceph - A distributed storage systemCeph - A distributed storage system
Ceph - A distributed storage systemItalo Santos
 
The State of Ceph, Manila, and Containers in OpenStack
The State of Ceph, Manila, and Containers in OpenStackThe State of Ceph, Manila, and Containers in OpenStack
The State of Ceph, Manila, and Containers in OpenStackSage Weil
 
OpenNebulaConf 2016 - Hypervisors and Containers Hands-on Workshop by Jaime M...
OpenNebulaConf 2016 - Hypervisors and Containers Hands-on Workshop by Jaime M...OpenNebulaConf 2016 - Hypervisors and Containers Hands-on Workshop by Jaime M...
OpenNebulaConf 2016 - Hypervisors and Containers Hands-on Workshop by Jaime M...OpenNebula Project
 
GlusterFS and Openstack Storage
GlusterFS and Openstack StorageGlusterFS and Openstack Storage
GlusterFS and Openstack StorageDeepak Shetty
 
Ceph and Mirantis OpenStack
Ceph and Mirantis OpenStackCeph and Mirantis OpenStack
Ceph and Mirantis OpenStackMirantis
 
Introduction to highly_availablenfs_server_on_scale-out_storage_systems_based...
Introduction to highly_availablenfs_server_on_scale-out_storage_systems_based...Introduction to highly_availablenfs_server_on_scale-out_storage_systems_based...
Introduction to highly_availablenfs_server_on_scale-out_storage_systems_based...Gluster.org
 
Ceph data services in a multi- and hybrid cloud world
Ceph data services in a multi- and hybrid cloud worldCeph data services in a multi- and hybrid cloud world
Ceph data services in a multi- and hybrid cloud worldSage Weil
 
GlusterFS Native driver for Openstack Manila at GlusterNight Paris @ Openstac...
GlusterFS Native driver for Openstack Manila at GlusterNight Paris @ Openstac...GlusterFS Native driver for Openstack Manila at GlusterNight Paris @ Openstac...
GlusterFS Native driver for Openstack Manila at GlusterNight Paris @ Openstac...Deepak Shetty
 
Accessing gluster ufo_-_eco_willson
Accessing gluster ufo_-_eco_willsonAccessing gluster ufo_-_eco_willson
Accessing gluster ufo_-_eco_willsonGluster.org
 
Dude where's my volume, open stack summit vancouver 2015
Dude where's my volume, open stack summit vancouver 2015Dude where's my volume, open stack summit vancouver 2015
Dude where's my volume, open stack summit vancouver 2015Sean Cohen
 
Compute 101 - OpenStack Summit Vancouver 2015
Compute 101 - OpenStack Summit Vancouver 2015Compute 101 - OpenStack Summit Vancouver 2015
Compute 101 - OpenStack Summit Vancouver 2015Stephen Gordon
 
2021.02 new in Ceph Pacific Dashboard
2021.02 new in Ceph Pacific Dashboard2021.02 new in Ceph Pacific Dashboard
2021.02 new in Ceph Pacific DashboardCeph Community
 
DOST: Ceph in a security critical OpenStack cloud
DOST: Ceph in a security critical OpenStack cloudDOST: Ceph in a security critical OpenStack cloud
DOST: Ceph in a security critical OpenStack cloudDanny Al-Gaaf
 

What's hot (20)

OpenNebulaconf2017US: Multi-Site Hyperconverged OpenNebula with DRBD9
OpenNebulaconf2017US: Multi-Site Hyperconverged OpenNebula with DRBD9OpenNebulaconf2017US: Multi-Site Hyperconverged OpenNebula with DRBD9
OpenNebulaconf2017US: Multi-Site Hyperconverged OpenNebula with DRBD9
 
TechDay - Toronto 2016 - Hyperconvergence and OpenNebula
TechDay - Toronto 2016 - Hyperconvergence and OpenNebulaTechDay - Toronto 2016 - Hyperconvergence and OpenNebula
TechDay - Toronto 2016 - Hyperconvergence and OpenNebula
 
Ceph Tech Talk: Ceph at DigitalOcean
Ceph Tech Talk: Ceph at DigitalOceanCeph Tech Talk: Ceph at DigitalOcean
Ceph Tech Talk: Ceph at DigitalOcean
 
Ceph, Now and Later: Our Plan for Open Unified Cloud Storage
Ceph, Now and Later: Our Plan for Open Unified Cloud StorageCeph, Now and Later: Our Plan for Open Unified Cloud Storage
Ceph, Now and Later: Our Plan for Open Unified Cloud Storage
 
Block Storage For VMs With Ceph
Block Storage For VMs With CephBlock Storage For VMs With Ceph
Block Storage For VMs With Ceph
 
Ceph - A distributed storage system
Ceph - A distributed storage systemCeph - A distributed storage system
Ceph - A distributed storage system
 
The State of Ceph, Manila, and Containers in OpenStack
The State of Ceph, Manila, and Containers in OpenStackThe State of Ceph, Manila, and Containers in OpenStack
The State of Ceph, Manila, and Containers in OpenStack
 
OpenNebulaConf 2016 - Hypervisors and Containers Hands-on Workshop by Jaime M...
OpenNebulaConf 2016 - Hypervisors and Containers Hands-on Workshop by Jaime M...OpenNebulaConf 2016 - Hypervisors and Containers Hands-on Workshop by Jaime M...
OpenNebulaConf 2016 - Hypervisors and Containers Hands-on Workshop by Jaime M...
 
Ceph on Windows
Ceph on WindowsCeph on Windows
Ceph on Windows
 
GlusterFS and Openstack Storage
GlusterFS and Openstack StorageGlusterFS and Openstack Storage
GlusterFS and Openstack Storage
 
Ceph and Mirantis OpenStack
Ceph and Mirantis OpenStackCeph and Mirantis OpenStack
Ceph and Mirantis OpenStack
 
Rethinking the OS
Rethinking the OSRethinking the OS
Rethinking the OS
 
Introduction to highly_availablenfs_server_on_scale-out_storage_systems_based...
Introduction to highly_availablenfs_server_on_scale-out_storage_systems_based...Introduction to highly_availablenfs_server_on_scale-out_storage_systems_based...
Introduction to highly_availablenfs_server_on_scale-out_storage_systems_based...
 
Ceph data services in a multi- and hybrid cloud world
Ceph data services in a multi- and hybrid cloud worldCeph data services in a multi- and hybrid cloud world
Ceph data services in a multi- and hybrid cloud world
 
GlusterFS Native driver for Openstack Manila at GlusterNight Paris @ Openstac...
GlusterFS Native driver for Openstack Manila at GlusterNight Paris @ Openstac...GlusterFS Native driver for Openstack Manila at GlusterNight Paris @ Openstac...
GlusterFS Native driver for Openstack Manila at GlusterNight Paris @ Openstac...
 
Accessing gluster ufo_-_eco_willson
Accessing gluster ufo_-_eco_willsonAccessing gluster ufo_-_eco_willson
Accessing gluster ufo_-_eco_willson
 
Dude where's my volume, open stack summit vancouver 2015
Dude where's my volume, open stack summit vancouver 2015Dude where's my volume, open stack summit vancouver 2015
Dude where's my volume, open stack summit vancouver 2015
 
Compute 101 - OpenStack Summit Vancouver 2015
Compute 101 - OpenStack Summit Vancouver 2015Compute 101 - OpenStack Summit Vancouver 2015
Compute 101 - OpenStack Summit Vancouver 2015
 
2021.02 new in Ceph Pacific Dashboard
2021.02 new in Ceph Pacific Dashboard2021.02 new in Ceph Pacific Dashboard
2021.02 new in Ceph Pacific Dashboard
 
DOST: Ceph in a security critical OpenStack cloud
DOST: Ceph in a security critical OpenStack cloudDOST: Ceph in a security critical OpenStack cloud
DOST: Ceph in a security critical OpenStack cloud
 

Similar to OpenNebulaConf 2014 - Using Ceph to provide scalable storage for OpenNebula - John Spray

Ceph Day London 2014 - The current state of CephFS development
Ceph Day London 2014 - The current state of CephFS development Ceph Day London 2014 - The current state of CephFS development
Ceph Day London 2014 - The current state of CephFS development Ceph Community
 
The Future of Cloud Software Defined Storage with Ceph: Andrew Hatfield, Red Hat
The Future of Cloud Software Defined Storage with Ceph: Andrew Hatfield, Red HatThe Future of Cloud Software Defined Storage with Ceph: Andrew Hatfield, Red Hat
The Future of Cloud Software Defined Storage with Ceph: Andrew Hatfield, Red HatOpenStack
 
OSDC 2015: John Spray | The Ceph Storage System
OSDC 2015: John Spray | The Ceph Storage SystemOSDC 2015: John Spray | The Ceph Storage System
OSDC 2015: John Spray | The Ceph Storage SystemNETWAYS
 
CEPH DAY BERLIN - WHAT'S NEW IN CEPH
CEPH DAY BERLIN - WHAT'S NEW IN CEPH CEPH DAY BERLIN - WHAT'S NEW IN CEPH
CEPH DAY BERLIN - WHAT'S NEW IN CEPH Ceph Community
 
DevConf 2017 - Realistic Container Platform Simulations
DevConf 2017 - Realistic Container Platform SimulationsDevConf 2017 - Realistic Container Platform Simulations
DevConf 2017 - Realistic Container Platform SimulationsJeremy Eder
 
MongoDB World 2018: Partner Talk - Red Hat: Deploying to Enterprise Kubernetes
MongoDB World 2018: Partner Talk - Red Hat: Deploying to Enterprise KubernetesMongoDB World 2018: Partner Talk - Red Hat: Deploying to Enterprise Kubernetes
MongoDB World 2018: Partner Talk - Red Hat: Deploying to Enterprise KubernetesMongoDB
 
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019Storage 101: Rook and Ceph - Open Infrastructure Denver 2019
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019Sean Cohen
 
Red Hat Storage 2014 - Product(s) Overview
Red Hat Storage 2014 - Product(s) OverviewRed Hat Storage 2014 - Product(s) Overview
Red Hat Storage 2014 - Product(s) OverviewMarcel Hergaarden
 
Easy multi-tenant-kubernetes-rwx-storage-with-cloud-provider-openstack-and-ma...
Easy multi-tenant-kubernetes-rwx-storage-with-cloud-provider-openstack-and-ma...Easy multi-tenant-kubernetes-rwx-storage-with-cloud-provider-openstack-and-ma...
Easy multi-tenant-kubernetes-rwx-storage-with-cloud-provider-openstack-and-ma...TomBarron
 
Red Hat Storage Day Boston - OpenStack + Ceph Storage
Red Hat Storage Day Boston - OpenStack + Ceph StorageRed Hat Storage Day Boston - OpenStack + Ceph Storage
Red Hat Storage Day Boston - OpenStack + Ceph StorageRed_Hat_Storage
 
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph Ceph Day Shanghai - Hyper Converged PLCloud with Ceph
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph Ceph Community
 
OpenShift_Installation_Deep_Dive_Robert_Bohne.pdf
OpenShift_Installation_Deep_Dive_Robert_Bohne.pdfOpenShift_Installation_Deep_Dive_Robert_Bohne.pdf
OpenShift_Installation_Deep_Dive_Robert_Bohne.pdfssuser9e06a61
 
Ceph, the future of Storage - Sage Weil
Ceph, the future of Storage - Sage WeilCeph, the future of Storage - Sage Weil
Ceph, the future of Storage - Sage WeilCeph Community
 
OpenShift 4 installation
OpenShift 4 installationOpenShift 4 installation
OpenShift 4 installationRobert Bohne
 
Ceph Overview for Distributed Computing Denver Meetup
Ceph Overview for Distributed Computing Denver MeetupCeph Overview for Distributed Computing Denver Meetup
Ceph Overview for Distributed Computing Denver Meetupktdreyer
 
NFVO based on ManageIQ - OPNFV Summit 2016 Demo
NFVO based on ManageIQ - OPNFV Summit 2016 DemoNFVO based on ManageIQ - OPNFV Summit 2016 Demo
NFVO based on ManageIQ - OPNFV Summit 2016 DemoManageIQ
 
Red hat ceph storage customer presentation
Red hat ceph storage customer presentationRed hat ceph storage customer presentation
Red hat ceph storage customer presentationRodrigo Missiaggia
 
Comparison of control plane deployment architectures in the scope of hypercon...
Comparison of control plane deployment architectures in the scope of hypercon...Comparison of control plane deployment architectures in the scope of hypercon...
Comparison of control plane deployment architectures in the scope of hypercon...Miroslav Halas
 

Similar to OpenNebulaConf 2014 - Using Ceph to provide scalable storage for OpenNebula - John Spray (20)

Ceph Day London 2014 - The current state of CephFS development
Ceph Day London 2014 - The current state of CephFS development Ceph Day London 2014 - The current state of CephFS development
Ceph Day London 2014 - The current state of CephFS development
 
The Future of Cloud Software Defined Storage with Ceph: Andrew Hatfield, Red Hat
The Future of Cloud Software Defined Storage with Ceph: Andrew Hatfield, Red HatThe Future of Cloud Software Defined Storage with Ceph: Andrew Hatfield, Red Hat
The Future of Cloud Software Defined Storage with Ceph: Andrew Hatfield, Red Hat
 
OSDC 2015: John Spray | The Ceph Storage System
OSDC 2015: John Spray | The Ceph Storage SystemOSDC 2015: John Spray | The Ceph Storage System
OSDC 2015: John Spray | The Ceph Storage System
 
CEPH DAY BERLIN - WHAT'S NEW IN CEPH
CEPH DAY BERLIN - WHAT'S NEW IN CEPH CEPH DAY BERLIN - WHAT'S NEW IN CEPH
CEPH DAY BERLIN - WHAT'S NEW IN CEPH
 
DevConf 2017 - Realistic Container Platform Simulations
DevConf 2017 - Realistic Container Platform SimulationsDevConf 2017 - Realistic Container Platform Simulations
DevConf 2017 - Realistic Container Platform Simulations
 
MongoDB World 2018: Partner Talk - Red Hat: Deploying to Enterprise Kubernetes
MongoDB World 2018: Partner Talk - Red Hat: Deploying to Enterprise KubernetesMongoDB World 2018: Partner Talk - Red Hat: Deploying to Enterprise Kubernetes
MongoDB World 2018: Partner Talk - Red Hat: Deploying to Enterprise Kubernetes
 
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019Storage 101: Rook and Ceph - Open Infrastructure Denver 2019
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019
 
Red Hat Storage 2014 - Product(s) Overview
Red Hat Storage 2014 - Product(s) OverviewRed Hat Storage 2014 - Product(s) Overview
Red Hat Storage 2014 - Product(s) Overview
 
Easy multi-tenant-kubernetes-rwx-storage-with-cloud-provider-openstack-and-ma...
Easy multi-tenant-kubernetes-rwx-storage-with-cloud-provider-openstack-and-ma...Easy multi-tenant-kubernetes-rwx-storage-with-cloud-provider-openstack-and-ma...
Easy multi-tenant-kubernetes-rwx-storage-with-cloud-provider-openstack-and-ma...
 
Red Hat Storage Day Boston - OpenStack + Ceph Storage
Red Hat Storage Day Boston - OpenStack + Ceph StorageRed Hat Storage Day Boston - OpenStack + Ceph Storage
Red Hat Storage Day Boston - OpenStack + Ceph Storage
 
Red Hat Storage Roadmap
Red Hat Storage RoadmapRed Hat Storage Roadmap
Red Hat Storage Roadmap
 
Red Hat Storage Roadmap
Red Hat Storage RoadmapRed Hat Storage Roadmap
Red Hat Storage Roadmap
 
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph Ceph Day Shanghai - Hyper Converged PLCloud with Ceph
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph
 
OpenShift_Installation_Deep_Dive_Robert_Bohne.pdf
OpenShift_Installation_Deep_Dive_Robert_Bohne.pdfOpenShift_Installation_Deep_Dive_Robert_Bohne.pdf
OpenShift_Installation_Deep_Dive_Robert_Bohne.pdf
 
Ceph, the future of Storage - Sage Weil
Ceph, the future of Storage - Sage WeilCeph, the future of Storage - Sage Weil
Ceph, the future of Storage - Sage Weil
 
OpenShift 4 installation
OpenShift 4 installationOpenShift 4 installation
OpenShift 4 installation
 
Ceph Overview for Distributed Computing Denver Meetup
Ceph Overview for Distributed Computing Denver MeetupCeph Overview for Distributed Computing Denver Meetup
Ceph Overview for Distributed Computing Denver Meetup
 
NFVO based on ManageIQ - OPNFV Summit 2016 Demo
NFVO based on ManageIQ - OPNFV Summit 2016 DemoNFVO based on ManageIQ - OPNFV Summit 2016 Demo
NFVO based on ManageIQ - OPNFV Summit 2016 Demo
 
Red hat ceph storage customer presentation
Red hat ceph storage customer presentationRed hat ceph storage customer presentation
Red hat ceph storage customer presentation
 
Comparison of control plane deployment architectures in the scope of hypercon...
Comparison of control plane deployment architectures in the scope of hypercon...Comparison of control plane deployment architectures in the scope of hypercon...
Comparison of control plane deployment architectures in the scope of hypercon...
 

More from OpenNebula Project

OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...
OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...
OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...OpenNebula Project
 
OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...
OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...
OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...OpenNebula Project
 
OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...
OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...
OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...OpenNebula Project
 
OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...
OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...
OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...OpenNebula Project
 
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...OpenNebula Project
 
OpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAF
OpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAFOpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAF
OpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAFOpenNebula Project
 
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...OpenNebula Project
 
OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...
OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...
OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...OpenNebula Project
 
Replacing vCloud with OpenNebula
Replacing vCloud with OpenNebulaReplacing vCloud with OpenNebula
Replacing vCloud with OpenNebulaOpenNebula Project
 
NTS: What We Do With OpenNebula - and Why We Do It
NTS: What We Do With OpenNebula - and Why We Do ItNTS: What We Do With OpenNebula - and Why We Do It
NTS: What We Do With OpenNebula - and Why We Do ItOpenNebula Project
 
OpenNebula from the Perspective of an ISP
OpenNebula from the Perspective of an ISPOpenNebula from the Perspective of an ISP
OpenNebula from the Perspective of an ISPOpenNebula Project
 
NTS CAPTAIN / OpenNebula at Julius Blum GmbH
NTS CAPTAIN / OpenNebula at Julius Blum GmbHNTS CAPTAIN / OpenNebula at Julius Blum GmbH
NTS CAPTAIN / OpenNebula at Julius Blum GmbHOpenNebula Project
 
Performant and Resilient Storage: The Open Source & Linux Way
Performant and Resilient Storage: The Open Source & Linux WayPerformant and Resilient Storage: The Open Source & Linux Way
Performant and Resilient Storage: The Open Source & Linux WayOpenNebula Project
 
NetApp Hybrid Cloud with OpenNebula
NetApp Hybrid Cloud with OpenNebulaNetApp Hybrid Cloud with OpenNebula
NetApp Hybrid Cloud with OpenNebulaOpenNebula Project
 
NSX with OpenNebula - upcoming 5.10
NSX with OpenNebula - upcoming 5.10NSX with OpenNebula - upcoming 5.10
NSX with OpenNebula - upcoming 5.10OpenNebula Project
 
Security for Private Cloud Environments
Security for Private Cloud EnvironmentsSecurity for Private Cloud Environments
Security for Private Cloud EnvironmentsOpenNebula Project
 
CheckPoint R80.30 Installation on OpenNebula
CheckPoint R80.30 Installation on OpenNebulaCheckPoint R80.30 Installation on OpenNebula
CheckPoint R80.30 Installation on OpenNebulaOpenNebula Project
 
Cloud Disaggregation with OpenNebula
Cloud Disaggregation with OpenNebulaCloud Disaggregation with OpenNebula
Cloud Disaggregation with OpenNebulaOpenNebula Project
 

More from OpenNebula Project (20)

OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...
OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...
OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...
 
OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...
OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...
OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...
 
OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...
OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...
OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...
 
OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...
OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...
OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...
 
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...
 
OpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAF
OpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAFOpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAF
OpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAF
 
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...
 
OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...
OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...
OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...
 
Replacing vCloud with OpenNebula
Replacing vCloud with OpenNebulaReplacing vCloud with OpenNebula
Replacing vCloud with OpenNebula
 
NTS: What We Do With OpenNebula - and Why We Do It
NTS: What We Do With OpenNebula - and Why We Do ItNTS: What We Do With OpenNebula - and Why We Do It
NTS: What We Do With OpenNebula - and Why We Do It
 
OpenNebula from the Perspective of an ISP
OpenNebula from the Perspective of an ISPOpenNebula from the Perspective of an ISP
OpenNebula from the Perspective of an ISP
 
NTS CAPTAIN / OpenNebula at Julius Blum GmbH
NTS CAPTAIN / OpenNebula at Julius Blum GmbHNTS CAPTAIN / OpenNebula at Julius Blum GmbH
NTS CAPTAIN / OpenNebula at Julius Blum GmbH
 
Performant and Resilient Storage: The Open Source & Linux Way
Performant and Resilient Storage: The Open Source & Linux WayPerformant and Resilient Storage: The Open Source & Linux Way
Performant and Resilient Storage: The Open Source & Linux Way
 
NetApp Hybrid Cloud with OpenNebula
NetApp Hybrid Cloud with OpenNebulaNetApp Hybrid Cloud with OpenNebula
NetApp Hybrid Cloud with OpenNebula
 
NSX with OpenNebula - upcoming 5.10
NSX with OpenNebula - upcoming 5.10NSX with OpenNebula - upcoming 5.10
NSX with OpenNebula - upcoming 5.10
 
Security for Private Cloud Environments
Security for Private Cloud EnvironmentsSecurity for Private Cloud Environments
Security for Private Cloud Environments
 
CheckPoint R80.30 Installation on OpenNebula
CheckPoint R80.30 Installation on OpenNebulaCheckPoint R80.30 Installation on OpenNebula
CheckPoint R80.30 Installation on OpenNebula
 
DE-CIX: CloudConnectivity
DE-CIX: CloudConnectivityDE-CIX: CloudConnectivity
DE-CIX: CloudConnectivity
 
DDC Demo
DDC DemoDDC Demo
DDC Demo
 
Cloud Disaggregation with OpenNebula
Cloud Disaggregation with OpenNebulaCloud Disaggregation with OpenNebula
Cloud Disaggregation with OpenNebula
 

Recently uploaded

2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...Martijn de Jong
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024The Digital Insurer
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsMaria Levchenko
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slidevu2urc
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdfhans926745
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024The Digital Insurer
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationRadu Cotescu
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptxHampshireHUG
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
Developing An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of BrazilDeveloping An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of BrazilV3cube
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Scriptwesley chun
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘RTylerCroy
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Miguel Araújo
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 

Recently uploaded (20)

2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
Developing An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of BrazilDeveloping An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of Brazil
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 

OpenNebulaConf 2014 - Using Ceph to provide scalable storage for OpenNebula - John Spray

  • 1. Using Ceph with OpenNebula John Spray john.spray@redhat.com
  • 2. Agenda ● What is it? ● Architecture ● Integration with OpenNebula ● What's new? 2 OpenNebulaConf 2014 Berlin
  • 3. What is Ceph? 3 OpenNebulaConf 2014 Berlin
  • 4. What is Ceph? ● Highly available resilient data store ● Free Software (LGPL) ● 10 years since inception ● Flexible object, block and filesystem interfaces ● Especially popular in private clouds as VM image service, and S3-compatible object storage service. 4 OpenNebulaConf 2014 Berlin
  • 5. Interfaces to storage S3 & Swift Multi-tenant Snapshots Clones 5 OpenNebulaConf 2014 Berlin FILE SYSTEM CephFS BLOCK STORAGE RBD OBJECT STORAGE RGW Keystone Geo-Replication Native API OpenStack Linux Kernel iSCSI POSIX Linux Kernel CIFS/NFS HDFS Distributed Metadata
  • 6. Ceph Architecture 6 OpenNebulaConf 2014 Berlin
  • 7. Architectural Components APP HOST/VM CLIENT RGW A web services gateway for object storage, compatible with S3 and Swift RBD A reliable, fully-distributed block device with cloud platform integration LIBRADOS A library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP) RADOS A software-based, reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes and lightweight monitors 7 OpenNebulaConf 2014 Berlin CEPHFS A distributed file system with POSIX semantics and scale-out metadata management
  • 8. Object Storage Daemons OSD FS DISK OSD FS DISK OSD FS DISK OSD FS DISK btrfs xfs ext4 8 OpenNebulaConf 2014 Berlin M M M
  • 9. RADOS Components OSDs:  10s to 10000s in a cluster  One per disk (or one per SSD, RAID group…)  Serve stored objects to clients  Intelligently peer for replication & recovery Monitors:  Maintain cluster membership and state  Provide consensus for distributed decision-making  Small, odd number  These do not serve stored objects to clients M 9 OpenNebulaConf 2014 Berlin
  • 10. Rados Cluster APPLICATION M M M M M RADOS CLUSTER 10 OpenNebulaConf 2014 Berlin
  • 11. Where do objects live? ?? APPLICATION 11 OpenNebulaConf 2014 Berlin M M M OBJECT
  • 12. A Metadata Server? 1 APPLICATION 12 OpenNebulaConf 2014 Berlin M M M 2
  • 13. Calculated placement APPLICATION F 13 OpenNebulaConf 2014 Berlin M M M A-G H-N O-T U-Z
  • 14. Even better: CRUSH 14 OpenNebulaConf 2014 Berlin 01 11 11 01 RADOS CLUSTER OBJECT 10 01 01 10 10 01 11 01 10 01 01 10 10 01 01 10 10 10 01 01
  • 15. CRUSH is a quick calculation 15 OpenNebulaConf 2014 Berlin 01 11 11 01 RADOS CLUSTER OBJECT 10 01 01 10 10 01 01 10 10 10 01 01
  • 16. CRUSH: Dynamic data placement CRUSH:  Pseudo-random placement algorithm  Fast calculation, no lookup  Repeatable, deterministic  Statistically uniform distribution  Stable mapping  Limited data migration on change  Rule-based configuration  Infrastructure topology aware  Adjustable replication  Weighting 16 OpenNebulaConf 2014 Berlin
  • 17. Architectural Components APP HOST/VM CLIENT RGW A web services gateway for object storage, compatible with S3 and Swift RBD A reliable, fully-distributed block device with cloud platform integration LIBRADOS A library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP) RADOS A software-based, reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes and lightweight monitors 17 OpenNebulaConf 2014 Berlin CEPHFS A distributed file system with POSIX semantics and scale-out metadata management
  • 18. RBD: Virtual disks in Ceph 18 OpenNebulaConf 2014 Berlin 18 RADOS BLOCK DEVICE:  Storage of disk images in RADOS  Decouples VMs from host  Images are striped across the cluster (pool)  Snapshots  Copy-on-write clones  Support in:  Mainline Linux Kernel (2.6.39+)  Qemu/KVM  OpenStack, CloudStack, OpenNebula, Proxmox
  • 19. Storing virtual disks VM HYPERVISOR LIBRBD M M RADOS CLUSTER 19 19 OpenNebulaConf 2014 Berlin
  • 20. Using Ceph with OpenNebula 20 OpenNebulaConf 2014 Berlin
  • 21. Storage in OpenNebula deployments OpenNebula Cloud Architecture Survey 2014 (http://c12g.com/resources/survey/) 21 OpenNebulaConf 2014 Berlin
  • 22. RBD and libvirt/qemu ● librbd (user space) client integration with libvirt/qemu ● Support for live migration, thin clones ● Get recent versions! ● Directly supported in OpenNebula since 4.0 with the Ceph Datastore (wraps `rbd` CLI) More info online: http://ceph.com/docs/master/rbd/libvirt/ http://docs.opennebula.org/4.10/administration/storage/ceph_ds.html 22 OpenNebulaConf 2014 Berlin
  • 23. Other hypervisors ● OpenNebula is flexible, so can we also use Ceph with non-libvirt/qemu hypervisors? ● Kernel RBD: can present RBD images in /dev/ on hypervisor host for software unaware of librbd ● Docker: can exploit RBD volumes with a local filesystem for use as data volumes – maybe CephFS in future...? ● For unsupported hypervisors, can adapt to Ceph using e.g. iSCSI for RBD, or NFS for CephFS (but test re-exports carefully!) 23 OpenNebulaConf 2014 Berlin
  • 24. Choosing hardware Testing/benchmarking/expert advice is needed, but there are general guidelines: ● Prefer many cheap nodes to few expensive nodes (10 is better than 3) ● Include small but fast SSDs for OSD journals ● Don't simply buy biggest drives: consider IOPs/capacity ratio ● Provision network and IO capacity sufficient for your workload plus recovery bandwidth from node failure. 24 OpenNebulaConf 2014 Berlin
  • 25. What's new? 25 OpenNebulaConf 2014 Berlin
  • 26. Ceph releases ● Ceph 0.80 firefly (May 2014) – Cache tiering & erasure coding – Key/val OSD backends – OSD primary affinity ● Ceph 0.87 giant (October 2014) – RBD cache enabled by default – Performance improvements – Locally recoverable erasure codes ● Ceph x.xx hammer (2015) 26 OpenNebulaConf 2014 Berlin
  • 27. Additional components ● Ceph FS – scale-out POSIX filesystem service, currently being stabilized ● Calamari – monitoring dashboard for Ceph ● ceph-deploy – easy SSH-based deployment tool ● Puppet, Chef modules 27 OpenNebulaConf 2014 Berlin
  • 28. Get involved Evaluate the latest releases: http://ceph.com/resources/downloads/ Mailing list, IRC: http://ceph.com/resources/mailing-list-irc/ Bugs: http://tracker.ceph.com/projects/ceph/issues Online developer summits: https://wiki.ceph.com/Planning/CDS 28 OpenNebulaConf 2014 Berlin
  • 31. Spare slides 31 OpenNebulaConf 2014 Berlin
  • 33. Ceph FS 33 OpenNebulaConf 2014 Berlin
  • 34. CephFS architecture ● Dynamically balanced scale-out metadata ● Inherit flexibility/scalability of RADOS for data ● POSIX compatibility ● Beyond POSIX: Subtree snapshots, recursive statistics Weil, Sage A., et al. "Ceph: A scalable, high-performance distributed file system." Proceedings of the 7th symposium on Operating systems design and implementation. USENIX Association, 2006. http://ceph.com/papers/weil-ceph-osdi06.pdf 34 OpenNebulaConf 2014 Berlin
  • 35. Components ● Client: kernel, fuse, libcephfs ● Server: MDS daemon ● Storage: RADOS cluster (mons & OSDs) 35 OpenNebulaConf 2014 Berlin
  • 36. Components Linux host ceph.ko metadata 01 data 10 M M M Ceph server daemons 36 OpenNebulaConf 2014 Berlin
  • 37. From application to disk Application ceph-fuse libcephfs Kernel client ceph-mds Client network protocol RADOS Disk 37 OpenNebulaConf 2014 Berlin
Scaling out FS metadata
● Options for distributing metadata?
– by static subvolume
– by path hash
– by dynamic subtree
● Consider performance, ease of implementation
38 OpenNebulaConf 2014 Berlin
Dynamic subtree placement
39 OpenNebulaConf 2014 Berlin
Dynamic subtree placement
● Locality: get the dentries in a dir from one MDS
● Support read-heavy workloads by replicating non-authoritative copies (cached with capabilities just like clients do)
● In practice, placement works at the directory-fragment level in order to handle large dirs
40 OpenNebulaConf 2014 Berlin
Data placement
● Stripe file contents across RADOS objects
● Get the full RADOS cluster bandwidth from clients
● Fairly tolerant of object losses: reads return zeros
● Control striping with layout vxattrs
● Layouts also select between multiple data pools
● Deletion is a special case: client deletions mark files 'stray'; the RADOS delete ops are sent by the MDS
41 OpenNebulaConf 2014 Berlin
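The layout vxattrs mentioned above are exposed through ordinary extended-attribute tools. A minimal sketch follows — the mount point, file/directory names and pool name are placeholders, the command names follow the docs of that era, and an extra data pool must be registered with the filesystem before a layout can reference it:

getfattr -n ceph.file.layout /mnt/ceph/some/file                    # show stripe_unit, stripe_count, object_size, pool
setfattr -n ceph.file.layout.stripe_count -v 2 /mnt/ceph/newfile    # only allowed while the file is still empty
ceph mds add_data_pool fs_data_ssd                                  # register the extra pool with the filesystem
setfattr -n ceph.dir.layout.pool -v fs_data_ssd /mnt/ceph/fastdir   # new files under this dir go to that pool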
Clients
● Two implementations:
● ceph-fuse/libcephfs
● kclient
● Interplay with the VFS page cache; efficiency is harder with fuse (extraneous stats etc.)
● Client performance matters for single-client workloads
● A slow client can hold up others if it's hogging metadata locks: include clients in troubleshooting
42 OpenNebulaConf 2014 Berlin
Journaling and caching in MDS
● Metadata ops initially journaled to a striped journal "file" in the metadata pool.
– I/O latency on metadata ops is the sum of network latency and journal commit latency.
– Metadata remains pinned in the in-memory cache until expired from the journal.
43 OpenNebulaConf 2014 Berlin
Journaling and caching in MDS
● In some workloads we expect almost all metadata to stay in cache; in others it's more of a stream.
● Control cache size with mds_cache_size
● Cache eviction relies on client cooperation
● MDS journal replay not only recovers data but also warms up the cache. Use standby replay to keep that cache warm.
44 OpenNebulaConf 2014 Berlin
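A sketch of a ceph.conf fragment showing the two knobs discussed above; the option names follow the giant-era documentation, and the daemon name and values are placeholders to adapt:

[mds]
  mds cache size = 300000        # inodes kept pinned in the in-memory cache (default 100000)
[mds.b]
  mds standby replay = true      # continuously replay the active MDS journal to keep this standby's cache warm
  mds standby for rank = 0       # follow rank 0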
Lookup by inode
● Sometimes we need the inode → path mapping:
● Hard links
● NFS handles
● Costly to store this: mitigate by piggybacking paths (backtraces) onto data objects
● Con: storing metadata to data pool
● Con: extra IOs to set backtraces
● Pro: disaster recovery from data pool
● Future: improve backtrace writing latency
45 OpenNebulaConf 2014 Berlin
CephFS in practice
ceph-deploy mds create myserver
ceph osd pool create fs_data 64
ceph osd pool create fs_metadata 64
ceph fs new myfs fs_metadata fs_data
mount -t ceph x.x.x.x:6789:/ /mnt/ceph
46 OpenNebulaConf 2014 Berlin
Managing CephFS clients
● New in giant: see hostnames of connected clients
● Client eviction is sometimes important:
● Skip the wait during the reconnect phase on MDS restart
● Allow others to access files locked by a crashed client
● Use OpTracker to inspect ongoing operations
47 OpenNebulaConf 2014 Berlin
CephFS tips
● Choose MDS servers with lots of RAM
● Investigate clients when diagnosing stuck/slow access
● Use recent Ceph and recent kernel
● Use a conservative configuration:
● Single active MDS, plus one standby
● Dedicated MDS server
● Kernel client
● No snapshots, no inline data
48 OpenNebulaConf 2014 Berlin
Towards a production-ready CephFS
● Focus on resilience:
1. Don't corrupt things
2. Stay up
3. Handle the corner cases
4. When something is wrong, tell me
5. Provide the tools to diagnose and fix problems
● Achieve this first within a conservative single-MDS configuration
49 OpenNebulaConf 2014 Berlin
Giant → Hammer timeframe
● Initial online fsck (a.k.a. forward scrub)
● Online diagnostics (`session ls`, MDS health alerts)
● Journal resilience & tools (cephfs-journal-tool)
● flock in the FUSE client
● Initial soft quota support
● General resilience: full OSDs, full metadata cache
50 OpenNebulaConf 2014 Berlin
FSCK and repair
● Recover from damage:
● Loss of data objects (which files are damaged?)
● Loss of metadata objects (what subtree is damaged?)
● Continuous verification:
● Are recursive stats consistent?
● Does metadata on disk match cache?
● Does file size metadata match data on disk?
● Repair:
● Automatic where possible
● Manual tools to enable support
51 OpenNebulaConf 2014 Berlin
Client management
● Current eviction is not 100% safe against rogue clients
● Update to the client protocol to wait for OSD blacklist
● Client metadata
● Initially domain name, mount point
● Extension to other identifiers?
52 OpenNebulaConf 2014 Berlin
Online diagnostics
● Bugs exposed so far relate to one client failing to release resources for another client: "my filesystem is frozen". Introduce new health messages:
● "client xyz is failing to respond to cache pressure"
● "client xyz is ignoring capability release messages"
● Add client metadata so messages can report domain names instead of IP addresses.
● Behaviour in the face of dead clients was opaque. Introduce `session ls`:
● Which clients does the MDS think are stale?
● Identify clients to evict with `session evict`
53 OpenNebulaConf 2014 Berlin
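In practice these diagnostics are reached through the cluster health output and the MDS admin socket. A minimal sketch, where the daemon name mds.a and the session id 4305 are hypothetical and the exact argument form may differ in your release:

ceph health detail                       # surfaces the "failing to respond to cache pressure" style warnings
ceph daemon mds.a session ls             # list client sessions with their metadata (hostname, mount point)
ceph daemon mds.a session evict 4305     # forcibly evict a stale or misbehaving client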
Journal resilience
● Bad journal prevents MDS recovery: "my MDS crashes on startup":
● Data loss
● Software bugs
● Updated on-disk format to make recovery from damage easier
● New tool: cephfs-journal-tool
● Inspect the journal, search/filter
● Chop out unwanted entries/regions
54 OpenNebulaConf 2014 Berlin
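A few representative cephfs-journal-tool invocations, offered as a sketch of the workflow rather than a full recovery guide; run them against a stopped MDS and check the tool's help output for the exact sub-commands in your release:

cephfs-journal-tool journal inspect             # report whether the journal is readable or damaged
cephfs-journal-tool journal export backup.bin   # take a backup before attempting any surgery
cephfs-journal-tool event get list              # list journaled events to locate the bad region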
Handling resource limits
● Write a test, see what breaks!
● Full MDS cache:
● Require some free memory to make progress
● Require client cooperation to unpin cache objects
● Anticipate tuning required for cache behaviour: what should we evict?
● Full OSD cluster:
● Require explicit handling to abort with -ENOSPC
● MDS → RADOS flow control:
● Contention between I/O to flush cache and I/O to journal
55 OpenNebulaConf 2014 Berlin
Test, QA, bug fixes
● The answer to "Is CephFS production ready?"
● teuthology test framework:
● Long-running/thrashing tests
● Third-party FS correctness tests
● Python functional tests
● We dogfood CephFS internally
● Various kclient fixes discovered
● Motivation for new health monitoring metrics
● Third-party testing is extremely valuable
56 OpenNebulaConf 2014 Berlin
What's next?
● You tell us!
● Recent survey highlighted:
● FSCK hardening
● Multi-MDS hardening
● Quota support
● Which use cases will the community test with?
● General purpose
● Backup
● Hadoop
57 OpenNebulaConf 2014 Berlin
Reporting bugs
● Does the most recent development release or kernel fix your issue?
● What is your configuration? MDS config, Ceph version, client version, kclient or fuse
● What is your workload?
● Can you reproduce with debug logging enabled?
http://ceph.com/resources/mailing-list-irc/
http://tracker.ceph.com/projects/ceph/issues
http://ceph.com/docs/master/rados/troubleshooting/log-and-debug/
58 OpenNebulaConf 2014 Berlin
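For the debug-logging point, a sketch of a ceph.conf fragment that turns up MDS logging before reproducing an issue; the levels shown are illustrative, and the linked log-and-debug page documents the full set of subsystems:

[mds]
  debug mds = 20        # verbose MDS subsystem logging
  debug ms = 1          # messenger-level logging for network traffic
# for ceph-fuse clients, "debug client = 20" under [client] serves the same purpose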
Future
● Ceph Developer Summit:
● When: 8 October
● Where: online
● Post-Hammer work:
● Recent survey highlighted multi-MDS, quota support
● Testing with clustered Samba/NFS?
59 OpenNebulaConf 2014 Berlin