Practical CephFS with NFS today
using OpenStack Manila
Tom Barron
Ceph Day Berlin
12 November 2018
About me
● PTL for upstream manila project since Rocky
● Work downstream for Red Hat on OpenStack Storage
○ Coordinated with the Ceph, Ganesha, OpenStack HA, and OpenStack TripleO teams on the CephFS
with NFS solution presented here
● Feel free to follow up:
○ Email: tbarron@redhat.com
○ irc: tbarron
■ #openstack-manila
Agenda
● The target: CephFS with NFS using OpenStack Manila
○ What?
○ Why?
● Development/Experimental vs Production Deployment
○ Isolated data center networks
○ Fault Tolerance
● Deploying with TripleO
● Post-deployment tasks for the Cloud administrator
● Cloud user workflow to use manila shares with NFS protocol backed by CephFS
● Next ...
Target: Use TripleO to deploy a CephFS back end for OpenStack Manila NFS shares
Use work from these open source projects (Manila, Ceph, NFS-Ganesha, TripleO) to enable real-world,
production-quality deployment of OpenStack Manila with CephFS back-end storage exposed as NFS shares
CephFS: Use Ceph for File Shares as a Service
in OpenStack
The Ceph stack (object, block, and file interfaces on a common object store):
RADOS: a software-based, reliable, autonomic, distributed object store comprised of
self-healing, self-managing, intelligent storage nodes (OSDs) and lightweight monitors (MONs)
LIBRADOS: a library allowing apps direct access to RADOS (C, C++, Java, Python, Ruby, PHP)
RGW (object): S3 and Swift compatible object storage with object versioning, multi-site federation, and replication
RBD (block): a virtual block device with snapshots, copy-on-write clones, and multi-site replication
CephFS (file): a distributed POSIX file system with coherent caches and snapshots on any directory
Manila: OpenStack tenant-aware abstraction
layer for file share management
OpenStack Storage Model
■ Tenant (keystone project, user) aware self-service storage
■ Abstracts the actual physical storage back ends
■ Multiple protocols available: NFS, CIFS, CephFS, GlusterFS, HDFS, ...
■ Standard REST API for managing the life-cycle of, and access control for, shares
TripleO: deploy real cloud infrastructure!
development/prototyping environment vs production cloud
● Ansible playbook to build devstack using vagrant with libvirt/KVM, deploying
manila with CephFS native or CephFS with NFS back end
○ https://github.com/tombarron/vagrant-libvirt-devstack
● But what if you want:
○ Fault tolerant service processes
○ Survive failure of any one hardware node
○ Scale out compute
○ Scale out storage
● TripleO!
○ There are other deployment tools out there of course
○ We can help you if you want to make those work for manila with CephFS via NFS
Ganesha: expose CephFS shares via NFS.
NFS Ganesha
■ User-space NFSv2, NFSv3, NFSv4, NFSv4.1 and pNFS server
■ Modular architecture: pluggable FSALs allow for various storage back ends (e.g. vfs,
xfs, glusterfs, cephfs)
■ Dynamic export/unexport/update with DBUS
■ Can manage huge metadata caches
■ Simple access for other user-space services (e.g. KRB5, NIS, LDAP)
■ Open source
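● For orientation only: a hand-written Ganesha export for a CephFS path might look roughly like the
block below. The export id, paths, cephx user, and client IP are made-up placeholders; in the
deployment described in this deck, manila adds and removes export blocks like this dynamically over
DBUS, so nobody writes them by hand.
EXPORT {
    Export_Id = 100;                           # arbitrary unique id (placeholder)
    Path = "/volumes/_nogroup/<share-uuid>";   # CephFS path backing the share (placeholder)
    Pseudo = "/volumes/_nogroup/<share-uuid>";
    Protocols = 4;
    Transports = TCP;
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {
        Name = CEPH;                           # CephFS FSAL
        User_Id = "manila";                    # cephx user (placeholder)
        Secret_Access_Key = "<cephx key>";     # placeholder
    }
    CLIENT {
        Clients = 172.17.5.160;                # NFS client allowed to mount (placeholder)
        Access_Type = RW;
    }
}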
https://www.openstack.org/user-survey/survey-2017
Most OpenStack users are already running a Ceph cluster
Open source storage
solution
CephFS metadata
scalability is ideally suited
to cloud environments.
There’s a perfectly good native CephFS
solution for Manila
CephFS native driver deployment with TripleO
[Deployment diagram] Networks: Public OpenStack Service API (External) network; Storage (Ceph public) network; External Provider Network and Storage Provider Network, each behind a router. Controller Nodes: Manila API service, Manila Share service, Ceph MON, Ceph MDS. Storage Nodes: Ceph OSDs. Compute Nodes: Tenant A and Tenant B VMs with 2 NICs.
Why NFS Ganesha?
■ If you want NFS backed by an open source storage technology
■ If you want to leverage an existing Ceph deployment while keeping
your NFS shares
■ Ubiquitous, well-understood client software
■ Familiar IP based access control
■ Allows clear separation between trusted cloud administrators and
untrusted guests
CephFS NFS driver deployment with TripleO
[Deployment diagram] Networks: Public OpenStack Service API (External) network; Storage (Ceph public) network; External Provider Network and Storage NFS Network, each behind a router. Controller Nodes: Manila API service, Manila Share service, Ceph MON, Ceph MDS. Storage Nodes: Ceph OSDs. Compute Nodes: Tenant A and Tenant B VMs with 2 NICs.
NFS Ganesha deployment challenges
■ Ceph MON, MDS, MGR, OSDs manage their own HA
■ But not NFS-Ganesha
■ Only one NFS-Ganesha instance can run at a time!
■ But we cannot have a SPOF in the data path
■ So we need to run NFS-Ganesha under the control of
pacemaker-corosync
○ Expose exports via a VIP
○ Migrate the service to a new node as required
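● TripleO wires all of this up during the overcloud deploy, so none of the commands below are typed
by hand. Purely as a sketch of the idea, assuming a systemd-managed nfs-ganesha service and a spare
address on the StorageNFS network for the VIP, an active-passive equivalent built with pcs could
look roughly like:
# pcs resource create ganesha-vip ocf:heartbeat:IPaddr2 ip=172.16.4.5 cidr_netmask=24   # VIP (example address)
# pcs resource create nfs-ganesha systemd:nfs-ganesha                                   # the ganesha service itself
# pcs constraint colocation add nfs-ganesha with ganesha-vip INFINITY                   # keep service and VIP together
# pcs constraint order start ganesha-vip then nfs-ganesha                               # bring the VIP up first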
TripleO Deployment
Standard TripleO topology
■ Undercloud node running its own specialized OpenStack
■ Three Controller Nodes
■ M x Compute Nodes
■ N x Storage Nodes
Queens +
■ No undercloud deployment changes
■ New containers
■ Custom controller role
■ Custom isolated network
■ New environment files
■ Uses Ceph Luminous
■ NFS-Ganesha 2.5 (latest)
Install ceph-ansible on undercloud
■ Deploy undercloud as normal
■ Install ceph-ansible on undercloud
■ Required step for all TripleO ceph deployments
[stack@undercloud ~]$ sudo yum install -y ceph-ansible
...
Total download size: 196 k
Installed size: 996 k
Downloading packages:
ceph-ansible-3.1.9-1.el7.noarch.rpm
...
Installed:
ceph-ansible.noarch 0:3.1.9-1.el7
Complete!
Containerized Deployment
■ Queens deploys all storage service daemons in docker containers
■ So on the undercloud, be sure to include the relevant ceph and
manila environment files when preparing containers for the
overcloud.
■ Ganesha will run in its own container, but it uses the standard ceph
container image
[stack@undercloud ~]$ openstack overcloud container image prepare 
...
-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml 
-e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/manila.yaml 
...
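● As an optional, purely illustrative sanity check, grep the generated container parameters file to
confirm that ceph and manila images were picked up:
[stack@undercloud ~]$ grep -iE 'ceph|manila' /home/stack/containers-default-parameters.yaml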
Generating the Custom Roles File
● The ControllerStorageNFS custom role is used to set up the
isolated StorageNFS network.
● This role is similar to the default Controller.yaml role file with the
addition of the StorageNFS network and the CephNfs (aka
nfs-ganesha) service.
[stack@undercloud ~]$ openstack overcloud roles generate --roles-path
/usr/share/openstack-tripleo-heat-templates/roles -o /home/stack/roles_data.yaml
ControllerStorageNfs Compute CephStorage
Custom network_data_ganesha file
● By default overcloud deploy uses network definitions from
network_data.yaml in /usr/share/openstack-tripleo-heat-templates
● We instead use network_data_ganesha.yaml* which adds definitions for the
StorageNFS network
- name: StorageNFS
  enabled: true
  vip: true
  name_lower: storage_nfs
  vlan: 70
  ip_subnet: '172.16.4.0/24'
  allocation_pools: [{'start': '172.16.4.4', 'end': '172.16.4.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:7000::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:7000::10', 'end': 'fd00:fd00:fd00:7000:ffff:ffff:ffff:fffe'}]
*If you have customized network_data.yaml, make corresponding adjustments to the ganesha
equivalent.
Deploy the overcloud
● Use custom network (1) and roles (2) files.
● Use environment files for ceph-ansible (3), ceph-mds (4), and
manila with nfs-ganesha (5)
[stack@undercloud ~]$ openstack overcloud deploy 
--templates /usr/share/openstack-tripleo-heat-templates 
-n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml  (1)
-r /home/stack/roles_data.yaml  (2)
-e /home/stack/containers-default-parameters.yaml 
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml 
-e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml 
-e /home/stack/network-environment.yaml 
-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml  (3)
-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml  (4)
-e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml (5)
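● Once the overcloud deploy completes, a couple of quick sanity checks (commands shown for
illustration; output omitted):
(overcloud) [stack@undercloud-0 ~]$ manila service-list    # manila-scheduler and manila-share should be enabled and up
(overcloud) [stack@undercloud-0 ~]$ manila pool-list       # the CephFS back end should show up as a pool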
CephFS NFS driver deployment with TripleO
[Deployment diagram repeated from the earlier CephFS NFS driver deployment slide.]
Post-Deployment tasks for the
Cloud Administrator
Create Overcloud Neutron Storage-NFS
network
● And map it to the isolated Storage NFS network in the data center
● NFS clients (e.g. Nova VMs) will connect to ganesha over this
software defined network
(overcloud) [stack@undercloud-0 ~]$ openstack network create StorageNFS --share --provider-network-type vlan
--provider-physical-network datacentre --provider-segment 70
Create StorageNFS: by convention we use ‘StorageNFS’ as the name of the neutron SDN that maps to the
data centre isolated StorageNFS network
--share: This provider network can be shared by multiple tenants
--provider-network-type vlan / --provider-segment 70: We define the isolated data center StorageNFS network as VLAN 70
--provider-physical-network datacentre: Name of the physical network on which the isolated StorageNFS
network was defined
Create Overcloud Neutron Storage-NFS
sub-network
● Create subnet on the StorageNFS neutron network
● Give it a DHCP server, using an allocation pool compatible with
that defined for the undercloud’s StorageNFS allocation pool
● No gateway/default route is needed since this network will only be
used for NFS mounts
● NFS clients (e.g. Nova VMs) will connect to ganesha over this
software defined network
(overcloud) [stack@undercloud-0 ~]$ openstack subnet create --allocation-pool start=172.16.4.150,end=172.16.4.250
--dhcp --network StorageNFS --subnet-range 172.16.4.0/24 --gateway none StorageNFSSubnet
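● To double-check the result (illustrative), confirm the allocation pool and the absence of a gateway:
(overcloud) [stack@undercloud-0 ~]$ openstack subnet show StorageNFSSubnet -c allocation_pools -c gateway_ip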
Create default share type
● Manila needs a default-share type, which is used when creating
shares if an explicit share-type argument is not supplied
● TripleO deploys manila configured to expect a default share type
named ‘default’ but it does not itself create the share type
● So the cloud administrator needs to create it:
(overcloud) [stack@undercloud-0 ~]$ manila type-create default False
● The manila type-create command requires that the DHSS
field be set
● “DHSS” means “driver handles share servers.” In our case the
share server is implemented by ganesha and its life cycle is
controlled by TripleO rather than by the manila driver, so we set it
to False
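● To verify (illustrative), list the share types and confirm that ‘default’ has
driver_handles_share_servers set to False:
(overcloud) [stack@undercloud-0 ~]$ manila type-list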
Cloud-user workflow
Create security group
● One-time task that isolates a tenant’s VMs from others attaching to the
Storage NFS network
● Content of this security group is the same as the original content of the
‘default’ security group but the latter has often been changed so we make a
new security group just to be safe:
(user) [stack@undercloud-0 ~]$ openstack security group create no-ingress
● As suggested by its name, this group allows egress packets but no ingress
packets from unestablished connections
● The cloud administrator can instead do this for each tenant by specifying ‘--project’, to make the
cloud-user workflow simpler (a sketch follows).
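● A minimal sketch of that administrator-side variant, with a hypothetical project name, plus a
check that the new group really has no ingress rules:
(overcloud) [stack@undercloud-0 ~]$ openstack security group create no-ingress --project demo-project-a
(overcloud) [stack@undercloud-0 ~]$ openstack security group rule list no-ingress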
Create port on StorageNFS network
● Per VM (nova instance) task
● Use the no-ingress security group just created when creating the port
(user) [stack@undercloud-0 ~]$ openstack port create nfs-port0 --network StorageNFS
--security-group no-ingress
● Neutron will assign an IP address to this new port from the allocation-range
set up for the StorageNFS network, set up DHCP to serve that address when
an interface is bound to this port, and ensure that an interface can only use
this address if it is bound to this port.
Add the port to Nova VM
(user) [stack@undercloud-0 ~]$ openstack server add port instance0 nfs-port0
(user) [stack@undercloud-0 ~]$ openstack server list -f yaml
- Flavor: m1.micro
  ID: 0b878c11-e791-434b-ab63-274ecfc957
  Image: manila-test
  Name: demo-instance
  Networks: demo-network=172.20.0.4, 10.0.0.53; StorageNFS=172.17.5.160
  Status: ACTIVE
● Before the port was added, server instance0 had a private address 172.20.0.4 and a
floating public IP 10.0.0.53.
● After the port is added, the server also has assigned to it an address on the
StorageNFS network, 172.17.5.160
Activate the StorageNFS address on the
Nova VM
● Assigning a port to an OpenStack compute instance reserves an IP for it and
sets up DHCP for it but does not perform any actions on the compute instance
itself to activate a new interface with the IP.
● Procedures to do this last part are compute-instance image specific.
● If the second interface does not already exist in the image it must be created
and configured.
● Then the interface must be toggled or networking restarted (or the VM
rebooted) so that the interface actually comes up with the StorageNFS IP (one example sketch follows).
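● One concrete sketch, assuming a CentOS/RHEL style guest image where the new interface shows up
as eth1 (device names and tooling vary by image):
# cat > /etc/sysconfig/network-scripts/ifcfg-eth1 <<EOF
DEVICE=eth1
BOOTPROTO=dhcp
ONBOOT=yes
EOF
# ifup eth1
# ip addr show eth1    # should now report the StorageNFS address, e.g. 172.17.5.160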
Allow access to manila shares from the
Nova VM at its StorageNFS address
● share-01 is created using the default share-type, NFS protocol, and is 2
gigabytes
● Access is allowed to share-01 from IP address 172.17.5.160
● This is the StorageNFS network IP assigned to compute instance instance0
two slides back
(user) [stack@undercloud-0 ~]$ manila create --name share-01 nfs 2
(user) [stack@undercloud-0 ~]$ manila access-allow share-01 ip 172.17.5.160
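● Optionally verify that the access rule went active (illustrative; output omitted):
(user) [stack@undercloud-0 ~]$ manila access-list share-01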
List share’s export locations
● 172.17.5.13 is the IP at which the ganesha server is listening for mount requests
● The string to the right of the colon is the export path of the share with uuid
e840b4ae-6a04-49ee-9d6e-67d4999fbc01
(user) [stack@undercloud-0 ~]$ manila share-export-location-list share-01
172.17.5.13:/volumes/_nogroup/e840b4ae-6a04-49ee-9d6e-67d4999fbc01
● The manila share-export-location-list command reveals export locations to be used in mount
commands to mount a share.
Mount share on compute instance
● Login to the compute instance:
(user) [stack@undercloud-0 ~]$ openstack server ssh demo-instance0 --login root
# hostname
demo-instance-o
● Mount the share using the export location information from the previous slide:
# mount.nfs -v 172.17.5.13:/volumes/_nogroup/e840b4ae-6a04-49ee-9d6e-67d4999fbc01 /mnt
mount.nfs: timeout set for Wed Sep 19 09:14:46 2018
mount.nfs: trying text-based options 'vers=4.2,addr=172.17.5.13,clientaddr=172.17.5.160'
172.17.5.13:/volumes/_nogroup/e840b4ae-6a04-49ee-9d6e-67d4999fbc01 on /mnt type nfs
# mount | grep mnt
172.17.5.13:/volumes/_nogroup/e840b4ae-6a04-49ee-9d6e-67d4999fbc01 on /mnt type nfs4
(rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=
600,retrans=2,sec=sys,clientaddr=172.17.5.160,local_lock=none,addr=172.17.5.13)
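● A quick read/write smoke test on the mounted share (illustrative):
# df -h /mnt                                        # should report a 2.0G filesystem served from 172.17.5.13
# echo hello > /mnt/hello.txt && cat /mnt/hello.txt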
Futures
Current CephFS NFS Driver
Pros
● Security: isolates user VMs from ceph public network and its daemons.
● Familiar NFS semantics, access control, and end user operations.
● Large base of clients who can now use Ceph storage for file shares without doing
anything different.
○ NFS supported out of the box, doesn’t need any specific drivers
● Path separation in the backend storage and network policy (enforced by neutron
security rules on a dedicated StorageNFS network) provide multi-tenancy support.
Current CephFS NFS Driver
Cons
● Ganesha is a “man in the middle” in the data path and a potential performance
bottleneck.
● HA using the controller node pacemaker cluster limits our ability to scale, as does the
(current) inability to run ganesha active-active.
● We’d like to be able to spawn ganesha services on demand, per tenant, as required
rather than statically launching them at cloud deployment time.
HA and Scale-Out
● High Availability
○ Kubernetes-managed Ganesha container
■ Container life-cycle and resurrection are handled by Kubernetes, not by Ceph.
■ ceph-mgr creates shares and launches containers through Kubernetes
● Scale-Out (avoid Single Point of Failure)
○ ceph-mgr creates multiple Ganesha containers for a share.
○ (Potentially) a Kubernetes load balancer allows for automatic multiplexing between
Ganesha containers via a single service IP (see the illustrative sketch below).
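● None of the following exists in the deployment described above. Purely to illustrate the
direction, a minimal Kubernetes Deployment and Service for a per-tenant ganesha instance might look
roughly like the sketch below; the name, labels, image, and replica count are hypothetical, and in
the envisioned design ceph-mgr would generate such objects rather than an operator:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ganesha-tenant-a             # hypothetical per-tenant ganesha deployment
spec:
  replicas: 2                        # multiple ganesha containers for scale-out
  selector:
    matchLabels:
      app: ganesha-tenant-a
  template:
    metadata:
      labels:
        app: ganesha-tenant-a
    spec:
      containers:
      - name: ganesha
        image: ceph/daemon:latest    # placeholder; the deck notes ganesha reuses the standard ceph container image
        ports:
        - containerPort: 2049        # NFS
---
apiVersion: v1
kind: Service
metadata:
  name: ganesha-tenant-a
spec:
  selector:
    app: ganesha-tenant-a
  ports:
  - port: 2049
    targetPort: 2049
  type: LoadBalancer                 # single service IP multiplexing across the ganesha containers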
Ganesha per Tenant running under k8s control
[Deployment diagram] Networks: Public OpenStack Service API (External) network; Ceph public network; External Provider Network behind routers. Controller Nodes: Manila API service, Manila Share service, Ceph MON, Ceph MDS, Ceph MGR, kubernetes. Ceph OSDs. Compute Nodes: Tenant A and Tenant B VMs.
Q&A
Thank you!