Ceph in a security critical
OpenStack cloud
Danny Al-Gaaf (Deutsche Telekom)
Deutsche OpenStack Tage 2015 - Frankfurt
● Ceph and OpenStack
● Secure NFV cloud at DT
● Attack surface
● Proactive countermeasures
○ Setup
○ Vulnerability prevention
○ Breach mitigation
● Reactive countermeasures
○ 0-days, CVEs
○ Security support SLA and lifecycle
● Conclusions
Overview
Ceph and OpenStack
Ceph Architecture
Ceph and OpenStack
Secure NFV Cloud @ DT
NFV Cloud @ Deutsche Telekom
● Datacenter design
○ BDCs
■ few but classic DCs
■ high SLAs for infrastructure and services
■ for private/customer data and services
○ FDCs
■ small but many
■ near to the customer
■ lower SLAs, can fail at any time
■ NFVs:
● spread over many FDCs
● failures are handled by services and not the infrastructure
● Run telco core services on OpenStack/KVM/Ceph
Fundamentals - The CIA Triad
● CONFIDENTIALITY: Protecting sensitive data against unauthorized access
● INTEGRITY: Maintaining consistency, accuracy, and trustworthiness of data
● AVAILABILITY: Protecting systems against disruption of services and loss of access to information
High Security Requirements
● Multiple security placement zones (PZ)
○ e.g. EHD, DMZ, MZ, SEC, Management
○ TelcoWG “Security Segregation” use case
● Separation between PZs required for:
○ compute
○ networks
○ storage
● Protect against many attack vectors
● Enforced and reviewed by security department
Solutions for storage separation
● Physical separation
○ Large number of clusters (>100)
○ Large hardware demand (compute and storage)
○ High maintenance effort
○ Less flexibility
● RADOS pool separation
○ Much more flexible
○ Efficient use of hardware
● Question:
○ Can we get the same security as physical separation?
Separation through Placement Zones
● One RADOS pool for each security zone
○ Limit access using Ceph capabilities
● OpenStack AZs as PZs
○ Cinder
■ Configure one backend/volume type per pool (with own key)
■ Need to map between AZs and volume types via policy
○ Glance
■ Lacks separation between control and compute/storage layer
■ Separate read-only vs management endpoints
○ Manila
■ Currently not planned to use in production with CephFS
■ May use RBD via NFS
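The pool-per-zone mapping can be sketched with standard Ceph commands; the pool name, key name, PG count, and the cinder.conf fragment are illustrative assumptions, not the production layout:

```shell
# One RADOS pool per placement zone, with a CephX key confined to that pool
# (names and PG count are hypothetical)
ceph osd pool create volumes-dmz 128
ceph auth get-or-create client.cinder-dmz \
    mon 'allow r' \
    osd 'allow rwx pool=volumes-dmz'

# Matching Cinder backend (one per pool/AZ), exposed as its own volume type:
#   [rbd-dmz]
#   volume_driver = cinder.volume.drivers.rbd.RBDDriver
#   rbd_pool      = volumes-dmz
#   rbd_user      = cinder-dmz
```

A key scoped with `osd 'allow rwx pool=volumes-dmz'` cannot touch objects in any other zone's pool, which is the basis for comparing pool separation with physical separation.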
Attack Surface
RadosGW attack surface
● S3/Swift
○ Network access to the gateway only
○ No direct access for consumers to other Ceph daemons
● Single API attack surface
RBD librbd attack surface
● Protection through the hypervisor block layer
○ transparent for the guest
○ No network access or CephX keys needed at guest level
● Issue:
○ the hypervisor is software and therefore not 100% secure…
■ breakouts are no mythical creature
■ e.g., Virtunoid, SYSENTER, Venom
RBD.ko attack surface
● RBD kernel module
○ e.g. used with Xen or on bare metal
○ Requires direct access to the Ceph public network
○ Requires CephX keys/secrets at guest level
● Issue:
○ no separation between cluster and guest
CephFS attack surface
● Pure CephFS tears a big hole in the hypervisor separation
○ Requires direct access to the Ceph public network
○ Requires CephX keys/secrets at guest level
○ Complete file system visible to the guest
■ Separation currently only via POSIX user/group
Host attack surface
● If KVM is compromised, the attacker ...
○ has access to neighbor VMs
○ has access to local Ceph keys
○ has access to Ceph public network and Ceph daemons
● Firewalls, deep packet inspection (DPI), ...
○ partly impractical due to the protocols used
○ implications for performance and cost
● Bottom line: Ceph daemons must resist attack
○ C/C++ is harder to secure than e.g. Python
○ Homogeneous: if one daemon is vulnerable, all in the cluster are!
Network attack surface
● Sessions are authenticated
○ Attacker cannot impersonate clients or servers
○ Attacker cannot mount man-in-the-middle attacks
● Client/cluster sessions are not encrypted
○ Sniffer can recover any data read or written
Denial of Service
● Attack against:
○ Ceph Cluster:
■ Submit many / large / expensive IOs
■ Open many connections
■ Use flaws to crash Ceph daemons
■ Identify non-obvious but expensive features of client/OSD interface
○ Ceph Cluster hosts:
■ Crash entire cluster hosts, e.g. through flaws in the kernel network layer
○ VMs on same host:
■ Saturate the network bandwidth of the host
Proactive Countermeasures
Deployment and Setup
● Network
○ Always use separated cluster and public networks
○ Always separate your control nodes from other networks
○ Don’t expose cluster to the open internet
○ Encrypt inter-datacenter traffic
● Avoid hyper-converged infrastructure
○ Don’t mix
■ compute and storage resources, isolate them!
■ OpenStack and Ceph control nodes
○ Scale resources independently
○ Risk mitigation if daemons are compromised or DoS’d
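The network split above maps to standard ceph.conf options; the subnets here are placeholders:

```ini
[global]
# client-facing traffic (monitors, clients) - hypothetical subnet
public network = 192.168.10.0/24
# OSD replication/heartbeat traffic, kept off any client-reachable segment
cluster network = 192.168.20.0/24
```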
Deploying RadosGW
● Big and easy target through the HTTP(S) protocol
● Small appliance per tenant with
○ Separate network
○ SSL-terminating proxy forwarding requests to radosgw
○ WAF (mod_security) to filter requests
○ Placed in a secure/managed zone
○ Different type of webserver than RadosGW uses
● Don’t share buckets/users between tenants
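Such a per-tenant appliance can be sketched with Apache, mod_ssl, mod_security, and mod_proxy; hostnames, certificate paths, and the single rule-engine directive are placeholder assumptions, not a complete WAF policy:

```apache
<VirtualHost *:443>
    ServerName s3.tenant-a.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/tenant-a.crt
    SSLCertificateKeyFile /etc/ssl/private/tenant-a.key

    # mod_security inspects and filters requests before they reach radosgw
    SecRuleEngine On

    # forward only the filtered traffic to the internal radosgw endpoint
    ProxyPass        / http://radosgw.internal.example:7480/
    ProxyPassReverse / http://radosgw.internal.example:7480/
</VirtualHost>
```

Running Apache in front of radosgw's own embedded webserver also covers the "different type of webserver" point above.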
Ceph security: CephX
● Monitors are trusted key servers
○ Store copies of all entity keys
○ Each key has an associated “capability”
■ Plaintext description of what the key user is allowed to do
● What you get
○ Mutual authentication of client + server
○ Extensible authorization w/ “capabilities”
○ Protection from man-in-the-middle and TCP session hijacking
● What you don’t get
○ Secrecy (encryption over the wire)
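Capabilities are plain strings attached to each key; a minimal sketch, with entity and pool names assumed:

```shell
# read-only consumer key: may only read objects in the images pool
ceph auth get-or-create client.glance-ro \
    mon 'allow r' \
    osd 'allow r pool=images'

# restricted operator key via the built-in capability profiles
ceph auth get-or-create client.ops mon 'allow profile readonly'
```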
Ceph security: CephX take-aways
● Monitors must be secured
○ Protect the key database
● Key management is important
○ Separate key for each Cinder backend/AZ
○ Restrict capabilities associated with each key
○ Limit administrators’ power
■ use ‘allow profile admin’ and ‘allow profile readonly’
■ restrict role-definer or ‘allow *’ keys
○ Careful key distribution (Ceph and OpenStack nodes)
● To do:
○ Thorough CephX code review by security experts
○ Audit OpenStack deployment tools’ key distribution
○ Improve security documentation
Preventing Breaches - Defects
● Static Code Analysis (SCA)
○ Buffer overflows and other code flaws
○ Regular Coverity scans
■ 996 fixed, 284 dismissed, 420 outstanding
■ defect density 0.97
○ cppcheck
○ LLVM: clang/scan-build
● Runtime analysis
○ valgrind memcheck
● Plan
○ Reduce backlog of low-priority issues (e.g., issues in test code)
○ Automated reporting of new SCA issues on pull requests
○ Improve code reviewers’ awareness of security defects
Preventing Breaches - Hardening
● Pen-testing
○ human attempt to subvert security, generally guided by code review
● Fuzz testing
○ computer attempt to subvert or crash, by feeding garbage input
● Hardened build
○ -fPIE -fPIC
○ -fstack-protector-strong
○ -Wl,-z,relro,-z,now
○ -D_FORTIFY_SOURCE=2 -O2 (?)
○ Check for performance regressions!
Mitigating Breaches
● Run non-root daemons (WIP: PR #4456)
○ Prevent escalating privileges to get root
○ Run as ‘ceph’ user and group
○ Pending for Infernalis
● MAC
○ SELinux / AppArmor
○ Profiles for daemons and tools planned for Infernalis
● Run (some) daemons in VMs or containers
○ Monitor and RGW - less resource intensive
○ MDS - maybe
○ OSD - prefers direct access to hardware
● Separate MON admin network
Encryption: Data at Rest
● Encryption at application vs cluster level
● Some deployment tools support dm-crypt
○ Encrypt raw block device (OSD and journal)
○ Allow disks to be safely discarded if key remains secret
● Key management is still very simple
○ Encryption key stored on disk via LUKS
○ LUKS key stored in /etc/ceph/keys
● Plan
○ Petera, a new key escrow project from Red Hat
■ https://github.com/npmccallum/petera
○ Alternative: simple key management via monitor (CDS blueprint)
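With ceph-deploy, dm-crypt OSDs look roughly like this; the host and device are placeholders, and the key directory follows the slide:

```shell
# prepare an OSD whose data and journal live on LUKS-encrypted block devices
ceph-deploy osd prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/keys \
    osd-host1:/dev/sdb
```

As long as the keys in /etc/ceph/keys never leave the host, a failed disk can be returned or discarded without wiping it.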
Encryption: On Wire
● Goal
○ Protect data from someone listening in on the network
○ Protect administrator sessions configuring client keys
● Plan
○ Generate per-session keys based on existing tickets
○ Selectively encrypt monitor administrator sessions
○ Alternative: make use of IPsec (performance and management implications)
Denial of Service attacks
● Limit load from clients
○ Use qemu IO throttling features - set a safe upper bound
● To do:
○ Limit max open sockets per OSD
○ Limit max open sockets per source IP
■ handle in Ceph or in the network layer?
○ Throttle operations per session or per client (vs just globally)?
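A safe upper bound can be set through Cinder QoS specs, which Nova/libvirt enforce via qemu's IO throttling; the limits below are arbitrary example numbers:

```shell
# front-end (hypervisor-enforced) throttle: 500 IOPS and 50 MB/s per volume
cinder qos-create safe-upper-bound consumer=front-end \
    read_iops_sec=500 write_iops_sec=500 total_bytes_sec=52428800

# attach the spec to the volume type used for tenant volumes
cinder qos-associate <qos-spec-id> <volume-type-id>
```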
CephFS
● No standard virtualization layer (unlike block)
○ Filesystem passthrough (9p/virtfs) to host
○ Proxy through a gateway (NFS?)
○ Allow direct access from tenant VMs (least secure)
● Granularity of access control is harder
○ No simple mapping to RADOS objects
● Work in progress
○ root_squash (Infernalis blueprint)
○ Restrict mount to subtree
○ Restrict mount to user
Reactive Countermeasures
Reactive Security Process
● Community
○ Single point of contact: security@ceph.com
■ Core development team
■ Red Hat, SUSE, Canonical security teams
○ Security-related fixes are prioritized and backported
○ Releases may be accelerated on an ad hoc basis
○ Security advisories go to ceph-announce@ceph.com
● Red Hat Ceph
○ Strict SLA on issues raised with the Red Hat security team
○ Escalation process to Ceph developers
○ Red Hat security team drives the CVE process
○ Hot fixes distributed via Red Hat’s CDN
Detecting and Preventing Breaches
● Brute force attacks
○ Good logging of any failed authentication
○ Monitoring is easy via existing tools, e.g. Nagios
● To do:
○ Automatically blacklist IPs/clients after n failed attempts at the Ceph level (Jewel blueprint)
● Unauthorized injection of keys
○ Monitor the audit log
■ trigger alerts for auth events -> monitoring
○ Periodic comparison with a signed backup of the auth database?
Conclusions
Summary
● Reactive processes are in place
○ security@ceph.com, CVEs, downstream product updates, etc.
● Proactive measures in progress
○ Code quality improves (SCA, etc.)
○ Unprivileged daemons
○ MAC (SELinux, AppArmor)
○ Encryption
● Progress on defining security best practices
○ Document best practices for securing Ceph
● Security is an ongoing process
Get involved !
● Ceph
○ https://ceph.com/community/contribute/
○ ceph-devel@vger.kernel.org
○ IRC (OFTC): #ceph, #ceph-devel
○ Ceph Developer Summit
● OpenStack
○ Telco Working Group
■ #openstack-nfv
○ Cinder, Glance, Manila, ...
Danny Al-Gaaf
Senior Cloud Technologist
danny.al-gaaf@telekom.de
IRC: dalgaaf
linkedin.com/in/dalgaaf
THANK YOU!

SaaStr Workshop Wednesday w/ Kyle Norton, Owner.comSaaStr Workshop Wednesday w/ Kyle Norton, Owner.com
SaaStr Workshop Wednesday w/ Kyle Norton, Owner.comsaastr
 
GESCO SE Press and Analyst Conference on Financial Results 2024
GESCO SE Press and Analyst Conference on Financial Results 2024GESCO SE Press and Analyst Conference on Financial Results 2024
GESCO SE Press and Analyst Conference on Financial Results 2024GESCO SE
 
cse-csp batch4 review-1.1.pptx cyber security
cse-csp batch4 review-1.1.pptx cyber securitycse-csp batch4 review-1.1.pptx cyber security
cse-csp batch4 review-1.1.pptx cyber securitysandeepnani2260
 
RACHEL-ANN M. TENIBRO PRODUCT RESEARCH PRESENTATION
RACHEL-ANN M. TENIBRO PRODUCT RESEARCH PRESENTATIONRACHEL-ANN M. TENIBRO PRODUCT RESEARCH PRESENTATION
RACHEL-ANN M. TENIBRO PRODUCT RESEARCH PRESENTATIONRachelAnnTenibroAmaz
 
Quality by design.. ppt for RA (1ST SEM
Quality by design.. ppt for  RA (1ST SEMQuality by design.. ppt for  RA (1ST SEM
Quality by design.. ppt for RA (1ST SEMCharmi13
 
General Elections Final Press Noteas per M
General Elections Final Press Noteas per MGeneral Elections Final Press Noteas per M
General Elections Final Press Noteas per MVidyaAdsule1
 
INDIAN GCP GUIDELINE. for Regulatory affair 1st sem CRR
INDIAN GCP GUIDELINE. for Regulatory  affair 1st sem CRRINDIAN GCP GUIDELINE. for Regulatory  affair 1st sem CRR
INDIAN GCP GUIDELINE. for Regulatory affair 1st sem CRRsarwankumar4524
 
Dutch Power - 26 maart 2024 - Henk Kras - Circular Plastics
Dutch Power - 26 maart 2024 - Henk Kras - Circular PlasticsDutch Power - 26 maart 2024 - Henk Kras - Circular Plastics
Dutch Power - 26 maart 2024 - Henk Kras - Circular PlasticsDutch Power
 

Recently uploaded (17)

The Ten Facts About People With Autism Presentation
The Ten Facts About People With Autism PresentationThe Ten Facts About People With Autism Presentation
The Ten Facts About People With Autism Presentation
 
THE COUNTRY WHO SOLVED THE WORLD_HOW CHINA LAUNCHED THE CIVILIZATION REVOLUTI...
THE COUNTRY WHO SOLVED THE WORLD_HOW CHINA LAUNCHED THE CIVILIZATION REVOLUTI...THE COUNTRY WHO SOLVED THE WORLD_HOW CHINA LAUNCHED THE CIVILIZATION REVOLUTI...
THE COUNTRY WHO SOLVED THE WORLD_HOW CHINA LAUNCHED THE CIVILIZATION REVOLUTI...
 
Application of GIS in Landslide Disaster Response.pptx
Application of GIS in Landslide Disaster Response.pptxApplication of GIS in Landslide Disaster Response.pptx
Application of GIS in Landslide Disaster Response.pptx
 
A Guide to Choosing the Ideal Air Cooler
A Guide to Choosing the Ideal Air CoolerA Guide to Choosing the Ideal Air Cooler
A Guide to Choosing the Ideal Air Cooler
 
Early Modern Spain. All about this period
Early Modern Spain. All about this periodEarly Modern Spain. All about this period
Early Modern Spain. All about this period
 
proposal kumeneger edited.docx A kumeeger
proposal kumeneger edited.docx A kumeegerproposal kumeneger edited.docx A kumeeger
proposal kumeneger edited.docx A kumeeger
 
Internship Presentation | PPT | CSE | SE
Internship Presentation | PPT | CSE | SEInternship Presentation | PPT | CSE | SE
Internship Presentation | PPT | CSE | SE
 
Engaging Eid Ul Fitr Presentation for Kindergartners.pptx
Engaging Eid Ul Fitr Presentation for Kindergartners.pptxEngaging Eid Ul Fitr Presentation for Kindergartners.pptx
Engaging Eid Ul Fitr Presentation for Kindergartners.pptx
 
Chizaram's Women Tech Makers Deck. .pptx
Chizaram's Women Tech Makers Deck.  .pptxChizaram's Women Tech Makers Deck.  .pptx
Chizaram's Women Tech Makers Deck. .pptx
 
SaaStr Workshop Wednesday w/ Kyle Norton, Owner.com
SaaStr Workshop Wednesday w/ Kyle Norton, Owner.comSaaStr Workshop Wednesday w/ Kyle Norton, Owner.com
SaaStr Workshop Wednesday w/ Kyle Norton, Owner.com
 
GESCO SE Press and Analyst Conference on Financial Results 2024
GESCO SE Press and Analyst Conference on Financial Results 2024GESCO SE Press and Analyst Conference on Financial Results 2024
GESCO SE Press and Analyst Conference on Financial Results 2024
 
cse-csp batch4 review-1.1.pptx cyber security
cse-csp batch4 review-1.1.pptx cyber securitycse-csp batch4 review-1.1.pptx cyber security
cse-csp batch4 review-1.1.pptx cyber security
 
RACHEL-ANN M. TENIBRO PRODUCT RESEARCH PRESENTATION
RACHEL-ANN M. TENIBRO PRODUCT RESEARCH PRESENTATIONRACHEL-ANN M. TENIBRO PRODUCT RESEARCH PRESENTATION
RACHEL-ANN M. TENIBRO PRODUCT RESEARCH PRESENTATION
 
Quality by design.. ppt for RA (1ST SEM
Quality by design.. ppt for  RA (1ST SEMQuality by design.. ppt for  RA (1ST SEM
Quality by design.. ppt for RA (1ST SEM
 
General Elections Final Press Noteas per M
General Elections Final Press Noteas per MGeneral Elections Final Press Noteas per M
General Elections Final Press Noteas per M
 
INDIAN GCP GUIDELINE. for Regulatory affair 1st sem CRR
INDIAN GCP GUIDELINE. for Regulatory  affair 1st sem CRRINDIAN GCP GUIDELINE. for Regulatory  affair 1st sem CRR
INDIAN GCP GUIDELINE. for Regulatory affair 1st sem CRR
 
Dutch Power - 26 maart 2024 - Henk Kras - Circular Plastics
Dutch Power - 26 maart 2024 - Henk Kras - Circular PlasticsDutch Power - 26 maart 2024 - Henk Kras - Circular Plastics
Dutch Power - 26 maart 2024 - Henk Kras - Circular Plastics
 

DOST: Ceph in a security critical OpenStack cloud

■ Need to map between AZs and volume types via policy
○ Glance
■ Lacks separation between control and compute/storage layer
■ Separate read-only vs management endpoints
○ Manila
■ Currently not planned for production use with CephFS
■ May use RBD via NFS
11
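The per-pool Cinder setup above can be sketched as a multi-backend configuration; the section, pool, and user names below are illustrative, not from the talk:

```ini
# Hypothetical cinder.conf fragment: one RBD backend per placement
# zone, each backed by its own RADOS pool and its own CephX key
[DEFAULT]
enabled_backends = rbd-mz, rbd-dmz

[rbd-mz]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes-mz
rbd_user = cinder-mz
rbd_secret_uuid = <libvirt-secret-uuid-for-cinder-mz>

[rbd-dmz]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes-dmz
rbd_user = cinder-dmz
rbd_secret_uuid = <libvirt-secret-uuid-for-cinder-dmz>
```

Each backend is then exposed as its own volume type, which policy maps to the matching AZ/PZ.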
RadosGW attack surface
● S3/Swift
○ Network access to gateway only
○ No direct access for consumer to other Ceph daemons
● Single API attack surface
13
RBD librbd attack surface
● Protection from hypervisor block layer
○ transparent for the guest
○ No network access or CephX keys needed at guest level
● Issue:
○ hypervisor is software and therefore not 100% secure…
■ breakouts are no mythical creature
■ e.g., Virtunoid, SYSENTER, Venom
14
RBD.ko attack surface
● RBD kernel module
○ e.g. used with Xen or on bare metal
○ Requires direct access to Ceph public network
○ Requires CephX keys/secret at guest level
● Issue:
○ no separation between cluster and guest
15
CephFS attack surface
● Pure CephFS tears a big hole in hypervisor separation
○ Requires direct access to Ceph public network
○ Requires CephX keys/secret at guest level
○ Complete file system visible to guest
■ Separation currently only via POSIX user/group
16
Host attack surface
● If KVM is compromised, the attacker ...
○ has access to neighbor VMs
○ has access to local Ceph keys
○ has access to Ceph public network and Ceph daemons
● Firewalls, deep packet inspection (DPI), ...
○ partly impractical due to the protocols used
○ implications for performance and cost
● Bottom line: Ceph daemons must resist attack
○ C/C++ is harder to secure than e.g. Python
○ Homogeneous: if one daemon is vulnerable, all in the cluster are!
17
Network attack surface
● Sessions are authenticated
○ Attacker cannot impersonate clients or servers
○ Attacker cannot mount man-in-the-middle attacks
● Client/cluster sessions are not encrypted
○ Sniffer can recover any data read or written
18
Denial of Service
● Attack against:
○ Ceph cluster:
■ Submit many / large / expensive IOs
■ Open many connections
■ Use flaws to crash Ceph daemons
■ Identify non-obvious but expensive features of the client/OSD interface
○ Ceph cluster hosts:
■ Crash complete cluster hosts, e.g. through flaws in the kernel network layer
○ VMs on the same host:
■ Saturate the network bandwidth of the host
19
Deployment and Setup
● Network
○ Always use separate cluster and public networks
○ Always separate your control nodes from other networks
○ Don’t expose the cluster to the open internet
○ Encrypt inter-datacenter traffic
● Avoid hyper-converged infrastructure
○ Don’t mix
■ compute and storage resources, isolate them!
■ OpenStack and Ceph control nodes
○ Scale resources independently
○ Risk mitigation if daemons are compromised or DoS’d
21
Deploying RadosGW
● Big and easy target through the HTTP(S) protocol
● Small appliance per tenant with
○ Separate network
○ SSL-terminating proxy forwarding requests to radosgw
○ WAF (mod_security) to filter requests
○ Placed in a secure/managed zone
○ A different type of webserver than RadosGW uses
● Don’t share buckets/users between tenants
22
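The per-tenant appliance could look roughly like the following nginx fragment; server names, paths, and the internal radosgw address are examples, and a WAF layer (mod_security) would sit in front of or inside this proxy:

```
# Hypothetical SSL-terminating proxy for one tenant's S3 endpoint
server {
    listen 443 ssl;
    server_name s3.tenant-a.example.com;

    ssl_certificate     /etc/ssl/tenant-a.crt;
    ssl_certificate_key /etc/ssl/tenant-a.key;

    location / {
        # forward to the internal radosgw, which is not reachable directly
        proxy_pass http://radosgw.internal:7480;
        proxy_set_header Host $host;
    }
}
```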
Ceph security: CephX
● Monitors are trusted key servers
○ Store copies of all entity keys
○ Each key has an associated “capability”
■ Plaintext description of what the key user is allowed to do
● What you get
○ Mutual authentication of client + server
○ Extensible authorization w/ “capabilities”
○ Protection from man-in-the-middle, TCP session hijacking
● What you don’t get
○ Secrecy (encryption over the wire)
23
Ceph security: CephX take-aways
● Monitors must be secured
○ Protect the key database
● Key management is important
○ Separate key for each Cinder backend/AZ
○ Restrict capabilities associated with each key
○ Limit administrators’ power
■ use ‘allow profile admin’ and ‘allow profile readonly’
■ restrict role-definer or ‘allow *’ keys
○ Careful key distribution (Ceph and OpenStack nodes)
● To do:
○ Thorough CephX code review by security experts
○ Audit OpenStack deployment tools’ key distribution
○ Improve security documentation
24
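As a sketch (client and pool names are examples), per-backend keys with pool-restricted capabilities can be created like this on a live cluster:

```
# One key per Cinder backend/AZ, limited to its own pool
ceph auth get-or-create client.cinder-mz \
    mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes-mz'

# Limited operator access via the monitor profiles mentioned above
ceph auth get-or-create client.operator mon 'allow profile readonly'
```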
Preventing Breaches - Defects
● Static Code Analysis (SCA)
○ Buffer overflows and other code flaws
○ Regular Coverity scans
■ 996 fixed, 284 dismissed, 420 outstanding
■ defect density 0.97
○ cppcheck
○ LLVM: clang/scan-build
● Runtime analysis
○ valgrind memcheck
● Plan
○ Reduce backlog of low-priority issues (e.g., issues in test code)
○ Automated reporting of new SCA issues on pull requests
○ Improve code reviewers’ awareness of security defects
25
Preventing Breaches - Hardening
● Pen-testing
○ human attempt to subvert security, generally guided by code review
● Fuzz testing
○ computer attempt to subvert or crash, by feeding garbage input
● Hardened build
○ -fPIE / -fPIC
○ -fstack-protector-strong
○ -Wl,-z,relro,-z,now
○ -D_FORTIFY_SOURCE=2 -O2 (?)
○ Check for performance regressions!
26
Mitigating Breaches
● Run non-root daemons (WIP: PR #4456)
○ Prevent escalating privileges to get root
○ Run as ‘ceph’ user and group
○ Pending for Infernalis
● MAC
○ SELinux / AppArmor
○ Profiles for daemons and tools planned for Infernalis
● Run (some) daemons in VMs or containers
○ Monitor and RGW - less resource intensive
○ MDS - maybe
○ OSD - prefers direct access to hardware
● Separate MON admin network
27
Encryption: Data at Rest
● Encryption at application vs cluster level
● Some deployment tools support dm-crypt
○ Encrypt raw block device (OSD and journal)
○ Allows disks to be safely discarded if the key remains secret
● Key management is still very simple
○ Encryption key stored on disk via LUKS
○ LUKS key stored in /etc/ceph/keys
● Plan
○ Petera, a new key escrow project from Red Hat
■ https://github.com/npmccallum/petera
○ Alternative: simple key management via monitor (CDS blueprint)
28
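Manually, the dm-crypt layer amounts to something like the following (device and key paths are placeholders; deployment tools such as ceph-disk wrap these steps, e.g. via `ceph-disk prepare --dmcrypt`):

```
# Encrypt the raw OSD device with LUKS, then build the filesystem
# on the opened mapping instead of the raw disk
cryptsetup luksFormat /dev/sdX --key-file /etc/ceph/keys/osd-0.key
cryptsetup luksOpen   /dev/sdX osd-0 --key-file /etc/ceph/keys/osd-0.key
mkfs.xfs /dev/mapper/osd-0
```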
Encryption: On Wire
● Goal
○ Protect data from someone listening in on the network
○ Protect administrator sessions configuring client keys
● Plan
○ Generate per-session keys based on existing tickets
○ Selectively encrypt monitor administrator sessions
○ Alternative: make use of IPsec (performance and management implications)
29
Denial of Service attacks
● Limit load from clients
○ Use qemu IO throttling features - set a safe upper bound
● To do:
○ Limit max open sockets per OSD
○ Limit max open sockets per source IP
■ handle in Ceph or in the network layer?
○ Throttle operations per-session or per-client (vs just globally)?
30
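With libvirt/qemu, the per-client upper bound can be expressed as an `<iotune>` block on the guest's RBD disk; the pool/volume names and limit values below are examples, not recommendations:

```xml
<!-- Cap a guest's IO against the RBD backend so one tenant
     cannot saturate the cluster -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='volumes-mz/volume-1234'/>
  <target dev='vda' bus='virtio'/>
  <iotune>
    <total_iops_sec>500</total_iops_sec>
    <total_bytes_sec>52428800</total_bytes_sec> <!-- 50 MiB/s -->
  </iotune>
</disk>
```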
CephFS
● No standard virtualization layer (unlike block)
○ Filesystem passthrough (9p/virtfs) to the host
○ Proxy through a gateway (NFS?)
○ Allow direct access from tenant VM (least secure)
● Granularity of access control is harder
○ No simple mapping to RADOS objects
● Work in progress
○ root_squash (Infernalis blueprint)
○ Restrict mount to subtree
○ Restrict mount to user
31
Reactive Security Process
● Community
○ Single point of contact: security@ceph.com
■ Core development team
■ Red Hat, SUSE, Canonical security teams
○ Security-related fixes are prioritized and backported
○ Releases may be accelerated on an ad hoc basis
○ Security advisories go to ceph-announce@ceph.com
● Red Hat Ceph
○ Strict SLA on issues raised with the Red Hat security team
○ Escalation process to Ceph developers
○ Red Hat security team drives the CVE process
○ Hot fixes distributed via Red Hat’s CDN
33
Detecting and Preventing Breaches
● Brute force attacks
○ Good logging of any failed authentication
○ Monitoring is easy via existing tools, e.g. Nagios
● To do:
○ Automatic blacklisting of IPs/clients after n failed attempts at the Ceph level (Jewel blueprint)
● Unauthorized injection of keys
○ Monitor the audit log
■ trigger alerts for auth events -> monitoring
○ Periodic comparison with a signed backup of the auth database?
34
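A minimal sketch of the n-failed-attempts idea: scan a monitor log for failed CephX authentications and report source IPs over a threshold. The log line format here is an assumption for illustration; real Ceph monitor output varies by release and log level:

```shell
# Hypothetical monitor log excerpt (format is an assumption)
cat > mon.log <<'EOF'
mon.0 cephx: verify_authorizer failed from 10.0.0.8
mon.0 cephx: verify_authorizer failed from 10.0.0.8
mon.0 cephx: verify_authorizer failed from 10.0.0.8
mon.0 cephx: verify_authorizer ok from 10.0.0.9
mon.0 cephx: verify_authorizer failed from 10.0.0.9
EOF

# IPs with 3 or more failed attempts -> candidates for blacklisting
awk '/verify_authorizer failed/ {n[$NF]++}
     END {for (ip in n) if (n[ip] >= 3) print ip}' mon.log
```

In practice this would feed an alert in the existing monitoring stack (e.g. Nagios) rather than a manual grep.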
Summary
● Reactive processes are in place
○ security@ceph.com, CVEs, downstream product updates, etc.
● Proactive measures are in progress
○ Code quality improves (SCA, etc.)
○ Unprivileged daemons
○ MAC (SELinux, AppArmor)
○ Encryption
● Progress on defining security best practices
○ Document best practices for security
● An ongoing process
36
Get involved!
● Ceph
○ https://ceph.com/community/contribute/
○ ceph-devel@vger.kernel.org
○ IRC: OFTC
■ #ceph
■ #ceph-devel
○ Ceph Developer Summit
● OpenStack
○ Telco Working Group
■ #openstack-nfv
○ Cinder, Glance, Manila, ...
37