CEPH AT WORK
IN BLOOMBERG
Object Store, RBD and OpenStack
January 19, 2016
By: Chris Jones & Chris Morgan
BLOOMBERG
30 Years in under 30 Seconds
● Subscriber-based financial data provider (the Bloomberg Terminal)
● Online, TV, print, and real-time streaming information
● Offices and customers in every major financial market and institution worldwide
BLOOMBERG
Primary product – Information
● Bloomberg Terminal
− Approximately 60,000 features/functions; for example, the ability to track oil tankers in real time via satellite feeds
− Note: exact numbers are not specified. Contact media relations for specifics and other important information.
CLOUD INFRASTRUCTURE
CLOUD INFRASTRUCTURE GROUP
Primary customers
– Developers
– Product Groups
● Many different development groups throughout our organization
● Currently about 3,000 R&D developers
● Every one of them wants and needs resources
CLOUD INFRASTRUCTURE GROUP
Resource Challenges
● Developers
− Development
− Testing
− Automation (Cattle vs. Pets)
● Organizations
− POC
− Products in production
− Automation
● Security/Networking
− Compliance
USER BASE (EXAMPLES)
Resources and Use cases
● Multiple Data Centers
− Each DC contains *many* network tiers, including a DMZ for public-facing Bloomberg assets
− There is at least one Ceph/OpenStack cluster per network tier
● Developer Community Supported
− Public-facing Bloomberg products
− Machine learning backend for smart apps
− Compliance-based resources
− Use cases continue to climb as devs need more storage and compute capacity
INFRASTRUCTURE
USED IN BLOOMBERG
● Ceph – RGW (Object Store)
● Ceph – Block/Volume
● OpenStack
─ Different flavors of compute
─ Ephemeral storage
● The Object Store is becoming one of the most popular items
● OpenStack compute with Ceph-backed block store volumes is very popular
● We introduced ephemeral compute storage
SUPER HYPER-CONVERGED STACK
On EVERY Network Tier
SUPER HYPER-CONVERGED STACK
(Original) Converged Architecture Rack Layout
● 3 Head Nodes (Controller Nodes)
− Ceph Monitor
− Ceph OSD
− OpenStack Controllers (all of them!)
− HAProxy
● 1 Bootstrap Node
− Cobbler (PXE boot)
− Repos
− Chef Server
− Rally/Tempest
● Remaining Nodes
− Nova Compute
− Ceph OSDs
− RGW – Apache
● Ubuntu
● Shared spine with Hadoop resources
(Rack diagram: sliced view of the stack, with the bootstrap node at the top and the remaining compute/Ceph OSD/RGW/Apache nodes below.)
NEW POD ARCHITECTURE
(Diagram, illustrative only and not representative: an OpenStack POD behind its TOR containing HAProxy, OS-Nova, OS-Rabbit, and OS-DB nodes, beside a Ceph POD behind its own TOR with 3 Ceph Mons and Ceph OSD nodes serving RBD only, plus dedicated bootstrap and monitoring nodes. A number of large providers have taken similar approaches.)
● Ephemeral – fast/dangerous
● Host aggregates & flavors
● Not Ceph backed
POD ARCHITECTURE (OPENSTACK/CEPH)
(Diagram, illustrative only and not representative: an OpenStack POD behind its TOR containing Ceph block clients, OS-Nova, OS-Rabbit, and OS-DB nodes, beside two Ceph PODs, each behind its own TOR with 3 Ceph Mons and Ceph OSD nodes. A number of large providers have taken similar approaches.)
● Scale and re-provision as needed
● 3 PODs per rack
EPHEMERAL VS. CEPH BLOCK STORAGE
Numbers will vary in different environments. Illustrations are simplified.
(Diagram comparing the Ceph and ephemeral storage paths.) Ephemeral is a new feature option added to address high-IOP applications.
EPHEMERAL VS. CEPH BLOCK STORAGE
Ceph – Advantages
● All data is replicated at least 3 ways across the cluster
● Ceph RBD volumes can be created, attached and detached from any hypervisor
● Very fast provisioning using COW (copy-on-write) images
● Allows easy instance re-launch in the event of hypervisor failure
● High read performance
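The COW provisioning above follows a fixed rbd sequence: snapshot a golden image, protect the snapshot, then clone it per instance. The clone shares all unmodified blocks with the parent, which is why provisioning is near-instant. A minimal sketch that just assembles that command sequence; the pool and image names are hypothetical:

```python
def rbd_cow_clone_cmds(pool, image, snap, clone):
    """Build the rbd CLI sequence for a copy-on-write clone.

    The clone shares unmodified blocks with the protected parent
    snapshot, so creation time is independent of image size.
    """
    parent = f"{pool}/{image}@{snap}"
    return [
        ["rbd", "snap", "create", parent],            # point-in-time snapshot
        ["rbd", "snap", "protect", parent],           # required before cloning
        ["rbd", "clone", parent, f"{pool}/{clone}"],  # COW child volume
    ]

for cmd in rbd_cow_clone_cmds("volumes", "ubuntu-golden", "base", "vm-0001-disk"):
    print(" ".join(cmd))
```

In an OpenStack deployment Cinder and Glance drive this flow for you; the sketch only shows why a multi-gigabyte volume can appear in moments.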
Ephemeral – Advantages
● Offers read/write speeds that can be 3-4 times faster than Ceph, with lower latency
● Can provide fairly large volumes cheaply
Ceph – Disadvantages
● All writes must be acknowledged by multiple nodes before being considered committed (the tradeoff for reliability)
● Higher latency, because Ceph is network-based rather than local
Ephemeral – Disadvantages
● Trades data integrity for speed: with RAID 0, if one drive fails, all data on that node is lost
● May be difficult to add more capacity (depends on the type of RAID)
● Running in JBOD/LVM mode without RAID, performance was not as good as Ceph's
● Less important: with RAID, your drives need to be the same size or you lose capacity
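The RAID 0 integrity tradeoff above can be made concrete: any single drive failure destroys the array, so the annual loss probability grows quickly with drive count. A back-of-envelope sketch; the 3% annual failure rate is an assumed illustrative figure, not a measured one:

```python
def raid0_loss_probability(n_drives, annual_failure_rate):
    """P(at least one of n independent drives fails within a year).

    Under RAID 0 any single drive failure destroys the whole array,
    so this is also the chance of losing all ephemeral data on the node.
    """
    return 1 - (1 - annual_failure_rate) ** n_drives

AFR = 0.03  # assumed 3% annual failure rate per drive (illustrative only)
for n in (1, 6, 12):
    print(f"{n:2d}-drive RAID 0: {raid0_loss_probability(n, AFR):.1%} loss risk/year")
```

At 12 drives per node this lands around a 30% chance per year, which is why ephemeral storage is reserved for workloads that can be re-created.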
EPHEMERAL VS. CEPH BLOCK STORAGE
                                  Ephemeral      Ceph
Block write bandwidth (MB/s)       1,094.02    642.15
Block read bandwidth (MB/s)        1,826.43    639.47
Character read bandwidth (MB/s)        4.93      4.31
Character write bandwidth (MB/s)       0.83      0.75
Block write latency (ms)               9.502    37.096
Block read latency (ms)                8.121     4.941
Character read latency (ms)            2.395     3.322
Character write latency (ms)          11.052    13.587
Note: Ephemeral in JBOD/LVM mode is not as fast as Ceph. Numbers can also increase with additional tuning and different devices.
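The headline ratios can be pulled straight out of the table above (numbers copied from the table; this is in the same ballpark as the 3-4x claim on the earlier slide, and note Ceph actually wins on block read latency):

```python
# Bandwidth numbers (MB/s) copied from the table above.
ephemeral = {"block write": 1094.02, "block read": 1826.43}
ceph = {"block write": 642.15, "block read": 639.47}

for op in ephemeral:
    print(f"{op}: ephemeral is {ephemeral[op] / ceph[op]:.1f}x Ceph")
```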
CHALLENGES – LESSONS LEARNED
Network
● It's all about the network.
− Changed MTU from 1500 to 9000 on certain interfaces (float interface, storage interface)
− Hardware load balancers: keep an eye on performance
● Hardware
− Moving to more commodity-driven hardware
− All-flash storage in the compute cluster (high cost; good for block and ephemeral)
Costs
● Storage costs are very high in a converged compute cluster for Object Store
Analytics
● Need to know how the cluster is being used
● Need to know if the TPS meets the SLA
● Test going directly against nodes, then layer in network components until you can verify all choke points in the data flow path
● Monitor and test always
NEW CEPH OBJECT STORE
OBJECT STORE STACK (RACK CONFIG)
Red Hat 7.1
● 1 TOR and 1 rack management node
● 3 1U nodes (Mon, RGW, Util)
● 17 2U Ceph OSD nodes
● 2x or 3x replication depending on need (3x default)
● Secondary RGW (may coexist with an OSD node)
● 10G cluster interface
● 10G public interface
● 1 IPMI interface
● OSD Nodes (high-density server nodes)
− 6TB HDD x 12, with journal partitions on SSD
− No RAID1 OS drives; instead we partitioned off a small amount of SSD1 for OS and swap, with the remainder of SSD1 used for some journals and SSD2 used for the remaining journals
− Failure domain is a node
(Rack diagram: TOR/IPMI at the top, 3 1U nodes, then the converged storage nodes, 2U each.)
OBJECT STORE STACK (ARCHITECTURE)
(Architecture diagram: leaf-spine network with dual spines and load balancers; each rack has a TOR leaf, storage nodes, and 1 Mon/RGW node per rack.)
OBJECT STORE STACK
Standard configuration (Archive Cluster)
● Minimum of 3 racks = cluster
● OS – Red Hat 7.1
● Cluster network: bonded 10G or higher, depending on the size of the cluster
● Public network: bonded 10G for RGW interfaces
● 1 Ceph Mon node per rack, except on clusters of more than 3 racks: we need to keep an odd number of Mons, so some racks may not have one. We try to keep larger clusters' racks and Mons in different power zones
● We have developed a healthy "pain" tolerance. We mainly see drive failures and some node failures.
● Minimum of 1 RGW (dedicated node) per rack (you may want more)
● Hardware load balancers to RGWs, with redundancy
● Erasure-coded pools (no cache tiers at present; testing). We also use a host profile with 8/3 (k/m)
● Near-full and full ratios are .75/.85, respectively
● Index sharding
● Federated (regions/zones)
● All server nodes, no JBOD expansions
● S3 only at present, but we do have a few requests for Swift
● Fully AUTOMATED – Chef cookbooks to configure and manage the cluster (some Ansible)
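The erasure-coding and ratio choices above set the rack's usable capacity. A back-of-envelope sketch using the rack configuration above (17 OSD nodes of 12 x 6TB drives; 8/3 EC stores 11 chunks for every 8 data chunks versus 3 full copies for replication; the .75 near-full ratio caps planned usage):

```python
def usable_tb(raw_tb, overhead, nearfull=0.75):
    """Usable logical capacity after redundancy overhead, capped at the
    near-full ratio beyond which new writes should not be planned."""
    return raw_tb / overhead * nearfull

raw = 17 * 12 * 6            # 17 OSD nodes x 12 drives x 6 TB = 1224 TB raw
ec_overhead = (8 + 3) / 8    # k=8, m=3 erasure coding -> 1.375x on-disk
print(f"raw capacity:   {raw} TB")
print(f"3x replication: {usable_tb(raw, 3):.0f} TB usable")
print(f"EC 8/3:         {usable_tb(raw, ec_overhead):.0f} TB usable")
```

Roughly doubling usable capacity per rack is the main reason the archive cluster runs erasure coding rather than 3x replication.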
AUTOMATION
All of what we do only happens because of automation.
● Company policy – Chef
● The Cloud Infrastructure Group uses Chef and Ansible; we use Ansible for orchestration and maintenance
● Bloomberg GitHub: https://github.com/bloomberg/bcpc
● Ceph-specific options
− Ceph Chef: https://github.com/ceph/ceph-chef
− Bloomberg Object Store: https://github.com/bloomberg/chef-bcs
− Ceph Deploy: https://github.com/ceph/ceph-deploy
− Ceph Ansible: https://github.com/ceph/ceph-ansible
● The bootstrap server doubles as the Chef server for each cluster
TESTING
Testing is critical. We use different strategies for the different parts of OpenStack and Ceph we test.
● OpenStack
− Tempest – we currently only use this for patches we make; we plan to use it more in our DevOps pipeline
− Rally – can't do distributed testing, but we use it to find bottlenecks in OpenStack itself
● Ceph
− RADOS Bench
− COSBench – going to try this with CBT
− CBT – the Ceph Benchmarking Tool
− Bonnie++
− FIO
● Ceph – RGW
− JMeter – need to test load at scale; it takes a cloud to test a cloud
● A lot of the time you find it's your network, load balancers, etc.
CEPH USE CASE DEMAND – GROWING!
(Word cloud: Object, Immutable, OpenStack, *Real-time, *Big Data?)
*Possible use cases if performance is enhanced
WHAT’S NEXT?
Continue to evolve our POD architecture
● OpenStack
− Work on performance improvements and track usage stats for departments
− Better monitoring
− LBaaS, Neutron
● Containers and PaaS
− We're currently evaluating PaaS software and container strategies
● Better DevOps pipelining
− GoCD and/or Jenkins improved strategies
− Continue to enhance automation and re-provisioning
− Add testing to automation
● Ceph
− New block storage cluster
− Super cluster design
− Performance improvements – testing Jewel
− RGW multi-master (multi-sync) between datacenters
− Enhanced security – encryption at rest (can already do) but with better key management
− NVMe for journals, and maybe for high-IOP block devices
− Cache tier (needs validation tests)
THANK YOU
ADDITIONAL RESOURCES
● Chris Jones: cjones303@bloomberg.net
− Github: cloudm2
● Chris Morgan: cmorgan2@bloomberg.net
− Github: mihalis68
Cookbooks:
● BCC: https://github.com/bloomberg/bcpc
− Current repo for Bloomberg’s Converged OpenStack and Ceph cluster
● BCS: https://github.com/bloomberg/chef-bcs
● Ceph-Chef: https://github.com/ceph/ceph-chef
The last two repos make up the Ceph Object Store and the full Ceph Chef cookbooks.
Red Hat Ceph Storage Acceleration Utilizing Flash Technology Red Hat Ceph Storage Acceleration Utilizing Flash Technology
Red Hat Ceph Storage Acceleration Utilizing Flash Technology
Red_Hat_Storage
 
Red Hat Storage Day Boston - Persistent Storage for Containers
Red Hat Storage Day Boston - Persistent Storage for Containers Red Hat Storage Day Boston - Persistent Storage for Containers
Red Hat Storage Day Boston - Persistent Storage for Containers
Red_Hat_Storage
 
Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...
Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...
Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...
Red_Hat_Storage
 
Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...
Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...
Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...
Red_Hat_Storage
 
Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...
Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...
Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...
Red_Hat_Storage
 
Red Hat Storage Day - When the Ceph Hits the Fan
Red Hat Storage Day -  When the Ceph Hits the FanRed Hat Storage Day -  When the Ceph Hits the Fan
Red Hat Storage Day - When the Ceph Hits the Fan
Red_Hat_Storage
 
Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...
Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...
Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...
Red_Hat_Storage
 
Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...
Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...
Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...
Red_Hat_Storage
 
Red Hat Storage Day New York - New Reference Architectures
Red Hat Storage Day New York - New Reference ArchitecturesRed Hat Storage Day New York - New Reference Architectures
Red Hat Storage Day New York - New Reference Architectures
Red_Hat_Storage
 
Red Hat Storage Day New York - Persistent Storage for Containers
Red Hat Storage Day New York - Persistent Storage for ContainersRed Hat Storage Day New York - Persistent Storage for Containers
Red Hat Storage Day New York - Persistent Storage for Containers
Red_Hat_Storage
 
Red Hat Storage Day New York -Performance Intensive Workloads with Samsung NV...
Red Hat Storage Day New York -Performance Intensive Workloads with Samsung NV...Red Hat Storage Day New York -Performance Intensive Workloads with Samsung NV...
Red Hat Storage Day New York -Performance Intensive Workloads with Samsung NV...
Red_Hat_Storage
 
Red Hat Storage Day New York - Welcome Remarks
Red Hat Storage Day New York - Welcome Remarks Red Hat Storage Day New York - Welcome Remarks
Red Hat Storage Day New York - Welcome Remarks
Red_Hat_Storage
 

More from Red_Hat_Storage (20)

Red Hat Storage Day Dallas - Storage for OpenShift Containers
Red Hat Storage Day Dallas - Storage for OpenShift Containers Red Hat Storage Day Dallas - Storage for OpenShift Containers
Red Hat Storage Day Dallas - Storage for OpenShift Containers
 
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
 
Red Hat Storage Day Dallas - Defiance of the Appliance
Red Hat Storage Day Dallas - Defiance of the Appliance Red Hat Storage Day Dallas - Defiance of the Appliance
Red Hat Storage Day Dallas - Defiance of the Appliance
 
Red Hat Storage Day Dallas - Gluster Storage in Containerized Application
Red Hat Storage Day Dallas - Gluster Storage in Containerized Application Red Hat Storage Day Dallas - Gluster Storage in Containerized Application
Red Hat Storage Day Dallas - Gluster Storage in Containerized Application
 
Red Hat Storage Day Dallas - Why Software-defined Storage Matters
Red Hat Storage Day Dallas - Why Software-defined Storage MattersRed Hat Storage Day Dallas - Why Software-defined Storage Matters
Red Hat Storage Day Dallas - Why Software-defined Storage Matters
 
Red Hat Storage Day Boston - Why Software-defined Storage Matters
Red Hat Storage Day Boston - Why Software-defined Storage MattersRed Hat Storage Day Boston - Why Software-defined Storage Matters
Red Hat Storage Day Boston - Why Software-defined Storage Matters
 
Red Hat Storage Day Boston - Supermicro Super Storage
Red Hat Storage Day Boston - Supermicro Super StorageRed Hat Storage Day Boston - Supermicro Super Storage
Red Hat Storage Day Boston - Supermicro Super Storage
 
Red Hat Storage Day Boston - OpenStack + Ceph Storage
Red Hat Storage Day Boston - OpenStack + Ceph StorageRed Hat Storage Day Boston - OpenStack + Ceph Storage
Red Hat Storage Day Boston - OpenStack + Ceph Storage
 
Red Hat Ceph Storage Acceleration Utilizing Flash Technology
Red Hat Ceph Storage Acceleration Utilizing Flash Technology Red Hat Ceph Storage Acceleration Utilizing Flash Technology
Red Hat Ceph Storage Acceleration Utilizing Flash Technology
 
Red Hat Storage Day Boston - Persistent Storage for Containers
Red Hat Storage Day Boston - Persistent Storage for Containers Red Hat Storage Day Boston - Persistent Storage for Containers
Red Hat Storage Day Boston - Persistent Storage for Containers
 
Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...
Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...
Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...
 
Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...
Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...
Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...
 
Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...
Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...
Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...
 
Red Hat Storage Day - When the Ceph Hits the Fan
Red Hat Storage Day -  When the Ceph Hits the FanRed Hat Storage Day -  When the Ceph Hits the Fan
Red Hat Storage Day - When the Ceph Hits the Fan
 
Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...
Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...
Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...
 
Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...
Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...
Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...
 
Red Hat Storage Day New York - New Reference Architectures
Red Hat Storage Day New York - New Reference ArchitecturesRed Hat Storage Day New York - New Reference Architectures
Red Hat Storage Day New York - New Reference Architectures
 
Red Hat Storage Day New York - Persistent Storage for Containers
Red Hat Storage Day New York - Persistent Storage for ContainersRed Hat Storage Day New York - Persistent Storage for Containers
Red Hat Storage Day New York - Persistent Storage for Containers
 
Red Hat Storage Day New York -Performance Intensive Workloads with Samsung NV...
Red Hat Storage Day New York -Performance Intensive Workloads with Samsung NV...Red Hat Storage Day New York -Performance Intensive Workloads with Samsung NV...
Red Hat Storage Day New York -Performance Intensive Workloads with Samsung NV...
 
Red Hat Storage Day New York - Welcome Remarks
Red Hat Storage Day New York - Welcome Remarks Red Hat Storage Day New York - Welcome Remarks
Red Hat Storage Day New York - Welcome Remarks
 

Recently uploaded

Generating a custom Ruby SDK for your web service or Rails API using Smithy
Generating a custom Ruby SDK for your web service or Rails API using SmithyGenerating a custom Ruby SDK for your web service or Rails API using Smithy
Generating a custom Ruby SDK for your web service or Rails API using Smithy
g2nightmarescribd
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
Laura Byrne
 
Elevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object CalisthenicsElevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object Calisthenics
Dorra BARTAGUIZ
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
DianaGray10
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Thierry Lestable
 
Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !
KatiaHIMEUR1
 
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
Product School
 
How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...
Product School
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Product School
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance
 
Knowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and backKnowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and back
Elena Simperl
 
Leading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
OnBoard
 
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Tobias Schneck
 
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
DanBrown980551
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
Product School
 
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMsTo Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
Paul Groth
 
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualitySoftware Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Inflectra
 
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
Product School
 
Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
Alison B. Lowndes
 
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Ramesh Iyer
 

Recently uploaded (20)

Generating a custom Ruby SDK for your web service or Rails API using Smithy
Generating a custom Ruby SDK for your web service or Rails API using SmithyGenerating a custom Ruby SDK for your web service or Rails API using Smithy
Generating a custom Ruby SDK for your web service or Rails API using Smithy
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
 
Elevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object CalisthenicsElevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object Calisthenics
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
 
Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !
 
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
 
How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
 
Knowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and backKnowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and back
 
Leading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
 
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
 
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
 
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMsTo Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
 
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualitySoftware Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
 
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
 
Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
 
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
 

● 3 Head Nodes (Controller Nodes)
− Ceph Monitor
− Ceph OSD
− OpenStack Controllers (all of them!)
− HAProxy
● 1 Bootstrap Node
− Cobbler (PXE boot)
− Repos
− Chef Server
− Rally/Tempest
● Remaining Nodes
− Nova Compute
− Ceph OSDs
− RGW – Apache
● Ubuntu
● Shared spine with Hadoop resources
(Diagram: sliced view of the stack – bootstrap node, then compute/Ceph OSD/RGW/Apache nodes for the remaining stack)
NEW POD ARCHITECTURE
12
● A number of large providers have taken similar approaches
● Ephemeral – fast/dangerous
− Host aggregates & flavors
− Not Ceph backed
(Diagram, illustrative only – not representative: one POD behind its TOR switch running HAProxy, OS-Nova, OS-Rabbit and OS-DB; a second POD running three Ceph Mons and Ceph OSDs, RBD only; plus bootstrap and monitoring nodes)
POD ARCHITECTURE (OPENSTACK/CEPH)
13
● A number of large providers have taken similar approaches
● Scale and re-provision as needed
● 3 PODs per rack
(Diagram, illustrative only – not representative: one OpenStack POD with Ceph block storage, OS-Nova, OS-Rabbit and OS-DB behind its TOR switch, plus two Ceph PODs each running three Ceph Mons and Ceph OSDs)
EPHEMERAL VS. CEPH BLOCK STORAGE
14
Numbers will vary in different environments. Illustrations are simplified.
● Ephemeral storage is a new feature option added to address high-IOP applications
(Diagram: Ceph vs. ephemeral data paths)
EPHEMERAL VS. CEPH BLOCK STORAGE
15
Numbers will vary in different environments. Illustrations are simplified.
● Ceph – Advantages
− All data is replicated at least 3 ways across the cluster
− Ceph RBD volumes can be created, attached and detached from any hypervisor
− Very fast provisioning using COW (copy-on-write) images
− Allows easy instance re-launch in the event of hypervisor failure
− High read performance
● Ceph – Disadvantages
− All writes must be acknowledged by multiple nodes before being considered committed (a tradeoff for reliability)
− Higher latency, because Ceph is network-based rather than local
● Ephemeral – Advantages
− Offers read/write speeds that can be 3–4 times faster than Ceph, with lower latency
− Can provide fairly large volumes cheaply
● Ephemeral – Disadvantages
− Trades data integrity for speed: if one drive in a RAID 0 set fails, all data on that node is lost
− Adding more capacity may be difficult (depends on the type of RAID)
− Running in JBOD/LVM mode without RAID, performance was not as good as Ceph
− Less important: with RAID, your drives need to be the same size or you lose capacity
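The capacity side of this trade-off can be made concrete with a quick sketch. The raw-capacity figure below is hypothetical, not taken from the deck; the point is the divisor, not the number:

```python
# Illustrative only: usable capacity under Ceph 3x replication vs. local RAID 0.
# The 120 TB figure is a made-up example, not a Bloomberg cluster size.

def usable_capacity_replicated(raw_tb: float, replicas: int = 3) -> float:
    """Every object is stored `replicas` times, so usable = raw / replicas."""
    return raw_tb / replicas

def usable_capacity_raid0(raw_tb: float) -> float:
    """RAID 0 stripes with no redundancy: all raw capacity is usable,
    but losing any single drive loses the whole volume."""
    return raw_tb

raw = 120.0  # TB of raw disk (hypothetical)
print(usable_capacity_replicated(raw))  # 40.0 TB usable, data survives 2 lost copies
print(usable_capacity_raid0(raw))       # 120.0 TB usable, survives 0 drive failures
```

In other words, the ephemeral option buys 3x the usable space and local-disk speed at the cost of any single-drive failure wiping the node.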
EPHEMERAL VS. CEPH BLOCK STORAGE
16
Numbers will vary in different environments. Illustrations are simplified.

                                    EPHEMERAL      CEPH
Block write bandwidth (MB/s)         1,094.02    642.15
Block read bandwidth (MB/s)          1,826.43    639.47
Character read bandwidth (MB/s)          4.93      4.31
Character write bandwidth (MB/s)         0.83      0.75
Block write latency (ms)                9.502    37.096
Block read latency (ms)                 8.121     4.941
Character read latency (ms)             2.395     3.322
Character write latency (ms)           11.052    13.587

Note: Ephemeral in JBOD/LVM mode is not as fast as Ceph. Numbers can also increase with additional tuning and different devices.
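The table's bandwidth rows can be reduced to per-metric speedup factors, which is often the easier number to reason about when sizing workloads (a quick derivation over the two block-bandwidth rows only):

```python
# Speedup of ephemeral (local RAID 0) over Ceph RBD, derived from the
# slide's benchmark table. Values are (ephemeral, ceph) bandwidth in MB/s.
bench = {
    "block_write_bw": (1094.02, 642.15),
    "block_read_bw":  (1826.43, 639.47),
}
speedup = {name: round(eph / ceph, 2) for name, (eph, ceph) in bench.items()}
print(speedup)  # block writes ~1.7x faster, block reads ~2.9x faster on ephemeral
```

Note that write latency diverges even more sharply (9.5 ms vs. 37.1 ms), consistent with Ceph waiting on multi-node write acknowledgement.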
CHALLENGES – LESSONS LEARNED
17
Network
● It’s all about the network
− Changed MTU from 1500 to 9000 on certain interfaces (float interface, storage interface)
− Hardware load balancers – keep an eye on performance
● Hardware
− Moving to more commodity-driven hardware
− All-flash storage in the compute cluster (high cost, good for block and ephemeral)
Costs
● Storage costs are very high in a converged compute cluster for Object Store
Analytics
● Need to know how the cluster is being used
● Need to know whether the tps meets the SLA
● Test directly against nodes, then layer in network components until you can verify all choke points in the data flow path
● Monitor and test always
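The MTU change is about amortizing per-frame header overhead across a larger payload. A rough payload-efficiency calculation, assuming standard Ethernet framing and IPv4/TCP headers with no options (this ignores interrupt and per-packet CPU costs, which are often the bigger win for storage traffic):

```python
# Fraction of on-wire bytes that carry payload at a given MTU.
# Assumes: 18 bytes Ethernet header + FCS, 20-byte IPv4 header,
# 20-byte TCP header (no options). Simplified model, not a measurement.

def payload_efficiency(mtu: int) -> float:
    payload = mtu - 20 - 20   # MTU minus IPv4 and TCP headers
    wire = mtu + 18           # MTU plus Ethernet framing
    return payload / wire

print(round(payload_efficiency(1500), 4))  # ~0.9618
print(round(payload_efficiency(9000), 4))  # ~0.9936
```

The headline efficiency gain is small; in practice jumbo frames matter more because each frame costs a fixed amount of per-packet processing, and 9000-byte frames cut the packet rate by roughly 6x for bulk replication traffic.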
NEW CEPH OBJECT STORE
18
OBJECT STORE STACK (RACK CONFIG)
19
Red Hat 7.1
● 1 TOR and 1 rack management node
● 3 1U nodes (Mon, RGW, Util)
● 17 2U Ceph OSD nodes
● 2x or 3x replication depending on need (3x default)
● Secondary RGW (may coexist with an OSD node)
● 10G cluster interface
● 10G public interface
● 1 IPMI interface
● OSD nodes (high-density server nodes)
− 6TB HDD x 12 – journal partitions on SSD
− No RAID 1 OS drives – instead we partitioned off a small amount of SSD1 for OS and swap, with the remainder of SSD1 used for some journals and SSD2 used for the remaining journals
− Failure domain is a node
(Diagram: 3 1U nodes, TOR/IPMI, converged 2U storage nodes)
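The rack layout above implies a raw capacity that is easy to sanity-check. The sketch below also applies the .75 near-full ratio quoted in the standard configuration to get a "safe" usable figure; treat it as back-of-the-envelope arithmetic, not a sizing claim:

```python
# Per-rack capacity implied by the slide: 17 OSD nodes x 12 x 6TB HDDs.
nodes, drives_per_node, drive_tb = 17, 12, 6

raw_tb = nodes * drives_per_node * drive_tb   # raw capacity per rack
usable_3x = raw_tb / 3                        # after 3x replication (the default)
usable_safe = usable_3x * 0.75                # stay under the .75 near-full ratio

print(raw_tb)                # 1224 TB raw
print(usable_3x)             # 408.0 TB usable at 3x
print(round(usable_safe))    # ~306 TB before nearing the near-full threshold
```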
OBJECT STORE STACK (ARCHITECTURE)
20
(Diagram: 1 Mon/RGW node per rack; storage nodes behind a TOR leaf; leaves uplinked to redundant spines and load balancers)
OBJECT STORE STACK
21
Standard configuration (archive cluster)
● Minimum of 3 racks = cluster
● OS – Red Hat 7.1
● Cluster network: bonded 10G or higher, depending on the size of the cluster
● Public network: bonded 10G for RGW interfaces
● 1 Ceph Mon node per rack, except on more than 3 racks: we need to keep an odd number of Mons, so some racks may not have Mons. On larger clusters we try to keep racks and Mons in different power zones
● We have developed a healthy “pain” tolerance. We mainly see drive failures and some node failures
● Minimum 1 RGW (dedicated node) per rack (may want more)
● Hardware load balancers to RGWs, with redundancy
● Erasure-coded pools (no cache tiers at present – testing). We use a host profile with 8/3 (k/m)
● Near-full and full ratios are .75/.85, respectively
● Index sharding
● Federated (regions/zones)
● All server nodes, no JBOD expansions
● S3 only at present, but we do have a few requests for Swift
● Fully AUTOMATED – Chef cookbooks to configure and manage the cluster (some Ansible)
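The 8/3 (k/m) erasure-coded profile is worth unpacking: each object is split into k=8 data chunks plus m=3 coding chunks, so the cluster can lose any 3 chunks (here, any 3 hosts, given the host failure domain) while storing far less raw data per usable byte than 3x replication. The arithmetic:

```python
# Storage overhead of an 8/3 (k/m) erasure-code profile vs. 3x replication.
k, m = 8, 3

ec_overhead = (k + m) / k     # raw bytes stored per usable byte: 1.375x
repl_overhead = 3.0           # 3x replication stores 3 raw bytes per usable byte
ec_efficiency = k / (k + m)   # fraction of raw capacity that is usable

print(ec_overhead)                  # 1.375
print(round(ec_efficiency, 3))      # 0.727 — vs. 0.333 for 3x replication
```

This is why erasure coding suits the archive cluster: both schemes tolerate the loss of any 3 copies/chunks, but EC more than doubles usable capacity, at the cost of extra CPU and reconstruction work on reads during failures.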
AUTOMATION
22
All of what we do only happens because of automation
● Company policy – Chef
● The Cloud Infrastructure Group uses Chef and Ansible. We use Ansible for orchestration and maintenance
● Bloomberg GitHub: https://github.com/bloomberg/bcpc
● Ceph-specific options
− Ceph Chef: https://github.com/ceph/ceph-chef
− Bloomberg Object Store: https://github.com/bloomberg/chef-bcs
− Ceph Deploy: https://github.com/ceph/ceph-deploy
− Ceph Ansible: https://github.com/ceph/ceph-ansible
● Our bootstrap server is our Chef server, one per cluster
TESTING
23
Testing is critical. We use different strategies for the different parts of OpenStack and Ceph we test
● OpenStack
− Tempest – we currently only use this for patches we make. We plan to use it more in our DevOps pipeline
− Rally – can’t do distributed testing, but we use it to find bottlenecks in OpenStack itself
● Ceph
− RADOS bench
− COSBench – going to try this with CBT
− CBT – Ceph Benchmarking Tool
− Bonnie++
− FIO
● Ceph – RGW
− JMeter – need to test load at scale. It takes a cloud to test a cloud
● A lot of the time you find it’s your network, load balancers, etc.
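For JMeter-style RGW load tests, tail latencies usually matter more than averages when checking an SLA. A minimal stdlib percentile helper over made-up sample latencies; the nearest-rank interpolation here is an illustrative choice, and real runs would feed in thousands of samples:

```python
# Nearest-rank percentile over a list of request latencies (milliseconds).
# Sample data is fabricated for illustration only.

def percentile(samples, p):
    """Return the value at the p-th percentile (0-100), nearest-rank style."""
    s = sorted(samples)
    idx = min(len(s) - 1, int(round(p / 100 * (len(s) - 1))))
    return s[idx]

latencies_ms = [12, 15, 11, 240, 14, 13, 16, 12, 18, 900]
print(percentile(latencies_ms, 50))  # 14 — the median looks healthy
print(percentile(latencies_ms, 90))  # 240 — the tail tells a different story
```

A mean over the same samples (~125 ms) would hide both the healthy median and the 900 ms outlier, which is exactly the kind of choke point the slide recommends hunting down layer by layer.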
CEPH USE CASE DEMAND – GROWING!
24
(Diagram: Ceph at the center of Object, Immutable, OpenStack, Real-time* and Big Data* use cases)
*Possible use cases if performance is enhanced
WHAT’S NEXT?
25
Continue to evolve our POD architecture
● OpenStack
− Work on performance improvements and track usage stats for departments
− Better monitoring
− LBaaS, Neutron
● Containers and PaaS
− We’re currently evaluating PaaS software and container strategies
● Better DevOps pipelining
− GoCD and/or Jenkins improved strategies
− Continue to enhance automation and re-provisioning
− Add testing to automation
● Ceph
− New block storage cluster
− Super-cluster design
− Performance improvements – testing Jewel
− RGW multi-master (multi-sync) between datacenters
− Enhanced security – encryption at rest (can already do) but with better key management
− NVMe for journals, and maybe for high-IOP block devices
− Cache tier (needs validation tests)
ADDITIONAL RESOURCES
27
● Chris Jones: cjones303@bloomberg.net
− GitHub: cloudm2
● Chris Morgan: cmorgan2@bloomberg.net
− GitHub: mihalis68
Cookbooks:
● BCC: https://github.com/bloomberg/bcpc
− Current repo for Bloomberg’s converged OpenStack and Ceph cluster
● BCS: https://github.com/bloomberg/chef-bcs
● Ceph-Chef: https://github.com/ceph/ceph-chef
The last two repos make up the Ceph Object Store and the full Ceph Chef cookbooks.