Mark Nelson, Engineering, Inktank
Ceph is a complicated system with lots of different components, which makes comprehensive testing a challenge. However, storage - especially virtual machine storage - needs to be fast in order to be useful.
In this talk, Inktank Performance Engineer Mark Nelson will discuss his approach to testing massively distributed systems like Ceph, will share and analyze recent benchmarks, and will discuss opportunities for enhancing performance.
Ceph Object Storage at Spreadshirt (July 2015, Ceph Berlin Meetup), Jens Hadlich
At Spreadshirt we have built up our own Ceph cluster(s) to spread customer images around the world. This presentation gives an overview of the setup, some numbers, how things work (and how some things don't work as expected), and the issues we are currently facing.
OSDC 2013 | Scale-Out made easy: Petabyte storage with Ceph by Martin Gerhard... (NETWAYS)
Ceph is what you are looking for if you need seamlessly scalable storage: While traditional storage solutions like SANs or SAN drop-in replacements are scale-up only, Ceph allows you to turn every off-the-shelf computer into a member of a distributed, centrally managed storage network. With Ceph in place, you will never again have to worry about adding new disks to existing SANs or replacing existing disks with bigger ones -- just add new nodes to the cluster and see your storage grow. And as if that wasn't enough already, Ceph comes with inherent HA and replication capabilities, taking the burden of securing your data against outages off your shoulders.
This presentation gives an insight into the basic ideas behind the Ceph object storage solution. It also elaborates on the existing front-ends for Ceph (CephFS, RBD, radosgw) and explains how they work. Examples will demonstrate how to use Ceph in production. A live demo completes the talk.
Presentation from 2016 Austin OpenStack Summit.
The Ceph upstream community is declaring CephFS stable for the first time in the recent Jewel release, but that declaration comes with caveats: while we have filesystem repair tools and a horizontally scalable POSIX filesystem, we have default-disabled exciting features like horizontally-scalable metadata servers and snapshots. This talk will present exactly what features you can expect to see, what's blocking the inclusion of other features, and what you as a user can expect and can contribute by deploying or testing CephFS.
Red Hat Storage Server Administration Deep Dive (Red_Hat_Storage)
In this session for administrators of all skill levels, you’ll get a deep technical dive into Red Hat Storage Server and GlusterFS administration.
We’ll start with the basics of what scale-out storage is, and learn about the unique implementation of Red Hat Storage Server and its advantages over legacy and competing technologies. From the basic knowledge and design principles, we’ll move to a live start-to-finish demonstration. Your experience will include:
Building a cluster.
Allocating resources.
Creating and modifying volumes of different types.
Accessing data via multiple client protocols.
A resiliency demonstration.
Expanding and contracting volumes.
Implementing directory quotas.
Recovering from and preventing split-brain.
Asynchronous parallel geo-replication.
Behind-the-curtain views of configuration files and logs.
Extended attributes used by GlusterFS.
Performance tuning basics.
New and upcoming feature demonstrations.
Those new to the scale-out product will leave this session with the knowledge and confidence to set up their first Red Hat Storage Server environment. Experienced administrators will sharpen their skills and gain insights into the newest features. IT executives and managers will gain a valuable overview to help fuel the drive for next-generation infrastructures.
This presentation provides an overview of Dell PowerEdge R730xd server performance results with Red Hat Ceph Storage. It covers the advantages of using Red Hat Ceph Storage on Dell servers with proven hardware components that provide high scalability, enhanced ROI, and support for unstructured data.
Apache Spark on Supercomputers: A Tale of the Storage Hierarchy with Costin I... (Databricks)
In this session, the speakers will discuss their experiences porting Apache Spark to the Cray XC family of supercomputers. One scalability bottleneck is in handling the global file system present in all large-scale HPC installations. Using two techniques (file open pooling, and mounting the Spark file hierarchy in a specific manner), they were able to improve scalability from O(100) cores to O(10,000) cores. This is the first result at such a large scale on HPC systems, and it had a transformative impact on research, enabling their colleagues to run on 50,000 cores.
With this baseline performance fixed, they will then discuss the impact of the storage hierarchy and of the network on Spark performance. They will contrast a Cray system with two levels of storage with a “data intensive” system with fast local SSDs. The Cray contains a back-end global file system and a mid-tier fast SSD storage. One conclusion is that local SSDs are not needed for good performance on a very broad workload, including spark-perf, TeraSort, genomics, etc.
They will also provide a detailed analysis of the impact of latency of file and network I/O operations on Spark scalability. This analysis is very useful to both system procurements and Spark core developers. By examining the mean/median value in conjunction with variability, one can infer the expected scalability on a given system. For example, the Cray mid-tier storage has been marketed as the magic bullet for data intensive applications. Initially, it did improve scalability and end-to-end performance. After understanding and eliminating variability in I/O operations, they were able to outperform any configurations involving mid-tier storage by using the back-end file system directly. They will also discuss the impact of network performance and contrast results on the Cray Aries HPC network with results on InfiniBand.
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C... (Odinot Stanislas)
After a short introduction to distributed storage and a description of Ceph, Jian Zhang presents some interesting benchmarks in this talk: sequential tests, random tests, and above all a comparison of results before and after optimization. The configuration parameters and optimizations applied (large page numbers, omap data on a separate disk, ...) yield at least a 2x performance improvement.
Have you heard that all in-memory databases are equally fast but unreliable, inconsistent and expensive? This session highlights in-memory technology that busts all those myths.
Redis, the fastest database on the planet, is not simply an in-memory key-value data store, but rather a rich in-memory data-structure engine that serves the world’s most popular apps. Redis Labs’ unique clustering technology enables Redis to be highly reliable, keeping every data byte intact despite hundreds of cloud instance failures and dozens of complete data-center outages. It delivers full CP system characteristics at high performance. And with the latest Redis on Flash technology, Redis Labs achieves close to in-memory performance at 70% lower operational costs. Learn about the best uses of in-memory computing to accelerate everyday applications such as high volume transactions, real-time analytics, IoT data ingestion and more.
Accelerating HBase with NVMe and Bucket Cache (David Grier)
This set of slides describes some initial experiments we designed to discover performance improvements in Hadoop technologies using NVMe technology.
Technologies for working with disk storage and file systems in Windows Serve... (Vitaly Starodubtsev)
What is Storage Replica
Architecture and scenarios
Synchronous and asynchronous replication
Disk-to-disk, server-to-server, intra-cluster, and cluster-to-cluster replication
Storage Replica design and planning
What's new in Windows Server 2016 TP5
The management GUI and other capabilities: demonstration and development plans
Storage Replica integration with Storage Spaces Direct
Accelerating HBase with NVMe and Bucket Cache (Nicolas Poggi)
The Non-Volatile Memory Express (NVMe) standard promises an order of magnitude faster storage than regular SSDs, while at the same time being more economical than regular RAM in TB/$. This talk evaluates the use cases and benefits of NVMe drives for use in Big Data clusters with HBase and Hadoop HDFS.
First, we benchmark the different drives using system level tools (FIO) to get maximum expected values for each different device type and set expectations. Second, we explore the different options and use cases of HBase storage and benchmark the different setups. And finally, we evaluate the speedups obtained by the NVMe technology for the different Big Data use cases from the YCSB benchmark.
In summary, while the NVMe drives show up to 8x speedup in best case scenarios, testing the cost-efficiency of new device technologies is not straightforward in Big Data, where we need to overcome system level caching to measure the maximum benefits.
Demystifying Storage - Building large SANs (Directi Group)
From http://wiki.directi.com/x/hQAa - This is a fairly detailed presentation I made at BarCamp Mumbai on building large storage networks and different SAN topologies. It covers the fundamentals of selecting hard drives, RAID levels, and the performance of various storage architectures. This is Part I of a 3-part series.
Yes, that is a Cephalopod attacking a police box. Likeness to any existing objects, characters, or ideas is purely coincidental.
My name is Mark and I work for a company called Inktank making an open source distributed storage system called Ceph. Before I started working for Inktank, I worked for the Minnesota Supercomputing Institute. My job was to figure out how to make our clusters run as efficiently as possible. A lot of researchers ran code on the clusters that didn't make good use of the expensive network fabrics those systems have. We worked to find better alternatives for these folks and ended up prototyping a high performance OpenStack cloud for research computing. The one piece that was missing was the storage solution. That's how I discovered Ceph.
I had heard of Ceph through my work for the Institute. The original development was funded by a high performance computing research grant at Lawrence Livermore National Laboratory. It had been in development since 2004, but it was only in around 2010 that I really started hearing about people starting to deploy storage with it. Ceph itself is an amazing piece of software. It lets you take commodity servers and turn them into a high performance, fault tolerant, distributed storage solution. It was designed to scale from the beginning and is made up from many distinct components.
The primary building blocks of Ceph are the daemons that run on the nodes in the cluster. Ceph is composed of OSDs that store data and monitors that keep track of cluster health and state. When using Ceph as a distributed POSIX filesystem (CephFS), metadata servers may also be used. On top of these daemons are various APIs. Librados is the lowest-level API and can be used to interact with RADOS directly at the object level. Librbd and libcephfs provide block- and file-level API access to RBD and CephFS respectively. Finally, we have the high-level block, object, and filesystem interfaces that make Ceph such a versatile storage solution.
If you take one thing away from this talk, it should be that Ceph is designed to be distributed. Any number of services can run on any number of nodes. You can have as many storage servers as you want. Cluster monitors are distributed and use a consensus algorithm called Paxos to avoid split-brain scenarios when servers fail. For S3- or Swift-compatible object storage, you can distribute requests across multiple gateways. When CephFS (still beta!) is used, the metadata servers can be distributed across multiple nodes and store metadata by distributing it across all of the OSD servers. When talking about data distribution specifically, though, the crowning achievement in Ceph is CRUSH.
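The split-brain protection mentioned above comes down to a simple rule: the monitors can only act when a strict majority of them can talk to each other. A minimal sketch of that rule (illustrative only, not Ceph's actual monitor code):

```python
# Illustrative sketch of majority-quorum logic: a partition of the
# monitor cluster can only keep serving cluster maps if it contains
# a strict majority, so two partitions can never both make progress.

def has_quorum(total_monitors: int, reachable: int) -> bool:
    """A strict majority of monitors must be reachable to form a quorum."""
    return reachable >= total_monitors // 2 + 1

# With 5 monitors, a 3/2 network partition leaves exactly one side
# able to form a quorum:
print(has_quorum(5, 3))  # True: majority side keeps running
print(has_quorum(5, 2))  # False: minority side blocks
```

This is also why monitor counts are usually odd: an even split of an even-sized cluster leaves neither side with a majority.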
In many distributed storage systems there is some kind of centralized server that maintains an allocation table of where data is stored in the cluster. Not only is this a single point of failure, but it also can become a bottleneck as clients need to query this server to find out where data should be written or read. CRUSH does away with this. It is a hash based algorithm that allows any client to algorithmically determine where in the cluster data should be placed based on its name. Better yet, data is distributed across OSDs in the cluster pseudo-randomly. CRUSH also provides other benefits, like the ability to hierarchically define failure domains to ensure that replicated data ends up on different hardware.
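To make the idea concrete, here is a toy placement function in the same spirit. This is NOT the real CRUSH algorithm (it has no buckets, weights, or failure-domain hierarchy); it uses rendezvous (highest-random-weight) hashing purely to show how every client can compute an object's location from its name alone, with no allocation table:

```python
import hashlib

def score(obj_name: str, osd_id: int) -> int:
    # Deterministic per-(object, OSD) hash; any client computes the same value.
    h = hashlib.sha1(f"{obj_name}:{osd_id}".encode()).hexdigest()
    return int(h, 16)

def place(obj_name: str, osd_ids: list, replicas: int = 3) -> list:
    """Pick `replicas` OSDs deterministically from the object's name."""
    ranked = sorted(osd_ids, key=lambda osd: score(obj_name, osd), reverse=True)
    return ranked[:replicas]

osds = list(range(12))
print(place("rbd_data.1234.0000000000000000", osds))
# Every client computes the same answer; no central lookup server involved.
```

Because placement is a pure function of the object name and the OSD set, there is no metadata server to query on the data path, which is exactly the bottleneck CRUSH removes.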
From a performance perspective Ceph has a lot of benefits over many other distributed storage systems. There is no centralized server that can become a bottleneck for data allocation lookups. Data is well distributed across OSDs due to the pseudo-random nature of CRUSH. On traditional storage solutions a RAID array is used on each server. When a disk fails, the RAID array needs to be rebuilt, which causes a hotspot in the cluster that drags performance down, and RAID rebuilds can last a long time. Because data in Ceph is pseudo-randomly distributed, healing happens cluster-wide, which dramatically speeds up the recovery process.
One of the challenges in any distributed storage system is what happens when you have hotspots in the cluster. If any one server is slow to fulfill requests, requests will start backing up. Eventually a limit is reached where all outstanding requests are concentrated on the slower server(s), starving the other, faster servers and potentially degrading overall cluster performance significantly. Likewise, distributed storage systems in general need a lot of concurrency to keep all of the servers and disks constantly working. Ceph also works really hard to ensure data integrity, which can sometimes impact performance. For example, Ceph does a full write of the data to the journal for every commit, and computes CRCs for every single journal write. It also has background scrub processes regularly verifying data integrity.
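The journal trade-off above can be sketched in a few lines. This is illustrative only, not Ceph's on-disk journal format: each entry carries a checksum, and replay refuses to apply an entry whose data no longer matches it:

```python
import struct
import zlib

# Toy journal entry: [length][crc32][payload]. Ceph's real format differs;
# this just shows the cost/benefit of checksumming every journal write.

def journal_encode(payload: bytes) -> bytes:
    crc = zlib.crc32(payload)
    return struct.pack("<II", len(payload), crc) + payload

def journal_replay(entry: bytes) -> bytes:
    length, crc = struct.unpack("<II", entry[:8])
    payload = entry[8:8 + length]
    if zlib.crc32(payload) != crc:
        # Corruption detected: refuse to replay a damaged entry.
        raise IOError("journal entry failed CRC check")
    return payload

entry = journal_encode(b"write: object A, offset 0")
assert journal_replay(entry) == b"write: object A, offset 0"
```

Computing a CRC per write costs CPU and the full data copy costs bandwidth, but it means a crash mid-write can never silently replay torn data.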
You guys have been patient so far, but if you are anything like me then your attention span is starting to wear thin about now. Email or Slashdot could be looking appealing. So let's switch things up a little bit. We've talked about why Ceph is conceptually so amazing, but what can it really deliver as far as performance goes? That was the question we asked about a year ago after seeing some rather lacklustre results on some of our existing internal test hardware. Our director of engineering told me to go forth and build a system that would give us some insight into how Ceph could perform on (what in my opinion would be) an ideal setup.
One of the things that we noticed from our previous testing is that some systems are harder to get working well than others. One potential culprit appeared to be that some expander backplanes may not behave entirely properly. For this system, we decided to skip expanders entirely and directly connect each drive in the system to its own controller SAS lane. That means that with 24 spinning disks and 8 SSDs, we needed 4 dual-port controllers to connect all of the drives. With so many disks in this system, we'd need a lot of external network connectivity and opted for a bonded 10GbE setup. We only have a single client, which could be a bottleneck, but at that point we were just hoping to break 1GB/s to start out with, which seemed feasible. So were we able to do it?
What you are looking at is a chart showing the write and read throughput on our test platform using our RADOS bench tool. This tool directly utilizes librados to write objects out as fast as possible and after writes complete, read them back. We're doing syncs and flushes between the tests on both the client and server and have measured the underlying disk throughput to make sure the results are accurate. What you are seeing here is that for writes, we not only hit 1GB/s, but are in fact hitting 2GB/s and maxing out the bonded 10GbE link. Reads aren't quite saturating the network but are coming pretty close. This is really good news because it means that with the right hardware, you can build Ceph nodes that can perform extremely well.
I like to show the previous slide because it makes a big impact and I get to feel vindicated regarding spending a bunch of our startup money on new toys. And who doesn't like seeing a single server able to write out 2GB/s+ of data? The problem though is that reading and writing out 4MB objects directly via librados isn't necessarily a great representation of how Ceph will really perform once you layer block or S3 storage on top of it.
Say for instance that you are using Ceph to provide block storage via RBD for OpenStack and have an application that does lots of 4K writes. Testing RADOS Bench with 4K objects might give you some rough idea of how RBD performs with small IOs, but it's also misleading. One of the things that you might not know is that RBD stores each block in a 4MB object behind the scenes. Doing 4K writes to 4MB objects results in different behavior than writing out distinct 4K objects themselves. Throw in the writeback cache implementation in QEMU RBD and things get complicated very fast. Ultimately you really do have to do the tests directly on RBD to know what's going on.
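The 4K-write-into-4MB-object mapping above is just arithmetic, and a quick sketch makes it clear why the two workloads behave so differently. This is a simplified model (default object size, no striping features) of how a block-device write lands inside RADOS objects:

```python
# Simplified model of RBD's block-to-object mapping: the image is chunked
# into fixed-size objects (4 MiB by default), so a 4K write becomes a
# small partial update inside one much larger object.

OBJECT_SIZE = 4 * 1024 * 1024  # default RBD object size, in bytes

def map_write(offset: int, length: int, object_size: int = OBJECT_SIZE):
    """Yield (object_index, offset_in_object, length) pieces for a write."""
    pieces = []
    while length > 0:
        obj = offset // object_size
        off = offset % object_size
        n = min(length, object_size - off)
        pieces.append((obj, off, n))
        offset += n
        length -= n
    return pieces

# A 4 KiB write at offset 10 MiB touches a single object, partway in:
print(map_write(10 * 1024 * 1024, 4096))    # [(2, 2097152, 4096)]
# A write straddling an object boundary is split into two pieces:
print(map_write(OBJECT_SIZE - 2048, 4096))  # [(0, 4192256, 2048), (1, 0, 2048)]
```

So a stream of random 4K writes scatters small partial updates across many large objects, which is a very different IO pattern from RADOS Bench creating distinct 4K objects.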
...And we've done that. In alarming detail. What you are seeing here is a comparison of sequential and random 4K writes using a really useful benchmarking tool called fio. We are testing Kernel RBD and QEMU RBD at differing concurrency and IO depth values. What you may notice is that the scaling behavior and throughput looks very different in each case. QEMU RBD with the writeback cache enabled is performing much better. Interestingly RBD cache not only helps sequential writes, but seems to help with random writes too (especially at low levels of concurrency). This is one example of the kind of testing we do, but we have thousands of graphs like this exploring different workloads and system configurations. It's a lot of work!
We've gotten a lot of interesting results from our high performance test node and published quite a bit of those results in different articles on the web. Unfortunately we only have one of those nodes, and what our readers have started asking for are tests showing how performance scales across nodes. Do you remember what I said earlier about Ceph loving homogeneity and lots of concurrency? The truth is that the more consistently your hardware behaves, especially over time, the better your cluster is going to scale. Since I like to show good results instead of bad ones, let's take a look at an example of a cluster that's scaling really well.
About a year and a half ago Oak Ridge National Laboratory (ORNL) reached out to Inktank to investigate how Ceph performs on a high performance storage system they have in their lab. This platform is a bit different from the typical platform we deploy Ceph on. It has a ton of drives (over 400!) configured in RAID LUNs, housed in chassis connected to the servers via QDR InfiniBand links. As a result, the back-end storage maxes out at about 12GB/s but has pretty consistent performance characteristics since there are so many drives behind the scenes. Initially the results ORNL was seeing were quite bad. We worked together with them to find optimal hardware settings and did a lot of tuning and testing. By the time we were done, performance had improved quite a bit.
This chart shows performance on the ORNL cluster in its final configuration using RADOS bench. Notice that throughput scales fairly linearly as we add storage nodes to the cluster. The blue and red lines represent write and read performance respectively. If you just look at the write performance, the results might seem disappointing. Ceph, however, is doing full data writes to the journals to guarantee the atomicity of its write operations. Normally in high performance situations we get around this by putting journals on SSDs, but this solution unfortunately doesn't have any. Another limitation is that the client network is using IPoIB, and on this hardware that means the clients will never see more than about 10GB/s aggregate throughput. Despite these limitations, we are scaling well and throughput to the storage chassis is pretty good!
All of the results I've shown so far have been designed to showcase how much throughput we can push from the client and are not using any kind of replication. On the ORNL hardware this is probably justifiable because they are using RAID5 arrays behind the scenes and for some solutions like HPC scratch storage, running without replication may be acceptable. For a lot of folks Ceph's seamless support for replication is what makes it so compelling. So how much does replication hurt?
As you might expect, replication has a pretty profound impact on write performance. Between doing journal writes and 3x replication, we see that client write performance is over 6 times slower than the actual write speed to the DDN chassis. What is probably more interesting is how the total write throughput changes. Going from 1x to 2x replication lowers the overall write performance by about 15-20%. When Ceph writes data to an OSD, the data must be written to the journal (BTRFS is a special case), and to the replica OSD's journal before the acknowledgement can be sent to the client. This not only results in extra data being written, but extra latency for every write. Read performance remains high regardless of replication.
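The arithmetic behind that "over 6 times slower" figure can be made explicit (a simplified model of my own, ignoring latency effects and the BTRFS special case mentioned above):

```python
# Simplified write-amplification model: every client byte is written once
# to the journal and once to the data store, on each replica.


def write_amplification(replicas, journal_copies=1):
    """Bytes hitting disk per client byte: (data + journal) per replica."""
    return replicas * (1 + journal_copies)


print(write_amplification(replicas=3))  # 6: journal + data on 3 replicas
print(write_amplification(replicas=1))  # 2: the journal double-write alone
```

Six bytes on disk per client byte lines up with client writes landing at roughly one sixth of the raw chassis speed; the model ignores the added round-trip latency to replica journals, which is the other cost described above.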
So again I've shown a bunch of RADOS bench results, but that's not what people ultimately care about. For high performance computing, customers really want CephFS: our distributed POSIX filesystem. Before we go on, let me say that our block and object layers are production ready, but CephFS is still in beta. It's probably the most complex interface we have on top of Ceph and there are still a number of known bugs. We've also done very little performance tuning, so when we started this testing we were pretty unsure about how it would perform.
To test CephFS, we settled on a tool that is commonly used in the HPC space called IOR. It coordinates IO on multiple client nodes using MPI and has many options that are useful for testing high performance storage systems. When we first started testing CephFS, the performance was lower than we hoped. Through a series of tests, profiling, and investigation we were able to tweak the configuration to produce the results you see here. With 8 client nodes, writes are nearly as high as what we saw with RADOS bench, but reads have topped out lower. We are still seeing some variability in the results and have more tuning to do, but are happy with the performance we've been able to get so far given CephFS's level of maturity.
The results that we've shown thus far are only a small sample of the barrage of tests that we run. We have hundreds, if not thousands, of graphs and charts showcasing performance of Ceph on different hardware, and serving different kinds of IO. Given how open Ceph is, this is ultimately going to be a losing battle. There are too many platforms, too many applications, and too many ways performance can be impacted to capture them all.
So given that you can't catch everything ahead of time, what do you do when cluster performance is lower than you expected? First, make a pot of coffee because you may be in for a long night. It's going to take some blood, sweat and tears, but luckily some other folks have paved the way and developed some very useful tools that can make the job a lot easier.