This document summarizes a presentation about the Ceph distributed storage system. It provides an overview of Ceph's object, block, and file capabilities. It also discusses how Ceph is used in various environments like OpenStack, CloudStack, Linux distributions, and with XenServer. Community involvement and upcoming development areas are also reviewed, like erasure coding, geo-replication, and tiering. Attendees are encouraged to get involved by contributing code, participating in discussions, or attending upcoming Ceph Days events.
RBD, the RADOS Block Device in Ceph, gives you virtually unlimited scalability (without downtime), high performance, intelligent balancing and self-healing capabilities that traditional SANs can't provide. Ceph achieves this higher throughput through a unique system of placing objects across multiple nodes, and adaptive load balancing that replicates frequently accessed objects over more nodes. This talk will give a brief overview of the Ceph architecture, current integration with Apache CloudStack, and recent advancements with Xen and blktap2.
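The placement and adaptive-balancing ideas described above can be sketched in a few lines of Python. This is a toy illustration, not Ceph's actual CRUSH code: objects are mapped to nodes by deterministic hashing (so every client computes the same answer without a central lookup), and frequently read objects get an extra replica. All names here (`place_object`, `HOT_THRESHOLD`, the node list) are invented for the sketch.

```python
import hashlib

NODES = ["node-%d" % i for i in range(6)]   # hypothetical storage nodes
BASE_REPLICAS = 2                            # normal replica count
HOT_THRESHOLD = 100                          # reads/interval before adding a replica

def _rank(obj_name, node):
    # Deterministic pseudo-random score per (object, node) pair.
    return hashlib.sha256((obj_name + node).encode()).hexdigest()

def place_object(obj_name, read_count=0):
    """Return the nodes holding obj_name; hot objects get one extra replica."""
    replicas = BASE_REPLICAS + (1 if read_count >= HOT_THRESHOLD else 0)
    # Every client computes the same ranking, so no central lookup table is needed.
    return sorted(NODES, key=lambda n: _rank(obj_name, n))[:replicas]

cold = place_object("vm-disk-1", read_count=3)
hot = place_object("vm-disk-1", read_count=500)
print(cold, hot)
```

Note that the hot-object replica set is a superset of the cold one, so promoting an object to "hot" only adds copies rather than reshuffling existing ones.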
Ceph is an open-source, software-defined storage system and an excellent (I would say the only) storage backend for cloud storage. Ceph is the future of storage. In this presentation I explain Ceph and OpenStack briefly; I hope you enjoy it.
Presentation for the July 2018 @medianetlab meetup at NCSR "Demokritos"
A related blog post can be found here: https://medianetlab.gr/mnlab-meetup-kubernetes/
and the video: https://www.youtube.com/watch?v=l2ce5U9bh6M
DockerCon EU 2015: Finding a Theory of the Universe with Docker and Volunteer..., by Docker, Inc.
Presentation by Dr. Marius Millea, Cosmologist and Postdoctoral Fellow at the Institut Lagrange de Paris
Cosmology@Home is a project which uses volunteer computing to analyze cosmological data and answer questions about our universe such as "how much dark matter is there?" and "under what conditions did the Big Bang occur?" We recently began using Docker by taking each job which we would normally send to our volunteer computers, and packaging it up inside a Docker container. The volunteer computers themselves come from interested users all over the world who download and run the software allowing them to become volunteers (called BOINC). The system is working exceedingly well and using Docker has made it massively easier for us to develop and run it. I will explain some of the technical details of the implementation, which involves a customized boot2docker ISO, as well as give a brief summary of the scientific questions we are trying to answer and how these results, made possible by Docker, are helping analyze data from, e.g., the European Space Agency's Planck satellite.
KubeCon EU 2016: Killing containers to make weather beautiful, by KubeAcademy
The Met Office Informatics Lab includes scientists, developers and designers. We build prototypes exploring new technologies to make environmental data useful. Here we describe a recent project to process multi-dimensional weather data to create a fully interactive 4D browser application. We used long-running containers to serve data and web pages and short-running processes to ingest and compress the data. Forecast data is issued every three hours so our data ingestion goes through regular and predictable bursts (i.e. perfect for autoscaling).
We built a Kubernetes cluster in an AWS group which auto-scales based on load. We used replication controllers to process the data. Every three hours ingestion jobs are added to a queue, and the number of ingestion containers is set in proportion to the queue length. Each worker completes exactly one ingestion job from the queue and then exits, at which point Kubernetes creates a new one to process the next message. This has allowed us to remove the lifespan logic from the containers and keep them light, fast and massively scalable. We are now in the process of using this in our production systems.
Sched Link: http://sched.co/6BWQ
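The scaling rule described above (worker count proportional to queue length) can be sketched in a few lines. This is a hedged illustration, not the Met Office's code: the function name, the jobs-per-worker ratio, and the cap are invented, and a real controller would apply the result by patching the replication controller's replica count via the Kubernetes API.

```python
import math

MAX_WORKERS = 50          # assumed cluster capacity cap
JOBS_PER_WORKER = 1       # each worker takes exactly one job, then exits

def desired_replicas(queue_length, jobs_per_worker=JOBS_PER_WORKER,
                     max_workers=MAX_WORKERS):
    """Number of worker pods to request for the current queue length."""
    if queue_length <= 0:
        return 0
    return min(max_workers, math.ceil(queue_length / jobs_per_worker))

# Every three hours a burst of ingestion jobs arrives:
print(desired_replicas(0))    # idle between forecasts: no workers
print(desired_replicas(120))  # burst larger than capacity: capped
```

Because each worker exits after one job, the replica count only has to be recomputed when the queue length changes; Kubernetes replacing exited pods does the rest.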
Using Kubernetes and TensorFlow to build the Fog Computing Platform that can dynamically deploy the deep learning applications on to the IoT devices (Raspberry PI).
Immutable infrastructure with Docker and containers (GlueCon 2015), by Jérôme Petazzoni
"Never upgrade a server again. Never update your code. Instead, create new servers, and throw away the old ones!"
That's the idea of immutable servers, or immutable infrastructure. This makes many things easier: rollbacks (you can always bring back the old servers), A/B testing (put old and new servers side by side), security (use the latest and safest base system at each deploy), and more.
However, throwing in a bunch of new servers at each one-line CSS change is going to be complicated, not to mention costly.
Containers to the rescue! Creating container "golden images" is easy, fast, dare I say painless. Replacing your old containers with new ones is also easy to do; much easier than virtual machines, let alone physical ones.
In this talk, we'll quickly recap the pros (and cons) of immutable servers; then explain how to implement that pattern with containers. We will use Docker as an example, but the technique can easily be adapted to Rocket or even plain LXC containers.
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019, by Sean Cohen
Starting from the basics, we explore the advantages of using Rook as a Storage operator to serve Ceph storage, the leading Software-Defined Storage platform in the Open Source world. Ceph automates the internal storage management, while Rook automates the user-facing operations and effectively turns a storage technology into a service transparent to the user. The combination delivers an impressive improvement in UX and provides the ideal storage platform for Kubernetes.
A comprehensive examination of use cases and open problems will complement our review of the Rook architecture. We will deep-dive into what Rook does well, what it does not do (yet), and what trade-offs using a storage operator involves operationally. With live access to a running cluster, we will showcase Rook in action as we discuss its capabilities.
https://www.openstack.org/summit/denver-2019/summit-schedule/events/23515/storage-101-rook-and-ceph
Solving k8s persistent workloads using k8s DevOps style, by MayaData
Solving k8s persistent workloads using k8s DevOps style. Presented at Container_stack-Zurich-2019.
- How hardware trends enforce a change in the way we do things
- Storage limitations bubble up
- Infrastructure as code
OSCON: Incremental Revolution - What Docker learned from the open-source fire..., by Docker, Inc.
Since Solomon Hykes unveiled Docker at the PyCon conference three years ago, containers have revolutionized how developers and ops teams build, ship, and run applications. Solomon explores the past, present, and future of our container ecosystem and shares lessons learned from managing successful open source projects across several dimensions: technology, people, products, and business.
Sign up for the Docker for Mac and Windows beta: beta.docker.com
In this talk Ben will walk you through running Cassandra in a Docker environment to give you a flexible development environment that uses only a very small set of resources, both locally and with your favorite cloud provider. Lessons learned running Cassandra with a very small set of resources are applicable to both your local development environment and larger, less constrained production deployments.
An introduction to what containers are and how to use them, starting from a comparison with virtual machines and showing how to use persistent storage and port mapping in containers.
The last part covers what Kubernetes is, what kinds of problems it aims to solve, and how it solves them.
There have been heaping piles of buzz surrounding Ceph and OpenStack lately, and similar amounts of work have been going into the integration between the two in recent versions. We'll take a look at how this work is making all the awesomeness of Ceph available to users in a simple, intuitive, and powerful way. The world of Havana and beyond is certainly no different, and promises to continue the trend of both functionality and buzz-worthiness.
This talk, given at the OpenStack meetup in Boston (Aug 14, 2013), gives a brief introduction to Ceph for the uninitiated and takes a look at what's coming down the road. The short term of Havana has plenty to keep fans of both platforms happy and busy, but there are plenty more interesting problems that we can tackle. In addition to the concrete short-term plans, we'll take a look at how less-often-used pieces of the Ceph platform can help augment your OpenStack setup, some general blue-sky thinking, and what the community can do to get involved.
Sanger OpenStack presentation March 2017, by Dave Holland
A description of the Sanger Institute's journey with OpenStack to date, covering RHOSP, Ceph, S3, user applications, and future plans. Given at the Sanger Institute's OpenStack Day.
The slides from our first webinar on getting started with Ceph. You can watch the full webinar on demand from http://www.inktank.com/news-events/webinars/. Enjoy!
Under The Hood Of A Shard-Per-Core Database Architecture, by ScyllaDB
Most databases are based on architectures that pre-date advances to modern hardware. This results in performance issues, the need to overprovision, and a high total cost of ownership. In this webinar, we will discuss the advances to modern server technology and take a deep dive into ScyllaDB’s shard-per-core architecture and our asynchronous engine, the Seastar framework.
Join us to learn how Seastar (and ScyllaDB):
- Avoid locks and contention on the CPU level
- Bypass kernel bottlenecks
- Implement its per-core shared-nothing autosharding mechanism
- Utilize modern storage hardware
- Leverage NUMA to get the best RAM performance
- Balance your data across CPUs and nodes for the best and smoothest performance
Plus we’ll cover the advantages of unlocking vertical scalability.
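The per-core autosharding mentioned in the list above boils down to deterministically routing each partition key to one core, so that core exclusively owns the data and no locks are needed. A minimal sketch of the routing idea, with names and the shard count assumed for illustration (this is not Seastar or ScyllaDB code):

```python
import zlib

NUM_SHARDS = 8   # one shard per CPU core (assumed)

def shard_of(partition_key):
    """Route a partition key to the core that exclusively owns it."""
    # A stable hash means every request for the same key lands on the
    # same core, so per-key state never needs cross-core locking.
    return zlib.crc32(partition_key.encode()) % NUM_SHARDS

keys = ["user:1", "user:2", "user:1"]
print([shard_of(k) for k in keys])
```

The design choice being illustrated: contention is avoided structurally (by ownership) rather than managed at runtime (by locks).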
OpenStack and Ceph: the Winning Pair
By: Sebastien Han
Ceph has become increasingly popular and has seen several deployments inside and outside OpenStack. The community and Ceph itself have greatly matured. Ceph is a fully open source distributed object store, network block device, and file system designed for reliability, performance, and scalability from terabytes to exabytes. Ceph utilizes a novel placement algorithm (CRUSH), active storage nodes, and peer-to-peer gossip protocols to avoid the scalability and reliability problems associated with centralized controllers and lookup tables. The main goal of the talk is to convince those of you who aren't already using Ceph as a storage backend for OpenStack to do so. I consider Ceph to be the de facto storage backend for OpenStack, for a lot of good reasons that I'll lay out during the talk. Since the Icehouse OpenStack summit, we have been working really hard to improve the Ceph integration. Icehouse is definitely THE big release for OpenStack and Ceph. In this session, Sebastien Han from eNovance will go through several subjects, such as:
- Ceph overview
- Building a Ceph cluster: general considerations
- Why is Ceph so good with OpenStack?
- OpenStack and Ceph: a 5-minute quick start for developers
- Typical architecture designs
- State of the integration with OpenStack (Icehouse's best additions)
- Juno roadmap and beyond
Video Presentation: http://bit.ly/1iLwTNf
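The CRUSH property the abstract highlights, computing placement instead of consulting a central lookup table, can be illustrated with a toy two-step mapping (object to placement group, placement group to OSDs). This is a simplification for intuition only, not Ceph's real algorithm: the PG count, OSD list, and function names are all made up for the sketch.

```python
import hashlib
import zlib

PG_NUM = 64                      # placement groups in the pool (assumed)
OSDS = ["osd.%d" % i for i in range(8)]
REPLICAS = 3

def object_to_pg(obj_name):
    # Step 1: a stable hash of the object name picks a placement group.
    return zlib.crc32(obj_name.encode()) % PG_NUM

def pg_to_osds(pg_id):
    # Step 2: a deterministic pseudo-random ranking picks the replica set.
    def score(osd):
        return hashlib.md5(("%d:%s" % (pg_id, osd)).encode()).hexdigest()
    return sorted(OSDS, key=score)[:REPLICAS]

def locate(obj_name):
    """Any client computes the same answer, with no central directory."""
    return pg_to_osds(object_to_pg(obj_name))

print(locate("volume-1234"))
```

Because placement is a pure function of the object name and the cluster description, clients can read and write directly to storage nodes, which is exactly what removes the centralized-controller bottleneck the abstract mentions.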
What's Running My Containers? A review of runtimes and standards, by Phil Estes
A talk given at Open Source Leadership Summit (OSLS) on Thursday, March 14th in Half Moon Bay, CA. In this talk the current status of the Open Container Initiative (OCI) standards as well as the Kubernetes Container Runtime Interface (CRI) were presented, with a view towards how these components have provided a level playing field with significant choice when it comes to container runtimes for use in Kubernetes, as well as interoperability per the OCI standards.
A simple setup to build a private or public cloud.
A cloud at the IaaS layer is simply a cluster of hypervisors with some added storage infrastructure and software to orchestrate everything. In this presentation we show some straightforward DELL hardware that could be purchased to build a single rack as the basis for a private or public cloud. It totals $100k and, coupled with open source software (CloudStack, Ceph, GlusterFS, NFS, etc.), forms the basis for your cloud.
You will get an AWS-compatible cloud in no time and with limited acquisition cost.
Using the Cloud to Create an All-Flash Data Center, by Avere Systems
For years vendors have been trying to drive down the cost of flash so that the all-flash data center can become reality. The problem is that even the rapidly declining price of flash storage can't keep pace with the rapidly declining price of hard disk. As a result, data that does not need to be on flash storage has to be stored on something less expensive. But does that less expensive storage need to be another hard disk array, or could it be stored in the cloud?
Join Storage Switzerland's founder George Crump and Avere Systems CEO Ron Bianchini for an interactive webinar, Using the Cloud to Create an All-Flash Data Center.
Sanger, upcoming OpenStack for Bio-informaticians, by Peter Clapham
Delivery of a new bio-informatics infrastructure at the Wellcome Trust Sanger Center. We include how to programmatically create, manage, and provide provenance for images used both at Sanger and elsewhere, using open source tools and continuous integration.
Talk from the 05 June 2014 NYLUG meeting at Bloomberg NYC: a short history of where Ceph came from, an architectural overview, and the current state of the community.
Ceph, Open Source, and the Path to Ubiquity in Storage - AACS Meetup 2014, by Patrick McGarry
Everyone needs storage, but Open Source is changing how we think about storage infrastructure through new features, added durability, and reduced cost. New storage solutions like Ceph are providing distributed, flexible, powerful options that can support a myriad of use cases across object, block, and file system applications. This talk will explore the history and basics of Ceph, the current status of the community, and where the project is headed in the near future.
3. Welcome! The plan, Stan:
• Ceph in <30s
• Ceph, a little bit more
• Ceph in the wild
• Orchestration
• Community status
• What’s Next?
• Questions
4. What is Ceph? …besides wicked-awesome?
• Software (on commodity hardware): Ceph can run on any infrastructure, metal or virtualized, to provide a cheap and powerful storage cluster
• All-in-1 (object, block, and file): low overhead doesn’t mean just hardware, it means people too!
• CRUSH (awesomesauce): an infrastructure-aware placement algorithm allows you to do really cool stuff
• Scale (huge and beyond): designed for exabytes, with current implementations in the multi-petabyte range. HPC, Big Data, Cloud, raw storage
5. 5
Find out more!
Ceph.com
…but you can find out more
Use it today
Dreamhost.com/cloud/DreamObjects
Get Support
Inktank.com
That WAS fast
6. 6
OBJECTS VIRTUAL DISKS FILES & DIRECTORIES
CEPH
FILE SYSTEM
A distributed, scale-out
filesystem with POSIX
semantics that provides
storage for legacy and
modern applications
CEPH
GATEWAY
A powerful S3- and Swift-compatible gateway that
brings the power of the
Ceph Object Store to
modern applications
CEPH
BLOCK DEVICE
A distributed virtual block
device that delivers high-performance, cost-effective
storage for virtual machines
and legacy applications
CEPH OBJECT STORE
A reliable, easy to manage, next-generation distributed object
store that provides storage of unstructured data for applications
19. 19
Object && Block
Via RBD and RGW (Swift API)
Our BFF
Identity
Via Keystone
More coming!
Work continues with updates in
Havana and Icehouse.
OpenStack
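As an illustration of the Cinder side of that integration, a minimal RBD backend section might look like the following sketch. The backend name, pool name, user, and secret UUID here are placeholders chosen for the example, not values from the talk:

```ini
[DEFAULT]
enabled_backends = ceph-rbd

[ceph-rbd]
# Cinder's RBD driver stores volumes directly in a Ceph pool
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
# UUID of the libvirt secret holding the cinder user's key (placeholder)
rbd_secret_uuid = 00000000-0000-0000-0000-000000000000
```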
20. 20
Block
Alternate primary and secondary
Community maintained
Community
Wido from 42on.com
More coming in 4.2!
Snapshot & backup support
Cloning (layering) support
No NFS for system VMs
Secondary/Backup storage (s3)
CloudStack
21. 21
A blatant ripoff!
Primary Storage Flow
•The mgmt server never talks
to the Ceph cluster
•One mgmt server can
manage 1000s of hypervisors
•Mgmt server can be clustered
•Multiple Ceph clusters/pools
can be added to CloudStack
cluster
22. 22
A pretty package
A commercially
packaged OpenStack
solution backed by
Ceph.
RADOS for Archipelago
Virtual server
management
software tool on top
of Xen or KVM.
RBD backed
Complete
virtualization
management with
KVM and containers.
BBC territory
Talk next week in
Berlin
So many delicious flavors
Other Cloud
SUSE Cloud Ganeti Proxmox OpenNebula
23. 23
Since 2.6.35
Kernel clients for RBD
and CephFS. Active
development as a
Linux file system.
iSCSI ahoy!
One of the Linux iSCSI
target frameworks.
Emulates: SBC (disk),
SMC (jukebox), MMC
(CD/DVD), SSC (tape),
OSD.
Getting creative
Creative community
member used Ceph to
back their VMWare
infrastructure via
fibre channel.
You can always use more friends
Project Intersection
Kernel STGT VMWare
Love me!
Slightly out-of-date.
Some work has been
done, but could use
some love.
Wireshark
24. 24
CephFS
CephFS can serve as a
drop-in replacement
for HDFS.
Upstream
Ceph VFS module
upstream in Samba.
CephFS or RBD
Reexporting CephFS
or RBD for NFS/CIFS.
MOAR projects
Project Intersection
Hadoop Samba Ganesha
Recently Open Source
Commercially
supported product
from Citrix. Recently
Open Sourced. Still a
bit of a tech preview.
XenServer
25. 25
Support for libvirt
XenServer can manipulate Ceph!
Don’t let the naming fool you, it’s easy
Blktap{2,3,asplode}
Qemu; new boss, same as the old boss
(but not really)
What’s in a name?
Ceph :: XenServer :: Libvirt
Block device :: VDI :: storage vol
Pool :: Storage Repo :: storage pool
Doing it with Xen*
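For the libvirt piece of that mapping, an RBD-backed storage pool is defined with XML along these lines. This is a sketch: the pool name, monitor host, username, and secret UUID are placeholders, not values from the talk:

```xml
<pool type='rbd'>
  <name>ceph-rbd</name>
  <source>
    <!-- the Ceph pool backing this libvirt storage pool -->
    <name>rbd</name>
    <host name='mon1.example.com' port='6789'/>
    <auth username='libvirt' type='ceph'>
      <secret uuid='00000000-0000-0000-0000-000000000000'/>
    </auth>
  </source>
</pool>
```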
26. 26
Thanks David Scott!
XenServer host arch
Xapi, XenAPI
xenopsd S M adapters
libvirt
libxl ceph ocfs2
libxenguest libxc qemu
xen
Client
(CloudStack, OpenStack, XenDesktop)
27. 27
Come for the block
Stay for the object and file
No matter what you use!
Reduced Overhead
Easier to manage one cluster
“Other Stuff”
CephFS prototypes
fast development profile
ceph-devel
lots of partner action
Gateway Drug
28. 28
Squash Hotspots
Multiple hosts = parallel workload
But what does that mean?
Instant Clones
No time to boot for many images
Live migration
Shared storage allows you to
move instances between compute
nodes transparently.
Blocks are delicious
29. 29
Flexible APIs
Native support for swift and s3
And less filling!
Secondary Storage
Coming with 4.2
Horizontal Scaling
Easy with HAProxy or others
Objects can juggle
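A minimal HAProxy sketch of that horizontal scaling, assuming two RADOS Gateway instances behind one frontend. The addresses and ports are invented for the example:

```
frontend rgw
    bind *:80
    default_backend rgw_nodes

backend rgw_nodes
    # spread S3/Swift requests across gateway instances
    balance roundrobin
    option httpchk GET /
    server rgw1 10.0.0.11:80 check
    server rgw2 10.0.0.12:80 check
```

Because RGW is stateless in front of RADOS, adding capacity is just adding another `server` line.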
30. 30
Neat prototypes
Image distribution to hypervisors
You can dress them up, but you can’t take them anywhere
Still early
You can fix that!
Outside uses
Great way to combine resources.
Files are tricksy
32. 32
Procedural, Ruby
Written in Ruby, this
is more of the dev-side of DevOps. Once
you get past the
learning curve it’s
powerful though.
Model-driven
Aimed more at the
sysadmin, this
declarative tool has a
very wide penetration
(even on Windows!).
Agentless, whole stack
Using the built-in
OpenSSH in your OS,
this super easy tool
goes further up the
stack than most.
Fast, 0MQ
Using ZeroMQ this tool
is designed for massive
scale and fast, fast, fast.
Unfortunately 0MQ has
no built in encryption.
The new hotness
Orchestration
Chef Puppet Ansible Salt
33. 33
Canonical Unleashed
Being language
agnostic, this tool can
completely encapsulate
a service. Can also
handle provisioning all
the way down to
hardware.
Dell has skin in the game
Complete operations
platform that can dive
all the way down to
BIOS/RAID level.
Others are joining in
Custom provisioning
and
orchestration, just
one example of how
busy this corner of
the market is.
Doing it w/o a tool
If you prefer not to
use a tool, Ceph gives
you an easy way to
deploy your cluster by
hand.
MOAR HOTNESS
Orchestration Cont’d
Juju Crowbar ComodIT Ceph-deploy
40. 40
An ongoing process
While the first pass
for disaster recovery
is done, we want to
get to built-in, world-wide replication.
Storage efficiency
Currently underway
in the community!
Headed to dynamic
Can already do this in
a static pool-based
setup. Looking to get
to a use-based
migration.
Making it open-er
Been talking about it
forever. The time is
coming!
Hop on board!
The Ceph Train
Geo-Replication Erasure Coding Tiering Governance
41. 41
Quarterly Online Summit
Online summit puts
the core devs together
with the Ceph
community.
Not just for NYC
More planned,
including Santa Clara
and London. Keep an
eye out:
http://inktank.com/cephdays/
Geek-on-duty
During the week
there are times when
Ceph experts are
available to help. Stop
by oftc.net/ceph
Email makes the world go
Our mailing lists are
very active, check out
ceph.com for details
on how to join in!
Open Source is Open!
Get Involved!
CDS Ceph Day IRC Lists
42. 42
http://wiki.ceph.com/04Development/Project_Ideas
Lists, blueprints,
sideboard, paper cuts,
etc.
http://tracker.ceph.com/
All the things!
New #ceph-devel
Splitting off developer
chatter to make it
easier to filter
discussions.
http://ceph.com/resources/mailing-list-irc/
Our mailing lists are
very active, check out
ceph.com for details
on how to join in!
Patches welcome
Projects
Wiki Redmine IRC Lists
43. 43
Comments? Anything for the good of the cause?
Questions?
E-MAIL
patrick@inktank.com
WEBSITE
Ceph.com
SOCIAL
@scuttlemonkey
@ceph
Facebook.com/cephstorage
Editor's Notes
The way CRUSH is configured is somewhat unique. Instead of defining pools for different data types, workgroups, subnets, or applications, CRUSH is configured with the physical topology of your storage network. You tell it how many buildings, rooms, shelves, racks, and nodes you have, and you tell it how you want data placed. For example, you could tell CRUSH that it’s okay to have two replicas in the same building, but not on the same power circuit. You also tell it how many copies to keep.
With CRUSH, the first thing that happens is the data gets split into a certain number of sections. These are called “placement groups”. The number of placement groups is configurable. Then, the CRUSH algorithm is invoked, passing along the latest cluster map and a set of placement rules, and it determines where the placement group belongs in the cluster. This is a pseudo-random calculation, but it’s also repeatable; given the same cluster state and rule set, it will always return the same results.
Each placement group is run through CRUSH and stored in the cluster. Notice how no node has received more than one copy of a placement group, and no two nodes contain the same information? That’s important.
When it comes time to store an object in the cluster (or retrieve one), the client calculates where it belongs.
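The client-side calculation described above can be sketched in a few lines of Python. This is a toy stand-in, not the real CRUSH algorithm (it ignores topology and placement rules entirely); it just shows the key property: hashing the object name to a placement group and then deterministically ranking OSDs means any client reaches the same answer without a central lookup table. The object and OSD names are made up for the example.

```python
import hashlib

def place(obj_name, osds, pg_num=64, replicas=2):
    """Toy CRUSH-like placement: a repeatable pseudo-random mapping.

    Given the same cluster state (osds) and rules, the same object
    always maps to the same placement group and the same OSDs.
    """
    # 1. Hash the object name into one of pg_num placement groups.
    pg = int(hashlib.sha1(obj_name.encode()).hexdigest(), 16) % pg_num
    # 2. Rank OSDs by a repeatable pseudo-random score for this PG,
    #    and keep the first `replicas` of them.
    ranked = sorted(osds, key=lambda osd: hashlib.sha1(f"{pg}/{osd}".encode()).hexdigest())
    return pg, ranked[:replicas]

# Every client running this calculation independently agrees on
# where the object lives.
pg, targets = place("vm-disk-0001", ["osd.0", "osd.1", "osd.2", "osd.3"])
```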
What happens, though, when a node goes down? The OSDs are always talking to each other (and the monitors), and they know when something is amiss. The third and fifth node on the top row have noticed that the second node on the bottom row is gone, and they are also aware that they have replicas of the missing data.
The OSDs collectively use the CRUSH algorithm to determine how the cluster should look based on its new state, and move the data to where clients running CRUSH expect it to be.
Because of the way placement is calculated instead of centrally controlled, node failures are transparent to clients.
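That recovery step can be sketched with a toy deterministic placement function (an illustration only, not the real CRUSH algorithm; the OSD names are invented). When an OSD disappears, re-running the same calculation over the surviving OSDs yields the new layout, so OSDs and clients independently agree on where data now lives:

```python
import hashlib

def pg_to_osds(pg, osds, replicas=2):
    # Repeatable pseudo-random ranking of OSDs for a placement group;
    # a toy stand-in for CRUSH that ignores topology rules.
    ranked = sorted(osds, key=lambda osd: hashlib.sha1(f"{pg}/{osd}".encode()).hexdigest())
    return ranked[:replicas]

cluster = ["osd.0", "osd.1", "osd.2", "osd.3", "osd.4"]
before = {pg: pg_to_osds(pg, cluster) for pg in range(8)}

# osd.2 fails: everyone recomputes from the new cluster state and
# agrees on the new placement without central coordination.
survivors = [osd for osd in cluster if osd != "osd.2"]
after = {pg: pg_to_osds(pg, survivors) for pg in range(8)}

# Only placement groups whose replica set actually changed (those that
# had a copy on osd.2) need to migrate data.
moved = [pg for pg in before if before[pg] != after[pg]]
```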
4.2 ready (working on RBD Java bindings). QEMU and libvirt are creating images in format 1; hacky stuff to make format 2. RBD for Primary and RGW S3 for Secondary (templates, backups, ISOs).
You can have a management server that communicates with all of your agents (hypervisors). Management servers can be clustered for HA/failover or performance.
Client -> XenAPI -> Domain manager -> Xen control library -> standard Xen libraries && “upstream” QEMU. Storage plugins -> libvirt support (experimental build) -> Ceph && OCFS2.