Ceph is an open-source distributed storage system that provides object storage, block storage, and file storage in a single unified cluster. It uses a RADOS distributed object store and CRUSH algorithm to distribute data across clusters for high performance, reliability, and scalability. Ceph block storage (RBD) can provide virtual block devices that allow live migration of VMs and containers across hosts in a cluster.
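As a taste of how an application reaches that underlying RADOS object store directly, here is a minimal sketch using the librados Python binding. The pool name and config path are assumptions for illustration, not details from the summaries here.

```python
# A minimal sketch, assuming a reachable cluster, an existing pool named
# "mypool", and the standard config path.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('mypool')
    try:
        ioctx.write_full('greeting', b'hello from RADOS')  # store an object
        print(ioctx.read('greeting'))                      # b'hello from RADOS'
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```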
Ceph data services in a multi- and hybrid cloud world (Sage Weil)
IT organizations of the future (and present) are faced with managing infrastructure that spans multiple private data centers and multiple public clouds. Emerging tools and operational patterns like Kubernetes and microservices are easing the process of deploying applications across multiple environments, but the Achilles' heel of such efforts remains that most applications require large quantities of state, either in databases, object stores, or file systems. Unlike stateless microservices, state is hard to move.
Ceph is known for providing scale-out file, block, and object storage within a single data center, but it also includes a robust set of multi-cluster federation capabilities. This talk will cover how Ceph's underlying multi-site capabilities complement and enable true portability across cloud footprints--public and private--and how viewing Ceph from a multi-cloud perspective has fundamentally shifted our data services roadmap, especially for Ceph object storage.
The document discusses different ways of storing virtual machine images and volumes using Ceph storage. It describes growing from using proprietary storage hardware to using Ceph with its open source, scalable, and fault-tolerant design. It outlines how virtual machine images can be stored in Ceph using the Cinder volume driver and RADOS Block Device (RBD) for efficient cloning and live migration. Client access to volumes is provided through a Linux kernel module while container/VM access is through device drivers.
Ian Colle, the Ceph Program Manager at Inktank, gave a presentation on Ceph and how it can be used for cloud storage with OpenStack. Ceph is an open source software-defined storage system that provides object, block, and file storage in a single distributed system. It utilizes a CRUSH algorithm for data distribution, thin provisioning for efficient storage of VMs, and is integrated with OpenStack through the Cinder block storage and Swift object storage APIs. Inktank was formed to ensure the long-term success of Ceph through services, support, and helping companies adopt it.
NoSQL refers to non-relational databases that were developed to handle large volumes of data across many servers. NoSQL databases such as Redis, Memcached, MongoDB and CouchDB do not use SQL for queries and are more scalable and flexible than traditional relational databases. They often sacrifice consistency, complex queries, and transaction support in favor of availability and flexibility.
Ceph is an open-source distributed storage system that provides object storage, block storage, and file storage functionality. It uses a technique called CRUSH to automatically distribute data across clusters of commodity servers and provide fault tolerance. Ceph block storage (RBD) can be used as reliable virtual disk images for virtual machines and containers, enabling features like live migration. RBD integration is currently being improved for better performance and compatibility with virtualization platforms like Xen and OpenStack.
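To make the RBD idea concrete, here is a hedged sketch using the rbd Python binding to create a thin-provisioned image of the kind a hypervisor would attach as a virtual disk; the pool and image names are invented for illustration.

```python
# A hedged sketch: pool "rbd" and the image name are illustrative.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')
try:
    rbd.RBD().create(ioctx, 'vm-disk-0', 10 * 1024 ** 3)  # 10 GiB, thin-provisioned
    with rbd.Image(ioctx, 'vm-disk-0') as image:
        image.write(b'...boot sector...', 0)  # random-access I/O, like any block device
finally:
    ioctx.close()
    cluster.shutdown()
```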
The ability to create a global namespace, combined with high availability and volume economics, enables unique capabilities when Red Hat Storage Server is used on Amazon Web Services (AWS). In this session, you’ll learn about the capabilities and the potential of using AWS ephemeral storage, availability zones, and regions to deliver a truly distributed cloud storage solution that is geographically diverse, cost effective, and highly available.
OSDC 2015: John Spray | The Ceph Storage System (NETWAYS)
Ceph is an open source distributed object store and file system that provides excellent performance, reliability and scalability.
In this presentation, the Ceph architecture will be explained, and attendees will be introduced to the block, object, and file interfaces to Ceph.
This document summarizes Haomai Wang's presentation on containers and Ceph. Some key points include:
- Ceph can provide block, file, and object storage for containers and virtual machines. Block storage via RBD is commonly used today but file storage via CephFS may be better suited for containers.
- CephFS provides POSIX file sharing across containers and clients, with improvements in snapshotting and statistics capabilities. It inherits Ceph's scalability and resilience (see the sketch after this list).
- Orchestration tools like Kubernetes can integrate Ceph storage, either using existing volume plugins or new plugins being developed for Ceph block and file storage. This allows containers to easily share storage.
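A minimal sketch of that POSIX-style sharing through libcephfs (the "cephfs" Python binding); the path and config are assumptions, and every client or container mounting the same filesystem would see the file.

```python
# A minimal sketch, assuming the "cephfs" (libcephfs) Python binding,
# a reachable cluster, and an existing /shared directory.
import cephfs

fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
fs.mount()                                    # attach to the filesystem root
fd = fs.open('/shared/note.txt', 'w', 0o644)  # create a file all clients can see
fs.write(fd, b'visible to every container', 0)
fs.close(fd)
fs.shutdown()
```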
Introduction to Ceph, an open-source, massively scalable distributed file system.
This document explains the architecture of Ceph and integration with OpenStack.
This document discusses the future of CephFS, the distributed file system component of Ceph. It describes plans to improve dynamic subtree partitioning to balance metadata load across servers, enhance failure recovery, and scale the metadata cluster. It also covers adding features like snapshots, hard links, and journaling to improve functionality and durability. The goal is to continue testing, fixing bugs, adding integrations, and developing remaining desired features.
Keeping OpenStack storage trendy with Ceph and containers (Sage Weil)
The conventional approach to deploying applications on OpenStack uses virtual machines (usually KVM) backed by block devices (usually Ceph RBD). As interest increases in container-based application deployment models like Docker, it is worth looking at what alternatives exist for combining compute and storage (both shared and non-shared). Mapping RBD block devices directly to host kernels trades isolation for performance and may be appropriate for many private clouds without significant changes to the infrastructure. More importantly, moving away from virtualization allows for non-block interfaces and a range of alternative models based on file or object.
Attendees will leave this talk with a basic understanding of the storage components and services available to both virtual machines and Linux containers; a view of several ways they can be combined and the performance, reliability, and security trade-offs associated with those possibilities; and several proposals for how the relevant OpenStack projects (Nova, Cinder, Manila) can work together to make it easy.
Ceph began as a research project in 2005 to create a scalable object storage system. It was incubated at DreamHost from 2007-2012 and spun out as an independent company called Inktank in 2012. Key developments included the RADOS distributed storage cluster, erasure coding, and the Ceph filesystem. The project has grown a large community and is used in many production deployments, focusing on areas like tiering, erasure coding, replication, and integrating with the Linux kernel. Future plans include improving CephFS, expanding the ecosystem through different storage backends, strengthening governance, and targeting new use cases in big data and the enterprise.
The State of Ceph, Manila, and Containers in OpenStack (Sage Weil)
OpenStack users deploying Ceph for block (Cinder) and object (S3/Swift) are unsurprisingly looking at Manila and CephFS to round out a unified storage solution. Ceph is based on a low-level object storage layer called RADOS that serves as the foundation for its object, block, and file services. Manila's file-as-a-service in OpenStack enables a range of container-based use cases with Docker and Kubernetes, but a variety of deployment architectures are possible.
This talk will cover the current state of CephFS support in Manila, including upstream Manila support and Manila works in progress; a progress update on CephFS itself, including new multi-tenancy support to facilitate cloud deployments; and a discussion of how this impacts container deployment scenarios in an OpenStack cloud.
Ceph, being a distributed storage system, is highly reliant on the network for resiliency and performance. In addition, it is crucial that the network topology beneath a Ceph cluster be designed in such a way to facilitate easy scaling without service disruption. After an introduction to Ceph itself this talk will dive into the design of Ceph client and cluster network topologies.
2015 Open Storage Workshop: Ceph Software-Defined Storage (Andrew Underwood)
The document provides an overview of Ceph software-defined storage. It begins with an agenda for an Open Storage Workshop and discusses how the storage market is changing and the limitations of current storage technologies. It then introduces Ceph, describing its architecture including RADOS, CephFS, RBD and RGW. Key benefits of Ceph are scalability, low cost, resilience and extensibility. The document concludes with a case study of Australian research universities using Ceph with OpenStack and next steps to building a scalable storage solution.
GlusterFS is a clustered file system that aggregates multiple storage servers over InfiniBand RDMA into a large parallel network file system. It was designed for scalable capacity and I/O throughput beyond petabytes, reliability through its non-stop storage architecture with no single point of failure, and ease of manageability through self-healing capabilities. Benchmark tests using 16 storage servers and 64 clients achieved a peak aggregated read throughput of 13GBps over InfiniBand verbs transport.
Ceph, Now and Later: Our Plan for Open Unified Cloud Storage (Sage Weil)
Ceph is a highly scalable open source distributed storage system that provides object, block, and file interfaces on a single platform. Although Ceph RBD block storage has dominated OpenStack deployments for several years, maturing object (S3, Swift, and librados) interfaces and stable CephFS (file) interfaces now make Ceph the only fully open source unified storage platform.
This talk will cover Ceph's architectural vision and project mission and how our approach differs from alternative approaches to storage in the OpenStack ecosystem. In particular, we will look at how our open development model dovetails well with OpenStack, how major contributors are advancing Ceph capabilities and performance at a rapid pace to adapt to new hardware types and deployment models, and what major features we are prioritizing for the next few years to meet the needs of expanding cloud workloads.
This document provides an overview of how Bloomberg uses Ceph and OpenStack in its cloud infrastructure. Some key points:
- Bloomberg uses Ceph for object storage with RGW (see the S3 sketch after this list) and block storage with RBD. It uses OpenStack for compute functions.
- Initially Bloomberg had a fully converged architecture with Ceph and OpenStack on the same nodes, but this caused performance issues.
- Bloomberg now uses a semi-converged "POD" architecture with dedicated Ceph and OpenStack nodes in separate clusters for better scalability and performance.
- Ephemeral storage provides faster performance than Ceph but lacks data integrity protections. Ceph offers replication and reliability at the cost of some latency.
- Automation with Chef
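The RGW object interface in the first bullet speaks the S3 wire protocol, so a stock S3 client works against it. A hedged boto3 sketch follows; the endpoint and credentials are placeholders, not Bloomberg's actual setup.

```python
# A hedged sketch of talking to a RADOS Gateway via its S3-compatible API.
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:7480',  # assumed RGW endpoint
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)
s3.create_bucket(Bucket='demo-bucket')
s3.put_object(Bucket='demo-bucket', Key='reports/2016-01.csv', Body=b'a,b,c\n')
listing = s3.list_objects_v2(Bucket='demo-bucket')
print([obj['Key'] for obj in listing.get('Contents', [])])
```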
HKG15-401: Ceph and Software Defined Storage on ARM servers (Linaro)
HKG15-401: Ceph and Software Defined Storage on ARM servers
---------------------------------------------------
Speakers: Yazen Ghannam, Steve Capper
Date: February 12, 2015
---------------------------------------------------
★ Session Summary ★
Running Ceph in colocation, and ongoing optimizations
--------------------------------------------------
★ Resources ★
Pathable: https://hkg15.pathable.com/meetings/250828
Video: https://www.youtube.com/watch?v=RdZojLL7ttk
Etherpad: http://pad.linaro.org/p/hkg15-401
---------------------------------------------------
★ Event Details ★
Linaro Connect Hong Kong 2015 - #HKG15
February 9-13th, 2015
Regal Airport Hotel Hong Kong Airport
---------------------------------------------------
http://www.linaro.org
http://connect.linaro.org
The container revolution, and what it means to operators (Robert Starmer)
The document discusses the rise of containers as a DevOps technology that accelerates the development process. It provides a brief history of containers, explaining how Docker simplified their use. Containers allow for faster development cycles than VMs by providing process-level segregation. While containers abstract operations, container management platforms are still needed to provide scaling, scheduling, security and other operational functions. The document also discusses how OpenStack can manage containers running on VMs, bare metal or directly, and how containers are increasingly being used to deploy OpenStack services themselves.
The document discusses Riak, a decentralized key-value store that is well-suited for web applications, describing how it stores and retrieves data across a distributed cluster of nodes in a way that provides high availability and resolves conflicts. Riak stores data in buckets and keys, replicates data across multiple nodes for availability, uses consistent hashing to determine node ownership of data, and exposes conflicts as sibling objects to be resolved in application logic.
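To make the consistent-hashing idea concrete, here is a toy ring in Python; it conveys the placement idea from the summary above, not Riak's actual implementation.

```python
# A toy ring: each node claims many points ("virtual nodes"); an object's
# replicas live on the first n distinct nodes clockwise from its hash.
import bisect
import hashlib

def _h(s: str) -> int:
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes=64):
        self._points = sorted((_h(f'{n}#{i}'), n)
                              for n in nodes for i in range(vnodes))
        self._hashes = [h for h, _ in self._points]

    def preference_list(self, bucket, key, n=3):
        idx = bisect.bisect(self._hashes, _h(f'{bucket}/{key}'))
        owners = []
        for step in range(len(self._points)):
            node = self._points[(idx + step) % len(self._points)][1]
            if node not in owners:
                owners.append(node)
            if len(owners) == n:
                break
        return owners

ring = Ring(['riak1', 'riak2', 'riak3', 'riak4'])
print(ring.preference_list('users', 'alice'))  # three distinct replica holders
```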
NoSQL overview presentation with details on Riak and CouchDB.
Presented at Qbranch CODE Night 2010-04-15.
Thanks to @frli01 for arranging and @xlson for invitation.
New Ceph capabilities and Reference Architectures (Kamesh Pemmaraju)
Have you heard about Inktank Ceph and are you interested in learning some tips and tricks for getting started with Ceph quickly and efficiently? Then this is the session for you!
In this two-part session you will learn details of:
• the very latest enhancements and capabilities delivered in Inktank Ceph Enterprise such as a new erasure coded storage back-end, support for tiering, and the introduction of user quotas.
• best practices, lessons learned and architecture considerations founded in real customer deployments of Dell and Inktank Ceph solutions that will help accelerate your Ceph deployment.
This document discusses Spreadshirt's implementation of Ceph object storage to store and serve user-generated content globally at scale. It provides an overview of Ceph and its architecture, describing how Spreadshirt set up Ceph across multiple data centers for high performance and availability. Benchmark results show the initial Ceph implementation outperforming Amazon S3 on response times and throughput, though the comparison was admittedly unfair, pitting a local deployment against a geo-distributed one. The presentation concludes by mentioning plans for bucket replication between regions to provide truly global availability.
1) Ceph is an open source distributed storage system that can provide scalable and highly available storage for OpenStack's Glance, Cinder, and Swift components.
2) Ceph integrates well with OpenStack through authentication via keyrings and access to Ceph's RADOS Block Device (RBD) and object storage gateway (RGW) from Glance, Cinder, and Swift.
3) Using Ceph as the unified backend storage for OpenStack addresses OpenStack's needs for scalability, high availability, and avoiding vendor lock-in.
Using Ceph in a Private Cloud - Ceph Day Frankfurt (Ceph Community)
This document summarizes how to set up a Ceph cluster for private cloud storage using SUSE Cloud. It describes configuring over 10 storage nodes and 3 monitor nodes for the Ceph cluster. It explains integrating the external Ceph cluster with SUSE Cloud to provide block storage, image storage, and object storage services. It also covers setting up Ceph directly with SUSE Cloud using Crowbar to deploy all nodes.
ION Tokyo slides for "The Business Case for Implementing DNSSEC" by Dan York (Internet Society).
DNSSEC helps prevent attackers from subverting and modifying DNS messages and sending users to wrong (and potentially malicious) sites. So what needs to be done for DNSSEC to be deployed on a large scale? We’ll discuss the business reasons for, and financial implications of, deploying DNSSEC, from staying ahead of the technological curve, to staying ahead of your competition, to keeping your customers satisfied and secure on the Internet. We’ll also examine some of the challenges operators have faced and the opportunities to address those challenges and move deployment forward.
JMP206: Calling Home: Enabling the IBM Sametime Softphone in ST9 (Keith Brooks)
The session Jeremy Sanders and I presented today at the IBM Connect 2014 event in Orlando.
Need my help? Contact Keith Brooks via one of the following ways:
Blog http://blog.vanessabrooks.com
Twitter http://twitter.com/lotusevangelist
http://about.me/keithbrooks
For more information on ThinkRite, http://www.thinkrite.com
Logs are one of the most important pieces of analytical data in a cloud-based service infrastructure. At any point in time, service owners and operators need to understand the status of each infrastructure component for fault monitoring, to assess feature usage, and to monitor business processes. Application developers, as well as security personnel, need access to historic information for debugging and forensic investigations.
This paper discusses a logging framework and guidelines that provide a proactive approach to logging to ensure that the data needed for forensic investigations has been generated and collected. The standardized framework eliminates the need for logging stakeholders to reinvent their own standards. These guidelines make sure that critical information associated with cloud infrastructure and software as a service (SaaS) use cases is collected as part of a defense in depth strategy. In addition, they ensure that log consumers can effectively and easily analyze, process, and correlate the emitted log records. The theoretical foundations are emphasized in the second part of the paper, which covers the implementation of the framework in an example SaaS offering running on a public cloud service.
While the framework is targeted towards and requires buy-in from application developers, the data collected is critical to enable comprehensive forensic investigations. In addition, it helps IT architects and technical evaluators of logging architectures build a business-oriented logging framework.
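To illustrate the kind of standardized, machine-parseable record such a framework emits, here is a minimal Python sketch; the field names are illustrative, not the paper's actual schema.

```python
# A minimal sketch of a structured, correlatable log record.
import json
import logging
import time
import uuid

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            'ts': time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime(record.created)),
            'severity': record.levelname,
            'component': record.name,
            'event_id': str(uuid.uuid4()),             # lets consumers correlate records
            'tenant': getattr(record, 'tenant', None), # SaaS use cases need tenancy context
            'message': record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger('billing.api')
log.addHandler(handler)
log.setLevel(logging.INFO)
log.warning('quota exceeded', extra={'tenant': 'acme-corp'})
```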
The slides from our first webinar on getting started with Ceph. You can watch the full webinar on demand from http://www.inktank.com/news-events/webinars/. Enjoy!
This document summarizes Dan van der Ster's experience scaling Ceph at CERN. CERN uses Ceph as the backend storage for OpenStack volumes and images, with plans to also use it for physics data archival and analysis. The 3PB Ceph cluster consists of 47 disk servers and 1,128 OSDs. Some lessons learned include managing latency, handling many objects, tuning CRUSH, trusting clients, and avoiding human errors when managing such a large cluster.
Trying and evaluating the new features of GlusterFS 3.5 (Keisuke Takahashi)
My presentation in LinuxCon/CloudOpen Japan 2014.
It has been only a few days since GlusterFS 3.5 was released, so feel free to correct me if you find any mistakes or misunderstandings. Thanks.
The TLS/SSLv3 renegotiation vulnerability explained (Thierry Zoller)
The whitepaper explains the SSLv3/TLS renegotiation vulnerability (CVE-2009-3555) to a broader audience and goes into detail about the different attack vectors possible. The whitepaper includes original research, such as transparently downgrading the client to a clear-text protocol.
The paper is referenced by the US-CERT, FINCERT, RUS-CERT, Qualys, Nessus as the main reference for this vulnerability.
DNSSEC and DANE – E-Mail security reloaded (Men and Mice)
This document discusses Domain Name System Security Extensions (DNSSEC) and the DNS-based Authentication of Named Entities (DANE) protocol. It describes how DANE uses DNSSEC and Transport Layer Security (TLS) records (TLSA) to authenticate TLS certificates and secure email transport (SMTP) without relying on certificate authorities. Benefits include preventing man-in-the-middle attacks and securely validating certificates through DNS rather than certificate revocation lists. The document provides instructions and examples for configuring mail servers, DNS servers, and validating TLSA records to implement DANE for email.
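For a concrete sense of the TLSA lookup DANE relies on, here is a small dnspython sketch; the domain is a placeholder, and a real deployment must additionally confirm the answer is DNSSEC-validated by a validating resolver.

```python
# A hedged sketch of fetching a TLSA record with dnspython.
import dns.resolver

# TLSA records live at _<port>._<proto>.<host>; port 25 here means SMTP.
answers = dns.resolver.resolve('_25._tcp.mail.example.com', 'TLSA')
for rr in answers:
    # usage/selector/matching-type say how to compare `cert` with the
    # certificate the server presents during the TLS handshake.
    print(rr.usage, rr.selector, rr.mtype, rr.cert.hex())
```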
This presentation digests and analyzes OAuth versions 1.0 and 2.0.
Doing a deep dive into OAuth, I found myself forced into a puzzle with many pieces. This was worthwhile, as OAuth is quite cool. For those interested in a quick start, check this slide deck - I hope it saves others from puzzling.
DANE: The Future of Transport Layer Security (TLS)
Dan York (Internet Society)
If you connect to a “secure” server using TLS/SSL (such as a web server, email server or xmpp server), how do you know you are using the correct certificate? With DNSSEC now being deployed, a new protocol has emerged called “DANE” (“DNS-Based Authentication of Named Entities“), which allows you to securely specify exactly which TLS/SSL certificate an application should use to connect to your site. DANE has great potential to make the Internet much more secure by marrying the strong integrity protection of DNSSEC with the confidentiality of SSL/TLS certificates. In this session, we will explain how DANE works and how you can use it to secure your websites, email, XMPP, VoIP, and other web services.
This document summarizes Martin Schütte's work on improving NetBSD's syslogd daemon to support IETF syslog protocols. The project aimed to add TLS network transport, a new message format, and digital signatures. It discusses the BSD syslog system, problems with the UDP transport, and IETF working group drafts addressing syslog protocols, transports, and signed messages. Implementation details are provided for TLS support, message buffering, and the new syslog message format as specified in the IETF draft.
- Ceph provides flexible, scalable, and open source storage capabilities through its RADOS distributed object store and various services like RADOS Gateway (RGW), RADOS Block Device (RBD), and CephFS.
- RBD allows for efficient provisioning of block storage volumes for virtual machines (VMs) and containers using copy-on-write cloning of base images stored in Glance (see the sketch after this list). This enables live migration and thousands of instant VMs.
- Cinder leverages RBD to provide persistent block storage volumes to Nova that are decoupled from compute hosts, allowing migration without data movement for high performance and efficiency at scale.
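A hedged sketch of the snapshot/protect/clone sequence that this copy-on-write provisioning rests on, using the rbd Python binding; pool and image names are illustrative, and the base image is assumed to be format 2 with layering enabled.

```python
# A hedged sketch of RBD copy-on-write cloning.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
images = cluster.open_ioctx('images')    # e.g. the Glance pool
volumes = cluster.open_ioctx('volumes')  # e.g. the Cinder pool
try:
    with rbd.Image(images, 'ubuntu-base') as base:
        base.create_snap('golden')
        base.protect_snap('golden')      # clones require a protected snapshot
    # The clone shares unmodified blocks with its parent: near-instant and thin.
    rbd.RBD().clone(images, 'ubuntu-base', 'golden', volumes, 'vm-42-root')
finally:
    volumes.close()
    images.close()
    cluster.shutdown()
```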
This document discusses new features and upcoming improvements for Ceph. Some key points:
- The Bobtail release improved OSD threading, recovery quality of service, block device cloning, and Keystone integration.
- Upcoming releases will add incremental backups for block devices, on-disk encryption, REST APIs, and continued performance optimizations.
- Ceph is an open source distributed storage system that provides object storage, block storage, and file storage in a single unified system. It is scalable, flexible, and low cost.
Future of CephFS discusses improvements to Ceph's distributed file system (CephFS) including dynamic subtree partitioning to balance metadata load, replication of hot metadata, and migration of metadata between servers. It describes how CephFS stores metadata in RADOS objects to improve locality and reduce seeks compared to legacy systems. Snapshots can be taken of subdirectories without special tools. Multiple client implementations include kernel, FUSE, and libraries. The path forward includes more testing, automation, and integration with other projects.
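The "snapshots without special tools" point can be shown directly: on a mounted CephFS, taking a snapshot is just a mkdir inside the magic .snap directory. A small sketch, with the mount point as an assumption:

```python
# A small sketch of CephFS subdirectory snapshots via the .snap directory.
import os

project = '/mnt/cephfs/projects/alpha'
os.mkdir(os.path.join(project, '.snap', 'before-migration'))  # take the snapshot

# The snapshot is browsable read-only under .snap/; rmdir deletes it.
print(os.listdir(os.path.join(project, '.snap')))
```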
Ceph is an open-source distributed storage system that provides object storage, block storage, and file storage capabilities. It scales from terabytes to exabytes across thousands of commodity servers and provides self-healing and fault tolerance without any single point of failure. Ceph's unified storage is based on a distributed object store and allows for flexible data access via its object, block, and file interfaces.
Ceph is a unified distributed storage system that provides object storage, block storage, and file storage functionality. It is designed to scale from tens to tens of thousands of servers storing terabytes to exabytes of data. Ceph distributes data across its cluster and replicates it for fault tolerance without a single point of failure using low-cost commodity hardware and an autonomous, self-healing architecture.
The Future of Cloud Software Defined Storage with Ceph: Andrew Hatfield, Red Hat (OpenStack)
Audience: Intermediate
About: Learn how cloud storage differs to traditional storage systems and how that delivers revolutionary benefits.
Starting with an overview of how Ceph integrates tightly into OpenStack, you’ll see why 62% of OpenStack users choose Ceph. We’ll then take a peek into the very near future to see how rapidly Ceph is advancing and how you’ll be able to achieve all your childhood hopes and dreams in ways you never thought possible.
Speaker Bio: Andrew Hatfield – Practice Lead–Cloud Storage and Big Data, Red Hat
Andrew has over 20 years experience in the IT industry across APAC, specialising in Databases, Directory Systems, Groupware, Virtualisation and Storage for Enterprise and Government organisations. When not helping customers slash costs and increase agility by moving to the software-defined storage future, he’s enjoying the subtle tones of Islay Whisky and shredding pow pow on the world’s best snowboard resorts.
OpenStack Australia Day - Sydney 2016
https://events.aptira.com/openstack-australia-day-sydney-2016/
The document discusses cache tiering and erasure coding in Ceph. It provides an overview of Ceph components and architecture, including RADOS, LIBRADOS, RBD, RGW and CephFS. It then covers data placement using CRUSH, cache tiering approaches using multiple RADOS pools, and erasure coding which stores data and coding chunks across OSDs for fault tolerance with less overhead than replication. Read and write operations for cache tiering and erasure coded pools are described.
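As a toy illustration of the data-plus-coding-chunks idea, here is an XOR-parity sketch with k=2 data chunks and m=1 coding chunk; Ceph's erasure-coded pools use real codes via pluggable backends (e.g. Reed-Solomon), not this simplification.

```python
# A toy sketch only: k=2 data chunks + m=1 XOR parity chunk.
def encode(data: bytes):
    half = (len(data) + 1) // 2
    d1 = data[:half]
    d2 = data[half:].ljust(half, b'\0')            # pad so the halves align
    parity = bytes(a ^ b for a, b in zip(d1, d2))
    return d1, d2, parity                          # three chunks, three OSDs

def recover(surviving: bytes, parity: bytes) -> bytes:
    # any one lost chunk is rebuilt by XOR-ing the other two
    return bytes(a ^ b for a, b in zip(surviving, parity))

d1, d2, p = encode(b'object-payload!')
assert recover(d1, p) == d2   # 1.5x storage overhead vs. 3x for replication
```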
This document introduces the Ceph distributed storage system. Ceph provides object storage, block storage, and a distributed file system. It uses a CRUSH algorithm to distribute data across nodes and provides replication for fault tolerance. Ceph is open source and can scale to large capacities by running on commodity hardware.
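The following is not CRUSH itself, but a tiny weighted rendezvous-hashing sketch that conveys the same key property: any client can compute an object's placement from the cluster map alone, with no central lookup table.

```python
# Not CRUSH: a weighted rendezvous-hash stand-in for deterministic placement.
import hashlib
import math

def _r(obj: str, osd: str) -> float:
    h = int(hashlib.md5(f'{obj}:{osd}'.encode()).hexdigest(), 16)
    return max(h / 2.0 ** 128, 1e-12)              # uniform in (0, 1)

def place(obj: str, osds: dict, replicas: int = 3):
    """osds maps OSD name -> weight (e.g. disk capacity)."""
    score = lambda o: -osds[o] / math.log(_r(obj, o))
    return sorted(osds, key=score, reverse=True)[:replicas]

cluster = {'osd.0': 1.0, 'osd.1': 1.0, 'osd.2': 2.0, 'osd.3': 1.0}
print(place('rbd_data.1f2e.0001', cluster))        # same answer on every client
```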
The document discusses using Riak, an open source NoSQL database, in the cloud to provide highly available and fault-tolerant storage for applications. It describes how the authors' application struggled with availability when hosted on AWS, and how their research led them to choose Riak as a solution. Riak provides features like consistent hashing, hinted handoff, and active anti-entropy that allow it to scale linearly and remain available even during failures or when nodes are added or removed. The document provides guidance on deploying a Riak cluster in the cloud, including choosing instance types, sizing the cluster, disabling swap, using the right filesystem and scheduler, and monitoring and scaling the cluster over time.
The document discusses the Ceph distributed storage system. It provides an overview of what Ceph is, how it works, and its key features. Ceph is an open-source unified storage system that provides object storage, block storage, and a file system. It uses a distributed system of monitors, storage nodes, and metadata servers to store and retrieve data reliably across multiple machines.
Presentation held at GRNET Digital Technology Symposium on November 5-6, 2018 at the Stavros Niarchos Foundation Cultural Center, Athens, Greece.
• Introduction to Ceph and its internals
• Presentation of GRNET's Ceph deployments (technical specs, operations)
• Usecases: ESA Copernicus, ~okeanos, ViMa
DRBD is an open source technology for replicating block-based data. It can be used to build high-availability clusters, software-defined storage for IaaS clouds, and long-distance replication (disaster recovery).
Ceph Day Santa Clara: The Future of CephFS + Developing with Librados (Ceph Community)
This document discusses the future of CephFS, the distributed file system component of Ceph. It describes plans to improve dynamic subtree partitioning to balance metadata load across servers, enhance failure recovery, and scale the metadata cluster. It also covers improving the client protocol, adding snapshot and recursive accounting capabilities, and supporting multiple client implementations like the Linux kernel client and Ceph fuse. The goal is to test these enhancements and continue expanding CephFS integrations and features.
This document discusses the features and capabilities of DRBD (Distributed Replicated Block Device) and drbdmanage. DRBD provides synchronous replication of block devices and volumes within a Linux cluster. It supports multiple volumes per resource and integration with Pacemaker. drbdmanage is a new tool that automates DRBD configuration and management. It replicates volumes across nodes, handles rebalancing when nodes are added or removed, and can integrate with OpenStack for provisioning of highly available volumes.
This document summarizes a distributed storage system called Ceph. Ceph uses an architecture with four main components - RADOS for reliable storage, Librados client libraries, RBD for block storage, and CephFS for file storage. It distributes data across intelligent storage nodes using the CRUSH algorithm and maintains reliability through replication and erasure coding of placement groups across the nodes. The monitors manage the cluster map and placement, while OSDs on each node store and manage the data and metadata.
What CloudStackers Need To Know About LINSTOR/DRBD (ShapeBlue)
Philipp explains the best performing Open Source software-defined storage software available to Apache CloudStack today. It consists of two well-concerted components. LINSTOR and DRBD. Each of them also has its independent use cases, where it is deployed alone. In this presentation, the combination of these two is examined. They form the control plane and the data plane of the SDS. We will touch on: Performance, scalability, hyper-convergence (data-locality for high IO performance), resiliency through data replication (synchronous within a site, 2-way, 3-way, or more), snapshots, backup (to S3), encryption at rest, deduplication, compression, placement policies (regarding failure domains), management CLI and webGUI, monitoring interface, self-healing (restoring redundancy after device/node failure), the federation of multiple sites (async mirroring and repeatedly snapshot difference shipping), QoS control (noisy neighbors limitation) and of course: complete integration with CloudStack for KVM guests. It is Open Source software following the Unix philosophy. Each component solves one task, made for maximal re-usability. The solution leverages the Linux kernel, LVM and/or ZFS, and many Open Source software libraries. Building on these giant Open Source foundations, not only saves LINBIT from re-inventing the wheels, it also empowers your day 2 operation teams since they are already familiar with these technologies.
Philipp Reisner is one of the founders and CEO of LINBIT in Vienna/Austria. He holds a Dipl.-Ing. (comparable to MSc) degree in computer science from Technical University in Vienna. His professional career has been dominated by developing DRBD, a storage replication software for Linux. While in the early years (2001) this was writing kernel code, today he leads a company of 30 employees with locations in Austria and the USA. LINBIT is an Open Source company offering enterprise-level support subscriptions for its Open Source technologies.
-----------------------------------------
CloudStack Collaboration Conference 2022 took place on 14th-16th November in Sofia, Bulgaria and virtually. The event was a hybrid get-together of the global CloudStack community, hosting 370 attendees. It featured 43 sessions from leading CloudStack experts, users and skilful engineers from the open-source world, including technical talks, user stories, and presentations on new features and integrations.
OpenNebulaConf 2016 - The DRBD SDS for OpenNebula by Philipp Reisner, LINBIT (OpenNebula Project)
You will learn what DRBD is and where it came from in its 15 years of existence, how it evolved into a software-defined storage solution interesting for users of OpenNebula, and why it is very well suited for hyperconverged deployment architectures. The presentation will contain I/O performance results and (if time permits) a live demo.
MapR is an amazing new distributed filesystem modeled after Hadoop. It maintains API compatibility with Hadoop, but far exceeds it in performance, manageability, and more.
/* Ted's MapR meeting slides incorporated here */
1. DreamObjects
Cloud Object Storage
Powered by Ceph
2. This slide is all about me, me, me.
Ross Turk
Community Manager, Ceph
VP Community, Inktank
ross@inktank.com | @rossturk
inktank.com | ceph.com
3. DreamHost
Imagine the Web, Your Way
15 years creating and deploying services
Over 340,000 entrepreneur and developer customers
Open source obsessed
• Hosting over 500,000 WordPress sites
• Contributing Ceph, Ceilometer, Akanda
• OpenStack innovator & contributor
4. Selection Criteria
Must be able to deploy at large scale
• Cope with large objects, or large numbers of small objects
• Transparently handle continuous component failures
Must be managed in a cost-effective way
• Run on commodity hardware and free open source software
• Automatically handle failures, new hardware, decommissioning
Must be brought to market quickly
• Mature enough to be quickly productized
Must enable hybrid deployments
• Customers should be able to use in hybrid private/public setup
• Customers should have the freedom to build it themselves
5. What Is DreamObjects?
It’s where data hangs its hat in the cloud.
Web App Storage | Backups | Digital Media
Freedom: No vendor lock-in
• Powered by Ceph, an open source, portable storage platform
Flexibility: It’s compatible
• Access DreamObjects with either Amazon S3 API or Swift API
Priced Right: Unique Pricing Model
• $0.07/GB, inbound data transfer is free, unlimited API requests
6. The Ceph stack (architecture diagram: APP / APP / HOST-VM / CLIENT across the top):
LIBRADOS: a library allowing apps to directly access RADOS, with support for C, C++, Java, Python, Ruby, and PHP
RADOSGW: a bucket-based REST gateway, compatible with S3 and Swift
RBD: a reliable and fully-distributed block device, with a Linux kernel client and a QEMU/KVM driver
CEPH FS: a POSIX-compliant distributed file system, with a Linux kernel client and support for FUSE
RADOS: a reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes
8. Deployment diagram: load balancers in front of radosgw gateways; ceph-mon monitors (M); ceph-osd storage nodes in groups of 12, at 36TB per machine.
10. Capacity math:
3TB per OSD x 12 OSDs per node = 36TB per node
36TB per node x 90 nodes = ~3PB total capacity
~3PB total capacity / 3 replicas per object = ~1PB usable capacity
11. Deployment
DreamHost deploys Ceph with Opscode Chef
• Reduce operations overhead
• Maintain efficiency to keep costs down
• Provide consistency
• Always deploy / manage resources the same way
Ceph has cookbooks:
https://github.com/ceph/ceph-cookbooks
16. Questions?
Ross Turk
Community Manager, Ceph
VP Community, Inktank
ross@inktank.com | @rossturk
inktank.com | ceph.com
Read about DreamObjects:
http://inktank.com/dhcs