Ceph is an open source project that provides software-defined, unified storage solutions. Ceph is a distributed storage system that is massively scalable and high-performing, with no single point of failure. From its roots, it has been designed to be highly scalable, up to the exabyte level and beyond, while running on general-purpose commodity hardware.
BlueStore, A New Storage Backend for Ceph, One Year In - Sage Weil
BlueStore is a new storage backend for Ceph OSDs that consumes block devices directly, bypassing the local XFS file system that is currently used. Its design is motivated by everything we've learned about OSD workloads and interface requirements over the last decade, and everything that has worked well and not so well when storing objects as files in local file systems like XFS, btrfs, or ext4. BlueStore has been under development for a bit more than a year now, and has reached a state where it is becoming usable in production. This talk will cover the BlueStore design, how it has evolved over the last year, and what challenges remain before it can become the new default storage backend.
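As a rough operational illustration of what "consuming a block device directly" looks like: the sketch below uses the ceph-volume tool (a later-era tool than this talk, and the device path is an assumption) to provision an OSD whose data device is managed by BlueStore, with no XFS in the data path.

    import subprocess

    # Sketch only: create a BlueStore OSD on a raw device.
    # ceph-volume and /dev/sdb are assumptions, not details from the talk itself.
    subprocess.run(
        ["ceph-volume", "lvm", "create", "--bluestore", "--data", "/dev/sdb"],
        check=True,
    )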
Ceph is an open source, software-defined storage solution and, I would say, the best storage backend for cloud storage. Ceph is the future of storage. In this presentation I explain Ceph and OpenStack briefly; you will definitely enjoy it.
Storage tiering and erasure coding in Ceph (SCaLE13x) - Sage Weil
Ceph is designed around the assumption that all components of the system (disks, hosts, networks) can fail, and has traditionally leveraged replication to provide data durability and reliability. The CRUSH placement algorithm is used to allow failure domains to be defined across hosts, racks, rows, or datacenters, depending on the deployment scale and requirements.
Recent releases have added support for erasure coding, which can provide much higher data durability and lower storage overheads. However, in practice erasure codes have different performance characteristics than traditional replication and, under some workloads, come at some expense. At the same time, we have introduced a storage tiering infrastructure and cache pools that allow alternate hardware backends (like high-end flash) to be leveraged for active data sets while cold data are transparently migrated to slower backends. The combination of these two features enables a surprisingly broad range of new applications and deployment configurations.
This talk will cover a few Ceph fundamentals, discuss the new tiering and erasure coding features, and then discuss a variety of ways that the new capabilities can be leveraged.
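To make the combination concrete, here is a hedged sketch (Python wrapping the stock ceph CLI; the pool names, placement-group counts, and k/m values are assumptions, not figures from the talk) that creates an erasure-coded base pool with host-level failure domains and fronts it with a replicated write-back cache tier:

    import subprocess

    def ceph(*args):
        # Thin wrapper around the ceph CLI; raises if a command fails.
        subprocess.run(["ceph", *args], check=True)

    # Erasure-code profile: 4 data chunks + 2 coding chunks, host as the failure domain.
    ceph("osd", "erasure-code-profile", "set", "ec42",
         "k=4", "m=2", "crush-failure-domain=host")

    # Erasure-coded base pool and a (replicated) cache pool.
    ceph("osd", "pool", "create", "ecpool", "128", "128", "erasure", "ec42")
    ceph("osd", "pool", "create", "cachepool", "128")

    # Put the cache pool in front of the erasure-coded pool as a write-back tier.
    ceph("osd", "tier", "add", "ecpool", "cachepool")
    ceph("osd", "tier", "cache-mode", "cachepool", "writeback")
    ceph("osd", "tier", "set-overlay", "ecpool", "cachepool")

With the overlay in place, clients keep addressing the base pool by name; the cache tier absorbs the active working set while cold objects are flushed down to the erasure-coded pool.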
CRUSH is the powerful, highly configurable algorithm Red Hat Ceph Storage uses to determine how data is stored across the many servers in a cluster. A healthy Red Hat Ceph Storage deployment depends on a properly configured CRUSH map. In this session, we will review the Red Hat Ceph Storage architecture and explain the purpose of CRUSH. Using example CRUSH maps, we will show you what works and what does not, and explain why.
Presented at Red Hat Summit 2016-06-29.
[Open Infrastructure & Cloud Native Days Korea 2019]
We share case studies of building customer-facing services using the community versions of OpenStack and Ceph. We introduce a flexible enterprise cloud service build-out, as well as the construction and operation of an exchange service that demands a high level of security. We also cover the technology stack used in these projects, troubleshooting cases, and optimization approaches. When it comes to OpenStack, Open Source Consulting is the answer.
#openstack #ceph #openinfraday #cloudnative #opensourceconsulting
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019 - Sean Cohen
Starting from the basics, we explore the advantages of using Rook as a Storage operator to serve Ceph storage, the leading Software-Defined Storage platform in the Open Source world. Ceph automates the internal storage management, while Rook automates the user-facing operations and effectively turns a storage technology into a service transparent to the user. The combination delivers an impressive improvement in UX and provides the ideal storage platform for Kubernetes.
A comprehensive examination of use cases and open problems will complement our review of the Rook architecture. We will deep-dive into what Rook does well, what it does not do (yet), and what trade-offs using a storage operator involves operationally. With live access to a running cluster, we will showcase Rook in action as we discuss its capabilities.
https://www.openstack.org/summit/denver-2019/summit-schedule/events/23515/storage-101-rook-and-ceph
This was a tutorial that Mark McClain and I led at ONUG, Spring 2015. It was well received and serves as a walkthrough of OpenStack Neutron and its features and usage.
Presentation held during SFScon15 - Free Software Conference, 13.11.2015 @ TIS innovation park, Bolzano
--
Proxmox VE is a complete open source server virtualization management solution based on Debian. It supports both Linux containers (LXC) and KVM virtual machines and makes them available under an integrated web-based management GUI. With Proxmox VE companies can manage virtual machines, storage (such as Ceph, ZFS, NFS, GlusterFS, and iSCSI), virtualized networks, and highly available clusters. This talk will give attendees an overview of the new features of Proxmox VE 4.0 focusing on the new Proxmox VE HA Manager and the container technology Linux Containers (LXC). It will present how highly available virtual machines can be managed in a multi-node cluster. The talk will also share some insights on how it is to be a developer in open source projects and how to get into it as a student.
Ceph Object Storage Reference Architecture Performance and Sizing Guide - Karan Singh
Together with my colleagues on the Red Hat Storage team, I am very proud to have worked on this reference architecture for Ceph Object Storage.
If you are building Ceph object storage at scale, this document is for you.
Slides from #PromCon2018 Munich.
https://promcon.io/2018-munich/talks/thanos-prometheus-at-scale/
Bartłomiej Płotka
Fabian Reinartz
The Prometheus monitoring system has been thriving for several years. Along with its powerful data model, operational simplicity and reliability have been key factors in its success. However, some questions have remained largely unaddressed to this day. How can we store historical data on the order of petabytes in a reliable and cost-efficient way? Can we do so without sacrificing responsive query times? And what about a global view of all our metrics and transparent handling of HA setups?
Thanos takes Prometheus' strong foundations and extends them into a clustered, yet coordination-free, globally scalable metric system. It retains Prometheus's simple operational model and even simplifies deployments further. Under the hood, Thanos uses highly cost-efficient object storage that's available in virtually all environments today. By building directly on top of the storage format introduced with Prometheus 2.0, Thanos achieves near real-time responsiveness even for cold queries against historical data, all while having virtually no cost overhead beyond that of the underlying object storage.
We will show the theoretical concepts behind Thanos and demonstrate how it seamlessly integrates into existing Prometheus setups.
Introduction to Ceph, an open-source, massively scalable distributed file system.
This document explains the architecture of Ceph and integration with OpenStack.
Ross Turk, VP, Marketing & Community, Inktank
Ceph is an open source distributed object store, network block device, and file system designed for reliability, performance, and scalability. It runs on standard hardware, has no single point of failure, and is supported by the Linux kernel. It also works great with OpenStack and CloudStack.
If you’ve heard of Ceph but aren’t sure where it fits into your plans, this is the talk for you. Designed for those who are new to Ceph, this talk will cover Ceph’s design principles, overall architecture, and integration with other operational systems.
The Future of Cloud Software Defined Storage with Ceph: Andrew Hatfield, Red Hat - OpenStack
Audience: Intermediate
About: Learn how cloud storage differs to traditional storage systems and how that delivers revolutionary benefits.
Starting with an overview of how Ceph integrates tightly into OpenStack, you’ll see why 62% of OpenStack users choose Ceph. We’ll then take a peek into the very near future to see how rapidly Ceph is advancing and how you’ll be able to achieve all your childhood hopes and dreams in ways you never thought possible.
Speaker Bio: Andrew Hatfield – Practice Lead–Cloud Storage and Big Data, Red Hat
Andrew has over 20 years experience in the IT industry across APAC, specialising in Databases, Directory Systems, Groupware, Virtualisation and Storage for Enterprise and Government organisations. When not helping customers slash costs and increase agility by moving to the software-defined storage future, he’s enjoying the subtle tones of Islay Whisky and shredding pow pow on the world’s best snowboard resorts.
OpenStack Australia Day - Sydney 2016
https://events.aptira.com/openstack-australia-day-sydney-2016/
At Percona Live in April 2016, Red Hat's Kyle Bader reviewed the general architecture of Ceph and then discussed the results of a series of benchmarks done on small to mid-size Ceph clusters, which led to the development of prescriptive guidance around tuning Ceph storage nodes (OSDs).
Ceph Day Santa Clara: The Future of CephFS + Developing with Librados - Ceph Community
Sage Weil, Creator of Ceph, Founder & CTO, Inktank
CephFS is a distributed filesystem built on RADOS, offering POSIX semantics and a true scale-out architecture. While production deployments of CephFS do exist, it still needs lots of testing and hardening before it can be used in the most challenging (and interesting) scenarios. In this session, Sage will discuss the future of CephFS, including the areas where it still needs work and ways the community can help.
RADOS is a surprisingly flexible object store. To take advantage of its rich feature set, developers can build with its programmable library, librados. Librados is available in many languages, and offers access to key/value stores, object classes, cluster health and status, and other useful RADOS internals. This session will cover how to use librados, discuss situations where librados is the right choice, and share a list of lesser-known RADOS features that developers can tap into.
Similar to Ceph Intro and Architectural Overview by Ross Turk (20)
The Future of SDN in CloudStack by Chiradeep Vittal - buildacloud
The core of CloudStack networking has always been software-defined. As the networking industry evolves to a software-defined future, CloudStack will have to evolve with it.
The presentation will examine the present state of SDN in CloudStack, look at some industry directions and attempt to predict the evolution of CloudStack with those trends.
Bio
Chiradeep Vittal is a Distinguished Engineer in the Converged Infrastructure Group at Citrix where he has technology leadership responsibilities around Citrix Cloud Platform, Citrix Lifecycle Manager and Citrix Workspace Pod. He is also a Project Management Committee member of the Apache CloudStack Project. At cloud.com (acquired by Citrix), he was a founding engineer, often tasked with the thorny details of virtualized networking and storage. Prior to cloud.com, he worked at several Silicon Valley startups in various architectural roles.
Chiradeep has a B.Tech in Computer Science from IIT, Bombay and a M.Sc from the University of Alberta. He has spoken / presented at several conferences, including CloudStack Collab, LISA, OSCON, ONS, SDN Summit and LinuxCon. His twitter handle is @chiradeep and occasionally blogs at http://cloudierthanthou.wordpress.com
Policy Based SDN Solution for DC and Branch Office by Suresh Boddapati - buildacloud
In this talk Suresh will discuss how Nuage Networks Virtualized Services Platform (VSP) helps overcome the challenges that cloud service providers and large enterprises face delivering, and managing, large multi-tenant clouds. He will discuss how Nuage Networks delivers a massively scalable SDN solution that ensures that datacenters, and wide area networks, are able to respond instantly to demand, and are boundary-less. The talk will also provide an overview of the SDN capabilities that Nuage VSP adds to CloudStack.
Bio
Suresh is the VP of Engineering at Nuage Networks. He has over 19 years experience in software development, building great teams and delivering high quality software. As the first engineer at Nuage Networks, Suresh played a key role in shaping the architecture of the Nuage Virtualized Services Platform (VSP). Suresh’s experience includes extensive protocol development, having developed IP routing and multicast protocols from scratch and deploying them in large ISPs. Suresh was part of the original TiMetra team before becoming part of Alcatel Lucent as Principal Engineer. He then took a role as Director of Engineering at Juniper where he worked on their QFabric product. Earlier in his career, Suresh worked in software engineering at Shasta Networks (Nortel acquired) as well as Fore Systems (Marconi, Ericsson acquired).
L4-L7 services for SDN and NFV by Youcef Laribi - buildacloud
In this talk, we will discuss how L4-L7 devices can integrate into various SDN architectures, and cover the benefits and some of the challenges that such integration represents. We will also talk about how SDN and NFV relate, and what the different challenges are in successfully deploying L4-L7 devices as Virtual Network Functions (VNFs) or providing such services to the NFV Infrastructure (VIM).
Bio
Youcef Laribi is a Principal Architect in the Delivery Networks BU at Citrix. He is responsible for driving the integration projects of the NetScaler ADC product with several Cloud, SDN and Automation environments including OpenStack, CloudStack, VMware NSX and Cisco ACI. He is also the Citrix representative on the OpenDaylight Technical Steering Committee. His background is mainly in Operating Systems and Distributed Systems, and he worked on several middleware technologies from DCE and CORBA in the early days, to J2EE and .NET to SOA and micro-services today. Youcef speaks 4 languages and holds a PhD and an MSc in Computer Science from the French INPG Institute in Grenoble, France.
Jenkins, jclouds, CloudStack, and CentOS by David Nalley - buildacloud
Setting up continuous integration for a single project can be a pretty daunting task. Doing that for hundreds of projects becomes a challenge of a different magnitude. Not only are there capacity problems, but some tests are destructive to the testing environment, and some have esoteric environment demands. See how this is solved in the real world using Jenkins, jclouds, and CloudStack to build an on-demand build infrastructure.
About David Nalley
David Nalley is the Vice President, Infrastructure at the Apache Software Foundation and a CloudStack PMC member.
This session will introduce monitoring CloudStack with Zenoss, and the CloudStack ZenPack. I will cover in detail what you get out of monitoring CloudStack with Zenoss. Additionally I will cover installation of Zenoss, interacting with our community and Q&A.
About Andrew Kirch
Andrew D Kirch is the Community Manager at Zenoss, a software development company specializing in Unified Monitoring with 130 employees, headquartered in Austin, Texas. The company offers an open source network and systems monitoring product called Zenoss Core, and a commercial product called Zenoss Service Dynamics. The company has over 35,000 users in over 180 countries. Customers include major organizations such as Chick-fil-A, Huntington Bank, Netflix, SunGard, Accenture, NASA, FIS Global, and many more.
As Community Manager, Andrew works directly with product users every day. He has over 10 years of experience as a Systems/Network Administrator, with specialization including SNMP and network monitoring. Prior to working at Zenoss he was principal at a unified communications VAR focused in the Midwest. In his spare time he puts computer crackers in prison.
Guaranteeing Storage Performance by Mike Tutkowski - buildacloud
This session will introduce the basics of primary storage in CloudStack. Additionally, I discuss the challenges of guaranteeing storage performance in a cloud and how by leveraging the latest enhancements to CloudStack, storage administrators can deliver consistent, repeatable performance to 10s, 100s or 1,000s of application workloads in parallel. I'll review the CloudStack enhancements in detail, outline the management benefits they provide and discuss common go-to-market approaches.
About Mike Tutkowski
Mike Tutkowski, a member of the CloudStack PMC, develops software for the Apache Software Foundation's CloudStack project to help drive improvements in its storage component and to integrate SolidFire more deeply into the product.
Cloud Application Blueprints with Apache Brooklyn by Alex Henevald - buildacloud
So you have your cloud running, what now? Extend the devops agility from infrastructure to applications by learning how to use Brooklyn, the Apache-incubating project for application management. Create blueprints for applications to enable one-click deployment into Cloudstack, Docker, localhost, or other targets. Leverage your favourite server management tools, from Bash to Chef. Automatically change the deployment after it's deployed. Attach policies to support scaling, failover, and alerting in the way your application needs.
In this session we'll show how with just a few lines of YAML, you can build powerful application blueprints by composing pre-existing components, from polyglot web stacks to big data tools such as Riak. We'll also cover defining new blueprints using custom scripts, configuring machine selection and runtime policies, and managing new locations such as Clocker -- the cloud of docker.
About Alex Henevald
Alex brings twenty years experience designing software solutions in the enterprise, start-up, and academic sectors. Most recently Alex was with Enigmatec Corporation where he led the development of what is now the Monterey® Middleware Platform™. Previous to that, he founded PocketWatch Systems, commercialising results from his doctoral research. Alex holds a PhD (Informatics) and an MSc (Cognitive Science) from the University of Edinburgh and an AB (Mathematics) from Princeton University. Alex was both a USA Today Academic All-Star and a Marshall Scholar.
Introduction to Apache CloudStack by David Nalley - buildacloud
Apache CloudStack is a mature, easy to deploy IaaS platform. That doesn't mean that it can be done without thought or preparation. Learn how CloudStack can be most efficiently deployed, and the problems to avoid in the process.
About David Nalley
David is a recovering sysadmin with a decade of experience. He’s a committer on the Apache CloudStack (incubating) project, a contributor to the Fedora Project and the Vice President of Infrastructure at the Apache Software Foundation.
Monitoring CloudStack in context with Converged Infrastructure by Mike Turnlund - buildacloud
CloudStack is a powerful, flexible technology that greatly expands the economic potential for a datacenter. Performance management of CloudStack in context with the rest of the datacenter is critical for quick fault diagnostics, proactive management of bottlenecks and quickly bringing up or tearing down services. Learn how proper tooling can make the difference in running an excellent service versus a problem plagued environment.
Mike is a 25+ year technology veteran with past roles in software engineering, product development, planning, and operations at CA Technologies, Cisco, and AMD. He currently leads a business development team at CA Technologies driving their partnerships in virtualized infrastructure and converged compute environments. Mike is based in Santa Clara, California. His time outside of work is spent with his wife and four children, biking, and running triathlons. He has bachelor's and master's degrees from the University of California, Santa Barbara.
As you go into the cloud, the applications you are building will often be built on service-oriented architectures that communicate through RESTful APIs. Where API design and development used to be an uncommon thing, today it has become a basic application requirement. George Reese will cover the basic considerations in designing and implementing an API for your applications.
George Reese is the author of a number of technology books and a regular speaker on RESTful APIs, cloud computing, Java, and database systems. His most recent books are The REST API Design Handbook and O’Reilly’s Cloud Application Architectures. Professionally, he is the Executive Director of Cloud Computing at Dell as a result of Dell's recent acquisition of Enstratius, a company George co-founded. George has also led a number of Open Source projects, including several MUD libraries and the Imaginary Home home automation libraries for Java. He is also the primary maintainer of Dasein Cloud, a cloud abstraction API for Java.
George holds a BA from Bates College in Maine and an MBA from the Kellogg School of Management at Northwestern University.
Enterprise grade firewall and SSL termination to ac by Will Stevens - buildacloud
CloudOps has added support for enterprise-grade security products in ACS. CloudOps has developed an integration with the Palo Alto Networks firewall appliance to enable ACS to orchestrate network features such as network creation, Source NAT, Static NAT, Port Forwarding, and Firewall rules on the Palo Alto device. Additionally, CloudOps has extended ACS to support SSL certificate management as well as SSL termination by external load balancers. The existing ACS NetScaler plugin has been improved to support this new SSL termination functionality. The talk will cover the features added as well as a basic overview of how they are used.
Will Stevens is the Lead Developer at CloudOps. He has been directly involved in extending ACS to support more enterprise grade security functionality. Will has over 10 years experience as a software developer and is primarily focused on cloud integrations at CloudOps.
Securing Your Cloud With the Xen Hypervisor by Russell Pavlicek - buildacloud
The Xen Project produces a mature, enterprise-grade virtualization technology designed for the cloud, featuring many advanced and unique security features. For this reason, it's a hypervisor of choice for government agencies like the NSA and the DoD, as well as for new security-minded projects such as the QubesOS secure desktop. However, while much of the security of Xen is inherent in its design, many of the advanced security features, such as stub domains, driver domains, and Xen Security Modules (XSM), are not enabled by default. This session will describe many of the advanced security features of Xen, as well as explain why Xen is an excellent choice for secure clouds.
DevCloud - Setup and Demo on Apache CloudStack buildacloud
Hands-on Hacking Session by Amogh Vasekar
1. Demo of CloudStack using DevCloud
2. How we got there -
A) Building CloudStack from scratch
B) Deploying databases
C) Configuring your own DevCloud using Marvin
Cloud Network Virtualization with Juniper Contrail - buildacloud
Description: Contrail technology will be discussed, covering architecture, capabilities, and use cases. It will be followed by a demonstration of the current Contrail implementation on CloudStack/OpenStack.
Parantap works as a Sr. Director of Solutions Engineering for Contrail Product within Juniper. Before Juniper, Parantap led the network architecture team for Microsoft Online Services (Windows Azure, MS Bing). Prior to Microsoft, Parantap worked as a core engineering manager for UUNet Technologies building Internet backbones.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which the participants tried to find different ways to think about quality and testing in the different parts of the DevOps infinity loop.
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Key Trends Shaping the Future of Infrastructure.pdf - Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
21. CEPH OBJECT GATEWAY: A powerful S3- and Swift-compatible gateway that brings the power of the Ceph Object Store to modern applications. (OBJECTS)
CEPH BLOCK DEVICE: A distributed virtual block device that delivers high-performance, cost-effective storage for virtual machines and legacy applications. (VIRTUAL DISKS)
CEPH FILESYSTEM: A distributed, scale-out filesystem with POSIX semantics that provides storage for legacy and modern applications. (FILES & DIRECTORIES)
CEPH STORAGE CLUSTER: A reliable, easy to manage, next-generation distributed object store that provides storage of unstructured data for applications.
22-23. RADOS: A reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes. On top of RADOS sit four access paths:
LIBRADOS: A library allowing apps to directly access RADOS, with support for C, C++, Java, Python, Ruby, and PHP. (APP)
RADOSGW: A bucket-based REST gateway, compatible with S3 and Swift. (APP)
RBD: A reliable and fully-distributed block device, with a Linux kernel client and a QEMU/KVM driver. (HOST/VM)
CEPH FS: A POSIX-compliant distributed file system, with a Linux kernel client and support for FUSE. (CLIENT)
26. Monitors:
• Maintain cluster membership and state
• Provide consensus for distributed decision-making
• Small, odd number
• These do not serve stored objects to clients
OSDs:
• 10s to 10000s in a cluster
• One per disk (or one per SSD, RAID group…)
• Serve stored objects to clients
• Intelligently peer to perform replication and recovery tasks
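Because the monitors hold cluster membership and state, clients can query them directly. A minimal sketch with the rados Python binding (the config path is an assumption), roughly the programmatic equivalent of running ceph -s:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()

    # Ask the monitors for cluster status as JSON.
    cmd = json.dumps({"prefix": "status", "format": "json"})
    ret, outbuf, errs = cluster.mon_command(cmd, b"")
    print(json.loads(outbuf))

    cluster.shutdown()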
29. LIBRADOS:
• Provides direct access to RADOS for applications
• C, C++, Python, PHP, Java, Erlang
• Direct access to storage nodes
• No HTTP overhead
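A minimal sketch of what that looks like from Python (the config path and pool name are assumptions): an object is written and read back over the native RADOS protocol, with no HTTP gateway in between.

    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()

    ioctx = cluster.open_ioctx("mypool")          # I/O context for one pool
    ioctx.write_full("greeting", b"hello rados")  # store an object
    print(ioctx.read("greeting"))                 # read it back
    ioctx.remove_object("greeting")               # clean up

    ioctx.close()
    cluster.shutdown()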
32. RADOS Gateway:
• REST-based object storage proxy
• Uses RADOS to store objects
• API supports buckets, accounts
• Usage accounting for billing
• Compatible with S3 and Swift applications
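Because the gateway is S3-compatible, ordinary S3 tooling works against it. A hedged example with boto3 (the endpoint and credentials are placeholders; RGW issues its own S3-style keys), storing an object that ultimately lives in RADOS:

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:7480",  # RADOS Gateway endpoint (placeholder)
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    s3.create_bucket(Bucket="demo-bucket")
    s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"stored via S3, kept in RADOS")
    print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())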
37. RADOS Block Device:
• Storage of disk images in RADOS
• Decouples VMs from host
• Images are striped across the cluster (pool)
• Snapshots
• Copy-on-write clones
• Support in: mainline Linux kernel (2.6.39+); QEMU/KVM, native Xen coming soon; OpenStack, CloudStack, Nebula, Proxmox
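A short sketch with the rbd Python binding (pool and image names are assumptions): create an image, write to it, and take a point-in-time snapshot.

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("rbd")

    rbd.RBD().create(ioctx, "vm-disk-1", 10 * 1024 ** 3)  # 10 GiB image, striped over the pool
    with rbd.Image(ioctx, "vm-disk-1") as image:
        image.write(b"first bytes of a guest disk", 0)    # write at offset 0
        image.create_snap("base")                         # snapshot that clones can later share copy-on-write

    ioctx.close()
    cluster.shutdown()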
40. Metadata Server:
• Manages metadata for a POSIX-compliant shared filesystem: directory hierarchy and file metadata (owner, timestamps, mode, etc.)
• Stores metadata in RADOS
• Does not serve file data to clients
• Only required for shared filesystem
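For completeness, a rough sketch with the cephfs Python binding (constructor arguments and open flags vary somewhat between releases, so treat the details as assumptions): metadata operations such as mkdir and open go through an MDS, while the file data itself flows directly between the client and the OSDs.

    import cephfs

    fs = cephfs.LibCephFS(conffile="/etc/ceph/ceph.conf")
    fs.mount()

    fs.mkdir(b"/demo", 0o755)                     # directory entry managed by the MDS
    fd = fs.open(b"/demo/hello.txt", "w", 0o644)  # open goes via the MDS
    fs.write(fd, b"file data goes to the OSDs", 0)
    fs.close(fd)

    fs.unmount()
    fs.shutdown()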
74. Getting Started With Ceph
Read about the latest version of Ceph.
• The latest stuff is always at http://ceph.com/get
Deploy a test cluster using ceph-deploy.
• Read the quick-start guide at http://ceph.com/qsg
Deploy a test cluster on the AWS free-tier using Juju.
• Read the guide at http://ceph.com/juju
Read the rest of the docs!
• Find docs for the latest release at http://ceph.com/docs
Have a working cluster up quickly.
75. Getting Involved With Ceph
Most project discussion happens on the mailing list.
• Join or view archives at http://ceph.com/list
IRC is a great place to get help (or help others!)
• Find details and historical logs at http://ceph.com/irc
The tracker manages our bugs and feature requests.
• Register and start looking around at http://ceph.com/tracker
Doc updates and suggestions are always welcome.
• Learn how to contribute docs at http://ceph.com/docwriting
Help build the best storage system around!
76. Ceph Cuttlefish (v0.61.x)
1. New ceph-deploy provisioning tool
2. New Chef cookbooks
3. Fully-tested packages for RHEL (in EPEL)
4. RGW authentication management API
5. RADOS pool quotas
6. New ceph df
7. RBD incremental snapshots
Best Ceph ever.
Hi, welcome to my talk. I’m really happy that you chose to join me for this, given your many other choices. Believe me, I’m going to tell you things that will literally tear your head off. Ok, not literally. That would be really messy.
Working through a computer means that we can store more information, and we can store it more quickly. But it also means that we’re separated from the information we’ve created.
Ceph was designed to be self-managing. Lots of distributed storage systems require operator intervention when something goes wrong.
RADOS is a distributed object store, and it’s the foundation for Ceph. On top of RADOS, the Ceph team has built three applications that allow you to store data and do fantastic things. But before we get into all of that, let’s start at the beginning of the story.
But that’s a lot to digest all at once. Let’s start with RADOS.
Remember all that meta-data we talked about in the beginning? Feels so long ago. It has to be stored somewhere! Something has to keep track of who created files, when they were created, and who has the right to access them. And something has to remember where they live within a tree. Enter MDS, the Ceph Metadata Server. Clients accessing Ceph FS data first make a request to an MDS, which provides what they need to get files from the right OSDs.
If you aren’t running Ceph FS, you don’t need to deploy metadata servers.
So now that you know what Ceph is, I’m going to tell you what makes it different.
All of that metadata for Ceph FS has to be stored somewhere. It’s a giant diary, keeping track of where everything is and who owns it.
MDSs store all of their data within RADOS itself, but there’s still a problem…
There are multiple MDSs!
So how do you have one tree and multiple servers?
If there’s just one MDS (which is a terrible idea), it manages metadata for the entire tree.
When the second one comes along, it will intelligently partition the work by taking a subtree.
When the third MDS arrives, it will attempt to split the tree again.
Same with the fourth.
An MDS can actually take even just a single directory or file, if the load is high enough. This all happens dynamically based on load and the structure of the data, and it’s called “dynamic subtree partitioning”.