IT organizations of the future (and present) are faced with managing infrastructure that spans multiple private data centers and multiple public clouds. Emerging tools and operational patterns like Kubernetes and microservices are easing the process of deploying applications across multiple environments, but the Achilles' heel of such efforts remains that most applications require large quantities of state, whether in databases, object stores, or file systems. Unlike stateless microservices, state is hard to move.
Ceph is known for providing scale-out file, block, and object storage within a single data center, but it also includes a robust set of multi-cluster federation capabilities. This talk will cover how Ceph's underlying multi-site capabilities complement and enable true portability across cloud footprints, public and private, and how viewing Ceph from a multi-cloud perspective has fundamentally shifted our data services roadmap, especially for Ceph object storage.
3. 3
UNIFIED STORAGE PLATFORM
● RADOS: reliable, elastic, highly-available distributed storage layer with replication and erasure coding
● LIBRADOS: low-level storage API
● RGW: S3 and Swift object storage (OBJECT)
● RBD: virtual block device with robust feature set (BLOCK)
● CEPHFS: distributed network file system (FILE)
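Everything above the RADOS layer ultimately speaks librados. As a rough illustration only, here is a minimal Python sketch against the librados bindings (python3-rados); the pool name "mypool" is a hypothetical example and the default ceph.conf location is assumed.

```python
# Minimal librados sketch (python3-rados).  The pool "mypool" is an example
# and must already exist on the cluster.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # monitors/keys from ceph.conf
cluster.connect()
try:
    ioctx = cluster.open_ioctx('mypool')          # I/O context bound to one RADOS pool
    try:
        ioctx.write_full('greeting', b'hello world')   # store a whole object atomically
        print(ioctx.read('greeting'))                   # read it back
        ioctx.set_xattr('greeting', 'owner', b'demo')   # per-object extended attribute
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

RGW, RBD, and CephFS are built out of operations like these on RADOS objects.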
4. 4
RELEASE SCHEDULE
● Luminous (12.2.z), Aug 2017
● Mimic (13.2.z), May 2018 ← WE ARE HERE
● Nautilus (14.2.z), Feb 2019
● Octopus (15.2.z), Nov 2019
● Stable, named release every 9 months
● Backports for 2 releases
● Upgrade up to 2 releases at a time (e.g., Luminous → Nautilus, Mimic → Octopus)
7. 7
A CLOUDY FUTURE
● IT organizations today
○ Multiple private data centers
○ Multiple public cloud services
● It’s getting cloudier
○ “On premise” → private cloud
○ Self-service IT resources, provisioned on demand by developers and business units
● Next generation of cloud-native applications will span clouds
● “Stateless microservices” are great, but real applications have state.
8. 8
● Data placement and portability
○ Where should I store this data?
○ How can I move this data set to a new tier or new site?
○ Seamlessly, without interrupting applications?
● Introspection
○ What data am I storing? For whom? Where? For how long?
○ Search, metrics, insights
● Policy-driven data management
○ Lifecycle management
○ Conformance: constrain placement, retention, etc. (e.g., HIPAA, GDPR)
○ Optimize placement based on cost or performance
○ Automation
DATA SERVICES
9. 9
● Data sets are tied to applications
○ When the data moves, the application often should (or must) move too
● Container platforms are key
○ Automated application (re)provisioning
○ “Operators” to manage coordinated migration of state and applications that consume it
MORE THAN JUST DATA
10. 10
● Multi-tier
○ Different storage for different data
● Mobility
○ Move an application and its data between sites with minimal (or no) availability interruption
○ Maybe an entire site, but usually a small piece of a site
● Disaster recovery
○ Tolerate a site-wide failure; reinstantiate data and app in a new site quickly
○ Point-in-time consistency with bounded latency (bounded data loss)
● Stretch
○ Tolerate site outage without compromising data availability
○ Synchronous replication (no data loss) or async replication (different consistency model)
● Edge
○ Small (e.g., telco POP) and/or semi-connected sites (e.g., autonomous vehicle)
DATA USE SCENARIOS
12. 12
HOW WE USE BLOCK
● Virtual disk device
● Exclusive access by nature (with few exceptions)
● Strong consistency required
● Performance sensitive
● Basic feature set
○ Read, write, flush, maybe resize
○ Snapshots (read-only) or clones (read/write)
■ Point-in-time consistent
● Often self-service provisioning
○ via Cinder in OpenStack
○ via Persistent Volume (PV) abstraction in Kubernetes
(Stack: applications on top of a file system (XFS, ext4, whatever) on top of the virtual block device)
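To make the block model concrete, here is a hedged sketch using the rbd/rados Python bindings: create a thin-provisioned image, take a point-in-time snapshot, and clone it. The pool and image names are examples, and default image features (e.g., layering, which clones require) are assumed.

```python
# Sketch: thin-provisioned RBD image, snapshot, and clone (python3-rbd).
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbdpool')           # hypothetical pool name
try:
    rbd_inst = rbd.RBD()
    rbd_inst.create(ioctx, 'vm-disk', 10 * 1024**3)     # 10 GiB, thin provisioned
    image = rbd.Image(ioctx, 'vm-disk')
    try:
        image.write(b'boot sector bytes', 0)            # write at offset 0
        image.create_snap('before-upgrade')             # point-in-time snapshot
        image.protect_snap('before-upgrade')            # must be protected before cloning
    finally:
        image.close()
    # Writable, copy-on-write clone backed by the protected snapshot
    rbd_inst.clone(ioctx, 'vm-disk', 'before-upgrade', ioctx, 'vm-disk-clone')
finally:
    ioctx.close()
    cluster.shutdown()
```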
13. 13
RBD - TIERING WITH RADOS POOLS
(Diagram: a single Ceph storage cluster with SSD 2x, HDD 3x, and SSD EC 6+3 pools; KVM guests consume RBD images via KRBD or librbd, with a file system on top.)
✓ Multi-tier
❏ Mobility
❏ DR
❏ Stretch
❏ Edge
14. 14
RBD - LIVE IMAGE MIGRATION
(Diagram: same cluster and pools; an RBD image is migrated between pools while the KVM guest keeps accessing it via KRBD or librbd.)
✓ Multi-tier
✓ Mobility
❏ DR
❏ Stretch
❏ Edge
● New in Nautilus
15. 15
RBD - STRETCH
(Diagram: a single stretch Ceph cluster with a stretch pool spans Site A and Site B over a WAN link; clients access it via KRBD.)
❏ Multi-tier
❏ Mobility
✓ DR
✓ Stretch
❏ Edge
● Apps can move
● Data can't - it's everywhere
● Performance is compromised
○ Need fat and low latency pipes
16. 16
RBD - STRETCH WITH TIERS
(Diagram: the stretch cluster adds site-local pools, A POOL and B POOL, alongside the stretch pool.)
✓ Multi-tier
❏ Mobility
✓ DR
✓ Stretch
❏ Edge
● Create site-local pools for performance-sensitive apps
17. 17
RBD - STRETCH WITH MIGRATION
(Diagram: the same stretch cluster with site-local A and B pools; images are live-migrated between pools as the app moves.)
✓ Multi-tier
✓ Mobility
✓ DR
✓ Stretch
❏ Edge
● Live migrate images between pools
● Maybe even live migrate your app VM?
18. 18
● Network latency is critical
○ Low latency for performance
○ Requires nearby sites, limiting usefulness
● Bandwidth too
○ Must be able to sustain rebuild data rates
● Relatively inflexible
○ Single cluster spans all locations
○ Cannot “join” existing clusters
● High level of coupling
○ Single (software) failure domain for all sites
STRETCH IS SKETCH
19. 19
RBD ASYNC MIRRORING
(Diagram: Ceph cluster A with an SSD 3x primary pool asynchronously mirrors over a WAN link to Ceph cluster B with an HDD 3x backup pool; the KVM guest writes via librbd at the primary.)
● Asynchronously mirror writes
● Small performance overhead at primary
○ Mitigate with SSD pool for RBD journal
● Configurable time delay for backup
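A rough sketch of what enabling per-image, journal-based mirroring looks like from the primary cluster, using the rbd Python bindings roughly as they exist around Luminous/Mimic. The peer relationship and the rbd-mirror daemon in the backup cluster are assumed to be configured out of band, and the exact call names should be checked against your release.

```python
# Rough sketch: enable journal-based mirroring on one image from the primary
# side.  Pool/image names are examples; peer setup is done elsewhere.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbdpool')
try:
    # mirror selected images rather than the whole pool
    rbd.RBD().mirror_mode_set(ioctx, rbd.RBD_MIRROR_MODE_IMAGE)
    image = rbd.Image(ioctx, 'vm-disk')
    try:
        # the journaling feature is what allows asynchronous replay at the backup
        image.update_features(rbd.RBD_FEATURE_JOURNALING, True)
        image.mirror_image_enable()     # start mirroring this image
        # on failover: promote the copy in the backup cluster and, if the old
        # primary comes back, demote/resync it (mirror_image_promote/demote)
    finally:
        image.close()
finally:
    ioctx.close()
    cluster.shutdown()
```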
20. 20
● On primary failure
○ Backup is point-in-time consistent
○ Lose only last few seconds of writes
○ VM can restart in new site
● If primary recovers,
○ Option to resync and "fail back"
RBD ASYNC MIRRORING
(Diagram: after a failover, the former primary in cluster A is divergent; cluster B serves the KVM guest until a resync and fail-back.)
❏ Multi-tier
❏ Mobility
✓ DR
❏ Stretch
❏ Edge
21. 21
● Ocata
○ Cinder RBD replication driver
● Queens
○ ceph-ansible deployment of rbd-mirror via TripleO
● Rocky
○ Failover and fail-back operations
● Gaps
○ Deployment and configuration tooling
○ Cannot replicate multi-attach volumes
○ Nova attachments are lost on failover
RBD MIRRORING IN CINDER
22. 22
● Hard for IaaS layer to reprovision app in new site
● Storage layer can’t solve it on its own either
● Need automated, declarative, structured specification for entire app stack...
MISSING LINK: APPLICATION ORCHESTRATION
24. 24
● Stable since Kraken
● Multi-MDS stable since Luminous
● Snapshots stable since Mimic
● Support for multiple RADOS data pools
● Provisioning via OpenStack Manila and Kubernetes
● Fully awesome
CEPHFS STATUS
✓ Multi-tier
❏ Mobility
❏ DR
❏ Stretch
❏ Edge
25. 25
● We can stretch CephFS just like RBD pools
● It has the same limitations as RBD
○ Latency → lower performance
○ Limited by geography
○ Big (software) failure domain
● Also,
○ MDS latency is critical for file workloads
○ ceph-mds daemons must be running in one site or another
● What can we do with CephFS across multiple clusters?
CEPHFS - STRETCH?
❏ Multi-tier
❏ Mobility
✓ DR
✓ Stretch
❏ Edge
26. 26
● CephFS snapshots provide
○ point-in-time consistency
○ granularity (any directory in the system)
● CephFS rstats provide
○ rctime to efficiently find changes
● rsync provides
○ efficient file transfer
● Time bounds on order of minutes
● Gaps and TODO
○ “rstat flush” coming in Nautilus
■ Xuehan Xu @ Qihoo 360
○ rsync support for CephFS rstats
○ scripting / tooling
CEPHFS - SNAP MIRRORING
❏ Multi-tier
❏ Mobility
✓ DR
❏ Stretch
❏ Edge
(Timeline, Site A → Site B: 1. A: create snap S1; 2. rsync A→B; 3. B: create snap S1; 4. A: create snap S2; 5. rsync A→B; 6. B: create snap S2; 7. A: create snap S3; 8. rsync A→B; 9. B: create snap S3.)
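The building blocks are ordinary file-system operations, so a minimal sketch from a host with CephFS mounted might look like the following; the mount point, directory, and snapshot names are examples.

```python
# Sketch of the snapshot + rstat building blocks behind snap mirroring,
# run on a host with CephFS mounted at /mnt/cephfs (paths are examples).
import os
import datetime

SRC = '/mnt/cephfs/projects'

# 1. Point-in-time snapshot: just mkdir inside the magic .snap directory.
os.mkdir(os.path.join(SRC, '.snap', 'mirror-2018-11-15'))

# 2. Recursive stats: ceph.dir.rctime is the newest ctime anywhere under SRC,
#    so a mirroring tool can cheaply skip unchanged subtrees.
raw = os.getxattr(SRC, 'ceph.dir.rctime').decode()      # e.g. "1542274496.09..."
seconds = int(raw.split('.')[0])
print('last change under', SRC, 'at', datetime.datetime.fromtimestamp(seconds))

# 3. An external tool (rsync) then copies the snapshot to the remote site and
#    recreates a snapshot of the same name there.
```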
27. 27
● Yes.
● Sometimes.
● Some geo-replication DR features are built on rsync...
○ Consistent view of individual files,
○ Lack point-in-time consistency between files
● Some (many?) applications are not picky about cross-file consistency...
○ Content stores
○ Casual usage without multi-site modification of the same files
DO WE NEED POINT-IN-TIME FOR FILE?
28. 28
● Many humans love Dropbox / NextCloud / etc.
○ Ad hoc replication of directories to any computer
○ Archive of past revisions of every file
○ Offline access to files is extremely convenient and fast
● Disconnected operation and asynchronous replication leads to conflicts
○ Usually a pop-up in GUI
● Automated conflict resolution is usually good enough
○ e.g., newest timestamp wins
○ Humans are happy if they can rollback to archived revisions when necessary
● A possible future direction:
○ Focus less on avoiding/preventing conflicts…
○ Focus instead on ability to rollback to past revisions…
CASE IN POINT: HUMANS
29. 29
● Do we need point-in-time consistency for file systems?
● Where does the consistency requirement come in?
BACK TO APPLICATIONS
30. 30
MIGRATION: STOP, MOVE, START
(Timeline across Site A and Site B)
● App runs in site A
● Stop app in site A
● Copy data A→B
● Start app in site B
● App maintains exclusive access
● Long service disruption
❏ Multi-tier
✓ Mobility
❏ DR
❏ Stretch
❏ Edge
31. 31
MIGRATION: PRESTAGING
(Timeline across Site A and Site B)
● App runs in site A
● Copy most data from A→B
● Stop app in site A
● Copy last little bit A→B
● Start app in site B
● App maintains exclusive access
● Short availability blip
32. 32
MIGRATION: TEMPORARY ACTIVE/ACTIVE
(Timeline across Site A and Site B)
● App runs in site A
● Copy most data from A→B
● Enable bidirectional replication
● Start app in site B
● Stop app in site A
● Disable replication
● No loss of availability
● Concurrent access to same data
33. 33
ACTIVE/ACTIVE
(Timeline across Site A and Site B)
● App runs in site A
● Copy most data from A→B
● Enable bidirectional replication
● Start app in site B
● Highly available across two sites
● Concurrent access to same data
34. 34
● We don’t have general-purpose bidirectional file replication
● It is hard to resolve conflicts for any POSIX operation
○ Sites A and B both modify the same file
○ Site A renames /a → /b/a while Site B renames /b → /a/b
● But applications can only go active/active if they are cooperative
○ i.e., they carefully avoid such conflicts
○ e.g., mostly-static directory structure + last writer wins
● So we could do it if we simplify the data model...
● But wait, that sounds a bit like object storage...
BIDIRECTIONAL FILE REPLICATION?
36. 36
WHY IS OBJECT SO GREAT?
● Based on HTTP
○ Interoperates well with web caches, proxies, CDNs, ...
● Atomic object replacement
○ PUT on a large object atomically replaces prior version
○ Trivial conflict resolution (last writer wins)
○ Lack of overwrites makes erasure coding easy
● Flat namespace
○ No multi-step traversal to find your data
○ Easy to scale horizontally
● No rename
○ Vastly simplified implementation
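Because RGW speaks the S3 protocol, "object" in practice looks like plain boto3 code; the endpoint, credentials, and bucket name below are placeholders.

```python
# Minimal S3 sketch against an RGW endpoint using boto3 (placeholder values).
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:8080',
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

s3.create_bucket(Bucket='cat-pictures')

# Each PUT atomically replaces the whole object: readers see either the old
# version or the new one, never a mix, and the last writer simply wins.
s3.put_object(Bucket='cat-pictures', Key='tabby.jpg', Body=b'...image bytes...')
resp = s3.get_object(Bucket='cat-pictures', Key='tabby.jpg')
print(resp['ContentLength'])
```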
37. 37
● File is not going away, and will be critical
○ Half a century of legacy applications
○ It’s genuinely useful
● Block is not going away, and is also critical infrastructure
○ Well suited for exclusive-access storage users (boot devices, etc)
○ Performs better than file due to local consistency management, ordering etc.
● Most new data will land in objects
○ Cat pictures, surveillance video, telemetry, medical imaging, genome data
○ Next generation of cloud native applications will be architected around object
THE FUTURE IS… OBJECTY
38. 38
RGW FEDERATION MODEL
● Zone
○ Collection of RADOS pools storing data
○ Set of RGW daemons serving that content
● ZoneGroup
○ Collection of Zones with a replication relationship
○ Active/Passive[/…] or Active/Active
● Namespace
○ Independent naming for users and buckets
○ All ZoneGroups and Zones replicate the user and bucket index pools
○ One Zone serves as the leader to handle user and bucket creations/deletions
● Failover is driven externally
○ Human (?) operators decide when to write off a master and resynchronize
(Diagram: Namespace ⊃ ZoneGroup ⊃ Zone)
39. 39
RGW FEDERATION TODAY
(Diagram: three Ceph clusters hosting RGW zones: ZoneGroup X spans zones X-A and X-B, ZoneGroup Y spans zones Y-A and Y-B, plus standalone zones M and N.)
❏ Multi-tier
❏ Mobility
✓ DR
✓ Stretch
❏ Edge
● Gap: granular, per-bucket management of replication
40. 40
ACTIVE/ACTIVE FILE ON OBJECT
(Diagram: Ceph cluster A with RGW zone A and Ceph cluster B with RGW zone B; applications at each site access their local zone over NFSv4.)
● Data in replicated object zones
○ Eventually consistent, last writer wins
● Applications access RGW via NFSv4
❏ Multi-tier
❏ Mobility
✓ DR
✓ Stretch
❏ Edge
41. 41
● ElasticSearch
○ Index entire zone by object or user metadata
○ Query API
● Cloud sync (Mimic)
○ Replicate buckets to external object store (e.g., S3)
○ Can remap RGW buckets into multiple S3 buckets or the same S3 bucket
○ Remaps ACLs, etc
● Archive (Nautilus)
○ Replicate all writes in one zone to another zone, preserving all versions
● Pub/sub (Nautilus)
○ Subscribe to event notifications for actions like PUT
○ Integrates with knative serverless! (See Huamin and Yehuda’s talk at Kubecon next month)
OTHER RGW REPLICATION PLUGINS
42. 42
PUBLIC CLOUD STORAGE IN THE MESH
(Diagram: an on-prem Ceph cluster with RGW zone B federates with a small Ceph “beachhead” cluster running an RGW gateway zone in front of the public cloud object store.)
● Mini Ceph cluster in cloud as gateway
○ Stores federation and replication state
○ Gateway for GETs and PUTs, or
○ Clients can access cloud object storage directly
● Today: replicate to cloud
● Future: replicate from cloud
43. 43
RGW TIERING
Today: intra-cluster
● Many RADOS pools for a single RGW zone
● Primary RADOS pool for object “heads”
○ Single (fast) pool to find object metadata and the location of the tail of the object
● Each tail can go in a different pool
○ Specify bucket policy with PUT
○ Per-bucket policy as default when not specified
● Policy
○ Retention (auto-expire)
Nautilus
● Tier objects to an external store
○ Initially something like S3
○ Later: tape backup, other backends…
Later
● Encrypt data in external tier
● Compression
● (Maybe) cryptographically shard across multiple backend tiers
● Policy for moving data between tiers
✓ Multi-tier
❏ Mobility
❏ DR
❏ Stretch
❏ Edge
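Retention is exposed through the standard S3 lifecycle API, which RGW implements, so a per-bucket auto-expire policy can be sketched as follows; the endpoint, credentials, and bucket name are the same placeholders as in the earlier sketch.

```python
# Sketch: per-bucket retention via the standard S3 lifecycle API (boto3).
import boto3

s3 = boto3.client('s3', endpoint_url='http://rgw.example.com:8080',
                  aws_access_key_id='ACCESS_KEY',
                  aws_secret_access_key='SECRET_KEY')

# Auto-expire objects under uploads/ after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket='cat-pictures',
    LifecycleConfiguration={'Rules': [{
        'ID': 'expire-old-uploads',
        'Filter': {'Prefix': 'uploads/'},
        'Status': 'Enabled',
        'Expiration': {'Days': 30},
    }]},
)
```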
44. 44
Today
● RGW as gateway to a RADOS cluster
○ With some nifty geo-replication features
● RGW redirects clients to the correct zone
○ via HTTP Location: redirect
○ Dynamic DNS can provide right zone IPs
● RGW replicates at zone granularity
○ Well suited for disaster recovery
RGW - THE BIG PICTURE
Future
● RGW as a gateway to a mesh of sites
○ With great on-site performance
● RGW may redirect or proxy to right zone
○ Single point of access for application
○ Proxying enables coherent local caching
● RGW may replicate at bucket granularity
○ Individual applications set durability needs
○ Enable granular application mobility
46. 46
CEPH AT THE EDGE
● A few edge examples
○ Telco POPs: ¼ - ½ rack of OpenStack
○ Autonomous vehicles: cars or drones
○ Retail
○ Backpack infrastructure
● Scale down cluster size
○ Hyper-converge storage and compute
○ Nautilus: brings better memory control
● Multi-architecture support
○ aarch64 (ARM) builds upstream
○ POWER builds at OSU / OSL
● Hands-off operation
○ Ongoing usability work
○ Operator-based provisioning (Rook)
● Possibly unreliable WAN links
(Diagram: a central site runs the control plane plus compute and storage; edge sites 1, 2, and 3 each run compute nodes.)
47. 47
● Block: async mirror edge volumes to central site
○ For DR purposes
● Data producers
○ Write generated data into objects in local RGW zone
○ Upload to central site when connectivity allows
○ Perhaps with some local pre-processing first
● Data consumers
○ Access to global data set via RGW (as a “mesh gateway”)
○ Local caching of a subset of the data
● We’re most interested in object-based edge scenarios
DATA AT THE EDGE
❏ Multi-tier
❏ Mobility
❏ DR
❏ Stretch
✓ Edge
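A data-producer flow at the edge might look roughly like the following sketch: land objects in the local RGW zone, then push them to the central zone when the link is up. All endpoints, credentials, and names are placeholders, and a real deployment would more likely lean on RGW's sync and cloud-sync machinery than on hand-written copies.

```python
# Sketch of an edge data producer: write locally first, upload later.
import boto3

def rgw_client(endpoint):
    # placeholder credentials; each zone could also have its own keys
    return boto3.client('s3', endpoint_url=endpoint,
                        aws_access_key_id='ACCESS_KEY',
                        aws_secret_access_key='SECRET_KEY')

edge = rgw_client('http://rgw.edge-site.local:8080')          # local zone
central = rgw_client('http://rgw.central.example.com:8080')   # central site

# Always land new telemetry in the local zone (fast, works while disconnected).
edge.put_object(Bucket='telemetry', Key='car-42/2018-11-15.bin', Body=b'...')

# Later, when connectivity allows, upload the (possibly pre-processed) object.
obj = edge.get_object(Bucket='telemetry', Key='car-42/2018-11-15.bin')
central.put_object(Bucket='telemetry', Key='car-42/2018-11-15.bin',
                   Body=obj['Body'].read())
```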
49. 49
WHY ALL THE KUBERNETES TALK?
● True mobility is a partnership between orchestrator and storage
● Kubernetes is an emerging leader in application orchestration
● Persistent Volumes
○ Basic Ceph drivers in Kubernetes, ceph-csi on the way
○ Rook for automating Ceph cluster deployment and operation, hyperconverged
● Object
○ Trivial provisioning of RGW via Rook
○ Coming soon: on-demand, dynamic provisioning of Object Buckets and Users (via Rook)
○ Consistent developer experience across different object backends (RGW, S3, minio, etc.)
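From the application developer's side, self-service block storage boils down to requesting a PersistentVolumeClaim against an RBD-backed StorageClass. Below is a minimal sketch using the Kubernetes Python client; the StorageClass name "rook-ceph-block" follows the Rook examples and is an assumption about the cluster's configuration.

```python
# Sketch: an app self-services Ceph-backed block storage via a PVC.
from kubernetes import client, config

config.load_kube_config()   # or load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name='app-data'),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=['ReadWriteOnce'],          # single-node, exclusive block model
        storage_class_name='rook-ceph-block',    # assumed Rook/RBD StorageClass
        resources=client.V1ResourceRequirements(requests={'storage': '10Gi'}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace='default', body=pvc)
```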
51. 51
SUMMARY
● Data services: mobility, introspection, policy
● These are a partnership between storage layer and application orchestrator
● Ceph already has several key multi-cluster capabilities
○ Block mirroring
○ Object federation, replication, cloud sync; cloud tiering, archiving and pub/sub coming
○ Cover elements of Tiering, Disaster Recovery, Mobility, Stretch, Edge scenarios
● ...and introspection (elasticsearch) and policy for object
● Future investment is primarily focused on object
○ RGW as a gateway to a federated network of storage sites
○ Policy driving placement, migration, etc.
● Kubernetes will play an important role
○ both for infrastructure operators and application developers