Ceph began as a research project in 2005 to create a scalable object storage system. It was incubated at DreamHost from 2007-2012 and spun out as an independent company called Inktank in 2012. Key developments included the RADOS distributed storage cluster, erasure coding, and the Ceph filesystem. The project has grown a large community and is used in many production deployments, focusing on areas like tiering, erasure coding, replication, and integrating with the Linux kernel. Future plans include improving CephFS, expanding the ecosystem through different storage backends, strengthening governance, and targeting new use cases in big data and the enterprise.
The document provides an agenda and overview of the Ceph Project Update and Ceph Month event in June 2021. Some key points:
- Ceph is open source software that provides scalable, reliable distributed storage across commodity hardware.
- Ceph Month will include weekly sessions on topics like RADOS, RGW, RBD, and CephFS to promote interactive discussion.
- The Ceph Foundation is working on projects to improve documentation, training materials, and lab infrastructure for testing and development.
Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St... (Ceph Community)
This document discusses best practices for implementing Ceph-powered storage as a service. It covers planning a Ceph implementation based on business and technical requirements. Various use cases for Ceph are described, including OpenStack, cloud storage, web-scale applications, high performance block storage, archive/cold storage, databases and Hadoop. Architectural considerations for redundancy, servers, networking are also discussed. The document concludes with a case study of a university implementing a Ceph-based storage cloud to address storage needs for cancer and genomic research data.
Sanger OpenStack presentation March 2017 (Dave Holland)
A description of the Sanger Institute's journey with OpenStack to date, covering RHOSP, Ceph, S3, user applications, and future plans. Given at the Sanger Institute's OpenStack Day.
This document summarizes Dan van der Ster's experience scaling Ceph at CERN. CERN uses Ceph as the backend storage for OpenStack volumes and images, with plans to also use it for physics data archival and analysis. The 3PB Ceph cluster consists of 47 disk servers and 1,128 OSDs. Some lessons learned include managing latency, handling many objects, tuning CRUSH, trusting clients, and avoiding human errors when managing such a large cluster.
Ceph is an open-source distributed storage system that began as a research project in 2005. It has grown significantly since then, with the founding of Inktank in 2012 to support Ceph's development and commercial adoption. Key developments include the native Linux kernel client, erasure coding support, asynchronous replication, and improvements to CephFS. The future of Ceph includes stronger governance, a focus on performance and low-power architectures, integration with big data workloads, and use as an archival storage solution.
London Ceph Day Keynote: Building Tomorrow's Ceph (Ceph Community)
This document provides an overview of the history and development of Ceph, an open-source distributed storage system. It discusses Sage Weil's initial research that led to Ceph's creation, the incubation of Ceph at DreamHost, the launch of Inktank to support Ceph's development and adoption, and the current state of Ceph including its growing community and usage in production deployments. It also outlines Weil's vision for Ceph's future, including improving governance, adding new technologies like tiering and erasure coding, and expanding its role in areas like big data and the enterprise storage market.
Ceph Day London 2014 - The current state of CephFS development (Ceph Community)
The document discusses recent developments in CephFS. It provides an overview of CephFS architecture including components like clients, servers, storage and data placement. The focus is on improving resilience and making CephFS production-ready with features like online filesystem checking, journal resilience tools, client management and online diagnostics. The goal is to handle failures and diagnose problems in a distributed filesystem environment.
Ceph: A decade in the making and still going strong (Patrick McGarry)
Ceph is an open source distributed storage system that has been in development for over a decade. It started as a research project at UC Santa Cruz to build scalable object storage. Over the years, it has grown to include distributed block storage, file storage and an S3-compatible object store. Ceph is now used in many production deployments and has a thriving developer community, though continued work is needed to improve areas like CephFS and add new features around erasure coding, tiering and replication. The future of Ceph involves strengthening governance, expanding the ecosystem, improving performance and gaining more adoption in enterprise storage environments.
This document discusses disaggregating Ceph storage using NVMe over Fabrics (NVMeoF). It motivates using NVMeoF by showing the performance limitations of directly attaching multiple NVMe drives to individual compute nodes. It then proposes a design to leverage the full resources of a cluster by distributing NVMe drives across dedicated storage nodes and connecting them to compute nodes over a high performance fabric using NVMeoF and RDMA. Some initial Ceph performance measurements using this model show improved IOPS and latency compared to the direct attached approach. Future work could explore using SPDK and Linux kernel improvements to further optimize performance.
Ceph has been in development for over a decade since its beginnings as a research project at UC Santa Cruz in the 2000s. It was incubated at DreamHost and later spun out to form Inktank to build Ceph into an enterprise-grade open source storage platform. Key developments included the RADOS distributed object store, librados client library, RBD block device, and S3-compatible radosgw object store. Ceph now has a large community and is used in many production deployments, with continued work to improve performance, add new features like erasure coding, and expand its capabilities for big data and the enterprise.
What's New with Ceph - Ceph Day Silicon Valley (Ceph Community)
This document discusses what's new in Ceph, including priorities around community, management/usability, performance of core Ceph components like RADOS, RBD, RGW and CephFS, and container platforms. Specific updates mentioned include centralized configuration in Mimic, Project Crimson reimplementing the OSD data path, Msgr2 network protocol, automated management features, telemetry/insights, performance optimizations, and the continued development of the Ceph dashboard.
Ceph is an open-source distributed storage system that provides object, block, and file storage on commodity hardware. It uses a pseudo-random placement algorithm called CRUSH to distribute data across a cluster in a fault-tolerant manner without single points of failure. Ceph has various applications including a RADOS Gateway for S3/Swift compatibility, RADOS Block Device for virtual machine images, and a CephFS for a POSIX-compliant distributed file system.
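As a concrete illustration of that object layer, here is a minimal sketch using the python-rados bindings that ship with Ceph; the conffile path and the pool name "mypool" are assumptions for illustration, not something the summarized deck prescribes.

```python
# Minimal librados access via the python-rados bindings.
# Assumes a reachable cluster, a client keyring, and an existing
# pool named "mypool" (all hypothetical here).
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("mypool")
    try:
        # Objects are addressed by name; RADOS computes their placement
        # with CRUSH rather than consulting a central lookup table.
        ioctx.write_full("greeting", b"hello ceph")
        print(ioctx.read("greeting"))  # b'hello ceph'
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

The RADOS Gateway, RBD, and CephFS layers mentioned above are all built on this same object interface.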
Hadoop Meetup Jan 2019 - Overview of Ozone (Erik Krogen)
A presentation by Anu Engineer of Cloudera regarding the state of the Ozone subproject. He covers a brief introduction of what Ozone is, and where it's headed.
This is taken from the Apache Hadoop Contributors Meetup on January 30, hosted by LinkedIn in Mountain View.
This document discusses using Ceph block storage (RBD) with Apache CloudStack for distributed storage. Ceph provides block-level storage that scales for performance and capacity like SAN storage, addressing the need for EBS-like storage across availability zones. CloudStack currently uses local disk or requires separate storage resources per hypervisor, but using Ceph's distributed RBD allows datacenter-wide storage and removes constraints. Upcoming support in CloudStack includes format 2 RBD, snapshots, datacenter-wide storage resources, and removal of legacy storage dependencies.
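To make the block layer CloudStack consumes a little more tangible, the sketch below creates and writes an RBD image through the python-rbd bindings; the pool name "vmpool" and the image name are hypothetical.

```python
# Create and touch an RBD image (illustrative names; needs a real cluster).
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("vmpool")  # hypothetical pool for VM disks
try:
    rbd.RBD().create(ioctx, "vm-disk-01", 10 * 1024**3)  # 10 GiB image
    with rbd.Image(ioctx, "vm-disk-01") as image:
        image.write(b"\x00" * 512, 0)  # zero the first sector
        print(image.size())            # 10737418240
finally:
    ioctx.close()
    cluster.shutdown()
```

A hypervisor consuming such an image gets thin-provisioned, cluster-wide block storage, which is what removes the per-hypervisor storage constraint described above.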
In-Ceph-tion: Deploying a Ceph cluster on DreamCompute (Patrick McGarry)
This document discusses deploying a Ceph cluster on DreamCompute, an OpenStack-powered cloud computing service from DreamHost. It begins with an overview of Ceph's scalability and uses for object, block, and file storage. The document then discusses DreamCompute's open source infrastructure and deploying Ceph using tools like Juju. It provides details on configuring the Ceph cluster by deploying MONs, OSDs, the RGW gateway, and MDS. It concludes by discussing next steps like geo-replication and erasure coding, and opportunities to get involved with the Ceph community.
Presentation held at GRNET Digital Technology Symposium on November 5-6, 2018 at the Stavros Niarchos Foundation Cultural Center, Athens, Greece.
• Introduction to Ceph and its internals
• Presentation of GRNET's Ceph deployments (technical specs, operations)
• Use cases: ESA Copernicus, ~okeanos, ViMa
Presentation on how GRNET uses Ceph as a storage backend on its Cloud Computing services. Technical specs, lessons learned, future plans.
Presentation held at the 1st GEANT SIG-CISS Meeting in Amsterdam, 2017-09-25.
GRNET, the Greek Research and Technology Network, is the state-owned Greek NREN.
Using Ceph for Large Hadron Collider Data (Rob Gardner)
Talk by Lincoln Bryant (University of Chicago ATLAS team) on using Ceph for ATLAS data analysis @ Ceph Days Chicago http://ceph.com/cephdays/ceph-day-chicago/
Presentation from 2016 Austin OpenStack Summit.
The Ceph upstream community is declaring CephFS stable for the first time in the recent Jewel release, but that declaration comes with caveats: while we have filesystem repair tools and a horizontally scalable POSIX filesystem, we have default-disabled exciting features like horizontally-scalable metadata servers and snapshots. This talk will present exactly what features you can expect to see, what's blocking the inclusion of other features, and what you as a user can expect and can contribute by deploying or testing CephFS.
Rook: Storage for Containers in Containers – data://disrupted® 2020 (data://disrupted®)
In this talk Kim-Norman Sahm and Alexander Trost dive into the challenges of storage for containerized applications on Kubernetes. We'll look at the current state of things and how Rook can help with that. We will especially look at Ceph run through Rook, while trying not to lose sight of the whole picture. There is a lot to keep in mind with storage as it is, but everything gets more complex with storage for containers: from what type of storage to how much and how "safe" it should be, these are all questions that should be asked, and most of them should be answered as well. Rook's project site: https://rook.io/
Kim-Norman Sahm is CTO of Cloudical and also works there as Executive Cloud Architect. Previously, he was OpenStack Cloud Architect at T-Systems (operational services GmbH) and noris network AG. He is an expert in OpenStack, Ceph and Kubernetes (CKA).
Alexander Trost works as a DevOps Engineer at Cloudical Deutschland GmbH and is a Certified Kubernetes Administrator (CKA). He is one of four maintainers of the Rook.io project and is engaged in several more open source projects, for example a Prometheus exporter for Dell hardware (Dell OMSA Metrics) and k8s-vagrant-multi-node, an easy local multi-node Kubernetes environment, among others. Besides containers and Kubernetes, he is an expert in Software Defined Storage, Golang and Continuous Integration (with GitLab CI). He passionately enjoys working on open source projects such as Rook and Ancientt.
Ceph Day Santa Clara: Ceph and Apache CloudStack (Ceph Community)
David Nally, Apache CloudStack Contributor
Much of the cloud storage hype of late has been focused on object storage. Ceph fulfills this role well, as a number of the production public object stores prove. But Ceph provides more than object storage: it also fills the more difficult role of distributed, commodity block storage, which has really become the next cloud storage frontier. Apache CloudStack has been able to consume RBD as storage for running virtual machines for almost a year, but things continue to improve, and we’ll discuss what the future holds as well.
DigitalOcean uses Ceph for block and object storage backing for their cloud services. They operate 37 production Ceph clusters running Nautilus and one on Luminous, storing over 54 PB of data across 21,500 OSDs. They deploy and manage Ceph clusters using Ansible playbooks and containerized Ceph packages, and monitor cluster health using Prometheus and Grafana dashboards. Upgrades can be challenging because they can uncover latent issues and run slowly on HDD backends.
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, as a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and OpenAI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered:
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence (IndexBug)
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed (Malak Abu Hammad)
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! (SOFTTECHHUB)
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
UiPath Test Automation using UiPath Test Suite series, part 5 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 (Neo4j)
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe (Paige Cruz)
Monitoring and observability aren’t traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring & observability to the purview of ops, infra and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
How to Get CNIC Information System with Paksim Ga.pptx (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features available on those devices, but many of those features provide convenience and capability while sacrificing security. This best practices guide outlines steps users can take to better protect personal devices and information.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
20240609 QFM020 Irresponsible AI Reading List May 2024
Ceph in 2023 and Beyond.pdf
1. Ceph in 2023 & Beyond
HEPiX Autumn 2023 Workshop
October 18, 2023
Dan van der Ster
Ceph Executive Council / CTO Clyso GmbH
2. About Me
● University of Victoria - 1998:
○ B.Eng in Computer Engineering @ UVic
○ PhD in Grid Computing @ UVic – Supervisor Dr. Randall J. Sobie
● CERN - 2008
○ Grid Group: ATLAS Distributed Analysis Dev and Coordinator 2008-2012
○ Storage Group: AFS, CVMFS, Ceph Service Manager 2013-2022
○ Governance Group: Chief IT Architect 2022-2023
○ Sabbatical Leave 2023-present
● Ceph Open Source Project - 2013:
○ Ceph Foundation Board Member 2015-present
○ Ceph Executive Council 2021-present
● Clyso GmbH - 2023
○ CTO – leading North American expansion
4. Introduction to Ceph
● How many of you know Ceph? operate Ceph? like/dislike Ceph?
● Built upon a Reliable Autonomic Distributed Object Store: RADOS
● Objects are distributed pseudorandomly using CRUSH (see the sketch after this slide)
● End result:
○ Enterprise-quality Block, File, and Object storage using commodity hardware
○ Scalable, reliable, organic technology backing much of the world’s cloud infrastructures
○ Open Source Software – the Linux of Storage
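CRUSH itself is more elaborate (weights, bucket types, failure-domain rules), but a small stand-in conveys the property the slide relies on: placement is computed from the object name and the cluster map rather than looked up in a table. The sketch below uses rendezvous (highest-random-weight) hashing as that stand-in; it is not the real CRUSH algorithm.

```python
# Rendezvous-hashing stand-in for CRUSH-style pseudorandom placement.
# Any client holding the same OSD list computes the same mapping,
# with no central metadata lookup -- the property CRUSH provides.
import hashlib

def place(obj_name, osds, replicas=3):
    def score(osd):
        digest = hashlib.sha256(f"{obj_name}:{osd}".encode()).digest()
        return int.from_bytes(digest[:8], "big")
    return sorted(osds, key=score, reverse=True)[:replicas]

osds = [f"osd.{i}" for i in range(12)]
print(place("rbd_data.abc123", osds))  # deterministic across clients
```

Removing one OSD from the list only remaps the objects that scored it highest, the kind of minimal data movement CRUSH is designed to deliver, with weights and failure domains layered on top.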
5. History of Ceph
● 2007 - Sage Weil’s PhD on CRUSH and CephFS
● 2011 - Inktank startup founded to commercialize Ceph
● 2013 - CERN started using Ceph
● 2014 - Inktank acquired by Red Hat
● 2014 - Dan presented "Ceph@CERN: One year on…" at HEPiX LAPP
● 2018 - Creation of the Ceph Foundation
● 2019 - Red Hat acquired by IBM
● 2023 - Ceph team reassigned from RH to IBM
11. Ceph Community
● Ceph Foundation
○ 40 corporate + associate members
○ Supports neutral upstream development, testing, documentation, events, marketing
● Events:
○ Ceph Days 2023 - NYC, SoCal, India, Seoul, Vancouver
○ Cephalocon 2023 - Amsterdam
○ All talks recorded and shared on YouTube
● Securing the Foundation:
○ New tiers to secure the project’s future
○ Plans to invest in more infra, bigger events
● Technical Meetups:
○ Ceph Leadership Team + Component Weekly
○ Ceph Developer Monthly
13. My Favourite Bugs
● Bug of the Year 2020: OSDMap LZ4 Corruptions
○ Symptom: cluster-wide OSD aborts with osdmap crc errors
○ Recovered the cluster by injecting an older valid osdmap
○ RCA: osdmaps had 4 flipped bits, caused by an LZ4 version that corrupted non-contiguous inputs in rare cases.
○ Solution: defrag ceph_buffers before compressing (sketched below), and the OS upgraded its LZ4 library.
● Bug of the Year 2022: OSD PG Log "Dup" Bug
○ Symptom: for several months users reported OSDs consuming 100s of GBs of RAM, even after restart. Mempool dumps showed huge allocations in the pg_log buffers.
○ RCA: PG splitting and merging violated the ordering of the duplicate op log, preventing trimming.
○ Solution: an offline trim command for the OSD, and better online pg log management.
14. My Favourite Bugs
FIXED: both the LZ4 osdmap corruption and the PG log "dup" bug described above have since been resolved.
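For intuition only, here is a sketch of the defrag-before-compress mitigation referenced above; zlib from the Python standard library stands in for the LZ4 codec Ceph actually uses, and the fragment contents are made up.

```python
# Sketch: flatten a fragmented buffer list before handing it to the
# compressor so it never sees non-contiguous input, the condition that
# triggered the corrupting LZ4 code path. zlib stands in for LZ4 here.
import zlib

def compress_bufferlist(chunks):
    contiguous = b"".join(chunks)  # the "defrag" step
    return zlib.compress(contiguous)

fragments = [b"osdmap:", b"epoch=", b"12345"]  # hypothetical fragments
blob = compress_bufferlist(fragments)
assert zlib.decompress(blob) == b"osdmap:epoch=12345"
```

The offline trim mentioned for the PG log bug is exposed through ceph-objectstore-tool and is run against a stopped OSD.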
16. My Favourite Plot
Modern devices have a "media cache" which has a huge impact on BlueStore performance.
Read the ceph.com Hardware Recommendations regarding disabling device writeback caches (one hypothetical way to apply this is sketched below).
FIXED
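That recommendation reduces to an operational step. Below is one hypothetical way an ops script might apply it with hdparm on SATA data devices; the device list is illustrative, the commands need root, and cache settings may not persist across reboots.

```python
# Hypothetical helper: report and disable the volatile write cache on
# SATA data devices via hdparm, per the ceph.com hardware guidance.
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb"]  # illustrative device list

for dev in DEVICES:
    # "hdparm -W <dev>" reports the current write-caching state.
    report = subprocess.run(["hdparm", "-W", dev],
                            capture_output=True, text=True, check=True)
    print(report.stdout.strip())
    # "hdparm -W 0 <dev>" turns the volatile write cache off.
    subprocess.run(["hdparm", "-W", "0", dev], check=True)
```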
17. My 2nd Favourite Plot
A potential 4x speed-up of the IO path after workload analysis here at UVic! (WIP)
18. Comparing Use-Cases
● CERN uses Ceph to back its cloud infrastructure: 100PB of block, S3, FS.
● In my new role I’m exposed to much more Ceph in very different envs:
○ Ranging from 10s of TB to multiple exabytes; from a cluster in a closet to 100s of clusters globally.
○ “Microsoft/VMWare is too expensive”. Moving to Proxmox+Ceph.
○ “Data is our product – We need full ownership of the platform.”
○ “Ceph backs the things that make us money – if it’s down we’ll lose $$$ per minute”
○ “Xyz is too expensive, we’re locked in → FOSS Ceph is the best alternative we found”
● Lots and lots of successful uses out there – around 5 exabytes across thousands of clusters.
● But common themes – pain points – are emerging:
○ Ceph performance is not obvious – selecting hardware, NVMe, Crimson, multi-MDS, …
○ Ceph is still too difficult to understand and operate. #AI-OPS to the rescue?
20. Ceph Cluster Analyzer
● I want to build tools that help people run Ceph.
● Step 1: a website which will grade your Ceph cluster.
● Try it now:
○ https://analyzer.clyso.com
● Coming soon™:
○ Clyso Enterprise Storage
○ Ceph Copilot
○ Chorus Multisite S3