Ceph is an open-source distributed storage system that provides scalable object, block, and file storage in a single unified platform. It uses the CRUSH algorithm to automatically distribute data across clusters of commodity servers and provides self-healing through data replication. Ceph's unified platform comprises RADOS, the underlying object store; RBD for block storage; CephFS for distributed file storage; and RADOSGW for S3- and Swift-compatible object storage. It is designed for large-scale deployments of tens to tens of thousands of nodes on heterogeneous hardware.
An intro to Ceph and big data - CERN Big Data Workshop
1. an intro to ceph and big data
patrick mcgarry – inktank
Big Data Workshop – 27 JUN 2013
2. what is ceph?
distributed storage system
− reliable system built with unreliable components
− fault tolerant, no SPoF
commodity hardware
− expensive arrays, controllers, specialized networks not required
large scale (10s to 10,000s of nodes)
− heterogeneous hardware (no fork-lift upgrades)
− incremental expansion (or contraction)
dynamic cluster
3. what is ceph?
unified storage platform
− scalable object + compute storage platform
− RESTful object storage (e.g., S3, Swift)
− block storage
− distributed file system
open source
− LGPL server-side
− client support in mainline Linux kernel
4. RADOS – the Ceph object store
RADOS: a reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes
LIBRADOS: a library allowing apps to directly access RADOS, with support for C, C++, Java, Python, Ruby, and PHP
RBD: a reliable and fully-distributed block device, with a Linux kernel client and a QEMU/KVM driver
CEPH FS: a POSIX-compliant distributed file system, with a Linux kernel client and support for FUSE
RADOSGW: a bucket-based REST gateway, compatible with S3 and Swift
(architecture diagram: apps, hosts/VMs, and clients sit atop LIBRADOS, RBD, CEPH FS, and RADOSGW, all of which are built on RADOS)
14. rich librados API
efficient key/value storage inside an object
atomic single-object transactions
− update data, attr, keys together
− atomic compare-and-swap
object-granularity snapshot infrastructure
inter-client communication via object
embed code in ceph-osd daemon via plugin API
− arbitrary atomic object mutations, processing
15. Data and compute
RADOS Embedded Object Classes
Moves compute directly adjacent to data
C++ by default
Lua bindings available
16. die, POSIX, die
successful exascale architectures will replace or transcend POSIX
− hierarchical model does not distribute
line between compute and storage will blur
− some processing is data-local, some is not
fault tolerance will be first-class property of architecture
− for both computation and storage
17. POSIX – I'm not dead yet!
CephFS builds POSIX namespace on top of RADOS
− metadata managed by ceph-mds daemons
− stored in objects
strong consistency, stateful client protocol
− heavy prefetching, embedded inodes
architected for HPC workloads
− distribute namespace across cluster of MDSs
− mitigate bursty workloads
− adapt distribution as workloads shift over time
27. snapshots
snapshot arbitrary subdirectories
simple interface
− hidden '.snap' directory
− no special tools
$ mkdir foo/.snap/one # create snapshot
$ ls foo/.snap
one
$ ls foo/bar/.snap
_one_1099511627776 # parent's snap name is mangled
$ rm foo/myfile
$ ls -F foo
bar/
$ ls -F foo/.snap/one
myfile bar/
$ rmdir foo/.snap/one # remove snapshot
28. how can you help?
try ceph and tell us what you think
− http://ceph.com/resources/downloads
− http://ceph.com/resources/mailing-list-irc/
− ask if you need help
ask your organization to start dedicating resources to the project (http://github.com/ceph)
find a bug (http://tracker.ceph.com) and fix it
participate in our ceph developer summit
− http://ceph.com/events/ceph-developer-summit
RADOS is a distributed object store, and it’s the foundation for Ceph. On top of RADOS, the Ceph team has built three applications that allow you to store data and do fantastic things. But before we get into all of that, let’s start at the beginning of the story.
Let’s start with RADOS, Reliable Autonomic Distributed Object Storage. In this example, you’ve got five disks in a computer. You have initialized each disk with a filesystem (btrfs is the right filesystem to use someday, but until it’s stable we recommend XFS). On each filesystem, you deploy a Ceph OSD (Object Storage Daemon). That computer, with its five disks and five object storage daemons, becomes a single node in a RADOS cluster. Alongside these nodes are monitor nodes, which keep track of the current state of the cluster and provide users with an entry point into the cluster (although they do not serve any data themselves).
With CRUSH, the data is first split into a certain number of sections. These are called “placement groups”. The number of placement groups is configurable. Then, the CRUSH algorithm runs, having received the latest cluster map and a set of placement rules, and it determines where the placement group belongs in the cluster. This is a pseudo-random calculation, but it’s also repeatable; given the same cluster state and rule set, it will always return the same results.
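To make that "pseudo-random but repeatable" property concrete, here is a minimal Python sketch of the idea. This is not the real CRUSH algorithm; the hashing scheme, PG count, and OSD names are illustrative stand-ins.

    import hashlib

    NUM_PGS = 64                                   # number of placement groups (configurable)
    OSDS = ['osd.0', 'osd.1', 'osd.2', 'osd.3']    # stand-in for the current cluster map
    REPLICAS = 2

    def pg_for_object(name):
        # hash the object name into a placement group -- same name, same PG
        return int(hashlib.md5(name.encode()).hexdigest(), 16) % NUM_PGS

    def osds_for_pg(pg, osds, replicas):
        # deterministically pick `replicas` distinct OSDs for a placement group
        start = int(hashlib.md5(str(pg).encode()).hexdigest(), 16) % len(osds)
        return [osds[(start + i) % len(osds)] for i in range(replicas)]

    pg = pg_for_object('my-object')
    print(pg, osds_for_pg(pg, OSDS, REPLICAS))
    # any client running the same calculation against the same cluster map
    # arrives at the same answer -- no central lookup table required

When the cluster map changes, re-running the same calculation yields the new, agreed-upon placement, which is why the failure handling described below can stay transparent to clients.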
Each placement group is run through CRUSH and stored in the cluster. Notice how no node has received more than one copy of a placement group, and no two nodes contain the same information? That’s important.
When it comes time to store an object in the cluster (or retrieve one), the client calculates where it belongs.
What happens, though, when a node goes down? The OSDs are always talking to each other (and the monitors), and they know when something is amiss. The third and fifth node on the top row have noticed that the second node on the bottom row is gone, and they are also aware that they have replicas of the missing data.
The OSDs collectively use the CRUSH algorithm to determine how the cluster should look based on its new state, and move the data to where clients running CRUSH expect it to be.
Because of the way placement is calculated instead of centrally controlled, node failures are transparent to clients.
Most people will default to discussions about CephFS when confronted with either Big Data or HPC applications. This can mean using CephFS by itself, or perhaps as a drop-in replacement for HDFS. [NOT READY ARGUMENT] There are a couple of other options, however. You can use librados to talk directly to the object store. One user I know actually plugged Hadoop in at this level, instead of using CephFS. Ceph also has a pretty decent key-value store proof-of-concept done by an intern last year. It's based on a b-tree structure but uses a fixed height of two levels instead of a true tree structure. This draws from both a normal B-Tree and Google BigTable. Would love to see someone do more with it.
I mentioned librados; this is the low-level library that allows you to directly access a RADOS cluster from your application. It has native language bindings for C, C++, Python, etc. This is obviously the fastest way to get at your data and comes with no inherent overhead or translation layer.
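As a minimal sketch of what that looks like from Python, assuming the python3-rados bindings are installed, a reachable cluster with a standard /etc/ceph/ceph.conf, and an existing pool named 'data' (the pool and object names are illustrative):

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('data')                 # I/O context bound to a pool
    try:
        ioctx.write_full('greeting', b'hello world')   # store an object
        print(ioctx.read('greeting'))                  # read it back
    finally:
        ioctx.close()
        cluster.shutdown()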
For most object systems an object is just a bunch of bytes, maybe with some extended attributes. In Ceph you can store a lot more than that. You can store key/value pairs inside an object, think BerkeleyDB or SQL where each object is a logical container. It supports atomic transactions so you can do things like atomic compare-and-swap: update the bytes and the keys/values in an atomic fashion, and it will be consistently distributed and replicated across a cluster in a safe way. There is snapshotting that will give you per-directory snapshots, and inter-client communication for locking and whatnot. The really exciting part about this is the ability to implement your own functionality on the OSD.....
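A sketch of the key/value side through the same Python bindings, continuing from the connection above; everything queued on a write op is applied by the OSD as a single atomic transaction:

    with rados.WriteOpCtx() as op:
        # attach key/value pairs to the object's omap; the set_omap and any
        # other mutations on this op are applied together, atomically
        ioctx.set_omap(op, ('owner', 'state'), (b'alice', b'active'))
        ioctx.operate_write_op(op, 'greeting')

    with rados.ReadOpCtx() as op:
        vals, ret = ioctx.get_omap_vals(op, '', '', 10)   # up to 10 pairs
        ioctx.operate_read_op(op, 'greeting')
        for key, value in vals:
            print(key, value)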
These embedded object classes allow you to send an object method call to the cluster, and it will actually perform that computation without having to pull the data over the network. The downside to using these object classes is the injection of new functionality into the system: a compiled C++ plugin has to be delivered and dynamically loaded into each OSD process. This becomes more complicated if a cluster is composed of multiple architecture targets, and makes it difficult to update functionality on the fly. One approach to addressing these problems is to embed a language runtime within the OSD. Noah Watkins, one of our engineers, tackled this with some Lua bindings which are available.
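For the client side of an object-class call, newer python-rados builds expose an execute() method. The sketch below assumes the example "hello" class from the Ceph source tree is loaded on the OSDs; the class and method names are illustrative, not guaranteed to be present on a given cluster.

    # ask the OSD holding the object to run a method next to the data,
    # instead of shipping the data over the network to the client
    ret, out = ioctx.execute('greeting', 'hello', 'say_hello', b'')
    print(out)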
One of the more contentious assertions that Sage likes to make is that as we move towards exascale computing and beyond we'll need to transcend or replace POSIX. The hierarchical model just doesn't scale well beyond a certain level. Future models are going to have to start blurring the line between compute and storage, and recognizing when data is local to perform operations versus when you need to gather data from multiple sources for an operation. And finally, fault tolerance needs to become a first-class property of these architectures. As we push the scale of our existing architectures, building things like burst buffers to deal with huge checkpoints across millions of cores just doesn't make a whole lot of sense.
Having said all that, there are too many things (both people and code) built using POSIX mentality to ditch it any time soon. CephFS is designed to provide that POSIX layer on top of RADOS. [read slide] Now, as we've said there is certainly some work to be done on CephFS, but I want to share a bit about how it works since it (and similar thinking) will play a big part of Ceph's HPC and Big Data applications going forward.
CephFS adds a metadata server (or MDS) to the list of node types in your Ceph cluster. Something has to keep track of who created files, when they were created, and who has the right to access them. And something has to remember where they live within a tree. Clients accessing Ceph FS data first make a request to an MDS, which provides what they need to get files from the right OSDs.
There are multiple MDSs!
So how do you have one tree and multiple servers?
If there’s just one MDS (which is a terrible idea), it manages metadata for the entire tree.
When the second one comes along, it will intelligently partition the work by taking a subtree.
When the third MDS arrives, it will attempt to split the tree again.
Same with the fourth.
An MDS can actually even just take a single directory or file, if the load is high enough. This all happens dynamically based on load and the structure of the data, and it's called "dynamic subtree partitioning". This is done as a periodic load balance exchange. The transfer just ships the cache contents between MDSs and lets the clients continue transparently.
CephFS has some neat features that you don't find in most file systems. Because they built the filesystem namespace from the ground up, they were able to build these features into the infrastructure. One of these features is recursive accounting: the MDSs keep track of directory stats for every directory in the file system. For instance, when you do an 'ls -al', the file size shown for a directory is actually the total number of bytes stored in that directory recursively. It's the same thing you'd get from 'du', but in real time.
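Clients can read those recursive statistics directly as virtual extended attributes on any CephFS directory. A small sketch, assuming a CephFS mount at /mnt/cephfs (the mount point and directory are illustrative):

    import os

    rbytes = os.getxattr('/mnt/cephfs/mydir', 'ceph.dir.rbytes')   # recursive byte count
    rfiles = os.getxattr('/mnt/cephfs/mydir', 'ceph.dir.rfiles')   # recursive file count
    print(int(rbytes.decode()), int(rfiles.decode()))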
[Provides snapshots] The motivation here is once you start talking about petabytes and exabytes it doesn't make much sense to try to snapshot the entire tree. You need to be able to snapshot different directories and different data sets. You can add and remove snapshots for any directory with standard bash-type commands.
Also, next Ceph developer summit coming soon to plan for the Emperor release. Would love to see some blueprints submitted for CephFS work.