From SUSECon 2015: Smooth integration of emerging Software Defined Storage technologies into the traditional Data Center, using Fibre Channel and iSCSI as keys to success.
New Ceph capabilities and Reference Architectures (Kamesh Pemmaraju)
Have you heard about Inktank Ceph and are interested in learning some tips and tricks for getting started quickly and efficiently with Ceph? Then this is the session for you!
In this two-part session you will learn details of:
• the very latest enhancements and capabilities delivered in Inktank Ceph Enterprise such as a new erasure coded storage back-end, support for tiering, and the introduction of user quotas.
• best practices, lessons learned and architecture considerations founded in real customer deployments of Dell and Inktank Ceph solutions that will help accelerate your Ceph deployment.
Ceph is an open-source, software-defined storage platform and, I would say, an excellent storage backend for cloud storage; indeed, the only one. Ceph is the future of storage. In this presentation I briefly explain Ceph and OpenStack; I hope you enjoy it.
Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions (Colleen Corrice)
At Red Hat Storage Day Minneapolis on 4/12/16, Intel's Dan Ferber presented on Intel storage components, benchmarks, and contributions as they relate to Ceph.
Introduction to Ceph, an open-source, massively scalable distributed file system.
This document explains the architecture of Ceph and integration with OpenStack.
HKG15-401: Ceph and Software Defined Storage on ARM servers (Linaro)
HKG15-401: Ceph and Software Defined Storage on ARM servers
---------------------------------------------------
Speakers: Yazen Ghannam, Steve Capper
Date: February 12, 2015
---------------------------------------------------
★ Session Summary ★
Running Ceph in colocation, and ongoing optimizations
--------------------------------------------------
★ Resources ★
Pathable: https://hkg15.pathable.com/meetings/250828
Video: https://www.youtube.com/watch?v=RdZojLL7ttk
Etherpad: http://pad.linaro.org/p/hkg15-401
---------------------------------------------------
★ Event Details ★
Linaro Connect Hong Kong 2015 - #HKG15
February 9-13th, 2015
Regal Airport Hotel Hong Kong Airport
---------------------------------------------------
http://www.linaro.org
http://connect.linaro.org
Haodong Tang from Intel gave this talk at the 2018 Open Fabrics Workshop.
"An efficient network messenger is critical for today’s scale-out storage systems. Ceph is one of the most popular distributed storage systems, providing scalable and reliable object, block and file storage services. As the explosive growth of Big Data continues, there are strong demands to leverage Ceph to build high-performance, ultra-low-latency storage solutions in cloud and big-data environments. Traditional TCP/IP cannot satisfy this requirement, but Remote Direct Memory Access (RDMA) can.
"In this session, we'll present the challenges that the network messenger poses in today's distributed storage systems, with profiling results from a Ceph all-flash-array system showing that the network has already become the bottleneck, and introduce how we achieved an 8% performance benefit with the Ethernet RDMA protocol iWARP. We'll first present the design of integrating iWARP into Ceph's networking module, together with performance characterization results for IO-intensive workloads with iWARP enabled. In the second part, we will explore a proof-of-concept solution of Ceph on NVMe over iWARP to build a high-performance, high-density storage solution. Finally, we will showcase how these solutions can improve OSD scalability, and what the next optimization opportunities are based on the current analysis."
Watch the video: https://wp.me/p3RLHQ-ikV
Learn more: http://intel.com
and
https://insidehpc.com/2018/04/amazon-libfabric-case-study-flexible-hpc-infrastructure/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
This presentation provides a basic overview of Ceph, upon which SUSE Storage is based. It discusses the various factors and trade-offs that affect the performance and other functional and non-functional properties of a software-defined storage (SDS) environment.
OpenStack and Ceph case study at the University of Alabama (Kamesh Pemmaraju)
The University of Alabama at Birmingham gives scientists and researchers a massive, on-demand, virtual storage cloud using OpenStack and Ceph for less than $0.41 per gigabyte. This is an OpenStack Summit session given by Kamesh Pemmaraju of Dell and John Paul of the University of Alabama. It details how the university IT staff deployed a private storage cloud infrastructure using the Dell OpenStack cloud solution with Dell servers, storage, networking and OpenStack, and Inktank Ceph. After assessing a number of traditional storage scenarios, the University partnered with Dell and Inktank to architect a centralized cloud storage platform that was capable of scaling seamlessly and rapidly, was cost-effective, and could leverage a single hardware infrastructure for the OpenStack compute and storage environment.
Building reliable Ceph clusters with SUSE Enterprise Storage (Lars Marowsky-Brée)
This tutorial was presented by Lars Marowsky-Brée at SUSECon 2016 in Washington, DC (TUT91787). It covers real world survival skills and considerations in architecting, deploying, and operating Ceph clusters to deliver Software-Defined-Storage in the business world for block, file, and object storage.
Why do containers suddenly matter so much when they have been around since 1998? Take a look at the potential of OpenStack's Magnum, Murano and Nova-Docker in the context of leveraging the incredible interest in Linux Containers brought about by Docker.
Check out www.stackengine.com to learn more about our excellent container management solution.
Managing Container Clusters in OpenStack Native Way (Qiming Teng)
This is a presentation from the OpenStack Austin Summit. It talks about managing containers in an OpenStack native way where containers are treated as first class citizens.
Webinar: container management in OpenStack (CREATE-NET)
This webinar covers the topic of containers in OpenStack. In particular, it offers an overview of what containers are, covering LXC, Docker and Kubernetes, along with the specific OpenStack examples of Nova-Docker, Murano and Magnum. The final part features live demos of the elements covered earlier.
Cloud init and cloud provisioning [openstack summit vancouver] (Joshua Harlow)
Evil Superuser's HOWTO: Launching instances to do your bidding.
You click 'run' on the OpenStack dashboard, or launch a new instance via the API. Some provisioning magic happens and soon you've got a server created especially for you. Did you ever wonder what magic happens to a standard image on boot? Have you wanted to launch instances and have them join your infrastructure with no manual interaction? Cloud-init is software that runs in most Linux instances. It can take your input and do your bidding. Learn what things cloud-init magically does for you and how you can make it do more. Also, take advantage of the after-talk to pester cloud-init developers on what is missing or throw rotten fruit in their direction.
Open Container Technologies and OpenStack - Sorting Through Kubernetes, the O... (Daniel Krook)
Presentation at the OpenStack Summit in Barcelona, Spain on October 25, 2016.
http://bit.ly/os-kub-oci-cncf
Containers along with next generation topics such as orchestration and serverless computing continue to draw interest across the application developer and data center operator communities because of the enormous potential of the technology and the rapid pace of change.
As the potential of Docker continues to evolve, Kubernetes emerges as the leading orchestration technology, and the OpenStack Magnum project matures, many want to see shared governance over the baseline container specification and associated runtime and format/image to protect investments and enable confident adoption of this emerging technology.
Join this session to learn the latest about the Open Container Initiative (www.opencontainers.org) and the Cloud Native Computing Foundation (cncf.io) - both collaborative projects of the Linux Foundation - that drive the latest cloud native technologies and projects and see how they relate to Magnum and Kuryr.
Daniel Krook, Senior Software Engineer, IBM
Jeffrey Borek, Program Director, Open Tech, IBM
Sarah Novotny, Senior Kubernetes Community Manager, Google
Brent Compton and Kyle Bader of Red Hat took the stage at Red Hat Storage Day New York on 1/19/16 to share with attendees best practices and lessons learned for architecting solutions with Red Hat Ceph Storage.
MySQL and Ceph: Head-to-Head Performance Lab (Red_Hat_Storage)
In this April 2016 session, Red Hat's Brent Compton and Kyle Bader compared the performance of MySQL on public and private clouds with a head-to-head look at (a) MySQL on Amazon AWS EBS, (b) MySQL on Amazon AWS EBS Provisioned IOPS, (c) MySQL on an OpenStack/Ceph private cloud (SuperMicro HDD-based Ceph storage), (d) MySQL on an OpenStack/Ceph private cloud (SuperMicro all-flash Ceph storage), and (e) MySQL on a single bare metal SuperMicro server (baseline).
SUSE Enterprise Storage - a Gentle Introduction (Gábor Nyers)
SUSE Enterprise Storage is a scalable and resilient software-based storage solution. It lets you build cost-efficient and highly scalable data storage using commodity, off-the-shelf servers and disk drives.
How do you build a private Amazon AWS with open source? This talk presents the realization of a private cloud, from concept to production system. With AWS, Amazon made this idea available to the general public as a public cloud. There are, however, good reasons to build your own private cloud, such as security concerns and legal criteria. Dr. Lukas Pustina and Daniel Schneller of codecentric AG built a private cloud for the startup CenterDevice. This talk explains concrete concepts, decisions, and problems, and the occasional anecdote from the daily madness of cloud administration won't be missing either. Based on specific requirements, the components used, Ubuntu Linux, Ansible, Ceph and OpenStack, are introduced.
Slides of our talk at the DevOps Conference 2015 in Berlin.
How to Monitor Application Performance in a Container-Based World (Ken Owens)
Monitoring applications that consist of multiple containers is not easy, and is not available as part of any container solution or orchestration platform. This talk looks at how to address application performance by leveraging business service-level objectives, and at the architecture for implementing the solution. The solution has been prototyped at ciscoshipped.io and we would love your thoughts.
Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph (Sean Cohen)
IT organizations require a disaster recovery strategy addressing outages with loss of storage, or extended loss of availability at the primary site. Applications need to rapidly migrate to the secondary site and transition with little or no impact to their availability. This talk will cover the various architectural options and levels of maturity in OpenStack services for building multi-site configurations using the Mitaka release. We’ll present the latest capabilities for Volume, Image and Object Storage with Ceph as the backend storage solution, and look at the future developments the OpenStack and Ceph communities are driving to improve and simplify the relevant use cases.
Slides from OpenStack Austin Summit 2016 session: http://alturl.com/hpesz
One might find it ironic that some of the world's fastest supercomputers -- vast clusters capable of trillions of floating point operations per second -- can take upwards of half an hour to reboot in between jobs. While we often talk about the density advantages of containers, it's the opposite approach that we use in the High Performance Computing world! Here, we use exactly one system container per node, giving it unlimited access to all of the host's CPU, memory, disk, IO, and network. And yet we can still leverage the management characteristics of containers -- security, snapshots, live migration, and instant deployment -- to recycle each node in between jobs. In this talk, we'll examine a reference architecture and some best practices around containers in HPC environments.
Who carries your container? Zun or Magnum? (Madhuri Kumari)
There are multiple solutions in OpenStack to enable containers. These slides discuss two OpenStack projects, Magnum and Zun, and their use cases.
Storage tiering and erasure coding in Ceph (SCaLE13x) (Sage Weil)
Ceph is designed around the assumption that all components of the system (disks, hosts, networks) can fail, and has traditionally leveraged replication to provide data durability and reliability. The CRUSH placement algorithm is used to allow failure domains to be defined across hosts, racks, rows, or datacenters, depending on the deployment scale and requirements.
Recent releases have added support for erasure coding, which can provide much higher data durability and lower storage overheads. However, in practice erasure codes have different performance characteristics than traditional replication and, under some workloads, come at some expense. At the same time, we have introduced a storage tiering infrastructure and cache pools that allow alternate hardware backends (like high-end flash) to be leveraged for active data sets while cold data are transparently migrated to slower backends. The combination of these two features enables a surprisingly broad range of new applications and deployment configurations.
This talk will cover a few Ceph fundamentals, discuss the new tiering and erasure coding features, and then discuss a variety of ways that the new capabilities can be leveraged.
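The replication-versus-erasure-coding trade-off this abstract describes can be illustrated with some back-of-the-envelope arithmetic. The sketch below is not from the talk; the profile values (3 replicas, k=8/m=3) are just commonly cited examples:

```python
# Raw-capacity overhead: bytes of raw storage consumed per byte of
# user data, for replicated pools vs. erasure-coded (EC) pools.

def replication_overhead(replicas: int) -> float:
    """A replicated pool stores every object `replicas` times."""
    return float(replicas)

def erasure_overhead(k: int, m: int) -> float:
    """An EC pool splits each object into k data chunks plus m coding
    chunks; any k of the k+m chunks are enough to rebuild the object,
    so up to m chunks (and hence m disks/hosts) can be lost."""
    return (k + m) / k

# Triple replication: 3.0x raw capacity, tolerates 2 lost copies.
print(replication_overhead(3))    # 3.0
# An example k=8, m=3 EC profile: tolerates 3 lost chunks.
print(erasure_overhead(8, 3))     # 1.375
```

Even in this toy form the talk's point is visible: for similar or better loss tolerance, erasure coding cuts capacity overhead substantially, at the cost of different (often worse) small-write and recovery performance, which is what the cache-tiering layer helps absorb.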
OpenNebulaConf 2016 - OpenNebula, a story about flexibility and technological... (OpenNebula Project)
Cloud providers are constantly addressing the technology limitations on their infrastructures, which must be overcome to meet customer needs. In this presentation, we will demonstrate how the technological agnosticism and management flexibility of OpenNebula have allowed Todoencloud to provide the most efficient open-source solution to the needs of its customers, choosing the most appropriate virtualization technology (Xen and KVM), storage approach (ZFS vs Ceph), cloud-bursting solutions (Azure, Amazon) and customized networking topologies.
Ceph, Now and Later: Our Plan for Open Unified Cloud Storage (Sage Weil)
Ceph is a highly scalable open source distributed storage system that provides object, block, and file interfaces on a single platform. Although Ceph RBD block storage has dominated OpenStack deployments for several years, maturing object (S3, Swift, and librados) interfaces and stable CephFS (file) interfaces now make Ceph the only fully open source unified storage platform.
This talk will cover Ceph's architectural vision and project mission and how our approach differs from alternative approaches to storage in the OpenStack ecosystem. In particular, we will look at how our open development model dovetails well with OpenStack, how major contributors are advancing Ceph capabilities and performance at a rapid pace to adapt to new hardware types and deployment models, and what major features we are prioritizing for the next few years to meet the needs of expanding cloud workloads.
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C... (Odinot Stanislas)
After a short introduction to distributed storage and a description of Ceph, Jian Zhang presents some interesting benchmarks: sequential tests, random tests, and above all a comparison of the results before and after optimizations. The configuration parameters touched and the optimizations applied (large page numbers, OMAP data on a separate disk, ...) deliver at least a 2x performance gain.
Yesterday's thinking may hold that NVMe (NVM Express) is still in transition to a production-ready solution. In this session, we will discuss how NVMe has evolved to be ready for production, cover the history and evolution of NVMe and the Linux stack, and show how NVMe has progressed to become the low-latency, highly reliable database key-value store mechanism that will drive the future of cloud expansion. Examples of protocol efficiencies and of the types of storage engines that are optimizing for NVMe will be discussed. Please join us for an exciting session on how in-memory computing and persistence have evolved.
Ceph at Work in Bloomberg: Object Store, RBD and OpenStack (Red_Hat_Storage)
Bloomberg's Chris Jones and Chris Morgan joined Red Hat Storage Day New York on 1/19/16 to explain how Red Hat Ceph Storage helps the financial giant tackle its data storage challenges.
Those who out-compute can many times out-compete. The cloud gives you access to a massive amount of compute power when you need it. This talk will present an introduction to HPC in the cloud, including, the benefits of HPC in the cloud, how to get started, some tools to use, and how you can manage data. We will showcase several examples of HPC in the cloud by a number of public sector and commercial customers.
Created by: Dr. Jeff Layton, Principal, Solutions Architect
TUT18972: Unleash the power of Ceph across the Data Center
1. Unleash the Power of Ceph
Across the Data Center
TUT18972: FC/iSCSI for Ceph
Ettore Simone
Senior Architect
Alchemy Solutions Lab
ettore.simone@alchemy.solutions
2. 2
Agenda
• Introduction
• The Bridge
• The Architecture
• Use Cases
• How It Works
• Some Benchmarks
• Some Optimizations
• Q&A
• Bonus Tracks
4. 4
About Ceph
“Ceph is a distributed object store and file system
designed to provide excellent performance, reliability
and scalability.” (http://ceph.com/)
FUT19336 - SUSE Enterprise Storage Overview and Roadmap
TUT20074 - SUSE Enterprise Storage Design and Performance
6. 6
Some facts
Common data center storage solutions are built
mainly on top of Fibre Channel (yes, and NAS too).
Source: Wikibon Server SAN Research Project 2014
7. 7
Is the storage mindset changing?
New/Cloud
‒ Micro-services Composed Applications
‒ NoSQL and Distributed Database (lazy commit, replication)
‒ Object and Distributed Storage
SCALE-OUT
Classic
‒ Traditional Application → Relational DB → Traditional Storage
‒ Transactional Process → Commit on DB → Commit on Disk
SCALE-UP
8. 8
Is the storage mindset changing? No.
New/Cloud
‒ Micro-services Composed Applications
‒ NoSQL and Distributed Database (lazy commit, replication)
‒ Object and Distributed Storage
Natural playground of Ceph
Classic
‒ Traditional Application → Relational DB → Traditional Storage
‒ Transactional Process → Commit on DB → Commit on Disk
Where we want to introduce Ceph!
9. 9
Is the new kid on the block so noisy?
Ceph is cool but I cannot rearchitect my storage!
And what about my shiny big disk arrays?
I have already N protocols, why another one?
<Add your own fear here>
10. 10
Our goal
How to achieve a non-disruptive introduction of Ceph
into a traditional storage infrastructure?
(diagram: SAN = SCSI over FC; NAS = NFS/SMB/iSCSI over Ethernet; Ceph = RBD over Ethernet)
11. 11
How to let Ceph coexist happily in your
data center with the existing neighborhood
(traditional workloads, legacy servers, FC switches, etc.)
14. 14
Back to our goal
How to achieve a non-disruptive introduction of Ceph
into a traditional storage infrastructure?
(diagram: SAN, NAS, RBD)
15. 15
Linux-IO Target (LIO™)
It is the most common open-source SCSI target in
modern GNU/Linux distros:
Fabric modules: FC, FCoE, FireWire, iSCSI, iSER, SRP, loop, vHost
Backstore modules: FILEIO, IBLOCK, RBD, pSCSI, RAMDISK, TCMU
(LIO core sits between them, in kernel space)
26. 26
Smooth transition
Native migration of SAN LUNs to RBD volumes helps
migration, conversion, and coexistence:
(diagram: Traditional Workloads, SAN, GW, Ceph, Private Cloud, New Workloads)
30. 30
Storage replacement
No drama at the end of life/support of traditional
storage arrays:
(diagram: Traditional Workloads, GW, Ceph, Private Cloud, New Workloads)
33. 33
Ceph and Linux-IO
SCSI commands arriving from the fabrics are handled by the
LIO core, configured using targetcli or directly via
configfs (/sys/kernel/config/target), and proxied to the
target block device through the relevant backstore module.
(diagram: CLIENTS ↔ LIO ↔ CEPH CLUSTER; configuration in user space, LIO in kernel space)
34. 34
Enable QLogic HBAs in target mode
# modprobe qla2xxx qlini_mode="disabled"
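A modprobe option set on the command line is lost at reboot. A minimal sketch for persisting it (the file name and the `persist_target_mode` helper are illustrative; on SLES the driver typically loads from the initrd, hence the dracut step):

```shell
# Sketch: persist qlini_mode=disabled so the HBA comes up in target mode at boot.
# persist_target_mode is a hypothetical helper; the directory defaults to
# /etc/modprobe.d but can be overridden for testing.
persist_target_mode() {
    dir="${1:-/etc/modprobe.d}"
    echo 'options qla2xxx qlini_mode=disabled' > "$dir/qla2xxx-target.conf"
    # The driver usually loads from the initrd, so rebuild it afterwards (SLES):
    # dracut -f
}
# usage: persist_target_mode            # writes to /etc/modprobe.d (needs root)
```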
35. 35
Identify and enable HBAs
# cat /sys/class/scsi_host/host*/device/fc_host/host*/port_name |
  sed -e 's/../:&/g' -e 's/:0x://'
# targetcli qla2xxx/ create ${WWPN}
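The sed pipeline above only reformats the raw port_name into colon-separated WWPN notation. A quick way to sanity-check it on a sample value (the WWPN below is made up):

```shell
# Feed a sample raw port_name through the same sed expression used above.
# 0x21000024ff543abc is a made-up WWPN for illustration only.
echo 0x21000024ff543abc | sed -e 's/../:&/g' -e 's/:0x://'
# → 21:00:00:24:ff:54:3a:bc
```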
36. 36
Map RBDs and create backstores
# rbd map -p ${POOL} ${VOL}
# targetcli backstores/rbd create name="${POOL}-${VOL}" dev="${DEV}"
37. 37
Create LUNs connected to RBDs
# targetcli qla2xxx/${WWPN}/luns create /backstores/rbd/${POOL}-${VOL}
38. 38
“Zoning” to filter access with ACLs
# targetcli qla2xxx/${WWPN}/acls create ${INITIATOR} true
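Taken together, the previous slides amount to a short provisioning script. A dry-run sketch (pool, volume, WWPN, and initiator values are hypothetical placeholders; set RUN to empty to execute for real):

```shell
# Dry-run sketch of the whole RBD-to-FC export flow; RUN=echo only prints
# the commands instead of executing them. All identifiers are illustrative.
POOL=rbd; VOL=vol1
WWPN=21:00:00:24:ff:54:3a:bc
INITIATOR=10:00:00:00:c9:12:34:56
RUN=echo

$RUN rbd map -p "${POOL}" "${VOL}"                                   # map the image
$RUN targetcli backstores/rbd create name="${POOL}-${VOL}" dev=/dev/rbd0
$RUN targetcli qla2xxx/"${WWPN}"/luns create /backstores/rbd/"${POOL}-${VOL}"
$RUN targetcli qla2xxx/"${WWPN}"/acls create "${INITIATOR}" true     # "zoning"
```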
40. 40
First of all...
This solution is NOT a drop-in replacement for a SAN or
a NAS (at the moment, at least!).
The main focus is to identify how to minimize the
overhead from native RBD to FC/iSCSI.
41. 41
Raw performance estimation on 15K disks
Physical disk IOPS → Ceph IOPS:
‒ 4K RND Read: 193 × 24 = 4,632
‒ 4K RND Write: 178 × 24 = 4,272 → /3 (replicas) = 1,424 → /3 (no SSD journal) ≈ 475
Physical disk throughput → Ceph throughput:
‒ 512K RND Read: 108 MB/s × 24 ≈ 2,600 MB/s
‒ 512K RND Write: 105 MB/s × 24 = 2,520 → /3 (replicas) = 840 → /2 (no SSD journal) = 420 MB/s
NOTE:
‒ 24 OSDs and 3 replicas per pool
‒ No SSD for journal (so ~1/3 of the IOPS and ~1/2 of the bandwidth for
writes)
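The estimates above can be reproduced with plain shell arithmetic (24 OSDs, 3 replicas, no SSD journal, as per the note):

```shell
# Back-of-envelope check of the slide's numbers.
echo $(( 193 * 24 ))           # 4K random read IOPS: 4632
echo $(( 178 * 24 / 3 / 3 ))   # 4K random write IOPS: 474 (slide rounds to ~475)
echo $(( 108 * 24 ))           # 512K read MB/s: 2592 (slide rounds to ~2600)
echo $(( 105 * 24 / 3 / 2 ))   # 512K write MB/s: 420
```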
44. 46
What we are working on
Centralized management with GUI/CLI
‒ Deploy MON/OSD/GW nodes
‒ Manage Nodes/Disk/Pools/Map/LIO
‒ Monitor cluster and node status
Reacting to failures
Using librados/librbd with tcmu for backstore
46. 48
More integration with existing tools
Extend lrbd to accept multiple fabrics:
‒ iSCSI (native support)
‒ FC
‒ FCoE
Linux-IO:
‒ Use of librados via tcmu
48. 50
I/O schedulers matter!
On OSD nodes:
‒ deadline on physical disks (cfq if you want to ionice the scrub thread)
‒ noop on RAID devices
‒ read_ahead_kb=2048
On Gateway nodes:
‒ noop on mapped RBD
On Client nodes:
‒ noop or deadline on multipath device
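These settings map directly onto sysfs writes. A minimal helper sketch (`set_queue` is an illustrative name, and the SYSBLK override exists only so the sketch can be exercised against a fake tree):

```shell
# Apply an I/O scheduler and optional read-ahead to a block device via sysfs.
# set_queue is a hypothetical helper; SYSBLK defaults to the real sysfs tree.
set_queue() {   # usage: set_queue <device> <scheduler> [read_ahead_kb]
    dev=$1; sched=$2; ra=$3
    echo "$sched" > "${SYSBLK:-/sys/block}/$dev/queue/scheduler"
    if [ -n "$ra" ]; then
        echo "$ra" > "${SYSBLK:-/sys/block}/$dev/queue/read_ahead_kb"
    fi
}
# e.g. on an OSD node:   set_queue sda deadline 2048
# e.g. on a gateway:     set_queue rbd0 noop
```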
50. 52
Design optimizations
• SSDs on monitor nodes for LevelDB: lower CPU and
memory usage and shorter recovery time
• SSD journals decrease I/O latency: 3× the IOPS and better
throughput
55. 57
Business Continuity architecture
Low latency connected sites:
WARNING: to improve availability, a third site hosting a
quorum node is highly encouraged.
56. 58
Disaster Recovery architecture
High latency or disconnected sites:
As in the OpenStack Ceph plug-in for Cinder Backup:
# rbd export-diff pool/image@end --from-snap start - |
  ssh -C remote rbd import-diff - pool/image
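A full replication cycle wraps that pipe with snapshot management on both ends. A dry-run sketch (pool/image, snapshot names, and the remote host are placeholders; RUN=echo only prints the commands):

```shell
# Incremental DR cycle: snapshot, ship the delta, retire the old baseline.
# All names below are illustrative; RUN=echo makes this a dry run.
POOL_IMG=pool/image; REMOTE=remote; FROM=start; TO=end
RUN=echo

$RUN rbd snap create "${POOL_IMG}@${TO}"
$RUN sh -c "rbd export-diff ${POOL_IMG}@${TO} --from-snap ${FROM} - | ssh -C ${REMOTE} rbd import-diff - ${POOL_IMG}"
$RUN rbd snap rm "${POOL_IMG}@${FROM}"   # old baseline no longer needed locally
```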
57. 59
KVM Gateways
• VT-d physical passthrough of QLogic HBAs
• RBD Volumes as VirtIO devices
• Linux-IO iblock backstore
71. Unpublished Work of SUSE LLC. All Rights Reserved.
This work is an unpublished work and contains confidential, proprietary and trade secret information of SUSE LLC.
Access to this work is restricted to SUSE employees who have a need to know to perform tasks within the scope of their
assignments. No part of this work may be practiced, performed, copied, distributed, revised, modified, translated,
abridged, condensed, expanded, collected, or adapted without the prior written consent of SUSE.
Any use or exploitation of this work without authorization could subject the perpetrator to criminal and civil liability.
General Disclaimer
This document is not to be construed as a promise by any participating company to develop, deliver, or market a
product. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making
purchasing decisions. SUSE makes no representations or warranties with respect to the contents of this document, and
specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. The
development, release, and timing of features or functionality described for SUSE products remains at the sole discretion
of SUSE. Further, SUSE reserves the right to revise this document and to make changes to its content, at any time,
without obligation to notify any person or entity of such revisions or changes. All SUSE marks referenced in this
presentation are trademarks or registered trademarks of Novell, Inc. in the United States and other countries. All third-
party trademarks are the property of their respective owners.