This document provides an overview of trends and evolutions in data storage. It discusses storage architecture before and after Amazon S3, including features like snapshots, cloning, and replication. It then covers Ceph storage concepts such as RADOS, RBD, CephFS, and the RADOS Gateway (RGW), and explains specific Ceph components such as MON, CRUSH, and OSD. Example commands are shown for setting up and using RBD block storage and CephFS distributed file storage. Lastly, it briefly discusses multi-site object storage replication between Ceph clusters.
HKG15-401: Ceph and Software Defined Storage on ARM servers (Linaro)
---------------------------------------------------
Speakers: Yazen Ghannam, Steve Capper
Date: February 12, 2015
---------------------------------------------------
★ Session Summary ★
Running Ceph in colocation; ongoing optimizations
--------------------------------------------------
★ Resources ★
Pathable: https://hkg15.pathable.com/meetings/250828
Video: https://www.youtube.com/watch?v=RdZojLL7ttk
Etherpad: http://pad.linaro.org/p/hkg15-401
---------------------------------------------------
★ Event Details ★
Linaro Connect Hong Kong 2015 - #HKG15
February 9-13th, 2015
Regal Airport Hotel Hong Kong Airport
---------------------------------------------------
http://www.linaro.org
http://connect.linaro.org
Storage tiering and erasure coding in Ceph (SCaLE13x), by Sage Weil
Ceph is designed around the assumption that all components of the system (disks, hosts, networks) can fail, and has traditionally leveraged replication to provide data durability and reliability. The CRUSH placement algorithm is used to allow failure domains to be defined across hosts, racks, rows, or datacenters, depending on the deployment scale and requirements.
Recent releases have added support for erasure coding, which can provide much higher data durability and lower storage overheads. However, in practice erasure codes have different performance characteristics than traditional replication and, under some workloads, come at some expense. At the same time, we have introduced a storage tiering infrastructure and cache pools that allow alternate hardware backends (like high-end flash) to be leveraged for active data sets while cold data are transparently migrated to slower backends. The combination of these two features enables a surprisingly broad range of new applications and deployment configurations.
This talk will cover a few Ceph fundamentals, discuss the new tiering and erasure coding features, and then discuss a variety of ways that the new capabilities can be leveraged.
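To make the abstract's overhead claim concrete, here is a small illustrative calculation (plain arithmetic, not Ceph code): a 4+2 erasure-coded pool and a 3x replicated pool both survive the loss of any two chunks or copies, but at very different raw-capacity costs.

```python
# Illustrative comparison of raw storage overhead: replication vs. erasure coding.
# Not Ceph code; just the arithmetic behind the durability/overhead trade-off.

def replication_overhead(copies: int) -> float:
    """Raw bytes stored per usable byte with N-way replication."""
    return float(copies)

def ec_overhead(k: int, m: int) -> float:
    """Raw bytes stored per usable byte for a k+m erasure code
    (k data chunks plus m coding chunks per object)."""
    return (k + m) / k

if __name__ == "__main__":
    print(f"3x replication: {replication_overhead(3):.2f}x raw per usable byte")
    print(f"EC 4+2:         {ec_overhead(4, 2):.2f}x raw per usable byte")
    # Both tolerate two lost chunks/copies, but the erasure-coded pool
    # needs half the raw capacity (1.5x vs 3.0x).
```

This is exactly the trade-off the talk discusses: erasure coding cuts overhead, but reads and writes touch more OSDs, which is why cache tiering in front of an EC pool is attractive.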
Introduction to Ceph, an open-source, massively scalable distributed file system.
This document explains the architecture of Ceph and integration with OpenStack.
Ceph at Work in Bloomberg: Object Store, RBD and OpenStack (Red Hat Storage)
Bloomberg's Chris Jones and Chris Morgan joined Red Hat Storage Day New York on 1/19/16 to explain how Red Hat Ceph Storage helps the financial giant tackle its data storage challenges.
CRUSH is the powerful, highly configurable algorithm Red Hat Ceph Storage uses to determine how data is stored across the many servers in a cluster. A healthy Red Hat Ceph Storage deployment depends on a properly configured CRUSH map. In this session, we will review the Red Hat Ceph Storage architecture and explain the purpose of CRUSH. Using example CRUSH maps, we will show you what works and what does not, and explain why.
Presented at Red Hat Summit 2016-06-29.
Ceph Object Storage Reference Architecture Performance and Sizing Guide (Karan Singh)
Together with my colleagues on the Red Hat Storage team, I am very proud to have worked on this reference architecture for Ceph Object Storage.
If you are building Ceph object storage at scale, this document is for you.
Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions (Colleen Corrice)
At Red Hat Storage Day Minneapolis on 4/12/16, Intel's Dan Ferber presented on Intel storage components, benchmarks, and contributions as they relate to Ceph.
Ceph Object Storage Performance Secrets and Ceph Data Lake Solution (Karan Singh)
In this presentation, I explain how Ceph Object Storage performance can be improved drastically, together with object storage best practices, recommendations, and tips. I also cover the Ceph shared data lake, which is becoming very popular.
Ceph is an open source distributed storage system designed for scalability and reliability. Ceph's block device, RADOS block device (RBD), is widely used to store virtual machines, and is the most popular block storage used with OpenStack.
In this session, you'll learn how RBD works, including how it:
* Uses RADOS classes to make access easier from user space and within the Linux kernel.
* Implements thin provisioning.
* Builds on RADOS self-managed snapshots for cloning and differential backups.
* Increases performance with caching of various kinds.
* Uses watch/notify RADOS primitives to handle online management operations.
* Integrates with QEMU, libvirt, and OpenStack.
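The thin-provisioning bullet above can be sketched in a few lines. The code below is a conceptual model, not the librbd API: it mimics how RBD stripes an image over fixed-size RADOS objects and allocates a backing object only when an extent is first written.

```python
# Conceptual sketch of thin provisioning: a large image consumes backing
# storage only for the objects that have actually been written.
# Class and method names here are illustrative; this is not librbd.

OBJECT_SIZE = 4 * 1024 * 1024  # RBD's default 4 MiB object size

class ThinImage:
    def __init__(self, size: int):
        self.size = size
        self.objects: dict[int, bytearray] = {}  # allocated lazily on write

    def write(self, offset: int, data: bytes) -> None:
        """Write data, allocating backing objects only as needed."""
        while data:
            obj_no, obj_off = divmod(offset, OBJECT_SIZE)
            chunk = data[:OBJECT_SIZE - obj_off]  # stay inside one object
            obj = self.objects.setdefault(obj_no, bytearray(OBJECT_SIZE))
            obj[obj_off:obj_off + len(chunk)] = chunk
            offset += len(chunk)
            data = data[len(chunk):]

    def used_bytes(self) -> int:
        """Backing storage actually consumed, regardless of image size."""
        return len(self.objects) * OBJECT_SIZE
```

For example, a "10 GiB" image that has seen a single 100-byte write consumes one 4 MiB object, not 10 GiB; this is why cloned VM images built on snapshots stay cheap until they diverge.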
Ceph Day Santa Clara: The Future of CephFS + Developing with Librados (Ceph Community)
Sage Weil, Creator of Ceph, Founder & CTO, Inktank
CephFS is a distributed filesystem built on RADOS, offering POSIX semantics and a true scale-out architecture. While production deployments of CephFS do exist, it still needs lots of testing and hardening before it can be used in the most challenging (and interesting) scenarios. In this session, Sage will discuss the future of CephFS, including the areas where it still needs work and ways the community can help.
RADOS is a surprisingly flexible object store. To take advantage of its rich feature set, developers can build with its programmable library, librados. Librados is available in many languages, and offers access to key/value stores, object classes, cluster health and status, and other useful RADOS internals. This session will cover how to use librados, discuss situations where librados is the right choice, and share a list of lesser-known RADOS features that developers can tap into.
OSDC 2015: John Spray | The Ceph Storage System (NETWAYS)
Ceph is an open source distributed object store and file system that provides excellent performance, reliability and scalability.
In this presentation, the Ceph architecture will be explained, and attendees will be introduced to the block, object, and file interfaces to Ceph.
The Future of Cloud Software Defined Storage with Ceph: Andrew Hatfield, Red Hat (OpenStack)
Audience: Intermediate
About: Learn how cloud storage differs to traditional storage systems and how that delivers revolutionary benefits.
Starting with an overview of how Ceph integrates tightly into OpenStack, you'll see why 62% of OpenStack users choose Ceph. We'll then take a peek into the very near future to see how rapidly Ceph is advancing, and how you'll be able to achieve all your childhood hopes and dreams in ways you never thought possible.
Speaker Bio: Andrew Hatfield – Practice Lead–Cloud Storage and Big Data, Red Hat
Andrew has over 20 years experience in the IT industry across APAC, specialising in Databases, Directory Systems, Groupware, Virtualisation and Storage for Enterprise and Government organisations. When not helping customers slash costs and increase agility by moving to the software-defined storage future, he’s enjoying the subtle tones of Islay Whisky and shredding pow pow on the world’s best snowboard resorts.
OpenStack Australia Day - Sydney 2016
https://events.aptira.com/openstack-australia-day-sydney-2016/
2. HELLO!
• Jefferson, Storage Specialist at Walmart.com
• Data Processing, Fatec de São Paulo
• Experience with high-criticality data storage systems
• SNIA certificate
8. Two Types of Storage
(diagram: a controller fronting many disks, in two configurations)
Block (SAN): exposed over FC/iSCSI; the host sees /dev/sda or drive F:
Unified: adds file protocols on top of block: CIFS (\\IP\SHARE) and NFS (IP:/MOUNT)
A local filesystem then formats the block device into 4k blocks, for example:
• Ext4
• XFS
• Btrfs
• ZFS
13. Ceph Versions
Argonaut – on July 3, 2012
Bobtail (v0.56) – on January 1, 2013
Cuttlefish (v0.61) – on May 7, 2013
Dumpling (v0.67) – on August 14, 2013
Emperor (v0.72) – on November 9, 2013
Firefly (v0.80) – on May 7, 2014
Giant (v0.87) – on October 29, 2014
Hammer (v0.94) – on April 7, 2015
Infernalis (v9.2.0) – on November 6, 2015
Jewel (v10.2.0) – on April 21, 2016
14. Ceph
Open source (LGPL license)
Distributed, software-defined storage
No single point of failure
Massively scalable
Self-healing
Unified storage: object, block and file
15. Ceph architecture
CEPHFS - a distributed file system with POSIX semantics and scale-out metadata management
RGW - a web services gateway for object storage, compatible with S3 and Swift
RBD - a reliable, fully-distributed block device with cloud platform integration
LIBRADOS - a library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP)
RADOS
(diagram: APP, HOST, and CLIENT access paths, all built on RADOS)
16. RADOS
Reliable Autonomic Distributed Object Store
Replication
Flat object namespace within each pool
Strong consistency (CP system)
Infrastructure aware, dynamic topology
Hash-based placement (CRUSH)
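The "hash-based placement" bullet can be sketched as follows. This is a toy stand-in for CRUSH (the real algorithm walks the weighted hierarchy in the cluster map, and Ceph uses a different hash); it only shows the shape of the two-step mapping from object name to placement group to OSDs.

```python
# Toy illustration of Ceph's two-step placement: object -> PG -> OSDs.
# The pool size, OSD count, and hash choice below are assumptions for
# the sketch; real Ceph runs CRUSH(pg, cluster state, rule) instead.
import hashlib

PG_NUM = 128             # placement groups in the pool (assumed)
OSDS = list(range(12))   # assumed cluster of 12 OSDs
REPLICAS = 3

def object_to_pg(name: str) -> int:
    """Step 1: hash the object name into a placement group."""
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return h % PG_NUM

def pg_to_osds(pg: int) -> list[int]:
    """Step 2: deterministically pick REPLICAS distinct OSDs for the PG."""
    start = pg % len(OSDS)
    return [OSDS[(start + i) % len(OSDS)] for i in range(REPLICAS)]

if __name__ == "__main__":
    pg = object_to_pg("rbd_data.1234.000000000000")
    print(f"pg {pg} -> osds {pg_to_osds(pg)}")
```

Because both steps are pure functions of the name and the cluster state, any client can compute an object's location directly; there is no central lookup table to query or keep consistent.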
17. OSD
3 to 10,000 OSDs per cluster
One per disk
Serves stored objects to clients
Intelligently peers for replication
18. Monitor node
Maintain cluster membership and state
Provide consensus for distributed decision-making
Small, odd number
These do not serve stored objects to clients
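The "small, odd number" guidance follows from majority quorum: an even monitor count adds cost without adding failure tolerance. A quick check (plain arithmetic, not Ceph code):

```python
# Why Ceph monitor counts are odd: the monitors' consensus protocol
# needs a strict majority, so a fourth monitor tolerates no more
# failures than three do.

def quorum(n: int) -> int:
    """Smallest strict majority of n monitors."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """Monitors that can be down while a quorum survives."""
    return n - quorum(n)

if __name__ == "__main__":
    for n in range(1, 8):
        print(f"{n} mons: quorum {quorum(n)}, tolerates {tolerated_failures(n)} down")
    # 3 and 4 monitors both tolerate only one failure; 5 tolerate two.
```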
19. (diagram)
20. Object Placement
Pool
placement group (PG)
CRUSH(pg, cluster state, rule) = [A, B]
(diagram: many objects, OBJ, mapping into placement groups)
An object consists of an ID, binary data, and metadata.
23. Rados Gateway overview
RGW - a web services gateway for object storage, compatible with S3 and Swift
LIBRADOS - a library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP)
RADOS
(diagram: an APP talks to RGW, which uses LIBRADOS to reach RADOS)
25. RGW Components
• Frontend
• FastCGI - external web servers
• Civetweb - embedded web server
• REST dialects
• S3
• Swift
• Other APIs
• Execution layer - common layer for all dialects
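The layering this slide describes, several REST dialects sharing one common execution layer, can be sketched as below. The class names and path formats are illustrative assumptions, not RGW's actual internals.

```python
# Conceptual sketch of RGW's structure: the S3 and Swift "dialects"
# translate protocol-specific requests into calls on one common
# execution layer, which in real RGW is backed by RADOS.

class ExecutionLayer:
    """Common layer shared by all dialects (an in-memory stand-in here)."""
    def __init__(self):
        self.store = {}

    def put(self, bucket: str, key: str, data: bytes) -> None:
        self.store[(bucket, key)] = data

    def get(self, bucket: str, key: str) -> bytes:
        return self.store[(bucket, key)]

class S3Dialect:
    """Parses S3 path-style requests: /<bucket>/<key>."""
    def __init__(self, backend: ExecutionLayer):
        self.backend = backend

    def handle_put(self, path: str, body: bytes) -> None:
        bucket, _, key = path.lstrip("/").partition("/")
        self.backend.put(bucket, key, body)

class SwiftDialect:
    """Parses Swift-style requests: /v1/<account>/<container>/<object>."""
    def __init__(self, backend: ExecutionLayer):
        self.backend = backend

    def handle_put(self, path: str, body: bytes) -> None:
        _, _, container, obj = path.lstrip("/").split("/", 3)
        self.backend.put(container, obj, body)
```

Because both dialects write through the same backend, an object uploaded via the S3 API is visible through the Swift API and vice versa, which is the point of keeping one execution layer under all dialects.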
31. CephFS
CEPHFS - a distributed file system with POSIX semantics and scale-out metadata management
LIBRADOS - a library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP)
RADOS
(diagram: a CLIENT accesses CephFS, built over RADOS)
Ceph release names follow alphabetical order, so the release after Jewel will start with "K".
The LGPL license allows developers to build on Ceph freely.
Reliable Autonomic Distributed Object Store
Replication
Flat object namespace within each pool: each object has an identifier, binary data, and metadata consisting of a set of name/value pairs; there is no hierarchy of directories
Strong consistency (CP system): Consistency and Partition tolerance, meaning the system continues operating even if there is a failure in the network
Infrastructure aware, dynamic topology
Hash-based placement (CRUSH)
RADOS
RADOS is a system consisting of a collection of different servers (or servers built from different types of hardware).
RADOS can scale to thousands of hardware devices by using software to manage those devices individually on each node.
RADOS also provides a series of features such as thin provisioning, snapshots, and replication; an algorithm called Controlled Replication Under Scalable Hashing (CRUSH) determines how data is replicated and mapped to individual nodes.
RADOS was created at the University of California, Santa Cruz.
CRUSH example:
CRUSH has the concept of zones.
INO - inode number
ONO - object number
OID - object ID
PGID - placement group ID
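These notes describe how CephFS names the RADOS objects backing a file. A hedged sketch of that mapping (the "<ino>.<ono>" naming mirrors CephFS convention; the stripe size, PG count, and hash are assumptions for illustration):

```python
# How CephFS derives object IDs from a file, per the notes above:
# a file's inode number (INO) plus a per-stripe object number (ONO)
# form the object ID (OID); the OID then hashes to a placement
# group ID (PGID). The md5 hash here is a stand-in for Ceph's own.
import hashlib

STRIPE_SIZE = 4 * 1024 * 1024  # assumed stripe/object size
PG_NUM = 64                    # assumed PGs in the data pool

def file_objects(ino: int, file_size: int) -> list[str]:
    """OIDs for the objects covering a file of file_size bytes."""
    count = max(1, -(-file_size // STRIPE_SIZE))  # ceiling division
    return [f"{ino:x}.{ono:08x}" for ono in range(count)]

def oid_to_pgid(oid: str) -> int:
    """Hash an OID into a placement group ID."""
    return int(hashlib.md5(oid.encode()).hexdigest(), 16) % PG_NUM

if __name__ == "__main__":
    # A 10 MiB file with inode 0x1000 spans three 4 MiB objects:
    for oid in file_objects(0x1000, 10 * 1024 * 1024):
        print(oid, "-> pg", oid_to_pgid(oid))
```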