Backup Management with Ceph Object Storage
Who are we?
Camilo Echevarne
Félix Barbeira
cechevarne@dinahosting.com
fbarbeira@dinahosting.com
Linux Sysadmin
Head Linux Sysadmin
Agenda
• Presentation.
• The problem.
• Alternatives.
• Ceph to the rescue.
• Hardware planning.
• Architecture.
• Tuning.
• Backup management.
• Clients.
• Monitoring.
• Upgrades.
• Future plans.
What is Dinahosting?
Our main business is web hosting and domain registration.
We offer users all the tools they need to develop their projects on the Internet with confidence:
- Domain name for your site.
- E-mail services.
- Hosting plans: from the simplest ones to complex and
powerful solutions like Cloud Hosting, as well as VPS and
Dedicated Servers.
Where are we?
Presence on more than
130 international markets
México, Argentina, Colombia,
Chile, Portugal, Peru, Venezuela,
USA, Brazil, Ecuador, France,
United Kingdom, Italy, Denmark,
Netherlands, Uruguay, Bolivia,
Japan, China, Senegal, etc.
Santiago
Some numbers…
+130,000 customers
+3,000 servers
+240,000 domains
Revenue by year, 2002-2017 (chart; axis up to 14,000,000 €).
- Toll-free phone number.
- Chat.
- E-mail.
- Social network presence.
- 24/7 service.
- No call-center auto-attendant.
Customer service
• Only for clients of managed services.
• Restorations at file level.
• 30 days max retention.
• Weekly full backup, incremental backups the rest of the week.
• ~3000 machines.
• ~1PB available space.
• ~30 bare metal storage servers.
• Complete backup size ~125TB.
Backups
Data size increases year by year, and so does the complexity of managing it.
Agenda
• Presentation.
• The problem.
• Alternatives.
• Ceph to the rescue.
• Hardware planification.
• Architecture.
• Tuning.
• Backup management.
• Clients.
• Monitoring.
• Upgrades.
• Future plans.
Agenda
Current system
NFS servers
RAID storage
RAID:
the end of an era
• Slow recovery.
• Hazardous recovery.
• Painful recovery.
• Disk incompatibility.
• Wasted disk for hot-spare.
• Expensive storage cards.
• Hard to scale.
• False sense of security.
[2]
Would we be protected against…?
hardware error
network outage
datacenter disaster
power supply failure
operating system error
filesystem failure
RAID:
the end of an era
Problems
managing files
- Backwards compatibility.
- Wasted space.
- Storage node partition table?
- Corrupt files when disk is full.
- Many hours spent by SysOps.
- Forced to deploy and maintain an API.
Agenda
• Presentation.
• The problem.
• Alternatives.
• Ceph to the rescue.
• Hardware planification.
• Architecture.
• Tuning.
• Backup management.
• Clients.
• Monitoring.
• Upgrades.
• Future plans.
Agenda
Upload backup
to the cloud
When do we start to lose money?
Price per month for 1PB of cloud storage

AWS | Cost | Blocking elements
S3 Infrequent Access (IA) | ~15,000 € | Price
S3 Glacier | ~5,000 €* | Slow data retrieval**; limited availability***; 500TB upload limit****

* Files deleted before 90 days incur a pro-rated charge.
** Expedited retrievals: 1-5 min; 3 expedited retrievals can be performed every 5 minutes, and each unit of provisioned capacity costs $100 per month. Standard retrieval: 3-5 hours.
*** Glacier inventory refresh every 24h.
**** The upload limit can be increased by contacting AWS support.
AZURE | Cost | Blocking elements
Storage Standard Cool | ~9,000 € | Price
Storage Standard Archive | ~5,000 € | Restorations <15h

Azure charges an extra cost if files are deleted before 30 and 180 days respectively.
GCP | Cost | Blocking elements
Nearline Storage | ~8,000 € | Price
Coldline Storage | ~5,000 € | Price

In both types of storage, data access time is measured in milliseconds.
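As a back-of-the-envelope check on "when do we start to lose money", cumulative cloud spend can be compared with an on-premises total. A sketch in Python; only the ~5,000-15,000 €/month cloud prices come from the tables above, while the 600,000 € on-prem figure is purely an illustrative assumption.

```python
# Break-even sketch: months until cumulative cloud spend exceeds an assumed
# on-premises total. Only the monthly cloud prices come from the slides; the
# on-prem figure below is an illustrative assumption.

def breakeven_months(onprem_total_eur, cloud_eur_per_month):
    """Months after which cumulative cloud cost exceeds the on-prem total."""
    return onprem_total_eur / cloud_eur_per_month

onprem = 600_000.0  # EUR for ~1PB usable (hardware + power + staff), assumed

for name, monthly in [("S3 IA", 15_000.0), ("S3 Glacier", 5_000.0)]:
    print(f"{name}: break-even after {breakeven_months(onprem, monthly):.0f} months")
```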
Upload backup
to the cloud
[3]
Unified, distributed storage system.
Intelligence in software.
- Open source.
- Massively scalable.
- Independent components.
- No SPOF.
- High performance.
- S3 API.
- Active community.
- Use of commodity hardware.
Clients access the Ceph Storage Cluster through three interfaces: Object Storage, Block Storage, and File Storage.

Ceph is software-defined storage (SDS): on each node (Server1, Server2, Server3 … ServerN), Ceph runs on a plain Linux OS over commodity CPU, memory, HDDs, and network, and builds distributed storage across all of them.
OSDs (object storage daemons)
- From one to thousands.
- Generally speaking, 1 OSD = 1 hard disk.
- Communicate with each other to replicate data and perform recovery.
Monitors
- Maintain cluster maps.
- Provide consensus on data-distribution decisions.
- Deployed in a small, odd number.
- Do not store data.
Gateways
- Entry points to cluster.
In each node (Server1, Server2, Server3), every disk is managed by its own OSD.
Data flow on OSDs

READ:
1. RADOS sends the read request to the primary OSD.
2. The primary OSD reads the data from its local disk and notifies the Ceph client.

WRITE:
1. The client writes data; RADOS creates an object and sends the data to the primary OSD.
2. The primary OSD determines the number of replicas and sends the data to the replica OSDs.
3. The replica OSDs write the data and send completion to the primary OSD.
4. The primary OSD signals write completion to the Ceph client.
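The replicated write flow can be sketched as an event trace. A conceptual illustration only; the names are ours, not Ceph internals.

```python
# Conceptual trace of the replicated WRITE flow: client -> primary,
# primary -> replicas, replica completions, final ack to the client.
def replicated_write(replicas=3):
    events = ["client -> primary: write object"]            # step 1
    for i in range(1, replicas):
        events.append(f"primary -> replica{i}: replicate")  # step 2
    for i in range(1, replicas):
        events.append(f"replica{i} -> primary: completion") # step 3
    events.append("primary -> client: write complete")      # step 4
    return events

for event in replicated_write():
    print(event)
```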
Ceph OSDs
• DELL R720XD / R730XD
• CPU: 2x E5-2660, 8 cores, 2.20GHz
• RAM: 64GB-96GB
• Disks:
• 12x8TB SATA
• 1 SATA disk for OS
• NIC: 10G
• Controller: H730 / LSI JBOD
Hardware
planning
Ceph monitors
• VPS.
• 4 vcores
• RAM: 4GB
• NIC: 10G
Ceph gateways
• DELL R230
• CPU: E3-1230v5 3.4GHz (8)
• RAM: 8GB
• NIC: 10G
Optimize for cost or performance?
In our case, the principal objective is to optimize total cost per GB.
What happens to the OSDs if the OS disk dies?
“We recommend using a dedicated drive for the operating system and software,
and one drive for each OSD Daemon you run on the host”
So… where do we put the operating system on an OSD node?
PROS / CONS

OS in RAID1
+ Cluster protected against OS failures.
+ Hot-swap disks.
- We do not have a RAID card*.
- We would need 1 extra disk.

OS on a single disk
+ Only 1 disk slot used.
+ High reliability; monitor the disk with SMART.
- If the disk dies, all OSDs on that machine die too.

OS on SATADOM
+ All disk slots available for OSDs.
- Not reliable after months of use.

OS from SAN
+ All disk slots available for OSDs.
+ RAID protected.
- We depend on the network and remote storage.

OS on SD card
+ All disk slots available for OSDs.
- Poor performance, not reliable.

*The PERC H730 supports RAID.
Hardware
planning
Rules of thumb for a Ceph installation:
- 10G networking as a minimum.
- Deep knowledge of the hardware you wish to use.
- Always use at least 3 replicas.
- Try to use enterprise SSDs.
- Don't use configuration options you don't understand.
- Power-loss testing.
- Have a recovery plan.
- Use a CM (configuration management) system.
Hardware
planning
https://github.com/ceph/ceph-ansible.git
• Gradual learning curve.
• Plain deployment, no lifecycle management.
• No orchestration.
• No server needed.
• Evolution of the ceph-deploy tool.
http://docs.ceph.com/ceph-ansible/
TIPS:
- Use a compatible Ansible version (bleeding-edge versions are not supported).
- Do not use the master branch unless you like strong emotions.
Ceph Architecture

Clients reach the cluster over HTTP (S3) on IPv4 & IPv6 through the gateways (RadosGW01, RadosGW02 … RadosGWn). Behind them, on an IPv6 public network with 10G links everywhere, sit the monitors (Monitor01-03, an odd number, 3) and the OSD nodes (OSD … OSDn).
HA Gateway

Option 1: load balancer in active/passive mode (LB-ON / LB-OFF) between the client and RadosGW01 … RadosGWn.
Drawbacks:
- Bandwidth bottleneck at the LB.
- At least 2 LBs must be deployed.
- Increases TCO.
- Increases complexity.
HA Gateway

Option 2: DNS round robin across RadosGW01 … RadosGWn.
Drawbacks:
- DNS responses are stateless.
- TTL dependency.
- No instant failover.
HA Gateway

Option 3 (selected): local anycast of the gateway IP.
Advantages:
- The bandwidth of all nodes is aggregated.
- Instant failover.
- The route is withdrawn if the node, the RadosGW daemon, or the FRRouting daemon fails.
Each gateway (RadosGW01 … RadosGW0n) runs FRRouting and announces the shared gateway IP as an OSPF route towards the datacenter (CPD), while a local health check verifies the RADOS gateway daemon.
Tuning
No silver bullets.
root@ceph:~# ceph --show-config | wc -l
1397
root@ceph:~#
Default options are designed for general use cases.
Most of the time you need to make some adjustments to achieve real performance.
Ceph documentation is highly valuable
and extensive:
http://docs.ceph.com/docs/master/
Tuning
• Enable Jumbo Frames.
ping6 -M do -s 8972 <ip>
• Monitor options:
[mon]
mon osd nearfull ratio = .90
mon osd down out subtree limit = host
• OSD options:
[osd]
osd scrub sleep = .1
osd scrub load threshold = 1.0
osd scrub begin hour = 12
osd scrub end hour = 0
Standard frames (1522-byte maximum frame size) spend a larger share of each frame on overhead; jumbo frames (9000 MTU) carry far more data per frame for the same overhead.
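A rough payload-efficiency comparison between the two MTUs. The overhead byte counts are common textbook values assumed here, not figures from the slides.

```python
# Rough payload efficiency per Ethernet frame for a TCP stream.
# Assumed overheads: 18B Ethernet header+FCS, 20B preamble+inter-frame gap,
# 20B IPv4 header, 20B TCP header (no options).

ETH = 18 + 20   # Ethernet framing plus preamble/inter-frame gap
IP_TCP = 40     # IPv4 + TCP headers

def efficiency(mtu):
    payload = mtu - IP_TCP
    return payload / (mtu + ETH)

print(f"1500 MTU: {efficiency(1500):.1%}")   # ~94.9%
print(f"9000 MTU: {efficiency(9000):.1%}")   # ~99.1%
```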
• Daily reweight:
ceph osd reweight-by-utilization [threshold]
Erasure Code

Replicated pool vs. erasure-coded pool

Replicated pool: full copies of stored objects.
• High durability.
• 3x (200% overhead).
• Quicker recovery.
• Admits all kinds of operations.
• Uses fewer resources (CPU).

Erasure-coded pool: one copy plus parity (data shards 1-3 plus coding shards X, Y).
• Cost-effective durability.
• 1.5x (50% overhead).
• Expensive recovery.
• Partial writes not supported*.
• Higher CPU usage.
How does erasure coding work?

Reads (erasure-coded pool, with data shards 1-4 and coding shards X, Y spread over the OSDs):
1. The Ceph client sends a READ to the primary OSD.
2. The primary OSD issues READS for the shards to the OSDs that hold them.
3. The primary OSD reconstructs the object and sends the READ REPLY to the client.

Writes:
1. The Ceph client sends a WRITE to the primary OSD.
2. The primary OSD encodes the object and issues WRITES of the shards to the OSDs.
3. Once the shards are stored, the primary OSD sends the WRITE ACK to the client.
Two variables: K + M
K = data shards
M = erasure-code shards
n = k + m (total shards); r = k / n (encoding rate). Example: n = 4 + 2 = 6, r = 4/6 ≈ 0.66.

K+M | Usable space | OSDs allowed to fail
3+1 | 75% | 1
4+2 | 66% | 2
18+2 | 90% | 2 (CERN)
11+3 | 78.5% | 3
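The table values follow directly from r = k/(k+m); a quick check in Python:

```python
# Usable-space rate and failure tolerance for a k+m erasure-code profile:
# r = k/(k+m), and up to m OSDs may fail, matching the table above.
def ec_profile(k, m):
    return k / (k + m), m

for k, m in [(3, 1), (4, 2), (18, 2), (11, 3)]:
    rate, tolerated = ec_profile(k, m)
    print(f"{k}+{m}: usable {rate:.1%}, tolerates {tolerated} failed OSDs")
```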
{
"user_id": "fbarbeira",
"display_name": "Felix Barbeira",
"email": "fbarbeira@dinahosting.com",
"suspended": 0,
"max_buckets": 100,
"auid": 0,
"subusers": [
{
"id": "fbarbeira:read-only",
"permissions": "read"
}
],
"keys": [
{
"user": "fbarbeira:read-only",
"access_key": "XXXXXXXXXXXXXXXXXXXX",
"secret_key": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
},
{
"user": "fbarbeira",
"access_key": "XXXXXXXXXXXXXXXXXXXX",
"secret_key": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
}
],
[...]
},
"user_quota": {
"enabled": true,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 1073741824,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw"
}
Ceph user profile
- Subuser with read-only permission.
- Limit the max number of buckets (default: 1000).
- Limit the user quota max size to 1TB (default: no limit).
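As a sanity check on the quota shown above, the `max_size_kb` value of 1073741824 KB is exactly 1 TiB:

```python
# max_size_kb = 1073741824 KB = 2**30 KB; at 1024 bytes per KB this is
# 2**40 bytes, i.e. exactly 1 TiB.
max_size_kb = 1073741824
max_size_bytes = max_size_kb * 1024
print(max_size_bytes == 2**40)   # 1 TiB
```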
How do we sync backups?

Each SERVER publishes a BACKUP_DONE message to a message broker. The AGENT consumes n elements at a time and writes the backups to Ceph with the read-write user (User-RW).
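A minimal sketch of this publish/consume pattern, with an in-process queue standing in for the message broker; server names and the batch size are illustrative.

```python
# In-process stand-in for the message broker: servers publish BACKUP_DONE,
# the agent drains up to n messages per batch.
import queue

broker = queue.Queue()

# Servers publish a BACKUP_DONE message when their backup finishes.
for server in ("server1", "server2", "server3"):
    broker.put(f"BACKUP_DONE {server}")

def consume(n):
    """The agent consumes up to n pending elements."""
    batch = []
    while len(batch) < n and not broker.empty():
        batch.append(broker.get())
    return batch

print(consume(2))   # first two pending backups
```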
How do we restore backups?

Method 1: restorations ordered from the control panel. The PANEL asks the AGENT to generate temporary links.
Method 2: restorations from the machine itself. The SERVER performs a GET with the read-only user (User-RO).
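Temporary links like those in Method 1 can be built with S3 query-string authentication. A stdlib-only sketch using the legacy AWS signature v2 scheme (which RadosGW has historically accepted); the endpoint, bucket, object key, and credentials are all dummies.

```python
# S3 "query string authentication" temporary link (legacy AWS signature v2):
# sign "GET\n\n\n<expires>\n/<bucket>/<key>" with HMAC-SHA1 of the secret key.
import base64, hashlib, hmac, urllib.parse

def temp_url(endpoint, bucket, key, access_key, secret_key, expires):
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    qs = urllib.parse.urlencode({
        "AWSAccessKeyId": access_key,
        "Expires": expires,
        "Signature": signature,
    })
    return f"{endpoint}/{bucket}/{key}?{qs}"

url = temp_url("https://s3.example.com", "backups", "server1/full.tar.gz",
               "XXXXXXXXXXXXXXXXXXXX", "dummy-secret", 1700000000)
print(url)
```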
Ceph client requirements (candidates: s3cmd, minio client)
• No dependencies.
• Multi-OS compatible.
• Low resource requirements.
• Active development.
• Bandwidth limiting: we need to limit the bandwidth used.
CPU and IO problems

Four scenarios, each comparing CPU load against usage of the 1G NIC:
- Powerful machine, no limits.
- Not-so-powerful machine, no limits.
- Powerful machine, limited bandwidth.
- Elastic limit.
Linux Traffic Control (TC)

Default behaviour: all flows (Flow1 … Flow4), including Ceph traffic, share a single FIFO queue on the port.

Hierarchical Token Bucket (HTB): a classifier separates Ceph traffic into its own queue with an applied tc policy, while the remaining flows keep the FIFO queue.
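HTB is built on token buckets. A minimal Python illustration of the idea; the actual shaping is done by the kernel via tc, and the rate and burst figures below are arbitrary.

```python
# Token-bucket idea behind HTB: traffic may be sent only while tokens
# (bytes of credit) are available; tokens refill at the target rate.
class TokenBucket:
    def __init__(self, rate_bps, burst):
        self.rate = rate_bps      # refill rate, bytes/second
        self.tokens = burst       # current credit
        self.burst = burst        # maximum credit

    def refill(self, elapsed_s):
        self.tokens = min(self.burst, self.tokens + self.rate * elapsed_s)

    def try_send(self, nbytes):
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False              # packet must wait in the queue

bucket = TokenBucket(rate_bps=1_000_000, burst=10_000)  # ~1 MB/s, 10 KB burst
print(bucket.try_send(8_000))   # within the burst
print(bucket.try_send(8_000))   # credit exhausted
bucket.refill(0.01)             # 10 ms later the credit is back
print(bucket.try_send(8_000))
```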
CPU and IO adjustments

Linux Traffic Control (TC): regulate outgoing traffic using the system load. While the CPU load stays inside the allowed range, the transfer rate is increased towards the network limit; when it leaves the range, the rate is reduced.
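The elastic limit can be sketched as a function mapping CPU load to an allowed transfer rate. The load range and rates below are illustrative; on Linux, `os.getloadavg()[0]` could supply the load, and the chosen rate would then be applied via tc.

```python
# Map CPU load to an allowed transfer rate (MB/s): full rate at/below `lo`
# load, minimum rate at/above `hi`, linear in between. All figures assumed.
def allowed_rate(load, lo=1.0, hi=4.0, min_rate=10.0, max_rate=100.0):
    if load <= lo:
        return max_rate
    if load >= hi:
        return min_rate
    frac = (load - lo) / (hi - lo)
    return max_rate - frac * (max_rate - min_rate)

print(allowed_rate(0.5))   # machine is idle: full rate
print(allowed_rate(2.5))   # halfway through the range
print(allowed_rate(5.0))   # machine is busy: minimum rate
```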
Monitoring

Prometheus scrapes metrics from NODE_EXPORTER and MGR_EXPORTER on the monitors, and from NODE_EXPORTER on the OSDs and gateways; Prometheus provides the storage, Grafana the dashboards, and Alertmanager generates alerts via e-mail and XMPP.
Monitoring
user@prometheus:~$ curl --silent http://ceph-monitor:9283/metrics | head -20
# HELP ceph_osd_op_out_bytes Client operations total read size
# TYPE ceph_osd_op_out_bytes counter
ceph_osd_op_out_bytes{ceph_daemon="osd.6"} 192202.0
ceph_osd_op_out_bytes{ceph_daemon="osd.26"} 355345.0
ceph_osd_op_out_bytes{ceph_daemon="osd.30"} 99943.0
ceph_osd_op_out_bytes{ceph_daemon="osd.8"} 9687.0
ceph_osd_op_out_bytes{ceph_daemon="osd.20"} 6480.0
ceph_osd_op_out_bytes{ceph_daemon="osd.36"} 73682.0
ceph_osd_op_out_bytes{ceph_daemon="osd.22"} 497679.0
ceph_osd_op_out_bytes{ceph_daemon="osd.47"} 123536.0
ceph_osd_op_out_bytes{ceph_daemon="osd.34"} 95692.0
ceph_osd_op_out_bytes{ceph_daemon="osd.45"} 114504.0
ceph_osd_op_out_bytes{ceph_daemon="osd.10"} 8695.0
ceph_osd_op_out_bytes{ceph_daemon="osd.39"} 0.0
ceph_osd_op_out_bytes{ceph_daemon="osd.43"} 107303.0
ceph_osd_op_out_bytes{ceph_daemon="osd.12"} 199043.0
ceph_osd_op_out_bytes{ceph_daemon="osd.28"} 1165455.0
ceph_osd_op_out_bytes{ceph_daemon="osd.41"} 216581.0
ceph_osd_op_out_bytes{ceph_daemon="osd.14"} 124186.0
user@prometheus:~$
Prometheus exporter example:
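The exposition format above is line-oriented and easy to post-process. A stdlib-only sketch summing the counter across OSDs, using a shortened copy of the sample output:

```python
# Sum the ceph_osd_op_out_bytes counter across OSDs from Prometheus
# exposition-format text (shortened copy of the sample above).
sample = """\
# HELP ceph_osd_op_out_bytes Client operations total read size
# TYPE ceph_osd_op_out_bytes counter
ceph_osd_op_out_bytes{ceph_daemon="osd.6"} 192202.0
ceph_osd_op_out_bytes{ceph_daemon="osd.26"} 355345.0
ceph_osd_op_out_bytes{ceph_daemon="osd.30"} 99943.0
"""

total = sum(
    float(line.rsplit(" ", 1)[1])          # metric value is the last field
    for line in sample.splitlines()
    if line.startswith("ceph_osd_op_out_bytes{")
)
print(total)
```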
Monitoring dashboards (Grafana): S.M.A.R.T. status, Ceph status, and gateway status.
Upgrades
overview
Unattended upgrades
• Mirror on-premises.
• Upgrade policy:
• Security: ASAP.
• Updates: every Tuesday.
• Maintenance window.
• Package blacklist: ceph and ceph-*
• Index results on Elasticsearch.
Ceph upgrades sequence:
- Monitors.
- OSDs.
- Gateways.
Upgrades
policy
Upgrades dashboard

Orchestrated reboots

Each OSD node (OSD1, OSD2, OSD3 …) steps through the states REBOOT_REQUIRED → CEPH_HEALTHY → ETCD_LOCK → REBOOTING → ETCD_UNLOCK:
1. A node that requires a reboot asks Prometheus about the cluster state (CEPH_HEALTH?).
2. If the answer is HEALTHY, the node takes the etcd lock (LOCK).
3. The node reboots; meanwhile, other nodes that need a reboot WAIT, since the lock is taken.
4. When the node comes back, the cluster health is checked again and the lock is released (UNLOCK), letting the next node (e.g. OSD3) repeat the sequence.
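The lock-based sequence above can be sketched as a toy simulation: a plain variable stands in for the etcd lock, and the health check is stubbed out.

```python
# Toy simulation of the orchestrated-reboot sequence: a node may reboot only
# if the cluster is healthy and it holds the shared lock (etcd in production).
lock_holder = None
events = []

def ceph_healthy():
    return True   # in production: ask Prometheus / `ceph health`

def acquire(node):
    global lock_holder
    if ceph_healthy() and lock_holder is None:
        lock_holder = node
        events.append(f"{node}: LOCK")
        return True
    events.append(f"{node}: WAIT")
    return False

def release(node):
    global lock_holder
    lock_holder = None
    events.append(f"{node}: UNLOCK")

acquire("OSD1")                     # OSD1 takes the lock
acquire("OSD2")                     # OSD2 must wait while OSD1 holds it
events.append("OSD1: REBOOTING")
release("OSD1")
acquire("OSD2")                     # the lock is free again
print(events)
```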
Future plans
• Metadata Search (Elasticsearch).
• Search objects using tags.
• Top 10 backup size.
• Average size.
• Crush Maps.
• Use current datacenter configuration in Ceph: rack, row, room…
• Increase availability.
• EC adjustment.
• Indexless buckets.
• Incompatible with lifecycles.
Questions?
References:
[1] http://www.vacalouraestudio.es/
[2] https://www.krollontrack.co.uk/blog/survival-stories/24tb-of-confidential-data-recovered-from-raid-6-array/
[3] https://www.elempresario.com/noticias/economia/2017/09/27/el_numero_billetes_500_euros_continua_minimos_desde_2003_54342_1098.html
Thank you!
 
Storage Spaces Direct - the new Microsoft SDS star - Carsten Rachfahl
Storage Spaces Direct - the new Microsoft SDS star - Carsten RachfahlStorage Spaces Direct - the new Microsoft SDS star - Carsten Rachfahl
Storage Spaces Direct - the new Microsoft SDS star - Carsten RachfahlITCamp
 
Application Caching: The Hidden Microservice
Application Caching: The Hidden MicroserviceApplication Caching: The Hidden Microservice
Application Caching: The Hidden MicroserviceScott Mansfield
 

Similar to Backup management with Ceph Storage - Camilo Echevarne, Félix Barbeira (20)

Ceph Day London 2014 - Best Practices for Ceph-powered Implementations of Sto...
Ceph Day London 2014 - Best Practices for Ceph-powered Implementations of Sto...Ceph Day London 2014 - Best Practices for Ceph-powered Implementations of Sto...
Ceph Day London 2014 - Best Practices for Ceph-powered Implementations of Sto...
 
Ceph Community Talk on High-Performance Solid Sate Ceph
Ceph Community Talk on High-Performance Solid Sate Ceph Ceph Community Talk on High-Performance Solid Sate Ceph
Ceph Community Talk on High-Performance Solid Sate Ceph
 
Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St...
Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St...Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St...
Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St...
 
OpenStack Cinder, Implementation Today and New Trends for Tomorrow
OpenStack Cinder, Implementation Today and New Trends for TomorrowOpenStack Cinder, Implementation Today and New Trends for Tomorrow
OpenStack Cinder, Implementation Today and New Trends for Tomorrow
 
Quick-and-Easy Deployment of a Ceph Storage Cluster
Quick-and-Easy Deployment of a Ceph Storage ClusterQuick-and-Easy Deployment of a Ceph Storage Cluster
Quick-and-Easy Deployment of a Ceph Storage Cluster
 
QCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference ArchitectureQCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference Architecture
 
QCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference ArchitectureQCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference Architecture
 
Netflix Open Source Meetup Season 4 Episode 2
Netflix Open Source Meetup Season 4 Episode 2Netflix Open Source Meetup Season 4 Episode 2
Netflix Open Source Meetup Season 4 Episode 2
 
Taking Splunk to the Next Level - Architecture Breakout Session
Taking Splunk to the Next Level - Architecture Breakout SessionTaking Splunk to the Next Level - Architecture Breakout Session
Taking Splunk to the Next Level - Architecture Breakout Session
 
HPC DAY 2017 | HPE Storage and Data Management for Big Data
HPC DAY 2017 | HPE Storage and Data Management for Big DataHPC DAY 2017 | HPE Storage and Data Management for Big Data
HPC DAY 2017 | HPE Storage and Data Management for Big Data
 
New Ceph capabilities and Reference Architectures
New Ceph capabilities and Reference ArchitecturesNew Ceph capabilities and Reference Architectures
New Ceph capabilities and Reference Architectures
 
Software Defined Storage, Big Data and Ceph - What Is all the Fuss About?
Software Defined Storage, Big Data and Ceph - What Is all the Fuss About?Software Defined Storage, Big Data and Ceph - What Is all the Fuss About?
Software Defined Storage, Big Data and Ceph - What Is all the Fuss About?
 
Building a High Performance Analytics Platform
Building a High Performance Analytics PlatformBuilding a High Performance Analytics Platform
Building a High Performance Analytics Platform
 
Presentation architecting a cloud infrastructure
Presentation   architecting a cloud infrastructurePresentation   architecting a cloud infrastructure
Presentation architecting a cloud infrastructure
 
Presentation architecting a cloud infrastructure
Presentation   architecting a cloud infrastructurePresentation   architecting a cloud infrastructure
Presentation architecting a cloud infrastructure
 
Technological Innovations for Home Entertainment & Video Storage
 Technological Innovations for Home Entertainment & Video Storage Technological Innovations for Home Entertainment & Video Storage
Technological Innovations for Home Entertainment & Video Storage
 
Optimized HPC/AI cloud with OpenStack acceleration service and composable har...
Optimized HPC/AI cloud with OpenStack acceleration service and composable har...Optimized HPC/AI cloud with OpenStack acceleration service and composable har...
Optimized HPC/AI cloud with OpenStack acceleration service and composable har...
 
Storage Spaces Direct - the new Microsoft SDS star - Carsten Rachfahl
Storage Spaces Direct - the new Microsoft SDS star - Carsten RachfahlStorage Spaces Direct - the new Microsoft SDS star - Carsten Rachfahl
Storage Spaces Direct - the new Microsoft SDS star - Carsten Rachfahl
 
QNAP NAS Training 2016
QNAP NAS Training 2016QNAP NAS Training 2016
QNAP NAS Training 2016
 
Application Caching: The Hidden Microservice
Application Caching: The Hidden MicroserviceApplication Caching: The Hidden Microservice
Application Caching: The Hidden Microservice
 

Recently uploaded

Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetHyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetEnjoy Anytime
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 3652toLead Limited
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersThousandEyes
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Alan Dix
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitecturePixlogix Infotech
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxOnBoard
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphNeo4j
 
Next-generation AAM aircraft unveiled by Supernal, S-A2
Next-generation AAM aircraft unveiled by Supernal, S-A2Next-generation AAM aircraft unveiled by Supernal, S-A2
Next-generation AAM aircraft unveiled by Supernal, S-A2Hyundai Motor Group
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...HostedbyConfluent
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?XfilesPro
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):comworks
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | DelhiFULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhisoniya singh
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
Azure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAzure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAndikSusilo4
 

Recently uploaded (20)

Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetHyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC Architecture
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptx
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
 
Next-generation AAM aircraft unveiled by Supernal, S-A2
Next-generation AAM aircraft unveiled by Supernal, S-A2Next-generation AAM aircraft unveiled by Supernal, S-A2
Next-generation AAM aircraft unveiled by Supernal, S-A2
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | DelhiFULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
Azure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAzure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & Application
 

Backup management with Ceph Storage - Camilo Echevarne, Félix Barbeira