Scale and performance: Servicing the
Fabric and the Workshop
Steve Quenette, Deputy Director, Monash eResearch Centre, and
Blair Bethwaite, Technical Lead, Monash eResearch Centre
Ceph Day Melbourne 2015, Monash University
brought to you by
computing for research: 

… extremes over spectrums …
1. peak vs long-tail
(spectrum of user expectations)
2. permeability: solo vs multidisciplinary
(spectrum of organisational expectations)
3. paradigms: “there is no spoon”
(spectrum of computing expectations)
1. peak vs long-tail
• Leading researchers build tools to see what could
not be seen before, and provide that tool for others.
• All researchers apply tools (of others) to new
problems.
“peak”
and the tail
2. permeability
• Implies that, over time, research verticals…
• become increasingly complicated, involved and leveraged
• involve many organisations and people
3. discovery paradigms
technology driven
discovery?…
5.7% (CAGR)
Moore’s Curse, IEEE Spectrum, April 2015
http://www.i-scoop.eu/internet-of-things ,https://www.ncta.com/broadband-by-the-numbers
http://www.pwc.com/gx/en/technology/mobile-innovation/assets/pwc-mobile-technologies-index-image-sensor-steady-growth-for-new-capabilities.pdf
[Chart: “Normalised growth - innovations”, log scale (1 to 1,000,000), 1875-2014. Series: number of components on a microchip; IoT - number of devices on the internet; light efficiency (outdoor and indoor lights); intercontinental travel; capability of image sensors; fuel conversion efficiency (US passenger car); energy cost of steel (coke, natural gas, electricity); US corn crop yield.]
4 paradigms
Empirical
(“1st paradigm”)
Collecting and enumerating things.

Enabled by telescopes,
microscopes, …
Theoretical
(“2nd paradigm”)
Properties determined by models.
Enabled by innovations in statistics,
calculus, physical laws, …
Computational
(“3rd paradigm”)
Models significantly more complex
and larger than a human can
compute.
Enabled by computing growth
Data-driven
(“4th paradigm”)
Significantly more, and more complex, data.
Enabled by sensors, storage, IoT
growth
… the 4th is really …
Data-mining
There is so much data that f can be
discovered with little or no
preconception of what “f” is.
Enabled by innovations in data-
mining model/approaches (“g”)
Data assimilation
Both models and observations are
big and complex.
Enabled by innovations in inverse
and optimisation model/approaches
Visualisation
Where very much more of x and y
can be displayed to humans, and
the human brain does the “data-
mining”
Yes visualisation is relevant!
21st century microscopes
look more like…
[Diagram: the 21st-century microscope - CAPTURE (light source, samples), ANALYSIS (filters), INSIGHT (lens) and SHARE (data), spanning the Australian Synchrotron, Monash Biomedical Imaging, Ramaciotti Cryo-EM, the CAVE2 immersive visualisation facility, digital scientific desktops and the Monash Research Cloud.]
computing for research: 

… extremes over spectrums…
1. peak vs long-tail
(spectrum of user expectations)
2. permeability: solo vs multidisciplinary
(spectrum of organisational expectations)
3. paradigms: “there is no spoon”
(spectrum of computing expectations)
self service
multiple market-driven front-ends
quality
accessible &
multi-tenant
scale
low latency
bandwidth
front-ends “emerge”
fabric and workshop
• Ceph (together with OpenStack and Neutron),
means our storage is software defined
• It's more like a fabric
• Self-service to pieces
• We choose the pieces to be right for researchers
who orchestrate their own 21st century microscope
• MeRC, including compute, people, etc., is more like a
workshop for microscope builders
storage IaaS products
• A customer's storage capacity can be a mix of…
• Vault
• Lower $/TB, fast writes, slow retrieval
• Market (Object)
• Moderate $/TB
• Amazon S3-like for modern “Access Layers” (see the access sketch after this list)
• Remote backup optional
• Market (File)
• Higher $/TB
• For traditional filesystem “Access Layers”
• Remote backup implied
• Computational
• Moderate $/TB
• Direct-attached volumes to the R@CMon Cloud
• A user can join storage capacity from other tenants (e.g. an RDSI ReDS merit
allocation) per “project”.
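For the Market (Object) tier, “Amazon S3-like” in practice means the Ceph RADOS Gateway S3 API. A minimal access sketch using boto is below; the endpoint hostname, credentials, bucket and object names are placeholders, not the production values.

```python
import boto
import boto.s3.connection

# Placeholder endpoint and keys - the real RGW hostname and credentials
# come with a VicNode/R@CMon storage allocation.
conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='objectstore.example.monash.edu',
    is_secure=True,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('instrument-data')      # bucket on the Market (Object) tier
key = bucket.new_key('run-0001/frame-0001.tif')
key.set_contents_from_filename('frame-0001.tif')    # upload a local file
print([b.name for b in conn.get_all_buckets()])
```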
storage Access Layers
• MyTardis
• For Instrument Integration
• From sensor to analysis to open access
• Researcher, Facility & Institutional data management
• Figshare
• Data management for Institutions and the long-tail
• (Can trial through R@CMon Storage)
• Aspera
• RDS/VicNode operated FTP & web access tool for very high-
speed data transfer
• OwnCloud (not yet in production)
• Dropbox-like
• Linked to user end-points across Access layers
some numbers
By allocations (Q3 2015)…
• Vault: 2.5uPB
• Market (Object): 0.6uPB
• Market (File): 2uPB
• Computational: 0.5uPB
• Intent: By end of 2016 all* Monash University “storage”
for research will be on this infrastructure
(*) Except the ISO 27k-accredited hosting facility, and admin storage space used by researchers
at the end of the day, we are still consolidating -
it's just that we've asked where consolidation
should occur
Now over to the techies…
Speaking: Blair Bethwaite, Senior HPC Consultant,
Monash eResearch Centre
Monash Ceph Crew:
Jerico Revote, Rafael Lopez, Swe Aung, Craig
Beckman, George Foscolos, George Kralevski,
Steve Davison, John Mann, Colin Blythe
Please ask questions as we go
Ceph@Monash, some history
It all started with The Cloud
https://xkcd.com/908/ (NeCTAR logo added)
speaking of accidents
• In early 2013 R@CMon started with Monash’s first zone of the
NeCTAR cloud
• Our own local cloud = awesome! But, “where do we store
all the things?”
• No persistent volume service was provided by
NeCTAR; it was expected to come from other funding sources
• Plenty of object storage though…
• Enter Cuttlefish!
• “monash-01” Cinder zone backed by Ceph, available mid-2013
(see the self-service sketch after this list)
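As a self-service sketch (the credentials, project and Keystone URL below are placeholders): once the “monash-01” Cinder zone was live, a tenant could create a persistent Ceph-backed volume with a few lines of python-cinderclient.

```python
from cinderclient.v2 import client

# Placeholder credentials and Keystone endpoint - substitute your own
# NeCTAR/R@CMon tenancy details.
cinder = client.Client('demo-user', 'demo-password', 'demo-project',
                       auth_url='https://keystone.example.org:5000/v2.0')

# Create a 250GB persistent volume in the Ceph-backed monash-01 zone,
# then attach it to an instance via Nova as usual.
volume = cinder.volumes.create(size=250, name='research-data-01',
                               availability_zone='monash-01')
print(volume.id, volume.status)   # 'creating' until the rbd image is ready
```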
show and tell: monash-01
• (Disclaimer: we’re not good at names)
• The hardware - repurposed Swift servers:
• 8x Dell R720xd (colo osds & mons x5) - 24TB/node
• 12x 2TB 7.2k NL-SAS (12x RAID0, PERC H710p)
• 2x E5-2650(2GHz), 32GB RAM
• 20GbE (Intel X520 DP), VLANs for back/front-end
• Ceph Firefly on Ubuntu Precise, 2 replicas, ~90uTB,
60TB used, 135TB committed (thin provisioning)
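Back-of-envelope arithmetic for the figures above (assuming “uTB” means usable terabytes after replication, which is our reading rather than something stated on the slide):

```python
# monash-01 capacity sketch - numbers taken from the slide above.
nodes = 8
raw_per_node_tb = 24            # 12x 2TB NL-SAS per node
replicas = 2

raw_tb = nodes * raw_per_node_tb        # 192 TB raw
usable_tb = raw_tb / replicas           # 96 TB before overheads, quoted as ~90uTB
used_tb = 60
committed_tb = 135                      # thin-provisioned RBD commitments

print("usable ~%d TB, %.0f%% used, %.1fx committed vs usable"
      % (usable_tb, 100.0 * used_tb / usable_tb, committed_tb / usable_tb))
# -> usable ~96 TB, 62% used, 1.4x committed vs usable
```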
show and tell: monash-02
• 17x Dell R720xd (virtualised mons x3)
• 9x 4TB 7.2k NL-SAS (9x RAID0, PERC H710p) - 36TB/node
• 3x 200GB Intel DC S3700 SSDs (journals and future cache)
• 1x E5-2630Lv2 (2.4GHz), 32GB RAM
• 2x 10GbE (Mellanox CX-3 DP), back/front-end active on
alternate ports (different ToR switches)
• Ceph Firefly on Ubuntu Trusty, 2 replicas, ~300uTB, 110TB
used, 130TB committed
What did we change?
show and tell: rds[i]
• 3x Dell R320 (mons)
• 4x Dell R720xd (cache tier) - 18TB/node
• 20x 900GB 10k SAS (20x RAID0, PERC H710p) - rgw
hot tier
• 4x 400GB Intel DC S3700 SSDs (journals for rgw hot
tier)
• 2x E5-2630v2 (2.6GHz), 128GB RAM
• 56GbE (Mellanox CX-3 DP), VLANs for back/front-end
show and tell: rds[i]
• 33x Dell R720xd + 66x MD1200 (2 per node) - 144TB/node
• 8x 6TB 7.2k NL-SAS (8x RAID0, PERC H710p) - rgw EC cold
tier
• 24x 4TB 7.2k NL-SAS (24x RAID0, PERC H810) - rbds go
here
• 4x 200GB Intel DC S3700 SSDs (journals for rbd pool)
• 2x E5-2630v2 (2.6GHz), 128GB RAM
• 20GbE (Mellanox CX-3 DP), VLANs for back/front-end
• Ceph Hammer on RHEL Maipo
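The hot/cold split above is standard Ceph cache tiering in front of an erasure-coded rgw pool. A hedged sketch of the wiring via the librados mon-command interface follows; the pool names are placeholders, not our production names.

```python
import json
import rados

def mon_cmd(cluster, **cmd):
    """Send a single mon command (JSON form) and raise on failure."""
    ret, out, err = cluster.mon_command(json.dumps(cmd), b'')
    if ret != 0:
        raise RuntimeError(err)
    return out

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # admin keyring assumed
cluster.connect()

# Put the replicated SAS pool in front of the erasure-coded cold pool
# (placeholder names 'rgw-cold-ec' and 'rgw-hot').
mon_cmd(cluster, prefix='osd tier add', pool='rgw-cold-ec', tierpool='rgw-hot')
mon_cmd(cluster, prefix='osd tier cache-mode', pool='rgw-hot', mode='writeback')
mon_cmd(cluster, prefix='osd tier set-overlay', pool='rgw-cold-ec', overlaypool='rgw-hot')

cluster.shutdown()
```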
rds logical - physical layout
rgw HA architecture
• DNS round-robin provides initial HA request fanout
• HAProxy instances handle load balancing and SSL/TLS
termination.
• They scale arbitrarily in pairs, with keepalived
providing redundancy and HA via virtual/floating
IP address (VIP) failover.
• RGW instances handle actual client/application
protocol (S3, Swift, etc) traffic.
• Scale arbitrarily.
new hardware/capacity
• monash-02 - another 10 nodes, same config
• rds - another 9 nodes, same config
• Refresh monash-01 cluster:
• 9x Dell R730xd - 96TB/node
• 16x 6TB 7.2k NL-SAS
• 2x 400GB Intel DC P3700 NVMe (journals)
• 1x E5-2630v3 (2.5GHz), 128GB RAM
• 20GbE (Intel X710 DP), VLANs for back/front-end
16x 3.5” data drives in 2RU!
pain / nits
• Most problems have been indirect, i.e. operating system
and hardware; Ceph itself has been solid
• But it can be very opaque when things go wrong
• E.g., what is wrong in this picture, how bad is it, is
there any commonality or correlation of symptoms,
does the cluster need intervention to recover?
(A programmatic summary sketch follows the status dump below.)
    cluster b8bf920a-de81-4ea5-b63e-2d5f8cced22d
     health HEALTH_WARN
            23 pgs backfill
            68 pgs backfilling
            1230 pgs degraded
            6017 pgs down
            46 pgs incomplete
            8099 pgs peering
            94 pgs recovering
            41 pgs recovery_wait
            2824 pgs stale
            1204 pgs stuck degraded
            8908 pgs stuck inactive
            2824 pgs stuck stale
            9913 pgs stuck unclean
            1073 pgs stuck undersized
            1092 pgs undersized
            1308 requests are blocked > 32 sec
            recovery 168114/1648042 objects degraded (10.201%)
            recovery 52842/1648042 objects misplaced (3.206%)
            recovery 1056/460665 unfound (0.229%)
            74/256 in osds are down
            1 mons down, quorum 1,2 rcmondc1r75-02-ac,rcmondc1r75-01-ac
     monmap e2: 3 mons at {rcmondc1r75-01-ac=172.16.93.3:6789/0,rcmondc1r75-02-ac=172.16.93.2:6789/0,rcmondc1r75-03-ac=172.16.93.1:6789/0}
            election epoch 51186, quorum 1,2 rcmondc1r75-02-ac,rcmondc1r75-01-ac
     osdmap e103326: 848 osds: 182 up, 256 in; 1153 remapped pgs
      pgmap v3451913: 10560 pgs, 18 pools, 1152 GB data, 449 kobjects
            2547 GB used, 784 TB / 786 TB avail
            168114/1648042 objects degraded (10.201%)
            52842/1648042 objects misplaced (3.206%)
            1056/460665 unfound (0.229%)
                4703 down+peering
                1236 stale+down+peering
                 877 stale+peering
                 634 peering
                 474 active+undersized+degraded
                 422 remapped+peering
                 381 active+clean
                 266 stale+active+clean
                 251 active+remapped
challenges / questions
• Best approach for vNAS performance via KVM+librbd?
• Filesystem, number of rbds, which interface, tuning?
• Disk failure handling process
• Current policy is to redeploy the OSD for any media
error
• http://tracker.ceph.com/projects/ceph/wiki/A_standard_framework_for_Ceph_performance_profiling_with_latency_breakdown
learnings
• use dedicated journals
• the network matters - it becomes much more visible
• RAID controllers with no native JBOD are OK, but be
prepared for more complicated ops
[Slide: screenshots of the MASSIVE Business Plan 2013/2014 (MASSIVE-BP-2.3 DRAFT, June 2013, prepared by MASSIVE Coordinator Wojtek J Goscinski) and the MyTardis project blog (“MyTardis - automatically stores your instrument data for sharing”), annotated “Open IaaS”, “Technology” and “Application layers”.]
[Diagram: the layered architecture - Access Layers: MyTardis, Figshare, CIFS, OwnCloud; storage products: Vault, Market (object), Market (file), Computational; IaaS technology: Ceph and Lustre, serving Cloud, Storage and HPC; Tenancies: RDS/VicNode, NeCTAR, Monash, other.]