Flying Circus Ceph Case Study (CEPH Usergroup Berlin)

Slides from the inaugural CEPH users group meeting in Berlin. A quick overview of the CEPH status at the Flying Circus.

Presentation Transcript

  • Case Study: Flying Circus • Berlin CEPH meetup, 2014-01-27 • Christian Theune <ct@gocept.com>
  • /me • Christian Theune • Co-Founder of gocept • Software Developer (formerly Zope, Plone, grok), Python (lots of packages) • ct@gocept.com • @theuni
  • What worked for us? • raw image on local server • LVM volume via iSCSI (ietd + open-iscsi)
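
For context, a minimal sketch of that pre-Ceph setup, with hypothetical volume, target, and host names: an LVM volume is exported as an iSCSI LUN by ietd on the storage server and attached with open-iscsi on the KVM host.

    # Storage server: carve an LVM volume and export it as an iSCSI LUN via ietd.
    lvcreate -L 20G -n vm01-root vg0
    # /etc/ietd.conf entry for the new volume:
    #   Target iqn.2014-01.com.example:vm01-root
    #       Lun 0 Path=/dev/vg0/vm01-root,Type=blockio
    /etc/init.d/iscsi-target restart   # init script name varies by distribution

    # KVM host: discover the target and log in with open-iscsi.
    iscsiadm -m discovery -t sendtargets -p storage01.example.com
    iscsiadm -m node -T iqn.2014-01.com.example:vm01-root -p storage01.example.com --login
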
  • What didn’t work (for us) • ATA over Ethernet • Linux HA solution for iSCSI • Gluster (sheepdog)
  • CEPH • been watching for ages • started work in December 2012 • production roll-out since December 2013 • about 50% migrated in production
  • Our production structure • KVM hosts with 2x1Gbps (STO and STB) • Old storage servers with 5*600GB RAID 5 + 1 journal, SAS 15k drives • 5 monitors, 6 OSDs currently • RBD from KVM hosts and backup server, 1 cluster per customer project (multiple VMs) • Acceptable performance on existing hardware
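
A rough sketch of what provisioning looks like in such a setup, assuming hypothetical pool and image names: the RBD image is created from the KVM host and handed to qemu as a raw drive.

    # Create a pool (128 placement groups) and an RBD image for a guest.
    ceph osd pool create project1 128 128
    rbd create project1/vm01-root --size 20480    # size in MB
    rbd ls project1

    # Boot a KVM guest straight from the RBD image; qemu's rbd driver
    # talks to the cluster directly, no local block device needed.
    qemu-system-x86_64 -m 1024 \
        -drive file=rbd:project1/vm01-root,format=raw,cache=writeback
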
  • Good stuff • No single point of failure any more! • Create/destroy VM images on KVM hosts! • Fail-over and self-healing work nicely • Virtualisation for storage “as it should be”™ • High quality of concepts, implementation, and documentation • Relatively simple to configure
  • ceph -s (and -w)
  • ceph osd tree
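
The two slides above show the everyday health-check commands; roughly:

    ceph -s         # one-shot summary: health, monitor quorum, OSD count, PG states
    ceph -w         # same summary, then follows the cluster log live
    ceph osd tree   # CRUSH hierarchy: hosts, OSDs, weights, up/down and in/out status
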
  • Current issues • Bandwidth vs. latency: replicas from the RBD client?!? • Deciding on PG allocation in various situations. • Deciding on new hardware. • Backup has become a bottleneck. • I can haz “ceph osd pool stats” per RBD volume? • Still measuring performance. RBD is definitely sucking up some performance.
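
For reference on the pool-stats wish above: client I/O statistics are reported per pool, not per RBD image (pool name hypothetical).

    ceph osd pool stats              # read/write rates and recovery activity, per pool
    ceph osd pool stats project1     # same, restricted to one pool
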
  • Summary • finally … FINALLY … F I N A L L Y ! • feels sooo good • well, at least we did not want to throw up using it • works as promised • can’t stop praising it …