Flying Circus Ceph Case Study (CEPH Usergroup Berlin)

Slides from the inaugural CEPH users group meeting in Berlin. A quick overview of the CEPH status at the Flying Circus.

1. Case Study: Flying Circus
   Berlin CEPH meetup
   2014-01-27, Christian Theune <ct@gocept.com>

2. /me
   • Christian Theune
   • Co-Founder of gocept
   • Software Developer (formerly Zope, Plone, grok), Python (lots of packages)
   • ct@gocept.com
   • @theuni

3. What worked for us?
   • raw image on local server
   • LVM volume via iSCSI (ietd + open-iscsi)

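A minimal sketch of the iSCSI setup named on this slide, assuming ietd on the storage server and open-iscsi on the KVM host; the IQN, LVM path, and IP address are placeholders, not the Flying Circus configuration:

   # Storage server, /etc/ietd.conf: export one LVM logical volume as a target
   #   Target iqn.2014-01.com.example:storage.vm01
   #       Lun 0 Path=/dev/vg0/vm01-disk,Type=blockio
   # KVM host: discover the target and log in with open-iscsi
   iscsiadm -m discovery -t sendtargets -p 192.0.2.10
   iscsiadm -m node -T iqn.2014-01.com.example:storage.vm01 -p 192.0.2.10 --login
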
4. What didn’t work (for us)
   • ATA over Ethernet
   • Linux HA solution for iSCSI
   • Gluster
   • (sheepdog)

5. CEPH
   • been watching for ages
   • started work in December 2012
   • production roll-out since December 2013
   • about 50% migrated in production

6. Our production structure
   • KVM hosts with 2x1Gbps (STO and STB)
   • Old storage servers with 5*600GB RAID 5 + 1 journal, SAS 15k drives
   • 5 monitors, 6 OSDs currently
   • RBD from KVM hosts and backup server, 1 cluster per customer project (multiple VMs); see the sketch below
   • Acceptable performance on existing hardware

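A hedged sketch of how “RBD from KVM hosts” can look in practice: create an image in a per-project pool and hand it to qemu via the librbd driver. The pool, image, and cephx user names (“customer1”, “vm01-root”) are invented for illustration:

   # Create a 20 GiB image in the project's pool (names are placeholders)
   rbd create --pool customer1 --size 20480 vm01-root
   # Boot a KVM guest directly from the RBD image via qemu's librbd backend
   qemu-system-x86_64 -enable-kvm -m 2048 \
       -drive format=raw,file=rbd:customer1/vm01-root:id=customer1,cache=writeback
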
7. Good stuff
   • No single point of failure anymore!
   • Create/destroy VM images on KVM hosts! (commands below)
   • Fail-over and self-healing work nicely
   • Virtualisation for storage “as it should be”™
   • High quality of concepts, implementation, and documentation
   • Relatively simple to configure

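The create/destroy workflow from the second bullet comes down to plain rbd commands on the KVM host; a minimal example with hypothetical pool and image names:

   # List, inspect, create, and remove images in a pool (names are placeholders)
   rbd ls customer1
   rbd create --pool customer1 --size 10240 vm02-root
   rbd info customer1/vm02-root
   rbd rm customer1/vm02-root
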
8. ceph -s (and -w)

9. ceph osd tree

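For readers who have not run them, roughly what the commands on these two slides show:

   # One-shot cluster summary: health, monitor quorum, OSD map, and PG states
   ceph -s
   # The same data as a stream: follows status changes and the cluster log live
   ceph -w
   # The CRUSH hierarchy (root / host / osd) with weights and up/down, in/out state
   ceph osd tree
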
10. Current issues
    • Bandwidth vs. latency: replicas from the RBD client?!
    • Deciding on PG allocation in various situations.
    • Deciding on new hardware.
    • Backup has become a bottleneck.
    • I can haz “ceph osd pool stats” per RBD volume? (see below)
    • Still measuring performance. RBD is definitely sucking up some performance.

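Two of these issues in command form, as a hedged sketch: “ceph osd pool stats” only reports per pool, not per RBD image, and the usual PG-count rule of thumb from the Ceph documentation is (number of OSDs * 100) / replica count, rounded up to a power of two. The pool names and the replica count of 3 are assumptions for the example:

    # I/O statistics are available per pool, not per RBD volume
    ceph osd pool stats customer1
    # PG rule of thumb: 6 OSDs * 100 / 3 replicas = 200, rounded up to 256
    ceph osd pool create customer2 256 256
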
11. Summary
    • finally … FINALLY … F I N A L L Y !
    • feels sooo good
    • well, at least we did not want to throw up using it
    • works as promised
    • can’t stop praising it …
