RADOS for Eucalyptus
 

Describing "RADOS for Eucalyptus", a distributed storage implementation for Eucalyptus based on Ceph Filesystem technology.


Upload Details

Uploaded as Apple Keynote

Usage Rights

© All Rights Reserved


RADOS for Eucalyptus: Presentation Transcript

  • RADOS for Eucalyptus <tasada@livedoor.jp>
  • RADOS for Eucalyptus • Eucalyptus IaaS •
  • IaaS • Infrastructure as a Service: computing infrastructure provided on demand as a web service • The best-known provider is Amazon Web Services
  • Amazon EC2 • Web service that provides Linux, Solaris, and Windows VMs on demand • VM images for EC2 are stored in S3
  • Amazon S3 • Web-based object storage service • Accessed via REST/SOAP APIs • Roughly 102 billion objects stored as of 2010/03
  • Amazon EBS • Block storage volumes that attach to EC2 instances
  • Eucalyptus • Open-source IaaS platform • Compatible with the Amazon EC2, S3, and EBS APIs, so tools built for Amazon also work against it
  • [Eucalyptus architecture diagram: a Cloud Controller at the top, with Walrus providing S3-compatible storage; each cluster has a Cluster Controller and a Storage Controller (EBS); Node Controllers beneath them run the VMs]
  • Eucalyptus storage components • Walrus (the S3-compatible object store) • SC (the Storage Controller, EBS-compatible block storage)
  • Goal • Back Eucalyptus storage with Ceph Filesystem technology • Provide S3-like and EBS-like storage on a distributed back end
  • Ceph Filesystem • Distributed filesystem • POSIX-compatible: it can be mounted like an ordinary local filesystem
  • Ceph Filesystem
  • • • • • • etc..
  • RADOS • Reliable, Autonomic Distributed Object Store • The object-store layer underlying Ceph; not itself a POSIX filesystem • Usable on its own, without the Ceph filesystem layer
  • [Diagram: clients issue IO directly to OSDs in the RADOS cluster; Monitors track cluster state]
  • Objects are grouped into placement groups (PGs), roughly 100 PGs per OSD; each PG is stored on a set of OSDs
  • CRUSH • The algorithm that deterministically maps each PG to a set of OSDs, so any client can compute an object's location without a central lookup • Placement can be weighted, e.g. by the HDD capacity of each OSD
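The two-step mapping described above (object name → PG by hashing, PG → OSDs by a deterministic function) can be sketched in a few lines. This is a toy stand-in, not CRUSH itself: the function names and the round-robin placement are illustrative assumptions, chosen only to show that any client can compute the same placement with no central lookup.

```python
import hashlib

def object_to_pg(obj_name: str, pg_num: int) -> int:
    """Hash an object name into one of pg_num placement groups."""
    h = int(hashlib.md5(obj_name.encode()).hexdigest(), 16)
    return h % pg_num

def pg_to_osds(pg: int, osds: list, replicas: int) -> list:
    """Pick `replicas` distinct OSDs for a PG (toy stand-in for CRUSH)."""
    start = pg % len(osds)
    return [osds[(start + i) % len(osds)] for i in range(replicas)]

# Any client running the same code computes the same placement.
pg = object_to_pg("my-image.img", pg_num=128)
print(pg_to_osds(pg, osds=list(range(6)), replicas=3))
```

Real CRUSH additionally handles weighted devices and hierarchical failure domains, which this sketch ignores.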
  • Cluster map • Records each OSD's up/down state and IP address, plus the PG mappings • Maintained and distributed by the Monitors • Replicated Monitors keep the map available even when a Monitor goes down
  • RADOS API • Library for talking to RADOS directly • Pool API: open_pool(), close_pool(), lookup_pool(), create_pool(), delete_pool(), list_pools(), get_pool_stats() • Object API: create(), write(), read(), remove(), trunc(), getxattr(), setxattr(), stat(), list_objects_open(), list_objects_more(), list_objects_close() • Plus asynchronous IO APIs, etc.
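The call pattern of the pool/object API listed above can be illustrated with an in-memory stub. This is not librados; the class is a hypothetical dictionary-backed stand-in whose method names merely mirror the slide's list, to show the shape of the calling sequence.

```python
class FakeRados:
    """In-memory stand-in mimicking the pool/object API shape above."""
    def __init__(self):
        self.pools = {}

    # --- pool API ---
    def create_pool(self, name):
        self.pools[name] = {}

    def list_pools(self):
        return sorted(self.pools)

    # --- object API ---
    def write(self, pool, obj, data: bytes):
        self.pools[pool][obj] = data

    def read(self, pool, obj) -> bytes:
        return self.pools[pool][obj]

    def remove(self, pool, obj):
        del self.pools[pool][obj]

r = FakeRados()
r.create_pool("walrus-buckets")
r.write("walrus-buckets", "bucket1/key1", b"hello")
print(r.read("walrus-buckets", "bucket1/key1"))
```

The real API operates over the network against the OSD cluster, and adds extended attributes, stat, and paged object listing on top of this basic create/write/read/remove cycle.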
  • Services built on the RADOS API • radosgw: an S3-compatible gateway • rbd: a network block device • Together these map naturally onto S3 and EBS
  • RADOS for Eucalyptus • Replace the Walrus/SC storage back ends with RADOS • First target: Walrus • Existing pieces (radosgw, rbd, ...) inform the design
  • [Before: Cloud Controller with Walrus on a single host; per-cluster Cluster Controllers and Storage Controllers; Node Controllers beneath them]
  • [After: the same topology, but multiple Walrus instances store their data in a shared RADOS cluster]
  • RADOS for Walrus • Rather than putting radosgw in front of Eucalyptus, modify Walrus itself to store its objects in RADOS
  • Implementation • The RADOS API is C/C++ while Walrus is Java, so the API is wrapped via JNI • Walrus's use of File/FileInputStream/FileOutputStream is replaced with RADOS API calls
  • • RADOS • • •
  • • :,( • RADOS
  • [Benchmark chart, throughput in MB/s (axis 0 to 90.0): LocalFS, RADOS, Walrus(LocalFS), and Walrus(RADOS); annotations mark relative throughput at 40%, 60%, and 70%. Access is via the web API, with RADOS as the back end.]
  • [Figure 6.7: Per-OSD write throughput (MB/sec) versus write size (4 KB to 4096 KB) for no replication, 2x replication, and 3x replication. The horizontal line indicates the upper limit imposed by the physical disk. Replication has minimal impact on per-OSD throughput, but since the number of OSDs is fixed, n-way replication reduces total effective throughput by a factor of n, because replicated data must be written to n OSDs.]
  • Future work • Finish the RADOS-backed Walrus • Integrate the Ceph filesystem with Eucalyptus • Back EBS with rbd in Eucalyptus
  • URL • Wiki: http://r4eucalyptus.wikia.com • Repository: bzr branch lp:~syuu/eucalyptus/rados4eucalyptus-2.0.0