Cirrostratus

Distributed Storage for Private Cloud

University Co-Op Project
Stanislav Bogatyrev

Problem statement
  • No distributed block-device storage is optimal for the most common Linux cluster configurations
  • The distributed locking and cache-coherency mechanisms of cluster file systems kill performance
  • The VM -> VMM -> FS -> File -> BD -> TCP -> Ethernet stack adds overhead at every layer
  • Storage nodes are not easy to add, remove, fail, or recover

Proposed solution

Features:
  • No metadata, O(1) scaling (see the placement sketch after this list)
  • No distributed locking
  • Sparse disks, snapshots, multipath, and link aggregation "for free"
  • CRUSH/RADOS data redistribution
  • Heatmaps for caches
  • Low-overhead Ethernet transport
  • GPL v2+ licensed code
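
Placement without metadata is what makes the O(1) scaling claim work: every client computes where a block lives instead of asking a metadata server. A minimal sketch of the idea, using rendezvous (highest-random-weight) hashing as a stand-in for CRUSH; the node names and the `placement` function are hypothetical illustrations, not Cirrostratus code:

```python
import hashlib

# Hypothetical node names; any stable identifiers work.
NODES = ["store-a", "store-b", "store-c", "store-d"]

def placement(block_id: int, nodes: list[str], replicas: int) -> list[str]:
    """Deterministically map a block to `replicas` storage nodes.

    Every client computes the same answer from the block id and the node
    list alone, so there is no metadata server to consult and no lookup
    table that grows with the data -- the CRUSH idea, sketched here with
    rendezvous hashing.
    """
    def score(node: str) -> int:
        digest = hashlib.sha1(f"{block_id}:{node}".encode()).digest()
        return int.from_bytes(digest[:8], "big")

    return sorted(nodes, key=score, reverse=True)[:replicas]

# Removing a node only re-homes the blocks that ranked it highest;
# every other placement stays the same, so redistribution is cheap.
print(placement(42, NODES, replicas=3))
print(placement(42, [n for n in NODES if n != "store-b"], replicas=3))
```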

Proposed topology

Features:
  • Dumb switches
  • Any number of links to any switch
  • Access and Storage roles can be combined on one node
  • Storage nodes export block storage to the internal network by id
  • Any block device can be attached as a back-end (SAN, iSCSI, etc.)
  • AoE-compatible protocol for export (see the header sketch after this list)
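
AoE (ATA over Ethernet) carries ATA commands directly in Ethernet frames with EtherType 0x88A2 and addresses devices by a shelf.slot id such as e1.0, which is how "export by id" works with no TCP/IP in the path. A sketch of parsing the fixed AoE header from the published wire format; the example frame at the end is made up for illustration:

```python
import struct

AOE_ETHERTYPE = 0x88A2

def parse_aoe_header(frame: bytes) -> dict:
    """Parse the fixed AoE header that follows the Ethernet header.

    Wire layout: ver/flags (1 byte), error (1), major (2, big-endian),
    minor (1), command (1), tag (4).  major.minor is the shelf/slot
    address, e.g. e1.0.
    """
    dst, src, ethertype = struct.unpack_from("!6s6sH", frame, 0)
    if ethertype != AOE_ETHERTYPE:
        raise ValueError("not an AoE frame")
    ver_flags, error, major, minor, command, tag = struct.unpack_from(
        "!BBHBBI", frame, 14)
    return {
        "version": ver_flags >> 4,
        "is_response": bool(ver_flags & 0x08),
        "error": error,
        "shelf": major,          # e<shelf>.<slot>
        "slot": minor,
        "command": command,      # 0 = ATA command, 1 = query config
        "tag": tag,              # echoed back to match responses to requests
    }

# A made-up request frame for device e1.0 (shelf 1, slot 0):
frame = b"\xff" * 6 + b"\x02" * 6 + struct.pack("!H", AOE_ETHERTYPE)
frame += struct.pack("!BBHBBI", 0x10, 0, 1, 0, 0, 0xDEADBEEF)
print(parse_aoe_header(frame))
```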

Data Path
  1. An Ethernet packet arrives at the server
  2. Identify its type and choose a queue
  3. Apply the data protection policy and set placement hints for each part
  4. Choose the destination storage for each part using CRUSH (Controlled Replication Under Scalable Hashing)
  5. Send the parts to the storage nodes (see the sketch after this list)
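
Putting the five steps together, a hedged sketch of one pass through the path. It reuses `parse_aoe_header` and `placement` from the earlier sketches; `send_to_node` is a stand-in stub, and three-way mirroring is an assumed protection policy, not necessarily what Cirrostratus does:

```python
def send_to_node(node: str, data: bytes) -> None:
    print(f"-> {node}: {len(data)} bytes")     # stand-in for the Ethernet transport

def handle_frame(frame: bytes, nodes: list[str]) -> None:
    """One pass through the data path above (an illustration, not the real code)."""
    hdr = parse_aoe_header(frame)              # steps 1-2: identify type, choose queue
    if hdr["command"] != 0:                    # only ATA I/O takes this path
        return
    lba = int.from_bytes(frame[28:34], "little")   # ATA LBA bytes, lba0 first
    payload = frame[36:]                       # data follows the ATA sub-header
    parts = [payload] * 3                      # step 3: assumed 3-way mirroring
    targets = placement(lba, nodes, replicas=len(parts))  # step 4: CRUSH-style
    for node, part in zip(targets, parts):     # step 5: fan out to storage nodes
        send_to_node(node, part)
```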

Demo

Scenario:
  • Virtual external and internal networks
  • One virtual disk exported to the Linux kernel AoE client (e1.0)
  • N-way mirroring as the data protection policy
  • Sparse back-end (see the sketch after this list)
  • One node dies at the end
  • Available in Full HD 1080p
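
A "sparse back-end" means the backing file only consumes disk space for regions that have actually been written, so a large virtual disk costs almost nothing until it fills. A minimal sketch of the mechanism on Linux; the path and sizes are made up:

```python
import os

SIZE = 10 * 2**30            # made-up 10 GiB virtual disk
SECTOR = 512
PATH = "/tmp/cirrostratus-backend.img"   # hypothetical path

# truncate() extends the file without allocating blocks, so a fresh
# back-end occupies (almost) no space on disk.
with open(PATH, "wb") as f:
    f.truncate(SIZE)

# Writing one sector allocates only the touched region.
with open(PATH, "r+b") as f:
    f.seek(2048 * SECTOR)
    f.write(b"\xaa" * SECTOR)

print(os.stat(PATH).st_blocks * 512, "bytes actually allocated of", SIZE)
```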

Done and planned

Done:
  • Data protection (erasure coding, N-way replication)
  • Crypto interface
  • CRUSH
  • Multipath
  • Full redesign

Planned for 2010:
  • PaceMaker integration
  • Auto-configuration
  • Cache (W+R)
  • HeatMaps (see the sketch after this list)

Planned for 2011:
  • Stable code
  • Load balancing
  • Delayed replication
  • Back-end rebalancing
  • Versioning for the network map
  • Security
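
"HeatMaps" together with "Cache (W+R)" suggests tracking per-block access frequency to decide which blocks deserve cache space. A hedged sketch of that idea with exponentially decayed heat counters; the class, its names, and the decay factor are all assumptions, not the planned design:

```python
from collections import defaultdict

class HeatMap:
    """Tracks how 'hot' each block is; hot blocks are cache candidates."""

    def __init__(self, decay: float = 0.9):
        self.decay = decay
        self.heat: dict[int, float] = defaultdict(float)

    def touch(self, block: int) -> None:
        self.heat[block] += 1.0            # every read/write adds heat

    def tick(self) -> None:
        for block in self.heat:            # periodic decay ages out blocks
            self.heat[block] *= self.decay  # that stop being accessed

    def hottest(self, n: int) -> list[int]:
        return sorted(self.heat, key=self.heat.get, reverse=True)[:n]

hm = HeatMap()
for block in [7, 7, 7, 3, 3, 9]:
    hm.touch(block)
hm.tick()
print(hm.hottest(2))   # -> [7, 3]: best candidates for the cache
```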

How we do it
  • Git DVCS: http://github.com/realloc/cirrostratus
  • Mailing list: http://groups.google.com/group/cirrostratus-dev
  • Scrum: http://project.o1host.net/xplanner-plus/ (XPlanner+, two-week sprints)
  • SPb SU: 7 students, 4th year
  • SPb SU ITMO: 2 students, 4th year
  • SPb SU AI team: security
