
OpenIO Summit'17: Much Done, More Yet to Come

May our flexibility be with you! This past quarter has seen a lot of improvements. Join us to discover how to automate lifecycle enforcement on data, how to efficiently take snapshots of containers, how to compute storage usage whatever the connector, and how to shard a single container to reach millions of contents.


  1. OpenIO Summit '17. OIO-SDS: Much Done, More Yet to Come
  2. NOW: H1'17, Q3'17, Q4'17
  3. Why? (The starting point)
  4. The original pain: Applicative silos, colocated data. Organic growth, exponential TCO. JIT provisioning, continuous small increments. Misc. HW: new vendor deals, refurbishing. Versatile platforms powering real humans: buzz effects, daylight rhythms, known constraints.
  5. Design elements. State of the art (2006): CHORD-based solutions ● Static placement of the data ● Locations *known*. Wrong choice! ● Platform change → Rebalancing! ● Poor at managing heterogeneity
  6. SDS → K.I.S.S. No ring allowed... ● Choose locations: ○ Discovery & Qualification ○ Load-Balancing ● Remember: ○ Data structure for 10^14 contents ○ → Layered: containers ○ → Distributed
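The "choose locations" step above can be sketched as score-weighted selection: services discovered and qualified by the platform advertise a score, and the load balancer picks locations proportionally to it. The service names and scores below are invented for illustration; this is a sketch of the idea, not the OIO-SDS implementation.

```python
import random

# Hypothetical services with scores produced by discovery & qualification;
# higher score means a healthier, less loaded service.
services = {"rawx-1": 90, "rawx-2": 50, "rawx-3": 10}

def pick_location(services, rng=random):
    """Pick one service name, weighted by its advertised score."""
    names = list(services)
    weights = [services[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

print(pick_location(services))  # usually the best-scored service
```

Score-weighted choice (rather than a static hash ring) is what lets the platform absorb heterogeneous hardware: a weak node simply advertises a lower score and receives proportionally fewer chunks.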
  7. protocol://service/resource. The key to our flexibility ● Proto: Plug backends/tiers ● Service/Resource: Policies. No regrets :) Pointers everywhere!
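The addressing scheme above can be illustrated with a plain URL parse: the protocol part selects the backend or tier plugin, while the service and resource parts are resolved by policy. The chunk location below is invented for the example.

```python
from urllib.parse import urlparse

# Hypothetical chunk pointer in the protocol://service/resource form.
location = "rawx://192.168.1.10:6010/0123ABCD"

parsed = urlparse(location)
print(parsed.scheme)            # backend/tier plugin, e.g. "rawx"
print(parsed.netloc)            # service address
print(parsed.path.lstrip("/"))  # resource identifier
```

Because every pointer carries its own protocol, a new backend can be plugged in by registering a new scheme, without touching existing pointers.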
  8. Planned, Done: H1'17
  9. Planned, Done: Standard connectors. A question of mapping names to objects ● Swift: the ideal mapping ● S3: tmpauth, ACL, Lifecycle, ... ● FS: with a side service to map inodes ● Even for "bucket-less" flat namespaces ○ Hashed containers ○ Regex containers ○ Divided containers
  10. Planned, Done: Container services. Archive containers ● Backup with Range'd GET ● Restore with Chunk'd POST. Snapshot containers ● Hardlink chunks (same volumes) ● Register new pointers ● No data copied!
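The snapshot idea above can be sketched with filesystem hard links: a snapshot registers a second name for each chunk file on the same volume, so the same inode serves both the original and the snapshot and no chunk data is copied. This is an illustrative sketch, not the actual OIO-SDS code; paths and names are made up.

```python
import os
import tempfile

def snapshot_chunk(chunk_path, snapshot_dir):
    """Hard-link a chunk into the snapshot directory (same volume)."""
    target = os.path.join(snapshot_dir, os.path.basename(chunk_path))
    os.link(chunk_path, target)  # same inode, new pointer, zero data copied
    return target

volume = tempfile.mkdtemp()
snapshot = tempfile.mkdtemp()
chunk = os.path.join(volume, "chunk-0001")
with open(chunk, "w") as f:
    f.write("chunk payload")

linked = snapshot_chunk(chunk, snapshot)
# Both names point at the same inode: the snapshot cost no extra bytes.
print(os.stat(linked).st_ino == os.stat(chunk).st_ino)
```

The "same volumes" caveat from the slide matters here: `os.link` only works within one filesystem, which is why the snapshot chunks must live on the same volumes as the originals.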
  11. Planned, Done: oio-cb. Encompass the customer's use cases: a service able to list containers and match prefixes. ACCT-pics / USER; ACCT-mail / USER; ACCT-snapshot / USER-20170928; ACCT / USER-segments; + their shards ... GET /v1/bill/fetch?acct=ACCT&ref=USER
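A minimal in-memory sketch of what such a listing service does: given an account and reference, return every container whose name matches those prefixes (so derived containers like snapshots, segments, and shards are all found). The container names come from the slide; the exact matching semantics of the `/v1/bill/fetch` endpoint are assumed for this illustration.

```python
# Container names from the slide, as "account/user" pairs.
containers = [
    "ACCT-pics/USER",
    "ACCT-mail/USER",
    "ACCT-snapshot/USER-20170928",
    "ACCT/USER-segments",
]

def fetch(acct, ref):
    """Return containers whose account part starts with acct
    and whose user part starts with ref (assumed semantics)."""
    matched = []
    for name in containers:
        account, _, user = name.partition("/")
        if account.startswith(acct) and user.startswith(ref):
            matched.append(name)
    return matched

print(fetch("ACCT", "USER"))  # matches all four entries
```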
  12. Planned, Done: Sharded containers. Cope with the customer's cardinality ● Configure a limit on the number of contents in each container ○ Per NS, Account or Container ● New shards are allocated when a container becomes full ○ Atomically ○ Automatically ● ~ Seamless ○ Trade-off: slower LIST ○ Trade-off: slower random GET
  13. Planned, Done: Lifecycle rules, Self-healing ● Easy tasks ○ Necessary notions already present ● Lifecycle "à-la-S3" ○ Notify the containers ○ List items ○ Match the rules (prefixes, metadata) ○ Adapt the storage policy ● Self-healing: likewise ○ Notify the erroneous items ○ List the chunks ○ Check compliance with the storage policy
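The list-then-match step of the lifecycle flow can be sketched as a rule table: each rule carries a prefix filter and a target storage policy, and matching an object name against the rules yields the policy to apply. The rules and policy names below are invented for the example, not actual OIO-SDS configuration.

```python
# Hypothetical S3-style lifecycle rules: prefix filter -> storage policy.
rules = [
    {"prefix": "logs/", "policy": "EC"},
    {"prefix": "tmp/",  "policy": "SINGLE"},
]

def policy_for(object_name, default="THREECOPIES"):
    """Return the storage policy of the first matching rule, else default."""
    for rule in rules:
        if object_name.startswith(rule["prefix"]):
            return rule["policy"]
    return default

print(policy_for("logs/2017-09-28.gz"))  # EC
print(policy_for("photos/cat.jpg"))      # THREECOPIES (no rule matched)
```

Self-healing reuses the same shape: instead of adapting the policy, the per-item check verifies that the chunks still comply with the policy already set.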
  14. Now what? H2'17
  15. In the short term. Improve: ● Behavior at the limits ● Robustness, Reliability ● QoS enforcement. A cycle of QA tests & CI improvements (Plan, Do, Check, Act across Q1'17 to Q4'17) starts for OIO-SDS, targeting release 17.10.
  16. In the longer term. SDS is just an enabler, not the star ● Stable set of features ● Only the data matters. G4A is the new star ● SDS as a source / sink. (Plan, Do, Check, Act across 2015 to 2018.)
  17. SDS, Grid For Apps
  18. OpenIO Summit '17. What's next: Teezly, a Boring and Working Use Case: How to save 400K per year. Solvik Blum
