java.util.concurrent for Distributed Coordination - Devoxx Ukraine 2019



The evolution of distributed coordination tools shows that high-level APIs ease the implementation of coordination tasks such as leader election, locking, and synchronized actions. For instance, the Chubby paper highlights the familiarity of lock-based interfaces. Similarly, Apache Curator hides the complexity of ZooKeeper recipes behind Java APIs, while etcd and Consul implement concurrency primitives themselves.

A different path in this journey is extending the long-standing java.util.concurrent APIs, such as Lock and Semaphore. The simplicity of these APIs makes them very useful in distributed coordination use cases.
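As a reminder of what that simplicity looks like, here is ordinary single-JVM usage of the Lock and Semaphore APIs. This is standard JDK code only, nothing Hazelcast-specific:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class JucBasics {
    public static void main(String[] args) throws InterruptedException {
        // Lock: mutual exclusion with an explicit acquire/release protocol.
        Lock lock = new ReentrantLock();
        lock.lock();
        try {
            System.out.println("critical section");
        } finally {
            lock.unlock();
        }

        // Semaphore: at most N concurrent holders (here N = 2).
        Semaphore semaphore = new Semaphore(2);
        semaphore.acquire();
        System.out.println("permits left: " + semaphore.availablePermits());
        semaphore.release();
    }
}
```

The appeal of the distributed versions is exactly that the calling code keeps this shape, while the cluster handles replication and failure handling underneath.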

Join this talk to explore Hazelcast's new implementation of the java.util.concurrent APIs on top of the Raft consensus algorithm. I will walk through code samples to demonstrate how Java locks, semaphores, and other primitives can be used in distributed environments that involve partial failures. I will also share how we coped with several challenges we faced while developing and testing our Raft implementation.
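One of the partial-failure problems the talk addresses is the stale lock holder: a process whose lock lease expired but which still believes it holds the lock. The standard defense is a fencing token. The following toy, single-process sketch (the class name FencedStore and its methods are invented for illustration; this is not Hazelcast's API) shows the idea: each lock acquisition hands out a larger token, and the protected resource rejects writes carrying an older one.

```java
// Toy illustration of fencing tokens: the protected resource remembers
// the highest fence it has seen and refuses requests with older fences.
public class FencedStore {
    private long highestFenceSeen = Long.MIN_VALUE;
    private String value;

    // Returns false if the writer's fence is stale, i.e. a newer
    // lock holder has already written.
    public synchronized boolean write(long fence, String newValue) {
        if (fence < highestFenceSeen) {
            return false; // stale lock holder fenced off
        }
        highestFenceSeen = fence;
        value = newValue;
        return true;
    }

    public synchronized String read() {
        return value;
    }

    public static void main(String[] args) {
        FencedStore store = new FencedStore();
        // Holder A acquired the lock with fence 1; B later got fence 2.
        System.out.println(store.write(1, "from A"));
        System.out.println(store.write(2, "from B"));
        // A's delayed write arrives after its lease expired: rejected.
        System.out.println(store.write(1, "late write from A"));
        System.out.println(store.read());
    }
}
```

In a real deployment the token comes from the lock service itself, which is why a lock API that exposes the fence matters.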



  1. java.util.concurrent for distributed coordination. Devoxx Ukraine 2019. Ensar Basri Kahveci, Distributed Systems Engineer @ Hazelcast (@metanet)
  2. Hazelcast: the leading open source Java IMDG. Distributed Java collections, concurrency primitives, messaging. Caching, application scaling, distributed coordination. 🎉 Hazelcast Cloud: https://hazelcast.cloud #devoxxua2019
  3. Agenda: What is distributed coordination? How did distributed coordination APIs evolve over time? Why use java.util.concurrent.*? Demo on Hazelcast IMDG 3.12.
  4. #DevoxxUA
  5. Distributed Coordination: leader election, synchronization, service discovery, configuration management for microservices.
  6. Challenges: concurrency problems, partial failures in distributed systems, consistency issues.
  7. DO IT YOURSELF
  8. Distributed Coordination Systems: consensus algorithms under the hood, CP with respect to CAP, deployed as a central service, APIs for coordination tasks.
  9. Distributed Coordination Systems: Google Chubby (Paxos), Apache ZooKeeper (ZAB), etcd (Raft).
  10. APIs for Coordination, Layer #1: some form of KV store API, suitable for configuration management and service discovery, and a foundational layer for building higher-level APIs. Chubby & ZooKeeper expose a hierarchical namespace (/services with children /payment, /product, /photo), while etcd uses flat keys (/services, /services/payment, /services/product, /services/product/photo).
  11. APIs for Coordination, Layer #2: Chubby: locking APIs on files. ZooKeeper: recipes, i.e. algorithms to build high-level primitives on top of its FS; "Friends don't let friends write ZK recipes." (Apache Curator Tech Notes #6). etcd: leader election and distributed lock primitives on top of its KV store.
  12. Need for High-level APIs: a KV-store / low-level file-system API is easy to misuse and is not suitable for all coordination tasks ("Consensus in the Cloud: Paxos Systems Demystified"). High-level APIs minimise guesswork and development effort: java.util.concurrent.* in the JDK.
  13. Distributed Coordination Systems: Google Chubby (Paxos), Apache ZooKeeper (ZAB), etcd (Raft), Hazelcast IMDG 3.12 CP Subsystem (Raft).
  14. Hazelcast IMDG 3.12 CP Subsystem: IAtomicReference for configuration and metadata management in basic use cases; IAtomicLong, ICountDownLatch, and ISemaphore for distributed synchronization; FencedLock for distributed locking and leader election.
  15. Hazelcast IMDG 3.12 CP Subsystem: well-defined failure semantics; strong consistency, CP with respect to CAP; tested DIY-style with Jepsen.
  16. Strong consistency: handles crashes and network failures; operational as long as the majority is up. Consensus node count / majority / degree of fault tolerance: 2/2/0, 3/2/1, 4/3/1, 5/3/2, 6/4/2, 7/4/3.
  17. Why Raft? Understandability as the primary goal; runtime concerns (snapshotting, dynamic membership); performance optimizations (fast reads, batching). https://raft.github.io
  18. ENOUGH TALK, LET'S DEMO
  19. DEMO #1: Leader Election. https://github.com/metanet/juc-talk
  20. CP Sessions: a session starts on the first lock / semaphore request; session heartbeats are periodically committed in the background; if there is no heartbeat for some time (the session TTL), the session is closed. This is the auto-release mechanism for FencedLock and ISemaphore.
  21. CP sessions offer a trade-off between safety and liveness.
  22. DEMO #2: Fencing off Stale Lock Holders. See "How to do distributed locking" and "Distributed locks are dead; long live distributed locks!"
  23. DEMO #3: Handling Server Failures. Replacing crashed CP members to preserve high availability.
  24. Recap: avoid writing your own implementations for coordination; high-level APIs minimise guesswork and development effort (java.util.concurrent.* FTW!); operational simplicity matters: dynamic clustering, horizontal scalability.
  25. Current Status. In Hazelcast 4.0: CP Subsystem disk persistence, metrics. On the roadmap: a KV store, event listeners, more client languages.
  26. Resources: https://github.com/metanet/juc-talk (demos), Hazelcast IMDG docs, CP Subsystem code samples, https://hazelcast.com/blog/author/ensarbasri, Hazelcast IMDG 3.12.
  27. Дякую! (Thank you!) Devoxx Ukraine 2019 #DevoxxUA. Ensar Basri Kahveci, Distributed Systems Engineer @ Hazelcast (@metanet)
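Slides 14 and 19 can be tied together with a short sketch of acquiring a CP Subsystem lock. This is a sketch against the Hazelcast IMDG 3.12 API as I understand it (the CPSubsystem accessor and FencedLock's lockAndGetFence()); check the Hazelcast reference manual for the authoritative signatures, and note that it needs a running cluster with the CP Subsystem enabled, so it is not runnable standalone:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.cp.CPSubsystem;
import com.hazelcast.cp.lock.FencedLock;

public class CpLockSketch {
    public static void main(String[] args) {
        // Requires a cluster whose CP Subsystem is enabled
        // (at least 3 CP members configured).
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        CPSubsystem cp = hz.getCPSubsystem();

        // FencedLock implements java.util.concurrent.locks.Lock,
        // so calling code keeps the familiar lock/unlock shape.
        FencedLock lock = cp.getLock("leader");
        long fence = lock.lockAndGetFence();
        try {
            // The fence is a monotonically increasing token that a
            // protected resource can use to reject stale lock holders.
            System.out.println("acting as leader with fence " + fence);
        } finally {
            lock.unlock();
        }
    }
}
```

Holding the lock doubles as leader election (Demo #1), and passing the fence along with each downstream request is the fencing pattern of Demo #2.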
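The fault-tolerance table on slide 16 follows directly from majority quorums: a cluster of n CP members needs floor(n/2) + 1 nodes to reach consensus, so it tolerates the remaining n minus that many failures. A quick check in plain Java:

```java
public class MajorityMath {
    // Smallest majority of an n-node cluster.
    static int majority(int nodes) {
        return nodes / 2 + 1;
    }

    // Failures tolerated while a majority remains up; equals (nodes - 1) / 2.
    static int faultTolerance(int nodes) {
        return nodes - majority(nodes);
    }

    public static void main(String[] args) {
        System.out.println("nodes majority tolerated-failures");
        for (int n = 2; n <= 7; n++) {
            System.out.println(n + " " + majority(n) + " " + faultTolerance(n));
        }
    }
}
```

Note how the even sizes (2, 4, 6) tolerate no more failures than the odd size just below them, which is why odd cluster sizes are generally recommended.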
