Actors Model, Asynchronous Design with Non Blocking IO and SEDA

The idea of this presentation is to demonstrate the differences between event-based systems and the traditional concurrency model, how each affects the state of objects, and why the traditional model makes a truly scalable system impossible.

I will show how the Actors Model can be the basis of an efficient concurrency system that supports thousands of simultaneous requests, without incurring a complex, expensive, and inefficient infrastructure architecture.

I will also raise some questions about non-blocking IO and its intersection with SEDA, to arrive at a solution prepared to meet this growing demand and make better use of resources in cloud computing.


  1. Actors Model, Asynchronous Design with Non Blocking IO and SEDA
     Felipe Oliveira – felipe.oliveira@soaexpert.com.br
  2. Agenda
     A little bit about concurrency
     Dealing with state (shared mutable, isolated mutable, persistent data structures)
     Strategies
     Concurrency with intensive IO
     Scalability
     STM – Software Transactional Memory
     Actors Model and SEDA
  3. What's Concurrency?
     In a concurrent program, two or more actions take place simultaneously.
     We often write concurrent programs using threads.
     Starting threads is easy, but their execution sequence is non-deterministic!
     Coordinating threads and ensuring they handle data consistently is very difficult.
  4. Three prominent options for concurrency
     The "Synchronize and Suffer" model
     The Software Transactional Memory (STM) model
     The Actor-based Concurrency model
  5. Exploring Design Options
     Shared Mutable Design
     Isolated Mutable Design
     Purely Immutable Design (with functional languages)
  6. Three ways to avoid problems
     Synchronize properly
     Don't share state!
     Don't mutate state
     "Avoiding mutable state is the secret weapon to winning concurrency battles"
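The "don't mutate state" advice can be made concrete with an immutable value object. A minimal sketch (the `Point` class is my own illustration, not from the deck): all fields are final and "mutation" returns a new instance, so values can be shared across threads with no synchronization at all.

```java
// An immutable value: all fields final, no setters. Instances can be
// shared freely between threads without any locking.
final class Point {
    private final int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    int x() { return x; }
    int y() { return y; }

    // "Mutation" produces a new value; the original is never touched.
    Point translate(int dx, int dy) { return new Point(x + dx, y + dy); }
}
```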
  7. Strategies
     Sequential to concurrent
     Divide and conquer
     Decide the number of threads:
     Runtime.getRuntime().availableProcessors();
     We can compute the total number of threads we need as:
     Number of threads = Number of available cores / (1 - blocking coefficient)
  8. Concurrency with Intensive IO
     An IO-intensive application has a large blocking coefficient and will benefit from more threads than the number of available cores.
     A computation-intensive task has a blocking coefficient of 0, and an IO-intensive task has a value close to 1; a fully blocked task is doomed, so we don't have to worry about the value reaching 1.
     To determine the number of threads you need to know two things:
     The number of available cores
     The blocking coefficient of your tasks
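The sizing formula above can be sketched in a few lines of Java. The `ThreadSizing` class and its method names are my own; the arithmetic is exactly the slide's formula.

```java
final class ThreadSizing {
    // Number of threads = number of available cores / (1 - blocking coefficient).
    // A CPU-bound task has a blocking coefficient of 0; an IO-bound task is close to 1.
    static int poolSize(int cores, double blockingCoefficient) {
        return (int) Math.round(cores / (1 - blockingCoefficient));
    }

    static int poolSizeForThisMachine(double blockingCoefficient) {
        return poolSize(Runtime.getRuntime().availableProcessors(), blockingCoefficient);
    }
}
```

On 8 cores, a CPU-bound job (coefficient 0) gets 8 threads, while tasks that block half the time (coefficient 0.5) get 16.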
  9. Speedup for the IO-Intensive App
  10. Concurrent Computation of Prime Numbers
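The slide's code is a screenshot not captured in this transcript, so here is a hedged reconstruction of the divide-and-conquer idea it illustrates: split the range into one sub-range per task and count primes in each on a fixed-size pool. Class and method names are my own.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.IntStream;

final class ConcurrentPrimes {
    static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int i = 2; i * i <= n; i++) if (n % i == 0) return false;
        return true;
    }

    // Divide [1, upTo] into one sub-range per part; each part runs as a task.
    static long countPrimes(int upTo, int parts) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(parts);
        try {
            List<Future<Long>> counts = new ArrayList<>();
            int chunk = upTo / parts;
            for (int p = 0; p < parts; p++) {
                int from = p * chunk + 1;
                int to = (p == parts - 1) ? upTo : (p + 1) * chunk;
                counts.add(pool.submit(() ->
                        IntStream.rangeClosed(from, to)
                                 .filter(ConcurrentPrimes::isPrime)
                                 .count()));
            }
            long total = 0;
            for (Future<Long> f : counts) total += f.get(); // join the parts
            return total;
        } finally {
            pool.shutdown();
        }
    }
}
```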
  11. Speedup for the Computationally Intensive App
  12. Managing Threads with ExecutorService
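This slide is also a code screenshot missing from the transcript; the typical lifecycle it refers to looks like the sketch below (names are mine): size the pool from the core count, submit work, then shut down cleanly.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

final class PoolLifecycle {
    // The usual pattern: create, submit, shut down, await termination.
    static int runTasks(int taskCount) throws InterruptedException {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < taskCount; i++) pool.execute(done::incrementAndGet);
        pool.shutdown();                              // no new tasks accepted
        pool.awaitTermination(10, TimeUnit.SECONDS);  // wait for submitted tasks
        return done.get();
    }
}
```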
  13. Software Transactional Memory – STM
     Separation of identity and state
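The key idea is that an identity is a stable reference that points, over time, to a succession of immutable values. A rough Java analogue of the single-reference case (Clojure's atom, not full multi-ref STM) can be sketched with `AtomicReference`; the `Identity` class name is my own.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.UnaryOperator;

// The identity is stable; every state it points to is an immutable value.
// swap() applies a pure function to the current value and installs the
// result atomically, retrying on contention -- like (swap! ...) in Clojure.
final class Identity<T> {
    private final AtomicReference<T> state;

    Identity(T initial) { state = new AtomicReference<>(initial); }

    T deref() { return state.get(); }

    T swap(UnaryOperator<T> f) { return state.updateAndGet(f); }
}
```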
  14. Clojure STM
  15. Actors Model – Isolating Mutability
  16. Life Cycle of an Actor
  17. Actors Model
     Lock-free approach to concurrency
     No shared state between actors
     Asynchronous message passing
     Mailboxes buffer incoming messages
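These three properties can be sketched in plain Java without any actor library: a private mailbox drained by a single thread, so the actor's state is only ever touched by that thread and no locks are needed. `CounterActor` and its poison-pill shutdown are my own illustration, not Akka's API.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;

// A minimal actor: asynchronous sends, a mailbox buffering messages,
// and mutable state isolated to the actor's own thread.
final class CounterActor {
    private final BlockingQueue<Integer> mailbox = new LinkedBlockingQueue<>();
    private final CountDownLatch stopped = new CountDownLatch(1);
    private int total = 0;  // only ever read/written by the actor's thread

    CounterActor() {
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    int msg = mailbox.take();
                    if (msg < 0) break;   // poison pill stops the actor
                    total += msg;
                }
            } catch (InterruptedException ignored) {
            } finally {
                stopped.countDown();
            }
        });
        t.setDaemon(true);
        t.start();
    }

    // Fire-and-forget: the sender never blocks on the actor's work.
    void send(int msg) { mailbox.add(msg); }

    int stopAndGet() throws InterruptedException {
        mailbox.add(-1);
        stopped.await();
        return total;
    }
}
```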
  18. SEDA
     Staged Event-Driven Architecture
     Decomposes a complex, event-driven application into a set of stages connected by queues.
     The most fundamental aspect of the SEDA architecture is the programming model that supports stage-level backpressure and load management.
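A minimal sketch of one such stage, assuming nothing beyond `java.util.concurrent` (the `Stage` class is my own, not from SEDA's reference implementation): a bounded incoming queue, a small worker pool, and a handler that forwards results to the next stage's queue. The bounded queue is what provides the backpressure the slide mentions.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Function;

// One SEDA stage: bounded in-queue, worker pool, out-queue. When a stage
// falls behind, upstream producers block on put() instead of overwhelming
// it -- that blocking is the stage-level backpressure.
final class Stage<I, O> {
    final BlockingQueue<I> in;

    Stage(int capacity, int workers, Function<I, O> handler, BlockingQueue<O> out) {
        in = new ArrayBlockingQueue<>(capacity);
        ExecutorService pool = Executors.newFixedThreadPool(workers, r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);
            return t;
        });
        for (int w = 0; w < workers; w++) {
            pool.execute(() -> {
                try {
                    while (true) out.put(handler.apply(in.take()));
                } catch (InterruptedException ignored) { /* stage shut down */ }
            });
        }
    }
}
```

Stages are chained by handing one stage's `in` queue to the previous stage as its `out`.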
  19. Stages
     One actor class per stage
     Shared dispatcher
     Individually tunable (IO-bound vs. CPU-bound)
     Easier to reason about
     Code reuse
  21. Dispatchers
     ThreadBasedDispatcher – binds one actor to its own thread
     ExecutorBasedEventDrivenDispatcher – must be shared between actors
     ExecutorBasedEventDrivenWorkStealingDispatcher – must be shared between actors of the same type
  22. Queues
     SEDA has a queue-per-stage model
     Akka actors have their own mailbox
     How do we evenly distribute work?
  23. Work Stealing
     "Actors of the same type can be set up to share this dispatcher and during execution time the different actors will steal messages from other actors if they have less messages to process"
  24. Fault Tolerance
     Supervisors restart actors; stop after x restarts within y milliseconds
     Restart strategies:
     OneForOne
     AllForOne
  25. Final Product
  26. I hope this presentation was useful in opening your mind to a new model for building scalable APIs.
     Thanks – Felipe Oliveira @soaexpertbr
