Actors, a Unifying Pattern for Scalable Concurrency | C4 2006

Published by IoLanguage (www.iolanguage.com)

  1. actors: a unifying design pattern for scalable concurrency (www.iolanguage.com, steve@dekorte.com)
  2. talk overview: what is an actor? concurrency trends; problems and solutions; the big picture
  3. what is an actor? an informal definition: an object with an asynchronous message queue and an execution context for processing that queue; it encapsulates state, instructions, and execution (a CSP is a process-level actor)
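A minimal Python sketch of this definition (the talk's examples are in Io; the class and method names here are illustrative): an actor owns a mailbox queue and its own thread, so its state is only ever mutated from its own execution context.

```python
import queue
import threading

class Actor:
    """An object with an async message queue and its own execution context."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, method, *args):
        # Asynchronous: enqueue the message and return immediately.
        self._mailbox.put((method, args))

    def _run(self):
        # The actor's execution context: process messages one at a time,
        # so state is only ever mutated from this one thread.
        while True:
            method, args = self._mailbox.get()
            if method is None:
                break
            getattr(self, method)(*args)

    def stop(self):
        # Sentinel message: drain the mailbox, then shut down.
        self._mailbox.put((None, ()))
        self._thread.join()

class Account(Actor):
    def __init__(self):
        super().__init__()
        self.balance = 0.0

    def deposit(self, amount):
        self.balance += amount

acct = Account()
acct.send("deposit", 10.00)   # async message, like Io's @@deposit(10.00)
acct.stop()                   # wait for the mailbox to drain before reading
print(acct.balance)           # 10.0
```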
  4. concurrency trends: a quick look
  5. clock speed leveling off [chart: clock speed (MHz) by year: ~50 (1990), ~120 (1995), ~800 (2000), ~2,000 (2005), with ~4,000 (2010) and ~5,000 (2015) shown as uncertain projections]
  6. cores per machine increasing exponentially [chart: cores by year: 2 (2002), 4 (2004), 8 (2006), with ~24 (2008) and ~80 (2011) shown as uncertain projections]
  7. clusters: massive scaling. typical scale: memcached 10^2, mmog 10^3, gfs 10^4, p2p 10^5, @home 10^6
  8. trends: cores and clusters. the ideal concurrency model will naturally scale across both
  9. traditional concurrency model: preemptive threads with shared memory and coordination via locks
  10. problem: nondeterminism
  11. "For concurrent programming to become mainstream, we must discard threads as a programming model. Nondeterminism should be judiciously and carefully introduced where needed, and it should be explicit in programs." (Ed Lee, "The Problem with Threads", Berkeley CS tech report)
  12. traditional concurrency model: threads can directly change one another's state ("spaghetti concurrency") [diagram: thread A and thread B reaching into each other's state]
  13. actor/csp model: only a thread can directly change its own state [diagram: thread A and thread B, each with its own message queue]. the same model works across machines
  14. a natural extension: actors are the object paradigm extended to execution. objects encapsulate state and instructions; actors encapsulate state, instructions, and execution
  15. what this looks like in Io: any object becomes an actor when sent an async message (Io is a hybrid actor language)
      account deposit(10.00)    // sync message
      account @deposit(10.00)   // future message
      account @@deposit(10.00)  // async message
  16. problem: asynchronous programming
  17. a future: an object returned from an async message which becomes the result when it is available. if it is accessed before the result is ready, it blocks the calling execution context
  18. futures: sync programming with async messages; by decoupling messages from return values, futures allow lazy synchronization and automatic deadlock detection
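The behavior described above can be approximated with Python's standard-library futures (a sketch, not Io's implementation, which adds lazy synchronization and automatic deadlock detection): the async call returns immediately, and touching the result before it is ready blocks the caller.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Stand-in for slow network I/O.
    time.sleep(0.1)
    return f"contents of {url}"

executor = ThreadPoolExecutor()
future = executor.submit(fetch, "http://example.com")  # returns immediately
# ... the caller keeps running here ...
data = future.result()   # blocks only if the result isn't ready yet
print(data)              # contents of http://example.com
executor.shutdown()
```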
  19. another look:
      account deposit(10.00)    // sync message
      account @deposit(10.00)   // future message
      account @@deposit(10.00)  // async message
  20. another example:
      // waits for result
      data := url fetch
      // returns a future immediately
      data := url @fetch
      data setFutureDelegate(self)
  21. the undiscovered country: lots of interesting concurrent coordination patterns can be composed from futures or other forms of async return messages
      c := urls cursor
      c setConcurrency(500)
      c setDelegate(self)
      c foreach(fetch)
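One such pattern, a fan-out with bounded concurrency like the `setConcurrency(500)` cursor above, might be sketched in Python as follows (names are illustrative, and the worker count is shrunk for the example):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(url):
    # Placeholder for real network I/O.
    return f"fetched {url}"

urls = [f"http://example.com/{i}" for i in range(10)]

# Cap the number of in-flight requests, like Io's `c setConcurrency(...)`.
with ThreadPoolExecutor(max_workers=5) as pool:
    futures = [pool.submit(fetch, u) for u in urls]
    results = [f.result() for f in as_completed(futures)]

print(len(results))  # 10
```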
  22. problem: distributed programming
  23. transparent distributed objects: unified local and remote messaging
      peers append(DO at(ip, port, key))
      ...
      c := peers cursor
      c setConcurrency(50)
      c setDelegate(self)
      c foreach(search(query))
      eliminates protocol hassles, but doesn't mean we can ignore the network, etc.
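The idea of one message syntax for local and remote objects can be sketched with a proxy (a toy illustration: `RemoteProxy` and `Peer` are invented names, and the transport is a local function standing in for serialization over the network to an (ip, port, key) address):

```python
class RemoteProxy:
    """Make a remote object answer the same message syntax as a local one."""
    def __init__(self, transport):
        self._transport = transport

    def __getattr__(self, name):
        def method(*args):
            # A real system would serialize (name, args) and send it over
            # the wire; here the transport call stands in for that.
            return self._transport(name, args)
        return method

# A stand-in "remote" peer: in practice this would run on another machine.
class Peer:
    def search(self, query):
        return [f"result for {query}"]

peer = Peer()
transport = lambda name, args: getattr(peer, name)(*args)

remote = RemoteProxy(transport)
print(remote.search("actors"))   # same call syntax, local or remote
```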
  24. problem: the high-concurrency memory bottleneck
  25. execution contexts: memory usage and maximum concurrency

      context                          bytes          max per GB
      process / os thread              1,000,000s     100s*
      stackful coro                    10,000s        10,000s
      stackless coro or continuation   100s           1,000,000s**

      * webkit etc. use os threads
      ** but thread-related state may exceed stack size
  26. synonyms

      user level thread     os thread
      lightweight thread    kernel thread
      microthread           native thread
      green thread
      coroutine / coro
      fiber
  27. taxonomy

      execution context / thread
        - os thread: large fixed stack size, nondeterministic, hw register swapping
        - user level thread (deterministic):
            - stackful coro (coroutine): small fixed stack size, hw register swapping
            - stackless coro (continuation): linked activation records, variable stack size
  28. what this means in practice: connections
  29. user level threads aren't preemptive, so what about blocking ops? avoid them by using async sockets and async file i/o
  30. async issues: ease of use? pause the user-level thread while waiting on the async i/o request. cpu-bound blocking? use an explicit yield() where needed
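Both answers can be illustrated with Python's asyncio (a sketch under the assumption that coroutines play the role of user-level threads): the I/O wait suspends only the waiting coroutine, not the OS thread, and an explicit yield keeps a CPU-bound loop cooperative.

```python
import asyncio

async def fetch(url):
    # Non-blocking wait stands in for an async socket read; only this
    # coroutine is paused, the event loop keeps running other work.
    await asyncio.sleep(0.01)
    return f"data from {url}"

async def crunch(n):
    total = 0
    for i in range(n):
        total += i
        if i % 1000 == 0:
            await asyncio.sleep(0)   # explicit yield in a CPU-bound loop
    return total

async def main():
    # Both run concurrently on one OS thread.
    return await asyncio.gather(fetch("http://example.com"), crunch(10_000))

data, total = asyncio.run(main())
print(data, total)
```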
  31. conclusion: use user-level threads for scaling concurrency on a given core, and one os thread or process per core for scaling across cores
  32. the big picture: "powers of 10". each level follows the actor pattern of encapsulating state, instructions, and execution, and communicating via async queued messaging
  33. [diagram: an actor is a user-level thread]
  34. [diagram: a csp (os process) contains many actors]
  35. [diagram: a machine contains many csps]
  36. [diagram: a cluster contains many machines]
  37. some fun speculation: what about cores? a prediction based on this pattern
  38. traditional SISD architecture: works, but clock speed growth is slowing and silicon is cheap [diagram: core, bus, memory]
  39. current MISD architecture: bus bottleneck; memory performance per core drops as core count increases [diagram: multiple cores sharing one bus to memory]
  40. future MIMD architecture? the actor pattern at the hardware level: a connection machine on a chip [diagram: a grid of SISD units, each with its own local memory]
  41. automatic MIMD distribution [diagram: an array sharded across SISD units, one array shard per unit]
  42. this talk, in a nutshell

      problem                              solution
      concurrency nondeterminism           actors/csp
      distributed and async programming    transparent distributed objects and futures (actors/csp)
      high concurrency memory bottleneck   user level threads
      many cores bus bottleneck            MIMD

      www.iolanguage.com steve@dekorte.com
