
Threads and Java Memory Model Explained

This talk was given at CorkJUG on 20 January 2016. JSR 133 (the Java Memory Model), synchronisation, threads, deadlocks and related topics were explained.

  1. THREADS AND JAVA MEMORY MODEL EXPLAINED LUIZ TESTON, WWW.FRACTA.CC
  2. SOME QUOTES I HEARD IN MY CAREER
  3. “DEADLOCK ON 300 THREADS. CAN ANYBODY HELP ME?” Soft real-time developer
  4. “IN PARALLEL IT IS WORSE.” // GLOBAL LOCK ON A HUGE GRAPH Myself, struggling to fix a performance issue
  5. “LET’S NOT USE THREADS, IT ALWAYS GIVES US TROUBLE.” Architect with 15 years of experience
  6. “MY CODE WORKS.” // NO SYNCHRONISATION, ONLY THREADS Lead Programmer
  7. DOING MANY THINGS AT ONCE? A FEW THINGS YOU SHOULD KNOW…
  8. VOCABULARY
  9. DEFINITION ON YOUR FAVOURITE SEARCH ENGINE
  10. DEFINITION ON YOUR FAVOURITE SEARCH ENGINE
  11. DEFINITION ON YOUR FAVOURITE SEARCH ENGINE
  12. DEFINITION ON YOUR FAVOURITE SEARCH ENGINE
  13. PARALLEL != CONCURRENT
  14. DEFINITION ON YOUR FAVOURITE SEARCH ENGINE
  15. PARALLEL DOESN’T DISPUTE, CONCURRENT MAY DISPUTE.
  16. DEFINITION ON YOUR FAVOURITE SEARCH ENGINE X
  17. DEFINITION ON YOUR FAVOURITE SEARCH ENGINE
  18. DEFINITION ON YOUR FAVOURITE SEARCH ENGINE
  19. PARALLELISM WON’T IMPROVE LATENCY.
  20. PARALLELISM MAY IMPROVE THROUGHPUT.
  21. JSR 133 JAVA MEMORY MODEL
  22. RACE CONDITION THE CLASSIC EXAMPLE
  23. RACE CONDITION ▸ definition: shared resources may get used “at the same time” by different threads, resulting in an invalid state. ▸ motivation: any need for concurrent or parallel processing. ▸ how to avoid: use some mechanism to ensure resources are used by only one thread at a time, or share nothing at all.
  24. thread 1 thread 2 VAR=0
  25. thread 1 thread 2 VAR=0 VAR++ VAR++
  26. thread 1 thread 2 VAR=0 VAR++ VAR++ VAR=1 Clearly not the expected result. There is code in production that has been working with these errors for years without anyone realising it. VAR WAS NOT SYNCHRONISED PROPERLY
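
The race from slides 24-26 can be reproduced with a short, self-contained sketch (the class and field names below are illustrative, not from the talk): two threads increment a plain field with no synchronisation, so the increments interleave and some updates are lost.

    // Illustrative sketch of the race on slides 24-26: two threads increment
    // a shared counter without synchronisation, so some updates are lost.
    public class RacyCounter {
        static int counter = 0; // shared, unsynchronised

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    counter++; // read-modify-write: not atomic, interleavings lose updates
                }
            };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            System.out.println(counter); // expected 200000, usually prints less
        }
    }
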
  27. AVOIDING OR FIXING THIS RACE CONDITION ▸ let the database deal with it (just kidding, but sadly it seems to be the standard way of doing it). ▸ correct synchronisation by using locks. ▸ use concurrent classes, such as AtomicLong. ▸ one counter per thread (summing them still requires synchronisation). ▸ share nothing. ▸ any other suggestions?
  28. thread 1 thread 2 VAR=0 LOCK
  29. thread 1 thread 2 VAR=0 LOCK LOCK VAR++ WAITING LOCK
  30. thread 1 thread 2 VAR=0 LOCK VAR++ LOCK VAR++
  31. thread 1 thread 2 VAR=0 LOCK VAR++ VAR++ VAR=2 The result was as expected, but there was a penalty in the time it took to perform both operations. To minimise it, avoid sharing in the first place. VAR WAS PROPERLY SYNCHRONISED
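
Two of the fixes listed on slide 27, sketched in code (the class is illustrative, not taken from the talk): a lock around the increment, and an AtomicLong that makes the increment itself atomic.

    import java.util.concurrent.atomic.AtomicLong;

    // Illustrative: two ways to make the shared counter safe.
    public class SafeCounters {
        private long counter = 0;
        private final Object lock = new Object();
        private final AtomicLong atomicCounter = new AtomicLong();

        void incrementWithLock() {
            synchronized (lock) { // only one thread at a time runs this block
                counter++;
            }
        }

        void incrementAtomically() {
            atomicCounter.incrementAndGet(); // atomic read-modify-write, no explicit lock
        }
    }
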
  32. READS ALSO NEED SYNCHRONISATION // A COMMON MISTAKE IS TO // SYNCHRONISE ONLY THE WRITES.
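
Why reads need the same care, in a small illustrative sketch (not from the slides): if only the writer synchronises, the Java Memory Model gives no guarantee that a reading thread ever sees the updated value, because there is no happens-before relationship between the write and the unsynchronised read.

    // Illustrative: the getter must synchronise (or the field must be volatile),
    // otherwise the reading thread may keep seeing the stale value indefinitely.
    public class SharedFlag {
        private boolean done = false;

        public synchronized void markDone() {
            done = true;
        }

        public synchronized boolean isDone() { // synchronising only markDone() would not be enough
            return done;
        }
        // Alternative: declare the field as "private volatile boolean done" and
        // drop the synchronisation for this simple read/write pair.
    }
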
  33. LESSONS ▸ Synchronise properly. High-level APIs are harder to misuse; java.util.concurrent excels at that. ▸ The optimal number of threads is usually twice the number of cores: Runtime.getRuntime().availableProcessors() * 2; ▸ Measure and stress. It is not easy to spot synchronisation issues, since the behaviour varies depending on the machine, the operating system, etc. They usually don’t show up while debugging.
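
The sizing hint from the lessons slide, as a runnable sketch using a standard executor (the class name and printed messages are illustrative):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Illustrative: size a fixed thread pool at roughly twice the core count,
    // as the slide suggests, and let the executor manage the threads.
    public class PoolSizing {
        public static void main(String[] args) {
            int threads = Runtime.getRuntime().availableProcessors() * 2;
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            for (int i = 0; i < 10; i++) {
                int task = i;
                pool.submit(() -> System.out.println("task " + task + " on " + Thread.currentThread().getName()));
            }
            pool.shutdown();
        }
    }
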
  34. DEADLOCK SOMETIMES NEVER ENDS
  35. DEADLOCKS ▸ what it is: threads holding locks while waiting for each other’s locks. ▸ motivation: a global lock leads to global contention and slow code; the real problem is using more than one fine-grained lock at the same time, in more than one thread, in an unpredictable order. ▸ how to avoid: ensure the same locking order, or review the synchronisation strategy (functional approach, atomic classes, high-level APIs, concurrent collections, share nothing, etc.).
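
The pattern described above, sketched in code before the diagrams walk through it step by step (class and lock names are illustrative, not from the talk): thread 1 takes lock A then B, thread 2 takes B then A, so each can end up holding one lock while waiting forever for the other.

    // Illustrative: acquiring two locks in opposite orders can deadlock.
    public class DeadlockProne {
        static final Object lockA = new Object();
        static final Object lockB = new Object();

        public static void main(String[] args) {
            new Thread(() -> {
                synchronized (lockA) {
                    pause(); // widen the window so the bad interleaving is likely
                    synchronized (lockB) { System.out.println("thread 1 got A then B"); }
                }
            }).start();
            new Thread(() -> {
                synchronized (lockB) {
                    pause();
                    synchronized (lockA) { System.out.println("thread 2 got B then A"); }
                }
            }).start();
        }

        static void pause() {
            try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }
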
  36. thread 1 thread 2 A B Two threads have access to resources protected by two distinct locks: A and B. Green means available, yellow means waiting and red means locked. Two scenarios are going to be presented: threads acquiring the locks in the same order, and in a different order.
  37. thread 1 thread 2 B A First thread acquires lock A.
  38. thread 1 thread 2 B A A Second thread tries to acquire the same lock. Since it is in use, it will wait until lock A is available.
  39. thread 1 thread 2 A A B Meanwhile the first thread acquires lock B. The second thread is still waiting for lock A.
  40. thread 1 thread 2 B A A The first thread releases lock B. The second thread is still waiting for lock A.
  41. thread 1 thread 2 A B A Then lock A is finally released. The second thread is finally able to use it.
  42. thread 1 thread 2 B A It acquires lock A.
  43. thread 1 thread 2 A B Then it acquires lock B.
  44. thread 1 thread 2 B A Lock B is released.
  45. thread 1 thread 2 A B Then lock A is released. No synchronisation problems happened and no locked resources were harmed in this execution. Some contention occurred, but it was temporary. EVERYTHING WAS FINE.
  46. thread 1 thread 2 A B NOW SOMETHING DIFFERENT
  47. thread 1 thread 2 B A The first thread acquires lock A.
  48. thread 1 thread 2 A B And the second thread acquires lock B.
  49. thread 1 thread 2 A B B The first thread tries to acquire lock B. Since it is busy, it will wait for it.
  50. thread 1 thread 2 A B B A And the second thread tries to acquire lock A. Since it is busy, it will wait for it.
  51. thread 1 thread 2 A B B A What did the different order of lock acquisition cause? Keep in mind locks can be acquired internally by APIs, by using the synchronized keyword, or by doing IO. It is almost impossible to keep track of all the locks in a huge application stack. DEADLOCK IS SET.
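
The “ensure the same locking order” fix from slide 35, sketched in code (again with illustrative names): if every thread always acquires A before B, the circular wait shown in the diagrams above cannot form.

    // Illustrative: a single global acquisition order (A, then B) prevents the cycle.
    public class OrderedLocking {
        static final Object lockA = new Object();
        static final Object lockB = new Object();

        static void doWork(String name) {
            synchronized (lockA) {       // always acquired first
                synchronized (lockB) {   // always acquired second
                    System.out.println(name + " holds A and B");
                }
            }
        }

        public static void main(String[] args) {
            new Thread(() -> doWork("thread 1")).start();
            new Thread(() -> doWork("thread 2")).start();
        }
    }
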
  52. LESSONS ▸ If sharing data between threads, synchronise properly, measure and stress (same as before). ▸ Keep in mind some deadlocks stay latent and may happen only in unusual situations (such as an unusually high peak load). ▸ The best approach is to minimise shared data, having isolated threads working independently. ▸ There are frameworks that suit this better than using threads manually. Consider those, such as Akka, the Disruptor, etc.
  53. QUESTIONS? THANKS FOR YOUR TIME! ▸ https://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html ▸ http://docs.oracle.com/javase/specs/ ▸ photos: Dani Teston
