  1. Multithreading patterns Cristian Nicola, Development Manager, Net Evidence (SLM) Ltd, http://www.tonicola.com [email_address]
  2. 1. Introduction to multithreading 2. Multithreading patterns
  3. 1. Introduction to multithreading
  4. In this section… <ul><li>Why do multi-threading? </li></ul><ul><li>When and when not to use threads? </li></ul><ul><li>Multithreading basic structures (Critical sections, Mutexes, Events, Semaphores and Timers) </li></ul><ul><li>Multithreading problems (atomic operations, race conditions, priority inversion, deadlocks, livelocks, boxcar / lock convoys / thundering herd) </li></ul>
  5. Why multi-threading? <ul><li>Multi-core / multi-CPU machines are now standard </li></ul><ul><li>Makes programming more fun </li></ul>
  6. When to use threads? <ul><li>Clearly defined work-tasks, and the work-tasks are long enough </li></ul><ul><li>Data needed to complete the work tasks does not overlap (or maybe just a little) </li></ul><ul><li>Generally UI interaction is not needed – background tasks </li></ul>
  7. When NOT to use threads? <ul><li>Work-tasks are not clearly defined </li></ul><ul><li>There is a lot of shared data between the tasks </li></ul><ul><li>UI interaction is a requirement </li></ul><ul><li>Work-tasks are small </li></ul><ul><li>You do not have a good reason to use them </li></ul>
  8. Multithreading structures
  9. Jobs, processes, threads, fibers (hierarchy diagram: a job contains processes 1…N, each process contains threads 1…M, and a thread can host fibers 1…X)
  10. What we need…a way to <ul><li>… avoid simultaneous access to a common resource ( mutexes, critical sections ) </li></ul><ul><li>… signal an occurrence or an action ( events ) </li></ul><ul><li>… restrict/throttle the access to some shared resources ( semaphores ) </li></ul><ul><li>… signal a due time – sometimes periodically ( timers ) </li></ul>
  11. Critical sections <ul><li>User object - lightweight </li></ul><ul><li>Their number is limited only by memory </li></ul><ul><li>Re-entrant </li></ul><ul><li>Very fast when uncontended (tens of instructions) </li></ul><ul><li>Falls back to a kernel object when contended </li></ul><ul><li>No time-out </li></ul>
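Python has no critical-section object, but as a hedged sketch, `threading.RLock` illustrates the re-entrancy property described above: the thread that owns it can acquire it again without blocking itself.

```python
import threading

# threading.RLock stands in for a critical section here: re-entrant,
# so the owning thread may acquire it repeatedly without deadlocking.
lock = threading.RLock()

def nested_update(depth):
    """Each recursion level re-acquires the same lock on the same thread."""
    with lock:
        if depth > 0:
            nested_update(depth - 1)   # re-entry: would deadlock with a plain Lock
        return depth

result = nested_update(3)   # acquires the lock 4 times on one thread
```

A plain `threading.Lock` is not re-entrant; the same call would deadlock on the second acquisition.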
  12. Mutexes <ul><li>Kernel object </li></ul><ul><li>Can be named for inter-process communication </li></ul><ul><li>Can have security flags </li></ul><ul><li>Can be inherited by child processes </li></ul><ul><li>Can be acquired/released </li></ul>
  13. Events <ul><li>Kernel object </li></ul><ul><li>Can be named for inter-process communication </li></ul><ul><li>Can have security flags </li></ul><ul><li>Can be inherited by child processes </li></ul><ul><li>Holds a state: signalled, non-signalled </li></ul><ul><li>Can be auto-reset; PulseEvent exists but should not be used </li></ul><ul><li>Auto-reset events are NOT re-entrant </li></ul>
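Python's `threading.Event` roughly corresponds to a manual-reset event (there is no auto-reset variant in the standard library); a sketch of one thread signalling another:

```python
import threading

ready = threading.Event()          # starts in the non-signalled state
results = []

def waiter():
    ready.wait()                   # blocks until the event is signalled
    results.append("woken")

t = threading.Thread(target=waiter)
t.start()
ready.set()                        # signal; the state stays set (manual-reset)
t.join()
```

Because the state stays set, every later `wait()` returns immediately until someone calls `ready.clear()`.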
  14. Semaphores <ul><li>Kernel object </li></ul><ul><li>Can be named for inter-process communication </li></ul><ul><li>Can have security flags </li></ul><ul><li>Can be inherited by child processes </li></ul><ul><li>Have a count property, but it cannot be interrogated </li></ul><ul><li>Signalled when count > 0 </li></ul>
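A sketch of the throttling role semaphores play, using `threading.Semaphore`: with a count of 2, at most two worker threads are ever inside the guarded region at once.

```python
import threading

gate = threading.Semaphore(2)      # count 2: at most 2 holders at a time

peak = 0
active = 0
counter_lock = threading.Lock()    # protects the two counters below

def worker():
    global active, peak
    with gate:                     # blocks while 2 workers already hold it
        with counter_lock:
            active += 1
            peak = max(peak, active)
        with counter_lock:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
# peak can never exceed the semaphore's count
```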
  15. Timers <ul><li>Kernel object </li></ul><ul><li>Can be named for inter-process communication </li></ul><ul><li>Can have security flags </li></ul><ul><li>Can be inherited by child processes </li></ul><ul><li>Can be auto-reset </li></ul>
  16. Kernel-land / User-land <ul><li>Kernel transition – expensive </li></ul><ul><li>User transition – fast </li></ul><ul><li>Should avoid kernel transitions when possible (system calls, usage of kernel objects, unneeded thread creation or destruction) </li></ul>
  17. Multithreading problems
  18. Atomic operations <ul><li>A set of operations that must be executed as a whole, so they appear to the rest of the system to be a single operation </li></ul><ul><li>There can be 2 outcomes: </li></ul><ul><li>- success </li></ul><ul><li>- failure </li></ul>
  19. Atomic operations <ul><li>For example the code: </li></ul><ul><li>I = J + 1; </li></ul><ul><li>Can be compiled as: </li></ul><ul><li>MOV EAX, [EBP-$10] </li></ul><ul><li>INC EAX </li></ul><ul><li>MOV [EBP-$0C], EAX </li></ul>A task switch can occur between any two of these instructions. Solution: Lock; I = J + 1; Unlock;
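The Lock / Unlock fix can be sketched in Python, with `threading.Lock` standing in for whatever mutual-exclusion primitive the platform provides: the lock makes the whole read-increment-write sequence atomic with respect to the other threads.

```python
import threading

J = 0
lock = threading.Lock()

def increment_many(n):
    global J
    for _ in range(n):
        with lock:        # Lock; I = J + 1; Unlock; -- no task switch can
            J = J + 1     # interleave another update mid read-modify-write

threads = [threading.Thread(target=increment_many, args=(10_000,))
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
# J == 40000 every run; without the lock, updates could be lost
```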
  20. Race conditions <ul><li>A task switch can occur any time </li></ul>
  21. Race conditions <ul><li>When 2 threads race to change the data </li></ul><ul><li>Problem: </li></ul><ul><li>Unpredictable result </li></ul>
  22. Race conditions Example: 2 threads incrementing a variable by 1. Input: A = 1; starting from 1, the expected result is 3. Thread 1: read A=1 into a register, increment the register, write register=2 into A in memory. Thread 2: read A=1 into a register, increment the register, write register=2 into A in memory. Output: A = 2
  23. Priority inversion <ul><li>A thread with a higher priority waits for a resource used by a thread with a lower priority </li></ul><ul><li>Problem: </li></ul><ul><li>A high priority thread is executed less often than a lower priority thread </li></ul>
  24. Priority inversion Example: 2 threads accessing the same file. Thread 1 (low priority): lock a file for usage, writing some data into it; do some more work with the file; release the file. Thread 2 (high priority): wait for the file to be available; use the file. Out of 3 switches: low priority runs 2, high priority runs 1.
  25. Deadlock <ul><li>2 or more actions depend on each other for completion, and as a result none finishes </li></ul><ul><li>Problem: </li></ul><ul><li>One or more threads stop working for indefinite amounts of time </li></ul>
  26. Deadlock conditions <ul><li>1. Mutual exclusion locking of resources </li></ul><ul><li>2. Resources are locked while others are waited for </li></ul><ul><li>3. No pre-emption: a resource cannot be forcibly taken from the thread holding it </li></ul><ul><li>4. A circular wait condition exists </li></ul>
  27. Deadlock Example: 2 threads accessing the same resources. Thread 1: lock resource A; wait for resource B to be available. Thread 2: lock resource B; wait for resource A to be available. Both threads are now stopped, with no way to wake up.
  28. Livelock <ul><li>Same as deadlock, except the detection/prevention of deadlocks wakes the threads up without their making progress </li></ul><ul><li>Problem: </li></ul><ul><li>One or more threads do not progress; they spin </li></ul><ul><li>Like 2 people travelling in opposite directions in a corridor: each politely moves aside to make space for the other, and neither can pass as they keep moving from side to side </li></ul>
  29. Boxcar / Lock Convoys / Thundering herd <ul><li>Can carry a serious performance penalty </li></ul><ul><li>The application still works correctly </li></ul><ul><li>A certain flag wakes up many threads, however only the first one has work to do </li></ul><ul><li>Problem: </li></ul><ul><li>Threads wake up, wait on a resource, and then there is no work to do </li></ul>
  30. Boxcar / Lock Convoys / Thundering herd Example: 2 threads wake up to use the same resource when a flag is signalled. Thread 1: sleep waiting for the event; lock data; use data; unlock data; go back to sleep. Thread 2: sleep waiting for the event; wait for the data lock to be available; lock data; nothing to do; unlock data; go back to sleep.
  31. 2. Multithreading patterns
  32. In this section… <ul><li>What is a design pattern? </li></ul><ul><li>Groups of patterns (control-flow patterns, data patterns, resource patterns, exception/error patterns) </li></ul><ul><li>Multithreading patterns sources </li></ul>
  33. <ul><li>A design pattern is a reusable solution to a recurring problem in the context of object oriented development </li></ul><ul><li>Patterns can be about other topics </li></ul>What is a design pattern?
  34. <ul><li>Control-flow: aspects related to control and flow dependencies between various threads (e.g. parallelism, choice, synchronization) </li></ul><ul><li>Data perspective: passing of information, scoping of variables, etc. </li></ul><ul><li>Resource perspective: resource to thread allocation, delegation, etc. </li></ul><ul><li>Exception handling: various causes of exceptions and the various actions that need to be taken as a result of exceptions occurring </li></ul>Groups of patterns
  35. Control-flow patterns
  36. Worker threads <ul><li>Sometimes referred to as “Active Object”, “Cyclic Executive” or “Concurrency Pattern” </li></ul><ul><li>Generic threads doing some work without being aware of what kind of work they do </li></ul><ul><li>They share a common work queue </li></ul><ul><li>Very useful in highly parallel systems </li></ul>
  37. Worker threads <ul><li>Windows Vista/Server has API support for creating thread pools (CreateThreadpool) </li></ul><ul><li>Use a semaphore to limit the number of active threads to a figure tied to the CPU count (usually 2 x the number of CPUs) </li></ul>
  38. <ul><li>Background Worker Pattern: notifies when the thread completes, and provides updates on the status of the operation </li></ul><ul><ul><li>May need a way to cancel the operation </li></ul></ul><ul><li>Asynchronous Results Pattern: you are more interested in the result than in the status of the operation </li></ul>Worker threads - variants
  39. <ul><li>Implicit Termination </li></ul><ul><li>the worker has finished its work and can end </li></ul><ul><li>Explicit Termination </li></ul><ul><li>the worker is asked to terminate </li></ul>Worker threads - Termination
  40. Scheduler <ul><li>Explicitly control when threads may execute single-threaded code (sequences waiting threads) </li></ul><ul><li>Independent mechanism to implement a scheduling policy </li></ul><ul><li>Read/Write lock is usually implemented using the scheduler pattern to ensure fairness in scheduling </li></ul><ul><li>Adds significant overhead </li></ul>
  41. Thread pool <ul><li>A number of threads are created to perform a number of tasks, usually organized in a queue </li></ul><ul><li>There are many more tasks than threads </li></ul><ul><li>When a thread completes its task: </li></ul><ul><ul><li>If more tasks -> request the next task from the queue </li></ul></ul><ul><ul><li>If no more tasks -> it terminates, or sleeps </li></ul></ul><ul><li>Number of threads used is a parameter that can be tuned - can be dynamic based on the number of waiting tasks </li></ul>
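As a sketch of the pattern, Python's `concurrent.futures.ThreadPoolExecutor` implements exactly this: a fixed set of threads servicing a queue of many more tasks, with the pool size as the tunable parameter.

```python
from concurrent.futures import ThreadPoolExecutor

# 4 pooled threads service 20 queued tasks; the threads are reused
# rather than created and destroyed per task.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda n: n * n, range(20)))
```

`map` returns results in submission order, so the caller never needs to know which pooled thread ran which task.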
  42. Thread pool <ul><li>The thread creation/destruction algorithm impacts overall performance: </li></ul><ul><ul><li>Create too many threads = resources and time are wasted </li></ul></ul><ul><ul><li>Destroy too many threads = time spent re-creating them </li></ul></ul><ul><ul><li>Creating threads too slowly = poor client performance </li></ul></ul><ul><ul><li>Destroying threads too slowly = starvation of resources </li></ul></ul><ul><li>Avoids per-task thread creation and destruction overhead </li></ul><ul><li>Better performance and better system stability </li></ul>
  43. Thread pool - triggers <ul><li>Transient Trigger </li></ul><ul><ul><li>Offers the capability to signal currently running threads </li></ul></ul><ul><ul><li>They are lost if not acted upon right away </li></ul></ul><ul><li>Persistent Trigger </li></ul><ul><ul><li>Generally results in an action by the pool </li></ul></ul><ul><ul><li>They are persisted and will eventually be handled </li></ul></ul>
  44. <ul><li>Asynchronous communications, implemented via queued messages </li></ul><ul><li>Simple, without mutual exclusion problems </li></ul><ul><li>No resource is shared by reference </li></ul><ul><li>The shared information is passed by value </li></ul>Message Queuing
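A minimal sketch with Python's `queue.Queue`: the producer and consumer communicate only through queued messages, so no state is shared by reference and no explicit locking is needed in user code.

```python
import queue
import threading

mailbox = queue.Queue()            # thread-safe message queue
received = []

def consumer():
    while True:
        msg = mailbox.get()        # blocks until a message arrives
        if msg is None:            # sentinel message: stop the consumer
            break
        received.append(msg)

t = threading.Thread(target=consumer)
t.start()
for msg in ["job-1", "job-2", "job-3"]:
    mailbox.put(msg)               # information is handed over, not shared
mailbox.put(None)                  # signal end of work
t.join()
```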
  45. <ul><li>Runs when the event of interest occurs </li></ul><ul><li>Executes very quickly and with little overhead </li></ul><ul><li>Provides a means for timely response to urgent needs </li></ul><ul><li>There are circumstances when their use can lead to system failure </li></ul><ul><li>Asynchronous procedure calls (APC) </li></ul>Interrupt
  46. <ul><li>Used when it may not be possible to wait for an asynchronous rendezvous </li></ul><ul><li>The call of the method of the appropriate object in the other thread can lead to mutual exclusion problems if the called object is currently active doing something else </li></ul><ul><li>The Guarded Call Pattern handles this case through the use of a mutual exclusion semaphore </li></ul>Guarded Call
  47. <ul><li>Concerned with modelling the preconditions for synchronization or rendezvous of threads </li></ul><ul><li>a ready thread registers with the Rendezvous class </li></ul><ul><li>then blocks until the Rendezvous class releases it to run </li></ul><ul><li>Builds a collaboration structure that allows any arbitrary set of preconditions to be met for thread synchronization </li></ul><ul><li>Independent of task phrasings, scheduling policies, and priorities </li></ul>Rendezvous
  48. Data patterns
  49. <ul><li>Also called thread-local storage </li></ul><ul><li>Any function in that thread will get the same value; TLS is allocated per thread </li></ul><ul><li>Similar to global storage - unlike global storage, functions in another thread will not get the same value </li></ul><ul><li>Thread specific storage sometimes refers to the private virtual address space of a running task </li></ul>Thread-Specific Storage
  50. <ul><li>Dynamic memory problems: </li></ul><ul><ul><ul><li>nondeterministic timing of memory allocation and de-allocation </li></ul></ul></ul><ul><ul><ul><li>memory fragmentation </li></ul></ul></ul><ul><li>Simple approach to solving both these problems: disallow dynamic memory allocation </li></ul><ul><li>Only used in simple systems with highly predictable and consistent loads </li></ul><ul><li>All objects are allocated during system initialization (the system takes longer to initialize, but it operates well during execution) </li></ul>Static Allocation
  51. <ul><li>Involves creating pools of objects at start-up </li></ul><ul><li>Doesn't address needs for dynamic memory </li></ul><ul><li>The pools are not necessarily initialized at start-up </li></ul><ul><li>The pools are available upon request </li></ul>Pool Allocation
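A minimal sketch of an object pool (the `ObjectPool` class and its `factory` parameter are illustrative names, not from the slides): all objects are created up front, and `acquire`/`release` recycle them instead of allocating.

```python
import queue

class ObjectPool:
    """Pre-allocates `size` objects at start-up; acquire/release reuses them."""
    def __init__(self, factory, size):
        self._free = queue.Queue()
        for _ in range(size):
            self._free.put(factory())   # all allocation happens up front

    def acquire(self):
        return self._free.get()         # blocks if the pool is exhausted

    def release(self, obj):
        self._free.put(obj)             # object is recycled, not destroyed

pool = ObjectPool(factory=dict, size=3)
buf = pool.acquire()
pool.release(buf)                       # the same object goes back for reuse
```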
  52. <ul><li>Memory fragmentation occurs when: </li></ul><ul><ul><ul><li>The order of allocation is independent of the release order </li></ul></ul></ul><ul><ul><ul><li>Memory is allocated in various sizes from the heap </li></ul></ul></ul><ul><li>Used when we cannot tolerate dynamic allocation problems like fragmentation </li></ul><ul><li>Fragmentation-free dynamic memory allocation at the cost of loss of memory usage optimality </li></ul><ul><li>Similar to a dynamic allocation but only allows fixed pre-defined sizes to be allocated </li></ul>Fixed Sized Buffer
  53. <ul><li>Solves memory leaks and dangling pointers </li></ul><ul><li>It does not address memory fragmentation </li></ul><ul><li>Takes the programmer out of the loop </li></ul><ul><li>Adds run-time overhead </li></ul><ul><li>Adds a loss of execution predictability </li></ul>Garbage Collection
  54. <ul><li>Removes memory fragmentation </li></ul><ul><li>Maintains two memory segments in the heap </li></ul><ul><li>Moves live objects from one segment to the other </li></ul><ul><li>The free memory in one of the segments is a contiguous block </li></ul>Garbage Compactor
  55. Resource patterns
  56. Locked structures <ul><li>Structures that use a locking mechanism </li></ul><ul><li>Easy to implement, easy to debug </li></ul><ul><li>Can deadlock </li></ul><ul><li>Do not scale well </li></ul>
  57. Lock-free structures <ul><li>They do not need to lock </li></ul><ul><li>They need hardware support (e.g. compare-and-swap instructions) </li></ul><ul><li>They can “burn” CPU </li></ul><ul><li>Hard to implement and debug </li></ul>
  58. Wait-free structures <ul><li>Same as lock-free structures, but there is a guarantee they would finish in a certain number of steps </li></ul><ul><li>All wait-free structures are lock-free </li></ul><ul><li>Very difficult to implement </li></ul><ul><li>Very few real life applications </li></ul>
  59. Single writer / multi reader <ul><li>Special kind of lock that allows multiple read access to the data but only a single writer (exclusive write access) </li></ul><ul><li>Problems on promoting from read to write (reader starvation, writer starvation) – Scheduler pattern </li></ul>
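Python's standard library has no reader-writer lock, so here is a minimal readers-preference sketch (the `ReadWriteLock` class is illustrative, not a library API): readers share access, the first reader blocks writers, and the last reader lets them in. Note it has no fairness policy, so it exhibits exactly the writer-starvation risk the slide mentions.

```python
import threading

class ReadWriteLock:
    """Minimal single-writer / multi-reader lock. Readers-preference:
    writers can starve if readers keep arriving (no fairness policy)."""
    def __init__(self):
        self._readers = 0
        self._count_lock = threading.Lock()   # protects _readers
        self._write = threading.Lock()        # held while writing, or while
                                              # any reader is active
    def acquire_read(self):
        with self._count_lock:
            self._readers += 1
            if self._readers == 1:
                self._write.acquire()         # first reader blocks writers

    def release_read(self):
        with self._count_lock:
            self._readers -= 1
            if self._readers == 0:
                self._write.release()         # last reader lets writers in

    def acquire_write(self):
        self._write.acquire()                 # exclusive access

    def release_write(self):
        self._write.release()
```

Read-to-write promotion is deliberately absent: a thread holding a read lock that tried `acquire_write` would deadlock itself, which is the promotion problem the slide points at the Scheduler pattern to solve.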
  60. <ul><li>Also known as &quot;Double-Checked Locking Optimization&quot; </li></ul><ul><li>Reduces the overhead of acquiring a lock </li></ul><ul><li>Used for implementing &quot;lazy initialization&quot; in a multi-threaded environment </li></ul><ul><ul><li>If check failed then </li></ul></ul><ul><ul><ul><li>Lock </li></ul></ul></ul><ul><ul><ul><li>If check failed then </li></ul></ul></ul><ul><ul><ul><ul><li>Initialize </li></ul></ul></ul></ul><ul><ul><ul><li>Unlock </li></ul></ul></ul>Double-checked locking
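The check/Lock/check/Initialize/Unlock sequence above, sketched in Python. One caveat worth hedging: this naive form is safe in CPython thanks to the GIL, but in languages with weaker memory models (C++, Java) it needs explicit memory barriers or atomics to be correct.

```python
import threading

_instance = None
_lock = threading.Lock()

def get_instance():
    global _instance
    if _instance is None:             # first check, no lock (fast path)
        with _lock:                   # Lock
            if _instance is None:     # second check, now under the lock
                _instance = object()  # Initialize exactly once
    return _instance                  # Unlock on leaving the with-block

a = get_instance()
b = get_instance()                    # fast path: the lock is never touched
```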
  61. <ul><li>Common memory area addressable by multiple processors </li></ul><ul><li>Almost always involves a combined hardware/software solution </li></ul><ul><li>If the data to be shared is read-only then concurrency protection mechanisms may not be required </li></ul><ul><li>Used when responses to messages and events are not desired or too slow </li></ul>Shared Memory
  62. <ul><li>Deadlock avoidance </li></ul><ul><li>Works in an all-or-none fashion </li></ul><ul><li>Prevents the condition of holding some resources while requesting others </li></ul><ul><li>Allows higher-priority tasks to run if they don't need any of the locked resources </li></ul>Simultaneous Locking
  63. <ul><li>Eliminates deadlocks </li></ul><ul><li>Orders resources and enforces a policy in which resources must be allocated in that order </li></ul><ul><li>If enforced, no circular waiting condition can ever occur </li></ul><ul><li>Resources are explicitly locked and released </li></ul><ul><li>There is the potential for neglecting to unlock a resource </li></ul>Ordered Locking
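A sketch of the ordering policy (the `locked_in_order` helper and the `ORDER` table are illustrative, not from the slides): every thread sorts the locks it needs by one global order before acquiring, so no circular wait can form.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
# One agreed global order over all resources; any consistent key works,
# here a hand-built table keyed by object identity.
ORDER = {id(lock_a): 0, id(lock_b): 1}

def locked_in_order(*locks):
    """Return the locks sorted by the global order; acquiring in this
    order breaks the circular-wait deadlock condition."""
    return sorted(locks, key=lambda l: ORDER[id(l)])

def transfer():
    # Even though the caller names lock_b first, acquisition follows ORDER.
    first, second = locked_in_order(lock_b, lock_a)
    with first:
        with second:
            pass  # critical section using both resources

t1 = threading.Thread(target=transfer)
t2 = threading.Thread(target=transfer)
t1.start(); t2.start()
t1.join(); t2.join()   # completes: both threads lock A then B, never B then A
```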
  64. Exception/error patterns
  65. <ul><li>Work failure </li></ul><ul><li>Deadline expiry </li></ul><ul><li>Resource unavailability </li></ul><ul><li>External trigger </li></ul><ul><li>Constraint violation </li></ul>Exceptions/errors <ul><li>Handling: </li></ul><ul><li>Continue </li></ul><ul><li>Remove work item </li></ul><ul><li>Remove all items </li></ul><ul><li>Recovery: </li></ul><ul><li>no action </li></ul><ul><li>rollback </li></ul><ul><li>compensate </li></ul>
  66. Balking <ul><li>Executes an action on an object only when the object is in a particular state </li></ul><ul><li>An attempt to use the object outside its legal state results in an &quot;Illegal State Exception&quot; </li></ul>
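A sketch of balking (the `WashingMachine` example class is illustrative): the action proceeds only from the legal state, and any other attempt balks by raising immediately rather than waiting.

```python
import threading

class WashingMachine:
    """Balking: start_wash runs only when the machine is idle."""
    def __init__(self):
        self._lock = threading.Lock()
        self._running = False

    def start_wash(self):
        with self._lock:                  # state check and change are atomic
            if self._running:             # wrong state: balk, don't wait
                raise RuntimeError("Illegal State Exception: already running")
            self._running = True

m = WashingMachine()
m.start_wash()        # succeeds: the machine was idle
```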
  67. Triple Modular Redundancy <ul><li>Used when there is no fail-safe state </li></ul><ul><li>Based on an odd number of channels operating in parallel </li></ul><ul><li>The computational results or resulting actuation signals are compared; if there is a disagreement, a two-out-of-three majority wins </li></ul><ul><li>The deviating computation of the third channel is discarded </li></ul>
  68. Watchdog <ul><li>Lightweight and inexpensive </li></ul><ul><li>Minimal coverage </li></ul><ul><li>Watches over the processing of another component </li></ul><ul><li>Usually checks a computation time base … </li></ul><ul><li>… or ensures that computation steps are proceeding in a predefined order </li></ul>
  69. <ul><li>http://www.workflowpatterns.com </li></ul><ul><li>“Real-Time Design Patterns: Robust Scalable Architecture for Real-Time Systems” by Bruce Powel Douglass </li></ul>Multithreading patterns sources
  70. Questions?
  71. Big thank you!
