Java Concurrency in Practice
Deon Huang
Presenter
• Deon
Agenda
• Overview
• Chapter Summary (1~2)
• Concurrency Problem
• Concurrency Solution
• FAQ
• Conclusion
• Advanced Topics
• Chapter Summary (3~5)
Overview
• What is a concurrency problem?
• Why is it important?
• What are threads and processes?
• What is thread safety?
• What is the outline of the book?
Where do concurrency problems come from?
• This is a breakfast shop
• Suppose you are the only cook.
• Only waiter
• Only dish washer
• It’s lunch time
• And you get 3 clients at the same time
Need more workers (Multithreading)
What is a concurrency problem?
• Now you have many cooks
• What happens when you order
• one plate of steamed pork dumplings?
• The answer is: no one knows!
Why is it important?
What is the outline of the book?
• 1.Introduction
• I Fundamentals
• 2.Thread Safety
• 3.Sharing Objects
• 4.Composing Objects
• 5.Building Blocks
• II Structuring Concurrent Applications
• 6.Task Execution
• 7.Cancellation and Shutdown
• 8.Applying Thread Pools
• 9.GUI Applications
• III Liveness, Performance, and Testing
• 10.Avoiding Liveness Hazards
• 11.Performance and Scalability
• 12.Testing Concurrent Programs
• IV Advanced Topics
• 13.Explicit Locks
• 14.Building Custom Synchronizers
• 15.Atomic Variables and Nonblocking Synchronization
• 16.The Java Memory Model
Chapter 1 Introduction Goal
• Understand what kind of problem this book is trying to solve
• Why is it important?
• What is thread?
Background knowledge preparation (Multithreading)
Chapter 1 Introduction
• Coarse-grained communication mechanisms between processes: sockets,
signal handlers, shared memory, semaphores, files.
• Sequential programming model.
• Threads allow multiple flows of control to coexist within a process.
• Threads share process resources such as memory and file handles, but
each has its own program counter, stack, and local variables.
1.2 Benefit of Thread
• Exploiting multiple processors
• Simplicity of task modeling
• Simplified handling of asynchronous events
• More responsive user interface
• Programming abstraction
• Parallelism
• Blocking I/O
• Memory savings
1.2 Benefit of Thread
• How to sell soufflé?
• Program : recipe and plan
• Process : bread store
• Thread : bakers
1.2 Benefit of Thread
Process vs. Thread
• Concept: a process is an instance that executes a program; a thread is the minimum unit of execution
• Consists of: a process consists of a memory space and threads; a thread consists of a program counter, a stack, and a set of registers (and a thread ID)
• Communication: processes use inter-process communication; threads communicate directly
• Memory: processes have separate memory spaces; threads share the process's memory space
• Controlled by: processes are managed by the operating system; threads by the programmer in the program
1.3 Risk of thread
• Safety Hazards : race condition
• Output is dependent on the sequence or timing of other
uncontrollable events.
1.3 Risk of thread
• Safety Hazards : race condition
• Output is dependent on the sequence or timing of other
uncontrollable events.
(Diagram: interleaving of two threads, A and B)
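A minimal sketch (not from the slides) of the race condition described above: two threads incrementing an unsynchronized counter can lose updates, because the outcome depends on how their reads and writes interleave.

```java
// Hypothetical demo of a race condition: the final count is usually less than 200000
// because count++ is three separate actions (read, add, write) that can interleave.
public class RaceConditionDemo {
    static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                count++; // not atomic: two threads can read the same value
            }
        };
        Thread a = new Thread(task, "A");
        Thread b = new Thread(task, "B");
        a.start();
        b.start();
        a.join();
        b.join();
        System.out.println("Expected 200000, got " + count);
    }
}
```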
1.3 Risk of thread
• Liveness Hazards : an activity gets into a state such that it is
permanently unable to make forward progress.
• Deadlock
• Starvation
• Livelock
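A minimal sketch (class and lock names are made up) of the lock-ordering deadlock listed above: thread 1 holds LEFT and waits for RIGHT, thread 2 holds RIGHT and waits for LEFT, so neither can make progress.

```java
// Hypothetical lock-ordering deadlock demo.
public class DeadlockDemo {
    private static final Object LEFT = new Object();
    private static final Object RIGHT = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (LEFT) {
                sleep(100);
                synchronized (RIGHT) { System.out.println("thread 1 done"); }
            }
        }).start();
        new Thread(() -> {
            synchronized (RIGHT) {          // opposite acquisition order
                sleep(100);
                synchronized (LEFT) { System.out.println("thread 2 done"); }
            }
        }).start();
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```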
1.3 Risk of thread
• Performance Hazards : poor service time, responsiveness, throughput,
resource consumption, or scalability.
• Context switches, when the scheduler suspends the active thread
temporarily so another thread can run, are more frequent in
applications with many threads and have significant costs: saving and
restoring execution context, loss of locality, and CPU time spent
scheduling threads instead of running them.
1.4 Threads Are Everywhere
• Key point : thread-safe
• Examples in Java:
• RMI
• JSP
• Servlet
• Timer
• Swing and AWT (GUI application)
Chapter 1 Introduction Review
• Understand what kind of problem this book is trying to solve
• Concurrency issues
• Why is it important?
• Utilize threads for better performance
• What is a thread?
• The smallest unit of execution within a process
Chapter 2 Thread Safety Goal
• What is thread safety?
• How to achieve it?
• Meet Atomicity
• Meet Lock
• Thread safety awareness
Chapter 2 Thread Safety
• A class is thread-safe if it behaves correctly when accessed from
multiple threads.
2.1 Thread-safe
• The key point is managing access to state:
• Don't share the state variable across threads;
• Make the state variable immutable; or
• Use synchronization whenever accessing the state variable.
• Stateless object is always thread-safe
2.2 Atomicity
2.2 Atomicity
• Race conditions can cause problems in lazy initialization and state
modification.
• Operations A and B are atomic with respect to each other if, from the
perspective of a thread executing A, either all of B has executed or
none of it has.
• Compound actions (check-then-act, read-modify-write) must be made
atomic to avoid race conditions.
• To preserve state consistency, update related state variables in a
single atomic operation.
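A minimal sketch of a compound action made atomic (the class name is illustrative): ++hits is a read-modify-write compound action, and AtomicLong (or a synchronized method) turns the update into a single atomic operation.

```java
import java.util.concurrent.atomic.AtomicLong;

public class HitCounter {
    // Broken version for comparison: ++ is read, add, write -- not atomic.
    // private long hits;
    // public void hit() { ++hits; }

    // Thread-safe version: the increment happens as one atomic operation.
    private final AtomicLong hits = new AtomicLong();

    public void hit() { hits.incrementAndGet(); }

    public long getHits() { return hits.get(); }
}
```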
2.3 Locking
• Manage access to state
2.3.1 Intrinsic Locks
• Java provides a built-in locking mechanism for enforcing atomicity:
the synchronized block.
• A synchronized block has two parts: a reference to an object that serves
as the lock, and a block of code guarded by that lock.
• Every Java object can implicitly act as a lock for purposes of
synchronization.
2.3.1 Intrinsic Locks
• Example: a synchronized block from the NiFi source code
2.3.1 Intrinsic Locks
• Example: a synchronized block from the NiFi source code
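The NiFi snippet itself is not reproduced here; below is a generic sketch of the same idiom (class and field names are made up): a synchronized block names the lock object explicitly and guards every access to the shared state with that same lock.

```java
import java.util.ArrayList;
import java.util.List;

public class EventLog {
    private final Object lock = new Object();   // the lock object
    private final List<String> events = new ArrayList<>();

    public void add(String event) {
        synchronized (lock) {                   // block of code guarded by 'lock'
            events.add(event);
        }
    }

    public int size() {
        synchronized (lock) {                   // same lock for every access
            return events.size();
        }
    }
}
```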
2.3.2 Reentrancy
• Reentrancy means that locks are acquired on a per-thread rather
than per-invocation basis.
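A minimal sketch of why reentrancy matters, along the lines of the book's Widget/LoggingWidget example: the subclass method holds the lock and calls the superclass method, which acquires the same lock again; without reentrancy this would deadlock.

```java
public class Widget {
    public synchronized void doSomething() {
        System.out.println("Widget.doSomething");
    }
}

class LoggingWidget extends Widget {
    @Override
    public synchronized void doSomething() {
        System.out.println("LoggingWidget.doSomething");
        // Re-acquires the lock this thread already holds; OK because intrinsic locks are reentrant.
        super.doSomething();
    }
}
```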
2.4 Guarding State with locks
• For each mutable state variable that may be accessed by more than
one thread, all accesses to that variable must be performed with the
same lock held.
• Every shared, mutable variable should be guarded by exactly one
lock.
2.5 Liveness and Performance
• When and why use lock
2.5 Liveness and Performance
2.5 Liveness and Performance
2.5 Liveness and Performance
• Lock striping : use finer-grained locks
2.5 Liveness and Performance
• Simplicity versus performance
• Avoid holding locks during lengthy computations like network or
console I/O.
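A hedged sketch of the simplicity-versus-performance trade-off above (the service and method names are made up): hold the lock only around the shared-state accesses, not around the slow work.

```java
import java.util.HashMap;
import java.util.Map;

public class LookupService {
    private final Map<String, String> cache = new HashMap<>();

    public String lookup(String key) {
        String cached;
        synchronized (this) {                  // short critical section: just the map read
            cached = cache.get(key);
        }
        if (cached != null) {
            return cached;
        }
        // Lengthy work done WITHOUT holding the lock; two threads may
        // occasionally fetch the same key, which is wasteful but safe.
        String fetched = slowRemoteFetch(key);
        synchronized (this) {                  // short critical section: just the map write
            cache.put(key, fetched);
        }
        return fetched;
    }

    private String slowRemoteFetch(String key) {
        return "value-for-" + key;             // stand-in for network or console I/O
    }
}
```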
Chapter 2 Thread Safety Review
• What is thread safety?
• Behaves correctly when accessed from multiple threads.
• How to achieve it?
• Manage access to state
• Meet Atomicity
• Actions and sets of actions have indivisible effects
• Meet Lock
• A tool for controlling access to a shared resource by multiple threads
• Thread safety awareness
• Simplicity versus performance
Examples
• NiFi current task
Concurrency Problem
• Safety Hazards: race condition, atomicity, visibility, ordering
• Liveness Hazards: deadlock, starvation, livelock
• Performance Hazards: spin waits, responsiveness, throughput, lock contention, Amdahl's Law, Gustafson's Law
• Starvation: perpetually denied access to resources
• Deadlock: e.g., lock-ordering deadlock
• Livelock: not blocked, but still cannot make progress
Concurrency Solution
• Immutable: immutable objects are as safe to share as stateless objects
• Lock: guard the critical section by holding a lock
• Synchronized Collection: thread-safe, but only one thread can access it at a time
• Concurrent Collection: thread-safe, with lock striping for better performance and concurrency
• Future: result of an asynchronous computation
• Thread Confinement: the object is confined to a single thread, e.g., via ThreadLocal
• Volatile: lightweight synchronization mechanism that only provides visibility
• CAS: optimistic technique; it proceeds with the update in the hope of success
Concurrency Skill
• Intrinsic Lock: every object has a built-in lock
• Read-Write Lock: locking strategy that prevents writers and readers from overlapping
• Semaphore: controls how many threads can access a resource at the same time
• Latch: gate that waits for an event before releasing threads
• Barrier: gate that waits for all participating threads before releasing them
• Happens-Before: ordering relationship between actions in different threads
FAQ
• Multithreading: what is the point of more threads than cores?
• Thread vs. process?
• What do threads consist of?
• What do threads share?
• Data segment (global variables, static data)
• Address space
• Code segment
• I/O: if a file is open, all threads can read from and write to it
• Process ID of the parent
• The heap
• What are deadlock, starvation, race conditions, and livelock?
• What are volatile variables?
Advanced Topics
• 1.Introduction
• I Fundamentals
• 2.Thread Safety
• 3.Sharing Objects
• 4.Composing Objects
• 5.Building Blocks
• II Structuring Concurrent Applications
• 6.Task Execution
• 7.Cancellation and Shutdown
• 8.Applying Thread Pools
• 9.GUI Applications
• III Liveness, Performance, and Testing
• 10.Avoiding Liveness Hazards
• 11.Performance and Scalability
• 12.Testing Concurrent Programs
• IV Advanced Topics
• 13.Explicit Locks
• 14.Building Custom Synchronizers
• 15.Atomic Variables and Nonblocking Synchronization
• 16.The Java Memory Model
Conclusion
• Concept of concurrency problem
• Meet Java Concurrency in Practice
• Meet concurrency problem and solution
• Meet FAQ
Chapter 3 Sharing Objects
• Define the chapter goal
3.1 Visibility
3.1 Visibility
• Reordering : the compiler or processor may reorder instructions for
optimization, so never assume anything about the order in which memory
actions happen in insufficiently synchronized multithreaded programs.
• Locking can guarantee both visibility and atomicity.
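A sketch of the book's NoVisibility-style example, reconstructed from memory: without synchronization, the reader thread may never see ready become true, or may see ready before number because of reordering.

```java
public class NoVisibility {
    private static boolean ready;
    private static int number;

    private static class ReaderThread extends Thread {
        @Override
        public void run() {
            while (!ready) {            // may loop forever: the write to 'ready' may never become visible
                Thread.yield();
            }
            System.out.println(number); // may print 0 if the writes were reordered
        }
    }

    public static void main(String[] args) {
        new ReaderThread().start();
        number = 42;
        ready = true;
    }
}
```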
Chapter 3 Sharing Object
3.1.4 Volatile Variables
• Background :
Synchronized has relatively large overhead.
Sometimes we want lighter synchronization.
3.1.4 Volatile Variables
• Implementation : when a field is declared volatile, the compiler and
runtime are put on notice that this variable is shared and that
operations on it should not be reordered with other memory operations.
• Volatile variables are not cached in registers or in caches where they
are hidden from other processors, so a read of a volatile variable always
returns the most recent write by any thread.
• Yet accessing a volatile variable performs no locking and so cannot
cause the executing thread to block, making volatile variables a
lighter-weight synchronization mechanism than synchronized.
3.1.4 Volatile Variables
3.1.4 Volatile Variables
• Example:
• Limitation:
Locking can guarantee both visibility and atomicity;
volatile only provides visibility.
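A minimal sketch of the typical usage (the worker class is made up): a status flag written by one thread and read by others. Volatile makes the flag visible without locking, but it would not make a compound action such as counter++ atomic.

```java
public class Worker implements Runnable {
    private volatile boolean shutdownRequested = false; // visibility guaranteed, no locking

    public void shutdown() {
        shutdownRequested = true;   // single write from one thread: a safe use of volatile
    }

    @Override
    public void run() {
        while (!shutdownRequested) {
            doWork();
        }
    }

    private void doWork() { /* ... */ }
}
```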
3.2 Publication and Escape
• Two common problems with unsafe publication
3.2 Publication and Escape
• An inner class holds a hidden reference to its enclosing instance, so
never publish one from the constructor.
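A sketch of the escape described above, modeled on the book's ThisEscape example; Event, EventListener, and EventSource are illustrative interfaces defined only for this sketch.

```java
interface Event { }

interface EventListener { void onEvent(Event e); }

interface EventSource { void registerListener(EventListener listener); }

public class ThisEscape {
    private int importantState;

    public ThisEscape(EventSource source) {
        // BAD: the listener captures a hidden reference to 'this',
        // publishing the object before the constructor has finished.
        source.registerListener(e -> System.out.println(importantState));
        importantState = 42;        // a listener invoked before this line sees 0
    }
}
```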
3.2 Publication and Escape
• Skill: do not let the this reference escape during construction.
• In practice: create the thread (or listener) in the constructor, but
publish and start it later, e.g., from a factory or start method (see
the sketch below).
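A sketch of the fix, modeled on the book's SafeListener idiom and reusing the illustrative EventSource/EventListener types from the previous sketch: construct the object fully first, then register the listener from a factory method. The same idea applies to threads created in a constructor, which should only be started afterwards.

```java
// Uses the EventSource/EventListener interfaces from the ThisEscape sketch above.
public class SafeListener {
    private final EventListener listener;

    private SafeListener() {
        listener = e -> System.out.println("event received");
    }

    // 'this' is only published after construction has completed.
    public static SafeListener newInstance(EventSource source) {
        SafeListener safe = new SafeListener();
        source.registerListener(safe.listener);
        return safe;
    }
}
```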
3.3 Thread Confinement
• If data is only accessed from a single thread, no synchronization is
needed.
• The ThreadLocal class helps, but even with it, it is still the
programmer's responsibility to ensure that thread-confined objects
do not escape from their intended thread.
• A good example is combining thread confinement with volatile
variables: only a single thread writes the variable, and volatile
provides visibility to the reading threads.
3.3 Thread Confinement
3.3 Thread Confinement
• A more formal means of maintaining thread confinement is
ThreadLocal, which allows you to associate a per thread value with
a value holding object
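A minimal sketch of per-thread confinement with ThreadLocal (the SimpleDateFormat example is a common illustration, not taken from the slides): each thread gets its own formatter instance, so a non-thread-safe class can be used without locking.

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateFormatHolder {
    // SimpleDateFormat is not thread-safe; confine one instance to each thread.
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static String format(Date date) {
        return FORMAT.get().format(date); // uses the calling thread's own instance
    }
}
```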
3.4 Immutability
• An immutable object is one whose state cannot be changed after
construction.
• The final keyword is a more limited version of the const mechanism
from C++.
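A minimal sketch of an immutable class, loosely modeled on the book's ThreeStooges example: all fields are final, the state is set once in the constructor and never leaked in mutable form, so instances can be shared freely between threads.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public final class ThreeStooges {
    private final Set<String> stooges;

    public ThreeStooges() {
        Set<String> names = new HashSet<>();
        names.add("Moe");
        names.add("Larry");
        names.add("Curly");
        stooges = Collections.unmodifiableSet(names); // never exposed in mutable form
    }

    public boolean isStooge(String name) {
        return stooges.contains(name);
    }
}
```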
3.5 Safe Publication
3.5 Safe Publication
3.5 Safe Publication Idioms
Chapter 3 Review
Chapter 4 Composing Objects
• This chapter covers patterns for structuring classes that make it
easier to make them thread-safe and to maintain them without
accidentally undermining their safety guarantees.
4.1 Designing a thread safe class
4.1 Designing a thread safe class
4.1 Designing a thread safe class
• Gathering synchronization requirements
• State-dependent operations:
• The built-in mechanisms for efficiently waiting for a condition to become
true, wait and notify, are tightly bound to intrinsic locking and can be
difficult to use correctly.
• It is often easier to use existing library classes, such as blocking queues
or semaphores, to provide the desired state-dependent behavior (e.g.,
BlockingQueue, Semaphore).
4.2. Instance Confinement
4.2. Instance Confinement
4.2.1. The Java Monitor Pattern
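The slide shows only the heading; below is a minimal sketch of the Java monitor pattern with a private lock object (the counter class is illustrative): all mutable state is guarded by one lock that client code cannot reach.

```java
public class PrivateLockCounter {
    private final Object lock = new Object(); // private lock: clients cannot interfere with it
    private long value;                       // guarded by 'lock'

    public void increment() {
        synchronized (lock) {
            value++;
        }
    }

    public long get() {
        synchronized (lock) {
            return value;
        }
    }
}
```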
4.4 Adding Functionality to Existing Thread-safe Classes
• Locking the wrong object
4.4.1 Client-side Locking
• To make this approach work, we have to use the same lock that the
List uses, by using client-side locking or external locking.
4.4.1 Client-side Locking
• To make this approach work, we have to use the same lock that the
List uses, by using client-side locking or external locking.
4.4.1 Client-side Locking
• To make this approach work, we have to use the same lock that the
List uses, by using client-side locking or external locking.
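A hedged sketch of the point above, modeled on the book's ListHelper put-if-absent example: the helper must synchronize on the list itself, the same lock the Collections.synchronizedList wrapper uses internally, not on the helper object.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ListHelper<E> {
    public final List<E> list = Collections.synchronizedList(new ArrayList<E>());

    // WRONG: a synchronized method here would use the helper's lock,
    // which is not the lock the list uses internally.
    // public synchronized boolean putIfAbsent(E x) { ... }

    // RIGHT: client-side locking on the list's own lock makes contains+add atomic.
    public boolean putIfAbsent(E x) {
        synchronized (list) {
            boolean absent = !list.contains(x);
            if (absent) {
                list.add(x);
            }
            return absent;
        }
    }
}
```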
4.4.2 Composition
• 4.5. Documenting Synchronization Policies
Chapter 4 Review
• To make this approach work, we have to use the same lock that the
List uses, by using client-side locking or external locking.
Chapter 5 Building Blocks
• This chapter covers the most useful concurrent building blocks,
especially those introduced in Java 5.0 and Java 6, and some patterns
for using them to structure concurrent applications.
5.1 Synchronized Collections
• Synchronized collections are not suitable for compound actions
• Common compound actions on collections include:
• iteration (repeatedly fetch elements until the collection is exhausted),
• navigation (find the next element after this one according to some order),
• conditional operations such as put-if-absent
(check if a Map has a mapping for key K, and if not, add the mapping).
5.1 Synchronized Collections
5.1 Synchronized Collections
• Use client-side locking for compound actions on a synchronized
collection
5.1 Synchronized Collections
• Use client-side locking for compound actions on a synchronized
collection
5.1 Synchronized Collections
• The same problem arises for iteration over a synchronized collection
(see the sketch below).
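A minimal sketch of client-side locking for the compound actions and iteration mentioned above (getLast mirrors the book's Vector example; printAll assumes a Collections.synchronizedList wrapper):

```java
import java.util.List;
import java.util.Vector;

public class SyncCollectionUtil {
    // Compound action: check size, then read; hold the collection's own lock across both steps.
    public static <E> E getLast(Vector<E> v) {
        synchronized (v) {
            int last = v.size() - 1;
            return v.get(last);
        }
    }

    // Iteration: lock the (synchronized) collection for the whole traversal
    // to avoid ConcurrentModificationException from concurrent modification.
    public static void printAll(List<String> list) {
        synchronized (list) {
            for (String s : list) {
                System.out.println(s);
            }
        }
    }
}
```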
5.2 Concurrent Collections
• This chapter covers the most useful concurrent building blocks,
especially those introduced in Java 5.0 and Java 6, and some patterns
for using them to structure concurrent applications.
Chapter 5 Building Blocks
• ConcurrentHashMap, a replacement for synchronized hash-based Map
implementations, and CopyOnWriteArrayList, a replacement for
synchronized List implementations
• Java 5.0 also adds two new collection types, Queue and BlockingQueue.
• BlockingQueue extends Queue to add blocking insertion and retrieval
operations.
• Blocking queues are extremely useful in producer-consumer designs.
5.2.1. ConcurrentHashMap
• Instead of synchronizing every method on a common lock, restricting
access to a single thread at a time, it uses a finer-grained locking
mechanism called lock striping.
• Synchronized HashMap vs. ConcurrentHashMap:
• Locking: a synchronized HashMap uses one intrinsic lock (client-side locking is possible);
ConcurrentHashMap uses lock striping, giving higher throughput and better scalability
• Null keys/values: allowed in a synchronized HashMap; not allowed in ConcurrentHashMap
• isEmpty(), size(): exact in a synchronized HashMap; results are approximations in ConcurrentHashMap
• Iterator: fail-fast in a synchronized HashMap; weakly consistent in ConcurrentHashMap
5.2.1. ConcurrentHashMap
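A minimal sketch of ConcurrentHashMap usage (the word-counter class is made up): the atomic map operations added in Java 8, such as merge and putIfAbsent, replace the client-side locking needed with synchronized collections.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class WordCounter {
    private final ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<>();

    public void count(String word) {
        // merge() performs the read-modify-write atomically; no external lock needed.
        counts.merge(word, 1, Integer::sum);
    }

    public int get(String word) {
        return counts.getOrDefault(word, 0);
    }
}
```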
5.3. Blocking Queues and the Producer-Consumer Pattern
• Blocking queues also provide an offer method, which returns a failure
status if the item cannot be enqueued
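A minimal producer-consumer sketch with a bounded BlockingQueue: put blocks when the queue is full and take blocks when it is empty, whereas offer (mentioned above) would return false instead of blocking.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10); // bounded buffer

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put(i);                                    // blocks if the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    System.out.println("consumed " + queue.take()); // blocks if the queue is empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}
```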
5.3.3. Deques and Work Stealing
• Java 6 also adds another two collection types, Deque (pronounced
"deck")
• A Deque is a double ended queue that allows efficient insertion and
removal from both the head and the tail. Implementations include
ArrayDeque and LinkedBlockingDeque.
• Just as blocking queues lend themselves to the producer
consumer pattern, deques lend themselves to a related pattern
called work stealing.
5.4. Blocking and Interruptible Methods
• When a thread blocks, it is usually suspended and placed in one of
the blocked thread states (BLOCKED, WAITING, or TIMED_WAITING).
• When a method can throw InterruptedException, it is telling you
that it is a blocking method, and further that if it is interrupted, it
will make an effort to stop blocking early.
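A minimal sketch of the standard responses to InterruptedException, along the lines of the book's TaskRunnable example: either propagate the exception, or, when you cannot (e.g., inside run()), restore the interrupt status so code further up the call stack can see it.

```java
import java.util.concurrent.BlockingQueue;

public class TaskRunner implements Runnable {
    private final BlockingQueue<Runnable> queue;

    public TaskRunner(BlockingQueue<Runnable> queue) {
        this.queue = queue;
    }

    @Override
    public void run() {
        try {
            queue.take().run();               // take() is a blocking, interruptible method
        } catch (InterruptedException e) {
            // Cannot rethrow from run(): restore the interrupt status instead of swallowing it.
            Thread.currentThread().interrupt();
        }
    }
}
```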
5.4. Blocking and Interruptible Methods
5.5. Synchronizers
• A synchronizer is any object that coordinates the control flow of
threads based on its state.
• Blocking queues can act as synchronizers; other types of
synchronizers include semaphores, barriers, and latches.
• All synchronizers share certain structural properties:
• they encapsulate state that determines whether threads arriving at the
synchronizer should be allowed to pass or forced to wait,
• provide methods to manipulate that state, and
• provide methods to wait efficiently for the synchronizer to enter the desired
state.
5.5.1 Latches
• Synchronizer that can delay threads until it reaches its terminal state
• 1. Ensuring that a computation does not proceed until the resources it
needs have been initialized.
• 2. Ensuring that a service does not start until other services on which it
depends have started.
• 3. Waiting until all the parties involved in an activity are ready to
proceed.
• Ex: CountDownLatch
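A minimal sketch modeled on the book's TestHarness idea: a start latch releases all worker threads at once, and an end latch lets the main thread wait until every worker has finished.

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 5;
        CountDownLatch startGate = new CountDownLatch(1);
        CountDownLatch endGate = new CountDownLatch(workers);

        for (int i = 0; i < workers; i++) {
            int id = i;
            new Thread(() -> {
                try {
                    startGate.await();                     // wait for the starting event
                    System.out.println("worker " + id + " running");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    endGate.countDown();                   // signal completion
                }
            }).start();
        }

        startGate.countDown();   // release all workers at once
        endGate.await();         // wait until every worker is done
        System.out.println("all workers finished");
    }
}
```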
5.5.2 FutureTask
• FutureTask also acts like a latch. A computation represented by a
FutureTask is implemented with a Callable, the result-bearing
equivalent of Runnable, and can be in one of three states: waiting
to run, running, or completed.
• The behavior of Future.get depends on the state of the task.
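A minimal sketch of FutureTask acting as a latch: get() blocks until the computation completes (the sleep stands in for an expensive computation).

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

public class PreloadDemo {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        FutureTask<Long> task = new FutureTask<>(new Callable<Long>() {
            @Override
            public Long call() throws Exception {
                Thread.sleep(500);       // stand-in for an expensive computation
                return 42L;
            }
        });

        new Thread(task).start();        // waiting to run -> running -> completed
        Long result = task.get();        // blocks until the task has completed
        System.out.println("result = " + result);
    }
}
```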
5.5.3. Semaphores
• Counting semaphores are used to control the number of activities
that can access a certain resource or perform a given action at the
same time [CPJ 3.4.1].
• Counting semaphores can be used to implement resource pools or to
impose a bound on a collection.
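A minimal sketch modeled on the book's BoundedHashSet example: a counting semaphore imposes a bound on how many elements the collection may hold, blocking add when the bound is reached.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.Semaphore;

public class BoundedHashSet<T> {
    private final Set<T> set = Collections.synchronizedSet(new HashSet<T>());
    private final Semaphore available;

    public BoundedHashSet(int bound) {
        this.available = new Semaphore(bound);    // at most 'bound' elements at a time
    }

    public boolean add(T o) throws InterruptedException {
        available.acquire();                      // blocks when the bound is reached
        boolean wasAdded = false;
        try {
            wasAdded = set.add(o);
            return wasAdded;
        } finally {
            if (!wasAdded) {
                available.release();              // element already present: return the permit
            }
        }
    }

    public boolean remove(T o) {
        boolean wasRemoved = set.remove(o);
        if (wasRemoved) {
            available.release();
        }
        return wasRemoved;
    }
}
```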
5.5.4. Barriers
• Barriers are similar to latches in that they block a group of threads
until some event has occurred [CPJ 4.4.3].
• The key difference is that with a barrier, all the threads must come
together at a barrier point at the same time in order to proceed.
• Threads call await when they reach the barrier point, and await
blocks until all the threads have reached the barrier point.
• If all threads meet at the barrier point, the barrier has been
successfully passed, in which case all threads are released and the
barrier is reset so it can be used again.
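A minimal CyclicBarrier sketch (the step counts are arbitrary): each thread finishes one step of work and then awaits at the barrier; when the last thread arrives, the barrier action runs and all threads proceed to the next step together.

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class BarrierDemo {
    public static void main(String[] args) {
        int parties = 3;
        CyclicBarrier barrier = new CyclicBarrier(parties,
                () -> System.out.println("--- all threads reached the barrier ---"));

        for (int i = 0; i < parties; i++) {
            int id = i;
            new Thread(() -> {
                try {
                    for (int step = 0; step < 2; step++) {
                        System.out.println("thread " + id + " finished step " + step);
                        barrier.await();   // blocks until all parties arrive; the barrier then resets
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```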
5.5.4. Barriers
• A latch waits for an event (e.g., thread A finishing), while a barrier
waits for the other threads (e.g., everyone arriving at 7 o'clock).
Reference
• https://www.studytonight.com/operating-system/multithreading
• https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/4_Threads.html
• https://www.quora.com/What-is-the-difference-between-a-process-and-a-thread
• https://stackoverflow.com/questions/41632073/do-threads-share-local-variables