Concurrency Grab Bag
More Gotchas, Tips, and Patterns for Practical Concurrency
Sangjin Lee & Debashis Saha, eBay Inc.
Agenda
- Introduction
- Patterns & anti-patterns
  - Warm-up: "double-checked locking" on collections
  - Many readers, few writers
  - Many writers, few readers
- Bonus: configuring a ThreadPoolExecutor
- Closing...
Introduction
- The main goal is two-fold: correctness first, performance/scalability next
- Problems tend to repeat themselves: anti-patterns work as visual "crutches" to spot bad smells
"Double-checked locking" on collection
- Initialize a collection lazily

    class Unsafe {
        private Map<String,Object> map = null;

        public void useMap() {
            if (map == null) {
                initMap();
            }
            // read the map; get(), iterate, ...
        }

        private synchronized void initMap() {
            if (map == null) {
                map = new HashMap<String,Object>();
                // populate the map with initial data
            }
        }
    }
"Double-checked locking" on collection
- It's worse than the real double-checked locking pattern
- Why would one do this?
  - Delay the expensive operation of populating the data
  - You don't want to incur a penalty on reads: once the map is set up, it's read-only
- But is laziness really necessary?
"Double-checked locking" on collection
- "Eager" fix

    class Safe {
        private final Map<String,Object> map;

        public Safe() {
            map = new HashMap<String,Object>();
            // populate the map with initial data
        }

        public void useMap() {
            // read the map; get(), iterate, ...
        }
    }
"Double-checked locking" on collection
- Fix using volatile if the data is optional & large

    class Safe {
        private volatile Map<String,Object> map = null;

        public void useMap() {
            if (map == null) {
                initMap();
            }
            // read the map; get(), iterate, ...
        }

        private synchronized void initMap() {
            if (map == null) {
                Map<String,Object> temp = new HashMap<String,Object>();
                // populate temp with initial data
                map = temp; // make it available after it's ready
            }
        }
    }
Many readers, few writers
- Use cases: change data only on demand (e.g. configuration), ...
- Implementation choices
  - Synchronized data structure
  - Concurrent collections (e.g. ConcurrentHashMap)
  - ReadWriteLock
  - "Copy-on-write"
Many readers, few writers
- Example: using synchronization

    class Synchronized {
        private final List<String> list = new ArrayList<String>();

        // the entire iteration must be synchronized
        public synchronized void iterateOnList() {
            for (String s: list) {
                // do something with s
            }
        }

        public synchronized void add(String value) {
            list.add(value);
        }
    }
Many readers, few writers
- Example: using ReadWriteLock

    class UsingReadWriteLock {
        private final List<String> list = new ArrayList<String>();
        private final ReadWriteLock lock = new ReentrantReadWriteLock();

        public void iterateOnList() {
            lock.readLock().lock();
            try {
                for (String s: list) {
                    // do something with s
                }
            } finally {
                lock.readLock().unlock();
            }
        }
        // continued...
Many readers, few writers
- Example: using ReadWriteLock

        // continued
        public void add(String value) {
            lock.writeLock().lock();
            try {
                list.add(value);
            } finally {
                lock.writeLock().unlock();
            }
        }
    }
Many readers, few writers
- Copy-on-write
  - If writes are truly few and far between, and you want reads to be as fast as possible, copy-on-write is an option
  - You copy and replace the entire data on every write
  - You eliminate synchronization on reads, and shift the burden to writes
  - Writes usually become much more expensive
  - Example: java.util.concurrent.CopyOnWriteArrayList
Many readers, few writers
- Example: using copy-on-write

    class CopyOnWrite {
        private volatile List<String> list = new ArrayList<String>();

        public void iterateOnList() { // no locking needed
            for (String s: list) {
                // do something with s
            }
        }

        public synchronized void add(String value) { // need mutual exclusion
            List<String> copy = new ArrayList<String>(list); // create a copy
            copy.add(value);
            list = copy;
        }
    }
Many readers, few writers
- What's wrong with this?

    class BadCopyOnWrite {
        private volatile List<String> list = new ArrayList<String>();

        public void iterateOnList() { // no locking needed
            for (int i = 0; i < list.size(); i++) {
                String s = list.get(i);
                // do something with s
            }
        }

        public synchronized void add(String value) { // need mutual exclusion
            List<String> copy = new ArrayList<String>(list); // create a copy
            copy.add(value);
            list = copy;
        }
    }

- The index-based loop re-reads the volatile list field on every size() and get() call, so a concurrent replacement of the list mid-loop makes the iteration mix two different lists, skipping or repeating elements. Read the field once into a local variable (which the for-each loop effectively does) instead.
Many readers, few writers
- Of course you can simply use CopyOnWriteArrayList!

    class CopyOnWrite2 {
        private final List<String> list = new CopyOnWriteArrayList<String>();

        public void iterateOnList() { // no locking needed
            for (String s: list) {
                // do something with s
            }
        }

        public void add(String value) {
            list.add(value);
        }
    }
Many readers, few writers
- For Maps, copy-on-write is less useful, as ConcurrentHashMap is usually good enough
- ReadWriteLock is an option, but it is less concurrent and performs worse than ConcurrentHashMap
- Copy-on-write has the best read performance
Many readers, few writers
- Copy-on-write: caveats
  - The write performance
  - The staleness behavior should be acceptable (it usually is)
  - The direct reference to the underlying data that is copied should not escape the object
    - Stale data
    - Memory leaks
Many readers, few writers
- What should we use?
  - If the (read) concurrency is low, synchronization is often good enough
  - Choose concurrent collections (ConcurrentHashMap, etc.) if applicable
  - Use copy-on-write if concurrent collections are not applicable and write performance is not a concern
Many readers, few writers
- How about copy-on-write on MULTIPLE variables?
Many readers, few writers
- Multi-variable example: using synchronization

    class Synchronized {
        private Map<String,String> current = new HashMap<String,String>();
        private Map<String,String> previous = null;

        public synchronized void shift() {
            previous = current;
            current = new HashMap<String,String>();
        }

        public synchronized void putValue(String key, String value) {
            current.put(key, value);
        }

        public synchronized String getValue(String key) {
            return current.get(key);
        }
    }
Many readers, few writers
- Copy-on-write on multiple variables
  - Use a container class with those variables
  - Do a volatile copy-and-replace with the container object
Many readers, few writers
- Multi-variable example: use a container class

    class ShiftingWindow {
        final Map<String,String> current;
        final Map<String,String> previous;

        public ShiftingWindow(Map<String,String> c, Map<String,String> p) {
            current = c;
            previous = p;
        }
    }
Many readers, few writers
- Multi-variable example: use a container class

    class CopyOnWrite {
        private volatile ShiftingWindow window =
            new ShiftingWindow(new ConcurrentHashMap<String,String>(), null);

        public synchronized void shift() { // copy on write
            ShiftingWindow newWindow =
                new ShiftingWindow(new ConcurrentHashMap<String,String>(), window.current);
            window = newWindow;
        }

        public void putValue(String key, String value) { // no locking
            window.current.put(key, value);
        }

        public String getValue(String key) { // no locking
            return window.current.get(key);
        }
    }
Many writers, few readers
- Use cases: logging, counters, statistics, ...
  - Produce secondary data (e.g. URL counts) from primary operations (serving URLs)
  - Many writers: all servlet threads will update the data frequently
  - Few readers: the data will be read on demand (reporting) or periodically
  - Impact on the primary operations must be minimized
Many writers, few readers
- Implementation choices
  - Synchronized data structure
  - ConcurrentHashMap (for a map or set)
  - Asynchronous (background) processor
Many writers, few readers
- Synchronized data structure
  - Not recommended
  - Can induce a hotly contended lock under a high level of concurrency, and turn into a scalability hot spot
- ConcurrentHashMap
  - Normally the best solution
  - Scales well under a high level of concurrency
- Asynchronous (background) processor
  - Useful pattern if ConcurrentHashMap is not an option or write operations are serial in nature
Many writers, few readers
- Synchronized data structure

    class SynchronizedCounter {
        private final Map<String,Integer> map = new HashMap<String,Integer>();

        public synchronized void addCount(String page) {
            Integer value = map.get(page);
            value = (value == null) ? 1 : value + 1;
            map.put(page, value);
        }

        public synchronized int getCount(String page) {
            Integer value = map.get(page);
            return (value == null) ? 0 : value;
        }
    }
Many writers, few readers
- ConcurrentHashMap

    class ConcurrentHashMapCounter {
        private final ConcurrentMap<String,AtomicInteger> map =
            new ConcurrentHashMap<String,AtomicInteger>();

        public void addCount(String page) {
            AtomicInteger value = map.get(page);
            if (value == null) {
                value = new AtomicInteger(0);
                AtomicInteger old = map.putIfAbsent(page, value);
                if (old != null) {
                    value = old;
                }
            }
            value.incrementAndGet();
        }
        // continued...
Many writers, few readers
- ConcurrentHashMap

        // continued
        public int getCount(String page) {
            AtomicInteger value = map.get(page);
            return (value == null) ? 0 : value.get();
        }
    }
Many writers, few readers
- Asynchronous (background) processor
  - A single background processor thread owns the data
  - Primary threads produce tasks for the background processor
  - Writes and reads are actually done on the background processor thread
Many writers, few readers
- Asynchronous (background) processor: benefits
  - Latency on the primary threads is minimized
  - Contention is greatly reduced: can yield much better throughput than synchronization
  - Trivially thread safe: achieves safety via thread confinement
  - Example: logging to disk/console
Many writers, few readers
- Asynchronous (background) processor: caveats
  - The data structure should not escape the background thread
  - The actual tasks should be thread-agnostic
  - Performs poorly compared with a more concurrent solution
  - Code becomes a bit more complicated
  - You need to manage saturation: tasks may be produced faster than the processor can handle them
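The saturation caveat above can be handled with a bounded task queue. A minimal sketch (not from the slides; the factory method name is made up) that blocks producers when the queue is full, instead of letting it grow without limit. Note that the stock CallerRunsPolicy would be wrong for this pattern, because it would run tasks on producer threads and break the thread confinement the pattern relies on:

```java
import java.util.concurrent.*;

public class BoundedProcessor {
    // A single-thread executor with a bounded queue: when the queue is full,
    // the rejection handler blocks the producer until space frees up,
    // providing back-pressure while keeping all tasks on the one worker thread.
    static ExecutorService newBoundedSingleThreadExecutor(int capacity) {
        return new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(capacity),
                (task, pool) -> {
                    try {
                        pool.getQueue().put(task); // block until space is available
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        throw new RejectedExecutionException(e);
                    }
                });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor = newBoundedSingleThreadExecutor(2);
        System.out.println(executor.submit(() -> 42).get());
        executor.shutdown();
    }
}
```

One caveat of this sketch: a `put` that races with shutdown can silently drop the task, which is usually acceptable for best-effort statistics but not for correctness-critical work.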
Many writers, few readers
- Asynchronous (background) processor

    class BackgroundCounter {
        // background thread
        private final ExecutorService executor = Executors.newSingleThreadExecutor();
        // map is exclusively used by the executor thread
        private final Map<String,Integer> map = new HashMap<String,Integer>();

        public void addCount(String page) {
            executor.execute(new AddTask(page));
        }

        public int getCount(String page) {
            Future<Integer> future = executor.submit(new GetTask(page));
            return future.get(); // exception handling omitted
        }
        // continued...
Many writers, few readers
- Asynchronous (background) processor

        // continued
        private class AddTask implements Runnable {
            private final String page;
            AddTask(String page) { this.page = page; }
            public void run() {
                Integer value = map.get(page);
                value = (value == null) ? 1 : value + 1;
                map.put(page, value);
            }
        }
        // continued...
Many writers, few readers
- Asynchronous (background) processor

        // continued
        private class GetTask implements Callable<Integer> {
            private final String page;
            GetTask(String page) { this.page = page; }
            public Integer call() {
                Integer value = map.get(page);
                return (value == null) ? 0 : value;
            }
        }
    }
Configuring a ThreadPoolExecutor
- The right configuration, one that fits your use case and demand, is extremely important
- Badly configured ThreadPoolExecutors cause exceptions and performance issues
- RejectedExecutionExceptions, anyone?
Configuring a ThreadPoolExecutor
- Simple rules for ThreadPoolExecutor behavior: when a task is submitted,
  - If the core size has not been reached, a new thread is always created
  - If the core size is reached, the task is queued
  - If the core size is reached and the queue becomes full, a new thread is created until the max size is reached
  - If the max size is reached and the queue is full, the rejected execution policy kicks in
Configuring a ThreadPoolExecutor
- Importance of core size
  - ThreadPoolExecutor changes behavior dramatically around the core size
  - Below the core size, threads are always created even if there are idle threads
  - Above the core size, the preferred behavior shifts to queuing
  - Core size should be big enough to accommodate the anticipated average task throughput demand
Configuring a ThreadPoolExecutor
- Thread pool size and queue size are competing parameters
  - Queuing increases latency but conserves resources
  - A queued task in general consumes fewer resources than an active task
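The submission rules above can be observed directly. A minimal sketch (class name and sizes are made up for illustration) using core size 1, max size 2, and a bounded queue of capacity 1, with tasks that block so each submission exercises the next rule:

```java
import java.util.concurrent.*;

public class PoolDemo {
    // Returns true if the 4th submission was rejected, demonstrating the rules.
    static boolean demo() throws InterruptedException {
        CountDownLatch release = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 60, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(1));
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException e) { }
        };
        boolean rejected = false;
        pool.execute(blocker); // below core size -> first thread created
        pool.execute(blocker); // core size reached -> task queued
        pool.execute(blocker); // queue full -> second thread created (max = 2)
        try {
            pool.execute(blocker); // max reached and queue full -> rejected
        } catch (RejectedExecutionException e) {
            rejected = true; // default policy is AbortPolicy
        }
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return rejected;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("fourth task rejected: " + demo());
    }
}
```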
Closing...
- Power of static analysis
  - Whenever we find an issue, we try to turn it into a static analysis rule
  - FindBugs already has many useful thread-safety rules
  - Intent is the most difficult part of thread-safety analysis: annotations help
  - Continued training helps as well
Thank you!
Questions?
TPE: Cancelling tasks
- Cancelling tasks: more complicated than you think
  - Cancelling tasks is your job
  - Timing out from Future.get() does NOT cancel the task by itself
  - Some TPE methods cancel outstanding tasks for you: invokeAll() with timeout, invokeAny()
  - Cancelling tasks uses interruption: you should write your task to respond to cancellation promptly (i.e. be "interruptible")
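A minimal sketch (not from the slides; names are made up) showing that a timed-out get() leaves the task running, and that cancel(true) is what actually delivers the interrupt the task must respond to:

```java
import java.util.concurrent.*;

public class CancelDemo {
    // Returns true if the task observed the interrupt triggered by cancel(true).
    static boolean demo() throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        CountDownLatch started = new CountDownLatch(1);
        CountDownLatch interrupted = new CountDownLatch(1);
        Future<?> future = executor.submit(() -> {
            started.countDown();
            try {
                Thread.sleep(60_000);    // interruptible blocking call
            } catch (InterruptedException e) {
                interrupted.countDown(); // respond to cancellation promptly
            }
        });
        started.await();
        try {
            future.get(50, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // the task is still running at this point; the timeout alone
            // did nothing to it -- cancel(true) delivers the interrupt
            future.cancel(true);
        }
        boolean sawInterrupt = interrupted.await(5, TimeUnit.SECONDS);
        executor.shutdown();
        return sawInterrupt;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("task saw interrupt: " + demo());
    }
}
```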
TPE & UncaughtExceptionHandler
- UncaughtExceptionHandler doesn't mix with ThreadPoolExecutor
TPE & UncaughtExceptionHandler
- Multi-threaded test with vanilla thread

    class TestWithThreads extends TestCase {
        @Test public void test() throws Exception {
            MyHandler h = new MyHandler();
            Thread th = new Thread(someRunnable);
            th.setUncaughtExceptionHandler(h);
            th.start();
            th.join();
            // check MyHandler for any exception on thread th
        }

        private static class MyHandler implements UncaughtExceptionHandler {
            public void uncaughtException(Thread t, Throwable e) {
                // store the exception
            }
        }
    }
TPE & UncaughtExceptionHandler
- Multi-threaded test with TPE stops working: why?

    class BrokenTestWithExecutor extends TestCase {
        private ExecutorService executor = Executors.newSingleThreadExecutor();

        @Test public void test() throws Exception {
            MyHandler h = new MyHandler();
            Thread.setDefaultUncaughtExceptionHandler(h);
            executor.submit(someRunnable).get();
            // check MyHandler for any exception
        }

        private static class MyHandler implements UncaughtExceptionHandler {
            public void uncaughtException(Thread t, Throwable e) {
                // store the exception
            }
        }
    }
TPE & UncaughtExceptionHandler
- Remember what UncaughtExceptionHandlers are for!
  - UncaughtExceptionHandlers are invoked only if the thread is being terminated due to an uncaught exception
  - Some (not all) TPE methods catch and handle all exceptions
- ThreadPoolExecutor
  - execute(): triggers UncaughtExceptionHandlers
  - submit(): does not trigger them
- ScheduledThreadPoolExecutor: does not trigger them
TPE & UncaughtExceptionHandler
- Simply don't rely on UncaughtExceptionHandlers with TPE
- Using Future and ExecutionException is the right way with TPE
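A small sketch (hypothetical class name) illustrating the point: with submit(), the task's exception never terminates the worker thread, so no UncaughtExceptionHandler fires; instead it is delivered as the cause of the ExecutionException thrown by Future.get():

```java
import java.util.concurrent.*;

public class UehDemo {
    // Shows that submit() captures the task's exception in the Future
    // rather than letting it reach any UncaughtExceptionHandler.
    static String demo() throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<?> future = executor.submit((Runnable) () -> {
            throw new IllegalStateException("boom");
        });
        String result;
        try {
            future.get();
            result = "no exception";
        } catch (ExecutionException e) {
            result = e.getCause().getMessage(); // the original exception
        }
        executor.shutdown();
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("observed: " + demo());
    }
}
```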
TPE & UncaughtExceptionHandler
- Multi-threaded test with TPE: correct

    class CorrectTestWithExecutor extends TestCase {
        private ExecutorService executor = Executors.newSingleThreadExecutor();

        @Test public void test() {
            try {
                executor.submit(someRunnable).get();
            } catch (ExecutionException e) {
                // its cause is the original exception
                Throwable cause = e.getCause();
                // assert failure
            } catch (InterruptedException e2) { ... }
        }
    }
