The document discusses Java 5 concurrency features including locks, conditions, atomic variables, blocking queues, concurrent hash maps, synchronizers such as semaphores, latches, and barriers, and the executor framework. Key points include:
- Locks provide an alternative to synchronized blocks and methods, and allow more flexible locking behavior. ReentrantLock is a common lock implementation.
- Conditions (condition variables) allow threads to wait/signal and are used with locks rather than synchronized monitors.
- Atomic variables ensure thread-safe operations on single variables without locking.
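The lock/condition pairing described above can be sketched as a one-slot mailbox: a `ReentrantLock` guards the shared field and a `Condition` replaces `wait`/`notify`. This is a minimal illustration, not code from the document; the class and method names (`Mailbox`, `put`, `take`) are hypothetical.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// One-slot mailbox guarded by a ReentrantLock, using a Condition
// instead of intrinsic synchronized/wait/notify.
public class Mailbox {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private String message;            // shared state, guarded by lock

    public void put(String m) {
        lock.lock();
        try {
            message = m;
            notEmpty.signal();         // wake one waiting reader
        } finally {
            lock.unlock();             // always release in finally
        }
    }

    public String take() throws InterruptedException {
        lock.lock();
        try {
            while (message == null) {  // loop guards against spurious wakeups
                notEmpty.await();
            }
            String m = message;
            message = null;
            return m;
        } finally {
            lock.unlock();
        }
    }
}
```

Unlike a synchronized block, the lock/unlock pair is explicit, which is what allows the more flexible behavior (try-lock, timed lock, multiple conditions) mentioned above.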
The document discusses concurrency programming in Java. It covers the scope of concurrency including multi-threading, multi-core, and distributed systems. It then discusses key aspects of concurrency programming like shared data/coordination for correctness and performance. It provides examples of thread-safety issues and how to address them using locks, volatile fields, and final fields to safely publish objects between threads.
The document discusses common concurrency problems in Java such as shared mutable state, visibility issues, inconsistent synchronization, and unsafe publication, and provides examples of how to properly implement threading concepts such as locking, waiting and notifying with synchronization, volatile variables, atomic classes, and safe initialization techniques to avoid concurrency bugs. It also cautions against unsafe practices, such as synchronizing on the wrong objects or misusing threading methods, that can lead to deadlocks, race conditions, and other concurrency problems.
This document discusses common concurrency problems in Java and how to address them. It covers issues that can arise from shared mutable data being accessed without proper synchronization between threads, as well as problems related to visibility and atomicity of operations. Specific problems covered include mutable statics, double-checked locking, volatile arrays, and non-atomic operations like incrementing a long. The document provides best practices for locking, wait/notify, thread coordination, and avoiding deadlocks, spin locks, and lock contention.
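One of the problems named above, double-checked locking, is only safe when the instance field is `volatile`. The following is a hedged sketch of the safe idiom, not code from the document; the class name `LazySingleton` is illustrative.

```java
// Safe double-checked locking: without volatile on the field, another
// thread could observe a partially constructed object.
public class LazySingleton {
    private static volatile LazySingleton instance;  // volatile is essential

    private LazySingleton() { }

    public static LazySingleton getInstance() {
        LazySingleton local = instance;              // first (unlocked) check
        if (local == null) {
            synchronized (LazySingleton.class) {
                local = instance;                    // second (locked) check
                if (local == null) {
                    instance = local = new LazySingleton();
                }
            }
        }
        return local;
    }
}
```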
Carol McDonald gave a presentation on Java concurrency utilities introduced in J2SE 5.0. She discussed motivation for improved concurrency tools, common concurrency issues, and key utilities like Executor framework, locks, synchronizers, and concurrent collections. The utilities make concurrent programming easier and improve performance of multithreaded Java applications.
The document discusses Java threads and thread synchronization. It defines a thread as a flow of control and a sequence of executed statements. It explains that a Thread is an object that can be created by extending the Thread class or implementing the Runnable interface. The start() method creates and runs the thread, while run() defines the code for the thread to execute. Threads have a lifecycle and can be in different states such as ready, running, and blocked/waiting. Synchronization is used to coordinate access to shared resources using locks, and wait/notify allows threads to signal and wait for each other.
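The two creation styles and the start()/run() distinction described above can be sketched as follows. This is an illustrative example, not from the document; the class name `HelloThreads` is hypothetical.

```java
// The two thread-creation styles: implementing Runnable (preferred)
// versus extending Thread.
public class HelloThreads {
    public static String runBoth() throws InterruptedException {
        StringBuilder log = new StringBuilder();

        // Style 1: pass a Runnable (here a lambda) to the Thread constructor.
        Thread viaRunnable = new Thread(() -> {
            synchronized (log) { log.append("runnable;"); }
        });

        // Style 2: subclass Thread and override run().
        Thread viaSubclass = new Thread() {
            @Override public void run() {
                synchronized (log) { log.append("subclass;"); }
            }
        };

        viaRunnable.start();   // start() schedules a new thread; calling
        viaSubclass.start();   // run() directly would stay on this thread
        viaRunnable.join();    // wait for both to finish
        viaSubclass.join();
        return log.toString();
    }
}
```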
Java concurrency allows applications to make use of multiple processors and handle asynchronous operations through the use of threads. While concurrency provides benefits like improved performance, it also introduces risks like race conditions and deadlocks if threads access shared resources unsafely. The Java threading APIs provide tools for safely managing concurrent operations through mechanisms like synchronization, locks, and concurrent collections.
This talk was given at ilJUG on the 29th of July 2014 and discusses the new Java 8 StampedLock class. It compares it to other locking mechanisms in Java and shares some insights drawn from a simple benchmark.
Java has a solid Memory Model, and there are a couple of excellent libraries for concurrency. When you start working with threads however, pitfalls start appearing - especially if the program is supposed to be fast and correct. This session shows proven solutions for some typical problems, showing how to view program code from a concurrency perspective: Which threads share which data, and how? How to reduce the impact of locks? How to avoid them altogether - and when is that worth it?
At first glance, writing concurrent programs in Java seems like a straightforward task. But the devil is in the detail. Fortunately, these details are strictly regulated by the Java memory model which, roughly speaking, decides what values a program can observe for a field at any given time. Without respecting the memory model, a Java program might behave erratically and yield bugs that only occur on some hardware platforms. This presentation summarizes the guarantees that are given by Java's memory model and teaches how to properly use volatile and final fields or synchronized code blocks. Instead of discussing the model in terms of memory model formalisms, this presentation builds on easy-to-follow Java code examples.
The document discusses various synchronization primitives in Java including ReentrantLock, Condition, Semaphore, Future, CyclicBarrier, CountDownLatch, and Exchanger. It explains what each one is used for and provides examples of how they work. The key points are that these primitives provide ways to synchronize threads, block and unblock threads under certain conditions, and coordinate work between multiple threads through techniques like waiting for barriers or counting down latches. Underlying them all is the AbstractQueuedSynchronizer which provides a framework for atomically managing shared state and enqueueing/dequeuing threads.
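One of the coordination patterns listed above, counting down a latch, can be sketched briefly: the main thread blocks in `await()` until every worker has called `countDown()`. This is an illustrative example, not from the document; `LatchDemo` and `runWorkers` are hypothetical names.

```java
import java.util.concurrent.CountDownLatch;

// Coordinate n worker threads with a CountDownLatch.
public class LatchDemo {
    public static int runWorkers(int n) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(n);
        int[] sum = new int[1];                 // one-element array as a box
        Object lock = new Object();
        for (int i = 0; i < n; i++) {
            new Thread(() -> {
                synchronized (lock) { sum[0]++; }
                done.countDown();               // signal this worker finished
            }).start();
        }
        done.await();                           // block until count reaches 0
        synchronized (lock) { return sum[0]; }
    }
}
```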
Threads are often called lightweight processes because the overhead of switching between threads is lower than between processes.
Synchronization allows only one thread at a time to perform an operation on an object.
Synchronization prevents data corruption.
Thread synchronization: synchronized methods define critical sections.
You will learn about deadlock conditions in threads and synchronization of threads.
Java Concurrency, Memory Model, and Trends - Carol McDonald
This document discusses concurrency in Java. It covers benefits and risks of threads, goals of concurrency utilities in Java, examples of executor services and thread pools, and best practices for thread safety including using immutable objects, atomic variables, and concurrent collections.
Threads, Events, Signals/Slots provides an overview of multithreaded programming in Qt, including:
1) Qt supports threading through the QThread class and provides thread-safe APIs like QMutex. GUI operations are only allowed on the main thread.
2) Qt uses an event-based model where events are processed in an event loop. Events can be used for intra-thread and inter-thread communication.
3) Signals and slots provide a type-safe mechanism for loose coupling between objects and allow one-to-many and many-to-one communication. They are mapped to events and processed in the receiving object's thread.
This document provides an introduction to concurrency in Java programming. It discusses modifying a word counting program to run in parallel using threads. It covers thread safety, critical sections, synchronized blocks and methods, lock objects, and other concurrency concepts in Java like volatile fields and deadlocks. The document uses examples to illustrate how to design thread-safe classes and properly synchronize access to shared resources between multiple threads.
This document discusses inter-thread communication methods like wait() and notify() that allow threads to synchronize access to shared resources. It describes the producer-consumer problem that can occur when threads access a shared buffer without synchronization. It provides examples of incorrect and correct implementations of the producer-consumer pattern using wait(), notify(), and synchronization to allow a producer thread to add items to a buffer while a consumer thread removes items.
Inter thread communication & runnable interface - keval_thummar
Inter-thread communication allows threads to pause execution in critical sections and allow other threads to enter those sections. It uses the wait(), notify(), and notifyAll() methods of the Object class. wait() pauses a thread until another calls notify() or notifyAll(), notify() wakes one waiting thread, and notifyAll() wakes all waiting threads. The Runnable interface provides a common protocol for objects to execute code while active. To create a thread using Runnable, a class implements Runnable and defines a run() method, a Thread object is created passing the Runnable, and start() is called to execute run().
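The correct producer-consumer pattern described above, using intrinsic locking with `wait()`/`notifyAll()`, can be sketched as a bounded buffer. This is an illustrative example, not from the document; the class name `SharedBuffer` and its capacity are hypothetical choices.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Bounded producer-consumer buffer using synchronized methods
// with wait()/notifyAll().
public class SharedBuffer {
    private final Queue<Integer> items = new ArrayDeque<>();
    private final int capacity;

    public SharedBuffer(int capacity) { this.capacity = capacity; }

    public synchronized void put(int item) throws InterruptedException {
        while (items.size() == capacity) {
            wait();                 // buffer full: release the lock and wait
        }
        items.add(item);
        notifyAll();                // wake consumers waiting for data
    }

    public synchronized int take() throws InterruptedException {
        while (items.isEmpty()) {
            wait();                 // buffer empty: release the lock and wait
        }
        int item = items.remove();
        notifyAll();                // wake producers waiting for space
        return item;
    }
}
```

The `while` loops (rather than `if`) are essential: a woken thread must re-check the condition, since another thread may have consumed the item first.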
This session discusses the basic building blocks of Concurrent Programming in Java, which include:
high-level concurrency objects, lock objects, executors, executor interfaces, thread pools, fork/join, concurrent collections, atomic variables, concurrent random numbers.
This document provides an overview of threads in Java. It discusses creating threads by extending Thread or implementing Runnable, synchronization using the synchronized keyword and locks, and wait() and notify() to allow threads to wait for events. It provides examples of each concept and recommends using a thread pool rather than creating a new thread per task to prevent scaling issues. The document concludes with an exercise to implement multithreaded printing of numbers using a thread pool.
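The thread-pool recommendation above can be sketched briefly: submit many small tasks to a fixed pool instead of spawning one thread per task. This is an illustrative example, not from the document; `PoolDemo` and the pool size of 4 are hypothetical.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Run many small tasks on a fixed-size thread pool.
public class PoolDemo {
    public static int runTasks(int taskCount) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < taskCount; i++) {
            pool.execute(() -> completed.incrementAndGet());  // tiny task
        }
        pool.shutdown();                              // accept no new tasks
        pool.awaitTermination(10, TimeUnit.SECONDS);  // drain the queue
        return completed.get();
    }
}
```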
The document discusses key concepts related to threads and concurrency in Java. It defines processes, threads, and the Java memory model. It then covers various concurrency utilities in Java like synchronized blocks, volatile fields, atomic classes, thread pools, blocking queues, and locking mechanisms like ReentrantLock. The last part discusses high-level concurrency constructs like semaphores, latches, barriers, and phaser.
This document discusses Java bytecode manipulation techniques using unsafe, instrumentation, and Java agents. It covers areas where bytecode manipulation is commonly used like mocking, persistence, and security. It analyzes techniques for defining and transforming classes at runtime and discusses challenges like injecting state and working with modules. The document also proposes ideas to standardize testing support and provide a unified dynamic code generation concept in Java.
The document provides an introduction to Java concurrency, explaining how to create threads in Java by extending the Thread class or implementing Runnable, how to handle race conditions through synchronization, and some basic concurrency tools like wait, notify, sleep, join, and volatile variables. It discusses advantages and drawbacks of concurrency as well as examples of where it is commonly used.
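Two of the basic tools listed above, volatile variables and join, combine naturally in a stop-flag pattern. The sketch below is illustrative, not from the document; `StoppableWorker` is a hypothetical name.

```java
// A volatile stop flag: without volatile, the worker loop might never
// observe the write made by the stopping thread.
public class StoppableWorker implements Runnable {
    private volatile boolean running = true;   // visible across threads
    private long iterations;

    @Override public void run() {
        while (running) {
            iterations++;                      // simulated work
        }
    }

    public void stop() { running = false; }

    public static boolean demo() throws InterruptedException {
        StoppableWorker worker = new StoppableWorker();
        Thread t = new Thread(worker);
        t.start();
        Thread.sleep(50);                      // let the worker spin briefly
        worker.stop();                         // request shutdown
        t.join(1000);                          // join() waits for exit
        return !t.isAlive();
    }
}
```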
This document summarizes key aspects of Java servlets including:
- The servlet lifecycle which includes initialization, service, and destruction phases.
- How servlets handle HTTP requests through methods like doGet() and doPost().
- How servlets can communicate with other components using various protocols beyond just HTTP.
- How servlet contexts allow servlets to share information and resources.
- How servlets can create, read, and modify cookies to maintain state across requests.
This document discusses non-blocking I/O and the traditional blocking I/O approach for building servers. The traditional approach uses one thread per connection, blocking I/O, and a simple programming model. However, this can cause issues like shared state between clients, synchronization problems, inability to prioritize clients, difficulty scaling to thousands of connections, and challenges with persistent connections. The document explores using non-blocking I/O with Netty as an alternative.
The document discusses several programming languages and libraries that provide concurrency constructs for parallel programming. It describes features for concurrency in Ada 95, Java, and C/C++ libraries including pthreads. Key features covered include threads, mutual exclusion locks, condition variables, and examples of implementing a bounded buffer for inter-thread communication.
Spinning locks use busy waiting to synchronize tasks, while blocking locks allow tasks to block instead of spin. The document discusses different types of locks including spin locks, mutex locks, and owner locks. It provides examples of how these locks can be implemented and used to synchronize access to shared resources.
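The spinning-lock idea described above can be sketched with an `AtomicBoolean`: a thread busy-waits instead of blocking, which is only sensible for very short critical sections. This is an illustrative sketch, not an implementation from the document; the class name `SpinLock` is hypothetical.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A minimal test-and-set spin lock.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Spin until we atomically flip false -> true.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();   // spin-wait hint to the runtime (Java 9+)
        }
    }

    public void unlock() {
        locked.set(false);
    }
}
```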
The document discusses Java concurrency concepts including locks, threads, atomics, and thread pools. It provides examples of using ReentrantLock for locking, AtomicLong for atomic counters, and ThreadPoolExecutor for managing threads. The document also mentions different Java concurrency implementations and creating a lock factory to choose the implementation.
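The AtomicLong counter mentioned above can be sketched as follows: increments are lock-free and safe across threads, unlike a plain `long++`. This is an illustrative example, not from the document; `HitCounter` and its methods are hypothetical names.

```java
import java.util.concurrent.atomic.AtomicLong;

// A thread-safe counter built on AtomicLong.
public class HitCounter {
    private final AtomicLong hits = new AtomicLong();

    public long record() {
        return hits.incrementAndGet();   // atomic read-modify-write
    }

    public long total() {
        return hits.get();
    }

    // Demonstrate that concurrent increments are not lost.
    public static long countConcurrently(int threads, int perThread)
            throws InterruptedException {
        HitCounter c = new HitCounter();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) c.record();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return c.total();
    }
}
```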
This presentation describes key concepts in Java. I call it The Java Quicky.
This is part of a series of presentations to cover the Java programming language and its new offerings and versions in depth.
This is my attempt to compose a brief and cursory introduction to concepts in the Java programming language. I plan to extend and enhance it over time.
This document discusses concurrency and concurrent programming in Java. It introduces the built-in concurrency primitives like wait(), notify(), synchronized, and volatile. It then discusses higher-level concurrency utilities and data structures introduced in JDK 5.0 like Executors, ExecutorService, ThreadPools, Future, Callable, ConcurrentHashMap, CopyOnWriteArrayList that provide safer and more usable concurrency constructs. It also briefly covers topics like Java Memory Model, memory barriers, and happens-before ordering.
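One of the JDK 5.0 utilities named above, ConcurrentHashMap, supports safe per-key updates from many threads without external locking. The word-counter sketch below is illustrative, not from the document; `WordCounts` is a hypothetical name.

```java
import java.util.concurrent.ConcurrentHashMap;

// A word counter safe to update from many threads concurrently.
public class WordCounts {
    private final ConcurrentHashMap<String, Integer> counts =
            new ConcurrentHashMap<>();

    public void add(String word) {
        counts.merge(word, 1, Integer::sum);   // atomic per-key update
    }

    public int count(String word) {
        return counts.getOrDefault(word, 0);
    }
}
```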
This document discusses common concurrency problems in Java like shared mutable state, visibility issues, lack of atomicity, and unsafe publication and provides examples of how to detect and fix issues with locking, volatile variables, atomic classes, thread coordination mechanisms like wait/notify and condition variables, avoiding deadlocks, reducing lock contention, and following safe publication patterns. It focuses on topics like locking, visibility, atomicity, safe publication, threads, and performance around shared data, coordination, and thread contention.
Core Java Programming Language (JSE): Chapter XII - Threads - WebStackAcademy
What are Java Threads?
A thread is:
- A facility that allows multiple activities within a single process
- Often referred to as a lightweight process
- A series of executed statements
- A nested sequence of method calls
Each thread has its own program counter, stack, and local variables, but shares memory, files, and per-process state.
How to create a thread:
There are two ways to create a thread:
1. By extending the Thread class
2. By implementing the Runnable interface
Thread class:
The Thread class provides constructors and methods to create and perform operations on a thread. Thread extends Object and implements the Runnable interface.
JAVA CERTIFICATION EXAM OBJECTIVES
COVERED IN THIS CHAPTER:
4.1 Write code to define, instantiate, and start new threads using both java.lang.Thread and java.lang.Runnable.
4.2 Recognize the states in which a thread can exist, and identify ways in which a thread can transition from one state to another.
4.3 Given a scenario, write code that makes appropriate use of object locking to protect static or instance variables from concurrent access problems.
4.4 Given a scenario, write code that makes appropriate use of wait, notify, or notifyAll.
The document provides an overview of concurrency in C# including threads, thread pools, tasks, locks, and thread-safe data structures. It discusses the different types of threads and how to run code asynchronously using threads, thread pools, and tasks. It covers potential issues like race conditions and deadlocks and how to avoid them using locks, monitors, and other synchronization primitives. Finally, it introduces some thread-safe data structures like ConcurrentDictionary and ConcurrentBag.
Monitors and Blocking Synchronization: The Art of Multiprocessor Programming... - Subhajit Sahu
Highlighted notes of:
Chapter 8: Monitors and Blocking Synchronization
Book:
The Art of Multiprocessor Programming
Authors:
Maurice Herlihy
Nir Shavit
Maurice Herlihy has an A.B. in Mathematics from Harvard University, and a Ph.D. in Computer Science from M.I.T. He has served on the faculty of Carnegie Mellon University and the staff of DEC Cambridge Research Lab. He is the recipient of the 2003 Dijkstra Prize in Distributed Computing, the 2004 Gödel Prize in theoretical computer science, the 2008 ISCA influential paper award, the 2012 Edsger W. Dijkstra Prize, and the 2013 Wallace McDowell award. He received a 2012 Fulbright Distinguished Chair in the Natural Sciences and Engineering Lecturing Fellowship, and he is a fellow of the ACM, a fellow of the National Academy of Inventors, the National Academy of Engineering, and the National Academy of Arts and Sciences.
Nir Shavit received B.Sc. and M.Sc. degrees in Computer Science from the Technion - Israel Institute of Technology in 1984 and 1986, and a Ph.D. in Computer Science from the Hebrew University of Jerusalem in 1990. Shavit is a co-author of the book The Art of Multiprocessor Programming. He is a recipient of the 2004 Gödel Prize in theoretical computer science for his work on applying tools from algebraic topology to model shared memory computability and of the 2012 Dijkstra Prize in Distributed Computing for the introduction of Software Transactional Memory. He is a past program chair of the ACM Symposium on Principles of Distributed Computing (PODC) and the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA). His current research covers techniques for designing scalable software for multiprocessors, in particular concurrent data structures for multicore machines.
This document provides information about Java threads through a series of slides. It defines a thread as a single sequential flow of control within a program. It discusses how to define and launch threads by extending the Thread class or implementing the Runnable interface. It outlines the life cycle of a thread and its possible states such as new, runnable, blocked, waiting and terminated. It also covers how to interrupt threads and related methods.
This document discusses Java threads and synchronization. It begins with an introduction to threads, defining a thread as a single sequential flow of control within a program. It then covers how to define and launch threads in Java by extending the Thread class or implementing the Runnable interface. The life cycle of a Java thread is explained, including the various thread states. Methods for interrupting threads and thread synchronization using synchronized methods and statements are discussed. Finally, Java's monitor model for thread synchronization is described.
The Executor framework in Java provides a way to asynchronously execute tasks (implemented as Runnable or Callable interfaces) by submitting them to an ExecutorService which manages a pool of threads. This allows tasks to be executed concurrently without needing to explicitly manage threads. The framework provides factory methods like newFixedThreadPool to create ExecutorServices with different policies for executing tasks using threads from the pool. ExecutorServices also allow obtaining Future objects to asynchronously retrieve results from Callable tasks.
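The Callable/Future flow described above can be sketched briefly: submit a task that returns a value and retrieve the result with `Future.get()`. This is an illustrative example, not from the document; `FutureDemo` and `squareAsync` are hypothetical names.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Submit a value-returning Callable and wait for its Future.
public class FutureDemo {
    public static int squareAsync(int n) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Callable<Integer> task = () -> n * n;   // a task with a result
            Future<Integer> result = pool.submit(task);
            return result.get();                    // blocks until done
        } finally {
            pool.shutdown();
        }
    }
}
```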
This document discusses concurrent programming and multithreaded programming in Java. It covers key topics such as creating and controlling threads, thread safety and synchronization, and using bounded queues to allow cooperation between producer and consumer threads.
Threads allow multiple tasks to run concurrently by sharing memory and resources within a process. Context switching between threads is typically faster than between processes. Threads can be created and started in different ways and use synchronization techniques like locks, monitors, mutexes, semaphores, and wait handles to coordinate access to resources. The thread pool optimizes thread usage by maintaining pooled threads that can be assigned tasks to run asynchronously. Exceptions on worker threads must be handled manually.
The document discusses threads and threading in Java. It covers:
- Threads allow multitasking by running multiple threads concurrently within a program.
- Threads can be created by extending the Thread class or implementing the Runnable interface.
- Threads have different states like new, ready, running, blocked, and dead. Methods like start(), sleep(), join(), yield(), etc. change a thread's state.
- Synchronization is needed to prevent data corruption when multiple threads access shared resources concurrently.
Concurrent Programming in Java discusses various approaches to concurrency in Java including threads, executors, fork/join, parallel streams, CompletableFuture, and RXJava. It explains challenges with threads like interference, deadlocks, and performance overhead. It then covers enhancements in Java 5+ including executors and concurrent collections. Later sections discuss functional-style concurrency with CompletableFuture and RXJava, which allow composing asynchronous operations without blocking.
Fault tolerance in general is a challenging topic. Yet we need fault toleranct designs more badly than ever in order to provide robust, highly available systems - especially in times of scale out systems becoming more and more popular.
Unfortunately, most developers do not care too much about a fault tolerant design, either because they are scared by the complexity of the realm or because they do not care enough. One of the problems is that a lack of fault tolerant design does not hurt a lot in development or in QA, but it hurts a lot in production - as Michael Nygard said: "It's all about production!" (at least figuratively).
In this presentation I do *not* try to give a general introduction to fault tolerant design. Instead I pick a few generic case studies that demonstrate the results of missing fault tolerant design, try to sensitize a bit about the production relevance of fault tolerant design and then go along with a few selected patterns. I picked a few patterns which are surprisingly easy to implement and help to mitigate the problems of the former case studies.
This way I try to show two things:
1. A piece of architecture or design as a pattern is not necessarily hard to implement. Sometimes the code is written quicker than it takes to explain the pattern beforehand.
2. Even if fault tolerant design as a general topic might be hard, some parts of it can be implemented very easily and it's more than worth the coding effort if you look how much better your system behaves in production just from adding those few lines of code.
The document discusses multithreading in Java, including the evolution of threading support across Java releases and examples of implementing multithreading using Threads, ExecutorService, and NIO channels. It also provides examples of how to make operations thread-safe using locks and atomic variables when accessing shared resources from multiple threads. References are included for further reading on NIO-based servers and asynchronous channel APIs introduced in Java 7.
1. Java 5 Concurrency
1.1 Locks
Before Java 5, concurrency was achieved using synchronized locks and the
wait/notify idiom. Synchronization is a locking mechanism where a block of
code or a method is protected by a software lock. Any thread that wants to
execute this block of code must first acquire the lock. The lock is released
once the thread exits the synchronized block or method. Acquiring and
releasing the lock is handled automatically, relieving the programmer of lock
book-keeping. However, there are some drawbacks to synchronization, as you
will see below.
The wait/notify idiom allows a thread to wait for a signal from another
thread. A wait can be timed or can be interrupted (using
Thread.interrupt()). A waiting thread is signaled using notify.
1.1.1 Drawbacks of Synchronization
No Back-off: Once a thread tries to enter a synchronized block or
method, it has to wait until the lock is available. It cannot back
off to execute other instructions if the lock is not available or
is taking a very long time to acquire.
No Read-Only Access: Multiple threads cannot acquire the lock even if
only read-only access is required.
Compile Time: Code synchronization is a compile-time decision.
Synchronization cannot be turned off because of run-time
conditions; to enable this, a lot of code duplication is
required.
No Metadata: Lock metadata, such as the number of threads waiting for
the lock or the average time to acquire it, is not available to a
Java program.
1.1.2 Lock Interface
As of Java 5, Lock interface implementations can be used instead of
synchronization.
When a thread acquires a lock object, memory synchronization with the cache
occurs. This behavior is similar to entering a synchronized block or method.
The Lock interface has methods to lock, to lock interruptibly, and to try
the lock:
lockInterruptibly(): This method acquires the lock unless the thread is
interrupted by another thread. On calling this method, if the lock is
available it is acquired. If the lock is not available, the thread becomes
dormant and waits for the lock to become available. If some other thread
calls interrupt on this thread, InterruptedException is thrown.
tryLock(): This method immediately acquires the lock and returns true if the
lock is available. If the lock is not available, it returns false without
blocking. A timed variant waits up to a given timeout for the lock.
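The back-off behavior that plain synchronization lacks can be sketched with the timed tryLock variant; the class and method names below are illustrative, not from the original text:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private final Lock lock = new ReentrantLock();
    private int counter;

    // Attempts the increment but backs off if the lock cannot be
    // acquired within the timeout, instead of blocking indefinitely.
    public boolean tryIncrement() throws InterruptedException {
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                counter++;
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false; // lock not available: caller can do other work
    }

    public static void main(String[] args) throws InterruptedException {
        TryLockDemo d = new TryLockDemo();
        System.out.println(d.tryIncrement()); // prints true (uncontended)
    }
}
```

When the lock is contended for longer than the timeout, tryIncrement() returns false and the caller can execute other instructions, which is exactly the back-off that a synchronized block cannot offer.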
1.1.3 Lock Implementation
1.1.3.1 ReentrantLock
ReentrantLock is an implementation of the Lock interface. It allows a thread
to re-enter code that is protected by a lock it already holds.
It has additional methods that return the state of the lock and other meta
information.
A ReentrantLock can be created with a fairness parameter; the lock is then
acquired by threads in arrival order.
1.1.3.2 ReentrantReadWriteLock
This is an implementation of ReadWriteLock. It holds a pair of associated
locks, one for read-only operations and the other for write operations,
obtained via readLock() and writeLock(). The read lock can be shared by
multiple readers. The write lock is exclusive, i.e., it can be granted to
only one writer thread, and only when no reader thread holds the read lock.
A reader thread is one that performs read operations; a writer thread
performs write operations.
When the fairness parameter is true, the locks are granted in thread arrival
order.
Lock Downgrading
Lock downgrading is allowed, i.e., a thread that holds the write lock can
acquire the read lock and then release the write lock:
ReentrantReadWriteLock l = new ReentrantReadWriteLock();
l.writeLock().lock();
l.readLock().lock();
l.writeLock().unlock();
Lock Upgrading
Lock upgrading is not allowed, i.e., a thread that holds the read lock cannot
acquire the write lock without first releasing the read lock:
ReentrantReadWriteLock l = new ReentrantReadWriteLock();
l.readLock().lock();
//process..
l.readLock().unlock(); // first unlock, then acquire write lock
l.writeLock().lock();
Concurrency Improvement
When there is a large number of reader threads and a small number of writer
threads, a ReadWriteLock will improve concurrency over an exclusive lock.
1.1.4 Typical Lock Usage
public class LockedMap {
    private final Lock l = new ReentrantLock();
    private final Map<Object, Object> myMap = new HashMap<Object, Object>();

    public Object get(Object key) {
        l.lock();
        try {
            return myMap.get(key);
        } finally {
            l.unlock();
        }
    }

    public void put(Object key, Object val) {
        l.lock();
        try {
            myMap.put(key, val);
        } finally {
            l.unlock();
        }
    }
}
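Where reads dominate, the same map could be guarded with the ReentrantReadWriteLock described in section 1.1.3.2 instead of an exclusive lock; a minimal sketch (the class name is illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteLockedMap {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private final Map<Object, Object> myMap = new HashMap<Object, Object>();

    // Many reader threads may hold the read lock at the same time.
    public Object get(Object key) {
        rw.readLock().lock();
        try {
            return myMap.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    // The write lock is exclusive: no readers or other writers while held.
    public void put(Object key, Object val) {
        rw.writeLock().lock();
        try {
            myMap.put(key, val);
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```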
1.2 Condition
The Condition interface factors the object monitor methods wait, notify, and
notifyAll out into separate objects. Condition objects are intrinsically
bound to a lock and are obtained by calling newCondition() on a Lock
instance.
Where Lock replaces synchronized methods and blocks, Condition replaces the
object monitor methods.
Conditions are also called condition variables or condition queues.
Multiple condition variables can be created on the same lock object, and
different sets of threads can wait on different condition variables. A
classic usage is producers and consumers of a bounded buffer.
public class BoundedBuffer {
    final Object[] buffer = new Object[10];
    final Lock l = new ReentrantLock();
    final Condition producer = l.newCondition();
    final Condition consumer = l.newCondition();
    int bufferCount, putIdx, getIdx;

    public void put(Object x) throws InterruptedException {
        l.lock();
        try {
            while (bufferCount == buffer.length)
                producer.await();
            buffer[putIdx++] = x;
            if (putIdx == buffer.length)
                putIdx = 0;
            ++bufferCount;
            consumer.signal();
        } finally {
            l.unlock();
        }
    }

    public Object get() throws InterruptedException {
        l.lock();
        try {
            while (bufferCount == 0)
                consumer.await();
            Object x = buffer[getIdx++];
            if (getIdx == buffer.length)
                getIdx = 0;
            --bufferCount;
            producer.signal();
            return x;
        } finally {
            l.unlock();
        }
    }
}
awaitUninterruptibly()
This method on a condition variable causes the thread to wait until a signal
is issued on that variable, ignoring interrupts while waiting.
IllegalMonitorStateException
A thread calling methods on a condition variable must hold the corresponding
lock. If it does not, IllegalMonitorStateException is thrown.
1.3 Atomic Variables
Atomic variables are used for lock-free, thread-safe programming on single
variables.
As is the case with volatile variables, atomic variables are never cached
locally. They are always synced with main memory.
compareAndSet
Atomic variables use the compare-and-swap (CAS) primitive of processors. CAS
has three operands: a memory location (V), the expected old value of the
memory location (A), and a new value for the memory location (B). If the
current value of the memory location matches the expected old value (A), the
new value (B) is written to the memory location (V) and true is returned. If
the current value differs from the expected old value, memory is not updated
and false is returned.
Code logic can retry this operation if false is returned.
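The retry pattern can be sketched with java.util.concurrent.atomic.AtomicInteger, whose compareAndSet() exposes CAS directly; the class name below is illustrative:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasIncrement {
    private final AtomicInteger value = new AtomicInteger(0);

    // Classic CAS retry loop: read the current value, compute the new
    // one, and retry if another thread changed it in the meantime.
    public int increment() {
        while (true) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next)) {
                return next;
            }
            // CAS failed: another thread won the race; loop and retry.
        }
    }
}
```

The loop never blocks; under contention it simply retries until its CAS succeeds, which is the lock-free behavior the section describes.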
Below code shows the CAS algorithm. However, the actual implementation is in
hardware for processors that support CAS. For processors that do not support
CAS, locking as shown below is used to simulate it.
public class SimulatedCAS {
    private int value;

    public synchronized int getValue() { return value; }

    public synchronized boolean compareAndSet(int expectedValue, int newValue) {
        if (value == expectedValue) {
            value = newValue;
            return true;
        }
        return false;
    }
}
1.4 Data Structures
1.4.1 Blocking Queue
It is a queue data structure with additional behavior: consumers of the
queue wait/block when the queue is empty, and producers wait/block when the
queue is full.
Queue implementations can guarantee fairness, wherein the longest-waiting
consumer/producer gets the first chance to access the queue.
Below code depicts a producer and consumers using a blocking queue.
public class Producer implements Runnable {
    private final BlockingQueue<Object> q;

    public Producer(BlockingQueue<Object> q) {
        this.q = q;
    }

    @Override
    public void run() {
        try {
            while (true) {
                q.put(produce()); // blocks while the queue is full
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private Object produce() {
        return new Object();
    }
}

public class Consumer implements Runnable {
    private final BlockingQueue<Object> q;

    public Consumer(BlockingQueue<Object> q) {
        this.q = q;
    }

    @Override
    public void run() {
        try {
            while (true) {
                consume(q.take()); // blocks while the queue is empty
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void consume(Object item) {
        // process the item
    }
}

public class Setup {
    public static void main(String[] args) {
        BlockingQueue<Object> q = new ArrayBlockingQueue<Object>(10, true); // capacity 10, fair
        new Thread(new Producer(q)).start();
        new Thread(new Consumer(q)).start();
        new Thread(new Consumer(q)).start();
    }
}
1.4.2 ConcurrentHashMap
ConcurrentHashMap is a thread-safe hash map, but it does not block all get
and put operations as a synchronized version of HashMap does. It allows full
concurrency of gets and adjustable expected concurrency of puts.
ConcurrentHashMap internally divides its storage into bins; the entries in a
bin are connected by a linked list.
Nulls are not allowed as keys or values.
A get operation generally does not entail locking. However, if the value
read is null, the bin (segment) is first locked and the value is fetched
again; a value can appear null because of compiler reordering of
instructions.
Put operations are performed by locking that particular bin (segment).
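Beyond thread-safe gets and puts, ConcurrentHashMap also offers atomic check-then-act operations such as putIfAbsent(); a brief sketch (the demo class is illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> map = new ConcurrentHashMap<String, Integer>();

        // putIfAbsent is atomic: only the first store for a key wins,
        // and a racing caller gets back the existing value instead.
        Integer prev1 = map.putIfAbsent("a", 1); // null: key was absent
        Integer prev2 = map.putIfAbsent("a", 2); // 1: existing mapping kept

        System.out.println(prev1); // null
        System.out.println(prev2); // 1
        System.out.println(map.get("a")); // 1
    }
}
```

With a plain HashMap, the same check-then-put would need an external lock to be safe; here the map performs it atomically.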
1.5 Synchronizers
Synchronizers control the flow of execution in one or more threads.
1.5.1 Semaphore
A counting semaphore is used to restrict the number of threads that can
access a physical or logical resource. A semaphore maintains a set of
permits. Each call to acquire() consumes a permit, blocking if no permit is
available. Each call to release() returns a permit and signals a waiting
acquirer.
Usage:
A library has N seats and thus allows only N members at one time to use it.
If all seats are occupied, then arriving members wait for a seat to become
vacant. Design a model for the library.
package com.concur.semaphore;

import java.util.concurrent.Semaphore;

public class Library {
    private final Semaphore s = new Semaphore(50, true); // N = 50 seats, fair

    public void enter() throws InterruptedException {
        s.acquire();
    }

    public void exit() {
        s.release(); // release() never blocks and does not throw
    }

    public void borrowBooks(int id) {
        //implementation
    }

    public void returnBook(int id) {
        //implementation
    }

    public static void main(String[] args) {
        Library l = new Library();
        try {
            l.enter();
            l.borrowBooks(1234);
            l.exit();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
1.5.2 Mutex
A mutex is a counting semaphore with only one permit. Mutexes have a lot in
common with locks. The difference is that a semaphore has no notion of
ownership, so a thread other than the one holding the permit can call
release().
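A minimal sketch of a mutex built from a single-permit Semaphore (the class name is illustrative):

```java
import java.util.concurrent.Semaphore;

public class MutexDemo {
    // A binary semaphore: a single permit guards the critical section.
    private final Semaphore mutex = new Semaphore(1);
    private int shared;

    public void update() throws InterruptedException {
        mutex.acquire();
        try {
            shared++;
        } finally {
            // Unlike a Lock, any thread could call release() here,
            // not just the one that acquired the permit.
            mutex.release();
        }
    }

    public int getShared() {
        return shared;
    }
}
```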
1.5.3 Cyclic Barrier
With a cyclic barrier, each thread comes to a barrier point and waits there
until all threads have reached the barrier. Once all threads reach the
barrier, they are released for further processing. Optionally, a barrier
action can be run before the threads are released.
package com.concur.cyclic;

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class Barrier {
    final int num_threads;
    final CyclicBarrier cb;
    volatile boolean complete = false; // must not be final, or the loop below could never exit

    public Barrier(int n) {
        num_threads = n;
        cb = new CyclicBarrier(num_threads, new Runnable() {
            @Override
            public void run() {
                System.out.println("All threads reached barrier");
                //check if complete and set complete
            }
        });
    }

    public void process() throws InterruptedException, BrokenBarrierException {
        while (!complete) {
            //process
            cb.await();
            //exits if process completed, else loops
        }
    }
}
1.5.4 Countdown Latch
A countdown latch is similar to a cyclic barrier but differs in the way the
threads are released. In a cyclic barrier, threads are released
automatically when all threads reach the barrier. In a countdown latch
initialized with N, waiting threads are released when countDown() has been
called N times. Any call to await() blocks while the count is non-zero; once
the count reaches 0, await() returns immediately.
Countdown latches cannot be reused: once the count reaches 0, all calls to
await() return immediately.
package com.concur.countdown;

import java.util.concurrent.CountDownLatch;

public class Latch {
    private class Worker implements Runnable {
        final CountDownLatch start, done;

        public Worker(CountDownLatch start, CountDownLatch done) {
            this.start = start;
            this.done = done;
        }

        @Override
        public void run() {
            try {
                start.await(); // wait for the starting gun
                // do process
                done.countDown(); // report completion
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) {
        Latch l = new Latch();
        CountDownLatch start = new CountDownLatch(1);
        CountDownLatch done = new CountDownLatch(10);
        for (int i = 0; i < 10; i++) {
            new Thread(l.new Worker(start, done)).start();
        }
        try {
            //do something
            start.countDown(); // release all workers at once
            //do something
            done.await(); // wait for all workers to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
1.6 Executor Framework
The Executor framework has an API to create thread pools and submit tasks to
be executed by them.
The Executor interface has only one method, execute(), which takes a
Runnable object.
Executor thread pools can be created by calling factory methods:
Executors.newCachedThreadPool(): if an idle thread is available, it is used;
otherwise a new thread is created. Threads not used for 60 seconds are
removed from the cache.
Executors.newFixedThreadPool(n): n threads are created and added to the
pool. Tasks are stored in an unbounded queue, and pool threads pick up tasks
from the queue. If a thread terminates due to failure, a new thread is
created and added to the pool.
Executors.newSingleThreadExecutor(): a pool of a single thread.
Executor Usage:
package com.concur.executor;

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

public class WebServer {
    Executor pool = Executors.newFixedThreadPool(50);

    public static void main(String[] args) throws IOException {
        WebServer ws = new WebServer();
        ServerSocket ssocket = new ServerSocket(80);
        while (true) {
            final Socket soc = ssocket.accept(); // final so the anonymous class can capture it
            Runnable r = new Runnable() {
                @Override
                public void run() {
                    handle(soc);
                }
            };
            ws.pool.execute(r); // a pool thread serves the connection
        }
    }

    static void handle(Socket soc) {
        //serve the request
    }
}
1.7 Future
A Future represents a task and serves as a wrapper for it. The task may not
have started execution, may be executing, or may have completed. The result
of the task can be obtained by calling future.get().
future.get() returns immediately if the task has completed; otherwise it
blocks until the task completes.
FutureTask is an implementation of the Future interface. It also implements
the Runnable interface, which allows the task to be submitted via
executor.execute(Runnable r).
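The dual Runnable/Future nature of FutureTask can be shown in a few lines. A minimal sketch (the class name FutureTaskDemo is illustrative): the task is submitted through execute(), and get() blocks until the wrapped Callable has produced its result.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;

public class FutureTaskDemo {
    public static void main(String[] args) throws Exception {
        // wrap a Callable in a FutureTask; FutureTask is also a Runnable
        FutureTask<Integer> task = new FutureTask<>(() -> 6 * 7);
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.execute(task);                           // runs the wrapped Callable
        System.out.println("result: " + task.get()); // blocks until done
        pool.shutdown();
    }
}
```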
The code snippet below shows how the FutureTask class can be used to
implement a thread-safe cache.
package com.concur.cache;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;

public class SimpleCache<K, V> {
    private ConcurrentMap<K, FutureTask<V>> cache =
            new ConcurrentHashMap<K, FutureTask<V>>();
    Executor pool = Executors.newFixedThreadPool(10);

    public V get(final K key) throws InterruptedException, ExecutionException {
        FutureTask<V> val = cache.get(key);
        if (val == null) {
            Callable<V> c = new Callable<V>() {
                @Override
                public V call() {
                    System.out.println("Cache Miss");
                    return (V) new Integer(key.hashCode());
                }
            };
            val = new FutureTask<V>(c);
            FutureTask<V> oldVal = cache.putIfAbsent(key, val);
            if (oldVal == null) {
                // this thread won the race: execute the future task to
                // compute the cache value associated with the key
                pool.execute(val);
            } else {
                // another thread won the race to store its future task in
                // the concurrent map, so use that task instead
                val = oldVal;
            }
        } else {
            System.out.println("Cache Hit");
        }
        return val.get();
    }

    public static void main(String[] args) {
        try {
            SimpleCache<String, Integer> sc = new SimpleCache<String, Integer>();
            System.out.println(sc.get("Hello"));
            System.out.println(sc.get("Hello"));
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }
}