This document provides an overview of parallel programming concepts like parallelism, threads, and concurrency. It discusses the importance of parallel programming given increasing numbers of processor cores. Key concepts covered include parallelism versus multi-processing, tasks and threads, the Java thread classes and methods, threading in Swing applications, and the new Java ForkJoin framework for parallel divide-and-conquer tasks. Examples are provided of using threads, Runnables, and SwingWorkers in Java programs.
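The ForkJoin framework mentioned above splits a task recursively until subtasks are small enough to compute directly. As an illustrative sketch (the SumTask class, threshold, and array are hypothetical, not taken from the original document):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Hypothetical divide-and-conquer sum over an array using the ForkJoin framework.
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000; // below this, sum sequentially
    private final long[] data;
    private final int lo, hi;

    SumTask(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {            // small enough: sum directly
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;             // otherwise: split in half
        SumTask left = new SumTask(data, lo, mid);
        SumTask right = new SumTask(data, mid, hi);
        left.fork();                           // run the left half asynchronously
        return right.compute() + left.join();  // compute the right half, then combine
    }
}

public class ForkJoinDemo {
    public static long parallelSum(long[] data) {
        return new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
    }

    public static void main(String[] args) {
        long[] data = new long[10_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        System.out.println(parallelSum(data));
    }
}
```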
Concurrent programming allows running multiple tasks simultaneously through processes and threads. Tasks may run in a time-shared manner on a single CPU core or truly in parallel across multiple CPU cores. Concurrent programs have multiple execution paths that make progress at the same time. Concurrency means tasks happen within the same timeframe and may depend on each other, while parallelism means tasks happen simultaneously and independently. Concurrency challenges include shared resources, race conditions, deadlocks, priority inversion, and starvation, which are managed with synchronization tools such as mutexes and semaphores. In iOS, Apple provides threads, Grand Central Dispatch, and other synchronization tools to support concurrent programming.
Threads allow concurrent execution through independent sequential paths. A thread is scheduled independently from other threads. Tasks are similar to processes but reduced in dimension and often share memory. Concurrency requires mechanisms for thread creation, synchronization, and communication. Parallel execution uses multiple CPUs while concurrent execution uses multiple threads through task switching on a single CPU.
Concurrent and parallel computing are explained in detail in the following presentation, and the basic differences are illustrated with real-life examples and applications.
This document discusses Java concurrency and the Java memory model. It begins with an agenda that covers the Java memory model, thread confinement, the Java atomic API, immutable objects, and memory consumption. It then goes into more detail on the Java memory model, discussing topics like ordering, visibility, and atomicity. It provides examples and references to help understand concepts like sequential consistency and data races. It also covers thread confinement techniques like ad hoc confinement, stack confinement, and using ThreadLocal.
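Of the thread-confinement techniques listed above, ThreadLocal is the one the language supports directly: each thread sees its own independent copy of the value, so no synchronization is needed. A minimal sketch (the counter and method names are illustrative):

```java
public class ThreadLocalDemo {
    // Each thread gets its own counter; state is confined per thread.
    private static final ThreadLocal<Integer> counter =
            ThreadLocal.withInitial(() -> 0);

    static int increment() {
        int next = counter.get() + 1;  // reads this thread's copy only
        counter.set(next);
        return next;
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 3; i++) increment();
            // Each thread counts to 3 independently, despite the shared field.
            System.out.println(Thread.currentThread().getName()
                    + " counted to " + counter.get());
        };
        Thread t1 = new Thread(task, "t1");
        Thread t2 = new Thread(task, "t2");
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}
```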
Grand Central Dispatch (GCD) was created by Apple to make it easier to write concurrent code for multi-core systems. It shifts thread and task management from apps to the operating system. Units of work are described as blocks of code, while queues organize blocks based on execution needs. GCD has a multi-core engine that assigns blocks from app queues to OS-managed threads, removing the need for apps to directly use threads. Blocks are lightweight anonymous functions that can capture state and be passed between queues and threads for asynchronous execution. Common queues include the main queue for UI updates and global queues for general-purpose work.
This document discusses client-server programming and threads in Java. It begins by outlining the topics that will be covered, including why multi-threading is used, defining and creating threads, the life cycle of a thread, and synchronization among threads. It then provides examples of creating threads by extending the Thread class and implementing the Runnable interface. It also demonstrates issues that can arise from accessing shared resources simultaneously from multiple threads, like race conditions, and how to address this using synchronized methods. Finally, it discusses using multithreading for user interfaces in GUI applications.
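The two creation styles and the synchronized-method fix described above can be sketched together; without the synchronized keyword on the counter, the two threads below would race and lose updates (the class names and loop counts are illustrative):

```java
// Two ways to define a thread, plus a synchronized method guarding shared state.
class Counter {
    private int count = 0;
    // synchronized prevents lost updates when several threads call increment()
    public synchronized void increment() { count++; }
    public synchronized int get() { return count; }
}

class MyThread extends Thread {            // way 1: extend Thread
    private final Counter counter;
    MyThread(Counter c) { this.counter = c; }
    @Override public void run() {
        for (int i = 0; i < 10_000; i++) counter.increment();
    }
}

public class RaceDemo {
    public static void main(String[] args) throws InterruptedException {
        Counter counter = new Counter();
        Thread t1 = new MyThread(counter);
        // way 2: implement Runnable (here as a lambda)
        Thread t2 = new Thread(() -> {
            for (int i = 0; i < 10_000; i++) counter.increment();
        });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.get()); // 20000 with synchronized in place
    }
}
```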
The document discusses multithreading and how it can be used to exploit thread-level parallelism (TLP) in processors designed for instruction-level parallelism (ILP). There are two main approaches for multithreading - fine-grained and coarse-grained. Fine-grained switches threads every instruction while coarse-grained switches on long stalls. Simultaneous multithreading (SMT) allows a processor to issue instructions from multiple threads in the same cycle by treating instructions from different threads as independent. This converts TLP into additional ILP to better utilize the resources of superscalar and multicore processors.
This document discusses multithreading and the differences between tasks and threads. It explains that operating systems manage each application as a separate task, and when an application initiates an I/O request it creates a thread. Multithreading allows a single process to support multiple concurrent execution paths. Benefits of threads include less overhead for creation, termination, and context switching compared to processes. The document concludes that threads enhance efficiency by sharing resources within a process.
Lecture 10 from the IAG0040 Java course in TTÜ.
See the accompanying source code written during the lectures: https://github.com/angryziber/java-course
This document discusses threads and multithreading. It begins with an introduction to threads and their models, including user-level and kernel-level threads. It then covers multithreading approaches like thread-level parallelism and data-level parallelism. The document discusses context switching on single-core versus multicore systems. It also provides an example of implementing matrix multiplication using threads. Finally, it summarizes a case study on using threads in interactive systems.
This document discusses parallel architecture and parallel programming. It begins with an introduction to von Neumann architecture and serial computation. Then it defines parallel architecture, outlines its benefits, and describes classifications of parallel processors including multiprocessor architectures. It also discusses parallel programming models, how to design parallel programs, and examples of parallel algorithms. Specific topics covered include shared memory and distributed memory architectures, message passing and data parallel programming models, domain and functional decomposition techniques, and a case study on developing parallel web applications using Java threads and mobile agents.
This document discusses processes and threads in Perl programming. It defines a process as an instance of a running program, while a thread is a flow of control through a program with a single execution point. Multiple threads can run within a single process and share resources, while processes run independently. The document compares processes and threads, and covers creating and managing threads, sharing data between threads, synchronization, and inter-process communication techniques in Perl like fork, pipe, and open.
Threads allow multiple tasks to run concurrently within a single Java program. A thread represents a separate path of execution and threads can be used to improve performance. There are two main ways to create threads: by extending the Thread class or implementing the Runnable interface. Threads transition between different states like new, runnable, running, blocked, and terminated. Synchronization is needed to prevent race conditions when multiple threads access shared resources simultaneously. Deadlocks can occur when threads wait for each other in a circular manner.
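The circular wait described above needs a cycle: one thread holds lock A while waiting for B, and another holds B while waiting for A. A common remedy, shown in this illustrative sketch (lock names are hypothetical), is to acquire locks in one global order so no cycle can form:

```java
// Consistent lock ordering breaks the circular-wait condition for deadlock.
public class LockOrderDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    static String transfer(String who) {
        synchronized (LOCK_A) {        // every thread takes A first,
            synchronized (LOCK_B) {    // then B, so no circular wait can form
                return who + " holds both locks";
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> System.out.println(transfer("t1")));
        Thread t2 = new Thread(() -> System.out.println(transfer("t2")));
        t1.start(); t2.start();
        t1.join(); t2.join();  // terminates; if one thread took B before A,
                               // the two could deadlock instead
    }
}
```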
Threads are lightweight processes that improve application performance through parallelism. Each thread has its own program counter and stack but shares other resources like memory with other threads in a process. Using threads provides advantages like lower overhead context switching compared to processes and allows parallel execution on multi-core systems. There are two types of threads - user level threads managed by libraries and kernel level threads supported by the OS kernel. Threads have a life cycle that includes states like new, ready, running, blocked, and terminated.
Multithreading allows an application to have multiple points of execution operating concurrently within the same memory space. Each point of execution is called a thread. Threads can run tasks concurrently, improving responsiveness. They share memory and can access resources simultaneously. Synchronization is needed when threads access shared data to prevent inconsistencies.
This document provides an introduction to POSIX threads (Pthreads) programming. It discusses what threads are, how they differ from processes, and how Pthreads provide a standardized threading interface for UNIX systems. The key benefits of Pthreads for parallel programming are improved performance from overlapping CPU and I/O work and priority-based scheduling. Pthreads are well-suited for applications that can break work into independent tasks or respond to asynchronous events. The document outlines common threading models and emphasizes that programmers are responsible for synchronizing access to shared memory in multithreaded programs.
This document provides a brief overview of multithreading. Multithreading allows a program to split into multiple threads that can run simultaneously, improving responsiveness, using multiprocessors efficiently, and structuring programs more effectively. Threads transition between states like new, runnable, running, blocked, and dead over their lifetime. Common threading techniques include setting thread priority, enabling communication between threads, and avoiding deadlocks when multiple threads depend on each other's locks.
Threads allow multiple tasks to run concurrently by sharing memory and resources within a process. Context switching between threads is typically faster than between processes. Threads can be created and started in different ways and use synchronization techniques like locks, monitors, mutexes, semaphores, and wait handles to coordinate access to resources. The thread pool optimizes thread usage by maintaining pooled threads that can be assigned tasks to run asynchronously. Exceptions on worker threads must be handled manually.
This document discusses multithreading in Java. It defines threads as pieces of code that run concurrently with other threads. It describes the life cycle of a thread as starting, running, and stopping. It also discusses how to create multithreaded programs in Java by either extending the Thread class or implementing the Runnable interface.
A thread is an independent flow of control within a process, composed of a context (which includes a register set and a program counter) and a sequence of instructions to execute.
Correct and efficient synchronization of Java threads.
This document discusses synchronization issues in multithreaded Java programs. It begins by establishing the audience and goals, which are to discuss subtleties of the Java memory model that surprised experts and provide guidelines for writing correct and efficient synchronized code. It then covers key topics like atomicity, visibility and ordering constraints imposed by synchronization. It provides examples of problems like the double-check idiom and non-volatile flags. It also examines performance costs of synchronization and alternatives like isolation in Swing. Guidelines emphasize only synchronizing shared data, avoiding lock contention, and using immutable/volatile fields when possible.
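The double-check idiom mentioned above is the classic example of those subtleties: without volatile, a second thread can observe a non-null but partially constructed instance. A sketch of the repaired form (the class name is illustrative; the volatile fix relies on the Java 5+ memory model):

```java
// Double-checked lazy initialization; the volatile field is essential,
// otherwise the idiom is broken under the Java memory model.
public class LazySingleton {
    private static volatile LazySingleton instance;

    private LazySingleton() {}

    public static LazySingleton getInstance() {
        LazySingleton result = instance;
        if (result == null) {                  // first check, without locking
            synchronized (LazySingleton.class) {
                result = instance;
                if (result == null) {          // second check, under the lock
                    instance = result = new LazySingleton();
                }
            }
        }
        return result;
    }
}
```

In the common case the field is already set, so callers take the fast path and never contend on the lock.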
The document discusses the thread model of Java. It states that all Java class libraries are designed with multithreading in mind. Java uses threads to enable asynchronous behavior across the entire system. Once started, a thread can be suspended, resumed, or stopped. Threads are created by extending the Thread class or implementing the Runnable interface. Context switching allows switching between threads by yielding control voluntarily or through prioritization and preemption. Synchronization is needed when threads access shared resources using monitors implicit to each object. Threads communicate using notify() and wait() methods.
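The wait()/notify() communication described above uses the monitor implicit in every Java object. A minimal single-slot handoff sketch (class and method names are illustrative):

```java
// One thread produces a value; another waits for it on the object's monitor.
public class Handoff {
    private Integer slot;                 // shared state, guarded by "this"

    public synchronized void put(int value) {
        slot = value;
        notify();                         // wake a consumer blocked in wait()
    }

    public synchronized int take() throws InterruptedException {
        while (slot == null) wait();      // always wait in a loop, not an if
        int value = slot;
        slot = null;
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        Handoff h = new Handoff();
        Thread producer = new Thread(() -> h.put(42));
        producer.start();
        System.out.println(h.take());     // blocks until the producer calls put()
        producer.join();
    }
}
```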
This document discusses .NET multi-threading concepts including blocking, locking, signaling, and non-blocking techniques. It provides code examples for managing threads using techniques like Join, lock, Mutex, and EventWaitHandle. It also covers thread pools, common problems like race conditions and deadlocks, and proven practices for safe multi-threaded programming.
Multithreading allows a program to execute multiple tasks concurrently by using threads. A program may be single-threaded, using only one thread, or multithreaded, using several threads concurrently. The life cycle of a thread involves states such as newborn, runnable, running, blocked, and dead. Common thread methods include start(), run(), yield(), sleep(), wait(), notify(), and stop().
This document discusses multithreading in Java. It begins by explaining that multithreading allows multiple tasks to be performed concurrently by having each task executed in a separate thread. It then covers key topics like how threads are implemented in Java using the Thread class and Runnable interface, how to create and manage threads, common thread states, ensuring thread safety through synchronization, and techniques to improve synchronization performance.
This document discusses multithreading in Java. It defines multithreading as the ability for a program to execute multiple tasks concurrently using threads. Threads allow for multitasking within a single program. The document provides examples of threads, such as a spell checker, and explains how to create and start threads in Java by extending the Thread class or implementing the Runnable interface. It also covers the lifecycle of a thread and methods like sleep(). Finally, it discusses action listeners and how they can be used to handle events from user interactions.
Ateji PX for Java introduces parallelism at the language level, extending the sequential base language with a small number of parallel primitives inspired by pi-calculus. This makes parallel programming simple and intuitive, easy to learn, efficient, provably correct, and compatible with existing code, tools, and development processes.
The document provides an introduction and overview of parallel computing. It discusses parallel computing systems and parallel programming models like MPI and OpenMP. It covers theoretical concepts like Amdahl's law and practical limits of parallel computing related to load balancing and non-computational sections. Examples of parallel programming using MPI and OpenMP are also presented.
The document discusses novel paradigms for parallel programming on multicore processors. It covers parallel programming paradigms like transactional memory, which provides an easy way for programmers to achieve speed and balance. The document describes software transactional memory (STM) and hardware transactional memory (HTM), discussing their approaches to concurrency control, version management, and conflict detection. It also covers using STM for slot scheduling to efficiently schedule requests across threads to a shared resource.
Delphi's PPL library enables native, cross-platform parallel programming without explicitly creating threads. It uses features such as TThreadPool, TTask, and IFuture to run complex tasks, downloads, file processing, and database queries in a non-blocking way, taking advantage of multiple processors and cores.
This document compares GPU execution time prediction using machine learning techniques and analytical modeling. It begins with introductions to parallel programming models, GPU architectures, and machine learning techniques. It then describes testing methodology where algorithms were run on various NVIDIA GPUs and datasets were collected to compare machine learning approaches like linear regression and random forests to an analytical BSP-based model for GPU execution time prediction. The goal is to determine which approach more accurately predicts execution times.
This document provides an introduction to concurrency in Python using threads. It discusses how threads allow programs to perform multiple tasks simultaneously by sharing system resources like memory. The document covers basic threading concepts like creating and launching threads, as well as challenges like accessing shared data between threads, which can be non-deterministic due to thread scheduling. It aims to provide an overview of concurrency support in the Python standard library beyond just the user manual.
Multithreading in Java allows expressing potentially parallel code through threads. A thread represents concurrently executable code as a Runnable or by overriding the run() method in a Thread subclass. Starting a Thread object via its start() method executes the run() method concurrently. Threads run independently until completing run() or being blocked by operations like sleeping, locking, waiting or joining with other threads.
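The distinction above between start() and a plain call to run() is worth making concrete: calling run() directly is just a method call in the current thread, while start() schedules run() on a new thread. A sketch (the helper method and thread name are illustrative):

```java
import java.util.concurrent.atomic.AtomicReference;

// Only start() creates a new thread; t.run() executes in the calling thread.
public class StartVsRun {
    public static String threadNameVia(boolean useStart) throws InterruptedException {
        AtomicReference<String> name = new AtomicReference<>();
        Thread t = new Thread(() -> name.set(Thread.currentThread().getName()),
                              "worker");
        if (useStart) {
            t.start();   // run() executes on the "worker" thread
            t.join();
        } else {
            t.run();     // plain method call: run() executes right here
        }
        return name.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(threadNameVia(false)); // name of the calling thread
        System.out.println(threadNameVia(true));  // "worker"
    }
}
```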
The Scala programming language has been gaining significant traction over the last few years, being adopted by vastly different organizations from startups to large enterprises. While the language itself is pretty well understood and explained in tutorials and books, there is an apparent dearth of practical advice for new adopters on the best approach to integrating the new technology. In this talk I’ll attempt to offer such advice gathered over several years of production Scala use, focusing on tools, practices, patterns and the community, in the hope of making your transition into the Scala ecosystem easier and better-informed up front.
A talk given at JavaOne 2015 in San Francisco.
Threads in Java: multitasking and multithreading.
Threads allow Java programs to take advantage of multiprocessor systems by performing multiple tasks simultaneously. There are two main ways to create threads in Java: by extending the Thread class or implementing the Runnable interface. Threads can be started using the start() method and terminate when their run() method completes. The Java scheduler uses priority to determine which runnable threads get CPU time, with higher-priority threads preempting lower-priority ones. On a single-CPU system, threads provide concurrency but not true parallelism, since they must share the one processor.
This document discusses multithreading and concurrency in .NET. It covers key concepts like processes and threads, and how they relate on an operating system level. It also discusses the Thread Pool, Task Parallel Library (TPL), Tasks, Parallel LINQ (PLINQ), and asynchronous programming patterns in .NET like async/await. Examples are provided for common threading techniques like producer/consumer and using the Timer class. Overall it serves as a comprehensive overview of multithreading and concurrency primitives available in the .NET framework.
Basic Understanding and Implement of Node.jsGary Yeh
Node.js is an event-driven JavaScript runtime built on Chrome's V8 engine. It uses non-blocking I/O and an event loop to handle multiple connections simultaneously without blocking. The document discusses Node.js' event loop model and asynchronous I/O, how callbacks allow non-blocking operations, and how modules and frameworks like Express allow building scalable network applications.
This document provides an overview of key concepts in Java concurrency including processes and threads, defining and starting threads, thread sleep and join methods, thread interference and memory consistency errors, liveness problems like deadlock and starvation, immutable objects, concurrency objects like locks and concurrent collections, executors and thread pools, the fork/join framework, and atomic variables.
This document discusses JavaScript performance best practices. It covers loading and execution performance, DOM scripting performance, and patterns to minimize repaints and reflows. Some key points include batching DOM changes, event delegation to reduce event handlers, and taking elements out of the document flow during animations. References are provided to resources on JavaScript performance testing and design patterns.
This document provides an introduction to multithreading concepts. It discusses using multiple threads to allow a bouncing ball animation program to start new balls even while others are still bouncing. It covers the basics of creating and running threads, including defining a runnable class and starting new threads. It also discusses key threading issues like thread states, scheduling, synchronization, and suspending/stopping threads.
"WTF is Twisted? (or; owl amongst the ponies)" is a talk that introduces the Twisted asynchronous programming framework, how it works, and what uses it.
Modern Java concurrency has undergone significant changes since Java 5 with the introduction of java.util.concurrent (j.u.c.). While concurrency is not a new subject, j.u.c. provides constructs like ReentrantLock, ConcurrentHashMap, and Executors that make concurrent programming easier compared to traditional approaches. However, many applications still use older concurrency approaches despite j.u.c. being faster and more refined in recent Java versions. The document advocates upgrading applications to take advantage of modern concurrency features.
New abstractions for concurrency make writing programs easier by moving away from threads and locks, but debugging such programs becomes harder. The call-stack, an essential tool in understanding why and how control flow reached a certain point in the program, loses meaning when inspected in traditional debuggers. Futures, actors or iteratees make code easier to write and reason about, and in this talk I'll show a simple solution to make them easier to debug. The tool I present integrates well with the Eclipse plugin for Scala, and shows how a "reactive debugger" might look like.
Async and parallel patterns and application design - TechDays2013 NLArie Leeuwesteijn
TechDays2013 NL session on async and parallel programming. Gives an overview of todays relevant .net technologies, examples and tips and tricks. This session will help you to understand and select and use the right async/parallel technology to use in your .net application. (arie@macaw.nl)
Java Multi Threading Concept
By N.V.Raja Sekhar Reddy
www.technolamp.co.in
Want more...
Like us @ https://www.facebook.com/Technolamp.co.in
subscribe videos @ http://www.youtube.com/user/nvrajasekhar
The big language features for Java SE 8 are lambda expressions (a.k.a. closures) and default methods (a.k.a. virtual extension methods). Adding closures to the Java language opens up a host of new expressive opportunities for applications and libraries, but how are they implemented? You might assume that lambda expressions are simply a compact syntax for inner classes, but, in fact, the implementation of lambda expressions is substantially different and builds on the invokedynamic feature added in Java SE 7.
This document discusses C# threads and thread synchronization. It begins by explaining how threads are represented by the Thread class in C# and how to start a thread by passing a delegate to its constructor. It then discusses thread states and properties like priority. It describes how threads need synchronization mechanisms when sharing resources and demonstrates this with a parent-child thread example using a shared queue. Finally, it covers various .NET synchronization primitives like locks, monitors, and reader-writer locks and how to synchronize access to collections and methods.
This document discusses multithreading and concurrency in Java. It defines a thread as a single sequential flow of control within a program. Multithreading allows a single processor to run multiple threads concurrently by rapidly switching between them. Creating threads in Java involves either extending the Thread class or implementing the Runnable interface. The document outlines thread states like new, ready, running, blocked, and finished. It also discusses thread scheduling, priorities, synchronization to prevent race conditions, and thread pools for managing tasks.
This document introduces threads as a fundamental unit of CPU utilization that forms the basis of multithreaded computer systems. It examines issues related to multithreaded programming and compares single and multithreaded processes. Threads within a process share code, data and heap sections but have their own stack segments. Context switching is cheaper between threads of the same process than between processes. Threads allow overlapping computation and I/O to improve performance.
Polyglot and Functional Programming (OSCON 2012)Martijn Verburg
The document discusses introducing polyglot and functional programming concepts to Java developers. It explains that while Java is a powerful language, other JVM languages can offer advantages like more rapid development, concise coding, and taking advantage of non-object oriented and dynamic approaches. It provides examples of using functional concepts like map and filter to more declaratively operate on collections of data in a Java program. The document suggests exposing developers to these concepts through libraries and by experimenting with other JVM languages.
This slides explains why ConfigureAwait(false) needs to be called when we use async await. This slides is based on the following blog by Microsoft.
https://devblogs.microsoft.com/dotnet/configureawait-faq/
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
2. Our Goals
• Appreciate the (increasing) importance of
parallel programming
• Understand fundamental concepts:
– Parallelism, threads, multi-threading, concurrency,
locks, etc.
• See some basics of how this is done in Java
• See some common uses:
– Divide and conquer, e.g. mergesort
– Worker threads in Swing
3. Notes!
• An area of rapid change!
– 1990s: parallel computers were $$$$
– Now: 4 core machines are commodity
• Variations between languages
• Old dogs and new tricks? Not so good…
– Educators, Java books, web pages
• Evolving frameworks, models, etc.
– E.g. Java’s getting Fork/Join in Java 1.7 (summer 11)
– MapReduce
4. (Multi)Process vs (Multi)Thread
• Assume a computer has one CPU
• Can only execute one statement at a time
– Thus one program at a time
• Process: an operating-system level “unit of
execution”
• Multi-processing
– Op. Sys. “time-slices” between processes
– Computer appears to do more than one program
(or background process) at a time
5. Tasks and Threads
• Thread: “a thread of execution”
• “Smaller”, “lighter” than a process
• smallest unit of processing that can be scheduled by
an operating system
• Has its own run-time call stack, copies of the CPU’s
registers, its own program counter, etc.
• Process has its own memory address space, but
threads share one address space
• A single program can be multi-threaded
• Time-slicing done just like in multiprocessing
• Repeat: the threads share the same memory
6. Task
• A task is an abstraction of a series of steps
– Might be done in a separate thread
• In Java, there are a number of classes /
interfaces that basically correspond to this
– Example (details soon): Runnable
– work done by method run()
7. Java: Statements → Tasks
• Consecutive lines of code:
Foo tmp = f1;
f1 = f2;
f2 = tmp;
• A method:
swap(f1, f2);
• A “task” object:
SwapTask task1 = new SwapTask(f1, f2);
task1.run();
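The SwapTask class used above is not shown on the slide. Since Java passes references by value, a task object cannot swap the caller's two variables directly, so this hypothetical sketch (class and field names are assumptions) swaps two slots of an array instead:

```java
// Hypothetical SwapTask: wraps the three-line swap as a Runnable task object.
public class SwapTask implements Runnable {
    private final Object[] data;
    private final int i, j;

    public SwapTask(Object[] data, int i, int j) {
        this.data = data; this.i = i; this.j = j;
    }

    @Override
    public void run() {
        Object tmp = data[i];   // Foo tmp = f1;
        data[i] = data[j];      // f1 = f2;
        data[j] = tmp;          // f2 = tmp;
    }

    public static void main(String[] args) {
        Object[] items = { "f1", "f2" };
        new SwapTask(items, 0, 1).run();  // runs in the current thread
        System.out.println(items[0] + " " + items[1]);  // f2 f1
    }
}
```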
8. Huh? Why a task object?
• Actions, functions vs. objects. What’s the
difference?
9. Huh? Why a task object?
• Actions, functions vs. objects. What’s the
difference?
• Objects:
– Are persistent. Can be stored.
– Can be created and then used later.
– Can be attached to other things. Put in Collections.
– Contain state.
• Functions:
– Called, return (not permanent)
10. Java Library Classes
• For task-like things:
– Runnable, Callable
– SwingWorker, RecursiveAction, etc.
• Thread class
• Managing tasks and threads
– Executor, ExecutorService
– ForkJoinPool
• In Swing
– The Event-Dispatch Thread
– SwingUtilities.invokeLater()
11. Java’s Nested Classes
• You can declare a class inside another
– http://download.oracle.com/javase/tutorial/java/javaOO/nested.html
• If declared static, can use just like any class
• If not static
– Can only define objects of that type from within non-static code of the
enclosing class
– Object of inner-class type can access all fields of the object that
created it. (Useful!)
– Often used for “helper” classes, e.g. a node object used in a list or
tree.
• See demo done in Eclipse: TaskDemo.java
13. Possible Needs for Task Objects
• Can you think of any?
• Storing tasks for execution later
– Re-execution
• Undo and Redo
• Threads
14. Undo Operations
• A task object should:
– Be able to execute and undo a function
– Therefore will need to be able to save enough
state to “go back”
• When application executes a task:
– Create a task object and make it execute
– Store that object on a undo stack
• Undo
– Get last task object stored on stack, make it undo
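The UndoableRunnable type these slides rely on is never defined. A minimal version, assumed here to be consistent with the AddTask and undo() examples on the following slides, just adds an undo method to Runnable:

```java
// Assumed interface: a Runnable task that can also reverse its effect.
public interface UndoableRunnable extends Runnable {
    /** Reverses the effect of run(); returns true if the undo succeeded. */
    boolean undo();
}
```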
15. Calculator App Example
• We had methods to do arithmetic operations:
public void addToMemory(double inputVal) {
memory = memory + inputVal; }
• Instead:
public void addToMemory(double inputVal) {
AddTask task = new AddTask(inputVal);
task.run();
undoStack.add(task);
}
16. Stack, Undo Stack
• A Stack is an important ADT
– A linear sequence of data
– Can only add to the end, remove item at the end
– LIFO organization: “last in, first out”
– Operations: push(x), pop(), sometimes top()
• Stacks are important for storing delayed things to
return to
– Run-time stack (with activation records)
– An undo stack (and a separate redo stack)
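The push/pop/top operations can be tried with java.util.ArrayDeque, which the JDK documentation recommends over the legacy java.util.Stack class:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class UndoStackDemo {
    public static void main(String[] args) {
        Deque<String> undoStack = new ArrayDeque<>();
        undoStack.push("add 5");               // push(x): add to the top
        undoStack.push("sub 2");
        System.out.println(undoStack.peek());  // top(): "sub 2"
        System.out.println(undoStack.pop());   // pop(): "sub 2" (LIFO)
        System.out.println(undoStack.pop());   // "add 5"
    }
}
```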
17. Nested class for Adding
private class AddTask implements UndoableRunnable {
private double param;
public AddTask(double inputVal) {
this.param = inputVal;
}
public void run() { // memory is field in CalcApp
memory = memory + this.param;
}
public boolean undo() {
memory = memory - this.param;
return true;
}
}
18. Undo operation
• In the Calc app:
public boolean undo() {
boolean result = false;
int last = undoStack.size()-1;
if ( last >= 0 ) {
UndoableRunnable task = undoStack.get(last);
result = task.undo();
undoStack.remove(last);
}
return result;
}
20. Java Thread Classes and Methods
• Java has some “primitives” for creating and
using threads
– Most sources teach these, but in practice they’re
hard to use well
– Now, better frameworks and libraries make using
them directly less important.
• But let’s take a quick look
21. Java’s Thread Class
• Class Thread: its method run() does its business
when that thread is run
• But you never call run(). Instead, you call start()
which lets Java start it and call run()
• To use Thread class directly (not recommended
now):
– define a subclass of Thread and override run() – not
recommended!
– Create a task as a Runnable, link it with a Thread, and then
call start() on the Thread.
• The Thread will run the Runnable’s run() method.
22. Creating a Task and Thread
• Again, the first of the two “old” ways
• Get a thread object, then call start() on that
object
– Makes it available to be run
– When it’s time to run it, Thread’s run() is called
• So, create a thread using inheritance
– Write class that extends Thread, e.g. MyThread
– Define your own run()
– Create a MyThread object and call start() on it
• We won’t do this! Not good design!
23. Runnables and Thread
• Use the “task abstraction” and create a class that
implements Runnable interface
– Define the run() method to do the work you want
• Now, two ways to make your task run in a separate
thread
– First way:
• Create a Thread object and pass a Runnable to the constructor
• As before, call start() on the Thread object
– Second way: hand your Runnable to a “thread manager”
object
• Several options here! These are the new good ways. More soon.
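The first way can be sketched in a few lines (lambda syntax requires Java 8+; the thread name is arbitrary):

```java
public class RunnableDemo {
    public static void main(String[] args) throws InterruptedException {
        // The task abstraction: a Runnable holds the work to be done.
        Runnable task = () -> System.out.println("running in: "
                + Thread.currentThread().getName());

        // First way: wrap the Runnable in a Thread and start it.
        Thread t = new Thread(task, "worker-1");
        t.start();   // runs task.run() concurrently; never call run() directly
        t.join();    // wait for the worker to finish
    }
}
```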
24. Join (not the most descriptive word)
• The Thread class defines various primitive methods you
could not implement on your own
– For example: start, which calls run in a new thread
• The join() method is one such method, essential for
coordination in this kind of computation
– Caller blocks until/unless the receiver is done executing (meaning its
run returns)
– E.g. in method foo() running in “main” thread, we call:
myThread.start(); myThread.join();
– Then this code waits (“blocks”) until myThread’s run() completes
• This style of parallel programming is often called “fork/join”
– Warning: we’ll soon see a library called “fork/join” which simplifies
things. In that, you never call join()
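A small fork/join-style sketch with plain Threads (all names are illustrative): each worker writes its partial sum into its own array slot, so no synchronization is needed, and join() makes the main thread wait for both workers.

```java
public class ForkJoinStyleSum {
    public static void main(String[] args) throws InterruptedException {
        int[] data = new int[1000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;

        long[] partial = new long[2];  // one slot per worker: no shared slot
        Thread left  = new Thread(() -> partial[0] = sum(data, 0, 500));
        Thread right = new Thread(() -> partial[1] = sum(data, 500, 1000));

        left.start(); right.start();   // "fork"
        left.join();  right.join();    // "join": block until both finish
        System.out.println(partial[0] + partial[1]);  // 500500
    }

    static long sum(int[] a, int lo, int hi) {
        long s = 0;
        for (int i = lo; i < hi; i++) s += a[i];
        return s;
    }
}
```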
26. Threading in Swing
• Threading matters a lot in Swing GUIs
– You know: main’s thread ends “early”
– JFrame.setVisible(true) starts the “GUI thread”
• Swing methods run in a separate thread called the
Event-Dispatching Thread (EDT)
– Why? GUIs need to be responsive quickly
– Important for good user interaction
• But: slow tasks can block the EDT
– Makes GUI seem to hang
– Doesn’t allow parallel things to happen
27. Thread Rules in Swing
• All operations that update GUI components
must happen in the EDT
– These components are not thread-safe (later)
– SwingUtilities.invokeLater(Runnable r) is a method
that runs a task in the EDT when appropriate
• But execute slow tasks in separate worker
threads
• To make common tasks easier, use a
SwingWorker task
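A minimal sketch of the invokeLater rule (runnable without a GUI; output interleaving may vary):

```java
import javax.swing.SwingUtilities;

public class EdtDemo {
    public static void main(String[] args) throws Exception {
        System.out.println("main runs on: " + Thread.currentThread().getName());
        // invokeLater queues the Runnable to run later on the EDT.
        SwingUtilities.invokeLater(() ->
            System.out.println("GUI update runs on: "
                + Thread.currentThread().getName()));  // typically AWT-EventQueue-0
        // invokeAndWait is the blocking variant (never call it from the EDT).
        SwingUtilities.invokeAndWait(() -> {});
    }
}
```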
28. SwingWorker
• A class designed to be extended to define a
task for a worker thread
– Override method doInBackground()
This is like run() – it’s what you want to do
– Override method done()
This method is for updating the GUI afterwards
• It will be run in the EDT
• For more info, see:
http://download.oracle.com/javase/tutorial/uiswing/concurrency/
• Note you can get interim results too
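A sketch of the pattern, computing a Fibonacci number off the EDT (the class name and the printing are illustrative; a real GUI would update a component in done()):

```java
import javax.swing.SwingWorker;

public class FibWorker extends SwingWorker<Long, Void> {
    private final int n;
    public FibWorker(int n) { this.n = n; }

    @Override
    protected Long doInBackground() {  // runs on a worker thread, like run()
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) { long t = a + b; a = b; b = t; }
        return a;
    }

    @Override
    protected void done() {            // runs on the EDT afterwards
        try {
            // In a real GUI this would set a JLabel's text instead.
            System.out.println("fib(" + n + ") = " + get());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws Exception {
        FibWorker w = new FibWorker(10);
        w.execute();                   // schedule on a shared worker thread
        System.out.println(w.get());  // blocks until done; prints 55
    }
}
```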
29. Code Example
• We have a fibonacci demo that runs this
method both recursively and with a loop
• Original version
– Unresponsive until it completes all its calculations
• Need to run calls to the recursive fibonacci in
a separate thread
– See Fib2.java that uses SwingWorker to define a
task
31. New Java ForkJoin Framework
• Designed to support a common need
– Recursive divide and conquer code
– Look for small problems, solve without parallelism
– For larger problems
• Define a task for each subproblem
• Library provides
– a Thread manager, called a ForkJoinPool
– Methods to send your subtask objects to the pool to be run,
and your call waits until they’re done
– The pool handles the multithreading well
32. • Turns out that Java’s threads are still too “heavyweight”
• Will be in Java 7 standard libraries, but
available in Java 6 as a downloaded .jar file
– Get jsr166y.jar from
http://gee.cs.oswego.edu/dl/concurrency-interest/index.html
– More info here
http://www.cs.washington.edu/homes/djg/teachingMaterials/grossmanSPA
33. Screenshots: For single- and multi-threaded Mergesort:
Threads in Eclipse Debug window, and Mac’s CPU usage display
34. The ForkJoinPool
• The “thread manager”
– Used when calls are made to RecursiveTask’s
methods fork(), invokeAll(), etc.
– When created, knows how many processors are
available
– Pretty sophisticated
• Idle threads “steal” queued work from busy ones (work stealing)
35. Overview of How To
• Create a ForkJoinPool “thread-manager” object
• Create a task object that extends RecursiveTask
– We’ll ignore use of generics with this (see docs)
– Create a task-object for entire problem and call
invoke(task) on your ForkJoinPool
• Your task class’ compute() is like Thread.run()
– It has the code to do the divide and conquer
– First, it must check if small problem – don’t use
parallelism, solve without it
– Then, divide and create >1 new task-objects. Run them:
• Either with invokeAll(task1, task2, …). Waits for all to complete.
• Or calling fork() on first, then compute() on second, then join()
36. Same Ideas as Thread But...
To use the ForkJoin Framework:
• A little standard set-up code (e.g., create a ForkJoinPool)
– Don’t subclass Thread; do subclass RecursiveTask<V>
– Don’t override run; do override compute
– Don’t call start; do call invoke, invokeAll, or fork
– Don’t just call join; do call join (which returns the answer),
or call invokeAll on multiple tasks
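A compact example of this recipe, summing an array with a RecursiveTask (the threshold and names are arbitrary; this uses the java.util.concurrent version of the framework):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final int[] data;
    private final int lo, hi;

    public SumTask(int[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo < THRESHOLD) {        // small problem: solve sequentially
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) / 2;          // divide into two subtasks
        SumTask left  = new SumTask(data, lo, mid);
        SumTask right = new SumTask(data, mid, hi);
        left.fork();                      // run the left half asynchronously
        long rightSum = right.compute();  // compute the right half here
        return left.join() + rightSum;    // join() returns the left answer
    }

    public static void main(String[] args) {
        int[] data = new int[100_000];
        for (int i = 0; i < data.length; i++) data[i] = 1;
        ForkJoinPool pool = new ForkJoinPool();
        System.out.println(pool.invoke(new SumTask(data, 0, data.length)));
    }
}
```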
37. Mergesort Example
• Top-level call. Create “main” task and submit
public static void mergeSortFJRecur(Comparable[] list,
                                    int first, int last) {
  if (last - first < RECURSE_THRESHOLD) {
    MergeSort.insertionSort(list, first, last);
    return;
  }
  Comparable[] tmpList = new Comparable[list.length];
  threadPool.invoke(new SortTask(list, tmpList, first, last));
}
38. Mergesort’s Task-Object Nested Class
static class SortTask extends RecursiveAction {
    Comparable[] list;
    Comparable[] tmpList;
    int first, last;
    public SortTask(Comparable[] a, Comparable[] tmp, int lo, int hi) {
        this.list = a; this.tmpList = tmp;
        this.first = lo; this.last = hi;
    }
    // continued next slide
39. compute() Does Task Recursion
protected void compute() { // in SortTask, continued from previous slide
    if (last - first < RECURSE_THRESHOLD)
        MergeSort.insertionSort(list, first, last);
    else {
        int mid = (first + last) / 2;
        // the two recursive calls are replaced by a call to invokeAll
        SortTask task1 = new SortTask(list, tmpList, first, mid);
        SortTask task2 = new SortTask(list, tmpList, mid+1, last);
        invokeAll(task1, task2);
        MergeSort.merge(list, first, mid, last);
    }
}
40. Leaving new ForkJoin framework…
• Java since 1.5 has a more general set of
classes for “task managers”
41. Nice to Have a Thread “Manager”
• If your code is responsible for creating a
bunch of tasks, linking them with Threads, and
starting them all, then you have much to worry
about:
– What if you start too many threads? Can you
manage the number of running threads?
– Enough processors?
– Can you shutdown all the threads?
– If one fails, can you restart it?
42. Executors
• An Executor is an object that manages running
tasks
– Submit a Runnable to be run with Executor’s
execute() method
– So, instead of creating a Thread for your Runnable
and calling start() on that, do this:
• Get an Executor object, say called exec
• Create a Runnable, say called myTask
• Submit for running: exec.execute(myTask)
43. How to Get an Executor
• Use static factory methods in the Executors class
• Fixed “thread pool”: at most N threads running at
one time
Executor exec = Executors.newFixedThreadPool(MAX_THREADS);
• Unlimited number of threads, created on demand
and reused
Executor exec = Executors.newCachedThreadPool();
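Putting the last two slides together, a minimal sketch of the pattern: get an Executor, submit Runnables with execute(). The names MAX_THREADS, runTasks, and the counting task are invented for illustration; ExecutorService (a subinterface of Executor) is used so the demo can shut the pool down and wait for completion.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical demo of the Executor pattern described above
class ExecutorDemo {
    static final int MAX_THREADS = 4; // assumed pool size

    static int runTasks(int nTasks) throws InterruptedException {
        AtomicInteger completed = new AtomicInteger(0);
        ExecutorService exec = Executors.newFixedThreadPool(MAX_THREADS);
        for (int i = 0; i < nTasks; i++) {
            Runnable myTask = () -> completed.incrementAndGet(); // the "work"
            exec.execute(myTask); // instead of new Thread(myTask).start()
        }
        exec.shutdown();                              // no new tasks accepted
        exec.awaitTermination(10, TimeUnit.SECONDS);  // wait for submitted tasks
        return completed.get();
    }
}
```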
44. Summary So Far
• Create a class that implements a Runnable to
be your “task object”
– Or if ForkJoin framework, extend RecursiveTask
• Create your task objects
• Create an Executor
– Or a ForkJoinPool
• Submit each task-object to the Executor which
starts it up in a separate thread
45. Concurrency and Synchronization
• Concurrency:
Issues related to multiple threads accessing
shared data
• Synchronization:
Methods to manage and control concurrent
access to shared data by multiple threads
• Note: Our book defines concurrent
programming and concurrency to be what
more people now call parallel programming
46. Possible Bugs in Multithreaded Code
• Possible bug #1
i=1; x=10; x = i + x; // x could be 12 here
• Possible bug #2
if ( ! myList.contains(x) )
myList.add(x); // x could be in list twice
• Why could these cause unexpected results?
47. Here’s Why
• See MSD text pp. 759-762
• Multiple threads executing same lines of code
at “same” time, accessing same data values
48. How 1 + 10 might be 12
• Both threads execute x = i + x (x is 10, i is 1).
One possible interleaving on a single CPU:
– Thread 1: get i (1) into its register 1
– Thread 1: get x (10) into its register 2
– (context switch: Thread 2 has the CPU)
– Thread 2: get i (1) into its register 1
– (context switch: Thread 1 has the CPU)
– Thread 1: add registers
– Thread 1: store result (11) into x (x is now 11)
– (context switch: Thread 2 has the CPU)
– Thread 2: get x (11) into its register 2
– Thread 2: add registers
– Thread 2: store result (12) into x (x is now 12)
• Thread 1 does its next line of code: x has changed
to 12 even though no code in Thread 1 has
touched x
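The effect of this kind of interleaving can be demonstrated with a short program. This demo (class and method names invented) has two threads increment a shared int with no synchronization; because "read, add, store" is not atomic, updates can be lost and the final value is unpredictable, often less than the expected total.

```java
// Illustrative race-condition demo (not from the slides)
class RaceDemo {
    static int x = 0; // shared, deliberately unsynchronized

    static int run(int increments) throws InterruptedException {
        x = 0;
        Runnable work = () -> {
            for (int i = 0; i < increments; i++) x = x + 1; // not atomic!
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // anywhere between 1 and 2 * increments, depending on lost updates
        return x;
    }
}
```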
49. Synchronization
• Understand the issue with concurrent access to
shared data?
– Data could be a counter (int) or a data structure (e.g. a
Map or List or Set)
• A race condition: two threads access the same
data at about the same time, and the result
depends on which one “wins” the race
• A critical section: a block of code that can only be
safely executed by one thread at a time
• A lock: an object that is “held” by one thread at a
time, then “released”
50. Synchronization in Java (1)
• Any object can serve as a lock
– Separate object: Object myLock = new Object();
– Current instance: the this object
• Enclose lines of code in a synchronized block
synchronized(myLock) {
// code here
}
• More than one thread could try to execute this code,
but one acquires the lock and the others “block” or
wait until the first thread releases the lock
51. Synchronized Methods
• Common situation: all the code in a method is a
critical section
– I.e. only one thread at a time should execute that
method
– E.g. a getter or setter or mutator, or something that
changes shared state info (e.g. a Map of important
data)
• Java makes it easy: add synchronized keyword to
method signature. E.g.
public synchronized void update(…) {
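A minimal sketch of the synchronized-method idea: a counter whose update() is a critical section. The class name SafeCounter and the test harness are invented; with the synchronized keyword, two threads incrementing concurrently never lose an update.

```java
// Sketch of a synchronized method guarding shared state, as described above
class SafeCounter {
    private int count = 0;

    public synchronized void update() { // only one thread at a time in here
        count = count + 1;
    }

    public synchronized int get() {
        return count;
    }

    static int run(int increments) throws InterruptedException {
        SafeCounter c = new SafeCounter();
        Runnable work = () -> {
            for (int i = 0; i < increments; i++) c.update();
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return c.get(); // always exactly 2 * increments
    }
}
```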
52. Summary So Far
• Concurrent access to shared data
– Can lead to serious, hard-to-find problems
– E.g. race conditions
• The concept of a lock
• Synchronized blocks of code or methods
– One thread at a time
– While first thread is executing it, others block
53. Some Java Solutions
• There are some synchronized collections
• Classes like AtomicInteger
– Stores an int
– Has methods to operate on it in a thread-safe
manner
int getAndAdd(int delta) instead of i=i+1
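The AtomicInteger idea can be sketched like this (the demo class and harness are invented): getAndAdd(1) replaces the unsafe i = i + 1, so no updates are lost even without locks.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: AtomicInteger as a thread-safe replacement for i = i + 1
class AtomicDemo {
    static int run(int increments) throws InterruptedException {
        AtomicInteger i = new AtomicInteger(0);
        Runnable work = () -> {
            for (int k = 0; k < increments; k++)
                i.getAndAdd(1); // atomic read-add-store, no lock needed
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return i.get();
    }
}
```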
54. More Advanced Synchronization
• A semaphore object
– Allows simultaneous access by N threads
– If N==1, then this is known as a mutex (mutual
exclusion)
– Java has a class Semaphore
• Other Java classes
– CountDownLatch, Barriers, etc.
• No more on these in CS2110 this term
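Although the course doesn't go further with these, here is a minimal sketch of a Semaphore with N == 1 used as a mutex (the demo class and harness are invented): each thread must acquire() the single permit before entering the critical section and release() it afterward.

```java
import java.util.concurrent.Semaphore;

// Sketch: Semaphore(1) acting as a mutex around a critical section
class SemaphoreDemo {
    static int run(int increments) throws InterruptedException {
        Semaphore mutex = new Semaphore(1); // one permit: mutual exclusion
        int[] count = {0};
        Runnable work = () -> {
            for (int i = 0; i < increments; i++) {
                try {
                    mutex.acquire(); // take the permit (blocks if held)
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                try {
                    count[0]++;      // critical section
                } finally {
                    mutex.release(); // always give the permit back
                }
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return count[0];
    }
}
```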
56. Barriers
• Java class CyclicBarrier
– A rendezvous point or barrier point
– Worker threads wait at a spot until all get there
– Then all proceed
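The rendezvous idea can be sketched with CyclicBarrier (class and field names invented): each worker finishes "phase 1", waits at the barrier, and only proceeds to "phase 2" once all workers have arrived, so every worker is guaranteed to see everyone's phase-1 results.

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

// Sketch of a barrier point: no worker starts phase 2 until all finish phase 1
class BarrierDemo {
    static boolean run(int nWorkers) throws InterruptedException {
        boolean[] phase1Done = new boolean[nWorkers];
        boolean[] sawAll = new boolean[nWorkers];
        CyclicBarrier barrier = new CyclicBarrier(nWorkers);
        Thread[] workers = new Thread[nWorkers];
        for (int w = 0; w < nWorkers; w++) {
            final int id = w;
            workers[w] = new Thread(() -> {
                phase1Done[id] = true;   // phase 1 "work"
                try {
                    barrier.await();     // wait here until all workers arrive
                } catch (InterruptedException | BrokenBarrierException e) {
                    return;
                }
                boolean all = true;      // phase 2: check everyone's phase 1
                for (boolean d : phase1Done) all = all && d;
                sawAll[id] = all;
            });
            workers[w].start();
        }
        for (Thread t : workers) t.join();
        boolean ok = true;
        for (boolean s : sawAll) ok = ok && s;
        return ok; // true: every worker saw all phase-1 flags set
    }
}
```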
57. Using CountDownLatch
• Here are some common scenarios and demo
programs for them
• You’ll use the last of these for the War card-
game program!
58. Scenario #1
• A “manager” thread and N “worker” threads
• Manager starts workers but then must wait for them
to finish before doing follow-up work
• Solution:
– Manager creates a CountDownLatch with value N
– After the workers start, manager calls await() on the latch
– When each worker completes its work, it calls
countDown() on the latch
– After all N call countDown(), manager is un-blocked and
does follow-up work
• Example use: parallel divide and conquer like
mergesort
• Code example: SyncDemo0.java
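The steps above can be sketched as follows. This is not the SyncDemo0.java mentioned on the slide; the class name Scenario1 and the squaring "work" are invented for illustration.

```java
import java.util.concurrent.CountDownLatch;

// Sketch of Scenario #1: manager waits for N workers via CountDownLatch
class Scenario1 {
    static int run(int nWorkers) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(nWorkers); // value N
        int[] results = new int[nWorkers];
        for (int w = 0; w < nWorkers; w++) {
            final int id = w;
            new Thread(() -> {
                results[id] = id * id; // the worker's "work"
                done.countDown();      // signal: this worker finished
            }).start();
        }
        done.await(); // manager blocks until all N have called countDown()
        int sum = 0;  // follow-up work: safe to read results now
        for (int r : results) sum += r;
        return sum;
    }
}
```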
59. Scenario #2
• A “manager” thread and N “worker” threads
• Manager starts workers but wants them to
“hold” before doing real work until it says “go”
• Solution:
– Manager creates a CountDownLatch with value 1
– After each worker starts, it calls await() on that Latch
– At some point, when ready, the manager calls
countDown() on that Latch
– Now Workers free to continue with their work
• Code example: SyncDemo1.java
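A sketch of this scenario (again not the actual SyncDemo1.java; names are invented): workers block on a latch of value 1 until the manager counts it down. A second latch from Scenario #1 is added here only so the demo can verify the workers finished.

```java
import java.util.concurrent.CountDownLatch;

// Sketch of Scenario #2: workers "hold" until the manager says "go"
class Scenario2 {
    static int run(int nWorkers) throws InterruptedException {
        CountDownLatch go = new CountDownLatch(1);          // manager's "go"
        CountDownLatch done = new CountDownLatch(nWorkers); // to check results
        int[] count = {0};
        for (int w = 0; w < nWorkers; w++) {
            new Thread(() -> {
                try {
                    go.await();  // hold here until manager counts down
                } catch (InterruptedException e) {
                    return;
                }
                synchronized (count) { count[0]++; } // the "real work"
                done.countDown();
            }).start();
        }
        // manager prepares; no worker has done real work yet
        go.countDown(); // release all waiting workers at once
        done.await();   // wait for them to finish (Scenario #1 pattern)
        return count[0];
    }
}
```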
60. Scenario #3
• Work done in “rounds” where:
– All workers wait for manager to say “go”
– Each worker does its job and then waits for next round
– Manager waits for all workers to complete a round, then does some
follow-up work
– When that’s done, manager starts next round by telling workers “go”
• Solution: combine the two previous solutions
– First Latch: hold workers until manager is ready
– Second Latch: manager waits until workers finish a round
– Worker’s run() has loop to repeat
– Manager must manage Latches, recreating them at end of round
• Example use: a card game or anything that has that kind of
structure
• Code example: SyncDemo2.java
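A sketch of the rounds structure (not the actual SyncDemo2.java; names are invented). To keep the example short and avoid a subtle handoff race, this version pre-creates one pair of latches per round rather than having the manager recreate them at the end of each round.

```java
import java.util.concurrent.CountDownLatch;

// Sketch of Scenario #3: rounds of work coordinated by two latches per round
class Scenario3 {
    static int run(int nWorkers, int nRounds) throws InterruptedException {
        CountDownLatch[] go = new CountDownLatch[nRounds];   // "go" per round
        CountDownLatch[] done = new CountDownLatch[nRounds]; // round complete
        for (int r = 0; r < nRounds; r++) {
            go[r] = new CountDownLatch(1);
            done[r] = new CountDownLatch(nWorkers);
        }
        int[] total = {0};
        Thread[] workers = new Thread[nWorkers];
        for (int w = 0; w < nWorkers; w++) {
            workers[w] = new Thread(() -> {
                for (int r = 0; r < nRounds; r++) {      // worker's run() loops
                    try { go[r].await(); }               // wait for "go"
                    catch (InterruptedException e) { return; }
                    synchronized (total) { total[0]++; } // this round's work
                    done[r].countDown();                 // round finished
                }
            });
            workers[w].start();
        }
        for (int r = 0; r < nRounds; r++) {
            go[r].countDown(); // manager says "go" for this round
            done[r].await();   // manager waits for all workers
            // manager's follow-up work for the round would go here
        }
        for (Thread t : workers) t.join();
        return total[0];
    }
}
```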
61. Summary of last section
• Multiple threads may need to cooperate
– Common situation: some workers and a manager
– One thread may need to wait for one or more
threads to complete
– One or more threads may need to wait to be
“released”
– Or a combination of these situations
• Threads all access a CountDownLatch
– await() used to wait for enough calls to countDown()