The document discusses the Java memory model and how it defines the semantics of multi-threaded Java programs. It explains that the memory model specifies legal program behaviors and provides safety and security properties. It then discusses different aspects of the memory model at varying levels of expertise, including how to properly use monitors, locks, and volatile variables, how to reason about program correctness, and how to optimize concurrent data structures and utilities. The document also covers topics like the happens-before consistency model and how volatile variables can be used to avoid issues like those in double-checked locking implementations.
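The double-checked locking fix mentioned above can be sketched as follows. This is a minimal illustration, not code from the document; the class and field names are made up. The key point is that the `volatile` modifier on the shared field makes the lazy initialization safe under the Java memory model.

```java
// Hypothetical lazily-initialized singleton illustrating double-checked locking.
// Without 'volatile', a reader could observe a partially constructed instance.
public class DclExample {
    static class Holder {
        final int value;
        Holder(int value) { this.value = value; }
    }

    private static volatile Holder instance;   // 'volatile' is essential here

    static Holder getInstance() {
        Holder local = instance;               // first (unsynchronized) check
        if (local == null) {
            synchronized (DclExample.class) {
                local = instance;              // second check, under the lock
                if (local == null) {
                    local = new Holder(42);
                    instance = local;          // volatile write publishes safely
                }
            }
        }
        return local;
    }

    public static void main(String[] args) {
        System.out.println(getInstance().value); // prints 42
    }
}
```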
This document discusses Java concurrency and the Java memory model. It begins with an agenda that covers the Java memory model, thread confinement, the Java atomic API, immutable objects, and memory consumption. It then goes into more detail on the Java memory model, discussing topics like ordering, visibility, and atomicity. It provides examples and references to help understand concepts like sequential consistency and data races. It also covers thread confinement techniques like ad hoc confinement, stack confinement, and using ThreadLocal.
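The ThreadLocal confinement technique mentioned above can be sketched in a few lines (class names invented for illustration): each thread gets its own independent copy of the value, so no synchronization is needed.

```java
// Minimal sketch of thread confinement via ThreadLocal: each thread sees
// its own copy of the buffer, so the threads never contend on shared state.
public class ThreadLocalDemo {
    private static final ThreadLocal<StringBuilder> BUFFER =
            ThreadLocal.withInitial(StringBuilder::new);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> BUFFER.get().append(Thread.currentThread().getName());
        Thread t1 = new Thread(task, "t1");
        Thread t2 = new Thread(task, "t2");
        t1.start(); t2.start();
        t1.join(); t2.join();
        // The main thread's copy is untouched: the workers wrote to their own.
        System.out.println(BUFFER.get().length()); // prints 0
    }
}
```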
The document discusses various techniques for maintaining consistency in distributed systems with replicated data, including different consistency models. It describes how replication improves reliability and performance but introduces challenges with consistency. Weak consistency models are proposed to improve scalability by loosening consistency requirements. Causal consistency requires a total order of causally related writes only, while sequential consistency requires an order of all reads and writes. FIFO consistency ensures writes from each process are seen in the order issued.
This document summarizes key features and updates related to concurrency in Java releases. It discusses basics of concurrency including Amdahl's law and challenges of concurrent programming. It then covers constructs in Java 5+ like atomic operations, locks, conditions, executors and collections. The document also summarizes the fork-join framework and parallelism features in Java 8 including streams API, common pool and completable future. It highlights updates to ConcurrentHashMap in Java 8 to support parallel operations.
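The Java 8 ConcurrentHashMap parallel operations referred to above look roughly like this (the map contents are made up for illustration). The first argument to the bulk methods is the parallelism threshold: `1` requests maximal parallelism, `Long.MAX_VALUE` forces sequential execution in the caller.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of Java 8 bulk operations and atomic per-key updates on ConcurrentHashMap.
public class ChmBulkOps {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> scores = new ConcurrentHashMap<>();
        scores.put("a", 1);
        scores.put("b", 2);
        scores.put("c", 3);

        // Parallel reduction over the values (threshold 1 = fully parallel).
        int total = scores.reduceValuesToInt(1, Integer::intValue, 0, Integer::sum);
        System.out.println(total); // prints 6

        // merge() is atomic per key even under concurrent access.
        scores.merge("a", 10, Integer::sum);
        System.out.println(scores.get("a")); // prints 11
    }
}
```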
The document discusses several topics related to optimizing Java program performance including:
1. Using buffering and non-blocking I/O for file reading/writing to improve efficiency.
2. Minimizing network calls by retrieving related data in one call and using design patterns like the session facade.
3. Reusing objects when possible rather than constantly creating new instances to reduce garbage collection overhead.
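Point 1 above can be illustrated with a tiny buffered-read sketch (the file is a temporary file created just for the example): wrapping a reader in a buffer avoids one system call per character.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Toy illustration of buffered file reading: BufferedReader batches the
// underlying reads instead of hitting the file system byte by byte.
public class BufferedCopy {
    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("demo", ".txt");
        Files.write(src, "hello buffered world".getBytes());

        StringBuilder sb = new StringBuilder();
        try (BufferedReader in = Files.newBufferedReader(src)) {
            String line;
            while ((line = in.readLine()) != null) sb.append(line);
        }
        System.out.println(sb); // prints: hello buffered world
        Files.delete(src);
    }
}
```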
A set of slides on the basics of concurrency, touching on Java's support for multi-threaded programming, do's and don'ts, common approaches to developing concurrent applications, and the evolution of the Java APIs that support effective multi-threaded programming.
The document discusses Java concurrency. It covers how concurrency has evolved in Java over the years. Some key concurrency concepts and patterns are explained like atomicity, visibility, synchronization, confinement, immutability and cooperation. The document does a deep dive into the java.util.concurrent packages, atomic classes, locks, executor framework and fork join framework. It discusses that there are usually multiple ways to solve a concurrency problem and compares options. Common concurrency mistakes are also highlighted.
Writing concurrent programs that can run in multiple threads and on multiple cores is crucial but daunting. Futures provide a convenient abstraction for many problem domains. The online course "Intermediate Scala" includes an up-to-date discussion of futures and the parts of java.util.concurrent that underlie the Scala futures implementation. Unlike Java's futures, Scala futures support composition, transformations and sophisticated callbacks.
The author is managing editor of http://scalacourses.com, which offers self-paced online courses that teach Introductory and Intermediate Scala and Play Framework.
Modern Java concurrency has undergone significant changes since Java 5 with the introduction of java.util.concurrent (j.u.c.). While concurrency is not a new subject, j.u.c. provides constructs like ReentrantLock, ConcurrentHashMap, and Executors that make concurrent programming easier compared to traditional approaches. However, many applications still use older concurrency approaches despite j.u.c. being faster and more refined in recent Java versions. The document advocates upgrading applications to take advantage of modern concurrency features.
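The j.u.c. style advocated above can be sketched with ReentrantLock, which provides the same mutual exclusion as `synchronized` plus extras such as timed `tryLock`. The class below is an invented example, not code from the document.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of a thread-safe counter in the java.util.concurrent style.
public class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count;

    void increment() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();   // always release in a finally block
        }
    }

    int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> { for (int j = 0; j < 1000; j++) c.increment(); });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.get()); // prints 4000
    }
}
```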
The aim of this presentation is not to make you a master of Java 8 concurrency, but to help guide you towards that goal. Sometimes it helps just to know that there is an API that might suit a particular situation. Use the pointers given to search further and learn more about those topics, and refer to books, the Java API documentation, blogs, etc. Examples and demos for all the cases discussed will be added to my blog, www.javajee.com.
Advanced Introduction to Java Multi-Threading - Full (choksheak)
Designed for the beginning Java developer to grasp advanced Java multi-threading concepts quickly. Talks mainly about the Java Memory Model and the Concurrent Utilities. This presentation is Java-specific and we intentionally omit general non-Java-specific details, such as hardware architecture, OS, native threads, algorithms, and general software design principles etc.
The Java Memory Model defines rules for how threads interact through shared memory in Java. It specifies rules for atomicity, ordering, and visibility of memory operations. The JMM provides guarantees for code safety while allowing compiler optimizations. It defines a happens-before ordering of instructions. The ordering rules and visibility rules ensure threads see updates from other threads as expected. The JMM implementation inserts memory barriers as needed to maintain the rules on different hardware platforms.
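The happens-before guarantee described above can be made concrete with a two-thread sketch (invented for illustration): the ordinary write to `data` happens-before the volatile write to `ready`, which happens-before any volatile read that observes `ready == true`, so the reader is guaranteed to see `data == 42`.

```java
// Minimal happens-before demonstration with a volatile flag.
public class VisibilityDemo {
    static int data;
    static volatile boolean ready;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) { /* spin until the volatile read sees the write */ }
            System.out.println(data); // guaranteed to print 42
        });
        reader.start();
        data = 42;        // ordinary write
        ready = true;     // volatile write publishes 'data' to the reader
        reader.join();
    }
}
```

Without `volatile` on `ready`, the reader could spin forever or print a stale value of `data`; with it, the JMM forbids both outcomes.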
At a time when Herb Sutter announced to everyone that the free lunch is over ("The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software"), concurrency became part of our everyday life. A big change is coming to Java: Project Loom, and with it new terms such as "virtual threads", "continuations" and "structured concurrency". If you've been wondering what they will change in our daily work, whether it's worth rewriting your Tomcat-based application on top of super-efficient reactive Netty, or whether to wait for Project Loom, this presentation is for you.
I will talk about Project Loom and the new possibilities related to virtual threads and "structured concurrency": how it works, what can be achieved, and the impact on performance.
Java Concurrency and Performance | Multi Threading | Concurrency | Java Conc... (Anand Narayanan)
With the advent of multi-core processors, single-threaded programs are fast becoming obsolete. Java was built to do many things at once; in computing terms, we call this "concurrency", and it is one of the main reasons Java is so useful. Today many of our applications run on multiple cores, and concurrent Java programs with multiple threads are the answer for effective performance and stability on multi-core systems. Concurrency is among the foremost worries for newcomers to Java programming, but there is no reason to let it deter you: excellent documentation is available, along with pictorial representations of each topic that make the material easier to understand. Java threads have become easier to work with as the Java platform has evolved. To learn multithreaded programming in Java you need some building blocks, and our training expert, with his rich training and consulting experience, illustrates them with case studies based on real applications.
Java 9 is just around the corner. In this session, we'll describe the new modularization support (Jigsaw), new JDK tools, enhanced APIs and many performance improvements that were added to the new version.
Building large scale, job processing systems with Scala Akka Actor framework (Vignesh Sukumar)
The document discusses building massive scale, fault tolerant job processing systems using the Scala Akka framework. It describes implementing a master-slave architecture with actors where an agent runs on each storage node to process jobs locally, achieving high throughput. It also covers controlling system load by dynamically adjusting parallelism, and implementing fine-grained fault tolerance through actor supervision strategies.
This document discusses various programming paradigms and concurrency concepts in Java. It covers single process and multi-process programming, as well as multi-core and multi-threaded programming. The document also discusses processes, threads, synchronization, deadlocks, and strategies for designing objects to be thread-safe such as immutability, locking, and splitting state.
This document discusses various programming paradigms and concurrency concepts in Java. It covers single and multi-process programming, multi-threading, processes, threads, synchronization, deadlocks, and strategies for designing objects to be thread-safe such as immutability, locking, and containment. It also summarizes high-level concurrency utilities in Java like locks, executors, concurrent collections, and atomic variables.
The document discusses several programming paradigms including single process, multi process, and multi-core/multi-thread. It then covers topics such as processes, threads, synchronization, and liveness issues in concurrent programming including deadlock, starvation and livelock. Processes run independently while threads share data and scheduling. Synchronization prevents interference and inconsistencies between threads accessing shared data. Liveness issues can cause programs to halt indefinitely.
This document discusses various programming paradigms and concurrency concepts in Java. It covers single process and multi-process programming, as well as multi-core and multi-threaded programming. Key concepts discussed include processes, threads, synchronization, deadlocks, and high-level concurrency objects like locks, executors, and concurrent collections. The document provides examples of implementing and managing threads, as well as communicating between threads using techniques like interrupts, joins, and guarded blocks.
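The guarded-block idiom mentioned above can be sketched with a one-slot mailbox (an invented example): the consumer waits in a loop until the condition it needs becomes true, re-checking after every wakeup.

```java
// Hypothetical one-slot mailbox showing a guarded block with wait/notifyAll.
public class Mailbox {
    private String message;

    synchronized void put(String m) {
        message = m;
        notifyAll();                 // wake any thread blocked in take()
    }

    synchronized String take() throws InterruptedException {
        while (message == null) {    // guard re-checked after every wakeup
            wait();
        }
        String m = message;
        message = null;
        return m;
    }

    public static void main(String[] args) throws Exception {
        Mailbox box = new Mailbox();
        Thread producer = new Thread(() -> box.put("hello"));
        producer.start();
        System.out.println(box.take()); // prints hello
    }
}
```

The `while` loop (rather than an `if`) matters: spurious wakeups and competing consumers both require the guard to be re-evaluated before proceeding.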
This document discusses various programming paradigms and concurrency concepts in Java. It covers single process and multi-process programming, as well as multi-core and multi-threaded programming. Key concepts discussed include processes, threads, synchronization, deadlocks, and high-level concurrency objects like locks, executors, and concurrent collections. The document provides examples of implementing threads using subclasses and interfaces, and communicating between threads using interrupts, joins, and guarded blocks.
This document discusses several programming paradigms and concepts related to multi-threaded programming. It covers single process vs multi process vs multi-core/multi-threaded programming. It also discusses processes, threads, synchronization mechanisms like semaphores and barriers, and concurrency issues like deadlock, starvation and livelock that can occur in multi-threaded programs.
This document discusses various programming paradigms and concurrency concepts in Java. It covers single process and multi-process programming, as well as multi-core and multi-threaded programming. The document also discusses processes, threads, synchronization, deadlocks, and strategies for designing objects to be thread-safe such as immutability, locking, and splitting state.
Reactive Design Patterns: a talk by Typesafe's Dr. Roland Kuhn (Zalando Technology)
We had the great pleasure of hosting a talk by Dr. Roland Kuhn: leader of Typesafe’s Akka project, and coauthor of the book Reactive Design Patterns and the Reactive Manifesto. For a standing-room-only crowd, Roland highlighted the importance of making reactive software: of considering responsiveness, maintainability, elasticity and scalability from the outset of development. He explored several architecture elements that are commonly found in reactive systems, such as the circuit breaker, various replication techniques, and flow control protocols. These patterns are language-agnostic and also independent of the abundant choice of reactive programming frameworks and libraries. Check out his slides!
The document discusses modern Java concurrency and provides examples of how to write concurrent code safely and take advantage of modern hardware. It introduces key concepts like the java.util.concurrent package, locks, conditions, concurrent data structures, executors and fork/join. Examples are provided for using constructs like CountDownLatch, LinkedBlockingQueue, thread pools and fork/join tasks. The overall message is that modern Java concurrency allows safer and faster concurrent programming compared to traditional approaches.
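The CountDownLatch usage mentioned above can be sketched as follows (worker count invented for illustration): the main thread blocks in `await()` until every worker has called `countDown()`.

```java
import java.util.concurrent.CountDownLatch;

// Sketch of rendezvous with CountDownLatch: await() returns only after
// the latch has been counted down once per worker.
public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch done = new CountDownLatch(workers);
        for (int i = 0; i < workers; i++) {
            new Thread(done::countDown).start();
        }
        done.await();                 // blocks until the count reaches zero
        System.out.println("all workers finished");
    }
}
```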
The document discusses parallel computing techniques using threads. It describes domain decomposition, where a problem is divided into independent tasks that can be executed concurrently by threads. Matrix multiplication is provided as an example, where each element of the resulting matrix is computed independently by a thread. Functional decomposition, where a problem is broken into distinct computational functions, is also introduced. Programming models for threads in Java, .NET and POSIX are overviewed.
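The domain-decomposition idea above can be sketched with a toy parallel matrix multiply (one thread per result row rather than per element, to keep the example small): each row of the result is independent, so the threads never write to the same location.

```java
// Toy domain decomposition: each row of c = a * b is computed by its own thread.
public class ParallelMatMul {
    static int[][] multiply(int[][] a, int[][] b) throws InterruptedException {
        int n = a.length;
        int[][] c = new int[n][n];
        Thread[] workers = new Thread[n];
        for (int i = 0; i < n; i++) {
            final int row = i;
            workers[i] = new Thread(() -> {
                for (int j = 0; j < n; j++)
                    for (int k = 0; k < n; k++)
                        c[row][j] += a[row][k] * b[k][j]; // only this thread touches row 'row'
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();   // wait for every row to finish
        return c;
    }

    public static void main(String[] args) throws InterruptedException {
        int[][] c = multiply(new int[][]{{1, 2}, {3, 4}}, new int[][]{{5, 6}, {7, 8}});
        System.out.println(c[0][0] + " " + c[0][1]); // prints 19 22
        System.out.println(c[1][0] + " " + c[1][1]); // prints 43 50
    }
}
```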
The document discusses a scalable non-blocking coding style for concurrent programming. It proposes using large arrays for parallel read and update access without locks or volatile variables. Finite state machines are replicated per array word, where successful compare-and-set operations transition states. Failed CAS causes retry but ensures global progress. Array resizing is handled by "marking" words to prevent late updates and copying in parallel using the state machine. The style aims to provide high performance concurrent data structures that are as fast as non-thread-safe implementations but with correctness guarantees. Example applications discussed include a lock-free bit vector and resizable hash table.
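The compare-and-set retry idiom described above looks roughly like this on a single AtomicInteger (an invented stand-in for the per-word state machines in the summary): a failed CAS means another thread won the race, so the loop rereads and tries again, and some thread always makes progress.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the CAS retry loop: lock-free increment on an AtomicInteger.
public class CasCounter {
    private final AtomicInteger value = new AtomicInteger();

    void increment() {
        while (true) {
            int current = value.get();
            if (value.compareAndSet(current, current + 1)) {
                return;              // our CAS won; the state transition is done
            }
            // CAS failed: another thread updated 'value' first; reread and retry.
        }
    }

    int get() { return value.get(); }

    public static void main(String[] args) throws InterruptedException {
        CasCounter c = new CasCounter();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> { for (int j = 0; j < 1000; j++) c.increment(); });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.get()); // prints 4000
    }
}
```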
The document discusses modern Java concurrency. It introduces the topic and explains why developers may want to utilize modern Java concurrency features. It discusses the java.util.concurrent library and how it provides building blocks for concurrent and thread-safe code like ReentrantLock, Condition, ConcurrentHashMap. It provides examples of using constructs like CountDownLatch, ThreadPoolExecutor, ForkJoin to write concurrent applications in a safer and more performant way compared to traditional locking approaches.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
An improved modulation technique suitable for a three level flying capacitor ...IJECEIAES
This research paper introduces an innovative modulation technique for controlling a 3-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process in contrast to conventional methods. The proposed
simplified modulation technique paves the way for more straightforward and
efficient control of multilevel inverters, enabling their widespread adoption and
integration into modern power electronic systems. Through the amalgamation of
sinusoidal pulse width modulation (SPWM) with a high-frequency square wave
pulse, this controlling technique attains energy equilibrium across the coupling
capacitor. The modulation scheme incorporates a simplified switching pattern
and a decreased count of voltage references, thereby simplifying the control
algorithm.
2. Why Java needs a well-formed memory model
• Java supports threads running on shared memory
• The Java memory model defines the semantics of multi-threaded Java programs
• Key concern: the memory model specifies which program behaviors are legal and provides safety and security properties
3/26/19 2
3. Why should we care about the Java Memory Model
• A programmer should know, at the
• Junior level
• How to use monitors, locks, and volatile properly
• The safety guarantees Java provides
• Intermediate level
• How to reason about the correctness of concurrent programs
• How to use concurrent data structures (e.g., ConcurrentHashMap)
• Expert level
• How to understand and optimize the utilities in java.util.concurrent
• AbstractExecutorService
• Atomic variables
4. Create a singleton to get a static instance
• Singleton: only a single instance is wanted in a program
• Database
• Logging
• Configuration

    1  public class Instance {
    2      private static Instance instance;
    3      public static Instance getInstance() {
    4          if (instance == null)
    5              instance = new Instance();
    6          return instance;
    7      }
    8  }
5. The simple singleton is thread-unsafe

    1  public class Instance {
    2      private static Instance instance;
    3      public static Instance getInstance() {
    4          if (instance == null)
    5              instance = new Instance();
    6          return instance;
    7      }
    8  }

Timeline: threads T0 and T1 both call getInstance. Each executes line 4 and sees instance == null, then each executes line 5 and constructs its own Instance, and each returns at line 6 — two instances are created at the same time!
6. Use the synchronized keyword
• The Java synchronized keyword can be used in different contexts
• Instance methods
• Code blocks
• Static methods
• Only one thread can execute inside a static synchronized method per class, irrespective of the number of instances the class has.
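As a hedged sketch of the three contexts above (SyncContexts and its method names are illustrative, not from the deck): a static synchronized method locks on the Class object, an instance method locks on this, and a block locks on an explicit monitor. The helper at the bottom drives the static method from several threads to show that no increment is lost.

```java
public class SyncContexts {
    private static int staticCount = 0;
    private int instanceCount = 0;
    private final Object lock = new Object();

    // Static synchronized method: locks on SyncContexts.class,
    // so only one thread per class can be inside it.
    public static synchronized void incStatic() { staticCount++; }

    // Synchronized instance method: locks on `this`.
    public synchronized void incInstance() { instanceCount++; }

    // Synchronized block: locks on an explicit monitor object.
    public void incViaBlock() {
        synchronized (lock) { instanceCount++; }
    }

    // Drives incStatic from several threads; because the class lock is
    // held for every increment, none can be lost.
    public static int runStatic(int threads, int perThread) {
        staticCount = 0;
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) incStatic();
            });
            ts[i].start();
        }
        try {
            for (Thread t : ts) t.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return staticCount;
    }

    public static void main(String[] args) {
        System.out.println(runStatic(4, 10000)); // prints 40000
    }
}
```

Without the synchronized keyword on incStatic, some increments could be lost under contention; with it, the count is exact.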
7. The synchronized singleton has low performance

    1  public class Instance {
    2      private static Instance instance;
    3      public synchronized static Instance getInstance() {
    4          if (instance == null)
    5              instance = new Instance();
    6          return instance;
    7      }
    8  }

Timeline: T0 and T1 both call getInstance, but T1 must wait; T0 executes lines 4, 5, 6 alone, and only then can T1 execute lines 4, 5, 6. Every call pays the synchronization cost, even after instance is initialized.
8. Use a double-checked lock to make it more efficient
• Motivation: in the early days, the cost of synchronization could be quite high
• Idea: avoid the costly synchronization for all invocations of the method except the first
• Solution:
• First check whether instance is null
• If instance is null, enter a critical section to create the object
• If instance is not null, return instance
9. Double-checked lock implementation

    public class Instance {
        private static Instance instance;
        public static Instance getInstance() {
            if (instance == null) {
                synchronized (Instance.class) {
                    if (instance == null) {
                        instance = new Instance();
                    }
                }
            }
            return instance;
        }
    }
10. How the double-checked lock goes wrong
• Brief answer: instruction reordering
• Suppose T0 initializes the singleton with the following three steps:
    1) mem = allocate();        // allocate memory for the singleton
    2) ctorSingleton(mem);      // invoke the constructor
    3) object.instance = mem;   // publish the instance
• What if step 2 is interchanged with step 3?
• Another thread T1 might see the instance before it is fully constructed
11. A thread returns an object that has not been constructed

     3  public static Instance getInstance() {
     4      if (instance == null) {
     5          synchronized (UnsafeInstance.class) {
     6              if (instance == null) {
     7                  instance = new Instance();
     8              }
     9          }
    10      }
    11      return instance;
    12  }

Timeline: T0 executes lines 4, 5, 6 and begins line 7, where the initialization is reordered as
    mem = allocate();
    object.instance = mem;
    ctorSingleton(mem);
Between the publication and the constructor call, T1 executes line 4, sees a non-null instance, and returns it at line 11 — T1 returns before the constructor completes!
12. Use volatile to avoid reordering
• The behavior of volatile differs significantly between programming languages
• C/C++
• The volatile keyword means: always read the value of the variable from memory
• Operations on volatile variables are not atomic
• Cannot be used as a portable synchronization mechanism
• Java
• Prevents reordering
• Derives a synchronization order on top of the Java memory model
13. Use volatile to avoid reordering
• Java's volatile was not consistent with developers' intuitions
• The original Java memory model allowed volatile writes to be reordered with non-volatile reads and writes
• Under the new Java memory model (since JVM 1.5), volatile can be used to fix the problems with double-checked locking
14. Java memory model history
• 1996: An Optimization Pattern for Efficiently Initializing and Accessing Thread-safe Objects, Douglas C. Schmidt et al.
• 1996: The Java Language Specification, chapter 17, James Gosling et al.
• 1999: Fixing the Java Memory Model, William Pugh
• 2004: JSR 133, the Java Memory Model and Thread Specification Revision
15. Review: sequential consistency
• Total order: memory actions must appear to execute one at a time in a single total order
• Program order: actions of a given thread must appear in the same order in which they appear in the program
16. Review: a sequential consistency violation

    Thread 1:        Thread 2:
    1  r2 = x        3  r1 = y
    2  y = 1         4  x = 2

Initial conditions: x = y = 0
Final result: r2 == 2 and r1 == 1?
Decision: Disallowed. It violates sequential consistency: r2 == 2 requires line 4 to execute before line 1, and r1 == 1 requires line 2 to execute before line 3. Combined with each thread's intra-thread program order (1 before 2, 3 before 4), these inter-thread dependencies form a circular dependency!
17. The Java memory model balances performance and safety
• Sequential consistency
• Easy to understand
• Restricts the use of many compiler and hardware transformations
• Relaxed memory models
• Allow more optimizations
• Make it hard to reason about correctness
18. The Java memory model hides the underlying hardware memory model

    Java Program
    High-level Language Memory Model
    Java Compiler
    Java Runtime
    Hardware Memory Model (TSO, Power's weak model, ...)
19. Java defines a data-race-free model
• A data race occurs when two threads access the same memory location, at least one of the accesses is a write, and there is no intervening synchronization
• A data-race-free (correctly synchronized) Java program is guaranteed sequential consistency
20. Another definition of data race, from the Java memory model's perspective
• Two accesses x and y form a data race in an execution of a program if they are from different threads, they conflict, and they are not ordered by happens-before
21. Happens-before memory model
• A simpler model than the full Java memory model
• Happens-before order
• The transitive closure of program order and the synchronizes-with order
• Happens-before consistency
• Determines the values a non-volatile read can see
• Synchronization order consistency
• Determines the values a volatile read can see
• Solves some, but not all, of Java's mysterious problems
22. Program order definition
• The program order of thread T is a total order that reflects the order in which T's actions would be performed according to the intra-thread semantics of T
• If x and y are actions of the same thread and x comes before y in program order, then hb(x, y) (i.e., x happens-before y)
23. Eliminate ambiguity in the program order definition
• Given a program in Java:

    1  y = 6
    2  x = 5

• Program order does not mean that x = 5 must execute after y = 6 from a wall-clock perspective. It only means that the sequence of actions executed must appear consistent with that order
24. What should be consistent in program order
• Happens-before consistency
• A read r of a variable v is allowed to observe a write w to v if
• r does not happen-before w (i.e., it is not the case that hb(r, w)) – a read cannot see a write that happens-after it, and
• there is no intervening write w' to v (i.e., no write w' to v such that hb(w, w') and hb(w', r)) – the write w is not overwritten along a happens-before path
• Given a Java program:

    1  y = 6
    2  x = 5
    3  z = y

• hb(1, 2) and hb(2, 3) imply hb(1, 3), so z sees the value 6 written to y
25. Synchronization order
• A synchronization order is a total order over all of the synchronization actions of an execution
• A write to a volatile variable v synchronizes-with all subsequent reads of v by any thread
• An unlock action on monitor m synchronizes-with all subsequent lock actions on m performed by any thread
• ...
• If an action x synchronizes-with a following action y, then hb(x, y)
26. What should be consistent in synchronization order
• Synchronization order consistency
• Synchronization order is consistent with program order
• Each read r of a volatile variable v sees the last write to v that comes before it in the synchronization order
27. Happens-before memory model example

    Thread 1:          Thread 2:
    1  x = 1           3  if (ready)
    2  ready = true    4      r1 = x

    program order: 1 → 2 and 3 → 4; synchronizes-with: 2 → 3

Initial conditions: x = 0, ready = false, ready is volatile
Final result: r1 == 1?
Decision: Allowed. The program is correctly synchronized.
Proof:
Program order: hb(L1, L2) and hb(L3, L4)
Synchronization order: hb(L2, L3)
Transitivity: hb(L1, L4)
Recall the data-race definition: two accesses x and y form a data race in an execution of a program if they are from different threads, they conflict, and they are not ordered by happens-before.
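The proof above can be exercised with a small runnable sketch (SafePublication and publishAndRead are illustrative names, not from the deck). Once the reader's volatile read of ready observes true, hb(L1, L4) guarantees it also observes x == 1:

```java
public class SafePublication {
    static int x;
    static volatile boolean ready;

    // Runs the slide's two threads once and returns the r1 the reader saw.
    static int publishAndRead() {
        x = 0;
        ready = false;
        final int[] r1 = new int[1];
        Thread writer = new Thread(() -> {
            x = 1;          // L1: ordinary write
            ready = true;   // L2: volatile write, synchronizes-with L3
        });
        Thread reader = new Thread(() -> {
            while (!ready) { }   // L3: volatile read, spin until publication
            r1[0] = x;           // L4: hb(L1, L4) guarantees this sees 1
        });
        writer.start();
        reader.start();
        try {
            writer.join();
            reader.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return r1[0];
    }

    public static void main(String[] args) {
        System.out.println("r1 = " + publishAndRead()); // prints r1 = 1
    }
}
```

If ready were not volatile, the spin loop might never see the update and the read of x would race with the write; with volatile, the reader always returns 1.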
28. Correct double-checked lock with volatile

    public class Instance {
        private volatile static Instance instance;
        public static Instance getInstance() {
            if (instance == null) {
                synchronized (Instance.class) {
                    if (instance == null) {
                        instance = new Instance();
                    }
                }
            }
            return instance;
        }
    }

With instance volatile, the initialization steps
    1. mem = allocate();
    2. ctorSingleton(mem);
    3. object.instance = mem;
cannot be reordered: hb(2, 3) ensures that step 2's writes to mem are visible to any thread that sees the publication at step 3.
29. Happens-before doesn't solve all problems

    Thread 1:           Thread 2:
    1  r1 = x           4  r2 = y
    2  if (r1 != 0)     5  if (r2 != 0)
    3      y = 42       6      x = 42

Initial conditions: x = y = 0
Final result: r1 == r2 == 42?
Decision: Disallowed, because the values appear out of thin air.
In a future aggressive system, Thread 1 could speculatively write the value 42 to y. How do we propose a methodology that disallows such behaviors?
30. Happens-before doesn't solve all problems

    Thread 1:           Thread 2:
    1  r1 = a           5  r3 = b
    2  r2 = a           6  a = r3
    3  if (r1 == r2)
    4      b = 2

Initial conditions: a = 0, b = 1
Final result: r1 == r2 == r3 == 2?
Decision: Allowed. A compiler may determine that r1 and r2 always have the same value and eliminate the test if (r1 == r2) (L3). Then b = 2 (L4) can be moved to an earlier position (L1):

    Thread 1:           Thread 2:
    1  b = 2            5  r3 = b
    2  r1 = a           6  a = r3
    3  r2 = r1
    4  if (true)
31. What's the difference between the two programs?
• One difference between the acceptable and the unacceptable results is that in the latter program, the write we perform (i.e., b = 2) would also have occurred if the execution had continued in a sequentially consistent way.
• In the former program, the value 42 is never written in any sequentially consistent execution.
32. Well-behaved execution
• We distinguish the two programs by considering whether those writes could occur in a sequentially consistent execution
• Well-behaved execution
• An execution in which every read returns the value of a write that is ordered before it by happens-before
33. Disallowed example: data dependency

    Thread 1:       Thread 2:
    1  r1 = x       5  r3 = y
    2  a[r1] = 0    6  x = r3
    3  r2 = a[0]
    4  y = r2

Initial conditions: x = y = 0; a[0] = 1, a[1] = 2
Final result: r1 == r2 == r3 == 1?
Decision: Disallowed, because the values appear out of thin air.
Proof: We have hb(L1, L2) and hb(L2, L3). For r2 == 1, the read a[0] (L3) must see the initial value 1, so the write a[r1] = 0 (L2) must target a[1], i.e. r1 == 1 at L2. But because hb(L1, L2), r1 at L2 must see the write at r1 = x (L1), and x can only become 1 through the cycle y = r2, r3 = y, x = r3, which itself requires r2 == 1. The value 1 therefore comes out of thin air; in any well-behaved execution r1 == 0.
34. Disallowed example: control dependency

    Thread 1:   Thread 2:          Thread 3:    Thread 4:
    1  z = 1    2  r1 = z          5  r2 = x    7  r3 = y
                3  if (r1 == 0)    6  y = r2    8  x = r3
                4      x = 1

Initial conditions: x = y = z = 0
Final result: r1 == r2 == r3 == 1?
Decision: Disallowed, because the values appear out of thin air.
Proof: Because hb(L5, L6), getting r2 == 1 (so that y = 1) requires x = 1 (L4) to execute, which requires the test if (r1 == 0) (L3) to be true. But the final result requires r1 == 1, and since hb(L2, L3), L4 cannot execute.
35. Disallowed example: control dependency

    Thread 1:          Thread 2:    Thread 3:
    1  r1 = x          4  r2 = x    6  r3 = y
    2  if (r1 == 0)    5  y = r2    7  x = r3
    3      x = 1

Initial conditions: x = y = 0
Final result: r1 == r2 == r3 == 1?
Decision: Disallowed, because the values appear out of thin air.
Proof: The same reasoning as in the previous example.
36. Causality
• Actions that are committed earlier may cause actions that are committed later to occur
• The behavior of incorrectly synchronized programs is bounded by causality
• The causality requirement is strong enough to respect the safety and security properties of Java, and weak enough to allow standard compiler and hardware optimizations
37. Justify a correct execution
• Build up causality constraints to justify executions
• Ensure that the occurrence of a committed action and its value do not depend on an uncommitted data race
• Justification steps
• Start with the empty set as C0
• Perform a sequence of steps in which we take actions from the set of actions A and add them to a set of committed actions Ci to get a new set of committed actions Ci+1
• To demonstrate that this is reasonable, for each Ci we need to exhibit an execution E containing Ci that meets certain conditions
38. Justification example: reordering

    Thread 1:    Thread 2:
    1  r1 = x    3  r2 = y
    2  y = 1     4  x = r2

Initial conditions: x = y = 0
Final result: r1 == r2 == 1?
Decision: Allowed, because of a compiler transformation: y = 1 (L2) stores a constant that does not depend on r1 = x (L1), so the two can be reordered:

    Thread 1:    Thread 2:
    1  y = 1     3  r2 = y
    2  r1 = x    4  x = r2

Commit sets:
    C1: y = 1,  r2 = y (0),  x = r2 (0),  r1 = x (0)
    C2: y = 1,  r2 = y (1),  x = r2 (1),  r1 = x (0)
    C3: y = 1,  r2 = y (1),  x = r2 (1),  r1 = x (0)
    C4: y = 1,  r2 = y (1),  x = r2 (1),  r1 = x (1)
39. Justification example: redundant elimination

    Thread 1:           Thread 2:
    1  r1 = a           4  r2 = b
    2  if (r1 == 1)     5  if (r2 == 1)
    3      b = 1        6      a = 1
                        7  if (r2 == 0)
                        8      a = 1

Initial conditions: a = b = 0
Final result: r1 == r2 == 1?
Decision: Allowed. A compiler could determine that Thread 2 always writes 1 to a and hoist the write to the beginning of Thread 2:

    Thread 1:           Thread 2:
    1  r1 = a           4  a = 1
    2  if (r1 == 1)     5  r2 = b
    3      b = 1

Commit sets:
    C1: a = 1,  r1 = a (0),  b = 1 (0),  r2 = b (0)
    C2: a = 1,  r1 = a (1),  b = 1 (0),  r2 = b (0)
    C3: a = 1,  r1 = a (1),  b = 1 (1),  r2 = b (0)
    C4: a = 1,  r1 = a (1),  b = 1 (1),  r2 = b (1)
40. Justification example: inter-thread analysis

    Thread 1:                 Thread 2:
    1  r1 = x                 4  r3 = y
    2  r2 = 1 + r1*r1 - r1    5  x = r3
    3  y = r2

Initial conditions: x = y = 0
Final result: r1 == r2 == 1?
Decision: Allowed. Inter-thread analysis could determine that x and y are always either 0 or 1, and thus that r2 is always 1. Once this determination is made, the write of 1 to y can be moved earlier in Thread 1:

    Thread 1:    Thread 2:
    1  r2 = 1    4  r3 = y
    2  y = r2    5  x = r3
    3  r1 = x

Commit sets:
    C1: r2 = 1,  y = r2 (0),  r3 = y (0),  x = r3 (0),  r1 = x (0)
    C2: r2 = 1,  y = r2 (1),  r3 = y (0),  x = r3 (0),  r1 = x (0)
    C3: r2 = 1,  y = r2 (1),  r3 = y (1),  x = r3 (0),  r1 = x (0)
    C4: r2 = 1,  y = r2 (1),  r3 = y (1),  x = r3 (1),  r1 = x (0)
    C5: r2 = 1,  y = r2 (1),  r3 = y (1),  x = r3 (1),  r1 = x (1)
41. Comparison between allowed and disallowed examples

Disallowed (three threads):

    Thread 1:          Thread 2:    Thread 3:
    1  r1 = x          4  r2 = x    6  r3 = y
    2  if (r1 == 0)    5  y = r2    7  x = r3
    3      x = 1

Allowed (Threads 1 and 2 merged):

    Thread 1:          Thread 2:
    1  r1 = x          6  r3 = y
    2  if (r1 == 0)    7  x = r3
    3      x = 1
    4  r2 = x
    5  y = r2

Initial conditions: x = y = 0
Final result: r1 == r2 == r3 == 1?
Decision: Allowed for the merged version. Inter-thread analysis could determine that x is always 0 or 1, so we can replace r2 = x by r2 = 1 and y = r2 by y = 1. After moving y = 1 and r2 = 1 to an earlier position, we get r1 == r2 == r3:

    Thread 1:          Thread 2:
    1  y = 1           6  r3 = y
    2  r2 = 1          7  x = r3
    3  r1 = x
    4  if (r1 == 0)
    5      x = 1
42. Comparison between allowed and disallowed examples

Disallowed:

    Thread 1:          Thread 2:    Thread 3:
    1  r1 = x          1  r2 = x    1  r3 = y
    2  if (r1 == 0)    2  y = r2    2  x = r3
    3      x = 1

Allowed:

    Thread 1:           Thread 2:
    1  r1 = a           1  r3 = b
    2  r2 = a           2  a = r3
    3  if (r1 == r2)
    4      b = 2

Initial conditions: a = b = 0
Final result: r1 == r2 == r3 == 2?
Decision: Allowed. Although there are some SC executions in which r1 != r2 (L3), we can hoist b = 2 (L4) to an earlier position, and there is an SC execution such that r1 == r2.
For the disallowed case above, there is no SC execution such that r1 == 0 (L2) is true and r1 == r2 == r3 == 1. That is, if we hoist x = 1 to an earlier position, L2 must be false.
43. Practical issues: final fields
• "For space reasons, we omit discussion of two important issues in the Java memory model: the treatment of final fields, and finalization / garbage collection."
• Java's final fields also allow programmers to implement thread-safe immutable objects without synchronization
44. Rule of thumb
• Set the final fields for an object in that object's constructor, and do not write a reference to the object being constructed in a place where another thread can see it before the object's constructor is finished
• What happens in the constructor
• If a read occurs after the field is set in the constructor, it sees the value the final field is assigned; otherwise it sees the default value
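Following this rule of thumb, a minimal immutable class needs no synchronization at all. A sketch (the Point class and its members are illustrative, not from the deck):

```java
// A thread-safe immutable object built only with final fields.
public final class Point {
    private final int x;   // set exactly once, in the constructor
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
        // Rule of thumb: `this` must not escape to another thread here.
    }

    public int getX() { return x; }
    public int getY() { return y; }

    // Any "mutation" returns a new object instead of changing this one.
    public Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}
```

Because x and y are final and this does not escape the constructor, any thread that later obtains a reference to a Point (even via a data race) is guaranteed to see the fully initialized field values.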
45. Can we change a final field?
• Reflection introduces problems
• The specification allows aggressive optimization of final fields: within a thread, it is permissible to reorder reads of a final field with those modifications of a final field that do not take place in the constructor
46. Practical issues: final fields

• new A().f() could return -1, 0, or 1

    class A {
        final int x;
        A() { x = 1; }
        int f() { return d(this, this); }
        int d(A a1, A a2) {
            int i = a1.x;
            g(a1);
            int j = a2.x;   // this read of the final field may be reordered with g(a1)!
            return j - i;
        }
        static void g(A a) {
            // uses reflection to change a.x to 2
        }
    }
47. Practical issues: final fields

• new A().f() could return -1, 0, or 1

    class A {
        final int x;
        A() { x = 1; }
        int f() { return d(this, this); }
        int d(A a1, A a2) {
            int i = a1.x;
            g(a1);
            int j = a2.x;
            return j - i;
        }
        static void g(A a) {
            // uses reflection to change a.x to 2
        }
    }

Executed as written (i == 1, then x becomes 2, then j == 2): returns 1
48. Practical issues: final fields

• new A().f() could return -1, 0, or 1

    class A {
        final int x;
        A() { x = 1; }
        int f() { return d(this, this); }
        int d(A a1, A a2) {
            g(a1);
            int i = a1.x;
            int j = a2.x;
            return j - i;
        }
        static void g(A a) {
            // uses reflection to change a.x to 2
        }
    }

With both reads reordered after g(a1) (i == j == 2): returns 0
49. Practical issues: final fields

• new A().f() could return -1, 0, or 1

    class A {
        final int x;
        A() { x = 1; }
        int f() { return d(this, this); }
        int d(A a1, A a2) {
            int j = a2.x;
            g(a1);
            int i = a1.x;
            return j - i;
        }
        static void g(A a) {
            // uses reflection to change a.x to 2
        }
    }

With the read of j reordered before g(a1) (j == 1, i == 2): returns -1
50. Practical issue: an efficient singleton
• The initialization-on-demand holder idiom is a lazily loaded singleton. In all versions of Java, the idiom enables safe, highly concurrent lazy initialization with good performance.

    public class SafeInstance {
        private SafeInstance() {}
        private static class LazyHolder {
            static final SafeInstance INSTANCE = new SafeInstance();
        }
        public static SafeInstance getInstance() {
            return LazyHolder.INSTANCE;
        }
    }
51. Why is the initialization-on-demand holder safe?
• When is the class initialized?
• A class or interface type T will be initialized immediately before the first occurrence of any one of the following:
• A static field declared by T is used and the field is not a constant variable
• (A variable of primitive type or type String that is final and initialized with a compile-time constant expression is called a constant variable.)
• ...
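The lazy-loading behavior described above can be observed directly: the nested holder class is not initialized when the outer class is, only when its static field is first used. A sketch (LazyDemo and its flag are illustrative names, not from the deck):

```java
public class LazyDemo {
    // Flipped by Holder's static initializer, so we can observe when it runs.
    static boolean holderInitialized = false;

    private LazyDemo() {}

    private static class Holder {
        static final LazyDemo INSTANCE = new LazyDemo();
        static { holderInitialized = true; }   // runs only when Holder is first used
    }

    public static LazyDemo getInstance() {
        // First use of Holder.INSTANCE triggers Holder's class initialization,
        // which the JVM performs under the class's initialization lock.
        return Holder.INSTANCE;
    }
}
```

Initializing LazyDemo itself does not touch Holder; only the first getInstance() call does, and the JVM's per-class initialization lock guarantees the constructor runs exactly once even under contention.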
52. Why is the initialization-on-demand holder safe?
• Why is initialization safe?
• For each class or interface C, there is a unique initialization lock LC. The mapping from C to LC is left to the discretion of the Java Virtual Machine implementation.
• We can also implement a singleton with an enum in Java.
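The enum-based singleton mentioned above can be sketched as follows (EnumSingleton and its members are illustrative names). The single instance is created during enum class initialization, under the same per-class initialization lock, so no explicit synchronization is needed for its creation:

```java
public enum EnumSingleton {
    INSTANCE;

    private int counter = 0;   // example per-singleton state

    // Methods on the singleton still need their own synchronization
    // if they mutate shared state; the enum only guarantees one instance.
    public synchronized int next() {
        return ++counter;
    }
}
```

Usage is simply EnumSingleton.INSTANCE. Beyond initialization safety, the enum form also resists the serialization and reflection attacks that can duplicate ordinary singletons.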
53. Conclusion
• Following the happens-before rules lets us write data-race-free programs that are correctly synchronized
• The Java memory model provides a clear definition of well-behaved executions, preventing values from coming out of thin air in the presence of data races
• Double-checked locking is thread-safe on JVMs later than v1.5
54. Reference books
• Goetz, Brian, et al. Java Concurrency in Practice. Pearson Education, 2006.
• Gosling, James, et al. The Java Language Specification. Pearson Education, 2014.
• Lea, Doug. "The JSR-133 Cookbook for Compiler Writers." 2008.