In computing, the producer–consumer problem (also known as the bounded-buffer problem) is a classic example of a multi-process synchronization problem. The problem is described as two processes, the producer and the consumer, who share a common, fixed-size buffer used as a queue. The producer's job is to generate data, put it into the buffer, and start again. At the same time, the consumer is consuming the data (i.e., removing it from the buffer), one piece at a time. The problem is to make sure that the producer won't try to add data into the buffer if it's full and that the consumer won't try to remove data from an empty buffer.
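The classic semaphore-based solution to this problem can be sketched in Python's `threading` module; the `BoundedBuffer` class and thread setup below are illustrative, with `empty` counting free slots and `full` counting filled slots:

```python
import threading
from collections import deque

# A bounded buffer guarded by two counting semaphores and a mutex:
# 'empty' counts free slots, 'full' counts filled slots.
class BoundedBuffer:
    def __init__(self, capacity):
        self.buffer = deque()
        self.mutex = threading.Lock()
        self.empty = threading.Semaphore(capacity)  # free slots
        self.full = threading.Semaphore(0)          # filled slots

    def put(self, item):
        self.empty.acquire()        # producer blocks if the buffer is full
        with self.mutex:
            self.buffer.append(item)
        self.full.release()         # signal one more filled slot

    def get(self):
        self.full.acquire()         # consumer blocks if the buffer is empty
        with self.mutex:
            item = self.buffer.popleft()
        self.empty.release()        # signal one more free slot
        return item

buf = BoundedBuffer(capacity=3)
results = []

def producer():
    for i in range(10):
        buf.put(i)

def consumer():
    for _ in range(10):
        results.append(buf.get())

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # items arrive in production order: [0, 1, ..., 9]
```

With a single producer and consumer the FIFO buffer preserves order; the semaphores enforce exactly the two rules stated above (no put into a full buffer, no get from an empty one).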
The document discusses different types of early computer processing methods. Serial processing required users to restart the entire setup sequence if an error occurred, wasting significant time. To improve efficiency, simple batch processing was developed where multiple jobs were submitted in batches to maximize processor utilization and reduce wasted time from scheduling and setup between individual jobs. This represented an early form of multiprogramming to better use the expensive computer resources.
This document discusses multiprocessor computer systems. It begins by defining a multiprocessor system as having two or more CPUs connected to a shared memory and I/O devices. Multiprocessors are classified as MIMD systems. They provide benefits like improved performance over single-CPU systems for tasks like multi-user/multi-tasking applications. Multiprocessors are further classified as tightly-coupled or loosely-coupled based on shared vs distributed memory. Common interconnection structures discussed include bus, multiport memory, crossbar switch, and hypercube networks.
DHCP (Dynamic Host Configuration Protocol) is a protocol that automatically provides IP hosts with IP addresses and other configuration information from a DHCP server. It uses UDP and works by having clients broadcast discover messages to locate servers, which respond with offer messages containing IP addresses and configuration options. Servers then acknowledge address assignments, while also allowing reservations of specific addresses and exclusions of certain ranges. Windows Server backs up the DHCP database and configuration every 60 minutes for restoration using the netsh command.
DHCP dynamically assigns IP addresses and other network configuration parameters to clients. It simplifies network installation and maintenance. A DHCP server uses a pool of IP addresses to assign to clients through UDP packets. A router can be configured as a DHCP server through commands that define an address pool, default gateway, and excluded addresses.
Multiprocessors (performance and synchronization issues), by Gaurav Dalvi
This document discusses performance and synchronization issues in multiprocessor systems. It describes shared memory architectures like UMA, NUMA and distributed shared memory. It discusses factors that affect cache performance like CPU count, cache size and block size. It also discusses synchronization mechanisms like locks, flags and barriers that are used to synchronize access to shared resources. Different hardware primitives for synchronization are described, including atomic exchange, test-and-set, and load-linked/store-conditional instructions.
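The test-and-set primitive mentioned above can be sketched in Python. Python has no user-level atomic exchange instruction, so the atomicity of `test_and_set()` is emulated here with an internal lock; on real hardware this would be a single atomic instruction (e.g. x86 `XCHG`). The class and counter are illustrative:

```python
import threading

# Sketch of a spinlock built on test-and-set: acquire() spins until
# the atomic exchange returns the old value False.
class SpinLock:
    def __init__(self):
        self.flag = False
        self._atomic = threading.Lock()  # stands in for the atomic instruction

    def test_and_set(self):
        with self._atomic:
            old = self.flag
            self.flag = True
            return old

    def acquire(self):
        while self.test_and_set():   # spin while the old value was True
            pass

    def release(self):
        self.flag = False

counter = 0
lock = SpinLock()

def worker():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1                 # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 4000: the spinlock serialized all increments
```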
Contiguous Memory Allocator in the Linux Kernel, by Kernel TLV
Agenda:
Contiguous Memory Allocator: how to allocate large contiguous memory regions for large-scale DMA in the kernel.
Speaker:
Mark Veltzer, CTO of Hinbit and a senior instructor at John Bryce. Mark is also a member of the Free Software Foundation and contributes to many free software projects.
RPC allows a program to call a subroutine that resides on a remote machine. When a call is made, the calling process is suspended and execution takes place on the remote machine. The results are then returned. This makes the remote call appear local to the programmer. RPC uses message passing to transmit information between machines and allows communication between processes on different machines or the same machine. It provides a simple interface like local procedure calls but involves more overhead due to network communication.
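The RPC round trip described above can be demonstrated with Python's standard-library XML-RPC: the client calls `add()` as if it were a local procedure, while the library marshals the arguments into a request message and returns the remote result. The `add` function and localhost setup are illustrative:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# A minimal RPC server exporting one procedure; port 0 lets the OS
# pick a free ephemeral port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")

# Run the server loop in a background thread.
threading.Thread(target=server.serve_forever, daemon=True).start()

host, port = server.server_address
client = ServerProxy(f"http://{host}:{port}")
result = client.add(2, 3)   # looks local, executes on the "remote" server
server.shutdown()
print(result)  # 5
```

The call suspends the client until the reply arrives, mirroring the suspend/resume behaviour of an RPC, and the extra marshalling and network hops are exactly the overhead the summary mentions.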
OS Process Synchronization, Semaphores and Monitors, by sgpraju
The document summarizes key concepts in process synchronization and concurrency control, including:
1) Process synchronization techniques like semaphores, monitors, and atomic transactions that ensure orderly access to shared resources. Semaphores use wait() and signal() operations, while monitors provide implicit mutual exclusion and use condition variables for waiting.
2) Concurrency control algorithms like locking and two-phase locking that ensure serializability of concurrent transactions accessing a database. Locking associates locks with data items to control concurrent access.
3) Challenges in concurrency control like deadlocks, priority inversion, and starvation that synchronization mechanisms aim to prevent. Log-based recovery with write-ahead logging and checkpoints is used to ensure atomicity of transactions in the presence of failures.
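The two-phase locking discipline from point 2 can be sketched as follows. The `Transaction` class and lock table are illustrative, not a real database engine; each transaction only acquires locks (growing phase) and releases them all at commit (shrinking phase), and a fixed lock order is used to avoid deadlock:

```python
import threading

lock_table = {}              # data item -> threading.Lock
table_guard = threading.Lock()

class Transaction:
    def __init__(self):
        self.held = []

    def lock(self, item):
        with table_guard:
            lk = lock_table.setdefault(item, threading.Lock())
        lk.acquire()                   # growing phase: acquire only
        self.held.append(lk)

    def commit(self):
        for lk in reversed(self.held): # shrinking phase: release everything
            lk.release()
        self.held.clear()

balances = {"A": 100, "B": 0}

def transfer(amount):
    t = Transaction()
    t.lock("A"); t.lock("B")           # fixed lock order prevents deadlock
    balances["A"] -= amount
    balances["B"] += amount
    t.commit()

threads = [threading.Thread(target=transfer, args=(10,)) for _ in range(5)]
for th in threads: th.start()
for th in threads: th.join()
print(balances)  # {'A': 50, 'B': 50}: transfers serialized, total conserved
```

Because no transaction acquires a lock after releasing one, any interleaving of these transfers is equivalent to some serial order.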
The document discusses various algorithms for achieving distributed mutual exclusion and process synchronization in distributed systems. It covers centralized, token ring, Ricart-Agrawala, Lamport, and decentralized algorithms. It also discusses election algorithms for selecting a coordinator process, including the Bully algorithm. The key techniques discussed are using logical clocks, message passing, and quorums to achieve mutual exclusion without a single point of failure.
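A toy version of the Bully election can illustrate the idea: when a process detects that the coordinator has failed, it polls every process with a higher ID, and if none responds it becomes coordinator. The process IDs and the `alive` set simulating failures are assumptions for the example:

```python
def bully_election(initiator, processes, alive):
    # Processes with a higher ID than the initiator that still respond.
    higher = [p for p in processes if p > initiator and p in alive]
    if not higher:
        return initiator      # nobody outranks us: we become coordinator
    # Any responding higher process takes over and runs its own election.
    return max(bully_election(p, processes, alive) for p in higher)

procs = [1, 2, 3, 4, 5]
alive = {1, 2, 3, 4}          # process 5, the old coordinator, has crashed
print(bully_election(1, procs, alive))  # 4: the highest alive ID wins
```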
Delay refers to the time taken to transfer a message from sender to receiver. It is calculated based on processing delay, queuing delay, transmission delay, and propagation delay. Packet loss occurs when a router's queue is full and it must drop packets. Bandwidth is the maximum rate of data transfer, while throughput is the actual rate of successful data transfer from one point to another.
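The four delay components above can be combined in a short worked example. Transmission delay is packet size over link rate (L/R) and propagation delay is distance over signal speed (d/s); the processing and queuing figures here are assumed for illustration:

```python
# Nodal delay: total = processing + queuing + transmission + propagation.
L = 12_000          # packet size in bits (1500 bytes)
R = 10e6            # link rate: 10 Mbps
d = 2_000_000       # link length: 2000 km, in metres
s = 2e8             # propagation speed in fibre: ~2e8 m/s

d_proc = 0.000_05   # processing delay: 50 us (assumed)
d_queue = 0.001     # queuing delay: 1 ms (assumed)
d_trans = L / R     # 12000 / 10e6 = 1.2 ms
d_prop = d / s      # 2e6 / 2e8  = 10 ms

total = d_proc + d_queue + d_trans + d_prop
print(round(total * 1000, 2), "ms")  # 12.25 ms
```

Note that propagation dominates on this long link; on a short LAN link the transmission term usually dominates instead.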
Internet Message Access Protocol (IMAP) is a communications protocol for email retrieval and storage developed by Mark Crispin in 1986 at Stanford University as an alternative to POP.
IMAP uses port 143, and IMAP over SSL (IMAPS) uses port 993. IMAP, unlike POP, specifically allows multiple clients simultaneously connected to the same mailbox, and through flags stored on the server, different clients accessing the same mailbox at the same or different times can detect state changes made by other clients.
The document provides an introduction to distributed systems, defining them as a collection of independent computers that communicate over a network to act as a single coherent system. It discusses the motivation for and characteristics of distributed systems, including concurrency, lack of a global clock, and independence of failures. Architectural categories of distributed systems include tightly coupled and loosely coupled, with examples given of different types of distributed systems such as database management systems, ATM networks, and the internet.
This document discusses different approaches to memory management in operating systems. It begins by describing monoprogramming without swapping or paging, where one program uses all available memory at a time. It then describes multiprogramming using fixed memory partitions, either with separate queues for each partition or a single queue. The challenges of relocation and protection when programs are loaded at different addresses are also covered. Finally, it introduces the concepts of swapping and virtual memory for handling situations where not all active processes fit in main memory.
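The fixed-partition scheme with a single input queue can be sketched as follows; the partition sizes and job names are illustrative, and placement here uses best fit (the smallest free partition that can hold the job):

```python
partitions = [{"size": 100, "job": None},
              {"size": 200, "job": None},
              {"size": 400, "job": None}]

def load(job, size):
    # Best fit: the smallest free partition large enough for the job.
    free = [p for p in partitions if p["job"] is None and p["size"] >= size]
    if not free:
        return False        # no partition fits: the job waits in the queue
    min(free, key=lambda p: p["size"])["job"] = job
    return True

loaded = [load("A", 90), load("B", 150), load("C", 500)]
print(loaded)                            # [True, True, False]
print([p["job"] for p in partitions])    # ['A', 'B', None]
```

Job C illustrates the scheme's weakness: a job larger than every partition can never run, and internal fragmentation (job A wastes 10 units of its partition) is unavoidable, which motivates the swapping and virtual memory mechanisms introduced later.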
Synchronization in Distributed Computing, by S. Vijaylakshmi
Synchronization in distributed systems is achieved via clocks. Physical clocks are used to adjust the time of nodes; each node in the system can share its local time with the other nodes. The time is set based on UTC (Coordinated Universal Time).
A distributed operating system allows applications to run on multiple interconnected computers. It makes the distributed computers appear as a single centralized system to users. There are two main types: network operating systems, which allow file and printer sharing on a local network, and distributed operating systems, in which users are unaware of the underlying machines. Effective communication between the distributed systems requires addressing issues like naming, routing, connections, and dealing with contention for shared resources. While distributed systems provide benefits like improved performance and reliability, they also face challenges such as security, bandwidth limitations, and reduced performance due to network delays.
The document describes how input/output (I/O) devices communicate with the processor and memory. I/O devices are connected to the processor and memory via a shared bus. Each device has a unique address and uses address, data, and control lines on the bus. Interrupts allow I/O devices to signal the processor when they need attention, reducing wasted processor time. Multiple interrupt lines allow different devices to interrupt independently and ensure the correct interrupt service routine is executed.
This document discusses thread scheduling in operating systems. It defines threads as the basic unit of CPU utilization and describes different types of threads like user-level and kernel-level threads. It explains that thread scheduling is needed to exploit parallelism in multiprocessor systems. Common approaches to thread scheduling include load sharing, dedicated processor assignment, and dynamic scheduling. It provides examples of thread scheduling in Solaris, Linux, and Windows XP operating systems.
Distributed shared memory (DSM) is a memory architecture where physically separate memories can be addressed as a single logical address space. In a DSM system, data moves between nodes' main and secondary memories when a process accesses shared data. Each node has a memory mapping manager that maps the shared virtual memory to local physical memory. DSM provides advantages like shielding programmers from message passing, lower cost than multiprocessors, and large virtual address spaces, but disadvantages include potential performance penalties from remote data access and lack of programmer control over messaging.
This document contains two sample question papers for an Operating Systems exam for a 4th semester BTech course in IT/CSE. Each paper has three sections - Section A contains 10 short answer questions worth 2 marks each, Section B contains 4 long answer questions worth 5 marks each, and Section C contains 2 long answer questions worth 10 marks each. The questions cover topics like virtual memory, processes, threads, CPU scheduling algorithms, deadlocks, memory management techniques like paging, segmentation, swapping etc.
Clock Synchronization in Distributed Systems, by Sunita Sahu
This document discusses several techniques for clock synchronization in distributed systems:
1. Time stamping events and messages with logical clocks to determine partial ordering without a global clock. Logical clocks assign monotonically increasing sequence numbers.
2. Clock synchronization algorithms like NTP that regularly adjust system clocks across the network to synchronize with a time server. NTP uses averaging to account for network delays.
3. Lamport's logical clocks algorithm that defines "happened before" relations and increments clocks between events to synchronize logical clocks across processes.
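Lamport's rules in point 3 can be sketched in a few lines: each process increments its clock on a local event, stamps outgoing messages, and on receive sets its clock to max(local, received) + 1. The `LamportClock` class is illustrative:

```python
class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):               # local event
        self.time += 1
        return self.time

    def send(self):               # timestamp attached to an outgoing message
        return self.tick()

    def receive(self, msg_time):  # merge with the sender's clock
        self.time = max(self.time, msg_time) + 1
        return self.time

p = LamportClock()
q = LamportClock()
p.tick()                          # p: local event a, time 1
ts = p.send()                     # p: send message m, time 2
q.tick(); q.tick(); q.tick()      # q is ahead locally, time 3
t_recv = q.receive(ts)            # q: max(3, 2) + 1 = 4
print(t_recv)  # 4: the receive is ordered after the send
```

This gives every causally related pair of events a consistent "happened before" order even though the two processes never share a physical clock.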
The Network File System (NFS) is the most widely used network-based file system. NFS’s initial simple design and Sun Microsystems’ willingness to publicize the protocol and code samples to the community contributed to making NFS the most successful remote access file system. NFS implementations are available for numerous Unix systems, several Windows-based systems, and others.
RTLinux is a real-time operating system that allows real-time applications to run on top of Linux. It modifies the Linux kernel to add a virtual machine layer with a separate task scheduler that prioritizes real-time tasks over standard Linux processes. This enables RTLinux to support hard real-time deadlines. Programming in RTLinux involves creating modules that can be loaded and unloaded from the kernel using specific commands. Real-time threads and synchronization objects like mutexes are implemented using POSIX interfaces.
The presentation introduces the concept of pipes and the mechanism they follow.
REFERENCES:
Operating System Concepts, 8th Edition, by Abraham Silberschatz
Telnet and SSH configuration on Ubuntu and Windows. This presentation shows how to configure Telnet and SSH on Windows and Linux, and what additional software is required.
ARP (Address Resolution Protocol) maps logical IP addresses to physical MAC addresses. It works by broadcasting an ARP request packet containing the logical IP address, and the physical host with that IP will respond with its MAC address in an ARP reply packet. ARP packets are encapsulated within Ethernet frames to be transmitted at the data link layer, and ARP is used to resolve addresses both for hosts on the same local network and for traffic destined for a default router on another network.
The operating system provides several key services to programs and users, including user interface, program execution, input/output operations, file system management, communications, error detection and handling, resource allocation, accounting, protection, command interpretation, and resource management. These services allow for running programs, performing input/output, creating and managing files, connecting between processes, detecting and handling errors, tracking usage for billing, protecting processes from each other, interpreting user commands, and optimizing resource usage.
In computer science, synchronization refers to one of two distinct but related concepts: synchronization of processes, and synchronization of data. Process synchronization refers to the idea that multiple processes are to join up or handshake at a certain point, in order to reach an agreement or commit to a certain sequence of action. Data synchronization refers to the idea of keeping multiple copies of a dataset in coherence with one another, or to maintain data integrity. Process synchronization primitives are commonly used to implement data synchronization.
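The "join up or handshake at a certain point" form of process synchronization maps directly onto a barrier. A sketch with Python's `threading.Barrier` (thread names and bookkeeping lists are illustrative):

```python
import threading

# Three workers rendezvous at a barrier: none may pass the handshake
# point until all of them have arrived.
barrier = threading.Barrier(3)
order = []
order_lock = threading.Lock()

def worker(name):
    with order_lock:
        order.append(("before", name))
    barrier.wait()                 # block until all three threads arrive
    with order_lock:
        order.append(("after", name))

threads = [threading.Thread(target=worker, args=(n,)) for n in "ABC"]
for t in threads: t.start()
for t in threads: t.join()

# Every "before" record precedes every "after" record.
befores = [i for i, (phase, _) in enumerate(order) if phase == "before"]
afters = [i for i, (phase, _) in enumerate(order) if phase == "after"]
print(max(befores) < min(afters))  # True
```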
The document discusses process synchronization and concurrency control techniques used to ensure orderly execution of cooperating processes. It describes solutions to classical synchronization problems like the bounded buffer problem, readers-writers problem, and dining philosophers problem using semaphores and monitors. Atomic transactions are achieved through techniques like write-ahead logging and checkpoints to assure failures do not compromise data consistency.
This document contains slides from a lecture on operating system concepts. It discusses topics like process synchronization, Peterson's solution to the critical section problem, synchronization hardware, semaphores, monitors, and atomic transactions. It provides examples of how these concepts can be used to solve problems like the bounded buffer problem and consumer-producer problem. The document contains 42 slides with code snippets and explanations of key synchronization concepts.
The document discusses two solutions to classic concurrency problems:
1) An unbounded buffer problem is solved using a lock and condition variables to coordinate a producer and a consumer accessing a shared buffer; since the buffer is unbounded it cannot overflow, so the condition only guards the consumer against reading from an empty buffer.
2) The dining philosophers problem is solved by having philosophers wait a random time before acquiring both forks, to avoid deadlock. A philosopher that cannot acquire both forks within a timeout releases the fork it holds and rejoins the queue.
The document discusses two synchronization problems: the producer-consumer problem and the readers-writers problem. The producer-consumer problem involves processes that share a fixed-size buffer, with one process producing items and putting them in the buffer while another process consumes items from the buffer. The readers-writers problem involves multiple reader processes that only read shared data and writer processes that can read and write shared data, requiring synchronization to prevent writers from interfering with readers. Solutions to both problems using semaphores and monitors are presented.
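One classic semaphore-based ("first readers-writers") solution can be sketched as follows: readers share access, only the first and last reader lock and unlock the resource against writers, and writers get exclusive access. Names and counts are illustrative:

```python
import threading

resource = threading.Semaphore(1)   # exclusive access for writers
read_count_lock = threading.Lock()  # protects read_count
read_count = 0
data = {"value": 0}
reads = []

def reader():
    global read_count
    with read_count_lock:
        read_count += 1
        if read_count == 1:
            resource.acquire()      # first reader locks out writers
    reads.append(data["value"])     # many readers may read concurrently
    with read_count_lock:
        read_count -= 1
        if read_count == 0:
            resource.release()      # last reader lets writers back in

def writer():
    with resource:                  # exclusive: no readers, no other writer
        data["value"] += 1

threads = [threading.Thread(target=writer) for _ in range(5)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(data["value"])  # 5: every write applied, never interleaved
```

This variant favours readers and can starve writers, which is exactly the kind of trade-off the readers-writers literature catalogues.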
The document discusses three classic synchronization problems: the bounded buffer problem, dining philosophers problem, and readers-writers problem. For the bounded buffer problem, it describes the producer-consumer scenario and provides pseudocode for the producer and consumer using semaphores. For the dining philosophers problem, it outlines the scenario of philosophers sharing a limited number of chopsticks and presents a solution using semaphores. For the readers-writers problem, it describes the scenario of multiple readers and a single writer accessing a shared resource and provides pseudocode for readers and writers that uses semaphores and a lock to control access.
Dead Lock Analysis of spin_lock() in Linux Kernel (english)Sneeker Yeh
The document discusses spin locks and semaphores in the Linux kernel. It begins with an introduction to the difference between spin locks and semaphores. Spin locks cause threads to continuously loop trying to acquire the lock, while semaphores cause threads to sleep. An example is given of a deadlock scenario that can occur with spin locks. The document then discusses the concept of context in the kernel, including user context, interrupt context, and the control flow during procedure calls and interrupts. Log analysis and examples of double-acquire deadlocks involving spin locks are provided. The document concludes with recommendations for how to prevent deadlocks, such as using spin_lock_irqsave/restore and avoiding semaphores in interrupt context.
The document discusses Java 5 concurrency features including locks, conditions, atomic variables, blocking queues, concurrent hash maps, synchronizers like semaphores and mutexes, and the executor framework. Key points include:
- Locks provide an alternative to synchronized blocks and methods, and allow more flexible locking behavior. ReentrantLock is a common lock implementation.
- Conditions (condition variables) allow threads to wait/signal and are used with locks rather than synchronized monitors.
- Atomic variables ensure thread-safe operations on single variables without locking.
Repetition statements (loops) allow code to execute multiple times. There are three types of loops in Java: while, do, and for. The while loop continuously executes a block of code as long as a condition is true. It checks the condition, executes the code, then re-evaluates the condition. This repeats until the condition becomes false. Nested loops contain one loop within another, so the inner loop iterates fully each time the outer loop iterates.
Repetition statements (loops) allow code to execute multiple times. There are three types of loops in Java: while, do, and for. The while loop continuously executes a block of code as long as a condition is true. It checks the condition, executes the code, then re-evaluates the condition. An example prints a message twice using a counter variable. Nested loops run an inner loop fully for each iteration of the outer loop. An example nested loop would print a message 200 times. Loops must ensure the condition will become false to avoid an infinite loop.
The document discusses the producer-consumer problem in operating systems. It describes a scenario where a producer tries to add elements to a full buffer while a consumer tries to remove elements from an empty buffer, resulting in a deadlock. The proposed solution is to use semaphores to regulate access to the shared buffer and ensure only one process can access it at a time. Pseudocode is provided as an example implementation using semaphores.
The document discusses the producer-consumer problem in operating systems. It describes a scenario where a producer tries to add elements to a full buffer while a consumer tries to remove elements from an empty buffer, resulting in a deadlock. The proposed solution is to use semaphores to regulate access to the shared buffer and ensure only one process can access it at a time. Pseudocode is provided as an example implementation using semaphores.
This document discusses advanced non-blocking concurrency techniques. It begins with an introduction to lock-free programming and why it can provide benefits over locking approaches. It then covers concepts like blocking, non-blocking algorithms, and waiting without locks. Specific lock-free algorithms for a toy problem, waiting, and wakeup are presented. The document also discusses using the AbstractQueuedSynchronizer for implementing lock-free waiting queues and provides examples of how to make lock-free code fair. It concludes by emphasizing that lock-free programming is error-prone and the importance of learning patterns for concurrent code.
Threads are lightweight processes as the overhead of switching between threads is less
Synchronization allows only one thread to perform an operation on a object at a time.
Synchronization prevent data corruption
Thread Synchronization-The synchronized methods define critical sections.
You will learn the Deadlock Condition in Threads and Syncronization of Threads
Using RxJava on Android platforms provides benefits like making asynchronous code more manageable, but requires changing how developers think about programming. Key concepts include Observables that emit items, Observer subscribers, and combining/transforming data streams. For Android, RxJava and RxAndroid libraries are used, with RxAndroid mainly providing threading helpers. Effective practices include proper subscription management and error handling to avoid issues from asynchronous execution and component lifecycles.
This document provides an overview of operating systems. It discusses that an operating system acts as an interface between the user and hardware, managing resources and running applications. Key parts of an operating system include the kernel and system programs. Operating systems allow for multiprogramming and time-sharing to enable efficient sharing of resources between multiple processes. Interprocess communication and process synchronization are important aspects that operating systems facilitate.
This document is a lab manual for database management systems. It contains instructions for installing and using Visual Studio and SQL Server software. Visual Studio is a popular integrated development environment used to develop a wide range of computer programs and applications. It includes features like a code editor, debugger, and various designers. The document provides guidance on tasks for several labs covering topics like creating applications in Visual Studio, installing and managing databases in SQL Server, and building a school management system to apply concepts.
The document provides an outline for a course on data structures and algorithms. It includes topics like data types and operations, time-space tradeoffs, algorithm development, asymptotic notations, common data structures, sorting and searching algorithms, and linked lists. The course will use Google Classroom and have assignments, quizzes, and a final exam.
This document discusses algorithms and their analysis. It defines an algorithm as a finite sequence of unambiguous instructions that terminate in a finite amount of time. It discusses areas of study like algorithm design techniques, analysis of time and space complexity, testing and validation. Common algorithm complexities like constant, logarithmic, linear, quadratic and exponential are explained. Performance analysis techniques like asymptotic analysis and amortized analysis using aggregate analysis, accounting method and potential method are also summarized.
The document contains a multiple choice quiz with questions about various topics in computer science. There are 47 multiple choice questions testing knowledge about topics like binary, memory, operating systems, programming languages, networks, and security. The questions are short, with single sentences providing the prompts and possible multiple choice answers.
The document contains 40 multiple choice questions related to computer science class 12. It covers topics like variables, data types, operators, loops, functions, arrays and more. The questions test concepts like escape sequences, format specifiers, assignment operators, comments, input/output functions, and the difference between various loops in C programming language. It is a practice test to help students prepare for their computer science exam.
This document provides an introduction to databases. It defines what a database is, the steps to create one, and benefits such as fast querying and flexibility. It describes database models like hierarchical, network, entity-relationship, and relational. Key database concepts are explained, including entities, attributes, primary keys, and foreign keys. Finally, it outlines database management system components, common users, and introduces Microsoft Access.
Program, Language, & Programming Language
Object Oriented Programming vs Procedure Oriented Programming
About C
Why still Learn C?
Basic Terms
C Stuff
C Syntax
C Program
Flowcharts provide a graphical representation of steps in a process or algorithm using standard symbols. They were developed in the 1920s-1930s to document business processes but are now widely used to depict computer programs and workflows. The key symbols include boxes, diamonds, arrows, and other shapes to represent tasks, decisions, data, and flow. Flowcharts clarify complex processes, help teams understand them, and can be used to improve or design new procedures.
Algorithm
What is an algorithm?
How are mathematical statements and algorithms related?
What do algorithms have to do with computers?
Pseudo Code
What is pseudocode?
Writing pseudocode
Pseudo Code vs Algorithm
This document outlines the chapters and content covered in a Computer Science course. It includes 17 total chapters, with 5 theoretical chapters, 8 practical chapters, and 4 optional chapters. The chapters cover topics like programming concepts, algorithms, an overview of the C language, variables, operators, input/output statements, selection and iteration control structures, functions, arrays, strings, pointers, data files, data management systems, and Microsoft Access. It also provides a study plan which includes performing practical work every Tuesday, having question/answer sessions every Friday, and using other days for theoretical content, as well as creating a WhatsApp group for the class.
The document discusses computer crimes and cyber crimes under Pakistani law. It defines computer crimes such as copyright violation, cracking codes, cyberbullying and various types of computer viruses. It then outlines specific cyber crimes in Pakistan such as spreading false information about an individual, making or spreading explicit images without consent, cyberstalking, and hacking for stalking. The punishments for these crimes under Pakistani law include prison sentences up to 7 years and fines up to 10 million Pakistani rupees.
Components of Data Communication
Characteristics of Data Transmission
Communication Media
Communication Speed
Communication Hardware
Communication Software
OSI Model
This document provides an overview of basic concepts in information technology, including definitions of computers and computer systems, their key characteristics and components. It describes common input devices like keyboards, mice, and scanners, as well as output devices like monitors, printers and speakers. It also discusses computer hardware, software, data, procedures, and different generations of computers from the past to present.
1st Year Computer Science Book
Sindh Text Book Board Introduction
Introduction
Syed Zaid Irshad
Rules (that You have to Follow)
Book Introduction
10 Chapters
Theoretical Chapters are 6
Practical Chapters are 4
Chapter 1: Basic Concept of Information Technology
Introduction of Computer
Definition
Characteristics
Parts of Computer
Input
Output
Memory
Primary Storage
Secondary Storage
Ports
Language Translator
Compiler
Interpreter
Generations of Programming Language
Ages of Computers
Generations of Computer
Classification of Computers
Chapter 2: Information Networks
Types of Network
LAN
WAN
MAN
GAN
Topologies
Star
Ring
Bus
Hybrid
File Transfer Protocol
World Wide Web
Chapter 3: Data Communication
Standards
Transmission
Simples
Half Duplex
Full Duplex
Media
Twisted Pair Cable
Coaxial Cable
Fiber Optic Cable
Microwave Transmission
Satellite Transmission
Open Systems Interconnection model (OSI model)
Chapter 4: Applications and Use of Computers
Difference Between Application and Use
Impacts of Computers
Chapter 5: Computer Architecture
Address of Memory Locations
Instruction Format
Fetch and Execute
Chapter 6: Security, Copyright and The Law
Computer Crime
Computer Viruses
Computer Privacy
Software Piracy and Law
Chapter 7: Operating System
User Interface
Graphical User Interface
Operating Systems
Chapter 8: Word Processing
Introduction to MS Word
Creating
Editing
Formatting
Printing
Chapter 9: Spreadsheet
Introduction to MS Excel
Creating
Editing
Formatting
Printing
Formulae
Project
Chapter 10: Internet Browsing and Using E-mail
Create Email ID
Send Mail
Download File
Upload File
Study Plan
Every Tuesday we perform Practical
Every Friday Half of the Lecture will be used as question answer session
Rest of the days are for Theoretical Stuff
Make WhatsApp Group for class where we can share stuff related to the Subject
This document discusses SQL set operators such as UNION, UNION ALL, INTERSECT, and MINUS. It provides examples of how to use each operator to combine result sets from multiple queries, eliminate duplicates, return common or unique rows, and control the order of rows in the final output. Tables used in the examples include the EMPLOYEES and JOB_HISTORY tables, which contain data on current and previous employee jobs. Guidelines are provided around matching columns in UNION queries and using parentheses and ORDER BY.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Design and optimization of ion propulsion dronebjmsejournal
Electric propulsion technology is widely used in many kinds of vehicles in recent years, and aircrafts are no exception. Technically, UAVs are electrically propelled but tend to produce a significant amount of noise and vibrations. Ion propulsion technology for drones is a potential solution to this problem. Ion propulsion technology is proven to be feasible in the earth’s atmosphere. The study presented in this article shows the design of EHD thrusters and power supply for ion propulsion drones along with performance optimization of high-voltage power supply for endurance in earth’s atmosphere.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
1. Full Solution to Bounded Buffer
Semaphore fullBuffers = 0; // Initially, no coke
Semaphore emptyBuffers = numBuffers;
// Initially, num empty slots
Semaphore mutex = 1; // No one using machine
Producer(item) {
emptyBuffers.P(); // Wait until space
mutex.P(); // Wait until buffer free
Enqueue(item);
mutex.V();
fullBuffers.V(); // Tell consumers there is
// more coke
}
Consumer() {
fullBuffers.P(); // Check if there’s a coke
mutex.P(); // Wait until machine free
item = Dequeue();
mutex.V();
emptyBuffers.V(); // Tell producer there is room
return item;
}
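The pseudocode above maps directly onto Python's threading.Semaphore, whose acquire()/release() play the roles of P()/V(). A minimal runnable sketch, assuming illustrative names (num_buffers, a deque-backed queue) not taken from the slides:

```python
import threading
from collections import deque

num_buffers = 2
queue = deque()
full_buffers = threading.Semaphore(0)             # initially, no items
empty_buffers = threading.Semaphore(num_buffers)  # initially, all slots empty
mutex = threading.Semaphore(1)                    # no one using the buffer

def producer(item):
    empty_buffers.acquire()   # emptyBuffers.P(): wait until space
    mutex.acquire()           # mutex.P(): wait until buffer free
    queue.append(item)        # Enqueue(item)
    mutex.release()           # mutex.V()
    full_buffers.release()    # fullBuffers.V(): tell consumers there is more

def consumer():
    full_buffers.acquire()    # fullBuffers.P(): wait for an item
    mutex.acquire()           # mutex.P(): wait until buffer free
    item = queue.popleft()    # Dequeue()
    mutex.release()           # mutex.V()
    empty_buffers.release()   # emptyBuffers.V(): tell producer there is room
    return item
```

Because the semaphores do the counting, the same code is safe with two producers or two consumers; the mutex alone serializes access to the queue itself.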
2. Discussion about Solution
• Why asymmetry?
• Producer does: emptyBuffers.P(), fullBuffers.V()
• Consumer does: fullBuffers.P(), emptyBuffers.V()
• Is order of P’s important?
• Yes! Can cause deadlock:
Producer(item) {
mutex.P(); // Wait until buffer free
emptyBuffers.P(); // Could wait forever!
Enqueue(item);
mutex.V();
fullBuffers.V(); // Tell consumers more coke
}
• Is order of V’s important?
• No, except that it might affect scheduling efficiency
• What if we have 2 producers or 2 consumers?
• Do we need to change anything?
3. Motivation for Monitors and Condition Variables
• Semaphores are a huge step up, but:
• They are confusing because they are dual purpose:
• Both mutual exclusion and scheduling constraints
• Example: the fact that flipping of P’s in bounded buffer gives deadlock is not immediately obvious
• Cleaner idea: Use locks for mutual exclusion and condition variables for scheduling
constraints
• Definition: Monitor: a lock and zero or more condition variables for
managing concurrent access to shared data
• Use of Monitors is a programming paradigm
• Some languages like Java provide monitors in the language
• The lock provides mutual exclusion to shared data:
• Always acquire before accessing shared data structure
• Always release after finishing with shared data
• Lock initially free
4. Simple Monitor Example (version 1)
• Here is an (infinite) synchronized queue
Lock lock;
Queue queue;
AddToQueue(item) {
lock.Acquire(); // Lock shared data
queue.enqueue(item); // Add item
lock.Release(); // Release Lock
}
RemoveFromQueue() {
lock.Acquire(); // Lock shared data
item = queue.dequeue();// Get next item or null
lock.Release(); // Release Lock
return(item); // Might return null
}
• Not very interesting use of “Monitor”
• It only uses a lock with no condition variables
• Cannot put consumer to sleep if no work!
5. Condition Variables
• How do we change the RemoveFromQueue() routine to wait until something
is on the queue?
• Could do this by keeping a count of the number of things on the queue (with
semaphores), but error prone
• Condition Variable: a queue of threads waiting for something inside a critical
section
• Key idea: allow sleeping inside critical section by atomically releasing lock at time we
go to sleep
• Contrast to semaphores: Can’t wait inside critical section
• Operations:
• Wait(&lock): Atomically release lock and go to sleep. Re-acquire lock later, before
returning.
• Signal(): Wake up one waiter, if any
• Broadcast(): Wake up all waiters
• Rule: Must hold lock when doing condition variable ops!
• In Birrell paper, he says can perform signal() outside of lock – IGNORE HIM (this is only
an optimization)
6. Complete Monitor Example (with condition variable)
• Here is an (infinite) synchronized queue
Lock lock;
Condition dataready;
Queue queue;
AddToQueue(item) {
lock.Acquire(); // Get Lock
queue.enqueue(item); // Add item
dataready.signal(); // Signal any waiters
lock.Release(); // Release Lock
}
RemoveFromQueue() {
lock.Acquire(); // Get Lock
while (queue.isEmpty()) {
dataready.wait(&lock); // If nothing, sleep
}
item = queue.dequeue(); // Get next item
lock.Release(); // Release Lock
return(item);
}
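The same queue can be written with Python's threading.Condition, which bundles a lock with a condition variable; the while-loop recheck matches Mesa-style semantics. A sketch under the assumption that snake_case versions of the slide's function names are acceptable:

```python
import threading
from collections import deque

lock = threading.Lock()
dataready = threading.Condition(lock)   # condition variable tied to the lock
queue = deque()

def add_to_queue(item):
    with lock:                  # acquire; released automatically on exit
        queue.append(item)      # add item
        dataready.notify()      # signal any waiters

def remove_from_queue():
    with lock:
        while not queue:        # Mesa-style: recheck after every wakeup
            dataready.wait()    # atomically release lock and sleep
        return queue.popleft()  # get next item
```

Note that wait() releases the lock while sleeping and re-acquires it before returning, exactly the atomicity the slide requires.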
7. Mesa vs. Hoare monitors
• Need to be careful about precise definition of signal and wait. Consider a
piece of our dequeue code:
while (queue.isEmpty()) {
dataready.wait(&lock); // If nothing, sleep
}
item = queue.dequeue(); // Get next item
• Why didn’t we do this?
if (queue.isEmpty()) {
dataready.wait(&lock); // If nothing, sleep
}
item = queue.dequeue(); // Get next item
• Answer: depends on the type of scheduling
• Hoare-style (most textbooks):
• Signaler gives lock, CPU to waiter; waiter runs immediately
• Waiter gives up lock, processor back to signaler when it exits critical section or if it waits again
• Mesa-style (Nachos, most real operating systems):
• Signaler keeps lock and processor
• Waiter placed on ready queue with no special priority
• Practically, need to check condition again after wait
8. Use of Compare&Swap for queues
• compare&swap (&address, reg1, reg2) { /* 68000 */
if (reg1 == M[address]) {
M[address] = reg2;
return success;
} else {
return failure;
}
}
Here is an atomic add to linked-list function:
addToQueue(&object) {
do { // repeat until no conflict
ld r1, M[root] // Get ptr to current head
st r1, M[object] // Save link in new object
} until (compare&swap(&root,r1,object));
}
(Diagram: the new object is linked in at the head — root points to the new object, whose next pointer references the old head of the list.)
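Python exposes no hardware compare&swap, so the sketch below simulates one with a hidden lock purely to show the retry-loop structure; `push` stands in for the slide's addToQueue, and the one-element `root` list models the mutable head pointer (all names here are illustrative assumptions):

```python
import threading

_cas_lock = threading.Lock()   # stands in for the hardware's atomicity

def compare_and_swap(cell, expected, new):
    # Atomically: if cell[0] still equals expected, install new and succeed.
    with _cas_lock:
        if cell[0] is expected:
            cell[0] = new
            return True
        return False

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

root = [None]   # mutable head pointer, as in M[root]

def push(obj):                            # the slide's addToQueue
    while True:                           # repeat until no conflict
        head = root[0]                    # ld r1, M[root]: get current head
        obj.next = head                   # st r1, M[object]: save link
        if compare_and_swap(root, head, obj):
            return
```

If another thread wins the race between reading the head and the CAS, the CAS fails and the loop simply retries with the new head.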
9. Readers/Writers Problem
• Motivation: Consider a shared database
• Two classes of users:
• Readers – never modify database
• Writers – read and modify database
• Is using a single lock on the whole database sufficient?
• Like to have many readers at the same time
• Only one writer at a time
(Diagram: many readers R may access the database concurrently, but a writer W requires exclusive access.)
10. Basic Readers/Writers Solution
• Correctness Constraints:
• Readers can access database when no writers
• Writers can access database when no readers or writers
• Only one thread manipulates state variables at a time
• Basic structure of a solution:
• Reader()
Wait until no writers
Access data base
Check out – wake up a waiting writer
• Writer()
Wait until no active readers or writers
Access database
Check out – wake up waiting readers or writer
• State variables (Protected by a lock called “lock”):
• int AR: Number of active readers; initially = 0
• int WR: Number of waiting readers; initially = 0
• int AW: Number of active writers; initially = 0
• int WW: Number of waiting writers; initially = 0
• Condition okToRead = NIL
• Condition okToWrite = NIL
11. Code for a Reader
Reader() {
// First check self into system
lock.Acquire();
while ((AW + WW) > 0) { // Is it safe to read?
WR++; // No. Writers exist
okToRead.wait(&lock); // Sleep on cond var
WR--; // No longer waiting
}
AR++; // Now we are active!
lock.Release();
// Perform actual read-only access
AccessDatabase(ReadOnly);
// Now, check out of system
lock.Acquire();
AR--; // No longer active
if (AR == 0 && WW > 0) // No other active readers
okToWrite.signal(); // Wake up one writer
lock.Release();
}
12. Code for a Writer
Writer() {
// First check self into system
lock.Acquire();
while ((AW + AR) > 0) { // Is it safe to write?
WW++; // No. Active users exist
okToWrite.wait(&lock); // Sleep on cond var
WW--; // No longer waiting
}
AW++; // Now we are active!
lock.Release();
// Perform actual read/write access
AccessDatabase(ReadWrite);
// Now, check out of system
lock.Acquire();
AW--; // No longer active
if (WW > 0){ // Give priority to writers
okToWrite.signal(); // Wake up one writer
} else if (WR > 0) { // Otherwise, wake reader
okToRead.broadcast(); // Wake all readers
}
lock.Release();
}
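Slides 11 and 12 translate almost line for line into Python; the sketch below wraps the four state variables and two condition variables in a class (RWLock and its method names are assumptions, not from the slides):

```python
import threading

class RWLock:
    """Readers/writers lock following the slides' AR/WR/AW/WW scheme."""
    def __init__(self):
        self._lock = threading.Lock()           # protects the state variables
        self._ok_to_read = threading.Condition(self._lock)
        self._ok_to_write = threading.Condition(self._lock)
        self.AR = self.WR = self.AW = self.WW = 0

    def acquire_read(self):
        with self._lock:
            while self.AW + self.WW > 0:        # is it safe to read?
                self.WR += 1                    # no, writers exist
                self._ok_to_read.wait()         # sleep on cond var
                self.WR -= 1                    # no longer waiting
            self.AR += 1                        # now we are active

    def release_read(self):
        with self._lock:
            self.AR -= 1                        # no longer active
            if self.AR == 0 and self.WW > 0:    # last reader wakes one writer
                self._ok_to_write.notify()

    def acquire_write(self):
        with self._lock:
            while self.AW + self.AR > 0:        # is it safe to write?
                self.WW += 1                    # no, active users exist
                self._ok_to_write.wait()        # sleep on cond var
                self.WW -= 1                    # no longer waiting
            self.AW += 1                        # now we are active

    def release_write(self):
        with self._lock:
            self.AW -= 1                        # no longer active
            if self.WW > 0:                     # give priority to writers
                self._ok_to_write.notify()
            elif self.WR > 0:                   # otherwise wake all readers
                self._ok_to_read.notify_all()
```

Writers are favored on exit exactly as in the slide: a waiting writer is signalled first, and readers are broadcast only when no writer waits.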
13. Simulation of Readers/Writers solution
• Consider the following sequence of operations:
• R1, R2, W1, R3
• On entry, each reader checks the following:
while ((AW + WW) > 0) { // Is it safe to read?
WR++; // No. Writers exist
okToRead.wait(&lock); // Sleep on cond var
WR--; // No longer waiting
}
AR++; // Now we are active!
• First, R1 comes along:
AR = 1, WR = 0, AW = 0, WW = 0
• Next, R2 comes along:
AR = 2, WR = 0, AW = 0, WW = 0
• Now, readers may take a while to access the database
• Situation: Locks released
• Only AR is non-zero
14. Simulation(2)
• Next, W1 comes along:
while ((AW + AR) > 0) { // Is it safe to write?
WW++; // No. Active users exist
okToWrite.wait(&lock); // Sleep on cond var
WW--; // No longer waiting
}
AW++;
• Can’t start because of readers, so go to sleep:
AR = 2, WR = 0, AW = 0, WW = 1
• Finally, R3 comes along:
AR = 2, WR = 1, AW = 0, WW = 1
• Now, say that R2 finishes before R1:
AR = 1, WR = 1, AW = 0, WW = 1
• Finally, last of first two readers (R1) finishes and wakes up writer:
if (AR == 0 && WW > 0) // No other active readers
okToWrite.signal(); // Wake up one writer
15. Simulation(3)
• When writer wakes up, get:
AR = 0, WR = 1, AW = 1, WW = 0
• Then, when writer finishes:
if (WW > 0){ // Give priority to writers
okToWrite.signal(); // Wake up one writer
} else if (WR > 0) { // Otherwise, wake reader
okToRead.broadcast(); // Wake all readers
}
• Writer wakes up reader, so get:
AR = 1, WR = 0, AW = 0, WW = 0
• When reader completes, we are finished
16. Questions
• Can readers starve? Consider Reader() entry code:
while ((AW + WW) > 0) { // Is it safe to read?
WR++; // No. Writers exist
okToRead.wait(&lock); // Sleep on cond var
WR--; // No longer waiting
}
AR++; // Now we are active!
• What if we erase the condition check in Reader exit?
AR--; // No longer active
if (AR == 0 && WW > 0) // No other active readers
okToWrite.signal(); // Wake up one writer
• Further, what if we turn the signal() into broadcast()
AR--; // No longer active
okToWrite.broadcast(); // Wake up all waiting writers
• Finally, what if we use only one condition variable (call it “okToContinue”)
instead of two separate ones?
• Both readers and writers sleep on this variable
• Must use broadcast() instead of signal()
17. Can we construct Monitors from Semaphores?
• Locking aspect is easy: Just use a mutex
• Can we implement condition variables this way?
Wait() { semaphore.P(); }
Signal() { semaphore.V(); }
• Doesn’t work: Wait() may sleep with lock held
• Does this work better?
Wait(Lock lock) {
lock.Release();
semaphore.P();
lock.Acquire();
}
Signal() { semaphore.V(); }
• No: Condition vars have no history, semaphores have history:
• What if thread signals and no one is waiting? NO-OP
• What if thread later waits? Thread Waits
• What if thread V’s and no one is waiting? Increment
• What if thread later does P? Decrement and continue
18. Construction of Monitors from Semaphores
• Problem with previous try:
• P and V are commutative – result is the same no matter what order they occur
• Condition variables are NOT commutative
• Does this fix the problem?
Wait(Lock lock) {
lock.Release();
semaphore.P();
lock.Acquire();
}
Signal() {
if semaphore queue is not empty
semaphore.V();
}
• Not legal to look at contents of semaphore queue
• There is a race condition – signaler can slip in after lock release and before waiter
executes semaphore.P()
• It is actually possible to do this correctly
• Complex solution for Hoare scheduling in book
• Can you come up with simpler Mesa-scheduled solution?
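One well-known Mesa-style approach keeps an explicit waiter count that is only read and written while the monitor lock is held; Signal() then performs a V() only when a waiter is actually registered, and the semaphore's memory covers the race window between lock.Release() and semaphore.P(). A sketch in Java (class and method names are my own):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;

// Mesa-style condition variable built from a semaphore.
// Invariant: 'waiters' is only touched while 'monitorLock' is held,
// so signal() can safely test it -- unlike peeking at a semaphore queue.
class MesaCondition {
    private final ReentrantLock monitorLock;
    private final Semaphore queue = new Semaphore(0);
    private int waiters = 0;

    MesaCondition(ReentrantLock lock) { this.monitorLock = lock; }

    // Caller must hold monitorLock (as inside a monitor).
    void await() throws InterruptedException {
        waiters++;                 // registered BEFORE releasing the lock
        monitorLock.unlock();
        // Race window here is harmless: if the signaler runs now, its
        // release() is remembered by the semaphore, so acquire() won't block.
        queue.acquire();
        monitorLock.lock();        // Mesa: caller rechecks condition after waking
    }

    // Caller must hold monitorLock.
    void signal() {
        if (waiters > 0) {         // legal: our counter, not the semaphore's queue
            waiters--;
            queue.release();
        }
    }

    // Inspection hook (my own addition); call with monitorLock held.
    int waiterCount() { return waiters; }
}
```

Because a signaler can still run in the window before the waiter reaches acquire(), wakeups may arrive "early"; that is exactly why Mesa semantics require the caller to recheck its condition in a while loop.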
19. Monitor Conclusion
• Monitors represent the logic of the program
• Wait if necessary
• Signal when you change something, so that any waiting threads can proceed
• Basic structure of monitor-based program:
lock
while (need to wait) {
condvar.wait();
}
unlock
do something so no need to wait
lock
condvar.signal();
unlock
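As a concrete instance of this structure, here is a minimal bounded buffer written as a Java monitor (my own example, not from the slides); it follows the lock / while-wait / signal-on-change pattern above:

```java
// Bounded buffer as a monitor: 'synchronized' supplies the lock,
// while-loops re-test the wait condition (Mesa semantics),
// and notifyAll() runs after every state change.
class BoundedBuffer {
    private final int[] items;
    private int count = 0, in = 0, out = 0;

    BoundedBuffer(int capacity) { items = new int[capacity]; }

    public synchronized void put(int x) throws InterruptedException {
        while (count == items.length)   // wait if necessary (buffer full)
            wait();
        items[in] = x;
        in = (in + 1) % items.length;
        count++;
        notifyAll();                    // state changed: wake waiters
    }

    public synchronized int get() throws InterruptedException {
        while (count == 0)              // wait if necessary (buffer empty)
            wait();
        int x = items[out];
        out = (out + 1) % items.length;
        count--;
        notifyAll();
        return x;
    }
}
```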
20. C-Language Support for Synchronization
• C language: Pretty straightforward synchronization
• Just make sure you know all the code paths out of a critical section
int Rtn() {
lock.acquire();
…
if (exception) {
lock.release();
return errReturnCode;
}
…
lock.release();
return OK;
}
• Watch out for setjmp/longjmp!
• Can cause a non-local jump out of procedure
• In the example, procedure E calls longjmp, popping the stack back to procedure B
• If procedure C had done a lock.acquire(), problem!
[Stack diagram: stack grows downward through Proc A → Proc B (calls setjmp) → Proc C (lock.acquire) → Proc D → Proc E (calls longjmp)]
21. C++ Language Support for Synchronization
• Languages with exceptions (like C++) are problematic: it is easy to make a non-local exit
without releasing the lock
• Consider:
void Rtn() {
lock.acquire();
…
DoFoo();
…
lock.release();
}
void DoFoo() {
…
if (exception) throw errException;
…
}
• Notice that an exception in DoFoo() will exit without releasing the lock
22. C++ Language Support for Synchronization
• Must catch all exceptions in critical sections
• Catch exceptions, release lock, and re-throw exception:
void Rtn() {
lock.acquire();
try {
…
DoFoo();
…
} catch (…) { // catch exception
lock.release(); // release lock
throw; // re-throw the exception
}
lock.release();
}
void DoFoo() {
…
if (exception) throw errException;
…
}
• Even better: RAII wrappers, e.g. C++11’s std::lock_guard<T> (the older auto_ptr<T> idiom is deprecated). See C++ spec.
• The lock is released regardless of how the scope exits
23. Java Language Support for Synchronization
• Java has explicit support for threads and thread synchronization
• Bank Account example:
class Account {
private int balance;
// object constructor
public Account (int initialBalance) {
balance = initialBalance;
}
public synchronized int getBalance() {
return balance;
}
public synchronized void deposit(int amount) {
balance += amount;
}
}
• Every object has an associated lock which gets automatically acquired and released
on entry and exit from a synchronized method.
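A quick way to see what the automatic lock buys: hammer the Account with concurrent deposits. The main() stress test below is my own addition; with synchronized, no update is lost:

```java
// Same Account class as on the slide; main() is a hypothetical stress
// test showing that synchronized deposits are never lost under contention.
public class Account {
    private int balance;
    public Account(int initialBalance) { balance = initialBalance; }
    public synchronized int getBalance() { return balance; }
    public synchronized void deposit(int amount) { balance += amount; }

    public static void main(String[] args) throws InterruptedException {
        Account acct = new Account(0);
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) acct.deposit(1);
            });
            workers[i].start();
        }
        for (Thread w : workers) w.join();
        System.out.println(acct.getBalance()); // 400000 -- no deposits lost
    }
}
```

Removing synchronized turns balance += amount into a non-atomic read-modify-write, and the final balance typically comes up short.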
24. Java Language Support for Synchronization
• Java also has synchronized statements:
synchronized (object) {
…
}
• Since every Java object has an associated lock, this type of statement acquires and
releases the object’s lock on entry and exit of the body
• Works properly even with exceptions:
synchronized (object) {
…
DoFoo();
…
}
void DoFoo() {
throw errException;
}
25. Java Language Support for Synchronization
• In addition to a lock, every object has a single condition variable associated
with it
• How to wait inside a synchronized method or block:
• void wait(long timeout); // Wait for timeout
• void wait(long timeout, int nanoseconds); //variant
• void wait();
• How to signal in a synchronized method or block:
• void notify(); // wakes up one waiter (chosen arbitrarily, not necessarily the oldest)
• void notifyAll(); // like broadcast, wakes everyone
• Condition variables can wait for a bounded length of time. This is useful for handling
exception cases:
t1 = time.now();
while (!ATMRequest()) {
wait (CHECKPERIOD);
t2 = time.now();
if (t2 – t1 > LONG_TIME) checkMachine();
}
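The pseudocode above can be rendered as real Java using Object.wait(timeout); atmRequest(), makeRequest(), and checkMachine() below are hypothetical stand-ins for the slide's calls:

```java
// Concrete Java version of the timed-wait loop above. wait(timeout)
// sleeps for at most CHECK_PERIOD_MS, then the condition is re-tested;
// if the total wait exceeds LONG_TIME_MS, the exception path runs.
class TimedWaiter {
    static final long CHECK_PERIOD_MS = 100;   // slide's CHECKPERIOD
    static final long LONG_TIME_MS = 500;      // slide's LONG_TIME
    private boolean requestPending = false;
    private int machineChecks = 0;

    synchronized boolean atmRequest() { return requestPending; }
    synchronized void makeRequest() { requestPending = true; notifyAll(); }
    private void checkMachine() { machineChecks++; }  // stand-in diagnostic

    synchronized void serviceLoop() throws InterruptedException {
        long t1 = System.currentTimeMillis();
        while (!atmRequest()) {
            wait(CHECK_PERIOD_MS);                 // bounded wait, then re-test
            long t2 = System.currentTimeMillis();
            if (t2 - t1 > LONG_TIME_MS)            // waited too long: exception case
                checkMachine();
        }
    }
}
```

Note that wait(timeout) can also return early (spurious wakeup or notify), which is why the loop re-tests both the condition and the elapsed time.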
• Not all Java VMs equivalent!
• Different scheduling policies, not necessarily preemptive!