Latches are low-level locking mechanisms that coordinate access to shared structures in the SGA, such as the library cache and the database buffers. They keep shared objects consistent and are very fast, typically acquired in nanoseconds, with no ordered queuing of waiters. Latches prevent an object from being modified concurrently by multiple sessions, or from being read while it is being modified or while its memory is being deallocated. Heavy latch contention reduces concurrency.
Deadlock occurs when multiple processes are blocked waiting for resources held by other processes in the set, resulting in no forward progress. There are four conditions required for deadlock: mutual exclusion, hold and wait, no preemption, and circular wait. Deadlock can be handled through prevention, avoidance, detection, and recovery. Prevention ensures one of the four conditions is never satisfied. Avoidance allows resource allocation if it does not lead to an unsafe state. Detection identifies when deadlock occurs. Recovery regains resources by terminating processes or preempting resources.
This document discusses deadlocks, including the four conditions required for a deadlock, methods to avoid deadlocks like using safe states and Banker's Algorithm, ways to detect deadlocks using wait-for graphs and detection algorithms, and approaches to recover from deadlocks such as terminating processes or preempting resources.
This document discusses deadlocks, which occur when two or more processes wait indefinitely for each other to release resources. The four conditions for deadlock are outlined: mutual exclusion, hold and wait, no preemption, and circular wait. Strategies to address deadlocks include detection and recovery, avoidance, and prevention. Detection involves building a resource graph to identify deadlocks, then killing processes to break cycles. Avoidance analyzes requests to grant resources in a safe order. Prevention eliminates one of the conditions, for example by making all resources shareable to remove mutual exclusion.
This document outlines different methods for handling deadlocks in operating systems, including deadlock prevention, avoidance, detection, and recovery. It discusses the four necessary conditions for deadlock and defines a resource-allocation graph model. For deadlock prevention, it describes ways to ensure that the mutual exclusion, hold and wait, no preemption, and circular wait conditions cannot simultaneously hold through protocols like requesting all resources at start, releasing resources before requesting new ones, preempting held resources, and imposing a total ordering of resource types. Deadlock avoidance uses additional process information to decide if a request should wait. Detection identifies deadlocks in the system state, while recovery terminates processes or preempts resources.
The document discusses different methods for deadlock management in distributed database systems. It describes deadlock prevention, avoidance, and detection and resolution. For deadlock prevention, transactions declare all resource needs upfront and the system reserves them to prevent cycles in the wait-for graph. Deadlock avoidance methods order resources or sites and require transactions to request locks in that order. Deadlock detection identifies cycles in the global wait-for graph using centralized, hierarchical, or distributed detection across sites. The system then chooses victim transactions to abort to break cycles.
This document summarizes key concepts related to deadlocks in operating systems. It defines the four necessary conditions for deadlock to occur: mutual exclusion, hold and wait, no preemption, and circular wait. It describes methods for handling deadlocks, including deadlock prevention, avoidance, detection, and recovery. Deadlock prevention techniques aim to ensure that at least one of the necessary conditions does not hold, such as imposing an ordering on how resources can be requested. Deadlock avoidance uses additional information to determine if a request could lead to a deadlocked state. Detection and recovery methods allow deadlocks to occur but provide algorithms for identifying and resolving deadlocked processes.
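The resource-ordering prevention technique mentioned above can be sketched in Java. This is a minimal illustration, not any particular textbook's implementation; the class and method names (`Account`, `transfer`) are hypothetical:

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: prevent circular wait by always acquiring locks in a fixed
// global order, here the ordering given by each account's id.
class Account {
    final int id;              // position in the global lock order
    final ReentrantLock lock = new ReentrantLock();
    long balance;
    Account(int id, long balance) { this.id = id; this.balance = balance; }
}

class Transfers {
    // Always lock the account with the smaller id first, so no two
    // threads can ever hold the two locks in opposite orders.
    static void transfer(Account from, Account to, long amount) {
        Account first  = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        first.lock.lock();
        try {
            second.lock.lock();
            try {
                from.balance -= amount;
                to.balance += amount;
            } finally { second.lock.unlock(); }
        } finally { first.lock.unlock(); }
    }
}
```

With this discipline, two threads transferring in opposite directions can never each hold one lock while waiting for the other, so the circular-wait condition cannot arise.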
The document discusses Java 5 concurrency features including locks, conditions, atomic variables, blocking queues, concurrent hash maps, synchronizers like semaphores and mutexes, and the executor framework. Key points include:
- Locks provide an alternative to synchronized blocks and methods, and allow more flexible locking behavior. ReentrantLock is a common lock implementation.
- Conditions (condition variables) allow threads to wait/signal and are used with locks rather than synchronized monitors.
- Atomic variables ensure thread-safe operations on single variables without locking.
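The three features in the bullets above can be combined in one small sketch, a bounded buffer guarded by a `ReentrantLock`, with two `Condition`s for waiting producers and consumers, and an `AtomicInteger` counter (the class name `BoundedBuffer` is an illustrative assumption, not an API from the summarized document):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class BoundedBuffer<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    final AtomicInteger putCount = new AtomicInteger();

    BoundedBuffer(int capacity) { this.capacity = capacity; }

    void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) notFull.await(); // wait, releasing the lock
            items.addLast(item);
            putCount.incrementAndGet();   // lock-free counter update
            notEmpty.signal();            // wake one waiting consumer
        } finally { lock.unlock(); }
    }

    T take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) notEmpty.await();
            T item = items.removeFirst();
            notFull.signal();             // wake one waiting producer
            return item;
        } finally { lock.unlock(); }
    }
}
```

Note the `while` loops around `await()`: a woken thread must re-check its condition, since signals can be consumed by another thread or occur spuriously.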
The document discusses deadlocks in operating systems. It defines the four conditions for deadlock: mutual exclusion, hold and wait, no preemption, and circular wait. It explains resource allocation graphs and how they can be used to detect deadlocks. The document also covers prevention methods that negate each condition, such as allowing resources to be shared, eliminating hold-and-wait, permitting preemption, and imposing a total ordering on resources. It describes the Banker's algorithm for deadlock avoidance, along with a detection algorithm for resources with multiple instances. Finally, it discusses approaches to deadlock recovery such as process termination and resource preemption.
This document defines and explains deadlocks in an operating system. A deadlock occurs when a set of processes are blocked waiting for resources held by other processes in the set, resulting in a circular wait. Four necessary conditions for deadlock are mutual exclusion, hold and wait, no preemption, and circular wait. Resource allocation graphs can be used to visualize resource allocation relations between processes and determine if a deadlock has occurred. Methods for recovering from deadlock include violating mutual exclusion, preempting resources from processes, and aborting processes. The operating system may also ignore deadlocks altogether.
This document discusses different strategies for handling deadlocks in operating systems, including prevention, avoidance, detection, and recovery. Prevention methods aim to ensure that one of the four necessary conditions for deadlock does not occur. Avoidance allows all conditions but detects unsafe states and stops requests that could lead to deadlock. Detection identifies when a deadlock has occurred. Recovery methods regain resources by terminating processes or preempting resources to break cycles in resource allocation graphs.
System Model
Deadlock Characterization
Methods for Handling Deadlocks
Deadlock Prevention
Deadlock Avoidance
Deadlock Detection
Recovery from Deadlock
Combined Approach to Deadlock Handling
The document discusses deadlocks in computer systems. It defines deadlocks as a situation where two or more competing processes are waiting for resources held by each other, leading to indefinite blocking. The key characteristics of deadlocks are mutual exclusion of resources, hold and wait conditions, no preemption of allocated resources, and circular wait dependencies between processes. The document outlines several methods to handle deadlocks, including prevention, avoidance through algorithms like the banker's algorithm, and detection and recovery.
Deadlock Detection Using Goldman's Algorithm, by Aniket Choudhury
This document discusses deadlock detection in distributed systems. It defines deadlock as when a set of processes are permanently blocked waiting for resources held by each other. There are four conditions for deadlock: mutual exclusion, hold and wait, no preemption, and circular wait. Deadlock can be detected using centralized, hierarchical, or distributed approaches. Goldman's distributed algorithm avoids maintaining a continuous wait-for graph by exchanging blocked process lists to detect cycles that indicate deadlocks.
Extending OpenJDK To Support Hybrid STM/HTM, by Antony Hosking
This document discusses extending OpenJDK to support hybrid software transactional memory (STM) and hardware transactional memory (HTM). It describes the current XJ system architecture, which uses bytecode rewriting and a runtime library to support transactions in Java. The document proposes direct extensions to OpenJDK to support transactions more efficiently, including modifications to the locking protocol and compiler optimizations to better leverage Intel TSX hardware transactional memory.
This document summarizes a chapter on deadlocks from an operating systems textbook. It defines deadlock as when a set of blocked processes wait for resources held by each other. Four conditions must be met for deadlock to occur: mutual exclusion, hold and wait, no preemption, and circular wait. Methods to handle deadlocks include prevention, avoidance, detection, and recovery. Prevention ensures deadlocks cannot occur by restricting resource usage. Avoidance dynamically checks the system state remains safe to prevent deadlocks. Detection allows deadlocks but recovers the system. Recovery options are terminating processes or preempting resources.
The document discusses the Post Memory Corruption (PMCMA) tool, which allows analyzing memory corruption bugs by testing different memory overwrite scenarios in a process. PMCMA uses a technique called "mk_fork()" to efficiently fork a process multiple times and overwrite different memory locations in each offspring to test for exploitation possibilities. It discusses challenges like dealing with zombie processes and invalid system calls caused by forking, and how PMCMA addresses these through techniques like process grouping and ignoring SIGCHLD signals.
The document discusses deadlocks in operating systems. It defines deadlock as a situation where a set of processes are blocked waiting for resources held by other processes in the set, resulting in none of the processes making any progress. Four conditions must be met for deadlock to occur: mutual exclusion, hold and wait, no preemption, and circular wait. The document presents examples to illustrate deadlock and discusses different strategies for dealing with it, including deadlock prevention, avoidance, and detection and recovery. It specifically describes the Banker's Algorithm for deadlock avoidance.
This document discusses deadlocks in operating systems. It defines a deadlock as a set of blocked processes that are each holding a resource and waiting for a resource held by another process. Four conditions must be met for a deadlock to occur: mutual exclusion, hold and wait, no preemption, and circular wait. Deadlocks can be modeled using directed resource allocation graphs. Methods for handling deadlocks include prevention, avoidance, detection, and recovery.
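The graph-based detection described above reduces to cycle detection: in a wait-for graph, an edge from process p to process q means p is waiting for a resource held by q, and a deadlock exists exactly when the graph contains a cycle. A minimal sketch (class names here are hypothetical) using depth-first search:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch: deadlock detection as cycle detection on a wait-for graph.
class WaitForGraph {
    private final Map<String, List<String>> edges = new HashMap<>();

    void addWait(String waiter, String holder) {
        edges.computeIfAbsent(waiter, k -> new ArrayList<>()).add(holder);
    }

    boolean hasDeadlock() {
        Set<String> done = new HashSet<>();     // fully explored nodes
        for (String start : edges.keySet())
            if (!done.contains(start) && dfs(start, new HashSet<>(), done))
                return true;
        return false;
    }

    private boolean dfs(String node, Set<String> onPath, Set<String> done) {
        if (onPath.contains(node)) return true; // back edge: cycle found
        if (done.contains(node)) return false;
        onPath.add(node);
        for (String next : edges.getOrDefault(node, List.of()))
            if (dfs(next, onPath, done)) return true;
        onPath.remove(node);
        done.add(node);
        return false;
    }
}
```

Any node found on the current DFS path twice closes a cycle, and the processes on that cycle are the deadlocked set.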
This document discusses deadlocks and techniques for handling them. It begins by defining the four necessary conditions for a deadlock to occur: mutual exclusion, hold and wait, no preemption, and circular wait. It then describes three approaches to handling deadlocks: prevention, avoidance, and detection and recovery. Prevention aims to ensure one of the four conditions never holds. Avoidance uses more information to determine if a request could lead to a deadlock. Detection and recovery allows deadlocks but detects and recovers from them after the fact. The document provides examples of different prevention techniques like limiting resource types that can be held, ordering resource types, and preemption. It also explains the banker's algorithm for deadlock avoidance.
This document summarizes different concurrency control techniques used in database systems, including lock-based protocols, timestamp-based protocols, and validation-based protocols. It discusses lock-based protocols in detail, covering how locks work, the lock compatibility matrix, deadlocks, starvation, and the two-phase locking protocol. It also discusses automatic acquisition of locks to simplify concurrency control.
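The lock compatibility matrix mentioned above, for the common two-mode (shared/exclusive) scheme, is small enough to sketch directly. This is a generic illustration of the matrix, not the summarized document's own code:

```java
import java.util.List;

// Sketch of the shared/exclusive lock compatibility matrix from
// lock-based protocols: a requested mode is granted only if it is
// compatible with every mode already held on the data item.
enum LockMode { SHARED, EXCLUSIVE }

class LockTable {
    // S is compatible only with S; X is compatible with nothing.
    static boolean compatible(LockMode held, LockMode requested) {
        return held == LockMode.SHARED && requested == LockMode.SHARED;
    }

    static boolean canGrant(List<LockMode> held, LockMode requested) {
        for (LockMode m : held)
            if (!compatible(m, requested)) return false;
        return true;
    }
}
```

Under this matrix many readers may hold shared locks at once, but a writer's exclusive lock must wait until the item is free, which is where lock waits, and hence deadlocks and starvation, enter the picture.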
Threads are lightweight processes that share the same address space. The Linux implementation uses clone() to create threads that have separate thread IDs but share other attributes, such as the virtual memory address space. Pthreads provides objects and functions for thread management, including creation, attributes, mutual exclusion with mutexes and condition variables, cancellation, and thread-specific data.
The document discusses deadlocks in operating systems. It defines deadlocks and explains the four necessary conditions for a deadlock to occur: mutual exclusion, hold and wait, no preemption, and circular wait. It then describes different strategies for handling deadlocks, including prevention, avoidance, detection, and recovery.
A New Age of JVM Garbage Collectors (Clojure Conj 2019), by Alexander Yakushev
Some programmers might think that garbage collection is a solved problem. It runs with the VM and takes care of your unused objects – what else would you want? However, the recent spike of interest in new garbage collectors for JVM (Shenandoah, ZGC) and beyond (Go GC) shows that there are still possibilities for improvement. In this talk, we will briefly walk through the history of memory management and discover what the modern GCs can do for you.
Deadlock avoidance methods analyze resource allocation to determine if granting a request would lead to an unsafe state where deadlock could occur. A deadlock happens when multiple processes are waiting indefinitely for resources held by each other in a cyclic dependency. To prevent deadlock, an operating system must have information on current resource availability and allocations, as well as future resource needs. The system only grants requests that will lead to a safe state where there are enough resources for all remaining processes and deadlock is not possible.
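The safe-state test described above is the core of the Banker's Algorithm: a state is safe if the processes can all finish in some order using only the currently available resources plus whatever each finished process releases. A compact sketch of that safety check (array layout is an assumption for illustration: one row per process, one column per resource type):

```java
import java.util.Arrays;

// Sketch of the safety check at the heart of the Banker's Algorithm.
class Banker {
    static boolean isSafe(int[] available, int[][] allocation, int[][] need) {
        int n = allocation.length, m = available.length;
        int[] work = Arrays.copyOf(available, m); // resources free right now
        boolean[] finished = new boolean[n];
        boolean progress = true;
        while (progress) {
            progress = false;
            for (int p = 0; p < n; p++) {
                if (finished[p]) continue;
                boolean canRun = true;
                for (int r = 0; r < m; r++)
                    if (need[p][r] > work[r]) { canRun = false; break; }
                if (canRun) {             // p can finish; reclaim its resources
                    for (int r = 0; r < m; r++) work[r] += allocation[p][r];
                    finished[p] = true;
                    progress = true;
                }
            }
        }
        for (boolean f : finished) if (!f) return false;
        return true;
    }
}
```

Avoidance then grants a request only if the state that would result still passes this check; otherwise the requesting process waits.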
After an overview of Qt and its tools, a Hello World application quickly demonstrates the basic principles.
Qt is mainly famous for its intelligent concepts of signals and slots, which are explained together with examples of how to use widgets (UI controls).
At the end, the foundations of the meta-object system and its implications on memory management are explained.
This module follows up the introduction in the "Software Development with Qt" module, plus the Quickstart slides.
There are three main approaches to handling deadlocks: prevention, avoidance, and detection with recovery. Prevention methods constrain how processes request resources to ensure at least one necessary condition for deadlock cannot occur. Avoidance requires advance knowledge of processes' resource needs to decide if requests can be immediately satisfied. Detection identifies when a deadlocked state occurs and recovers by undoing the allocation that caused it.
The document discusses sequential circuits and latches. It explains that sequential circuits have memory so their outputs depend not only on current inputs but also on the stored state. Latches are described as basic memory units that can store a single bit. Different types of latches like SR, D, and JK latches are presented along with their truth tables and operation. State diagrams are introduced as a way to represent sequential circuits. The use of latches with an ALU to increment a stored value is given as an example, but the timing issue of when to disable the latches is noted as a potential problem.
This document discusses latches and flip-flops. It begins by explaining the difference between latches and flip-flops, noting that latches do not have a clock signal while flip-flops do. It then discusses several types of flip-flops - RS, Clocked RS, D, JK, and T - providing the definition, explanation, circuit diagram, and truth table for each. It also discusses several types of latches - SR, Gated SR, and D - providing the definition, explanation, and circuit diagram for each. The document aims to explain the key characteristics and workings of various latches and flip-flops.
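The level-sensitive behaviour that distinguishes latches from flip-flops can be modelled in a few lines of software. A minimal behavioural sketch of a gated D latch (the class name is a hypothetical illustration): while the enable input is high the latch is transparent and the output follows the data input; while enable is low the latch is opaque and holds its last value.

```java
// Behavioural model of a gated D latch.
class GatedDLatch {
    private boolean q;

    boolean q() { return q; }

    void apply(boolean enable, boolean d) {
        if (enable) q = d;   // transparent: output tracks the data input
        // enable low: opaque, q holds its stored value
    }
}
```

An edge-triggered flip-flop would instead sample `d` only at the moment the clock transitions, which is exactly the clocked behaviour the document contrasts with latches.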
This document discusses flip-flops and sequential circuits. It begins with an introduction to sequential circuits and flip-flops. There are several types of flip-flops discussed including SR flip-flops, clocked SR flip-flops, JK flip-flops, and T flip-flops. SR flip-flops can be constructed using either NAND or NOR gates. The document provides details on the logic diagrams, truth tables, and operation of SR flip-flops. It also discusses using a clock signal to control synchronous sequential circuits and provides examples of waveforms and exercises for SR flip-flops.
This document discusses registers, register transfer, and binary logic in digital systems. It contains the following key points:
1. Registers are used to store and hold binary information in a computer system. Register transfer involves moving data between registers, such as from an input register to a processor register.
2. The control unit uses registers to track computer sequences. Data must be stored in a register when transferred to or from an input/output device.
3. Binary logic deals with variables that can take on two values (0 or 1) and logical operations on these values. Logic gates are used to manipulate bits.
This document discusses programmable logic arrays (PLAs). It defines a PLA as a type of programmable logic device that can be used to implement combinational logic circuits. Specifically, a PLA has programmable AND gates linked to a set of programmable OR gates, allowing it to represent large numbers of logic functions. However, PLAs also have disadvantages in that they lack portability and do not provide full decoding of variables or generate all minterms. The document provides an example of logic functions F1 and F2 that could be implemented with a PLA.
The document discusses different types of counters used in logic design including decades counters, binary counters, up-down counters, BCD counters, and ring counters. It provides examples of synchronous and asynchronous binary counters, BCD counters with parallel load, and specialized counters like Johnson counters. Key aspects covered are how the counters work by toggling flip-flops to count in binary or BCD. Diagrams illustrate the circuitry and signal flow for different counter configurations.
Flipflops JK T SR D All FlipFlop SlidesSid Rehmani
Flipflops JK T SR D All FlipFlop Slides. Uploaded by SidRehmani.
Jk flip flop presentation, T flip flop presentation, D flip flop presentation, D flip flop presentation.
Follow Me For More:
facebook.com/RjSidRehmani
This document discusses digital registers and counters. It defines latches and flip flops, which are basic memory elements that can store single bits of data. Registers are groups of flip flops that can store multiple bits and are used to hold information in digital systems. Shift registers can shift data in one or both directions, while cyclic registers can shift in both directions. Parallel-in serial-out registers load data in parallel and output it serially. Counters are registers that sequence through states upon each input pulse and are used to count events. Asynchronous counters have external clocks connected to each flip flop, while synchronous counters receive a common external clock.
This document discusses latches and flip flops, which are types of sequential logic circuits. It describes the basic components and functioning of latches like SR latches, D latches, and gated latches. For flip flops, it covers SR flip flops, D flip flops, JK flip flops, and master-slave flip flops. The key differences between latches and flip flops are that latches do not have a clock input while flip flops are edge-triggered by a clock signal. Latches and flip flops are used as basic storage elements in more complex sequential circuits and in computer components like registers and RAM.
This document provides information about a presentation on digital electronics and logic design. The presentation was given by three students - MD. Toufiq Akbar Shawon, Al-Fariha Arpa, and Sabiha Jannat - for their CSE-204 course, which was taught by Amathul Hadi Shakara. The presentation covered RAM and ROM, including their introduction, history, types, how they work, and memory structures. References for additional information were also provided at the end.
Flip-flops are basic memory circuits that have two stable states and can store one bit of information. There are several types of flip-flops including SR, JK, D, and T. The SR flip-flop has two inputs called set and reset that determine its output state, while the JK flip-flop's J and K inputs can toggle its output. Flip-flops like the D and JK can be constructed from more basic flip-flops. For sequential circuits, flip-flops are made synchronous using a clock input so their state only changes at the clock edge.
A register is a group of flip-flops that can store binary information. Registers come in various types, including shift registers and counters. A shift register contains flip-flops connected such that data shifts along the line when activated. Counters increment through a sequence of states upon each input pulse. Common types include binary counters using J-K flip-flops. Registers can have additional capabilities like parallel loading to simultaneously input multiple bits or clearing to reset the count/data.
This document discusses sequential and combinational circuits. Combinational circuits are made of logic gates and their outputs depend only on the current inputs. Sequential circuits contain memory elements like flip-flops in addition to combinational logic, so their outputs depend on current inputs and the circuit's previous state. There are two types of sequential circuits: synchronous use a clock signal to control state changes while asynchronous circuits change state directly in response to input changes.
What is CPU Register? Type of CPU Register.Kapil Dev Das
Registers are the fastest memory locations that the CPU uses to temporarily store data and instructions during processing. There are different types of registers that each serve a specific purpose like the accumulator, program counter, memory address register, and instruction register. Registers are involved in the fetch, decode, and execute operations of the CPU as it processes instructions and data. The most important registers and their functions are described.
This document describes different types of synchronous counters that are triggered by a common clock signal. It discusses binary counters that count up in binary, up-down binary counters that can count up or down, BCD counters that count in decimal, and binary counters that can be parallel loaded. The key aspects covered are how the flip-flops are triggered in each counter type and the inputs and functions that control counting direction, clearing, loading, and counting.
1. A counter is a sequential logic circuit consisting of a set of flip-flops which can go through a sequence of states.
2. There are two main types of counters - asynchronous counters and synchronous counters. Asynchronous counters have propagation delay issues and synchronous counters do not.
3. Counters can be designed to count up, down, or in other sequences depending on the state transition logic and excitation table used to determine the flip-flop inputs.
AOS Lab 4: If you liked it, then you should have put a “lock” on itZubair Nabi
The document discusses concurrency issues that arise in operating systems and how xv6 handles them using locks. It begins by explaining how multiple CPUs can interfere with each other when sharing kernel data structures. It also notes that even on single-CPU systems, interrupt handlers can interfere with non-interrupt code. xv6 uses locks to address concurrency for both of these situations. The document then provides examples of race conditions that can occur without locks, such as when multiple processors concurrently add to a shared linked list. It shows how xv6 implements locks and how they are used to make operations like inserting into a linked list atomic. The document also discusses challenges like lock ordering, handling locks for interrupt handlers, and when to use coarse
2. Who am I
● Rick van Ek, rick.v.ek@xs4all.nl
http://nl.linkedin.com/in/rickvek
● Working with Oracle products since 1992
● Independent since 1996, Van Ek IT Consultancy BV
● Oracle database
● Baan IV software
● WebLogic (2012)
● Married, two teenagers, a girl and a boy.
3. Agenda
● Definition of a latch.
● What is a latch?
● What does a latch look like?
● When is it used?
● Properties of a latch.
● What information is available about a latch?
● Latch contention
● Demo?
4. Definition of a latch by Oracle
Latches are simple, low-level serialization mechanisms that coordinate multiuser access to shared data structures,
objects, and files. Latches protect shared memory resources from corruption when accessed by multiple processes.
Specifically, latches protect data structures from the following situations:
Concurrent modification by multiple sessions
Being read by one session while being modified by another session
Deallocation (aging out) of memory while being accessed
Typically, a single latch protects multiple objects in the SGA. For example, background processes such as DBWn and
LGWR allocate memory from the shared pool to create data structures. To allocate this memory, these processes use a
shared pool latch that serializes access to prevent two processes from trying to inspect or modify the shared pool
simultaneously. After the memory is allocated, other processes may need to access shared pool areas such as the library
cache, which is required for parsing. In this case, processes latch only the library cache, not the entire shared pool.
Unlike enqueue latches such as row locks, latches do not permit sessions to queue. When a latch becomes available, the
first session to request the latch obtains exclusive access to it. Latch spinning occurs when a process repeatedly requests
a latch in a loop, whereas latch sleeping occurs when a process releases the CPU before renewing the latch request.
Typically, an Oracle process acquires a latch for an extremely short time while manipulating or looking at a data
structure. For example, while processing a salary update of a single employee, the database may obtain and release
thousands of latches. The implementation of latches is operating system-dependent, especially in respect to whether and
how long a process waits for a latch.
An increase in latching means a decrease in concurrency. For example, excessive hard parse operations create
contention for the library cache latch. The V$LATCH view contains detailed latch usage statistics for each latch,
including the number of times each latch was requested and waited for.
5. What is a latch?
● A latch is a locking mechanism.
● It controls access to resources in the SGA, library cache, database buffers, and the like.
● It keeps information consistent for shared objects.
● It is extremely fast and has no intelligence and no queuing (nanoseconds).
● A latch is roughly 100 to 200 bytes in size.
● On memory objects it can happen that readers block writers and vice versa.
8. When is it used?
In principle a latch is used whenever resources are needed from the SGA, so there are many different kinds of latches (see v$latch).
Long operations use a two-phase approach:
Get latch, pin buffer, release latch, do changes, get latch, unpin buffer, release latch.
9. Properties of a latch.
● Latches are used for the period during which a memory structure is being updated.
● They have an extremely short lifetime, on the order of nanoseconds.
● They are atomic, i.e. "test and set" or "compare and swap" CPU instructions.
● Because it is a single-instruction operation, the latch is guaranteed to the acquiring process.
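The atomic "test and set" step can be sketched in a few lines of Python. This is a toy model for illustration only: the `SpinLatch` class and its method names are invented here, and a `threading.Lock` stands in for the single CPU instruction a real latch uses on SGA memory.

```python
import threading

class SpinLatch:
    """Toy latch acquired with test-and-set (illustrative model only;
    a real Oracle latch is a native atomic CPU instruction)."""
    def __init__(self):
        self._guard = threading.Lock()  # makes the test-and-set atomic here
        self._held = False

    def test_and_set(self):
        """Atomically set the latch flag and return its previous value."""
        with self._guard:
            old = self._held
            self._held = True
            return old

    def release(self):
        with self._guard:
            self._held = False

latch = SpinLatch()
print(latch.test_and_set())  # False: the first caller obtains the latch
print(latch.test_and_set())  # True: a second caller finds it already held
latch.release()
```

Because the whole acquisition is one atomic operation, there is no window in which two processes can both see the latch as free.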
10. Properties of a latch.
● A latch has no initial "sleep" but keeps trying to obtain the lock by "spinning" (a few thousand attempts).
● After spinning it is given a sleep time (which varies).
● A latch stays on the same CPU; a context switch would take too long.
● There is no intelligent behaviour; there is no time for that.
● There is no queue: when a latch becomes free, it goes to whoever grabs it first (a mob of waiters).
● If the holder no longer exists but the latch does, PMON cleans it up.
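The spin-then-sleep behaviour above can be mimicked as follows. This is a minimal sketch with invented names and values: the `spin_count` default and the doubling delays are illustrative, not Oracle's actual limits (which come from internal parameters such as _SPIN_COUNT).

```python
import time

def willing_to_wait_get(try_get, spin_count=2000, max_sleeps=8):
    """Toy willing-to-wait latch get: spin first, then sleep between
    retries with a delay that grows after each failed round."""
    delay = 0.001
    for _ in range(max_sleeps):
        for _ in range(spin_count):   # no initial sleep: spin first
            if try_get():
                return True
        time.sleep(delay)             # give up the CPU for a while
        delay *= 2                    # each wait is a bit longer
    return False

# A latch that only becomes free after a few thousand probes:
attempts = 0
def probe():
    global attempts
    attempts += 1
    return attempts > 2500

print(willing_to_wait_get(probe))  # True: acquired during the second spin round
```

The point of the structure is visible in the code: CPU is burned while spinning, and the back-off only starts once spinning has failed, which is why heavy latch contention and high CPU utilization go together.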
11. Properties of a latch.
● A latch comes in two types: "willing to wait" and "immediate".
● The "immediate" type does not wait but looks for free child latches instead.
● Latches operate at instance level (RAC).
● The implementation of latches ensures that no deadlock can arise.
● Two flavours: exclusive and shared.
● A latch covers 32 or more hash buckets.
12. Latch information.
● V$LATCH shows aggregate latch statistics for both
parent and child latches.
select latch#
, level#
, name
, gets
, misses
, sleeps
, immediate_gets
, immediate_misses
, wait_time -- "Wait microsec"
from v$latch ;
● Information from X$KSLLTR
13. Latch information.
● V$LATCH_PARENT shows aggregate latch statistics for parent latches
● Information from X$KSLLTR_PARENT
● V$LATCH_CHILDREN shows aggregate latch statistics for child latches
● Information from X$KSLLTR_CHILDREN
14. Latch information.
● V$LATCHHOLDER contains information about the current latch holders:
– PID NUMBER: identifier of the process holding the latch
– SID NUMBER: identifier of the session that owns the latch
– LADDR RAW(4 | 8): latch address
– NAME VARCHAR2(64): name of the latch being held
– GETS NUMBER: number of times the latch was obtained in either wait mode or no-wait mode
● Based on X$KSUPRLAT
15. Latch information.
● What is the latch protecting? Example: cache buffers chains.
● In v$latch_children you find addr.
● In x$bh you find hladdr, file#, dbablk, state, TCH.
● v$latch_children.addr = x$bh.hladdr
● TCH = touch count (updated every 3 seconds)
16. Misleading information.
● There is a pitfall: a latch lives in a cycle of nanoseconds.
● The information (TCH) in x$bh is only refreshed every 3 seconds.
● e.g. touched once every 3 s for 24 hours:
TCH = 28800 (86400/3 = 28800)
● touched many times during 2 s of every 10 s for 24 hours (TCH increased by 1 per interval):
TCH = 8640 (86400/10 = 8640)
● The TCH counter is used for the LRU process.
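The pitfall can be made concrete with the slide's own arithmetic: because TCH moves at most once per refresh interval, it counts intervals in which the buffer was touched, not actual touches.

```python
# TCH in x$bh is refreshed at most once per 3 seconds, so it counts
# refresh intervals with activity, not individual latch gets.
SECONDS_PER_DAY = 86400

# Touched continuously all day: TCH can only grow once per 3-second tick.
tch_continuous = SECONDS_PER_DAY // 3   # 28800

# Touched heavily, but only during 2 s of every 10 s: TCH grows once per
# 10-second burst, even though each burst may cause far more latch gets.
tch_bursty = SECONDS_PER_DAY // 10      # 8640

print(tch_continuous, tch_bursty)  # 28800 8640
```

So a bursty, heavily contended buffer can show a much lower TCH than a calmly but continuously touched one.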
17. What is contention?
● When a process tries to take a latch that is already held, it "spins" and tries again.
● The maximum number of spins is set by _SPIN_COUNT (dependent on the CPU count).
● After that it waits a few hundredths of a second and tries again.
● After each attempt the wait time increases slightly.
● CPU utilization climbs during this process.
● "Latch contention is a symptom, not a root cause!"
18. Specific latch contention.
● Redo copy / redo allocation latch
● Incorrectly configured redo log files/buffers.
● Library cache latch
● Literals instead of bind variables
● Cache buffers chains latch
● Hot blocks.
● Shared pool latch
● A large pool that is too big and/or a reserved area that is missing (or too small).
19. How do you identify latch contention?
● Ratio-based identification.
● "willing-to-wait" hit ratio = (GETS - MISSES)/GETS
● "no wait" hit ratio = (IMMEDIATE_GETS - IMMEDIATE_MISSES)/IMMEDIATE_GETS
● See also AWR/Spotlight/Lab128 etc.
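The two ratios translate directly into code. A minimal sketch, using made-up example figures (the function names are invented; the columns come from v$latch):

```python
def willing_to_wait_hit_ratio(gets, misses):
    """'willing-to-wait' hit ratio = (GETS - MISSES) / GETS."""
    return (gets - misses) / gets

def no_wait_hit_ratio(immediate_gets, immediate_misses):
    """'no wait' hit ratio =
    (IMMEDIATE_GETS - IMMEDIATE_MISSES) / IMMEDIATE_GETS."""
    return (immediate_gets - immediate_misses) / immediate_gets

# Made-up figures: 1,000,000 willing-to-wait gets, of which 500 missed.
print(willing_to_wait_hit_ratio(1_000_000, 500))  # 0.9995
```

A ratio close to 1 means almost every get succeeded without a miss; a visibly lower value on a specific latch is a hint to look further with the wait interface.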
20. How do you identify latch contention?
● Wait-interface-based techniques.
● Measure the impact of the latches on your overall performance.
● Look at how much time is spent waiting on a latch:
– v$system_event
– v$sysstat
– v$latch
21. Who holds this latch?
● Latch contention is a symptom, so:
● Follow/understand the process
● v$latchholder => sid, pid
● v$session => sql_address
● v$sqlarea => sql_text
● v$latch_children => addr
● x$bh => hladdr
● Use tools:
● detection
– AWR
– Spotlight
– Lab128
● investigation
– LatchProf / LatchProfX (session)
22. Latch investigation
● LatchProf
● Based on v$ views
● LatchProfX
● Based on x$ views
● More detail
● Requires access to x$ views
● Parameter: _ultrafast_latch_statistics
23. Latch investigation
● Parameter 1 specifies which columns from V$LATCHHOLDER to report and
group by. In the case below I just want to report latch holds by latch name (and not
even break it down by SID for starters).
● Parameter 2 specifies which SIDs to monitor. In the case below, I am interested in
any SID which holds a latch (%).
● Parameter 3 specifies which latches to monitor. This can be set either to latch name
or latch address in memory. All latches (%).
● Parameter 4 specifies how many times to sample V$LATCHHOLDER. The
sampling speed depends on your server CPU/memory bus speed and the value of
processes parameter. You should start from lower number like 1000 and adjust it so
that LatchProf would complete its sampling in a couple of seconds, and that is
usually enough for diagnosing ongoing latch contention problems. You shouldn't
keep sampling for long periods since LatchProf runs constantly on the CPU.
24. Latch investigation
● Name - Latch name
● Held - During how many samples out of total samples (100000) the
particular latch was held by somebody
● Gets - How many latch gets against that latch were detected during
LatchProf sampling
● Held % - How much % of time was the latch held by somebody during the
sampling. This is the main column you want to be looking at in order to see
who/what holds the latch the most (the latchprof output is reverse-ordered by that
column)
● Held ms - How many milliseconds in total was the latch held during the
sampling
● Avg hold ms - Average latch hold time in milliseconds (normally latches are held for a few to a few hundred microseconds)
25. Conclusion
● Latch contention and CPU utilization go hand in hand.
● It can be caused by CPU starvation.
● The latch holder is the route to the source.
● It can also be used to detect hot blocks.
● It shows the impact of using literals.
● It affects scalability.
● There is a lot of information, but it is very scattered.
27. The mutex in brief
● Successor to the latch
● Even smaller and faster
● Stores even less information
● Introduced in Oracle 10g
● Each hash bucket has its own mutex
● Scales better
34. What does a latch look like?
Memory build-up:
- Arrays
  No addressing
  Fixed sizes
  Segmented arrays - dynamic allocation
  - hold the address of the next in the list
- Pointers
  Memory location
  Hold the address of an interesting piece of memory
- Linked lists
  List of associated data
  Varying in shape/size
  Frequently/heavily used
  Doubly linked lists - forward address
  - backward address
- Hash table
  Hash value (bucket)
  Different values hash to the same bucket
  Not many items in the same bucket
  The hash algorithm spreads data evenly
  An object always maps to the same bucket
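The hash-table layout above can be sketched in Python. This is a conceptual model with made-up sizes (N_BUCKETS, BUCKETS_PER_LATCH are illustrative constants, not Oracle's actual values):

```python
N_BUCKETS = 1024          # illustrative size, not Oracle's actual value
BUCKETS_PER_LATCH = 32    # a single latch typically covers a range of buckets

def bucket_for(obj_key):
    # a good hash algorithm spreads data evenly over the buckets,
    # and the same object always maps to the same bucket
    return hash(obj_key) % N_BUCKETS

def latch_for(obj_key):
    # several adjacent buckets share one protecting latch
    return bucket_for(obj_key) // BUCKETS_PER_LATCH

# the same object always hashes to the same bucket...
assert bucket_for(("file 1", 42)) == bucket_for(("file 1", 42))
# ...while different objects may still collide in one bucket,
# which is fine as long as chains stay short
```

This also illustrates why one latch protects multiple objects: contention on a single "hot" bucket shows up as contention on the latch covering its whole range.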
36. When is it used?
In principle a latch is used whenever resources are
needed from the SGA. There are therefore many
different kinds of latches (see v$latch).
Two-phase action for long operations, i.e.:
Get latch, pin buffer, release latch, do changes, get
latch, unpin buffer, release latch.
Since “doing something” with the buffer content can take a relatively long time, Oracle
often adopts a two-step strategy to latching so that it doesn’t have to hold a latch
while working. There are some operations that can be completed while holding the
latch, but Oracle often uses the following strategy:
1. Get the latch.
2. Find and pin the buffer.
3. Drop the latch.
4. Do something with the buffer content.
5. Get the latch.
6. Unpin the buffer.
7. Drop the latch.
By Jonathan Lewis.
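The steps above can be sketched as follows. The Latch/Buffer objects are illustrative stand-ins, not Oracle internals:

```python
def process_buffer(latch, buf, work):
    """Sketch of the two-step latch/pin strategy quoted above: the
    latch is only held briefly, while the pin protects the buffer
    during the potentially long 'do something' phase."""
    latch.get()
    buf.pin()            # the pin keeps the buffer from being aged out
    latch.drop()         # don't hold the latch while working
    work(buf)            # may take a relatively long time
    latch.get()
    buf.unpin()
    latch.drop()

# a tiny trace-recording stand-in to show the call order
class Trace:
    def __init__(self, log):
        self.log = log
    def __getattr__(self, name):
        return lambda *args: self.log.append(name)

log = []
t = Trace(log)
process_buffer(t, t, lambda b: log.append("work"))
print(log)  # ['get', 'pin', 'drop', 'work', 'get', 'unpin', 'drop']
```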
37. Properties of a latch.
● Latches are used for the period during which a
memory structure is being updated.
● They have an extremely short lifetime, on the
order of nanoseconds.
● They are atomic, i.e. based on “test and set” or
“compare and swap” CPU instructions.
● Because it is a single-instruction operation,
acquisition is guaranteed for the process concerned.
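The atomic “test and set” can be modeled in Python. In this sketch a threading.Lock stands in for the atomicity of the CPU instruction; it illustrates the concept, not Oracle's implementation:

```python
import threading

class Latch:
    """A latch as a single flag toggled by test-and-set."""
    def __init__(self):
        self._flag = 0
        self._atomic = threading.Lock()  # emulates instruction-level atomicity

    def try_get(self):
        # test-and-set: atomically read the old value and write 1;
        # on real hardware this is a single CPU instruction
        with self._atomic:
            old, self._flag = self._flag, 1
        return old == 0   # True means we acquired the latch

    def release(self):
        self._flag = 0

latch = Latch()
assert latch.try_get() is True    # first requester wins
assert latch.try_get() is False   # latch already held
latch.release()
assert latch.try_get() is True
```

Because the get is a single atomic step, there is no window in which two processes can both see the latch as free.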
38. Properties of a latch.
● A latch has no initial “sleep” but keeps trying to obtain the
lock by “spinning” (a few thousand attempts).
● After spinning it gets a sleep time (which varies).
● A latch stays on the same CPU; a context switch would take
too long.
● There is no intelligent behavior; there is no time for that.
● There is no queue: when a latch becomes free it goes to the
first taker (a mob of waiters).
● If the holder no longer exists but the latch does, PMON
cleans it up.
No intelligence is needed for multiple CPUs, because the spinning keeps the process on
the same CPU.
The sleep between spins depends on the number of CPUs.
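The spin-then-sleep behavior described above can be sketched like this. SPIN_COUNT and the sleep times are illustrative stand-ins; Oracle's _SPIN_COUNT and sleep schedule are version- and platform-dependent:

```python
import time

SPIN_COUNT = 2000      # illustrative stand-in for _SPIN_COUNT

def willing_to_wait_get(try_get, max_sleeps=10):
    """Spin a few thousand times without yielding the CPU; only
    after that, sleep, and let the sleep time grow each round."""
    sleep_s = 0.01                       # first sleep: one centisecond
    for _ in range(max_sleeps):
        for _ in range(SPIN_COUNT):      # busy spinning on the same CPU
            if try_get():
                return True
        time.sleep(sleep_s)              # give up the CPU for a while
        sleep_s *= 2                     # wait time increases each round
    return False

# a latch that only becomes free after a few thousand attempts
attempts = 0
def flaky_try_get():
    global attempts
    attempts += 1
    return attempts > 3000

assert willing_to_wait_get(flaky_try_get) is True
```

The busy spin is exactly why latch contention shows up as rising CPU utilization: the waiter burns CPU instead of being descheduled.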
39. Properties of a latch.
● A latch comes in two types: “willing to wait” and
“immediate”.
● The “immediate” type does not wait but goes
looking for free child latches.
● Latches operate at instance level. (RAC)
● The implementation of latches ensures that no
deadlock can arise.
● Two flavors: exclusive and shared.
● A latch covers 32 or more hash buckets.
An immediate get does not wait but immediately looks for another path to obtain the lock anyway.
Exclusive also means exclusive: there can be only one user/waiter of the latch.
40. Latch information.
● V$LATCH shows aggregate latch statistics for both
parent and child latches.
select latch#
, level#
, name
, gets
, misses
, sleeps
, immediate_gets
, immediate_misses
, wait_time -- "Wait microsec"
from v$latch ;
● Information from X$KSLLTR
V$LATCH
Shows aggregate latch statistics for both parent and child latches, grouped by latch name. Individual
parent and child latch statistics are broken down in the views:
V$LATCH_PARENT
V$LATCH_CHILDREN
Key information in these views is:
GETS - Number of successful willing-to-wait requests for a latch.
MISSES - Number of times an initial willing-to-wait request was unsuccessful.
SLEEPS - Number of times a process waited for a latch after an initial willing-to-wait
request.
IMMEDIATE_GETS - Number of successful immediate requests for each latch.
IMMEDIATE_MISSES - Number of unsuccessful immediate requests for each latch.
V$LATCHNAME
contains information about decoded latch names for the latches shown in V$LATCH.
Oracle versions might differ in the latch# assigned to the existing latches. In order to obtain information
for the specific version, query as follows:
column name format a40 heading 'LATCH NAME'
select latch#, name from v$latchname;
V$LATCHHOLDER
contains information about the current latch holders.
(Metalink [ID 22908.1])
41. Latch information.
● V$LATCH_PARENT shows aggregate latch
statistics for parent latches
● Information from X$KSLLTR_PARENT
● V$LATCH_CHILDREN shows aggregate latch
statistics for children latches
● Information from X$KSLLTR_CHILDREN
42. Latch information.
● V$LATCHHOLDER This view contains
information about the current latch holders.
– PID NUMBER Identifier of the process holding
the latch
– SID NUMBER Identifier of the session that owns
the latch
– LADDR RAW(4 | 8) Latch address
– NAME VARCHAR2(64) Name of the latch being
held
– GETS NUMBER Number of times that the latch
was obtained in either wait mode or no-wait
mode
● Based on X$KSUPRLAT
43. Latch information.
● What is the latch protecting? e.g. cache buffers chains
● In v$latch_children you find addr
● In x$bh you find hladdr, file#, dbablk, state,
TCH
● v$latch_children.addr = x$bh.hladdr
● TCH = touch count (updated every 3 seconds)
This latch has a memory address, identified by the ADDR column.
SELECT
addr,
sleeps
FROM
v$latch_children c,
v$latchname n
WHERE
n.name='cache buffers chains' and
c.latch#=n.latch# and
sleeps > 100
ORDER BY sleeps
/
Use the value in the ADDR column joined with the X$BH view to identify the blocks
protected by this latch. For example, given the address
(V$LATCH_CHILDREN.ADDR) of a heavily contended latch, this query retrieves the file and
block numbers:
SELECT file#, dbablk, class, state, TCH
FROM X$BH
WHERE HLADDR='address of latch';
X$BH.TCH is a touch count for the buffer. A high value for X$BH.TCH indicates a hot
block.
44. Misleading information.
● There is a pitfall: a latch lives in a cycle of
nanoseconds.
● The information (TCH) in x$bh is refreshed every 3
seconds.
● e.g. accessed once every 3 seconds for 24 hours:
TCH = 28800 (86400/3 = 28800)
● accessed many times for 2 seconds out of every 10
seconds for 24 hours (TCH increased by 1 per burst):
TCH = 8640 (86400/10 = 8640)
● The TCH counter is used by the LRU process
But still, it would not always be reliable for another reason – touch counts are incremented only after 3
seconds have passed since the last increment! This factor has been coded in to avoid situations such as a short
but crazy nested loop join hammering a single buffer hundreds of thousands of times in a few seconds
and then finishing. The buffer wouldn’t be hot anymore but the touch count would be hundreds of
thousands due to a single SQL execution. So, unless 3 seconds (of SGA internal time) have passed since the
last TCH update, the touch counts are not increased during buffer access.
This time is controlled by SGA variable kcbatt_ by the way:
SQL> oradebug dumpvar sga kcbatt_
ub4 kcbatt_ [3C440F4, 3C440F8) = 00000003
This 3-second delay leaves us in the following situation; let’s say there are 2 blocks protected by a CBC child
latch:
One block has been accessed once every 3 seconds for 24 hours in a row. A block accessed once per 3
seconds is definitely not a hot block, but its touchcount would be around 28800 (86400 seconds per 24
hours / 3 = 28800).
And there is another block which is accessed crazily for 2 seconds in a row and this happens every 10
seconds. 2 seconds of consecutive access would increase the touchcount by 1. If such access pattern
has been going on every 10 seconds over last 24 hours, then the touch count for that buffer would be
86400 / 10 = 8640.
In the first case we can have a very cold block with TCH = 28800 and in second case a very hot block with
TCH = 8640 only and this can mislead DBAs to fixing the wrong problem.
(Tanel Poder)
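Tanel Poder's two scenarios can be reproduced with a small simulation of the 3-second increment rule. This is a sketch of the rule as described above, not of Oracle's actual code:

```python
def touch_count(access_times, min_gap=3):
    """TCH only increments when at least min_gap seconds of (SGA
    internal) time have passed since the previous increment."""
    tch, last_inc = 0, None
    for t in access_times:
        if last_inc is None or t - last_inc >= min_gap:
            tch += 1
            last_inc = t
    return tch

DAY = 86400
# "cold" block: touched once every 3 seconds, all day long
cold = touch_count(range(0, DAY, 3))
# "hot" block: hammered for 2 seconds in a row, every 10 seconds
hot = touch_count(t for start in range(0, DAY, 10)
                    for t in (start, start + 1, start + 2))
print(cold, hot)   # 28800 8640 -- the "cold" block looks far hotter
```

Each 2-second burst only earns one increment, so the genuinely hot block ends up with the lower TCH, which is exactly the trap for DBAs described above.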
45. What is contention?
● If a latch is requested but it is already held, the process
“spins” and tries again.
● The maximum number of “spins” is set in
_SPIN_COUNT (depends on the CPU count).
● After that it waits for a few hundredths of a second and
tries again.
● After each attempt the wait time increases slightly.
● CPU utilization rises during this process.
● “Latch contention is a symptom, not a root cause!”
What causes latch contention?
If a required latch is busy, the process requesting it spins, tries again and, if it is still unavailable, spins again.
The loop is repeated up to a maximum number of times determined by the hidden initialization parameter
_SPIN_COUNT. The default value of the parameter is automatically adjusted when the machine's CPU count
changes, provided that the default was used. If the parameter was explicitly set then there is no change. It is
not usually recommended to change the default value for this parameter.
If after this entire loop the latch is still not available, the process must yield the CPU and go to sleep. Initially
it sleeps for one centisecond. This time is doubled in every subsequent sleep. This causes a slowdown to
occur and results in additional CPU usage until a latch is available. The CPU usage is a consequence of the
"spinning" of the process. "Spinning" means that the process continues to look for the availability of the latch
after certain intervals of time, during which it sleeps.
46. Specific latch contention.
● Redo copy/redo allocation latch
● Misconfigured redo logfiles/buffers.
● Library cache latch
● Literals instead of bind variables
● Cache buffers chains latch
● Hot blocks.
● Shared pool latch.
● An overly large pool and/or no (or too small a)
reserved area.
CAUSES OF CONTENTION FOR SPECIFIC LATCHES
The latches that most frequently affect performance are those protecting the buffer
cache, areas of the shared pool and the redo buffer.
• Library cache latches: These latches protect the library cache in which sharable
SQL is stored. In a well-defined application there should be little or no
contention for these latches, but in an application that uses literals instead of
bind variables (for instance “WHERE surname=’HARRISON’” rather than
“WHERE surname=:surname”), library cache contention is common.
• Redo copy/redo allocation latches: These latches protect the redo log buffer,
which buffers entries made to the redo log. Recent improvements
(from Oracle 7.3 onwards) have reduced the frequency and severity of
contention for these latches.
• Shared pool latches: These latches are held when allocations or de-allocations
of memory occur in the shared pool. Prior to Oracle 8.1.7, the most common
cause of shared pool latch contention was an overly large shared pool and/or
failure to make use of the reserved area of the shared pool.
• Cache buffers chain latches: These latches are held when sessions read or
write to buffers in the buffer cache. In Oracle8i, there are typically a very large
number of these latches each of which protects only a handful of blocks.
Contention on these latches is typically caused by concurrent access to a very
“hot” block and the most common type of such a hot block is an index root or
branch block (since any index based query must access the root block).
47. How do you identify latch contention?
● Ratio-based identification.
● "willing-to-wait" hit ratio = (GETS -
MISSES)/GETS
● "no-wait" hit ratio = (IMMEDIATE_GETS -
IMMEDIATE_MISSES)/IMMEDIATE_GETS
● See also AWR/Spotlight/Lab128 etc.
select name
, gets, misses
, (gets - misses)/gets ratio
from v$latch
where gets > 0;
select name
, immediate_gets
, immediate_misses
, (immediate_gets -
immediate_misses)/immediate_gets
ratio
from v$latch
where immediate_gets > 0 ;
48. How do you identify latch contention?
● Wait-interface-based techniques
● Measure the impact of latches on your overall
performance.
● Look at how much time is spent waiting
on a latch.
– v$system_event
– v$sysstat
– v$latch
A better approach to estimating the impact of latch contention is to consider the relative
amount of time being spent waiting for latches. The following query gives us some
indication of this:
SELECT event
, time_waited
, round(time_waited*100/ SUM (time_waited) OVER(),2) wait_pct
FROM (SELECT event, time_waited
FROM v$system_event
WHERE event NOT IN ('Null event',
'client message',
'rdbms ipc reply',
'smon timer',
'rdbms ipc message',
'PX Idle Wait',
'PL/SQL lock timer',
'file open',
'pmon timer',
'WMON goes to sleep',
'virtual circuit status',
'dispatcher timer',
'SQL*Net message from client',
'parallel query dequeue wait',
'pipe get')
UNION
(SELECT NAME, VALUE
FROM v$sysstat
WHERE NAME LIKE 'CPU used when call started'))
ORDER BY 2 DESC ;
select name, gets, sleeps,
sleeps*100/sum(sleeps) over() sleep_pct, sleeps*100/gets
sleep_rate
from v$latch where gets>0
order by sleeps desc;
49. Who holds this latch?
● Latch contention is a symptom, so:
● Follow/understand the process
● v$latchholder => sid, pid
● v$session => sql_address
● v$sqlarea => sql_text
● v$latch_children => addr
● X$BH => hladdr
● Use tools:
● detection
– AWR
– Spotlight
– lab128
● investigation
– Latchprof / latchprofx (session)
Latch contention is not always bad. It simply means that a lot of resources are needed.
It does, however, affect the scalability of an application.
50. Latch investigation
● Latchprof
● Based on v$ views
● Latchprofx
● Based on x$ views
● More detail
● Requires access to x$ views
● Parameter: _ultrafast_latch_statistics
MUTEXES, PART 1
A brief comment about mutexes is necessary at this point because a mutex is very
similar to a latch in the way it is implemented and used. Mutexes were introduced in
the library cache processing in Oracle 10.2 as a step toward eliminating the use of
pins (which I will discuss in conjunction with library cache locking toward the end of
the following section). Essentially a mutex is a “private mini-latch” that is part of the
library cache object. This means that instead of a small number of latches covering a
large number of objects—with the associated risk of competition for latches—we now
have individual mutexes for every single library cache hash bucket, and two mutexes
(one to replace the KGL pin, the other related in some way to handling dependencies)
on every parent and child cursor, which should improve the scalability of frequently
executed statements. The downside to this change is that we have less information if
problems arise. The support code for latching contains a lot of information about who,
what, where, when, why, how often, and how much contention appeared. The code
path for operating mutexes is shorter, and captures less of this information.
Nevertheless, once you’ve seen how (and why) Oracle operates locking and pinning
in the library cache, you will recognize the performance benefits of mutexes.
Read the full story: Oracle Core: Essential Internals for DBAs and Developers, by
Jonathan Lewis.
51. Latch onderzoek
● Parameter 1 specifies which columns from V$LATCHHOLDER to report and
group by. In the case below I just want to report latch holds by latch name (and not
even break it down by SID for starters).
● Parameter 2 specifies which SIDs to monitor. In the case below, I am interested in
any SID which holds a latch (%).
● Parameter 3 specifies which latches to monitor. This can be set either to latch name
or latch address in memory. All latches (%).
● Parameter 4 specifies how many times to sample V$LATCHHOLDER. The
sampling speed depends on your server CPU/memory bus speed and the value of the
processes parameter. You should start with a lower number like 1000 and adjust it so
that LatchProf completes its sampling in a couple of seconds; that is
usually enough for diagnosing ongoing latch contention problems. You shouldn't
keep sampling for long periods since LatchProf runs constantly on the CPU.
52. Latch onderzoek
● Name - Latch name
● Held - During how many samples out of total samples (100000) the
particular latch was held by somebody
● Gets - How many latch gets against that latch were detected during
LatchProf sampling
● Held % - How much % of time was the latch held by somebody during the
sampling. This is the main column you want to be looking at in order to see
who/what holds the latch the most (the latchprof output is reverse-ordered by that
column)
● Held ms - How many milliseconds in total was the latch held during the
sampling
● Avg hold ms - Average latch hold time in milliseconds (normally latches are held
from a few to few hundred microseconds)
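The relationship between these columns can be illustrated with a small calculation. The function and its assumption (samples spread evenly over the sampling time) are mine, not LatchProf's actual code:

```python
def latchprof_columns(held, total_samples, sampling_ms, gets):
    """Derive Held %, Held ms and Avg hold ms from raw counts:
    'held' samples out of 'total_samples' taken over 'sampling_ms'
    of wall-clock time, with 'gets' distinct latch gets observed."""
    held_pct = held * 100.0 / total_samples
    held_ms = sampling_ms * held / total_samples
    avg_hold_ms = held_ms / gets if gets else 0.0
    return held_pct, held_ms, avg_hold_ms

# 500 of 100000 samples held, over a 2-second sampling run, 50 gets
print(latchprof_columns(500, 100000, 2000.0, 50))  # (0.5, 10.0, 0.2)
```

An average hold of 0.2 ms would already be suspiciously long for a latch, given that latches are normally held for a few to a few hundred microseconds.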
53. Conclusion
● Latch contention and CPU utilization go together
● Can be caused by CPU starvation.
● The latch holder is the route to the source
● Can also be used to detect hot blocks.
● Shows the impact of using literals.
● Affects scalability
● A lot of information, but very scattered.
55. The mutex in brief
● Successor to the latch
● Even smaller and faster
● Stores even less information
● Introduced in Oracle 10g
● Every hash bucket has its own mutex
● More scalable