Transaction management provides concurrency control and recovery in databases. Concurrency control ensures that transactions execute correctly and reliably despite concurrent access, through techniques such as locking. Recovery keeps the database fault tolerant by undoing the effects of aborted transactions and redoing those of committed ones, using write-ahead logging to survive failures. The ARIES protocol analyzes the log, redoes logged updates to dirty pages, and undoes uncommitted transactions to restore a consistent database state after a crash.
Transaction & Concurrency Control
1. Transaction Management Overview
Chapter 16
"There are three side effects of acid: enhanced long-term memory, decreased short-term memory, and I forget the third."
- Timothy Leary
Concurrency Control & Recovery
• Concurrency Control
– Provides correct and highly available access to data in the presence of concurrent access by large and diverse user populations.
• Recovery
– Ensures the database is fault tolerant, and not corrupted by software, system, or media failure.
– 7x24 access to mission-critical data.
• The existence of CC&R allows applications to be written without explicit concern for concurrency and fault tolerance.
2. Roadmap
• Overview (Today)
• Concurrency Control
• Recovery
Structure of a DBMS (layers, top to bottom):
• Query Optimization and Execution
• Relational Operators
• Files and Access Methods
• Buffer Management
• Disk Space Management
• DB
The lower layers (Files and Access Methods, Buffer Management, Disk Space Management) must consider concurrency control and recovery; they are where the Transaction, Lock, and Recovery Managers operate.
3. Transactions and Concurrent Execution
• Transaction - the DBMS's abstract view of a user program (or activity):
– A sequence of reads and writes of database objects.
– A unit of work that must commit or abort as a single atomic unit.
• The Transaction Manager controls the execution of transactions.
• A user program may carry out many operations on the data retrieved from the database, but the DBMS is only concerned with what data is read from and written to the database.
• Concurrent execution of multiple transactions is essential for good performance:
– The disk is the bottleneck (slow, frequently used).
– Must keep the CPU busy with many queries.
– Better response time.
ACID Properties of Transaction Executions
• Atomicity: All actions in the Xact happen, or none happen.
• Consistency: If each Xact is consistent, and the DB starts consistent, it ends up consistent.
• Isolation: Execution of one Xact is isolated from that of other Xacts.
• Durability: If a Xact commits, its effects persist.
4. Atomicity and Durability
• A transaction might commit after completing all its actions, or it could abort (or be aborted by the DBMS) after executing only some of them. The system may also crash while the transaction is in progress.
• Important properties:
– Atomicity: either all of a transaction's actions are executed, or none are.
– Durability: the effects of committed transactions must survive failures.
• The DBMS ensures both by logging all actions:
– Undo the actions of aborted/failed transactions.
– Redo the actions of committed transactions that had not yet been propagated to disk when the system crashed.
Transaction Consistency
• A transaction performed on an internally consistent database will leave the database in an internally consistent state.
• Consistency of the database is expressed as a set of declarative Integrity Constraints:
– CREATE TABLE/ASSERTION statements
• E.g., each CSC434 student can register in only one project group, and each group must have 2 students.
– Application-level constraints
• E.g., the total balance of a customer's bank accounts must stay the same during a transfer from savings to checking.
• Transactions that violate ICs are aborted.
5. Isolation (Concurrency)
• Concurrency is achieved by the DBMS, which interleaves the actions (reads/writes of DB objects) of various transactions.
• The DBMS ensures that transactions do not step on one another.
• Each transaction executes as if it were running by itself:
– A transaction's behavior is not affected by other transactions accessing the same database concurrently.
– The net effect must be identical to executing all the transactions in some serial order.
– Users can reason about a transaction without considering the effects of other concurrently executing transactions.
Example
• Consider two transactions (Xacts):
T1: BEGIN A=A+100, B=B-100 END
T2: BEGIN A=1.06*A, B=1.06*B END
• The 1st Xact transfers $100 from B's account to A's.
• The 2nd credits both accounts with 6% interest.
• Assume A and B each start with $1000. What are the legal outcomes of running T1 and T2?
– T1;T2: A=1166, B=954
– T2;T1: A=1160, B=960
• In either case, A+B = $2000 * 1.06 = $2120.
• If both are submitted together, there is no guarantee that T1 will execute before T2 or vice versa.
6. Example (Contd.)
• Consider a possible interleaved schedule (time runs left to right):
T1: A=A+100,            B=B-100
T2:          A=1.06*A,           B=1.06*B
This is OK (same result as T1;T2). But what about:
T1: A=A+100,                      B=B-100
T2:          A=1.06*A, B=1.06*B
• Result: A=1166, B=960; A+B = 2126, so the bank loses $6!
• The DBMS's view of the second schedule:
T1: R(A), W(A),                          R(B), W(B)
T2:             R(A), W(A), R(B), W(B)
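The two interleavings above can be checked with a short simulation. This is an illustration only, not DBMS code: plain Python dict entries stand in for the database objects A and B, and interest is rounded to whole dollars.

```python
def run(schedule, a=1000, b=1000):
    """Apply a list of (xact, operation) steps to accounts A and B."""
    state = {"A": a, "B": b}
    for _, op in schedule:
        op(state)
    return state["A"], state["B"]

# T1 transfers $100 from B to A; T2 credits 6% interest to both.
t1_a = lambda s: s.update(A=s["A"] + 100)
t1_b = lambda s: s.update(B=s["B"] - 100)
t2_a = lambda s: s.update(A=round(1.06 * s["A"]))
t2_b = lambda s: s.update(B=round(1.06 * s["B"]))

# First interleaving on the slide: equivalent to serial T1;T2.
ok = [("T1", t1_a), ("T2", t2_a), ("T1", t1_b), ("T2", t2_b)]
# Second interleaving: T2 reads B before T1's transfer touches it.
bad = [("T1", t1_a), ("T2", t2_a), ("T2", t2_b), ("T1", t1_b)]

print(run(ok))   # (1166, 954) -> A+B = 2120, same as T1;T2
print(run(bad))  # (1166, 960) -> A+B = 2126, the bank loses $6
```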
Scheduling Transactions
• Serial schedule: a schedule that does not interleave the actions of different transactions.
• Equivalent schedules: for any database state, the effect (on the set of objects in the database) of executing the first schedule is identical to the effect of executing the second.
• Serializable schedule: a schedule that is equivalent to some serial execution of the transactions.
(Note: if each transaction preserves consistency, every serializable schedule preserves consistency.)
7. Anomalies with Interleaved Execution
• Reading Uncommitted Data (WR conflicts, "dirty reads"):
T1: R(A), W(A),                  R(B), W(B), Abort
T2:             R(A), W(A), C
• Unrepeatable Reads (RW conflicts):
T1: R(A),                R(A), W(A), C
T2:       R(A), W(A), C
Anomalies (Continued)
• Overwriting Uncommitted Data (WW conflicts):
T1: W(A),             W(B), C
T2:       W(A), W(B), C
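The WR schedule above can be mimicked in a few lines of ordinary Python to see why dirty reads are dangerous. The variables here are a toy stand-in for database state, not real DBMS machinery.

```python
# T1 updates A but later aborts; T2 reads the uncommitted value,
# bases its own committed write on it, and T1's naive undo then
# wipes out T2's committed effect.
db = {"A": 1000}

t1_before = db["A"]       # T1 remembers the before-image of A
db["A"] = db["A"] + 100   # T1: W(A) -> 1100, uncommitted
t2_read = db["A"]         # T2: R(A) -> dirty read of 1100
db["A"] = t2_read * 2     # T2: W(A) -> 2200, then T2 commits
db["A"] = t1_before       # T1 aborts: restoring its before-image
                          # also destroys committed T2's update
print(db["A"], t2_read)   # 1000 1100
```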
8. Lock-Based Concurrency Control
• Here's a simple way to allow concurrency while avoiding the anomalies just described…
• Two-Phase Locking (2PL) protocol:
– Each Xact must obtain an S (shared) lock on an object before reading it, and an X (exclusive) lock on an object before writing it.
– If an Xact holds an X lock on an object, no other Xact can get a lock (S or X) on that object.
– The system can obtain these locks automatically.
– Two phases: acquiring locks, then releasing them.
• No lock is ever acquired after one has been released.
• A "growing phase" followed by a "shrinking phase".
• The Lock Manager keeps track of lock requests and grants locks on database objects when they become available.
Strict 2PL
• 2PL allows only serializable schedules but is subject to cascading aborts.
• Example: rollback of T1 requires rollback of T2!
T1: R(A), W(A),                         Abort
T2:             R(A), W(A), R(B), W(B)
• To avoid cascading aborts, use Strict 2PL.
• Strict Two-Phase Locking (Strict 2PL) protocol:
– Same as 2PL, except:
– All locks held by a transaction are released only when the transaction completes.
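The lock rules on these two slides can be sketched as a tiny in-memory lock table. This is a single-threaded illustration with no blocking, queuing, or deadlock handling, and the LockTable name and its methods are invented for this sketch: acquire enforces the S/X compatibility rules, and release_all models Strict 2PL's release-everything-at-completion.

```python
class LockTable:
    def __init__(self):
        self.locks = {}  # object -> (mode, set of holding xacts)

    def acquire(self, xact, obj, mode):
        """Grant an S or X lock if compatible; return False if the
        caller would have to wait."""
        held = self.locks.get(obj)
        if held is None:
            self.locks[obj] = (mode, {xact})
            return True
        held_mode, holders = held
        if holders == {xact}:  # sole holder: re-grant, upgrading S->X if asked
            self.locks[obj] = ("X" if "X" in (mode, held_mode) else "S", holders)
            return True
        if mode == "S" and held_mode == "S":  # S locks are compatible
            holders.add(xact)
            return True
        return False  # any conflict involving X: caller must wait

    def release_all(self, xact):
        """Strict 2PL: release every lock only at commit/abort."""
        for obj in list(self.locks):
            _, holders = self.locks[obj]
            holders.discard(xact)
            if not holders:
                del self.locks[obj]
```

For example, two readers can share an S lock on the same object, but a writer's X request is refused until both release their locks at completion.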
9. Introduction to Crash Recovery
• Recovery Manager:
– When the DBMS is restarted after a crash, the recovery manager must bring the database to a consistent state.
– Ensures transaction atomicity and durability.
– Undoes the actions of transactions that did not commit.
– Redoes the actions of committed transactions lost to system failures and media failures (corrupted disk).
• The Recovery Manager maintains log information during normal execution of transactions for use during crash recovery.
The Log
• The log consists of "records" that are written sequentially.
– Typically chained together by Xact id.
– The log is often duplexed and archived on stable storage.
• The log stores modifications to the database:
– If Ti writes an object, write a log record for the update:
• If UNDO is required, the record needs the "before image".
• If REDO is required, the record needs the "after image".
– When Ti commits/aborts, write a log record indicating this action.
• The need for UNDO and/or REDO depends on the Buffer Manager:
– UNDO is required if uncommitted data can overwrite the stable version of committed data (STEAL buffer management).
– REDO is required if an Xact can commit before all its updates are on disk (NO FORCE buffer management).
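One plausible shape for the update and commit records just described, with explicit before and after images. The field names here are illustrative, not any real DBMS's log format.

```python
from dataclasses import dataclass

@dataclass
class UpdateRecord:
    lsn: int        # log sequence number; records are written in order
    xact: str       # transaction id, chaining this Xact's records together
    obj: str        # database object modified
    before: object  # before image: needed if UNDO may be required (STEAL)
    after: object   # after image: needed if REDO may be required (NO FORCE)

@dataclass
class CommitRecord:
    lsn: int
    xact: str

# A tiny log for a transaction that updates A and then commits.
log = [
    UpdateRecord(lsn=1, xact="T1", obj="A", before=1000, after=1100),
    CommitRecord(lsn=2, xact="T1"),
]
```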
10. Logging Continued
• Write-Ahead Logging (WAL) protocol:
– The log record must go to disk before the changed page!
• Implemented via a handshake between the log manager and the buffer manager.
– All log records for a transaction (including its commit record) must be written to disk before the transaction is considered "committed".
• All log-related activities (and in fact all CC-related activities, such as lock/unlock and dealing with deadlocks) are handled transparently by the DBMS.
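The WAL handshake can be sketched as a check the buffer manager runs before flushing a dirty page: the log must be on stable storage at least up to the page's last update. The names and structure here are illustrative assumptions, not a real buffer manager's interface.

```python
class Log:
    def __init__(self):
        self.flushed_lsn = 0  # highest LSN known to be on stable storage

    def flush(self, upto_lsn):
        # Stand-in for forcing the log tail to disk.
        self.flushed_lsn = max(self.flushed_lsn, upto_lsn)

def write_page_to_disk(page_lsn, log):
    """Buffer manager's WAL check before flushing a dirty page whose
    most recent update has LSN page_lsn."""
    if page_lsn > log.flushed_lsn:
        log.flush(page_lsn)  # force the log first: the handshake
    # ... only now may the page itself be written ...
    return log.flushed_lsn >= page_lsn

log = Log()
print(write_page_to_disk(42, log), log.flushed_lsn)  # True 42
```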
ARIES Recovery
• There are 3 phases in ARIES recovery:
– Analysis: scan the log forward (from the most recent checkpoint) to identify all Xacts that were active, and all dirty pages in the buffer pool, at the time of the crash.
– Redo: redo all updates to dirty pages in the buffer pool, as needed, to ensure that all logged updates are in fact carried out and written to disk.
– Undo: working backwards through the log, undo the writes of all Xacts that were active at the crash (by restoring the before image of each update, as found in the log).
• At the end, all committed updates, and only those updates, are reflected in the database.
• Some care must be taken to handle a crash occurring during the recovery process itself!
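The three passes can be illustrated with a toy replay over an in-memory log of dicts. This omits almost everything that makes real ARIES work (pageLSNs, CLRs, checkpoints, the dirty page table), but it shows the repeating-history structure of the phases.

```python
def recover(log, db):
    # Analysis: transactions with no commit record are losers.
    committed = {r["xact"] for r in log if r["type"] == "commit"}
    losers = {r["xact"] for r in log if r["type"] == "update"} - committed

    # Redo: repeat history by reapplying every logged update,
    # winners and losers alike.
    for r in log:
        if r["type"] == "update":
            db[r["obj"]] = r["after"]

    # Undo: roll back losers by restoring before images, newest first.
    for r in reversed(log):
        if r["type"] == "update" and r["xact"] in losers:
            db[r["obj"]] = r["before"]
    return db

log = [
    {"type": "update", "xact": "T1", "obj": "A", "before": 1000, "after": 1100},
    {"type": "update", "xact": "T2", "obj": "B", "before": 1000, "after": 900},
    {"type": "commit", "xact": "T1"},
    # crash here: T2 never committed
]
print(recover(log, {"A": 1000, "B": 1000}))  # {'A': 1100, 'B': 1000}
```

Committed T1's update to A survives; uncommitted T2's update to B is undone, exactly the "all committed updates and only those updates" guarantee above.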
11. Summary
• Concurrency control and recovery are among the most important functions provided by a DBMS.
• Concurrency control is automatic:
– The system automatically inserts lock/unlock requests and schedules the actions of different Xacts so that the resulting execution is equivalent to executing the Xacts one after another in some order.
• Write-ahead logging (WAL) and the recovery protocol are used to undo the actions of aborted transactions and to restore the system to a consistent state after a crash.