3. Relational Database Management System
4. File Processing System
5. File Processing System
[Diagram: application programs (written in C, Pascal, etc.) → data structures / file handling → information stored in files (file format)]
6. File System
7. Disadvantages of FPS
Data Redundancy and Inconsistency
Difficulty in accessing data
8. Data Redundancy and Inconsistency
Customer Information            Saving Account
Name | Address | AccNo          Name | Address
ABC  | Bhiwani | 1002           ABC  | Bhiwani
DEF  | Delhi   | 1005           DEF  | Jaipur
(Note the inconsistency: DEF's address is Delhi in one file and Jaipur in the other.)
9. Difficulty in accessing data
[Diagram: application programs (written in C, Pascal, etc.) → data structures / file handling → information storage in files (format)]
10. Data Isolation and Integrity Problems
[Example: a program in C and a program in COBOL access the same data in different formats, e.g. the COBOL declaration 05 accno PIC A(2)]
11. Atomicity Problems
13. Security Problems
14. Database: the Peace of Mind
15. Requirements of a DBMS
• A mechanism for specification of data and its dependencies
(Integrity Constraints) in an integrated fashion.
• Prevention of redundancy and inconsistency.
• Provision of adequate security and access-rights.
• Mechanism for concurrency control.
• Mechanism for recovery from failure.
Additionally, any DBMS must provide:
• Schemes for specification of processing rules or application programs.
• Efficient techniques for storage and retrieval of data from secondary storage.
16. A DBMS has two major components, namely
• The database itself: the structure of the database is called the database schema; an instance is a state of the database with the actual data loaded.
• A set of software tools/programs which access, update and process the database, called the query and update mechanism.
17. View of DATA
• View level (external level): View 1, View 2, …, View n
• Logical level (conceptual view)
• Physical level (internal view)
18. Data Independence
The ability to modify a schema definition in one level without affecting a
schema definition in the next higher level is called data independence.
• Physical data independence
• Logical data independence
Example: the logical definition create table emp can remain unchanged while the physical storage organization is modified.
19. Data Models
A Data Model is a mechanism for describing the data, their interrelationships
and the constraints.
• Object-based conceptual models
• Physical data models
20. The E-R Model
Entities : An entity is a distinct, clearly identifiable object of the database, e.g. Book
Attribute : Each Entity is characterized by a set of attributes e.g. Acc.No.
Entity Set : Set of all entities having attributes of the same type.
Relationships : A relationship is a mapping between entity sets.
[E-R diagram: BOOK (Acc_No, Author, YearofPub) linked by the relationship Borrowed_By (DOI) to USERS (Card_No, Name, Address)]
21. The Relational Model
The relational model uses a collection of tables to represent both data and the relationships among those data. Each table has multiple attributes and contains tuples of the same kind.
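As a sketch in Python's sqlite3 (the sample rows are invented, not from the slides), such a table of similar tuples can be created and queried:

```python
import sqlite3

# In-memory database; table layout follows the slide, rows are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book (AccNo INTEGER, Title TEXT, Author TEXT, YearofPub INTEGER)")
conn.execute("INSERT INTO book VALUES (1002, 'DBMS Concepts', 'ABC', 1999)")
conn.execute("INSERT INTO book VALUES (1005, 'File Systems', 'DEF', 2001)")

# Every tuple in the relation shares the same attributes.
rows = conn.execute("SELECT AccNo, Title FROM book ORDER BY AccNo").fetchall()
print(rows)
```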
AccNo | Title | Author | YearofPub
22. Network Model
Data in the network model are represented by collections of records, and relationships among data are represented by links, which can be viewed as pointers.
[Network model example: records USERS (Card_No, Name, Address, Link) and BOOK (Acc_No, Author, …, Link) connected by links]
23. Hierarchical Model
This is a special kind of network model where the relationship is essentially a tree-like structure.
[Hierarchical model example: a tree with Patient, Doctors and Nurses records under departments such as Cardiology and Skin]
24. Physical Data Models
Physical data models are used to describe data at the lowest level.
In contrast to logical data models, there are few physical data models in use. Two of the widely known ones are the unifying model and the frame-memory model.
25. Database Languages
• Data-Definition: Create Table Test ( Title Varchar2(20), … )
• Data-Manipulation: Insert, Delete, …
• Data-Control: GRANT Connect, Resource TO xUser
26. Database Management System Structure
[Architecture diagram: naïve users (tellers, agents, etc.), application programmers, sophisticated users and database administrators access the system through application interfaces, application programs (object code), queries and the database scheme; the stored data comprises data files, indices, statistical data and the data dictionary]
28. Oracle Storage System Structure
29. Database Administrator
Roles of DBA
• Schema Definition
• Storage structure and access-method definition
• Schema and Physical-organization modification
• Granting of authorization for data access
• Integrity-constraint specification
Simple and Composite Attributes
Single-valued and Multivalued Attributes
Weak Entity Set and Strong Entity Set
40. Mapping Cardinalities
Mapping cardinalities, or cardinality ratios, express the number of entities
to which another entity can be associated via a relationship set.
For a binary relationship set R between entity sets A and B, the mapping
cardinality must be one of the following:
• One to one
• One to many
• Many to one
• Many to many
[Diagrams on slides 40–41 illustrate each mapping between entity sets A and B]
42. More on E-R Diagrams
[Examples: multiple relationships (Owns, Leased) between the same pair of entity sets, and a recursive Reports_to relationship on the Staff entity set]
43. Ternary E-R Diagram
[Diagram examples: Instructors – Teaches – Students; Book – Borrowed_By – User]
44. E-R Diagram Components
Total Participation of an entity in a relationship set
45. Existence Dependencies
46. Generalization and Specialization
47. Generalization and Specialization
The abstraction mechanisms:
[Diagram: Employee (Emp_No, Name, Date_of_hire, Salary) specialized via IS_A links into Faculty, Staff, Teaching and Casual, which carry attributes such as Degree, Interest, Stipend and Hour_Rate; moving up the hierarchy is generalization, moving down is specialization]
The process of compiling information on an object, e.g. the relationships Teacher Uses Course and Teacher Teaches Course.
49. Represent ER model using tables
50. Query Languages
A query language is a language in which a user requests information from a database.
These are typically higher-level than programming languages.
They may be one of:
Procedural, where the user instructs the system to perform a sequence of operations
on the database. This will compute the desired information.
Nonprocedural, where the user specifies the information desired without giving a
procedure for obtaining the information.
A complete query language also contains facilities to insert and delete tuples as well
as to modify parts of existing tuples.
51. The Relational Algebra
The relational algebra is a procedural query language.
The Borrow and Branch relations
52. Fundamental Operations
select (unary), project (unary), rename (unary),
union (binary), set difference (binary), cartesian product (binary)
Several other operations, defined in terms of the fundamental operations:
Operations produce a new relation as a result.
53. Formal Definition of Relational Algebra
54. The Select Operation
55. The Project Operation
56. The Cartesian Product Operation
57. Output of Cartesian Product
Relation A    Relation B    A × B
   A             B           A  B
   1             X           1  X
   2             Y           1  Y
                             2  X
                             2  Y
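The product above can be reproduced with itertools (tuples mirror the example):

```python
from itertools import product

# Relations as lists of tuples, matching the example on the slide.
A = [(1,), (2,)]          # relation A with attribute A
B = [("X",), ("Y",)]      # relation B with attribute B

# A x B concatenates every tuple of A with every tuple of B.
cart = [a + b for a, b in product(A, B)]
print(cart)  # [(1, 'X'), (1, 'Y'), (2, 'X'), (2, 'Y')]
```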
58. The Rename Operation
59. The Union Operation
60. The Set Difference Operation
61. Additional Operations
The Set Intersection Operation
62. The Natural Join Operation
63. The Division Operation
64. Example of Division Operation
Relation R    Relation S    R ÷ S
  A  B           B             A
  P  A           A             P
  P  B           B
  Q  A
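A minimal implementation of division for this shape of R(A, B) and unary S (the helper name is my own):

```python
# A value of A is in R / S iff it pairs in R with every B-value of S.
def divide(R, S):
    a_values = {a for (a, _) in R}
    s_values = {b for (b,) in S}
    return {(a,) for a in a_values
            if s_values <= {b for (x, b) in R if x == a}}

R = {("P", "A"), ("P", "B"), ("Q", "A")}
S = {("A",), ("B",)}
print(divide(R, S))  # {('P',)} -- Q is excluded: it never pairs with B
```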
65. The Assignment Operation
66. Relational Calculus
Relational calculus is a nonprocedural query language.
Tuple relational calculus: uses tuple variables, which take values over entire tuples.
Domain relational calculus: uses domain variables, which take values from an attribute's domain.
72. Integrity Constraints
Integrity and consistency are of primary concern in any database design.
At any instant a database must be correct according to a set of rules; the rules are checked during any database operation and during recovery from failure.
Types of constraints:
• Domain constraints
• Referential integrity constraints
73. Domain Constraints
Null or not null; specified at design time and checked at the time of insertion, deletion or modification. Examples:
  DOL date check (date >= '29/09/2004')
  City char(10) not null
  TotalAmt = amount + interest
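A minimal sketch of how such domain constraints behave, using SQLite (table and column names are my own):

```python
import sqlite3

# NOT NULL and CHECK are declared at design time, enforced on modification.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customer (
        city   CHAR(10) NOT NULL,
        amount INTEGER CHECK (amount >= 0)
    )""")
conn.execute("INSERT INTO customer VALUES ('Delhi', 100)")  # satisfies both rules

# A violation is detected at insertion time, as stated above.
rejected = False
try:
    conn.execute("INSERT INTO customer VALUES (NULL, 100)")
except sqlite3.IntegrityError:
    rejected = True
print("null city rejected:", rejected)  # null city rejected: True
```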
74. Referential Integrity
Referential integrity states that all values of the foreign key of one relation must be present in another relation where the same attribute is declared as the primary key.
Checks during Database Modification
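A sketch of this check using SQLite foreign keys (SQLite requires PRAGMA foreign_keys = ON per connection; table names echo the earlier customer/saving-account example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE customer (name TEXT PRIMARY KEY)")
conn.execute("""
    CREATE TABLE saving_account (
        acc_no INTEGER PRIMARY KEY,
        name   TEXT REFERENCES customer(name)
    )""")
conn.execute("INSERT INTO customer VALUES ('ABC')")
conn.execute("INSERT INTO saving_account VALUES (1002, 'ABC')")  # ok: 'ABC' exists

ok = True
try:
    # 'XYZ' is not present in customer, so the foreign-key check fails
    conn.execute("INSERT INTO saving_account VALUES (1005, 'XYZ')")
    ok = False
except sqlite3.IntegrityError:
    pass
print("dangling reference rejected:", ok)  # dangling reference rejected: True
```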
75. Assertions and Triggers
An assertion is a general predicate, expressed in relational algebra or calculus or a language like SQL, which must always hold in the database, e.g.:
  assert salary-constraint on emp
    salary >= 1000
A trigger is a statement or a block of statements executed automatically by the system when an event (insertion, update or deletion) takes place on a table, e.g.:
Define trigger insert_record
on delete of emp e
(insert into emp_history
values e.empno, e.name, e.deptno)
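The slide's trigger can be sketched in SQLite syntax (which differs slightly from the pseudocode above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, name TEXT, deptno INTEGER)")
conn.execute("CREATE TABLE emp_history (empno INTEGER, name TEXT, deptno INTEGER)")

# The trigger copies each row into emp_history just before it is deleted.
conn.execute("""
    CREATE TRIGGER insert_record BEFORE DELETE ON emp
    BEGIN
        INSERT INTO emp_history VALUES (OLD.empno, OLD.name, OLD.deptno);
    END""")

conn.execute("INSERT INTO emp VALUES (1, 'ABC', 10)")
conn.execute("DELETE FROM emp WHERE empno = 1")  # fires the trigger
history = conn.execute("SELECT * FROM emp_history").fetchall()
print(history)  # [(1, 'ABC', 10)]
```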
76. Functional Dependencies
Functional dependencies provide a formal mechanism to express constraints between attributes.
They are a means of identifying how values of certain attributes are determined by values of other attributes.
A functional dependency (FD) generalizes the concept of a key.
Book (acc_no, yr_pub, title)
acc_no is the primary key
Formal representation of Constraints
77. Formal Notation of FD
In general, if there are two attributes A and B and the FD A → B holds, then there can be no two tuples which have the same value of attribute A and different values of attribute B.
If α and β are two sets of attributes, then the FD α → β holds on a relation r(R) if:
1. α, β ⊆ R, i.e. α and β are subsets of R
2. for all tuples t1 and t2 in r,
   if t1[α] = t2[α] then
      t1[β] = t2[β]
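This definition translates directly into a short check (the relation and column indices below are illustrative):

```python
# α → β holds on r iff any two tuples agreeing on α also agree on β.
# Attributes are represented by tuple positions.
def fd_holds(r, alpha, beta):
    for t1 in r:
        for t2 in r:
            if all(t1[a] == t2[a] for a in alpha) and \
               not all(t1[b] == t2[b] for b in beta):
                return False
    return True

# Illustrative relation over (name, acc_no, amt): columns 0, 1, 2
r = [("ABC", 1002, 500), ("ABC", 1003, 700), ("DEF", 1005, 900)]
print(fd_holds(r, [0, 1], [2]))  # True: (name, acc_no) determines amt
print(fd_holds(r, [0], [2]))     # False: same name, different amt
```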
78. Closure of a Set of Functional Dependencies
80. Closure of a Set of F+
81. Closure of Attribute Sets
82. Canonical Cover
To minimize the number of functional dependencies that need to be tested in case of an update we may restrict F to a canonical cover Fc.
A canonical cover Fc for F is a set of dependencies such that F logically implies all dependencies in Fc, and Fc logically implies all dependencies in F.
A canonical cover Fc of a set of FDs F is a minimal cover of F in the sense that there is no subset of Fc which also covers F.
83. Example of Canonical Cover
Consider a relation r(X, Y, Z) with the FDs F:
1. X → YZ
2. Y → Z
3. X → Y
4. XY → Z
Here (4) is redundant because (1) states that X → Y and X → Z hold; thus (4) can be derived from (1). Also (3) is redundant because (1) contains (3). Deleting these two we get:
1. X → YZ
2. Y → Z
which is a cover of F. Again, since X → Y and Y → Z hold, by transitivity X → Z holds, so Z is redundant in (1). Deleting it we get the FDs X → Y and Y → Z, which is a canonical cover of F.
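The redundancy argument above can be checked mechanically with an attribute-closure sketch (the helper name is my own):

```python
# Attribute closure: repeatedly apply FDs until nothing new is added.
# FDs are (lhs, rhs) pairs of attribute strings.
def closure(attrs, fds):
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

fc = [("X", "Y"), ("Y", "Z")]   # the canonical cover derived above
# X still determines Z by transitivity, so dropping Z from X -> YZ lost nothing.
print(closure({"X"}, fc))       # {'X', 'Y', 'Z'}
```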
Normalization is a process of removing redundancy using functional dependencies.
To reduce redundancy it is necessary to decompose a relation into a number of smaller relations.
There are several normal Forms.
-First Normal Form (1 NF)
-Second Normal Form (2 NF)
-Third Normal Form(3 NF)
-Boyce-Codd Normal Form (BCNF)
93. First Normal Form (1NF)
This normal form says that all attributes are simple. An attribute is said to be simple if it does not contain any subparts; an attribute which contains subparts is called a composite attribute.
[Example: a Name attribute with subparts F_name, L_name and an Address attribute with subparts City, State, Zip]
94. Second Normal Form (2NF)
A relation is said to be in 2NF if it is in 1NF and
All non-prime attributes are fully functionally dependent on candidate key
Consider a relation savings_deposit having the following structure:-
Saving_deposit (name, addr, acc_no, amt )
With the following FDs:
  name, acc_no → amt
  name → addr
Here [name, acc_no] is the candidate key, and addr and amt are the non-prime attributes. Among the non-prime attributes, amt depends on [name, acc_no] whereas addr depends on name only.
Note that due to the FD name → addr, every tuple with the same name will contain the same address, causing redundancy. This redundancy arises because a non-prime attribute like addr is dependent on an attribute which is not a candidate key.
We can remove this redundancy by splitting the original relation into the following two relations:
  Sav_sch1 (name, addr)
  Sav_sch2 (name, acc_no, amt)
Both relations are now in 2NF. In the first relation name is the primary key and the only non-prime attribute is addr, which is dependent on name. In the second relation the only non-prime attribute amt depends on both name and acc_no. Note that this decomposition is also lossless-join and dependency preserving.
Courses ( Course_no, title, loc, time )
and the FDs are:
  Course_no, time → loc
96. Third Normal Form (3NF)
A relation is said to be in 3NF if it is in 2NF and non-prime attributes are not dependent on each other.
Consider the relation –
s_by ( s_name, item, price, gift_item )
with the FDs:
  s_name, item → price
  price → gift_item
Here all non-prime attributes are fully functionally dependent on the candidate key, but the non-prime attribute gift_item is also fully functionally dependent on the non-prime attribute price. This creates redundancy, because for every price value there is a fixed gift_item.
We shall have to impose the additional restriction that no non-prime attribute can be functionally dependent on another non-prime attribute.
We decompose the relation
  s_by (s_name, item, price, gift_item)
into
  s_by_1 (s_name, item, price)
  s_by_2 (price, gift_item)
Now we have a lossless-join and dependency-preserving decomposition.
An alternative yet equivalent definition for 3NF is :
For every FD α → β on R, at least one of the following conditions holds:
• β ⊆ α (trivial dependency)
• α → R (α is a superkey)
• each attribute in β − α is contained in a candidate key of R
98. Boyce-Codd Normal Form (BCNF)
99. More on BCNF
100. Comparison of BCNF and 3NF
101. Comparison of BCNF and 3NF - 2
102. Normalization using Multivalued Dependencies
106. Fourth Normal Form (4NF)
108. Normalization using Join Dependencies
Let R be a relation schema and R1, R2, …, Rn be a decomposition of R. The join dependency *(R1, R2, …, Rn) is used to restrict the set of legal relations to those for which R1, R2, …, Rn is a lossless-join decomposition of R.
Formally, if R = R1 ∪ R2 ∪ … ∪ Rn, we say that a relation r(R) satisfies the join dependency *(R1, R2, …, Rn) if r = ΠR1(r) ⋈ ΠR2(r) ⋈ … ⋈ ΠRn(r).
109. Fifth Normal Form (5NF)
Project-Join Normal Form
Project-join normal form (PJNF) is defined in a manner similar to BCNF and 4NF, except that join dependencies are used.
A relation schema R is in PJNF with respect to a set D of functional, multivalued and join dependencies if, for all join dependencies in D+ of the form *(R1, R2, …, Rn), where each Ri ⊆ R and R = R1 ∪ R2 ∪ … ∪ Rn, at least one of the following holds:
• *(R1, R2, …, Rn) is a trivial join dependency.
• Every Ri is a superkey for R.
Every PJNF schema is also in 4NF. In general, we may not be able to find a dependency-preserving decomposition into PJNF for a given schema.
110. Storage and File Structure
Hierarchy of Storage
118. Organization of Records in files
119. Concurrency Control and Recovery
Concurrent execution of user programs is essential for good DBMS performance.
Because disk accesses are frequent and relatively slow, it is important to keep the CPU humming by working on several user programs concurrently.
A user’s program may carry out many operations on the data retrieved from the database, but the
DBMS is only concerned about what data is read/written from/to the database.
A transaction is the DBMS’s abstract view of a user program: a sequence of reads and writes.
A transaction is a unit of program execution that accesses and possibly updates various data items; a collection of operations that forms a single logical unit of work is called a transaction.
A database system must ensure proper execution of transactions despite failures.
To ensure integrity of the data, the database system must maintain the ACID properties of transactions.
121. States of Transactions
122. Concurrency in a DBMS
Users submit transactions, and can think of each transaction as executing by itself.
Concurrency is achieved by the DBMS, which interleaves actions (reads/writes of DB objects) of different transactions.
Each transaction must leave the database in a consistent state if the DB is consistent when the transaction begins.
The DBMS will enforce some ICs, depending on the ICs declared in CREATE TABLE statements.
Beyond this, the DBMS does not really understand the semantics of the data (e.g., it does not understand how the interest on a bank account is computed).
Issues: Effect of interleaving transactions, and crashes.
Consider two transactions (Xacts):
T1: BEGIN A=A+100, B=B-100 END
T2: BEGIN A=1.06*A, B=1.06*B END
• Intuitively, the first transaction is transferring $100 from B's account to A's account. The second is crediting both accounts with a 6% interest payment.
• There is no guarantee that T1 will execute before T2 or vice versa, if both are submitted together. However, the net effect must be equivalent to these two transactions running serially in some order.
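The two serial orders can be worked out numerically (assuming starting balances A = 1000 and B = 2000):

```python
def t1(a, b):  # transfer 100 from B's account to A's
    return a + 100, b - 100

def t2(a, b):  # credit 6% interest to both accounts
    return a * 1.06, b * 1.06

# The two serial orders give different balances, but both are acceptable:
# either way the 3000 total grows by exactly 6%, to 3180.
print(t2(*t1(1000, 2000)))  # T1 then T2
print(t1(*t2(1000, 2000)))  # T2 then T1
```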
124. Example (Contd.)
Consider a possible interleaving (schedule):
  T1: A=A+100,            B=B-100
  T2:            A=1.06*A,          B=1.06*B
• This is OK. But what about:
  T1: A=A+100,                      B=B-100
  T2:            A=1.06*A, B=1.06*B
• The DBMS's view of the second schedule:
  T1: R(A), W(A),                         R(B), W(B)
  T2:             R(A), W(A), R(B), W(B)
125. Example (Contd.)
The DBMS must not allow schedules like this!
  T1: R(A), W(A),                         R(B), W(B)
  T2:             R(A), W(A), R(B), W(B)
[Dependency graph: T1 ⇄ T2]
• Dependency graph: one node per Xact; an edge from Ti to Tj if Tj reads or writes an object last written by Ti.
• The cycle in the graph reveals the problem: the output of T1 depends on T2, and vice versa.
126. Scheduling Transactions
Equivalent schedules: For any database state, the effect (on the set of objects in the database) of
executing the first schedule is identical to the effect of executing the second schedule.
Serializable schedule: A schedule that is equivalent to some serial execution of the transactions.
If the dependency graph of a schedule is acyclic, the schedule is called conflict serializable. Such a
schedule is equivalent to a serial schedule.
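The acyclicity test can be sketched as a depth-first search over the dependency graph (the graph encoding is my own):

```python
# A schedule is conflict serializable iff its dependency graph is acyclic.
# Graph: {transaction: set of successors}; every node appears as a key.
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}

    def visit(n):
        color[n] = GRAY                       # on the current DFS path
        for m in graph.get(n, ()):
            if color[m] == GRAY or (color[m] == WHITE and visit(m)):
                return True                   # back edge found: cycle
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

bad  = {"T1": {"T2"}, "T2": {"T1"}}   # the cyclic graph from the example above
good = {"T1": {"T2"}, "T2": set()}    # equivalent to the serial order T1; T2
print(has_cycle(bad), has_cycle(good))  # True False
```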
This is the condition that is typically enforced in a DBMS (although it is not necessary for serializability).
127. Detection of Serializability
One of the techniques of concurrency control is to detect whether a schedule is valid or not prior to execution.
The task of understanding a schedule is simplified by considering only the sequence of read and write operations in a transaction.
Read-Write sequence of a non-serializable schedule
128. Serializable Concurrency
A serializable concurrent schedule
Generalize the idea of conflict. Consider the four possibilities which can arise between two consecutive instructions of T1 and T2 in a schedule (T1 and T2 belong to two different transactions):
1. T1 : Read(X) followed by T2 : Write(X)
2. T1 : Read(X) followed by T2 : Read(X)
3. T1 : Write(X) followed by T2 : Read(X)
4. T1 : Write(X) followed by T2 : Write(X)
T1 and T2 are said to conflict if they cannot be swapped without fear of loss of consistency.
Of the four cases above, all pairs except case 2 (read followed by read) are in conflict.
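The conflict rule reduces to a one-line predicate (the function name is my own):

```python
# Two operations on the same data item conflict unless both are reads (case 2).
def conflicts(op1, op2, same_item=True):
    return same_item and not (op1 == "read" and op2 == "read")

print(conflicts("read", "write"))   # True  (case 1)
print(conflicts("read", "read"))    # False (case 2: safe to swap)
print(conflicts("write", "read"))   # True  (case 3)
print(conflicts("write", "write"))  # True  (case 4)
```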
129. Deadlock Condition
Transaction 1:
  UPDATE account SET balance = balance * 0.1 WHERE acc_no = 'FC821'
  UPDATE account SET age = 30 WHERE acc_no = 'FC523'
Transaction 2:
  UPDATE account SET balance = balance * 0.1 WHERE acc_no = 'FC523'
  UPDATE account SET age = 38 WHERE acc_no = 'FC821'
Each transaction ends up waiting for a lock the other holds, so neither can proceed.
130. Lock-Based Techniques
In this technique the system does not itself detect inconsistency, nor does it take any corrective action.
The DBMS however provides the user with a set of operations which, when used properly, can ensure that concurrent execution will not violate consistency.
In this technique, functions are provided to lock and unlock data items by transactions.
In the simplest case a data item X can be locked by a transaction T1 in two modes:
Shared mode: if T1 locks X in shared mode, then before T1 unlocks X no other transaction T2 can write into X; but a transaction T2 can read the value of X even while T1 holds the shared lock.
Exclusive mode: if T1 locks X in exclusive mode, then before T1 unlocks X no other transaction T2 can read or write X.
132. Two-Phase locking
Phase I – Acquiring phase: during this phase a transaction may lock a data item but may not unlock any data item.
Phase II – Releasing phase: during this phase a transaction may unlock data items locked earlier, but no new locks may be acquired.
In two-phase locking, phase I must always precede phase II. This ensures that all schedules are automatically conflict serializable.
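A transaction's lock/unlock sequence can be checked for the two-phase property with a small sketch (the encoding is my own):

```python
# Two-phase check: no lock request may appear after the first unlock.
def is_two_phase(ops):
    unlocked = False
    for action, _item in ops:
        if action == "unlock":
            unlocked = True                   # releasing phase has begun
        elif action == "lock" and unlocked:
            return False                      # acquired a lock after releasing
    return True

print(is_two_phase([("lock", "X"), ("lock", "Y"),
                    ("unlock", "X"), ("unlock", "Y")]))   # True
print(is_two_phase([("lock", "X"), ("unlock", "X"),
                    ("lock", "Y")]))                      # False
```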
133. Enforcing (Conflict) Serializability
Two-phase Locking (2PL) Protocol:
Each Xact must obtain an S (shared) lock on an object before reading, and an X (exclusive) lock on an object before writing.
Once an Xact releases any lock, it cannot obtain new locks.
If an Xact holds an X lock on an object, no other Xact can get a lock (S or X) on that object.
2PL allows only conflict-serializable schedules.
Potential problem of deadlocks: we could have a cycle of Xacts, T1, T2, ... , Tn, with each Ti waiting for its
predecessor to release some lock that it needs.
Dealt with by killing one of them and releasing its locks.
134. Atomicity of Transactions
A transaction might commit after completing all its actions, or it could abort (or be aborted by the DBMS)
after executing some actions.
A very important property guaranteed by the DBMS for all transactions is that they are atomic. That is, a
user can think of a Xact as always executing all its actions in one step, or not executing any actions at all.
DBMS logs all actions so that it can undo the actions of aborted transactions.
This ensures that if each Xact preserves consistency, every serializable schedule preserves consistency.
135. Aborting a Transaction
If a transaction Ti is aborted, all its actions have to be undone. Not only that, if Tj reads an object last
written by Ti, Tj must be aborted as well!
Most systems avoid such cascading aborts by releasing a transaction’s locks only at commit time.
If Ti writes an object, Tj can read this only after Ti commits.
In order to undo the actions of an aborted transaction, the DBMS maintains a log in which every write is
recorded. This mechanism is also used to recover from system crashes: all active Xacts at the time of the
crash are aborted when the system comes back up.
136. The Log
The following actions are recorded in the log:
Ti writes an object: the old value and the new value.
Log record must go to disk before the changed page!
Ti commits/aborts: a log record indicating this action.
Log records are chained together by Xact id, so it’s easy to undo a specific Xact.
Log is often duplexed and archived on stable storage.
All log related activities (and in fact, all activities such as lock/unlock, dealing with deadlocks etc.) are
handled transparently by the DBMS.
137. The Log - 2
Log file example (initially X = 1000, Y = 2000); a transaction T transfers 500 from X to Y:
  Read(X, xi)
  xi := xi - 500
  Write(X, xi)
  Read(Y, yi)
  yi := yi + 500
  Write(Y, yi)
Each update record holds the transaction name, data item name, old value and new value:
  <T starts>
  <T, X, 1000, 500>
  <T, Y, 2000, 2500>
At the time of recovery the entire log needs to be searched to decide which transactions need to be redone and which need to be undone. The problems with this approach are:
1. It will take a considerable amount of time.
2. Most of the transactions that need to be redone have already modified the database.
To solve this problem the concept of a checkpoint is used. Checkpoints are introduced to indicate that the data before this point has already been written to the database. Before writing a checkpoint, the following sequence of actions should take place:
- Output all log records currently residing in main store to stable storage.
- Output all modified buffer blocks to secondary storage.
- Output a log record <checkpoint>
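A simplified sketch of how a checkpoint bounds the recovery scan (the log encoding is my own, and it assumes no transaction was active at the checkpoint):

```python
# Decide redo/undo sets from <start>/<commit>/<checkpoint> records.
def recovery_sets(log):
    # Only activity from the last checkpoint onwards matters.
    cut = max((i for i, rec in enumerate(log) if rec == ("checkpoint",)),
              default=-1)
    active, committed = set(), set()
    for rec in log[cut + 1:]:
        if rec[0] == "start":
            active.add(rec[1])
        elif rec[0] == "commit":
            committed.add(rec[1])
    redo = committed            # committed after the checkpoint: redo
    undo = active - committed   # still active at the crash: undo
    return redo, undo

log = [("start", "T1"), ("commit", "T1"), ("checkpoint",),
       ("start", "T2"), ("commit", "T2"), ("start", "T3")]
print(recovery_sets(log))  # ({'T2'}, {'T3'}) -- T1 is already on disk
```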
139. Recovering From a Crash
There are 3 phases in the ARIES recovery algorithm:
Analysis: Scan the log forward (from the most recent checkpoint) to identify all Xacts that were active,
and all dirty pages in the buffer pool at the time of the crash.
Redo: Redoes all updates to dirty pages in the buffer pool, as needed, to ensure that all logged
updates are in fact carried out and written to disk.
Undo: The writes of all Xacts that were active at the crash are undone (by restoring the before value
of the update, which is in the log record for the update), working backwards in the log. (Some care
must be taken to handle the case of a crash occurring during the recovery process!)
Data can be lost due to the failure of nonvolatile storage such as the disk. The scheme available to protect the data from disk failure is to periodically dump the entire contents of the database to backup (or even stable) storage such as magnetic tape. When a failure occurs, the most recent dump is used to restore the database to a previous consistent state; the log is then used to redo all the transactions that have committed since the last dump. The following steps are performed for this purpose:
• Output all log records currently residing in the main memory onto stable store.
• Output all buffer blocks onto the disk.
• Copy the contents of the database to stable store.
• Output a log record <dump>.
Concurrency control and recovery are among the most important functions provided by a DBMS.
Users need not worry about concurrency.
System automatically inserts lock/unlock requests and schedules actions of different Xacts in such a way as to ensure that the resulting execution is equivalent to executing the Xacts one after the other in some serial order.
Write-ahead logging (WAL) is used to undo the actions of aborted transactions and to restore the system to a consistent state after a crash.
Consistent state: only the effects of committed Xacts are seen.
Optimization using Algebraic Manipulation
Any algebraic-manipulation approach to query optimization uses a set of rules, which may be enumerated as follows:
- Perform selection as early as possible, in order to reduce the number of tuples to be processed.
- Projections of projections should be combined, if possible, in order to avoid repeated scanning of tuples.
- Projection over indexed attributes should be done earlier, and that over non-indexed attributes later.
- Intermediate relations produced in separate processing sequences should be shared as and when possible.
- If possible, attributes which control a join operation should be sorted earlier.
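The first rule (perform selection as early as possible) can be illustrated with a toy Cartesian product; the relations and predicate below are made up:

```python
from itertools import product

# Filtering before the product shrinks the intermediate result
# without changing the final answer.
R = [(i,) for i in range(100)]   # 100 tuples
S = [(j,) for j in range(100)]   # 100 tuples
pred = lambda r: r[0] < 5        # selection on R's attribute

late  = [rs for rs in product(R, S) if pred(rs[0])]   # builds 10,000 pairs, keeps 500
early = list(product([r for r in R if pred(r)], S))   # builds only 500 pairs
print(len(late), len(early), late == early)  # 500 500 True
```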