3. Distributed database design
A distributed database is a collection of multiple, logically related
databases, physically distributed over several sites and connected by a
computer network, usually under centralized control.
Distributed database design refers to the following problem:
given a database and its workload, how should the database be split
and allocated to sites so as to optimize a certain objective function?
There are two issues:
(i) Data fragmentation which determines how the data should be
fragmented.
(ii) Data allocation which determines how the fragments should be
allocated.
4. Architecture of Distributed
Processing system
Distributed Processing architectures are generally developed depending
on three parameters −
Distribution − It states the physical distribution of data across the
different sites.
Autonomy − It indicates the distribution of control of the database
system and the degree to which each constituent DBMS can operate
independently.
Heterogeneity − It refers to the uniformity or dissimilarity of the data
models, system components and databases.
6. Client - Server Architecture for DDBMS
This is a two-level architecture where the functionality is divided into
servers and clients. The server functions primarily encompass data
management, query processing, optimization and transaction
management. Client functions mainly include the user interface. However,
clients also have some functions like consistency checking and transaction
management.
The two client-server architectures are −
1. Single Server Multiple Client
2. Multiple Server Multiple Client
7. Peer- to-Peer Architecture for DDBMS
In these systems, each peer acts both as a client and a server for
imparting database services. The peers share their resources with other
peers and coordinate their activities.
This architecture generally has four levels of schemas −
Global Conceptual Schema − Depicts the global logical view of data.
Local Conceptual Schema − Depicts logical data organization at each
site.
Local Internal Schema − Depicts physical data organization at each
site.
External Schema − Depicts user view of data.
9. Multi-DBMS Architectures
This is an integrated database system formed by a collection of two or
more autonomous database systems.
Multi-DBMS can be expressed through six levels of schemas −
1. Multi-database View Level − Depicts multiple user views, each comprising
a subset of the integrated distributed database.
2. Multi-database Conceptual Level − Depicts the integrated multi-database
that comprises global logical multi-database structure definitions.
3. Multi-database Internal Level − Depicts the data distribution across
different sites and multi-database to local data mapping.
4. Local database View Level − Depicts public view of local data.
5. Local database Conceptual Level − Depicts local data organization
at each site.
6. Local database Internal Level − Depicts physical data organization
at each site.
10. Design Alternatives
The distribution design alternatives for the tables in a DDBMS are as
follows −
• Non-replicated and non-fragmented
• Fully replicated
• Partially replicated
• Fragmented
• Mixed
11. Non-replicated & Non-fragmented
In this design alternative, different tables are placed at different sites.
Data is placed so that it is in close proximity to the site where it is used
most. It is most suitable for database systems where the percentage of
queries needed to join information in tables placed at different sites is
low. If an appropriate distribution strategy is adopted, then this design
alternative helps to reduce the communication cost during data
processing.
12. Fully Replicated
In this design alternative, one copy of all the database tables is stored
at each site. Since each site has its own copy of the entire database,
queries are very fast, requiring negligible communication cost. On the
contrary, the massive redundancy in data incurs a huge cost during
update operations. Hence, this is suitable for systems where a large
number of queries must be handled while the number of database
updates is low.
13. Partially Replicated
Copies of tables or portions of tables are stored at different sites. The
distribution of the tables is done according to the frequency of
access. This takes into consideration the fact that the frequency of
accessing the tables varies considerably from site to site. The number of
copies of the tables (or portions) depends on how frequently the
access queries execute and the sites that generate the access queries.
14. Fragmented
In this design, a table is divided into two or more pieces referred to as
fragments or partitions, and each fragment can be stored at different
sites. This considers the fact that it seldom happens that all data stored
in a table is required at a given site. Moreover, fragmentation increases
parallelism and provides better disaster recovery. Here, there is only one
copy of each fragment in the system, i.e. no redundant data.
The three fragmentation techniques are −
• Vertical fragmentation
• Horizontal fragmentation
• Hybrid fragmentation
16. Mixed Distribution
This is a combination of fragmentation and partial replications. Here, the
tables are initially fragmented in any form (horizontal or vertical), and
then these fragments are partially replicated across the different sites
according to the frequency of accessing the fragments.
17. Fragmentation
Fragmentation is the task of dividing a table into a set of smaller tables.
The subsets of the table are called fragments. Fragmentation can be of
three types: horizontal, vertical, and hybrid (combination of horizontal
and vertical). Horizontal fragmentation can further be classified into two
techniques: primary horizontal fragmentation and derived horizontal
fragmentation.
Fragmentation should be done in such a way that the original table can be
reconstructed from the fragments whenever required. This
requirement is called “reconstructiveness.”
18. Advantages of Fragmentation
• Since data is stored close to the site of usage, efficiency of the
database system is increased.
• Local query optimization techniques are sufficient for most queries
since data is locally available.
• Since irrelevant data is not available at the sites, security and privacy
of the database system can be maintained.
Disadvantages of Fragmentation
• When data from different fragments are required, the access speeds
may be very low and the access costs high.
• In case of recursive fragmentation, the job of reconstruction will
need expensive techniques.
• Lack of back-up copies of data at different sites may render the
database ineffective in case of failure of a site.
19. Vertical Fragmentation
In vertical fragmentation, the fields or columns of a table are grouped
into fragments. In order to maintain constructiveness, each fragment
should contain the primary key field(s) of the table. Vertical fragmentation
can be used to enforce privacy of data.
For example, let us consider that a University database keeps records of
all registered students in a Student table having the following schema.
STUDENT (Regd_No, Name, Course, Address, Semester, Fees, Marks)
Now, the fees details are maintained in the accounts section. In this case,
the designer will fragment the database as follows −
20. Vertical Fragmentation
CREATE TABLE STD_FEES AS
SELECT Regd_No, Fees
FROM STUDENT;
Reconstruction of vertical fragmentation is performed by joining the
fragments on the common primary key field(s).
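The reconstruction can be sketched with Python's sqlite3 standing in for the two sites; the STD_PERSONAL fragment name and the sample rows are assumptions for this example, not part of the University schema above.

```python
import sqlite3

# In-memory database standing in for two sites, each holding a vertical fragment.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE STD_PERSONAL (Regd_No INTEGER PRIMARY KEY, Name TEXT)")
con.execute("CREATE TABLE STD_FEES (Regd_No INTEGER PRIMARY KEY, Fees INTEGER)")
con.execute("INSERT INTO STD_PERSONAL VALUES (1, 'Asha'), (2, 'Ravi')")
con.execute("INSERT INTO STD_FEES VALUES (1, 5000), (2, 6000)")

# Reconstruction: join the fragments on the shared primary key Regd_No.
rows = con.execute("""
    SELECT p.Regd_No, p.Name, f.Fees
    FROM STD_PERSONAL p JOIN STD_FEES f ON p.Regd_No = f.Regd_No
    ORDER BY p.Regd_No
""").fetchall()
print(rows)  # [(1, 'Asha', 5000), (2, 'Ravi', 6000)]
```

Because every vertical fragment carries the key, joining on it loses no tuples.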
21. Horizontal Fragmentation
Horizontal fragmentation groups the tuples of a table according to the values of
one or more fields. Horizontal fragmentation should also conform to the rule of
reconstructiveness. Each horizontal fragment must have all columns of the
original base table.
For example, in the student schema, if the details of all students of Computer
Science Course needs to be maintained at the School of Computer Science,
then the designer will horizontally fragment the database as follows −
CREATE TABLE COMP_STD AS
SELECT * FROM STUDENT
WHERE Course = 'Computer Science';
Reconstruction of horizontal fragmentation can be performed using UNION
operation on fragments.
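The fragment definition and its UNION reconstruction can be run end-to-end as a minimal sqlite3 sketch; the OTHER_STD fragment name and the sample rows are assumptions for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE STUDENT (Regd_No INTEGER PRIMARY KEY, Name TEXT, Course TEXT)")
con.executemany("INSERT INTO STUDENT VALUES (?, ?, ?)",
                [(1, 'Asha', 'Computer Science'), (2, 'Ravi', 'Physics')])

# Horizontal fragments selected by Course value; each keeps all columns.
con.execute("CREATE TABLE COMP_STD AS SELECT * FROM STUDENT "
            "WHERE Course = 'Computer Science'")
con.execute("CREATE TABLE OTHER_STD AS SELECT * FROM STUDENT "
            "WHERE Course <> 'Computer Science'")

# Reconstruction: the UNION of the fragments recovers the original table.
rows = con.execute(
    "SELECT * FROM COMP_STD UNION SELECT * FROM OTHER_STD ORDER BY Regd_No"
).fetchall()
print(rows)  # [(1, 'Asha', 'Computer Science'), (2, 'Ravi', 'Physics')]
```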
22. Hybrid Fragmentation
In hybrid fragmentation, a combination of horizontal and vertical
fragmentation techniques are used. This is the most flexible
fragmentation technique since it generates fragments with minimal
extraneous information. However, reconstruction of the original table is
often an expensive task.
Hybrid fragmentation can be done in two alternative ways −
• At first, generate a set of horizontal fragments; then generate vertical
fragments from one or more of the horizontal fragments.
• At first, generate a set of vertical fragments; then generate horizontal
fragments from one or more of the vertical fragments.
24. Distribution Transparency
Distribution transparency is the property of distributed databases by the
virtue of which the internal details of the distribution are hidden from the
users. The DDBMS designer may choose to fragment tables, replicate
the fragments and store them at different sites. However, since users are
oblivious of these details, they find the distributed database easy to use
like any centralized database.
The three dimensions of distribution transparency are −
• Location transparency
• Fragmentation transparency
• Replication transparency
25. Hybrid Fragmentation
• Hybrid fragmentation can be achieved by performing horizontal and vertical
partitioning together.
• Mixed fragmentation groups both rows and columns of a relation.
Example: Consider the following Employee table, which holds employee information.

Emp_ID   Emp_Name   Emp_Address   Emp_Age   Emp_Salary
101      Surendra   Baroda        25        15000
102      Jaya       Pune          37        12000
103      Jayesh     Pune          47        10000
26. Hybrid Fragmentation
Fragment 1:
SELECT * FROM Employee WHERE Emp_Age < 40
Fragment 2:
SELECT * FROM Employee WHERE Emp_Address = 'Pune' AND Emp_Salary < 14000
Reconstruction of Hybrid Fragmentation:
The original relation in hybrid fragmentation is reconstructed by performing
UNION and FULL OUTER JOIN operations on the fragments.
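A full round trip can be sketched with sqlite3: fragment the table horizontally, split one horizontal piece vertically, then JOIN and UNION to rebuild it. The trimmed Employee schema and the fragment names (Young, Old, Young_Names, Young_Ages) are assumptions of this sketch.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Employee (Emp_ID INTEGER PRIMARY KEY, "
            "Emp_Name TEXT, Emp_Age INTEGER)")
con.executemany("INSERT INTO Employee VALUES (?, ?, ?)",
                [(101, 'Surendra', 25), (102, 'Jaya', 37), (103, 'Jayesh', 47)])

# Step 1: horizontal split on Emp_Age. Step 2: vertical split of one fragment.
con.execute("CREATE TABLE Young AS SELECT * FROM Employee WHERE Emp_Age < 40")
con.execute("CREATE TABLE Old AS SELECT * FROM Employee WHERE Emp_Age >= 40")
con.execute("CREATE TABLE Young_Names AS SELECT Emp_ID, Emp_Name FROM Young")
con.execute("CREATE TABLE Young_Ages AS SELECT Emp_ID, Emp_Age FROM Young")

# Reconstruction: JOIN the vertical pieces on the key, then UNION with
# the remaining horizontal fragment.
rows = con.execute("""
    SELECT n.Emp_ID, n.Emp_Name, a.Emp_Age
    FROM Young_Names n JOIN Young_Ages a ON n.Emp_ID = a.Emp_ID
    UNION
    SELECT Emp_ID, Emp_Name, Emp_Age FROM Old
    ORDER BY Emp_ID
""").fetchall()
print(rows)  # [(101, 'Surendra', 25), (102, 'Jaya', 37), (103, 'Jayesh', 47)]
```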
27. Data communication concepts
Data communication refers to the exchange of data between a source and
a receiver via some form of transmission medium, such as a wire cable.
Data communication is said to be local if communicating devices are in the
same building or a similarly restricted geographical area.
A data communication system may collect data from remote locations
through data transmission circuits, and then outputs processed results to
remote locations. The different data communication techniques which are
presently in widespread use evolved gradually either to improve the data
communication techniques already existing or to replace the same with
better options and features.
29. Components of data communication system
A Communication system has following components:
1. Message: It is the information or data to be communicated. It can consist
of text, numbers, pictures, sound or video or any combination of these.
2. Sender: It is the device/computer that generates and sends the
message.
3. Receiver: It is the device or computer that receives the message. The
location of receiver computer is generally different from the sender
computer. The distance between sender and receiver depends upon the
types of network used in between.
4. Medium: It is the channel or physical path through which the message is
carried from sender to receiver. The medium can be wired, like
twisted pair wire, coaxial cable or fiber-optic cable, or wireless, like laser,
radio waves and microwaves.
30. Concurrency Control and Recovery
Concurrency control (CC) is a process to ensure that data is updated
correctly and appropriately when multiple transactions are concurrently
executed in DBMS (Connolly & Begg, 2015).
Distributed Databases encounter a number of concurrency control and
recovery problems which are not present in centralized databases.
Some of them are listed below:
• Dealing with multiple copies of data items
• Failure of individual sites
• Communication link failure
• Distributed commit
• Distributed deadlock
32. Concurrency Control
1. Dealing with multiple copies of data items:
The concurrency control must maintain global consistency. Likewise the recovery
mechanism must recover all copies and maintain consistency after recovery.
2. Failure of individual sites:
Database availability must not be affected due to the failure of one or two sites
and the recovery scheme must recover them before they are available for use.
3. Communication link failure:
This failure may create a network partition, which would affect database
availability even though all database sites may be running.
4. Distributed commit:
A transaction may be fragmented, and its fragments may be executed by a number
of sites. This requires a two- or three-phase commit approach for transaction commit.
33. Concurrency Control
5. Distributed deadlock:
Since transactions are processed at multiple sites, two or more sites may get
involved in deadlock. This must be resolved in a distributed manner.
Concurrency control protocols can be broadly divided into two categories −
• Lock based protocols
• Time stamp based protocols
34. Concurrency Control Protocol
1. Lock-based Protocols
Database systems equipped with lock-based protocols use a mechanism by which
a transaction cannot read or write data until it acquires an appropriate lock on it.
Locks are of two kinds −
• Binary Locks − A lock on a data item can be in two states; it is either locked
or unlocked.
• Shared/exclusive − This type of locking mechanism differentiates the locks
based on their uses. If a lock is acquired on a data item to perform a write
operation, it is an exclusive lock. Allowing more than one transaction to write
on the same data item would lead the database into an inconsistent state. Read
locks are shared because no data value is being changed.
35. Continue..
1. Binary Locks:
A lock is a kind of mechanism that ensures that the integrity of data is maintained.
A binary lock can have two states or values: locked and unlocked (or 1 and 0, for
simplicity). A distinct lock is associated with each database item X.
If the value of the lock on X is 1, item X cannot be accessed by a database
operation that requests the item. If the value of the lock on X is 0, the item can be
accessed when requested. We refer to the current value (or state) of the lock
associated with item X as LOCK(X).
There are two operations in binary locking:
(i) Lock_item(X):
(ii) Unlock_item (X):
36. Continue..
1. Lock_item(X):
A transaction requests access to an item X by first issuing a lock_item(X)
operation. If LOCK(X) = 1, the transaction is forced to wait. If LOCK(X) = 0,
it is set to 1 (the transaction locks the item) and the transaction is allowed to
access item X.
2. Unlock_item (X):
When the transaction is through using the item, it issues an unlock_item(X)
operation, which sets LOCK(X) to 0 (unlocks the item) so that X may be accessed
by other transactions. Hence, a binary lock enforces mutual exclusion on the data
item; i.e., at a time only one transaction can hold the lock on an item.
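The two operations can be sketched in Python; the BinaryLockTable class and its use of a condition variable for waiting are illustrative assumptions of this sketch, not a DBMS API.

```python
import threading

class BinaryLockTable:
    """Sketch of lock_item/unlock_item over named data items."""
    def __init__(self):
        self._cv = threading.Condition()
        self._locked = {}          # LOCK(X): item name -> 1 (locked) / 0 (unlocked)

    def lock_item(self, x):
        with self._cv:
            # If LOCK(X) = 1 the transaction waits; when it is 0,
            # set it to 1 and let the transaction access the item.
            while self._locked.get(x, 0) == 1:
                self._cv.wait()
            self._locked[x] = 1

    def unlock_item(self, x):
        with self._cv:
            self._locked[x] = 0    # set LOCK(X) back to 0
            self._cv.notify_all()  # wake any transactions waiting on the item

table = BinaryLockTable()
table.lock_item("X")
# ... read/write item X under mutual exclusion ...
table.unlock_item("X")
```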
37. Continue..
2. Shared / Exclusive Locking :
Shared lock :
A shared lock is placed when we are reading the data; multiple shared locks can
be placed on a data item, but while a shared lock is held no exclusive lock can be
placed. These locks are referred to as read locks and denoted by 'S'.
If a transaction T has obtained a shared lock on data item X, then T can read X, but
cannot write X. Multiple shared locks can be placed simultaneously on a data item.
For example, when two transactions are reading Steve’s account balance, let
them read by placing shared locks; but if at the same time another transaction wants
to update Steve’s account balance by placing an exclusive lock, do not allow it
until the reading is finished.
38. Continue..
Exclusive lock :
An exclusive lock is placed when we want to both read and write the data. Once
this lock is placed on a data item, no other lock (shared or exclusive) can be
placed on it until the exclusive lock is released.
For example, when a transaction wants to update Steve’s account balance,
let it do so by placing an X lock on it; but if a second transaction wants to read
the data (S lock), don’t allow it, and if another transaction wants to write the
data (X lock), don’t allow that either.
These locks are referred to as write locks and denoted by 'X'.
If a transaction T has obtained an exclusive lock on data item X, then T can both
read and write X. Only one exclusive lock can be placed on a data item at a time.
This means multiple transactions do not modify the same data simultaneously.
39. Continue..
Lock Compatibility Matrix

          |   S   |   X
    ------+-------+-------
      S   | True  | False
      X   | False | False

How to read this matrix:
There are two rows. The first row says that when an S lock is held, another S lock
can be acquired, so that entry is marked True, but no exclusive lock can be
acquired, so that entry is marked False.
The second row says that when an X lock is held, neither an S nor an X lock can be
acquired, so both entries are marked False.
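The matrix translates directly into a grant check; this is a sketch, and the COMPATIBLE table and can_grant function are illustrative names, not a standard API.

```python
# Lock compatibility from the matrix above: only S/S is compatible.
COMPATIBLE = {('S', 'S'): True, ('S', 'X'): False,
              ('X', 'S'): False, ('X', 'X'): False}

def can_grant(requested, held_modes):
    """Grant `requested` ('S' or 'X') only if compatible with every held lock."""
    return all(COMPATIBLE[(held, requested)] for held in held_modes)

print(can_grant('S', ['S', 'S']))  # True: shared locks coexist
print(can_grant('X', ['S']))       # False: exclusive conflicts with a reader
print(can_grant('S', []))          # True: nothing held, grant freely
```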
40. TIME STAMP BASED PROTOCOL
A time stamp is used to link a time with some event, in particular a
transaction. To ensure serializability, we associate each transaction with a time
stamp; in simple words, we order the transactions based on their time of
arrival, and there is no deadlock.
For each data item, two time stamps are maintained.
Read time stamp − time stamp of the youngest transaction that has performed a
read operation on the data item.
Write time stamp − time stamp of the youngest transaction that has performed a
write operation on the data item.
Let transaction T’s time stamp be denoted by TS(T), the read time stamp of
data item X by R-timestamp(X), and the write time stamp of data item X by
W-timestamp(X).
41. TIMESTAMP BASED PROTOCOL
The protocol works as follows −
• If a transaction issues a read operation:
If TS(T) < W-timestamp(X), then the read request is rejected;
else the read is executed and the time stamp is updated.
• If a transaction issues a write operation:
If TS(T) < R-timestamp(X) or TS(T) < W-timestamp(X), then the write request
is rejected;
else the write is executed and the time stamp is updated.
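The two rules can be sketched as follows; the read/write functions, the per-item dictionaries, and the default timestamp of 0 are assumptions of this sketch.

```python
# Minimal sketch of basic timestamp ordering.
R_TS, W_TS = {}, {}   # per-item read/write timestamps, defaulting to 0

def read(ts, x):
    # Reject the read if a younger transaction already wrote X.
    if ts < W_TS.get(x, 0):
        return False                      # read request rejected; T rolls back
    R_TS[x] = max(R_TS.get(x, 0), ts)     # update the read time stamp
    return True

def write(ts, x):
    # Reject if a younger transaction already read or wrote X.
    if ts < R_TS.get(x, 0) or ts < W_TS.get(x, 0):
        return False                      # write request rejected
    W_TS[x] = ts                          # update the write time stamp
    return True

write(5, 'A')        # transaction with TS 5 writes A
print(read(3, 'A'))  # False: TS 3 is older than A's write timestamp (5)
print(read(7, 'A'))  # True: TS 7 is younger, the read is allowed
```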
42. TIMESTAMP BASED PROTOCOL
Thomas' Write Rule
Under basic time-stamp ordering, if TS(Ti) < W-timestamp(X), the write
operation is rejected and Ti is rolled back.
Thomas' write rule modifies this: instead of rolling Ti back, the obsolete
'write' operation itself is simply ignored. This modification of the
time-stamp ordering rules makes the schedule view serializable.
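Thomas' modification changes only the write path of the basic protocol; this sketch uses illustrative names and a default timestamp of 0.

```python
R_TS, W_TS = {}, {}   # per-item read/write timestamps, defaulting to 0

def write_thomas(ts, x):
    if ts < R_TS.get(x, 0):
        return 'rollback'   # a younger transaction already read X: Ti rolls back
    if ts < W_TS.get(x, 0):
        return 'ignored'    # Thomas' rule: the obsolete write is skipped, no rollback
    W_TS[x] = ts
    return 'applied'

print(write_thomas(5, 'X'))  # 'applied': X's write timestamp becomes 5
print(write_thomas(3, 'X'))  # 'ignored': the older write is obsolete, not rolled back
```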
43. Need of Recovery
A database is a very huge system with lots of data and transactions.
Transactions are executed against the database every second and are
critical to it. If there is any failure or crash while executing a
transaction, it is expected that no data is lost. It is necessary to revert the
changes of the transaction to the previously committed point. There are various
techniques to recover the data depending on the type of failure or crash.
Generalization of failure:
• Transaction failure
• System crash
• Disk failure
45. Classification of failure
Transaction Failure: - This is the condition where a transaction cannot continue
its execution. This type of failure affects only a few tables or processes.
The failure can be because of logical errors in the code, or because of a system
error like deadlock or unavailability of system resources to execute the transaction.
System Crash: - This can be because of hardware or software failure, or because
of external factors like power failure. In most cases, data in secondary
memory is not affected by this kind of crash, because the database has
lots of integrity checkpoints to prevent data loss from secondary memory.
Disk Failure: - These are the issues with hard disks like formation of bad sectors,
disk head crash, unavailability of disk etc.
46. Need of Recovery
When a DBMS recovers from a crash, it should maintain the following −
• It should check the states of all the transactions, which were being executed.
• A transaction may be in the middle of some operation; the DBMS must ensure
the atomicity of the transaction in this case.
• It should check whether the transaction can be completed now or it needs to
be rolled back.
• No transactions would be allowed to leave the DBMS in an inconsistent state.
47. Recovery Techniques
1. Log-based recovery (or Manual Recovery):
In this method, a log of each transaction is maintained in some stable storage, so
that in case of any failure the database can be recovered from it. But storing
the logs must be done before applying the actual transaction to the database.
Every log record will have information like which transaction is being
executed, which value has been modified to which value, and the state of the
transaction. All this log information is stored in the order of execution.
For example:
Suppose there is a transaction to modify the address of a student. Let us see
what logs are written for this transaction
48. Log-based recovery continue..
• As soon as transaction is initiated, it writes ‘start’ log.
<Tn, Start>
• When the transaction modifies the address from ‘Troy’ to ‘Fraser Town’,
another log is written to the file.
<Tn, ADDRESS, ‘Troy’, ‘Fraser Town’>
• When the transaction is completed, it writes another log to indicate end of the
transaction.
<Tn, Commit>
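The three records can be sketched as tuples appended to an in-memory list standing in for stable storage; the tuple format is assumed from the records shown above.

```python
log = []                     # stand-in for the stable-storage log file

def write_log(record):
    log.append(record)       # logs are appended in order of execution

tn = 'Tn'
write_log((tn, 'Start'))                              # transaction initiated
write_log((tn, 'ADDRESS', 'Troy', 'Fraser Town'))     # item, old value, new value
write_log((tn, 'Commit'))                             # transaction completed
print(log)
```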
49. Log-based recovery continue..
Methods of creating this log files and updating the database:
Deferred database modification: - In this method, all the log records for a
transaction are first created and stored in the stable storage system. Once they
are stored, the database is updated with the changes. In the above example, after
all three log records are created and stored in some storage system, the database
will be updated with those steps.
Immediate database modification: - After creating each log record, the database
is modified for that step immediately. In the above example, the database
is modified at each step of the log entry: after the first log entry, the
transaction hits the database to fetch the record, then the second log is entered
followed by updating the address, then the third log followed by committing the
database changes.
50. Log-based recovery continue..
Methods of creating this log files and updating the database:
Shadow paging: - This is the method where all the transactions are executed in
primary memory. Once all the transactions have completely executed, the changes
are written to the database. Hence, if there is any failure in the middle of a
transaction, it will not be reflected in the database; the database is updated
only after all the transactions complete.
51. Recovery with Concurrent Transactions
Checkpoint
Keeping and maintaining logs in real time and in a real environment may fill all
the memory space available in the system. As time passes, the log file may grow
too big to be handled at all. Checkpointing is a mechanism where all the previous
logs are removed from the system and stored permanently on a storage disk.
A checkpoint declares a point before which the DBMS was in a consistent state,
and all the transactions were committed.
Recovery
When a system with concurrent transactions crashes and recovers, it behaves in
the following manner −
• The recovery system reads the logs backwards from the end to the last
checkpoint.
• It maintains two lists, an undo-list and a redo-list.
52. Recovery with Concurrent Transactions
• If the recovery system sees a log with <Tn, Start> and <Tn, Commit>, or just
<Tn, Commit>, it puts the transaction in the redo-list.
• If the recovery system sees a log with <Tn, Start> but no commit or abort log,
it puts the transaction in the undo-list.
All the transactions in the undo-list are then undone and their logs are removed.
All the transactions in the redo-list are then redone, and their logs are saved
again.
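The classification into the two lists can be sketched as a scan over a toy log; the tuple format follows the earlier slides, and the sample transactions are assumptions of the sketch.

```python
# Sketch: after a crash, scan the log (since the last checkpoint)
# and build the redo- and undo-lists.
log = [('T1', 'Start'), ('T1', 'Commit'),
       ('T2', 'Start'),                      # T2 never committed
       ('T3', 'Start'), ('T3', 'Commit')]

started = {rec[0] for rec in log if rec[1] == 'Start'}
committed = {rec[0] for rec in log if rec[1] == 'Commit'}

redo_list = sorted(started & committed)   # started and committed: redo
undo_list = sorted(started - committed)   # started but never committed: undo

print(redo_list)  # ['T1', 'T3']
print(undo_list)  # ['T2']
```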