This presentation covers several topics from the RDBMS and DBMS subjects, including distributed database design, the architecture of distributed database processing systems, data communication concepts, and concurrency control and recovery. All topics are described briefly according to the syllabus of the BCA II and BCA III year subjects.
DDBMS, characteristics, Centralized vs. Distributed Databases, Homogeneous DDBMS, Heterogeneous DDBMS, Advantages, Disadvantages, What is a parallel database, Data fragmentation, Replication, Distributed transactions
Query Processing: the query processing problem, layers of query processing, query processing in centralized systems – parsing and translation, optimization, code generation, example; query processing in distributed systems – mapping global queries to local queries, optimization
Transaction concept, ACID properties, Objectives of transaction management, Types of transactions, Objectives of distributed concurrency control, Concurrency control anomalies, Methods of concurrency control, Serializability and recoverability, Distributed serializability, Enhanced lock-based and timestamp-based protocols, Multiple granularity, Multiversion schemes, Optimistic concurrency control techniques
A distributed database system is a collection of loosely coupled sites that are independent of each other.
Distributed transaction model
Concurrency control
2 phase commit protocol
DISTRIBUTED DATABASE WITH RECOVERY TECHNIQUES – AAKANKSHA JAIN
A distributed database is a set of multiple, logically related database systems, physically distributed over several sites and connected by a computer network, usually under the control of a central site.
Distributed database design refers to the following problem:
given a database and its workload, how should the database be split and allocated to sites so as to optimize a certain objective function?
There are two issues:
(i) Data fragmentation, which determines how the data should be fragmented.
(ii) Data allocation, which determines how the fragments should be allocated.
Distributed Database Introduction
TYPES OF DISTRIBUTED DATABASES:
1. HOMOGENEOUS DISTRIBUTED DATABASE
2. HETEROGENEOUS DISTRIBUTED DATABASE
Distributed DBMS Architectures
Architectural Models
Some of the common architectural models are −
● Client-Server Architecture for DDBMS
● Peer-to-Peer Architecture for DDBMS
● Multi-DBMS Architecture
Design issues of distributed systems –
1. Complex nature:
Distributed databases are a network of many computers present at different locations, and they provide an outstanding level of performance, availability, and reliability. Therefore, a distributed DBMS is comparatively more complex in nature than a centralized DBMS. Complex software is required for a distributed DBMS, and it must also manage data replication, which adds even more complexity.
2. Overall cost:
Various costs such as maintenance cost, procurement cost, hardware cost, network/communication costs, labour costs, etc., add up to the overall cost and make it costlier than a normal DBMS.
3. Security issues:
In a distributed database, along with controlling data redundancy, the security of the data as well as the network is a prime concern. A network can be easily attacked for data theft and misuse.
4. Integrity control:
In a vast distributed database system, maintaining data consistency is important. All changes made to data at one site must be reflected at all the sites. The communication and processing cost of enforcing data integrity is high in a distributed DBMS.
5. Lacking standards:
Although it provides effective communication and data sharing, there are still no standard rules and protocols to convert a centralized DBMS to a large distributed DBMS. This lack of standards decreases the potential of distributed DBMSs.
6. Lack of professional support:
Due to a lack of adequate communication standards, it is not possible to link different equipment produced by different vendors into a smoothly functioning network. Thus, several good resources may not be available to the users of the network.
7. Data design complexity:
The designer must additionally decide how the data should be fragmented and where the fragments should be allocated.
• One of the most important decisions a distributed database designer has to make is data placement. Proper data placement is a crucial factor in determining the success of a distributed database system.
• There are four basic alternatives: namely,
– centralized,
– replicated,
– partitioned, and
– hybrid.
A distributed database consists of multiple databases that are connected with each other and spread across different physical locations. The data stored at each physical location can thus be managed independently of the other locations. Communication between the databases at different physical locations is carried out over a computer network.
A distributed database is a database that is not limited to one computer system.
It is like a database that consists of two or more files located in different computers or sites either on the same network or on an entirely different network.
Instead of storing all of the data in one database, data is divided and stored at different locations or sites which do not share any physical component.
The data can be easily accessed, managed, modified, updated, controlled, and organized in a database.
In a parallel database architecture, there are multiple processors that control multiple disk units containing the database.
The database may be partitioned on the disks, or possibly replicated.
If fault tolerance is a high priority, the system can be set up so that each component can serve as a backup for the other components of the same type, taking over the functions of any similar component that fails.
Parallel database system architectures can be shared memory, shared-disk, shared-nothing, or hierarchical, which is also called cluster.
3. Distributed database design
A distributed database design consists of multiple, logically related
database systems, physically distributed over several sites and connected
by a computer network, usually under the control of a central site.
Distributed database design refers to the following problem:
given a database and its workload, how should the database be split
and allocated to sites so as to optimize a certain objective function?
There are two issues:
(i) Data fragmentation, which determines how the data should be
fragmented.
(ii) Data allocation, which determines how the fragments should be
allocated.
4. Architecture of Distributed Processing System
Distributed Processing architectures are generally developed depending
on three parameters −
Distribution − It states the physical distribution of data across the
different sites.
Autonomy − It indicates the distribution of control of the database
system and the degree to which each constituent DBMS can operate
independently.
Heterogeneity − It refers to the uniformity or dissimilarity of the data
models, system components and databases.
6. Client-Server Architecture for DDBMS
This is a two-level architecture where the functionality is divided into
servers and clients. The server functions primarily encompass data
management, query processing, optimization, and transaction
management. Client functions mainly include the user interface; however,
clients also perform some functions such as consistency checking and
transaction management.
The two client-server configurations are −
1. Single Server Multiple Client
2. Multiple Server Multiple Client
7. Peer-to-Peer Architecture for DDBMS
In these systems, each peer acts both as a client and a server for
imparting database services. The peers share their resources with other
peers and coordinate their activities.
This architecture generally has four levels of schemas −
Global Conceptual Schema − Depicts the global logical view of data.
Local Conceptual Schema − Depicts logical data organization at each
site.
Local Internal Schema − Depicts physical data organization at each
site.
External Schema − Depicts user view of data.
9. Multi-DBMS Architectures
This is an integrated database system formed by a collection of two or
more autonomous database systems.
Multi-DBMS can be expressed through six levels of schemas −
1. Multi-database View Level − Depicts multiple user views comprising
subsets of the integrated distributed database.
2. Multi-database Conceptual Level − Depicts the integrated multi-database
that comprises the global logical multi-database structure definitions.
3. Multi-database Internal Level − Depicts the data distribution across
different sites and multi-database to local data mapping.
4. Local database View Level − Depicts public view of local data.
5. Local database Conceptual Level − Depicts local data organization
at each site.
6. Local database Internal Level − Depicts physical data organization
at each site.
10. Design Alternatives
The distribution design alternatives for the tables in a DDBMS are as
follows −
• Non-replicated and non-fragmented
• Fully replicated
• Partially replicated
• Fragmented
• Mixed
12. Non-replicated & Non-fragmented
In this design alternative, different tables are placed at different sites.
Data is placed so that it is at a close proximity to the site where it is used
most. It is most suitable for database systems where the percentage of
queries needed to join information in tables placed at different sites is
low. If an appropriate distribution strategy is adopted, then this design
alternative helps to reduce the communication cost during data
processing.
13. Fully Replicated
In this design alternative, at each site, one copy of all the database
tables is stored. Since each site has its own copy of the entire database,
queries are very fast and require negligible communication cost. On the
other hand, the massive redundancy in data incurs a huge cost during
update operations. Hence, this is suitable for systems where a large
number of queries must be handled while the number of database
updates is low.
14. Partially Replicated
Copies of tables or portions of tables are stored at different sites. The
distribution of the tables is done in accordance with the frequency of
access. This takes into consideration the fact that the frequency of
accessing the tables varies considerably from site to site. The number of
copies of a table (or portion) depends on how frequently the access
queries execute and on the sites that generate those queries.
15. Fragmented
In this design, a table is divided into two or more pieces referred to as
fragments or partitions, and each fragment can be stored at different
sites. This considers the fact that it seldom happens that all data stored
in a table is required at a given site. Moreover, fragmentation increases
parallelism and provides better disaster recovery. Here, there is only one
copy of each fragment in the system, i.e. no redundant data.
The three fragmentation techniques are −
• Vertical fragmentation
• Horizontal fragmentation
• Hybrid fragmentation
17. Mixed Distribution
This is a combination of fragmentation and partial replications. Here, the
tables are initially fragmented in any form (horizontal or vertical), and
then these fragments are partially replicated across the different sites
according to the frequency of accessing the fragments.
18. Fragmentation
Fragmentation is the task of dividing a table into a set of smaller tables.
The subsets of the table are called fragments. Fragmentation can be of
three types: horizontal, vertical, and hybrid (combination of horizontal
and vertical). Horizontal fragmentation can further be classified into two
techniques: primary horizontal fragmentation and derived horizontal
fragmentation.
Fragmentation should be done in such a way that the original table can be
reconstructed from the fragments whenever required. This requirement is
called “reconstructiveness.”
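To make the reconstructiveness requirement concrete, the following Python sketch horizontally fragments a small in-memory STUDENT relation and rebuilds it by a union of the fragments. The sample rows, column names and function names are illustrative assumptions, not part of the original slides.

# Sketch: primary horizontal fragmentation and reconstruction by union.
student = [
    {"Regd_No": 1, "Name": "Asha",  "Course": "Computer Science"},
    {"Regd_No": 2, "Name": "Ravi",  "Course": "Commerce"},
    {"Regd_No": 3, "Name": "Meena", "Course": "Computer Science"},
]

def horizontal_fragment(rows, predicate):
    # One fragment: the tuples of `rows` that satisfy `predicate`.
    return [row for row in rows if predicate(row)]

# Two horizontal fragments, e.g. stored at two different sites.
frag_cs     = horizontal_fragment(student, lambda r: r["Course"] == "Computer Science")
frag_others = horizontal_fragment(student, lambda r: r["Course"] != "Computer Science")

# Reconstructiveness: the union of the fragments gives back the original table.
reconstructed = frag_cs + frag_others
assert sorted(reconstructed, key=lambda r: r["Regd_No"]) == student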
19. Advantages of Fragmentation
• Since data is stored close to the site of usage, the efficiency of the
database system is increased.
• Local query optimization techniques are sufficient for most queries
since data is locally available.
• Since irrelevant data is not available at the sites, security and privacy
of the database system can be maintained.
Disadvantages of Fragmentation
• When data from different fragments is required, the access costs
may be very high.
• In case of recursive fragmentation, the job of reconstruction will
need expensive techniques.
• Lack of back-up copies of data at different sites may render the
database ineffective in case of failure of a site.
20. Vertical Fragmentation
In vertical fragmentation, the fields or columns of a table are grouped
into fragments. In order to maintain reconstructiveness, each fragment
should contain the primary key field(s) of the table. Vertical fragmentation
can be used to enforce privacy of data.
For example, let us consider that a University database keeps records of
all registered students in a STUDENT table having the following schema:
STUDENT (Regd_No, Name, Course, Address, Semester, Fees, Marks)
Now, the fees details are maintained in the accounts section. In this case, the
designer will fragment the database as follows −
21. Vertical Fragmentation
CREATE TABLE STD_FEES AS
SELECT Regd_No, Fees
FROM STUDENT;
Reconstruction of vertical fragmentation is performed by using Full
Outer Join operation on fragments.
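The reconstruction of vertical fragments by a join on the primary key can be sketched in Python as follows. The fragment names follow the slide's example; the sample rows and helper function are illustrative assumptions.

# Sketch: vertical fragmentation of STUDENT into STD_INFO and STD_FEES,
# then reconstruction by joining the fragments on the primary key Regd_No.
student = [
    {"Regd_No": 1, "Name": "Asha", "Course": "CS",       "Fees": 5000},
    {"Regd_No": 2, "Name": "Ravi", "Course": "Commerce", "Fees": 4000},
]

# Each vertical fragment keeps the primary key so the table can be rebuilt.
std_info = [{"Regd_No": r["Regd_No"], "Name": r["Name"], "Course": r["Course"]} for r in student]
std_fees = [{"Regd_No": r["Regd_No"], "Fees": r["Fees"]} for r in student]

def join_on_key(left, right, key):
    # Join two vertical fragments on the shared key column.
    right_by_key = {row[key]: row for row in right}
    return [{**l, **right_by_key[l[key]]} for l in left]

reconstructed = join_on_key(std_info, std_fees, "Regd_No")
assert reconstructed == student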
22. Horizontal Fragmentation
Horizontal fragmentation groups the tuples of a table in accordance with the values
of one or more fields. Horizontal fragmentation should also conform to the rule of
reconstructiveness. Each horizontal fragment must have all columns of the
original base table.
For example, in the student schema, if the details of all students of the Computer
Science course need to be maintained at the School of Computer Science,
then the designer will horizontally fragment the database as follows −
CREATE TABLE COMP_STD AS
SELECT * FROM STUDENT
WHERE Course = 'Computer Science';
Reconstruction of horizontal fragmentation can be performed using UNION
operation on fragments.
23. Hybrid Fragmentation
In hybrid fragmentation, a combination of horizontal and vertical
fragmentation techniques is used. This is the most flexible
fragmentation technique since it generates fragments with minimal
extraneous information. However, reconstruction of the original table is
often an expensive task.
Hybrid fragmentation can be done in two alternative ways −
• At first, generate a set of horizontal fragments; then generate vertical
fragments from one or more of the horizontal fragments.
• At first, generate a set of vertical fragments; then generate horizontal
fragments from one or more of the vertical fragments.
25. Distribution Transparency
Distribution transparency is the property of distributed databases by the
virtue of which the internal details of the distribution are hidden from the
users. The DDBMS designer may choose to fragment tables, replicate
the fragments and store them at different sites. However, since users are
oblivious of these details, they find the distributed database easy to use
like any centralized database.
The three dimensions of distribution transparency are −
• Location transparency
• Fragmentation transparency
• Replication transparency
26. Hybrid Fragmentation
• Hybrid fragmentation can be achieved by performing horizontal and vertical
partitioning together.
• Mixed fragmentation is a grouping of rows and columns of a relation.
Example: Consider the following table, which consists of employee information.
Emp_ID | Emp_Name | Emp_Address | Emp_Age | Emp_Salary
101    | Surendra | Baroda      | 25      | 15000
102    | Jaya     | Pune        | 37      | 12000
103    | Jayesh   | Pune        | 47      | 10000
27. Hybrid Fragmentation
Fragmentation1:
SELECT * FROM EMPLOYEE WHERE Emp_Age < 40
Fragmentation2:
SELECT * FROM EMPLOYEE WHERE Emp_Address = 'Pune' AND Emp_Salary < 14000
Reconstruction of Hybrid Fragmentation:
The original relation in hybrid fragmentation is reconstructed by performing
UNION and FULL OUTER JOIN operations on the fragments.
28. Data communication concepts
Data communication refers to the exchange of data between a source and
a receiver via some form of transmission medium, such as a wire cable.
Data communication is said to be local if communicating devices are in the
same building or a similarly restricted geographical area.
A data communication system may collect data from remote locations
through data transmission circuits, and then outputs processed results to
remote locations. The different data communication techniques which are
presently in widespread use evolved gradually either to improve the data
communication techniques already existing or to replace the same with
better options and features.
30. Components of data communication system
A Communication system has following components:
1. Message: It is the information or data to be communicated. It can consist
of text, numbers, pictures, sound or video or any combination of these.
2. Sender: It is the device/computer that generates and sends the
message.
3. Receiver: It is the device or computer that receives the message. The
location of receiver computer is generally different from the sender
computer. The distance between sender and receiver depends upon the
types of network used in between.
4. Medium: It is the channel or physical path through which the message is
carried from the sender to the receiver. The medium can be wired, like
twisted pair wire, coaxial cable, and fiber-optic cable, or wireless, like laser,
radio waves, and microwaves.
31. Concurrency Control and Recovery
Concurrency control (CC) is a process to ensure that data is updated
correctly and appropriately when multiple transactions are concurrently
executed in DBMS (Connolly & Begg, 2015).
Distributed Databases encounter a number of concurrency control and
recovery problems which are not present in centralized databases.
Some of them are listed below:
• Dealing with multiple copies of data items
• Failure of individual sites
• Communication link failure
• Distributed commit
• Distributed deadlock
33. Concurrency Control
1. Dealing with multiple copies of data items:
The concurrency control must maintain global consistency. Likewise the recovery
mechanism must recover all copies and maintain consistency after recovery.
2. Failure of individual sites:
Database availability must not be affected due to the failure of one or two sites
and the recovery scheme must recover them before they are available for use.
3. Communication link failure:
This failure may create a network partition, which would affect database availability
even though all database sites may be running.
4. Distributed commit:
A transaction may be fragmented into sub-transactions that are executed by a
number of sites. This requires a two- or three-phase commit approach for
transaction commit (see the sketch after this list).
34. Concurrency Control
5. Distributed deadlock:
Since transactions are processed at multiple sites, two or more sites may get
involved in deadlock. This must be resolved in a distributed manner.
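As an illustration of the two-phase commit approach mentioned under item 4, here is a simplified, in-memory Python sketch of a 2PC coordinator and its participants. The class and function names are assumptions made for illustration; a real implementation adds logging, timeouts and failure recovery.

# Sketch of the two-phase commit (2PC) protocol for a distributed transaction.
class Participant:
    # A site executing one fragment of a distributed transaction.
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "ACTIVE"

    def prepare(self):
        # Phase 1: vote YES only if the local sub-transaction can be made durable.
        self.state = "PREPARED" if self.can_commit else "ABORTED"
        return self.can_commit

    def commit(self):
        self.state = "COMMITTED"

    def abort(self):
        self.state = "ABORTED"

def two_phase_commit(participants):
    # Phase 1 (voting): the coordinator asks every participant to prepare.
    votes = [p.prepare() for p in participants]
    # Phase 2 (decision): commit only if every participant voted YES, else abort everywhere.
    if all(votes):
        for p in participants:
            p.commit()
        return "COMMITTED"
    for p in participants:
        p.abort()
    return "ABORTED"

sites = [Participant("Site A"), Participant("Site B"), Participant("Site C", can_commit=False)]
print(two_phase_commit(sites))       # ABORTED, because Site C voted NO in phase 1
print([p.state for p in sites])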
Concurrency control protocols can be broadly divided into two categories −
• Lock based protocols
• Time stamp based protocols
35. Concurrency Control Protocol
1. Lock-based Protocols
Database systems equipped with lock-based protocols use a mechanism by which
any transaction cannot read or write data until it acquires an appropriate lock on it.
Locks are of two kinds −
• Binary Locks − A lock on a data item can be in two states; it is either locked
or unlocked.
• Shared/exclusive − This type of locking mechanism differentiates the locks
based on their uses. If a lock is acquired on a data item to perform a write
operation, it is an exclusive lock. Allowing more than one transaction to write on
the same data item would lead the database into an inconsistent state. Read
locks are shared because no data value is being changed.
36. Continue..
1. Binary Locks:
A lock is a kind of mechanism that ensures that the integrity of data is maintained.
A binary lock can have two states or values: locked and unlocked (or 1 and 0, for
simplicity). A distinct lock is associated with each database item X.
If the value of the lock on X is 1, item X cannot be accessed by a database
operation that requests the item. If the value of the lock on X is 0, the item can be
accessed when requested. We refer to the current value (or state) of the lock
associated with item X as LOCK(X).
There are two operations in binary locking:
(i) Lock_item(X):
(ii) Unlock_item (X):
37. Continue..
1. Lock_item(X):
A transaction requests access to an item X by first issuing a lock_item(X)
operation. If LOCK(X) = 1, the transaction is forced to wait. If LOCK(X) = 0,
it is set to 1 (the transaction locks the item) and the transaction is allowed to
access item X.
2. Unlock_item (X):
When the transaction is through using the item, it issues an unlock_item(X)
operation, which sets LOCK(X) to 0 (unlocks the item) so that X may be accessed
by other transactions. Hence, a binary lock enforces mutual exclusion on the data
item; i.e., only one transaction can hold the lock at a time.
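A minimal Python sketch of the lock_item(X) and unlock_item(X) operations described above; instead of forcing a transaction to wait, lock_item simply returns False here, and the names are illustrative assumptions.

# Sketch of binary locking: LOCK(X) is either 0 (unlocked) or 1 (locked).
LOCK = {}  # LOCK[X] -> 0 or 1 for each data item X

def lock_item(x):
    # Try to lock item x; return True if the lock was acquired.
    if LOCK.get(x, 0) == 1:
        return False      # item already locked: the transaction must wait
    LOCK[x] = 1           # lock acquired, item x may now be accessed
    return True

def unlock_item(x):
    # Release the lock on item x so that other transactions may access it.
    LOCK[x] = 0

# Mutual exclusion: only one transaction at a time can hold the lock on X.
assert lock_item("X") is True
assert lock_item("X") is False   # a second transaction is forced to wait
unlock_item("X")
assert lock_item("X") is True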
38. Continue..
2. Shared / Exclusive Locking :
Shared lock :
A shared lock is placed when we are reading the data; multiple shared locks can be
placed on the data, but while a shared lock is held no exclusive lock can be
placed. These locks are referred to as read locks and denoted by 'S'.
If a transaction T has obtained a shared lock on data item X, then T can read X but
cannot write X. Multiple shared locks can be placed simultaneously on a data item.
For example, when two transactions are reading Steve's account balance, let
them both read by placing shared locks, but if at the same time another transaction
wants to update Steve's account balance by placing an exclusive lock, do not
allow it until the reading is finished.
39. Continue..
Exclusive lock :
An exclusive lock is placed when we want to both read and write the data. This lock
allows both the read and the write operation. Once this lock is placed on the data,
no other lock (shared or exclusive) can be placed on the data until the exclusive
lock is released.
For example, when a transaction wants to update Steve's account balance, let it do
so by placing an X lock on it, but if a second transaction wants to read the data
(S lock), do not allow it; if another transaction wants to write the data (X lock), do
not allow that either.
These locks are referred to as write locks and denoted by 'X'.
If a transaction T has obtained an exclusive lock on data item X, then T can read
as well as write X. Only one exclusive lock can be placed on a data item at a time.
This means multiple transactions cannot modify the same data simultaneously.
40. Continue..
Lock Compatibility Matrix
        |   S   |   X
  ------+-------+-------
    S   | True  | False
    X   | False | False
How to read this matrix:
There are two rows. The first row says that when an S lock is placed, another S lock
can be acquired, so it is marked True, but no exclusive lock can be acquired, so
that entry is marked False.
The second row says that when an X lock is acquired, neither an S nor an X lock
can be acquired, so both entries are marked False.
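The compatibility matrix above can be enforced with a small lock table, as in the following Python sketch. This is only an illustration under assumed names, not a production lock manager; conflicting requests simply return False instead of making the transaction wait.

# Sketch of shared/exclusive (S/X) locking driven by the compatibility matrix.
COMPATIBLE = {("S", "S"): True, ("S", "X"): False,
              ("X", "S"): False, ("X", "X"): False}

locks = {}  # data item -> list of (transaction, mode) currently holding a lock

def request_lock(txn, item, mode):
    # Grant the lock only if `mode` is compatible with every lock already held on `item`.
    held = locks.setdefault(item, [])
    if all(COMPATIBLE[(held_mode, mode)] for _, held_mode in held):
        held.append((txn, mode))
        return True
    return False  # incompatible: the transaction would have to wait

def release_locks(txn, item):
    locks[item] = [(t, m) for t, m in locks.get(item, []) if t != txn]

# Two readers may share Steve's balance, but a writer must wait.
assert request_lock("T1", "steve_balance", "S") is True
assert request_lock("T2", "steve_balance", "S") is True   # S is compatible with S
assert request_lock("T3", "steve_balance", "X") is False  # X conflicts with the held S locks
release_locks("T1", "steve_balance")
release_locks("T2", "steve_balance")
assert request_lock("T3", "steve_balance", "X") is True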
41. TIME STAMP BASED PROTOCOL
A timestamp is used to link a time with some event, or more particularly with a
transaction. To ensure serializability, we associate each transaction with a time
called its timestamp. In simple words, we order the transactions based on their
time of arrival, and there is no deadlock.
For each data item, two timestamps are maintained:
Read timestamp – the timestamp of the youngest transaction that has performed a
read operation on the data item.
Write timestamp – the timestamp of the youngest transaction that has performed a
write operation on the data item.
Let transaction T's timestamp be denoted by TS(T), the read timestamp of data
item X be denoted by R-timestamp(X), and the write timestamp of data item X be
denoted by W-timestamp(X).
42. TIMESTAMP BASED PROTOCOL
The protocol works as follows −
• If a transaction T issues a read operation on X:
If TS(T) < W-timestamp(X), then
the read request is rejected (and T is rolled back);
else the read is executed and R-timestamp(X) is updated.
• If a transaction T issues a write operation on X:
If TS(T) < R-timestamp(X) or TS(T) < W-timestamp(X), then
the write request is rejected (and T is rolled back);
else the write is executed and W-timestamp(X) is updated.
43. TIMESTAMP BASED PROTOCOL
Thomas' Write Rule
Under the basic timestamp-ordering rule, if TS(Ti) < W-timestamp(X), the write
operation is rejected and Ti is rolled back.
The timestamp-ordering rules can be modified to make the schedule view
serializable: instead of rolling Ti back, the outdated 'write' operation itself is
ignored.
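The read and write rules of the previous slide, together with Thomas' write rule, can be sketched in Python as follows. This is a simplified single-copy illustration with assumed names, not a complete scheduler.

# Sketch of basic timestamp ordering with Thomas' write rule.
# R_TS[X] / W_TS[X] hold the read / write timestamps of data item X.
R_TS, W_TS = {}, {}

def read_item(ts, x):
    # Read by the transaction with timestamp ts; False means the read is rejected.
    if ts < W_TS.get(x, 0):
        return False                   # a younger transaction already wrote X: reject, roll back
    R_TS[x] = max(R_TS.get(x, 0), ts)  # remember the youngest reader of X
    return True

def write_item(ts, x, thomas_rule=True):
    # Write by the transaction with timestamp ts; False means the write is rejected.
    if ts < R_TS.get(x, 0):
        return False                   # a younger transaction already read X: reject, roll back
    if ts < W_TS.get(x, 0):
        # Basic rule: reject and roll back. Thomas' write rule: the write is obsolete,
        # so it is simply ignored and the schedule remains view serializable.
        return thomas_rule
    W_TS[x] = ts
    return True

# T1 (ts = 1) and T2 (ts = 2) access data item "A":
assert write_item(2, "A") is True   # T2 writes A, so W-timestamp(A) = 2
assert read_item(1, "A") is False   # T1's late read is rejected
assert write_item(1, "A") is True   # T1's obsolete write is ignored under Thomas' rule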
44. Need of Recovery
• Media failure, e.g. disk-head crash:
Part of the persistent store is lost – it needs to be restored.
Transactions in progress may be using this area – abort uncommitted transactions.
• System failure, e.g. crash – main memory lost:
The persistent store is not lost but may have been changed by uncommitted
transactions.
Also, committed transactions' effects may not yet have reached persistent objects.
• Transaction abort:
Need to undo any changes made by the aborted transaction.
45. Need of Recovery
When a DBMS recovers from a crash, it should maintain the following −
• It should check the states of all the transactions, which were being executed.
• A transaction may be in the middle of some operation; the DBMS must ensure
the atomicity of the transaction in this case.
• It should check whether the transaction can be completed now or it needs to
be rolled back.
• No transactions would be allowed to leave the DBMS in an inconsistent state.
46. Recovery with Concurrent Transactions
Checkpoint
Keeping and maintaining logs in real time and in real environment may fill out all
the memory space available in the system. As time passes, the log file may grow
too big to be handled at all. Checkpoint is a mechanism where all the previous logs
are removed from the system and stored permanently in a storage disk.
Checkpoint declares a point before which the DBMS was in consistent state, and
all the transactions were committed.
Recovery
When a system with concurrent transactions crashes and recovers, it behaves in
the following manner −
• The recovery system reads the logs backwards from the end to the last
checkpoint.
• It maintains two lists, an undo-list and a redo-list.
47. Recovery with Concurrent Transactions
• If the recovery system sees a log with <Tn, Start> and <Tn, Commit> or just
<Tn, Commit>, it puts the transaction in the redo-list.
• If the recovery system sees a log with <Tn, Start> but no commit or abort log
found, it puts the transaction in undo-list.
All the transactions in the undo-list are then undone and their logs are removed.
All the transactions in the redo-list are then redone from their previous logs
before their logs are saved.
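A simplified Python sketch of this recovery behaviour: the log is scanned backwards from the end to the last checkpoint, and each transaction is placed on the redo-list or the undo-list. The log record format used here is an assumption made for illustration.

# Sketch: build the undo-list and redo-list by reading the log backwards
# from the end to the last checkpoint.
log = [
    ("CHECKPOINT",),
    ("T1", "START"), ("T1", "COMMIT"),
    ("T2", "START"),                     # T2 never committed or aborted
    ("T3", "START"), ("T3", "COMMIT"),
]

def build_recovery_lists(log):
    redo_list, undo_list, seen = [], [], set()
    for record in reversed(log):         # read backwards until the last checkpoint
        if record == ("CHECKPOINT",):
            break
        txn, action = record
        if txn in seen:
            continue
        seen.add(txn)
        if action == "COMMIT":
            redo_list.append(txn)        # <Tn, Start> ... <Tn, Commit> found: redo
        else:
            undo_list.append(txn)        # started but no commit/abort found: undo
    return redo_list, undo_list

redo, undo = build_recovery_lists(log)
print("redo-list:", redo)   # ['T3', 'T1']
print("undo-list:", undo)   # ['T2']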