This document discusses database system architectures and distributed database systems. It covers transaction server systems, distributed database definitions, promises of distributed databases, complications introduced, and design issues. It also provides examples of horizontal and vertical data fragmentation and discusses parallel database architectures, components, and data partitioning techniques.
The document discusses indexing and hashing techniques in database management systems. It begins by explaining the basic concept of indexing, noting that indexes work similarly to book indexes by allowing efficient searching for records. It then lists several factors for evaluating indexing techniques, such as access time, insertion/deletion time, and space overhead. The document goes on to explain multi-level indexing with an example involving multiple index levels to handle very large files. It also differentiates between dense and sparse indexes, noting sparse indexes require less space and maintenance overhead. The document concludes by explaining hash file organization with an example using a hash function to map records to disk blocks.
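The hash file organization mentioned above can be sketched as follows. This is a minimal illustration, not code from the summarized document; the bucket count and the record fields are assumptions chosen for the example.

```python
# Minimal sketch of hash file organization: a hash function maps each
# record's search key to one of a fixed number of blocks (buckets), so a
# lookup scans only one bucket instead of the whole file.
NUM_BUCKETS = 8  # assumed bucket count, for illustration only

def bucket_for(key: str) -> int:
    """Hash the search key to a bucket number in [0, NUM_BUCKETS)."""
    return sum(ord(c) for c in key) % NUM_BUCKETS

buckets = {i: [] for i in range(NUM_BUCKETS)}

def insert(key: str, record: dict) -> None:
    buckets[bucket_for(key)].append((key, record))

def lookup(key: str) -> list:
    """Scan only the one bucket the hash function points at."""
    return [rec for k, rec in buckets[bucket_for(key)] if k == key]

# Hypothetical account records, in the spirit of the example in the text.
insert("A-217", {"branch": "Brighton", "balance": 750})
insert("A-101", {"branch": "Downtown", "balance": 500})
```

A real hash file must also handle bucket overflow (e.g., overflow chains), which this sketch omits.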
Temporal database, Multimedia database, Access control, Flow control by Pooja Dixit
This document discusses temporal databases, multimedia databases, access control, and flow control. It defines temporal databases as storing data related to time instances and offering temporal data types. Multimedia databases are described as collections of interrelated multimedia data like text, graphics, images, video and audio. The document outlines different types of access control including mandatory, discretionary and role-based access control. It also defines flow control as managing data flow between devices to prevent overflow.
ADVANCED DATABASE MANAGEMENT SYSTEM CONCEPTS & ARCHITECTURE by Vikas Jagtap
The data that indicates the earth location (latitude and longitude, or height and depth) of these rendered objects is known as spatial data.
When the map is rendered, this spatial data is used to project the locations of the objects onto a two-dimensional piece of paper.
Spatial data management systems, such as GIS, are designed to make the storage, retrieval, and manipulation of spatial data (i.e., points, lines, and polygons) easier and more natural for users.
While typical databases can understand various numeric and character data types, additional functionality needs to be added before a database can process spatial data types.
These types are typically called geometry or feature types.
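As a hedged sketch of what a spatial (geometry) type adds beyond plain numeric and character columns, the classes below are hypothetical and not taken from any particular GIS product:

```python
# Illustrative spatial types: a Point (latitude, longitude) and a
# rectangular region, with a containment predicate -- the kind of
# operation an ordinary numeric/character type system lacks.
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    lat: float   # latitude in degrees
    lon: float   # longitude in degrees

@dataclass(frozen=True)
class Rect:
    min_lat: float
    min_lon: float
    max_lat: float
    max_lon: float

    def contains(self, p: Point) -> bool:
        return (self.min_lat <= p.lat <= self.max_lat and
                self.min_lon <= p.lon <= self.max_lon)

# A simple spatial query: which stored points fall inside a region?
points = [Point(51.5, -0.12), Point(48.85, 2.35)]  # illustrative coordinates
region = Rect(51.0, -1.0, 52.0, 1.0)
inside = [p for p in points if region.contains(p)]
```

Real spatial databases generalize this to lines and polygons and add spatial indexes (e.g., R-trees) so such queries do not scan every object.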
This document discusses database system concepts and architecture. It covers topics such as data models, categories of data models, history of data models including relational, network, hierarchical and object models, database schemas versus instances, three-schema architecture, data independence, DBMS languages including DDL, DML, and interfaces. Database system utilities are also mentioned.
Components of a DDBMS: computer workstations or remote devices, network hardware and software components, communications media, the transaction processor (TP), and the data processor (DP).
4. The Relational Data Model and Relational Database Constraints by Kumar
The document discusses the relational data model and constraints in relational databases. It begins by defining key concepts in the relational model such as relations, tuples, attributes, domains and relation schemas. It then covers relational constraints including key constraints, entity integrity constraints, and referential integrity constraints. Examples are provided to illustrate these concepts and constraints. The chapter aims to provide an overview of the formal relational model and constraints that must hold in relational databases.
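Two of the constraints named above can be sketched directly; the EMPLOYEE and DEPARTMENT relations below are illustrative assumptions, not data from the summarized chapter.

```python
# Sketch of two relational constraints over illustrative relations:
# a key constraint (no two tuples share a primary-key value) and a
# referential integrity constraint (every foreign-key value must match
# a primary-key value in the referenced relation, or be None).
department = [{"dnumber": 1, "dname": "Research"},
              {"dnumber": 4, "dname": "Admin"}]
employee = [{"ssn": "123", "name": "Smith", "dno": 1},
            {"ssn": "456", "name": "Wong", "dno": 4}]

def key_constraint_holds(rel, key):
    values = [t[key] for t in rel]
    return len(values) == len(set(values))

def referential_integrity_holds(rel, fk, ref_rel, ref_key):
    ref_values = {t[ref_key] for t in ref_rel}
    return all(t[fk] is None or t[fk] in ref_values for t in rel)

ok_key = key_constraint_holds(employee, "ssn")
ok_ref = referential_integrity_holds(employee, "dno", department, "dnumber")
```

In a real RDBMS these checks are declared in the schema (PRIMARY KEY, FOREIGN KEY) and enforced by the engine on every insert and update.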
A distributed database system is a database in which portions of the database are stored on multiple computers within a network. This provides advantages like reliability if one site crashes, and speed since information is distributed rather than centralized. However, proper hardware and software is needed to connect the distributed sites, and there may be connection errors that impact users.
This document provides an overview of database systems and concepts. It covers topics such as the role of databases and database management systems, data models, database design principles, SQL, database performance tuning, distributed databases, and data warehousing. The document is organized into 13 chapters that progress from introductory database topics to more advanced concepts. It includes definitions of key terms, descriptions of different data models and database types, and explanations of the database design process.
This document provides an introduction to database management systems (DBMS). It discusses key concepts such as database models including hierarchical, network, relational and entity-relationship models. It also covers database planning, design, implementation and maintenance. Specific topics covered include data modeling, database normalization, query languages, transaction management and database administration.
The document discusses the role and responsibilities of a database administrator (DBA). A DBA has centralized control over the database and is responsible for its creation, modification, and maintenance. Key responsibilities of a DBA include deciding the database's content and storage structure, providing support to users, defining backup and recovery strategies, setting security and integrity checks, and monitoring performance and making adjustments in response to changing requirements.
Distributed database management systems (DDBMS) allow data to be spread across multiple computer sites connected by a network. A DDBMS provides location transparency so users can access data without knowing its physical location. It also coordinates transactions that involve data stored at multiple sites. DDBMS architectures include transaction managers, data managers, and transaction coordinators to process transactions and subtransactions across distributed data.
The document discusses various techniques for image compression. It describes how image compression aims to reduce redundant data in images to decrease file size for storage and transmission. It discusses different types of redundancy like coding, inter-pixel, and psychovisual redundancy that compression algorithms target. Common compression techniques described include transform coding, predictive coding, Huffman coding, and Lempel-Ziv-Welch (LZW) coding. Key aspects like compression ratio, mean bit rate, objective and subjective quality metrics are also covered.
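Of the techniques listed, Huffman coding is compact enough to sketch; this is a minimal illustrative implementation, not the one from the summarized document.

```python
# Minimal Huffman coding sketch: build a prefix-free code from symbol
# frequencies so that frequent symbols get shorter codewords (this is
# how coding redundancy is reduced).
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict:
    freq = Counter(text)
    if len(freq) == 1:                        # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, unique tiebreak, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)       # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
```

For the input "aaaabbc", 'a' is most frequent and therefore receives a codeword no longer than those of 'b' or 'c'.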
This document summarizes a seminar on temporal databases. It discusses the key topics covered in the seminar including an introduction to temporal databases and their features like valid time and transaction time. It also covers the problems of schema versioning that temporal databases address. The advantages include support for declarative queries and solving problems in temporal data models. Applications mentioned include financial, medical, and scheduling systems. Current research is focused on improving spatiotemporal database management systems. The conclusion is that temporal databases are an emerging concept for storing data in a time-sensitive manner and further efforts are needed to generalize databases as structures change over time.
This document provides an introduction to databases and database management systems (DBMS). It discusses key concepts such as the main components and users of a database including end users, database administrators, and designers. It also summarizes the main characteristics of the database approach like data abstraction, multiple views, and transaction processing. Some advantages of using a DBMS are controlling redundancy, restricting access, and enforcing integrity constraints. The document also outlines scenarios where a DBMS may not be needed.
This document discusses active database management systems. It defines active databases as database systems that can automatically respond to events inside or outside the system through the use of event-condition-action rules. These rules allow the database to monitor and react to specific events. The document outlines the key components of an active database architecture, including a knowledge model and execution model. It also discusses features, applications, strengths and weaknesses of active databases.
The document discusses the fundamental steps in digital image processing. It describes 7 key steps: (1) image acquisition, (2) image enhancement, (3) image restoration, (4) color image processing, (5) wavelets and multiresolution processing, (6) image compression, and (7) morphological processing. For each step, it provides brief explanations of the techniques and purposes involved in digital image processing.
Object Relational Database Management System by Saibee Alam
This presentation provides a full explanation of object-relational database management systems. It is part of advanced database management systems, an important computer science topic for UG/PG students and those preparing for competitive exams.
This document provides an overview of database systems and database management systems (DBMS). It discusses the limitations of file-based systems, how the database approach addresses these limitations, the typical components of a DBMS environment including hardware, software, data, procedures and personnel. A brief history of database systems is presented starting from the 1960s. The advantages of DBMSs like data consistency and sharing are outlined as well as some disadvantages such as complexity and costs.
The document discusses database management systems (DBMS). It defines a DBMS as system software that allows users to create, manage, and access databases. A DBMS provides a systematic way for end users to create, read, update, and delete data in a database. It also serves as an interface between databases and users or application programs, ensuring data is organized and accessible. The document outlines some key components of a DBMS, including users, data, DBMS software, and database applications. It also describes several advantages of using a DBMS, such as improved data mapping and access, reduced data redundancy, data independence and consistency, and enhanced security features.
Introduction: flat file systems, DBMS, and when RDBMS came
Codd's design: who started using it and the technology behind the design
Future of RDBMS technology; conclusion
Physical architecture of SQL Server
Features of SQL Server
This document provides an overview of different database management systems including DBMS, RDBMS, OODBMS, and ORDBMS. It defines each type of DBMS, their key features, advantages, and disadvantages. DBMS is a collection of programs that enables users to create and maintain databases. RDBMS is based on the relational model and stores data in tables with rows and columns. OODBMS brings object-oriented principles to databases, including object identity, inheritance, and encapsulation. ORDBMS puts an object-oriented front end on an RDBMS to handle new data types like images and video.
The document discusses database management systems and their advantages over traditional file systems. It covers key concepts such as:
1) Databases organize data into tables with rows and columns to allow for easier querying and manipulation of data compared to file systems which store data in unstructured files.
2) Database management systems employ concepts like normalization, transactions, concurrency and security to maintain data integrity and consistency when multiple users are accessing the data simultaneously.
3) The logical design of a database is represented by its schema, while a database instance refers to the current state of the data stored in the database tables at a given time.
The document discusses the drawbacks of using file systems to manage large amounts of shared data, such as data redundancy, inconsistency, isolation, and lack of security and crash recovery. It then introduces database management systems (DBMS) as an alternative that offers advantages like data independence, efficient access, integrity, security, concurrent access, administration, and reduced application development time. However, DBMS also have disadvantages including cost, size, complexity, and higher impact of failure.
Mobile databases allow data to be accessed from mobile devices connected over mobile networks. They replicate and synchronize data with centralized database servers. Key features include communicating with centralized servers wirelessly, managing data locally on mobile devices, and creating customized mobile apps. Popular mobile database management systems include SQLite, SQL Anywhere, and DB2 Everyplace. Choosing a suitable mobile DB requires considering factors like memory footprint, security, operating system support, and handling disconnections.
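SQLite is one of the embedded databases named above; Python's standard-library sqlite3 module gives a small local store of the kind a mobile app might keep on the device before synchronizing with a central server. The table and sync logic below are illustrative assumptions, not a real sync protocol.

```python
# Device-local store sketch using SQLite: keep rows locally, then push
# unsynced rows to a central server and mark them as synced.
import sqlite3

conn = sqlite3.connect(":memory:")   # in-memory stand-in for a device-local file
conn.execute("""CREATE TABLE notes (
                    id     INTEGER PRIMARY KEY,
                    body   TEXT NOT NULL,
                    synced INTEGER DEFAULT 0)""")
conn.execute("INSERT INTO notes (body) VALUES (?)", ("buy milk",))
conn.commit()

# A sync pass: read rows not yet pushed, (hypothetically) send them to
# the central server, then mark them as synced.
unsynced = conn.execute("SELECT id, body FROM notes WHERE synced = 0").fetchall()
if unsynced:
    placeholders = ",".join("?" * len(unsynced))
    conn.execute("UPDATE notes SET synced = 1 WHERE id IN (%s)" % placeholders,
                 [row[0] for row in unsynced])
    conn.commit()
```

Real mobile sync must also handle disconnection, conflicts between local and server edits, and retries, which this sketch deliberately leaves out.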
The document summarizes key concepts in distributed database systems including:
1) Distributed database architectures have external, conceptual, and internal views of data. Common architectures include client-server and peer-to-peer.
2) Distributed databases can be designed top-down using a global schema or bottom-up without a global schema.
3) Fragmentation and allocation distribute data across sites for performance and availability. Correct fragmentation follows completeness, reconstruction, and disjointness rules.
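The three fragmentation correctness rules above can be checked mechanically; the EMPLOYEE relation and the site predicate below are illustrative assumptions.

```python
# Horizontal fragmentation sketch: split an EMPLOYEE relation across two
# sites by a predicate, then verify the three correctness rules:
# completeness, reconstruction, and disjointness.
employee = [{"ssn": "1", "dno": 5},
            {"ssn": "2", "dno": 5},
            {"ssn": "3", "dno": 4}]

# Fragmentation predicate: dno == 5 is stored at site A, the rest at site B.
frag_a = [t for t in employee if t["dno"] == 5]
frag_b = [t for t in employee if t["dno"] != 5]
fragments = [frag_a, frag_b]

# Completeness: every tuple of the relation appears in some fragment.
complete = all(any(t in f for f in fragments) for t in employee)

# Reconstruction: the union of the fragments rebuilds the original relation.
reconstructed = [t for f in fragments for t in f]
rebuildable = (sorted(reconstructed, key=lambda t: t["ssn"]) ==
               sorted(employee, key=lambda t: t["ssn"]))

# Disjointness: no tuple is stored in more than one fragment.
disjoint = all(t not in frag_b for t in frag_a)
```

Vertical fragmentation follows the same rules, except that each fragment must carry the primary key so reconstruction is done by join rather than union.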
Distributed databases allow data to be stored across multiple computers or sites connected through a network. The data is logically interrelated but physically distributed. A distributed database management system (DDBMS) makes the distribution transparent to users and allows sites to operate autonomously while participating in global applications. Key aspects of DDBMS include distributed transactions, concurrency control, data fragmentation and replication, distributed query processing, and ensuring transparency of the distribution.
This document discusses distributed databases and distributed database management systems (DDBMS). It defines a distributed database as a logically interrelated collection of shared data physically distributed over a computer network. A DDBMS is software that manages the distributed database and makes the distribution transparent to users. The document outlines key concepts of distributed databases including data fragmentation, allocation, and replication across multiple database sites connected by a network. It also discusses reference architectures, components, design considerations, and types of transparency provided by DDBMS.
This document discusses mobile database systems and their fundamentals. It describes the conventional centralized database architecture with a client-server model. It then covers distributed database systems which partition and replicate data across multiple servers. The key aspects covered are database partitioning, partial and full replication, and how they impact data locality, consistency, reliability and other factors. Transaction processing fundamentals like atomicity, consistency, isolation and durability are also summarized.
A transaction is a collection of operations that performs a logical function like depositing or withdrawing from an account. Transactions have ACID properties - Atomicity, Consistency, Isolation, and Durability. A database system is partitioned into modules that deal with storage and querying. The storage manager provides an interface between stored data and applications, and implements functions like authorization, transactions, file management and buffering. The query processor interprets and executes queries through components like a DDL interpreter and DML compiler. Database applications typically use a two-tier or three-tier architecture with client and server machines.
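The atomicity property described above can be sketched as a transfer that either commits fully or rolls back; the in-memory accounts dictionary and the snapshot-based rollback are simplifying assumptions, not how a real storage manager implements undo.

```python
# Atomicity sketch: a transfer is a collection of operations that must
# run all-or-nothing. On failure, rolling back to a snapshot undoes any
# partial updates, leaving the database consistent.
accounts = {"A": 100, "B": 50}

def transfer(src: str, dst: str, amount: int) -> bool:
    snapshot = dict(accounts)          # point to roll back to on abort
    try:
        accounts[src] -= amount
        if accounts[src] < 0:
            raise ValueError("insufficient funds")
        accounts[dst] += amount
        return True                    # commit: both updates take effect
    except ValueError:
        accounts.clear()
        accounts.update(snapshot)      # abort: undo the partial update
        return False

transfer("A", "B", 30)    # succeeds: A loses 30, B gains 30
transfer("A", "B", 500)   # fails: rolled back, balances unchanged
```

A production system achieves the same effect with write-ahead logging rather than full snapshots, and adds isolation via concurrency control.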
This document provides an introduction to distributed databases. It defines a distributed database as a collection of logically related databases distributed over a computer network. It describes distributed computing and how distributed databases partition data across multiple computers. The document outlines different types of distributed database systems including homogeneous and heterogeneous. It also discusses distributed data storage techniques like replication, fragmentation, and allocation. Finally, it lists several advantages and objectives of distributed databases as well as some disadvantages.
This document discusses distributed databases and client-server architectures. It begins by outlining distributed database concepts like fragmentation, replication and allocation of data across multiple sites. It then describes different types of distributed database systems including homogeneous, heterogeneous, federated and multidatabase systems. Query processing techniques like query decomposition and optimization strategies for distributed queries are also covered. Finally, the document discusses client-server architecture and its various components for managing distributed databases.
Chapter-6 Distributed Database System (3).ppt by latigudata
A distributed database management system (DDBMS) governs logically related data distributed across interconnected computer systems. A DDBMS manages a distributed database while making the distribution transparent to users. Distributed databases provide advantages like improved performance through storing data closer to where it is needed, easier expansion, and increased reliability through redundancy. However, distributed databases also introduce challenges around increased complexity, lack of standards, and security concerns.
Distributed databases allow data to be shared across a computer network while being stored on multiple machines. A distributed database management system (DDBMS) allows for the management of distributed databases and makes the distribution transparent to users. Key concepts in distributed DBMS design include fragmentation, allocation, and replication of data across multiple sites. Transparency, performance, and handling failures and concurrency are important considerations for DDBMS.
The three-level ANSI-SPARC architecture model provides a conceptual framework for understanding DBMS functionality. It consists of three levels - the external level describing different user views, the conceptual level representing a common view of data, and the internal level describing physical storage. This architecture aims to achieve logical and physical data independence by mapping between levels and allowing changes to lower levels without affecting higher ones.
This document discusses distributed databases. It begins by introducing distributed database systems and their structure. Key points include that the database is split across multiple computers that communicate over a network. It then discusses the tradeoffs of distributing a database, such as increased availability but also higher complexity. The document outlines two approaches to distributing data - replication, where copies of data are stored at different sites, and fragmentation, where relations are split into pieces stored at different sites. It provides examples to illustrate these concepts.
This document discusses concurrency control in distributed database systems. It begins by defining a distributed database as a single logical database made up of physically separate databases connected by a network. Concurrency control methods are needed to coordinate simultaneous access by multiple users in a way that maintains consistency. The document then summarizes several prominent distributed concurrency control algorithms: distributed two-phase locking, wound-wait, basic timestamp ordering, and distributed optimistic concurrency control. It notes that these algorithms aim to preserve the ACID properties of atomicity, consistency, isolation, and durability for transactions operating across distributed databases.
CP 121_2.pptx about time to be implementflyinimohamed
The document discusses database concepts and architecture. It covers the three levels of data abstraction - physical, logical, and external levels. It also describes the three schema architecture, including the physical, conceptual, and external schemas. This architecture provides data independence and allows mappings between the different levels. The document also discusses different types of database systems such as single-user, multi-user, centralized, distributed, parallel, and client/server databases.
A distributed database is a collection of logically interrelated databases distributed over a computer network. It uses a distributed database management system (DDBMS) to manage the distributed database and make the distribution transparent to users. There are two main types of DDBMS - homogeneous and heterogeneous. Distributed databases improve availability, scalability and performance but introduce complexity in management, security and consistency compared to centralized databases. Transaction management and recovery are more challenging in distributed databases due to potential failures across multiple sites.
This document discusses distributed databases and their design. It defines a distributed database as a collection of logically related data distributed over a computer network and managed by a distributed database management system (D-DBMS). The document outlines distributed database types including homogeneous and heterogeneous, and covers key aspects of distributed database design such as data fragmentation, allocation, and replication.
A distributed database is a collection of logically interrelated databases distributed over a computer network. A distributed database management system manages the distributed database and provides transparent access to users. Distributed databases can be either homogeneous, with identical software and compatibility across sites, or heterogeneous, with different schemas, software, and data structures at different sites. Key challenges of distributed databases include concurrency control, recovery from failures, and maintaining data consistency across multiple locations.
A distributed database is a collection of logically interrelated databases distributed over a computer network. A distributed database management system (DDBMS) manages the distributed database and makes the distribution transparent to users. There are two main types of DDBMS - homogeneous and heterogeneous. Key characteristics of distributed databases include replication of fragments, shared logically related data across sites, and each site being controlled by a DBMS. Challenges include complex management, security, and increased storage requirements due to data replication.
This document describes LinkedIn's Databus, a distributed change data capture system that reliably captures and propagates changes from primary data stores. It has four main components - a fetcher that extracts changes from data sources, a log store that caches changes, a snapshot store that maintains a moving snapshot, and a subscription client that pulls changes. Databus uses a pull-based model where consumers pull changes based on a monotonically increasing system change number. It supports capturing transaction boundaries, commit order, and consistent states to preserve consistency from the data source.
Databases allow for the storage and organization of related data. A database contains tables that store data in rows and columns. A database management system (DBMS) helps define, construct, and manipulate the database. Relational databases follow a relational model and store data in related tables. Benefits of databases over file systems include reduced data redundancy, avoidance of data inconsistency, ability to share data among multiple users, and application of security restrictions. Transactions allow multiple database operations to be executed atomically as a single unit.
Week 2 Characteristics & Benefits of a Database & Types of Data Modelsoudesign
The document discusses characteristics and benefits of databases. It provides details on how databases can manipulate data through sorting, matching, linking, aggregating, skipping fields and calculating. It also describes common uses of databases such as storing data and metadata, supporting multiple users accessing the same data simultaneously, and managing access rights. Key characteristics of databases that are outlined include being self-describing through metadata, insulating data from programs, supporting multiple views, enabling data sharing, controlling redundancy, enforcing integrity constraints, restricting unauthorized access, and providing backup/recovery facilities.
This document provides instructions for using Microsoft PowerPoint. It begins with an introduction to PowerPoint and its features. It then covers how to open PowerPoint, choose slide formats, save presentations, undo mistakes, change backgrounds, add and format text, work with images, and format templates to apply changes to all slides. The document provides step-by-step explanations of how to perform common PowerPoint tasks in a concise yet thorough manner.
The document discusses contention networks, carrier sense multiple access (CSMA), components of routers, modular network interfaces in routers, differences between hubs, layer 2 switches and layer 3 switches, packet tunneling, shortest path routing, packet fragmentation, functions of routing processors, evolution of router construction, minimum spanning trees, routing protocols for mobile hosts, TCP/IP tunneling over ATM, distance vector routing, link state routing, hierarchical routing, ATM networks, creating ATM virtual circuits, segmentation and reassembly in ATM, internetworking using concatenated virtual circuits and connectionless internetworking, network properties, and an example of the TCP/IP protocol in action.
The document contains test questions about networking concepts including protocols, network layers, switching and routing technologies. It covers the TCP/IP model, data link layer protocols, switching fabrics in routers, and interior and exterior routing protocols. Sample questions test knowledge of autonomous systems, connection-oriented vs connectionless routing, shortest path algorithms, distance vector routing, and hierarchical routing. Problems are also included on error control methods, sliding window protocols, and packet fragmentation.
This document provides an overview of networking and internetworking concepts. It defines what a network is and some common network protocols like TCP/IP. It discusses how network speed is measured by bit rate and latency. It then covers local area networks, wide area networks, and the internet. The document explains the purpose of networks for file sharing, communication, and remote program execution. It also discusses network messaging and different network service models like the OSI reference model and TCP/IP model. Finally, it provides a simplified example of how the TCP/IP protocol functions to route a packet from a source to destination across multiple routers.
This document discusses distributed query processing. It begins by defining what a query and query processor are. It then outlines the main problems in query processing, characteristics of query processors, and layers of query processing. The key layers are query decomposition, data localization, global query optimization, and distributed execution. Query decomposition takes a query expressed on global relations and decomposes it into an algebraic query on global relations.
Here are the steps to construct B+ trees for the given key values with orders n=4 and n=6:
For n=4:
Root: 2 3 5 7
P1 P2 P3 P4
For n=6:
Root: 2 3 5 7 11 17
P1 P2 P3 P4 P5 P6
Leaf 1: 19 23 29 31
P1 P2 P3 P4
So in summary, for n=4 the B+ tree would have a single root node containing the keys 2, 3, 5, 7 and 4 pointers. For n=6, the B+ tree would have a root node containing the keys 2, 3
This document provides an overview of Google's web services and applications. It discusses how Google uses automated technology to index the web for its core search business. It also describes Google's range of cloud-based applications for productivity, mobile, media, and social interactions. Finally, it examines Google's developer tools and platforms like Google App Engine, and how developers can create and deploy web applications using Google's infrastructure.
This lecture covers an introduction to cloud computing. It discusses key topics like cloud types, architecture, services, platforms, security, and applications. Specifically, it defines cloud computing, compares delivery models like Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). It also discusses using major cloud platforms from Amazon Web Services, Google, and Microsoft and exploring concepts like virtualization, capacity planning, and establishing identity/security in the cloud. The lecture concludes by discussing mobile cloud integration and streaming media/video applications in cloud computing.
Parallelism involves executing multiple processes simultaneously using two or more processors. There are different types of parallelism including instruction level, job level, and program level. Parallelism is used in supercomputing to solve complex problems more quickly in fields like weather forecasting, climate modeling, engineering, and material science. Parallel computers can be classified based on whether they have a single or multiple instruction and data streams, including SISD, MISD, SIMD, and MIMD architectures. Shared memory parallel computers allow processors to access a global address space but can have conflicts when simultaneous writes occur, while message passing computers communicate via messages to avoid conflicts. Factors like software overhead and load balancing can limit the speedup achieved by parallel algorithms
Decision support systems and expert systems can help with decision making. Decision support systems provide data, models, and tools to help users analyze problems. They are used in industries like agriculture, tax planning, and website design. Expert systems emulate human expertise in specific domains using knowledge bases and inference engines. They are used in fields such as medical diagnosis, credit evaluation, and equipment maintenance. Both systems help improve decisions by standardizing processes and leveraging large amounts of information.
This document discusses strategic uses of information systems and how companies can gain competitive advantages through innovative uses of technology. It provides examples of initiatives companies can take such as reducing costs, creating new products/services, and establishing strategic information systems. JetBlue is presented as a case study of a company that gained significant competitive advantages through massive automation of processes and using information systems in strategic ways like paperless ticketing and flight planning. Their late entry into the airline industry allowed them to not be burdened by legacy systems and gain significant efficiencies.
Management Information Systems (MIS) is the study of people, technology, organizations and the relationships among them. MIS professionals help firms realize maximum benefit from investment in personnel, equipment, and business processes by creating information systems for data management and meeting the needs of managers, staff and customers. A management information system gives managers the information they need to make efficient and effective decisions by collecting, processing, storing and disseminating data.
Telecommunications technologies have improved business processes by enabling better communication, greater efficiency, and more flexible workforces. Networking allows for immediate data delivery and sharing over large distances. Emerging technologies like videoconferencing, wireless payments, and web-empowered commerce are changing how businesses operate and interact with customers. Issues around bandwidth, media, protocols, and security must be addressed for networks and telecommunications to continue developing efficiently.
This document provides an overview of management information systems (MIS). It defines MIS as a computer-based system that provides information to support decision-making. The goals of MIS are to regularly provide managers with information for routine operational control and better planning and organization. The document then discusses the roles of MIS in an organization, comparing it to the heart supplying blood, as it ensures appropriate data collection, processing, and distribution to various destinations according to their needs. Finally, it discusses the impact of MIS in making management of various functions like marketing and finance more efficient.
The document discusses planning and developing information systems. It describes key steps in planning like creating mission and vision statements, strategic and tactical plans, and budgets. Careful planning is necessary for successful enterprise system implementation. Development approaches include the traditional systems development life cycle (SDLC) process of analysis, design, implementation, and support or more agile methods. Analysis involves feasibility studies to determine if a system is needed. Design includes data modeling and testing. Implementation has conversion strategies to transition to the new system. Agile methods emphasize iterative development and user feedback.
This document discusses various web technologies including HTTP, HTML, XML, FTP, blogs, wikis, and podcasting. It then covers how these technologies enable different types of web-enabled businesses from business-to-business (B2B) and business-to-consumer (B2C) interactions. Specific B2B functions like exchanges, auctions, intranets and extranets are examined. The document concludes by stating that web technologies have become highly integrated into most business and customer activities, making it difficult to distinguish online vs offline commerce.
This document discusses the challenges of developing global information systems. It outlines technological barriers like differences in infrastructure, languages, and standards. It also discusses regulatory barriers involving tariffs and import/export laws. Cultural and economic differences between countries are challenges, like payment mechanisms, intellectual property laws, privacy laws, and respecting local customs. Managing projects across different time zones and political environments introduces additional complexity for multinational corporations developing global information systems.
This document discusses how information technology improves business functions and supply chain effectiveness and efficiency. It describes how IT systems help with customer relationship management, finance, supply chain management, shipping, market research, human resource management, and enterprise resource planning. These systems aim to improve productivity, optimize resources, and manage information more effectively to reduce costs and better achieve business goals. However, implementing complex ERP systems also presents challenges as they require customization and special tailoring for each organization.
The document discusses computer networks and the data link layer. It provides classifications of computer networks including PAN, LAN, MAN and WAN. It discusses the goals of computer networks which include resource sharing, reliability, cost savings, performance and communication. It then discusses point-to-point subnets and their possible topologies. Finally, it discusses the services provided by the data link layer, including encapsulation, frame synchronization, error control and logical link control.
This document discusses computer networks and their classification. It defines the goals of computer networks as resource sharing without regard to physical location. It classifies networks into personal, local, metropolitan and wide area networks. The document then discusses how computer networks enable communication and collaboration between employees through technologies like email, video conferencing, desktop sharing and e-commerce. It explains how networks allow businesses to place electronic orders and enhance efficiency.
বাংলাদেশের অর্থনৈতিক সমীক্ষা ২০২৪ [Bangladesh Economic Review 2024 Bangla.pdf] কম্পিউটার , ট্যাব ও স্মার্ট ফোন ভার্সন সহ সম্পূর্ণ বাংলা ই-বুক বা pdf বই " সুচিপত্র ...বুকমার্ক মেনু 🔖 ও হাইপার লিংক মেনু 📝👆 যুক্ত ..
আমাদের সবার জন্য খুব খুব গুরুত্বপূর্ণ একটি বই ..বিসিএস, ব্যাংক, ইউনিভার্সিটি ভর্তি ও যে কোন প্রতিযোগিতা মূলক পরীক্ষার জন্য এর খুব ইম্পরট্যান্ট একটি বিষয় ...তাছাড়া বাংলাদেশের সাম্প্রতিক যে কোন ডাটা বা তথ্য এই বইতে পাবেন ...
তাই একজন নাগরিক হিসাবে এই তথ্য গুলো আপনার জানা প্রয়োজন ...।
বিসিএস ও ব্যাংক এর লিখিত পরীক্ষা ...+এছাড়া মাধ্যমিক ও উচ্চমাধ্যমিকের স্টুডেন্টদের জন্য অনেক কাজে আসবে ...
The simplified electron and muon model, Oscillating Spacetime: The Foundation...RitikBhardwaj56
Discover the Simplified Electron and Muon Model: A New Wave-Based Approach to Understanding Particles delves into a groundbreaking theory that presents electrons and muons as rotating soliton waves within oscillating spacetime. Geared towards students, researchers, and science buffs, this book breaks down complex ideas into simple explanations. It covers topics such as electron waves, temporal dynamics, and the implications of this model on particle physics. With clear illustrations and easy-to-follow explanations, readers will gain a new outlook on the universe's fundamental nature.
Walmart Business+ and Spark Good for Nonprofits.pdfTechSoup
"Learn about all the ways Walmart supports nonprofit organizations.
You will hear from Liz Willett, the Head of Nonprofits, and hear about what Walmart is doing to help nonprofits, including Walmart Business and Spark Good. Walmart Business+ is a new offer for nonprofits that offers discounts and also streamlines nonprofits order and expense tracking, saving time and money.
The webinar may also give some examples on how nonprofits can best leverage Walmart Business+.
The event will cover the following::
Walmart Business + (https://business.walmart.com/plus) is a new shopping experience for nonprofits, schools, and local business customers that connects an exclusive online shopping experience to stores. Benefits include free delivery and shipping, a 'Spend Analytics” feature, special discounts, deals and tax-exempt shopping.
Special TechSoup offer for a free 180 days membership, and up to $150 in discounts on eligible orders.
Spark Good (walmart.com/sparkgood) is a charitable platform that enables nonprofits to receive donations directly from customers and associates.
Answers about how you can do more with Walmart!"
How to Build a Module in Odoo 17 Using the Scaffold MethodCeline George
Odoo provides an option for creating a module by using a single line command. By using this command the user can make a whole structure of a module. It is very easy for a beginner to make a module. There is no need to make each file manually. This slide will show how to create a module using the scaffold method.
A review of the growth of the Israel Genealogy Research Association Database Collection for the last 12 months. Our collection is now passed the 3 million mark and still growing. See which archives have contributed the most. See the different types of records we have, and which years have had records added. You can also see what we have for the future.
How to Make a Field Mandatory in Odoo 17Celine George
In Odoo, making a field required can be done through both Python code and XML views. When you set the required attribute to True in Python code, it makes the field required across all views where it's used. Conversely, when you set the required attribute in XML views, it makes the field required only in the context of that particular view.
How to Setup Warehouse & Location in Odoo 17 InventoryCeline George
In this slide, we'll explore how to set up warehouses and locations in Odoo 17 Inventory. This will help us manage our stock effectively, track inventory levels, and streamline warehouse operations.
How to Manage Your Lost Opportunities in Odoo 17 CRMCeline George
Odoo 17 CRM allows us to track why we lose sales opportunities with "Lost Reasons." This helps analyze our sales process and identify areas for improvement. Here's how to configure lost reasons in Odoo 17 CRM
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
How to Fix the Import Error in the Odoo 17Celine George
An import error occurs when a program fails to import a module or library, disrupting its execution. In languages like Python, this issue arises when the specified module cannot be found or accessed, hindering the program's functionality. Resolving import errors is crucial for maintaining smooth software operation and uninterrupted development processes.
Database System Architectures
Transaction Server System
A typical transaction-server system today consists of multiple processes accessing data in shared memory.
Server processes: These are processes that receive user queries (transactions), execute them, and send the results
back.
Lock manager process: This process implements lock manager functionality, which includes lock grant, lock release,
and deadlock detection.
Database writer process: There are one or more processes that output modified buffer blocks back to disk on a
continuous basis.
Log writer process: This process outputs log records from the log record buffer to stable storage.
Checkpoint process: This process performs periodic checkpoints. It consults the log to determine which transactions
need to be redone or undone.
Process monitor process: This process monitors other processes, and if any of them fails, it takes recovery actions
for the process.
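The lock grant/release cycle performed by the lock manager process can be sketched as a small in-process component. This is a minimal single-process illustration in Python, not the multi-process shared-memory implementation a real transaction server would use; the class and method names are hypothetical, and queuing and deadlock detection are omitted:

```python
import threading
from collections import defaultdict

class LockManager:
    """Minimal lock manager: shared (S) and exclusive (X) locks per data item."""
    def __init__(self):
        self._mutex = threading.Lock()
        self._holders = defaultdict(set)   # item -> set of transaction ids
        self._mode = {}                    # item -> "S" or "X"

    def try_lock(self, txn, item, mode):
        """Grant the lock if compatible with current holders, else refuse."""
        with self._mutex:
            held = self._mode.get(item)
            if held is None or (held == "S" and mode == "S"):
                self._holders[item].add(txn)
                self._mode[item] = mode
                return True
            return False   # conflict: a real lock manager would queue the request

    def release(self, txn, item):
        with self._mutex:
            self._holders[item].discard(txn)
            if not self._holders[item]:
                self._mode.pop(item, None)   # last holder gone: item is free

lm = LockManager()
print(lm.try_lock("T1", "A", "S"))   # True: first shared lock on A
print(lm.try_lock("T2", "A", "S"))   # True: shared locks are compatible
print(lm.try_lock("T3", "A", "X"))   # False: X conflicts with the held S locks
lm.release("T1", "A"); lm.release("T2", "A")
print(lm.try_lock("T3", "A", "X"))   # True: A is now free
```

A real server would run this logic in a dedicated lock manager process and add waiting queues plus deadlock detection, as described above.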
What is a Distributed Database System?
We define a distributed database as a collection of multiple, logically interrelated databases distributed over
a computer network. A distributed database management system (distributed DBMS) is then defined as the
software system that permits the management of the distributed database and makes the distribution
transparent to the users.
Peer-to-Peer Distributed Systems
Promises of DDBSs
• Transparent management of distributed and replicated data,
• Reliable access to data through distributed transactions,
• Improved performance and
• Easier system expansion.
Transparent Management of Distributed and Replicated Data
Reliability through Distributed Transactions
Improved Performance
Easier System Expansion
Complications Introduced by DDBS
1. Data may be replicated in a distributed environment. A distributed database can be designed so
that the entire database, or portions of it, reside at different sites of a computer network.
2. If some sites fail or if some communication links fail while an update is being executed, the system
must make sure that the effects will be reflected on the data residing at the failing or unreachable
sites as soon as the system can recover from the failure.
3. The exchange of messages and the additional computation required to achieve inter-site
coordination are a form of overhead that does not arise in centralized systems.
4. As data in a distributed DBMS are located at multiple sites, the probability of security lapses
increases. Further, all communications between different sites in a distributed DBMS are conveyed
through the network, so the underlying network has to be made secure to maintain system security.
5. Since each site cannot have instantaneous information on the actions currently being carried out at
the other sites, the synchronization of transactions on multiple sites is considerably harder than for
a centralized system.
Correctness Rules for Data Fragmentation
To ensure no loss of information and no redundancy of data, there are three different rules that must be considered
during fragmentation.
Completeness
If a relation instance R is decomposed into fragments R1, R2, . . . .Rn, each data item in R must appear in at least one
of the fragments. This rule ensures that no data is lost during fragmentation.
Reconstruction
If relation R is decomposed into fragments R1, R2, . . . .Rn, it must be possible to define a relational operation that
will reconstruct the relation R from fragments R1, R2, . . . .Rn. This rule ensures that constraints defined on the data
are preserved during data fragmentation.
Disjointness
If a relation R is decomposed into fragments R1, R2, . . . .Rn and if a data item is found in the fragment Ri, then it must
not appear in any other fragments. This rule ensures minimal data redundancy.
In case of vertical fragmentation, primary key attribute must be repeated to allow reconstruction. Therefore, in case
of vertical fragmentation, disjointness is defined only on non-primary key attributes of a relation.
Example (Horizontal Fragmentation)
P1: σ project-type = “inside” (Project)
P2: σ project-type = “abroad” (Project)
These horizontal fragments satisfy all the correctness rules of fragmentation as shown below.
Completeness: Each tuple in the relation Project appears either in fragment P1 or P2. Thus, it satisfies completeness
rule for fragmentation.
Reconstruction: The Project relation can be reconstructed from the horizontal fragments P1 and P2 by using the
union operation of relational algebra, which ensures the reconstruction rule.
Thus, P1 ∪ P2 = Project.
Disjointness: The fragments P1 and P2 are disjoint, since there can be no such project whose project type is both
“inside” and “abroad”.
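Treating relations as sets of tuples, the horizontal fragmentation above and its three correctness rules can be checked directly. This is a minimal Python sketch; the sample tuples are invented for illustration:

```python
# Project relation as a set of (project-id, project-type) tuples (sample data)
project = {(1, "inside"), (2, "abroad"), (3, "inside"), (4, "abroad")}

# Horizontal fragments via selection on project-type
p1 = {t for t in project if t[1] == "inside"}   # σ project-type = "inside" (Project)
p2 = {t for t in project if t[1] == "abroad"}   # σ project-type = "abroad" (Project)

# Completeness: every tuple of Project appears in some fragment
assert p1 | p2 == project
# Reconstruction: the union operation rebuilds the original relation
assert p1.union(p2) == project
# Disjointness: no tuple appears in more than one fragment
assert p1 & p2 == set()
print("all three correctness rules hold")
```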
Example (Vertical Fragmentation)
These vertical fragments also ensure the correctness rules of fragmentation as shown below.
Completeness: Each tuple in the relation Project appears either in fragment V1 or V2 which satisfies completeness
rule for fragmentation.
Reconstruction: The Project relation can be reconstructed from the vertical fragments V1 and V2 by using the
natural join operation of relational algebra, which ensures the reconstruction rule.
Thus, V1 ⋈ V2 = Project.
Disjointness: The fragments V1 and V2 are disjoint, except for the primary key project-id, which is repeated in both
fragments and is necessary for reconstruction.
Distributed Database System Design Issues
Distributed Database Design
Distributed Directory Management
Distributed Query Processing
Distributed Concurrency Control
Distributed Deadlock Management
Reliability of Distributed DBMS
Replication
Relationship among Problems
Components of a Distributed DBMS
Major two components:
User Processor: Handles the interaction with users and
Data Processor: Deals with the storage.
Multidatabase System (MDBS) Architecture
Multidatabase systems (MDBS) represent the case where individual DBMSs (whether distributed or not) are fully
autonomous and have no concept of cooperation;
They may not even “know” of each other’s existence or how to talk to each other.
MDBS Architecture
Fig.: MDBS Architecture with a GCS
[Figure omitted: each local site i has a local internal schema LISi and a local conceptual schema LCSi, with local external schemas LESi1, LESi2, LESi3 defined over it; global external schemas GES1, GES2, GES3 are defined over the global conceptual schema GCS.]
MDBS Architecture
1. Users of a local DBMS define their own views on the local database and do not need to change their
applications if they do not want to access data from another database. This is again an issue of
autonomy.
2. Designing the global conceptual schema in multidatabase systems involves the integration of either
the local conceptual schemas or the local external schemas.
3. Once the GCS has been designed, views over the global schema can be defined for users who require
global access. It is not necessary for the GES and GCS to be defined using the same data model and
language; whether they do or not determines whether the system is homogeneous or heterogeneous.
Functional Aspects Provided by Parallel Database Systems
Ideally, a parallel database system should have the following functional aspects.
High-performance: This can be obtained through several complementary solutions: database-oriented operating
system support, parallelism, optimization, and load balancing.
High-availability: Because a parallel database system consists of many similar components, it can exploit data
replication to increase database availability.
Extensibility: It is the ability of smooth expansion of the system by adding processing and storage power to the
system. Ideally, the parallel database system should provide two extensibility advantages:
Linear Speedup and
Linear Scaleup.
Linear Speedup refers to a linear increase in performance for a constant database size and linear increase in
processing and storage power.
Linear Scaleup refers to a sustained performance for a linear increase in both database size and processing and
storage power.
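The two extensibility metrics above reduce to simple ratios. A brief sketch, where the elapsed times are invented numbers chosen to show the ideal cases:

```python
def speedup(t_small_system, t_large_system):
    """Linear speedup: same database size on n times the resources
    should ideally run n times faster."""
    return t_small_system / t_large_system

def scaleup(t_small_problem, t_large_problem):
    """Linear scaleup: n times the database size on n times the resources
    should ideally take the same time (ratio 1)."""
    return t_small_problem / t_large_problem

# Ideal speedup: 4x the processors cut elapsed time from 100s to 25s
print(speedup(100, 25))    # 4.0
# Ideal scaleup: 4x the data on 4x the resources still takes 100s
print(scaleup(100, 100))   # 1.0
```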
Parallel Architectures
There are three basic parallel computer architectures depending on how main memory or disk is shared:
I. Shared-memory,
II. Shared-disk and
III. Shared-nothing.
Shared-Memory
In the shared-memory approach any processor has access to any memory module or disk unit through a fast
interconnect (e.g. a high-speed bus). All the processors are under the control of a single operating system.
Advantages: simplicity and load balancing
Problems: high cost, limited extensibility and low availability.
Example: XPRS, DBS3, and Volcano.
Components of Parallel DBMS Architecture
It has three major components or subsystems.
Session Manager: It performs the connections and disconnections between the client processes and the two other
subsystems.
Transaction Manager: It receives client transactions related to query compilation and execution. It can access the
database directory that holds all meta-information about data and programs. Depending on the transaction, it
activates the various compilation phases, triggers query execution, and returns the results as well as error codes to
the client application.
Data Manager: It provides all the low-level functions needed to run compiled queries in parallel.
Data Partitioning Techniques
There are three basic strategies for data partitioning:
• Round-robin,
• Hash and
• Range partitioning.
Round-robin partitioning is the simplest strategy. It ensures uniform data distribution. With n partitions, the ith
tuple in insertion order is assigned to partition (i mod n).
Hash partitioning applies a hash function to some attribute that yields the partition number. This strategy allows
exact-match queries on the selection attribute to be processed by exactly one node and all other queries to be
processed by all the nodes in parallel.
Range partitioning distributes tuples based on the value intervals of some attribute. It is well-suited for range
queries. However, range partitioning can result in high variation in partition size.
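The three strategies above can be sketched as simple placement functions (a minimal illustration; the partition count and range boundaries are assumed, not from the source):

```python
n = 4  # assumed number of partitions/nodes

def round_robin_partition(i: int) -> int:
    """The i-th tuple in insertion order goes to partition i mod n."""
    return i % n

def hash_partition(key) -> int:
    """A hash function on the partitioning attribute yields the partition;
    exact-match queries on this attribute touch exactly one node."""
    return hash(key) % n

def range_partition(value: int, boundaries=(100, 200, 300)) -> int:
    """Tuples are placed by the interval their attribute value falls in:
    (-inf,100), [100,200), [200,300), [300,+inf). Good for range queries,
    but skewed data can make partitions very unequal in size."""
    for p, upper in enumerate(boundaries):
        if value < upper:
            return p
    return len(boundaries)

print(round_robin_partition(7))   # 3
print(range_partition(150))       # 1
```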
Indexing and Hashing
Hash File Organization
In a hash file organization, we obtain the address of the disk block, also called the bucket containing
a desired record directly by computing a function on the search-key value of the record.
Let K denote the set of all search-key values, and let B denote the set of all bucket addresses. A hash
function h is a function from K to B.
To insert a record with search key Ki, we compute h(Ki), which gives the address of the bucket for
that record. Assume for now that there is space in the bucket to store the record. Then, the record
is stored in that bucket.
Hash File Organization: An Example
Let us choose a hash function for the account file using the search key branch_name.
Suppose we have 26 buckets and we define a hash function that maps names beginning with the
ith letter of the alphabet to the ith bucket.
This hash function has the virtue of simplicity, but it fails to provide a uniform distribution, since we
expect more branch names to begin with such letters as B and R than Q and X.
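The first-letter hash function from the example can be written as a one-liner (branch names here are illustrative; note how B-names and R-names pile into their buckets while a Q bucket stays empty):

```python
def h(branch_name: str) -> int:
    """Map a branch name to one of 26 buckets by its first letter:
    names beginning with the i-th letter go to bucket i (A=0, ..., Z=25)."""
    return ord(branch_name[0].upper()) - ord('A')

for name in ["Brighton", "Redwood", "Round Hill", "Mianus"]:
    print(name, "-> bucket", h(name))
# Brighton -> bucket 1; Redwood and Round Hill collide in bucket 17:
# simple, but not a uniform distribution.
```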
Hash Indices
Hashing can be used not only for file organization, but also for index-structure creation. We
construct a hash index as follows. We apply a hash function on a search key to identify a bucket,
and store the key and its associated pointers in the bucket.
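A minimal in-memory sketch of such a hash index (the bucket count, hash function, and record-pointer strings are assumed for illustration):

```python
NUM_BUCKETS = 8

def h(key: int) -> int:
    return key % NUM_BUCKETS          # illustrative hash function

# each bucket holds (search-key, record-pointer) pairs
buckets = [[] for _ in range(NUM_BUCKETS)]

def insert(key: int, record_ptr: str) -> None:
    """Hash the search key to a bucket and store the key with its pointer."""
    buckets[h(key)].append((key, record_ptr))

def lookup(key: int):
    """Return all record pointers stored for this search-key value."""
    return [ptr for (k, ptr) in buckets[h(key)] if k == key]

insert(215, "block7/slot2")
insert(223, "block9/slot0")    # 215 and 223 happen to share bucket 7
print(lookup(215))             # ['block7/slot2']
```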
DDBMS
Transparency
– It refers to the separation of the high-level semantics of a system from lower-level implementation
issues. In a distributed system, it hides the implementation details from users of the system.
– The user believes that he/she is working with a centralized database system and that all the
complexities of the distributed database system are hidden, i.e. transparent, to the user.
– Four main categories of transparencies:
• Distribution transparency
• Transaction transparency
• Performance transparency
• DBMS transparency
A Model for Transaction Management in DDBMS
– Access to the various data items in a distributed system is usually accomplished through
transactions, which must preserve the ACID properties. There are two types of transactions to
consider.
• Local transactions access and update data in only the local database.
• Global transactions access and update data in several local databases.
Ensuring the ACID properties of local transactions can be done easily. However, for global transactions, this task is
much more complicated, since several sites are participating in execution. A model for transaction management at
each site of a distributed system is shown below.
Fig. A Model for Transaction Management at each site in a DDBMS
– It consists of two sub-modules:
• Transaction Manager (TM) and
• Transaction Coordinator (TC)
Concurrency Control Anomalies
Different anomalies can arise due to concurrent access of data:
– Lost update anomaly – This occurs when a successfully completed update made by one
transaction is overwritten by another transaction.
– Uncommitted dependency – This problem occurs when one transaction allows other transactions
to read its data before it has committed and then decides to abort.
– Inconsistent analysis anomaly – The problem occurs when a transaction reads several values from
the database but a second transaction updates some of them during the execution of the first.
– Phantom read anomaly – This occurs when a transaction performs an operation on the
database based on a selection predicate while another transaction inserts new tuples satisfying that
predicate into the same database; re-evaluating the predicate then sees the new "phantom" tuples.
– Multiple-copy consistency problem – This occurs when data items are replicated and stored at
different sites. To maintain the consistency, when a replicated data item is updated at one site, all
other copies must be updated. Otherwise, the database becomes inconsistent.
Two-Phase Locking (2PL) Protocol
The 2PL protocol states that no transaction should acquire a lock after it releases one of its locks.
According to this protocol, the life time of each transaction is divided into two phases:
Growing phase and
Shrinking phase.
In growing phase, a transaction can obtain locks on data items and can access data items, but it can not release any
locks.
In the shrinking phase, a transaction can release locks but cannot acquire any new locks. Thus, the end of the
growing phase of a transaction marks the beginning of its shrinking phase. It is not necessary for each transaction
to acquire all its locks at once and then start processing. Normally, a transaction obtains some locks initially, does
some processing, and then requests the additional locks it requires. However, it never releases any lock until it has
reached a stage where no more locks are required. If lock conversion is allowed, then upgrades of locks can take
place only in the growing phase, whereas downgrades of locks can occur only in the shrinking phase.
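The two-phase rule can be sketched as a toy transaction object (a minimal illustration assuming exclusive locks only, no lock manager or blocking): the first release flips the transaction into its shrinking phase, after which any new acquisition is rejected.

```python
class TwoPhaseTransaction:
    def __init__(self, tid: str):
        self.tid = tid
        self.locks = set()
        self.shrinking = False    # becomes True at the first release

    def lock(self, item: str) -> None:
        """Acquire a lock; only legal while still in the growing phase."""
        if self.shrinking:
            raise RuntimeError(
                f"{self.tid}: cannot acquire '{item}' in shrinking phase")
        self.locks.add(item)

    def unlock(self, item: str) -> None:
        """Release a lock; the first release ends the growing phase."""
        self.shrinking = True
        self.locks.discard(item)

t = TwoPhaseTransaction("T1")
t.lock("x"); t.lock("y")    # growing phase
t.unlock("x")               # shrinking phase begins here
# t.lock("z") would now raise: forbidden under 2PL
```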
Distributed Deadlock Prevention Method
Wait-die is a non-preemptive deadlock prevention technique based on timestamp values of transactions:
In this technique, when one transaction is about to block and is waiting for a lock on a data item that is
already locked by another transaction, timestamp values of both the transactions are checked to give priority to the
older transaction. If a younger transaction is holding the lock on data item then the older transaction is allowed to
wait, but if an older transaction is holding the lock, the younger transaction is aborted and restarted with the same
timestamp value. This forces the wait-for graph to be directed from the older to the younger transactions, making
cyclic restarts impossible. For example, if the transaction Ti requests a lock on a data item that is already locked by
the transaction Tj, then Ti is permitted to wait only if Ti has a lower timestamp value than Tj. On the other hand, if Ti
is younger than Tj, then Ti is aborted and restarted with the same timestamp value.
Wound-Wait is an alternative preemptive deadlock prevention technique by which cyclic restarts can be avoided.
In this method, if a younger transaction requests a lock on a data item that is already held by an older
transaction, the younger transaction is allowed to wait until the older transaction releases the corresponding lock.
In this case, the wait-for graph flows from the younger to the older transactions, and cyclic restart is again avoided.
For instance, if the transaction Ti requests a lock on a data item that is already locked by the transaction Tj, then Ti
is permitted to wait only if Ti has a higher timestamp value than Tj, otherwise, the transaction Tj is aborted and the
lock is granted to the transaction Ti.
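The two timestamp rules above can be placed side by side in a small decision sketch (an illustration, not an implementation): Ti requests a lock held by Tj, and a smaller timestamp means an older transaction.

```python
def wait_die(ts_i: int, ts_j: int) -> str:
    """Non-preemptive: an older requester (smaller timestamp) waits;
    a younger requester dies and restarts with its original timestamp."""
    return "Ti waits" if ts_i < ts_j else "Ti aborts (restart, same ts)"

def wound_wait(ts_i: int, ts_j: int) -> str:
    """Preemptive: an older requester wounds (aborts) the younger holder
    and takes the lock; a younger requester waits."""
    if ts_i < ts_j:
        return "Tj aborted, lock granted to Ti"
    return "Ti waits"

print(wait_die(5, 9))     # older Ti -> "Ti waits"
print(wound_wait(5, 9))   # older Ti -> "Tj aborted, lock granted to Ti"
```

In both schemes the wait-for edges only ever run in one age direction, which is exactly why cyclic restarts are impossible.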
Centralized Deadlock detection
In the centralized deadlock detection method, a single site is chosen as the Deadlock Detection Coordinator (DDC)
for the entire distributed system. The DDC is responsible for constructing the global wait-for graph (GWFG) for the
system. Each lock manager in the distributed database periodically transmits its local wait-for graph (LWFG) to
the DDC. The DDC constructs the GWFG from these
LWFGs and checks for cycles in it. The occurrence of a global deadlock situation is detected if there are one or more
cycles in the GWFG. The DDC must break each cycle in the GWFG by selecting the transactions to be rolled back and
restarted to recover from a deadlock situation. The information regarding the transactions that are to be rolled back
and restarted must be transmitted to the corresponding lock managers by the deadlock detection coordinator.
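The DDC's core job can be sketched in a few lines (graphs are represented as simple adjacency dicts, an assumption for illustration): merge the LWFGs into one GWFG, then check it for a cycle.

```python
def build_gwfg(lwfgs):
    """Union the edge sets of the local wait-for graphs into the GWFG.
    Each graph maps a waiting transaction to the transactions it waits on."""
    gwfg = {}
    for lwfg in lwfgs:
        for waiter, holders in lwfg.items():
            gwfg.setdefault(waiter, set()).update(holders)
    return gwfg

def has_cycle(graph) -> bool:
    """Standard DFS cycle check on a directed graph: a cycle in the GWFG
    signals a (possibly global) deadlock."""
    visiting, done = set(), set()
    def dfs(node):
        if node in visiting:
            return True           # back edge -> cycle
        if node in done:
            return False
        visiting.add(node)
        if any(dfs(n) for n in graph.get(node, ())):
            return True
        visiting.remove(node)
        done.add(node)
        return False
    return any(dfs(n) for n in list(graph))

# Site 1: T1 waits for T2; Site 2: T2 waits for T1 -> global deadlock,
# visible only once the LWFGs are merged at the DDC.
site1 = {"T1": {"T2"}}
site2 = {"T2": {"T1"}}
print(has_cycle(build_gwfg([site1, site2])))   # True
```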
– The centralized deadlock detection approach is very simple, but it has several drawbacks.
– This method is less reliable, as the failure of the central site makes the deadlock detection
impossible.
– The communication cost is very high in this case, as all other sites in the distributed system must send
their LWFGs to the central site.
– Another disadvantage of centralized deadlock detection technique is that false detection of
deadlocks can occur, for which the deadlock recovery procedure may be initiated, although no
deadlock has occurred. In this method, unnecessary rollbacks and restarts of transactions may also
result owing to phantom deadlocks.