The document discusses database design concepts including the Entity-Relationship (E-R) model, normalization, and features of relational database management systems (RDBMS). It begins by describing the objectives of E-R modeling, such as avoiding redundancy and incompleteness. It then explains key components of the E-R model, including entities, attributes, relationships, and keys, and shows how to draw E-R diagrams. The document also covers normalization up to third normal form, as well as important RDBMS features such as the ACID properties (atomicity, consistency, isolation, durability) that help ensure accuracy, completeness, and data integrity.
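As a concrete sketch of the update anomaly that normalization removes (the tables, columns, and data below are invented for illustration, not taken from the summarized document), a department name repeated across rows can be factored into its own table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Unnormalized: the department name is repeated for every student,
# so renaming a department would have to touch many rows (an update anomaly).
conn.executescript("""
CREATE TABLE student_flat (stud_id INTEGER, name TEXT, dept TEXT);
INSERT INTO student_flat VALUES (1, 'Alice', 'Computer Science');
INSERT INTO student_flat VALUES (2, 'Bob',   'Computer Science');
""")

# Normalized: the department is stored once and referenced by key.
conn.executescript("""
CREATE TABLE department (dept_id INTEGER PRIMARY KEY, dept TEXT);
CREATE TABLE student (stud_id INTEGER PRIMARY KEY, name TEXT, dept_id INTEGER);
INSERT INTO department VALUES (10, 'Computer Science');
INSERT INTO student VALUES (1, 'Alice', 10);
INSERT INTO student VALUES (2, 'Bob', 10);
""")

# A department rename is now a single-row update.
conn.execute("UPDATE department SET dept = 'CS' WHERE dept_id = 10")
rows = conn.execute("""SELECT s.name, d.dept FROM student s
    JOIN department d ON s.dept_id = d.dept_id ORDER BY s.stud_id""").fetchall()
print(rows)  # [('Alice', 'CS'), ('Bob', 'CS')]
```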
This document provides an overview of database management systems (DBMS). It discusses the objectives and features of DBMS, including organizing data in a structured way and storing data only once. Common applications of DBMS are also outlined, such as enterprise information systems, banking, universities, and telecommunications. The document then examines the purpose of using a DBMS to share and secure data. Key concepts like data models, database languages, and the relational database model are introduced at a high level.
The document provides an overview of database systems and their components. It discusses:
- The purpose of database systems is to solve problems that arise with file systems, such as data redundancy, difficulty of accessing data, and lack of integrity and security.
- Database systems include a collection of interrelated data, a set of programs to access the data called a database management system (DBMS), and database applications in domains like banking, airlines, education and more.
- Key components of database systems include data models, data definition and manipulation languages, transaction management, storage management, database administrators, and database users. The overall system has physical, logical and view levels of abstraction.
A database is a collection of logically related data organized for convenient access, usually by programs for specific purposes. A DBMS is software that allows users to define, construct and manipulate databases for various applications. The database and DBMS together form a database system. A DBMS provides advantages like reducing data redundancy and inconsistency, restricting unauthorized access, and enforcing data integrity and security.
The document discusses database management systems and data modeling. It begins by defining key terms like data, databases, database management systems, and data models. It then provides a brief history of database development from the 1960s to the 1980s. The rest of the document discusses database concepts in more detail, including components of a DBMS, types of database users, database administration responsibilities, data modeling techniques, and the evolution of different data models.
The document discusses database management systems and their advantages over traditional file systems. It covers key concepts such as:
1) Databases organize data into tables with rows and columns, allowing easier querying and manipulation than file systems, which store data in unstructured files.
2) Database management systems employ concepts like normalization, transactions, concurrency and security to maintain data integrity and consistency when multiple users are accessing the data simultaneously.
3) The logical design of a database is represented by its schema, while a database instance refers to the current state of the data stored in the database tables at a given time.
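The schema/instance distinction in point 3 can be made concrete with a minimal sketch (the table and data are invented for illustration):

```python
import sqlite3

# The schema is the logical design: table structure, column names, and types.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT)")

# The instance is the data actually stored at a given moment.
conn.execute("INSERT INTO student VALUES (1, 'Alice')")
conn.execute("INSERT INTO student VALUES (2, 'Bob')")

# The schema stays fixed while the instance changes with every insert or update.
rows = conn.execute("SELECT name FROM student ORDER BY id").fetchall()
print(rows)  # [('Alice',), ('Bob',)]
```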
The document discusses database management systems (DBMS). It defines a database as a collection of related data and describes a DBMS as software that enables users to create, maintain and share databases. It provides an example of a university database with files for students, courses, grades and prerequisites. It outlines key characteristics of the database approach such as self-describing nature, insulation between programs and data, support of multiple views, and sharing of data.
This document provides an introduction to database management systems (DBMS). It defines key terminology related to databases and discusses problems with manual databases. It describes the functions and advantages of DBMS, including data representation, transaction management, data sharing, and increased security. Examples of popular DBMS are provided, such as Oracle, Microsoft SQL Server, and MySQL. Database system architecture, data models, and the relational model are overviewed. Finally, entity relationship (ER) modeling is explained as a way to conceptualize data needs and design the database logically before implementation.
1. A database is a collection of data organized in a structured format. Examples of databases include attendance registers, bank accounts, shopping lists, resume collections, contact lists, and notes.
2. A database management system (DBMS) is software that allows users to create, access, manage and control databases. Common DBMS functions include database creation, querying, manipulation, and controlling access.
3. The main differences between a database and a DBMS are that a database refers to the collection of data itself, while a DBMS is the software that manages the database and allows users to perform tasks like querying and updating the data.
The document provides an overview of database management systems (DBMS). It discusses DBMS applications, why DBMS are used, different users of databases, data models and languages like SQL. It also summarizes key components of a DBMS including data storage, query processing, transaction management and database architecture.
1. The document discusses how data is organized in a database system using a hierarchy from the bit level up to files, records, fields, and databases.
2. It describes some problems with traditional file-based data storage like redundancy, inconsistency, and lack of flexibility. A database addresses these issues through centralization of data.
3. The key components of a database system are described as people, hardware, software, and data. The database management system (DBMS) acts as an interface between users, applications, and the stored data.
The document discusses database concepts including the advantages and disadvantages of flat file systems versus database systems, database design including normalization and entity relationship modeling, distributed databases including issues around concurrency and replication, and the role of accountants in ensuring data integrity through proper database design. Key topics include data redundancy, database modeling, normalization to avoid anomalies, and concurrency controls for distributed databases.
This document provides an overview of fundamentals of database design. It discusses what a database is, the difference between data and information, and the purpose of database systems. It also covers database definitions and fundamental building blocks like tables and records. Additionally, the document discusses selecting an appropriate database system, database development steps, and considerations for quality control and data entry.
This document provides an overview of database management systems (DBMS). It defines a DBMS as a collection of data and applications used to access and manage that data. The document then briefly discusses the history of DBMS development from early hierarchical models to today's dominant relational model. It describes the key purposes of using a DBMS, including reducing data redundancy and improving data integrity, security and consistency. The document outlines the main components and architecture of a DBMS, including its internal, conceptual and external levels. It also covers the advantages and disadvantages of using a DBMS, as well as common DBMS languages like SQL.
The document discusses different database concepts:
1) A database is a collection of organized data that can be easily retrieved, inserted, and deleted. Database management systems (DBMS) like MySQL and Oracle are software used to manage databases.
2) The two main data models are the relational model, which organizes data into tables and relations, and the object-oriented model, which represents data as objects with properties and methods.
3) DBMS provide advantages like data sharing, backup/recovery, security, and independence between data and applications. However, they also have disadvantages such as higher costs and complexity.
Data Models [DATABASE SYSTEMS: Design, Implementation, and Management] - Usman Tariq
In this PPT, you will learn:
• About data modeling and why data models are important
• About the basic data-modeling building blocks
• What business rules are and how they influence database design
• How the major data models evolved
• About emerging alternative data models and the needs they fulfill
• How data models can be classified by their level of abstraction
Author: Carlos Coronel | Steven Morris
The document provides an introduction to database management systems (DBMS). It can be summarized as follows:
1. A DBMS allows for the storage and retrieval of large amounts of related data in an organized manner. It removes data redundancy and allows for fast retrieval of data.
2. Key components of a DBMS include the database engine, data definition subsystem, data manipulation subsystem, application generation subsystem, and data administration subsystem.
3. A DBMS uses a data model to represent the organization of data in a database. Common data models include the entity-relationship model, object-oriented model, and relational model.
Introduction to DBMS (For College Seminars) - Naman Joshi
This presentation provides an overview of database management systems. It discusses what a database and DBMS are, and covers data models like relational, network, and hierarchical. It also discusses database concepts like data abstraction, views, keys, and advantages of using a DBMS like data independence and sharing data. The goal is to explain core DBMS concepts at a high level.
Introduction To Database Management System - cpjcollege
Database Management System (DBMS)
• Collection of interrelated data
• Set of programs to access the data
• DBMS contains information about a particular enterprise
• DBMS provides an environment that is both convenient and efficient to use
This document provides an introduction to database development and Microsoft Access. It defines key database terminology like database, table, fields, records, forms, queries, and reports. It explains that a database is a collection of organized data stored electronically. A database management system (DBMS) is software that allows users to access and manage the database. Microsoft Access is described as a relational database management system designed for home and small business use. The document outlines how to create tables and work with fields in a database.
The document discusses database management systems and distributed databases. It covers the problems with flat file data storage, how databases address these issues, database design concepts like normalization, and the advantages and challenges of distributed database systems. Distributed databases can be centralized, partitioned, or replicated across multiple sites to improve performance, but maintaining data consistency is challenging and requires concurrency control methods.
This document discusses data models and the three schema architecture of database management systems (DBMS). It describes the three levels of schemas in a DBMS - physical schema, conceptual/logical schema, and external schemas. The three schema architecture supports program-data independence and multiple user views of data by providing different levels of abstraction and independence between the schemas.
Database Models, Client-Server Architecture, Distributed Database and Classif... - Rubal Sagwal
Introduction to Data Models
-Hierarchical Model
-Network Model
-Relational Model
-Client/Server Architecture
Introduction to Distributed Database
Classification of DBMS
A database is a collection of data that can be used alone or combined to answer users' questions. A database management system (DBMS) provides programs to manage databases and control data access, and includes a query language. When designing a database, it is important to structure the data so that specific records can be easily accessed, the database can respond to different questions, minimal storage is used, and redundant data is avoided. Key concepts in database design include entities, attributes, records, primary keys, foreign keys, and relationships between tables.
The document provides an overview of database systems, including:
1) Database systems store and manage large amounts of related data and provide efficient access to that data. They solve problems with traditional file-based data storage like redundancy, data integrity, and concurrent access.
2) Databases are made up of structured data models like the relational model and object-oriented models. They include languages for defining, manipulating, and querying data.
3) Database management systems provide storage, query processing, transaction management, and an abstraction of the data through multiple levels including physical, logical and view levels.
The document discusses the Entity-Relationship (ER) model for database design. It describes the key concepts of the ER model including entity sets, relationship sets, attributes, keys, and cardinalities. The ER model uses entity sets to represent real-world objects, relationship sets to represent associations among entities, and attributes to describe the properties of entities and relationships. Relationships can be one-to-one, one-to-many, many-to-one, or many-to-many. ER diagrams provide a graphical representation of the ER model and can depict entities, relationships, attributes, and cardinality constraints.
The document discusses the entity-relationship (ER) model, which is a top-down approach for conceptual database design. The ER model represents real-world objects as entities and relationships between entities. An ER diagram visually shows entities, attributes, and relationships. The model has advantages such as mapping well to the relational model and being easy to understand. It allows communicating the database design to users and serving as a design plan for developers.
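As a sketch of how a one-to-many ER relationship maps onto the relational model (entity, column, and data names here are invented, not drawn from the summarized document), the "many" side carries a foreign key referencing the "one" side's primary key:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# One-to-many: one department has many students, so each student row
# stores the key of the department it belongs to.
conn.execute("CREATE TABLE department (dept_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE student (
    stud_id INTEGER PRIMARY KEY,
    name TEXT,
    dept_id INTEGER REFERENCES department(dept_id))""")

conn.execute("INSERT INTO department VALUES (10, 'Mathematics')")
conn.execute("INSERT INTO student VALUES (1, 'Alice', 10)")
conn.execute("INSERT INTO student VALUES (2, 'Bob', 10)")

# Joining recovers the relationship recorded by the foreign key.
rows = conn.execute("""SELECT s.name, d.name FROM student s
    JOIN department d ON s.dept_id = d.dept_id ORDER BY s.stud_id""").fetchall()
print(rows)  # [('Alice', 'Mathematics'), ('Bob', 'Mathematics')]
```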
This document discusses conceptual data modeling using the entity-relationship (ER) model. It defines key concepts of the ER model including entities, attributes, relationships, entity sets, relationship sets, keys, and ER diagrams. It explains how the ER model is used in the early conceptual design phase of database design to capture the essential data requirements and produce a conceptual schema that can be later mapped to a logical and physical database implementation.
The document discusses different types of data models used in database management systems including conceptual, logical, and physical data models. It describes conceptual data models as defining what the system contains, logical data models as defining how the system should be implemented regardless of the DBMS, and physical data models as describing how the system will be implemented using a specific DBMS. The document also discusses entity-relationship models and diagrams, relational models, keys used in databases like primary keys and foreign keys, and integrity constraints to maintain data validity.
The document discusses entity-relationship (ER) diagrams and database design. It defines key concepts in ER diagrams like entities, attributes, relationships and how they are represented. It explains how to start building an ER diagram by defining entities and relationships based on a narrative. Different types of relationships and how they are drawn are covered, along with cardinality, keys, and other symbols used in ER diagrams. The document provides an example of an ER diagram for a banking system and discusses how an ER diagram can be converted into a relational database with tables.
Week 4 The Relational Data Model & The Entity Relationship Data Model - oudesign
The document discusses the relational data model and relational databases. It explains that the relational model organizes data into tables with rows and columns, and was invented by Edgar Codd. The model uses keys to uniquely identify rows and relationships between tables to link related data. SQL is identified as the most commonly used language for querying and managing data in relational database systems.
Database design using the Entity-Relationship model proceeds in several phases:
1) Requirement analysis involves understanding the data to be stored, applications needed, and common operations.
2) Conceptual design builds an ER model that describes the data simply and matches user requirements.
3) Logical design converts the ER model into a relational database schema.
4) Physical design addresses indexing, clustering, and security access rules.
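Phase 3, the logical design step, can be sketched for a many-to-many relationship (the entity and column names are invented for illustration): each entity set becomes a table, and the relationship set becomes a junction table holding both keys.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Logical design: entity sets Student and Course become tables; the
# many-to-many "enrolled" relationship set becomes a junction table
# whose primary key combines the two foreign keys.
conn.executescript("""
CREATE TABLE student (stud_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE course  (course_id TEXT PRIMARY KEY, title TEXT);
CREATE TABLE enrolled (
    stud_id   INTEGER REFERENCES student(stud_id),
    course_id TEXT    REFERENCES course(course_id),
    PRIMARY KEY (stud_id, course_id)
);
INSERT INTO student VALUES (1, 'Alice');
INSERT INTO course  VALUES ('DB101', 'Databases');
INSERT INTO enrolled VALUES (1, 'DB101');
""")

rows = conn.execute("""SELECT s.name, c.title FROM enrolled e
    JOIN student s ON e.stud_id = s.stud_id
    JOIN course  c ON e.course_id = c.course_id""").fetchall()
print(rows)  # [('Alice', 'Databases')]
```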
The document discusses different data models including hierarchical, network, relational, and object-oriented models. It also describes entity-relationship (E-R) modeling which involves defining entities, attributes, and relationships between entities. Key aspects of E-R modeling covered include entity types, relationship types, cardinalities, keys, and converting an E-R diagram into a relational database schema. The document provides examples to illustrate concepts such as weak entities, roles, and design considerations for E-R diagrams.
The sole purpose of sharing these slides is to educate beginners in IT and Computer Science/Engineering. Credit goes to the referenced material and to CICRA Campus, Colombo 4, Sri Lanka, where I taught these in 2017.
This document describes Chapter 7 of the textbook "Database System Concepts" which covers the Entity-Relationship (ER) model. The chapter discusses the ER modeling process and concepts such as entities, relationships, attributes, and cardinalities. It explains how the ER model can be used to design a conceptual schema to represent an enterprise and its requirements. The chapter also covers advanced ER features, weak entities, and how to map an ER design to relational schemas and tables.
The document discusses database design using the entity-relationship (ER) model. It describes the design phases as initially characterizing user data needs, choosing a data model to translate requirements into a conceptual schema, specifying functional requirements, and performing logical and physical design. The ER model represents an enterprise using entities, relationships between entities, and attributes. Key aspects of the ER model discussed include entity sets, relationship sets, mapping cardinalities between entity sets, participation constraints, complex attribute types, and keys.
Data models can be record-based (hierarchical, network, relational), object-based (entity-relationship, semantic, functional, object-oriented), or physical (unifying model, frame memory). The entity-relationship model is a method to visualize data logically and independently of hardware. It facilitates database design by allowing specification of entity types, relationships between entities, and attributes of entities. The main concepts are entities, relationships between entities, and attributes of entities.
The document provides an overview of conceptual database design using entity-relationship (ER) modeling. It defines key concepts in ER diagrams like entities, attributes, relationships and their cardinalities. It explains how to model different relationship types like one-to-one, one-to-many and many-to-many. It also covers advanced topics such as weak entities, generalization, specialization and aggregation. The overall purpose is to illustrate how ER diagrams can be used to design databases by visually representing the entities, attributes, and relationships in a domain.
The document discusses conceptual data modeling using entity-relationship (ER) models. It defines key concepts in ER modeling such as entities, attributes, relationships, cardinalities, and participation constraints. Entities can have attributes and relationships with other entities. Relationships have cardinality constraints that specify how many entities can participate in a relationship, such as one-to-one, one-to-many, or many-to-many. Participation constraints specify whether an entity's participation in a relationship is mandatory or optional. Together, cardinalities and participation constraints specify the structural constraints of relationships in an ER model.
This document provides an introduction to Entity-Relationship (ER) data modeling. It describes the basic concepts of entities, attributes, relationships, and keys. It explains how ER diagrams can be used to graphically represent these concepts and the structure of a database. The document also covers entity types, relationship types, participation constraints, mapping cardinalities, weak entities, and how to represent these concepts in an ER diagram.
This document discusses the process of database design, including conceptual modeling using entity-relationship (ER) diagrams. It begins by outlining the initial requirements gathering and conceptual modeling phases. Next, it describes logical and physical design, which involve mapping the conceptual model to relational schemas and deciding on physical storage structures. The bulk of the document then focuses on concepts in ER modeling, including entities, attributes, relationships, relationship types, weak entities, and how to represent these graphically in an ER diagram. It provides examples to illustrate key ER modeling concepts and design issues.
The document provides information about database management systems and the relational database model. It discusses data models, entity relationship modeling, relational databases, normalization of database tables, and relational database design. Key topics covered include the entity relationship model, E-R diagrams, relationship sets, attributes, keys, normalization forms, and designing normalized database tables.
The document discusses database modeling, management, and development. It covers database design and modeling including conceptual, logical, and physical database design. It also discusses entity-relationship modeling including entities, attributes, relationships, keys, and constraints. Additionally, it covers Java database connectivity (JDBC) including the different types of JDBC drivers and how to access a database using JDBC.
The document discusses the process of database design and the entity-relationship (E-R) model. It covers the conceptual, logical, and physical design phases. It also explains the key concepts of the E-R model including entities, attributes, relationships, keys, and cardinality constraints. E-R diagrams provide a way to visually represent an enterprise schema using these basic E-R modeling elements and their relationships.
Dbms basics 02
1. Unit No. - II
DBMS
By Dr. Dhobale J V
Assistant Professor
IBS, IFHE, Hyderabad.
IBS Hyderabad 1
2. Objectives
E-R Diagram.
Features of RDBMS.
Normalization Process (up to 3NF).
Functional Dependencies.
Decomposition.
3. Database Design and the E-R Model
In designing a database schema, we must
ensure that we avoid two major pitfalls –
1. Redundancy
2. Incompleteness
4. Database Design and the E-R Model
The Entity-Relationship Model: The E-R Model
was developed to facilitate database design by
allowing specification of an enterprise schema
that represents the overall logical structure of
a database.
The E-R Model is very useful in mapping the
meaning and interactions of real-world
enterprises onto a conceptual schema.
5. Database Design and the E-R Model
The E-R data model employs three basic
concepts – entity sets, relationship sets and
attributes.
An entity is a “thing” or “object” in the real
world that is distinguishable from all other
objects.
Ex- student, instructor, book.
6. Database Design and the E-R Model
An entity has a set of properties, and the
values for some set of properties may uniquely
identify an entity.
Ex. – A person may have person_Id property
whose value uniquely identifies that person.
An entity set is a set of entities of the same
type that share the same properties.
7. Database Design and the E-R Model
An entity is represented by a set of attributes.
Attributes are the descriptive properties
possessed by each member of an entity set.
Each entity has a value for each of its
attributes.
9. Database Design and the E-R Model
Relationship: A relationship is an association
among several entities.
Ex.: Instructor Crick is the advisor of student Zhang.
A relationship set is a set of relationships of
the same type.
10. Database Design and the E-R Model
Consider the two entity sets instructor and
student
We define relationship advisor to denote the
association between instructor and student,
shown below -
12. Database Design and the E-R Model
Recursive Relationship – The same entity set
participates in a relationship set more than
once in different roles.
A relationship set may also have descriptive
attributes, e.g., the date on which an advisor
began advising a student.
14. Database Design and the E-R Model
The number of entity sets that participate in a
relationship set is the degree of the
relationship set.
A binary relationship set is of degree 2; & a
ternary relationship set is of degree 3.
15. Database Design and the E-R Model
Attributes: For each attribute, there is a set of
permitted values, called the domain, or value
set, of that attribute.
An attribute of an entity set is a function that
maps from the entity set into a domain.
Different types of attributes –
1. Simple attribute – An attribute which cannot
be divided into subparts is called a simple
attribute; Ex- Enrol_No.
16. Database Design and the E-R Model
Attributes:
Different types of attributes –
2. Composite attribute – The attribute which can
be divided into subparts is called composite
attribute;
Ex- name can be divided into subparts like
first_name, middle_name & last_name.
17. Database Design and the E-R Model
Attributes:
Different types of attributes –
3. Single-valued attribute – an attribute which
has a single value only.
Ex. Account_No.
4. Multi-valued attribute – an attribute which has
a set of values.
Ex. Phone_No.
18. Database Design and the E-R Model
Attributes:
Different types of attributes –
5. Derived attribute – The value for this type of
attribute can be derived from the values of
other related attributes or entities.
Ex. From DOB value we can derive age
attribute.
An attribute takes a null value when an entity
does not have a value for it.
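A derived attribute like age need not be stored at all; it can be computed from DOB whenever it is needed. A minimal Python sketch (the dates are illustrative):

```python
from datetime import date

def age(dob: date, today: date) -> int:
    # Subtract birth year, then correct by 1 if the birthday
    # has not yet occurred in the current year.
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

print(age(date(2000, 5, 20), date(2024, 5, 19)))  # 23: birthday not yet reached
print(age(date(2000, 5, 20), date(2024, 5, 20)))  # 24
```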
19. Database Design and the E-R Model
Constraints: An E-R enterprise schema may
define certain constraints to which the
contents of a database must conform.
Mapping Cardinalities/Cardinality ratios:
express the number of entities to which
another entity can be associated via a
relationship set.
For a binary relationship set R between entity
sets A and B, the mapping cardinality must
be one of the following –
20. Database Design and the E-R Model
Constraints:
Mapping Cardinalities/Cardinality ratios:
1. One-to-one – An entity in A is associated with
at most one entity in B, and an entity in B is
associated with at most one entity in A.
21. Database Design and the E-R Model
Constraints:
Mapping Cardinalities/Cardinality ratios:
2. One-to-many – An entity in A is associated
with any number (zero or more) of entities in
B. An entity in B, however, can be associated
with at most one entity in A.
22. Database Design and the E-R Model
Constraints:
Mapping Cardinalities/Cardinality ratios:
3. Many-to-one – An entity in A is associated
with at most one entity in B. An entity in B,
however, can be associated with any number
(zero or more) of entities in A.
23. Database Design and the E-R Model
Constraints:
Mapping Cardinalities/Cardinality ratios:
4. Many-to-many – An entity in A is associated
with any number (zero or more) of entities in
B, and an entity in B is associated with any
number (zero or more) of entities in A.
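A one-to-many cardinality is typically enforced in a relational schema by placing the key of the "one" side as a foreign key on the "many" side. A small sketch using Python's built-in sqlite3 module (table and column names are illustrative, not from the slides):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE instructor (inst_id INTEGER PRIMARY KEY, name TEXT)")
# One-to-many: each student row references at most one instructor,
# while one instructor may appear in many student rows.
con.execute("""CREATE TABLE student (
    stud_id INTEGER PRIMARY KEY,
    name    TEXT,
    advisor INTEGER REFERENCES instructor(inst_id))""")
con.execute("INSERT INTO instructor VALUES (1, 'Crick')")
con.executemany("INSERT INTO student VALUES (?, ?, ?)",
                [(10, 'Zhang', 1), (11, 'Shankar', 1)])
rows = con.execute("""SELECT i.name, COUNT(*) FROM student s
                      JOIN instructor i ON s.advisor = i.inst_id
                      GROUP BY i.name""").fetchall()
print(rows)  # [('Crick', 2)]
```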
24. Database Design and the E-R Model
Keys: The values of the attributes of an entity
must be such that they can uniquely identify
the entity.
A key for an entity is a set of attributes that
suffices to distinguish entities from each other.
25. Database Design and the E-R Model
Keys:
A super key of an entity set is a set of one or
more attributes whose values uniquely
determine each entity.
A candidate key of an entity set is a minimal
super key.
Social-security is a candidate key of customer.
Account-number is a candidate key of account.
Although several candidate keys may exist,
one of the candidate keys is selected to be the
primary key.
26. Database Design and the E-R Model
Keys:
The combination of primary keys of the
participating entity sets forms a candidate key
of a relationship set.
Must consider the mapping cardinality and the
semantics of the relationship set when selecting the
primary key.
(social-security, account-number) is the primary key
of depositor.
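The depositor example above can be sketched as a table whose primary key is the combination of the participating entity sets' keys. An illustrative sqlite3 sketch (assuming a many-to-many depositor relationship; the sample values are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The relationship set depositor is keyed by the combination of the
# participating entity sets' primary keys.
con.execute("""CREATE TABLE depositor (
    social_security TEXT,
    account_number  TEXT,
    PRIMARY KEY (social_security, account_number))""")
con.execute("INSERT INTO depositor VALUES ('111-22-3333', 'A-101')")
con.execute("INSERT INTO depositor VALUES ('111-22-3333', 'A-102')")  # same customer, new account: OK
try:
    con.execute("INSERT INTO depositor VALUES ('111-22-3333', 'A-101')")  # duplicate pair
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```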
27. Database Design and the E-R Model
Components of E-R Diagram:
Rectangles represent entity sets.
Ellipses represent attributes.
Diamonds represent relationship sets.
Lines link attributes to entity sets and entity sets to
relationship sets.
Double ellipses represent multivalued attributes.
Dashed ellipses denote derived attributes.
Primary key attributes are underlined.
28. Database Design and the E-R Model
Components of E-R Diagram:
29. Database Design and the E-R Model
Entity – Relationship Diagram:
32. Database Design and the E-R Model
Weak Entity Sets:
An entity set that does not have a primary key
is referred to as a weak entity set.
The existence of a weak entity set depends on
the existence of a strong entity set; it must
relate to the strong set via a one-to-many
relationship set.
The discriminator (or partial key) of a weak
entity set is the set of attributes that
distinguishes among all the entities of a weak
entity set.
33. Database Design and the E-R Model
Weak Entity Sets:
The primary key of a weak entity set is formed
by the primary key of the strong entity set on
which the weak entity set is existence
dependent, plus the weak entity set’s
discriminator.
We depict a weak entity set by double
rectangles.
We underline the discriminator of a weak entity
set with a dashed line.
34. Database Design and the E-R Model
Weak Entity Sets:
payment-number – discriminator of the
payment entity set
Primary key for payment – (loan-number,
payment-number)
35. Database Design and the E-R Model
Specialization:
Top-down design process; we designate
subgroupings within an entity set that are
distinctive from other entities in the set.
These subgroupings become lower-level entity
sets that have attributes or participate in
relationships that do not apply to the higher-
level entity set.
Depicted by a triangle component labeled ISA
(e.g., savings-account “is an” account).
37. Database Design and the E-R Model
Generalization:
A bottom-up design process – combine a
number of entity sets that share the same
features into a higher-level entity set
Specialization and generalization are simple
inversions of each other; they are represented
in an E-R diagram in the same way.
Attribute Inheritance – a lower-level entity
set inherits all the attributes and relationship
participation of the higher-level entity set to
which it is linked.
38. Database Design and the E-R Model
Design Constraints on a Generalization:
Constraint on which entities can be members
of a given lower-level entity set.
condition-defined
user-defined
Constraint on whether or not entities may
belong to more than one lower-level entity set
within a single generalization.
Disjoint
overlapping
39. Database Design and the E-R Model
Design Constraints on a Generalization:
Completeness constraint – specifies whether or not an
entity in the higher-level entity set must belong to at
least one of the lower-level entity sets within a
generalization.
Total
partial
41. Database Design and the E-R Model
Aggregation: Aggregation is a process in which
a relationship between two entities is treated
as a single entity.
Here the relationship between Center and
Course acts as an entity in a relationship with
Visitor.
42. Database Design and the E-R Model
Aggregation: Relationship sets borrower and
loan-officer represent the same information.
Eliminate this redundancy via aggregation:
Treat the relationship as an abstract entity.
Allows relationships between relationships.
Abstraction of a relationship into a new entity.
Without introducing redundancy, the resulting
diagram represents that:
A customer takes out a loan.
An employee may be a loan officer for a
customer-loan pair.
43. Database Design and the E-R Model
How to draw a basic ER diagram:
1. Purpose and scope: Define the purpose and
scope of what you’re analyzing or modeling.
2. Entities: Identify the entities that are
involved. When you’re ready, start drawing
them in rectangles (or your system’s choice
of shape) and labeling them as nouns.
44. Database Design and the E-R Model
How to draw a basic ER diagram:
3. Relationships: Determine how the entities are all
related. Draw lines between them to signify the
relationships and label them. Some entities may not
be related, and that’s fine. In different notation
systems, the relationship could be labelled in a
diamond, another rectangle or directly on top of the
connecting line.
4. Attributes: Layer in more detail by adding key
attributes of entities. Attributes are often shown as
ovals.
5. Cardinality: Show whether the relationship is
one-to-one, one-to-many or many-to-many.
45. Important Features of RDBMS
An RDBMS is a database management
system based on the relational model defined
by E. F. Codd.
Data is stored in the form of rows and
columns, and the system provides ACID
properties.
A transaction in a database system must
maintain Atomicity, Consistency, Isolation, and
Durability – commonly known as the ACID
properties – in order to ensure accuracy,
completeness, and data integrity.
46. Important Features of RDBMS
Atomicity:
Atomicity requires that each transaction be "all
or nothing": if one part of the transaction fails,
then the entire transaction fails, and the
database state is left unchanged.
An atomic system must guarantee atomicity in
each and every situation, including power
failures, errors and crashes.
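Atomicity can be observed directly with sqlite3, whose connection context manager commits on success and rolls back on an exception. A sketch simulating a failed transfer (account names and amounts are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (acct TEXT PRIMARY KEY, balance INTEGER)")
con.executemany("INSERT INTO account VALUES (?, ?)", [("A", 100), ("B", 50)])
con.commit()

try:
    with con:  # one transaction: commit on success, rollback on error
        con.execute("UPDATE account SET balance = balance - 70 WHERE acct = 'A'")
        raise RuntimeError("crash before the matching credit")  # simulate a failure mid-transfer
except RuntimeError:
    pass

# Atomicity: the debit was rolled back, so the database state is unchanged.
print(con.execute("SELECT balance FROM account WHERE acct = 'A'").fetchone())  # (100,)
```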
47. Important Features of RDBMS
Consistency:
The consistency property ensures that any
transaction will bring the database from one
valid state to another.
Any data written to the database must be valid
according to all defined rules, including
constraints, cascades, triggers, and any
combination thereof.
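One of the "defined rules" mentioned above can be expressed as a CHECK constraint; a write that would violate it is rejected, keeping the database in a valid state. An illustrative sqlite3 sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The CHECK constraint is one of the rules every consistent state must satisfy.
con.execute("CREATE TABLE account (acct TEXT PRIMARY KEY, balance INTEGER CHECK (balance >= 0))")
con.execute("INSERT INTO account VALUES ('A', 100)")
try:
    con.execute("UPDATE account SET balance = balance - 500 WHERE acct = 'A'")
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # the write would violate the rule, so it is refused
print(con.execute("SELECT balance FROM account").fetchone())  # (100,)
```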
48. Important Features of RDBMS
Isolation:
The isolation property ensures that the
concurrent execution of transactions results in
a system state that would be obtained if
transactions were executed sequentially, i.e.,
one after the other.
Providing isolation is the main goal of
concurrency control. Depending on the
concurrency control method, the effects of an
incomplete transaction might not even be
visible to another transaction.
49. Important Features of RDBMS
Durability:
The durability property ensures that once a
transaction has been committed, it will remain
so, even in the event of power loss, crashes,
or errors.
In a relational database, for instance, once a
group of SQL statements execute, the results
need to be stored permanently.
50. Normalization
Without normalization, it becomes difficult to
handle and update the database without
facing data loss.
Insertion, updation and deletion anomalies
are very frequent if the database is not
normalized.
To understand these anomalies let us take an
example of a Student table.
51. Normalization
Updation Anomaly: To update the address of a student
who occurs twice or more in the table, we will
have to update the S_Address column in all those rows,
else the data will become inconsistent.
52. Normalization
Insertion Anomaly: Suppose for a new
admission, we have the student id (S_id), name
and address of a student, but if the student has
not opted for any subjects yet then we have to
insert NULL there, leading to an insertion
anomaly.
Deletion Anomaly: If (S_id) 401 has only one
subject and temporarily drops it, when we
delete that row, the entire student record will be
deleted along with it.
53. Normalization
Normalization rules are divided into the
following normal forms:
1. First Normal Form
2. Second Normal Form
3. Third Normal Form
4. BCNF
54. Normalization
1. First Normal Form :
As per the rule of first normal form, an
attribute (column) of a table cannot hold
multiple values. It should hold only atomic
values.
Example: Suppose a company wants to store
the names and contact details of its
employees. It creates a table that looks like
this:
56. Normalization
1. First Normal Form:
Two employees (Jon & Lester) have two
mobile numbers each, so the company stored
them in the same field as you can see in the
table above.
This table is not in 1NF as the rule says
“each attribute of a table must have atomic
(single) values”; the emp_mobile values for
employees Jon & Lester violate that rule.
57. Normalization
1. First Normal Form:
To make the table comply with 1NF, we
should have the data like this:
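The 1NF fix can be sketched in Python: each comma-packed emp_mobile field becomes one row per (employee, mobile) pair. The sample rows are assumed for illustration (the slide's table itself did not survive extraction):

```python
# Hypothetical pre-1NF rows: emp_mobile packs several values into one field.
unnormalized = [
    (101, "Jon",    "8812121212,9900012222"),
    (102, "Lester", "9990000123,8123450987"),
    (103, "Ron",    "7778881212"),
]

# 1NF: one atomic value per field -- emit one row per (employee, mobile) pair.
normalized = [
    (emp_id, name, mobile)
    for emp_id, name, mobiles in unnormalized
    for mobile in mobiles.split(",")
]
for row in normalized:
    print(row)
```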
58. Functional Dependencies
The attributes of a table are said to be
dependent on each other when an attribute of
the table uniquely identifies another attribute of
the same table.
For example: Suppose we have a student
table with attributes: Stu_Id, Stu_Name,
Stu_Age.
Here Stu_Id attribute uniquely identifies the
Stu_Name attribute of student table because
if we know the student id we can tell the
student name associated with it.
59. Functional Dependencies
This is known as functional dependency and
can be written as Stu_Id->Stu_Name or in
words we can say Stu_Name is functionally
dependent on Stu_Id.
Formally:
If column A of a table uniquely identifies
column B of the same table, then it can be
represented as A->B (attribute B is
functionally dependent on attribute A).
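The definition above suggests a direct test: A->B holds in a relation instance when no two rows agree on A but disagree on B. A small checker (the sample rows are illustrative):

```python
def fd_holds(rows, lhs, rhs):
    # A functional dependency lhs -> rhs holds when no two rows
    # agree on lhs but disagree on rhs.
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[b] for b in rhs)
        if seen.setdefault(key, val) != val:
            return False  # same determinant, different dependent values
    return True

students = [
    {"Stu_Id": 1, "Stu_Name": "Ajeet",     "Stu_Age": 24},
    {"Stu_Id": 2, "Stu_Name": "Chaitanya", "Stu_Age": 25},
    {"Stu_Id": 3, "Stu_Name": "Ajeet",     "Stu_Age": 26},
]
print(fd_holds(students, ["Stu_Id"], ["Stu_Name"]))  # True
print(fd_holds(students, ["Stu_Name"], ["Stu_Id"]))  # False: two Ajeets, different ids
```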
61. Functional Dependencies
Types of Functional Dependencies:
1. Trivial functional dependency:
The dependency of an attribute on a set of attributes is
known as a trivial functional dependency if the set of
attributes includes that attribute.
Symbolically: A->B is a trivial functional dependency if
B is a subset of A.
The following dependencies are also trivial: A->A &
B->B.
62. Functional Dependencies
Types of Functional Dependencies:
1. Trivial functional dependency:
For example: Consider a table with two columns
Student_id and Student_Name.
{Student_Id, Student_Name} -> Student_Id is a trivial
functional dependency as Student_Id is a subset of
{Student_Id, Student_Name}.
That makes sense because if we know the values of
Student_Id and Student_Name then the value of
Student_Id can be uniquely determined.
Also, Student_Id -> Student_Id & Student_Name ->
Student_Name are trivial dependencies too.
63. Functional Dependencies
Types of Functional Dependencies:
2. Non-trivial functional dependency: If a functional
dependency X->Y holds true where Y is not a subset
of X, then this dependency is called a non-trivial
functional dependency.
64. Functional Dependencies
Types of Functional Dependencies:
2. Non-Trivial functional dependency:
For example:
An employee table with three attributes: emp_id,
emp_name, emp_address.
The following functional dependencies are non-trivial:
emp_id -> emp_name (emp_name is not a subset of
emp_id)
emp_id -> emp_address (emp_address is not a subset
of emp_id)
On the other hand, the following dependencies are
trivial:
{emp_id, emp_name} -> emp_name [emp_name is a
subset of {emp_id, emp_name}]
65. Functional Dependencies
Types of Functional Dependencies:
2. Non-trivial functional dependency:
Completely non-trivial FD:
If an FD X->Y holds true where the intersection of X
and Y is empty, then this dependency is said to be a
completely non-trivial functional dependency.
66. Functional Dependencies
Types of Functional Dependencies:
3. Multivalued dependency: A multivalued dependency
occurs when there is more than
one independent multivalued attribute in a table.
For example: Consider a bike manufacturing
company, which produces two colors (black and
white) in each model every year.
67. Functional Dependencies
Types of Functional Dependencies:
3. Multivalued dependency:
Here columns manuf_year and color are independent
of each other and dependent on bike_model. In this
case these two columns are said to be multivalued
dependent on bike_model.
68. Functional Dependencies
Types of Functional Dependencies:
3. Multivalued dependency:
These dependencies can be represented like this:
bike_model ->> manuf_year
bike_model ->> color
69. Functional Dependencies
Types of Functional Dependencies:
4. Transitive dependency: A functional dependency is
said to be transitive if it is indirectly formed by two
functional dependencies. For example,
X -> Z is a transitive dependency if the following three
functional dependencies hold true:
X -> Y
Y does not -> X
Y -> Z
Note: A transitive dependency can only occur in a
relation of three or more attributes. This dependency
helps us in normalizing the database to 3NF (Third
Normal Form).
71. Functional Dependencies
Types of Functional Dependencies:
4. Transitive dependency:
Example: Let’s take an example to understand it
better:
{Book} -> {Author} (if we know the book, we know the
author’s name)
{Author} does not -> {Book}
{Author} -> {Author_age}
Therefore, as per the rule of transitive dependency,
{Book} -> {Author_age} should hold; that makes sense
because if we know the book name we can know the
author’s age.
72. Normalization
2. Second normal form (2NF):
A table is said to be in 2NF if both the
following conditions hold:
1. Table is in 1NF (First normal form)
2. No non-prime attribute is dependent on a
proper subset of any candidate key of the table.
An attribute that is not part of any candidate
key is known as a non-prime attribute.
73. Normalization
2. Second normal form (2NF):
Ex. Suppose a school wants to store the data
of teachers and the subjects they teach. They
create a table that looks like this: since a
teacher can teach more than one subject,
the table can have multiple rows for the same
teacher.
74. Normalization
2. Second normal form (2NF):
Ex.
Candidate Keys: {teacher_id, subject}
Non prime attribute: teacher_age
75. Normalization
2. Second normal form (2NF):
The table is in 1NF because each attribute
has atomic values. However, it is not in 2NF
because the non-prime attribute teacher_age is
dependent on teacher_id alone, which is a
proper subset of the candidate key.
This violates the rule for 2NF as the rule says
“no non-prime attribute is dependent on a
proper subset of any candidate key of the
table”.
76. Normalization
2. Second normal form (2NF):
To make the table comply with 2NF, we can
break it into two tables like this:
teacher_details table:
77. Normalization
2. Second normal form (2NF):
To make the table comply with 2NF, we can
break it into two tables like this:
teacher_subject table:
Now the tables comply with second normal
form (2NF).
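The 2NF split above can be sketched as two sqlite3 tables: teacher_age lives only in teacher_details, keyed by teacher_id, while (teacher_id, subject) keys teacher_subject. Sample data is assumed:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE teacher_details (teacher_id INTEGER PRIMARY KEY, teacher_age INTEGER);
CREATE TABLE teacher_subject (
    teacher_id INTEGER REFERENCES teacher_details(teacher_id),
    subject    TEXT,
    PRIMARY KEY (teacher_id, subject));
""")
con.executemany("INSERT INTO teacher_details VALUES (?, ?)",
                [(111, 38), (222, 38), (333, 40)])
con.executemany("INSERT INTO teacher_subject VALUES (?, ?)",
                [(111, "Maths"), (111, "Physics"), (222, "Biology"),
                 (333, "Physics"), (333, "Chemistry")])
# teacher_age is now stored once per teacher; a join reconstructs the original view.
rows = con.execute("""SELECT s.teacher_id, s.subject, d.teacher_age
                      FROM teacher_subject s JOIN teacher_details d USING (teacher_id)
                      ORDER BY s.teacher_id, s.subject""").fetchall()
print(len(rows))  # 5
```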
78. Normalization
3. Third normal form (3NF): A table design is
said to be in 3NF if both the following
conditions hold:
1. Table must be in 2NF
2. Transitive functional dependency of non-
prime attribute on any super key should be
removed.
An attribute that is not part of any candidate
key is known as non-prime attribute.
79. Normalization
3. Third normal form (3NF): In other words
3NF can be explained like this: A table is in
3NF if it is in 2NF and for each functional
dependency X-> Y at least one of the
following conditions hold:
X is a super key of table
Y is a prime attribute of table
An attribute that is a part of one of the
candidate keys is known as prime attribute.
80. Normalization
3. Third normal form (3NF):
Example: Suppose a company wants to store
the complete address of each employee, they
create a table named employee_details that
looks like this:
81. Normalization
3. Third normal form (3NF):
Example:
Super keys: {emp_id}, {emp_id, emp_name},
{emp_id, emp_name, emp_zip}…so on
Candidate Keys: {emp_id}
Non-prime attributes: all attributes except
emp_id are non-prime as they are not part of
any candidate keys.
82. Normalization
3. Third normal form (3NF):
Example:
Here, emp_state, emp_city & emp_district are
dependent on emp_zip. And emp_zip is
dependent on emp_id, which makes the non-prime
attributes (emp_state, emp_city &
emp_district) transitively dependent on the super
key (emp_id). This violates the rule of 3NF.
To make this table comply with 3NF, we have
to break the table into two tables to remove
the transitive dependency:
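The 3NF split can be sketched the same way: the zip-dependent attributes move into their own table keyed by emp_zip, and a join recovers the full address. Table and sample values are illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE zip (emp_zip TEXT PRIMARY KEY,
                  emp_state TEXT, emp_city TEXT, emp_district TEXT);
CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, emp_name TEXT,
                       emp_zip TEXT REFERENCES zip(emp_zip));
""")
con.execute("INSERT INTO zip VALUES ('282005', 'UP', 'Agra', 'Dayal Bagh')")
con.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                [(1001, 'John', '282005'), (1002, 'Ajeet', '282005')])
# emp_state/emp_city/emp_district are stored once per zip, not once per employee.
row = con.execute("""SELECT e.emp_name, z.emp_city FROM employee e
                     JOIN zip z USING (emp_zip) WHERE e.emp_id = 1001""").fetchone()
print(row)  # ('John', 'Agra')
```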
85. Decomposition
A functional decomposition is the process of
breaking down the functions of an
organization into progressively greater (finer
and finer) levels of detail.
In decomposition, one function is described in
greater detail by a set of other supporting
functions.
86. Decomposition
The decomposition of a relation scheme R
consists of replacing the relation schema by
two or more relation schemas that each
contain a subset of the attributes of R and
together include all attributes in R.
Decomposition helps in eliminating some of
the problems of bad design such as
redundancy, inconsistencies and anomalies.
87. Decomposition
There are two types of decomposition :
1. Lossy Decomposition
2. Lossless Join Decomposition
88. Decomposition
1. Lossy Decomposition: The decomposition of
relation R into R1 & R2 is lossy when the join
of R1 & R2 does not yield the same
relation as in R.
One of the disadvantages of decomposition
into two or more relational schemes (or
tables) is that some information is lost during
retrieval of the original relation or table.
89. Decomposition
1. Lossy Decomposition: Consider that we have
a table STUDENT with three attributes: roll_no,
s_name and department.
This relation is decomposed into two relations,
S_name and Dept_name.
Roll_no S_name Dept
111 Parimal Computer
222 Soham Electrical
333 Parimal Electrical
91. Decomposition
1. Lossy Decomposition:
In a lossy decomposition, spurious tuples are
generated when a natural join is applied to
the relations in the decomposition.
The above decomposition is a bad
decomposition, or lossy decomposition.
Roll_no S_name Dept
111 Parimal Computer
222 Soham Electrical
333 Parimal Electrical
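The decomposed tables themselves did not survive extraction, so assume the split was R1(roll_no, s_name) and R2(s_name, dept), where the shared column s_name is not a key of either part. Joining on it then generates spurious tuples:

```python
# STUDENT relation from the slide.
student = {(111, "Parimal", "Computer"),
           (222, "Soham",   "Electrical"),
           (333, "Parimal", "Electrical")}

# Assumed lossy split: the shared column s_name is not a key of either part.
r1 = {(r, n) for r, n, d in student}   # (roll_no, s_name)
r2 = {(n, d) for r, n, d in student}   # (s_name, dept)

# Natural join on s_name.
joined = {(r, n, d) for r, n in r1 for n2, d in r2 if n == n2}
print(len(joined))               # 5 tuples, not the original 3
print(sorted(joined - student))  # the spurious tuples
```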
92. Decomposition
2. Lossless Join Decomposition: “The
decomposition of relation R into R1 and R2
is lossless when the join of R1 and R2 yields
the same relation as in R.”
A relational table is decomposed (or factored)
into two or more smaller tables in such a way
that the designer can capture the precise
content of the original table by joining the
decomposed parts. This is called
lossless-join (or non-additive join) decomposition.
93. Decomposition
2. Lossless Join Decomposition: This is also
referred to as non-additive decomposition.
A lossless-join decomposition is always
defined with respect to a specific set F of
dependencies.
Consider that we have a table STUDENT with
three attributes: roll_no, s_name and
department.
94. Decomposition
2. Lossless Join Decomposition:
This relation is decomposed into two relations,
S_name and Dept_name.
Roll_no S_name Dept_Name
111 Parimal Computer
222 Soham Electrical
333 Parimal Electrical
96. Decomposition
2. Lossless Join Decomposition:
Now, when these two relations are joined on
the common column 'roll_no', the resultant
relation will look like stu_joined.
In a lossless decomposition, no spurious
tuples are generated when a natural join is
applied to the relations in the decomposition.
Roll_no S_name Dept_Name
111 Parimal Computer
222 Soham Electrical
333 Parimal Electrical
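The slide's exact column split did not survive extraction, but the decomposition is lossless when the shared join column is a key of at least one part; here roll_no is kept in both projections, as the text describes. A sketch:

```python
student = {(111, "Parimal", "Computer"),
           (222, "Soham",   "Electrical"),
           (333, "Parimal", "Electrical")}

# Lossless split: the shared column roll_no is a key of both projections.
r1 = {(r, n) for r, n, d in student}   # (roll_no, s_name)
r2 = {(r, d) for r, n, d in student}   # (roll_no, dept)

# Natural join on roll_no reproduces STUDENT exactly -- no spurious tuples.
stu_joined = {(r, n, d) for r, n in r1 for r2_, d in r2 if r == r2_}
print(stu_joined == student)  # True
```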