The document discusses database design, including transforming entity-relationship diagrams into normalized relations, integrating different user views, choosing data storage formats, designing efficient database tables, file organization, and indexes. It covers key database concepts such as relations, primary keys, normalization, foreign keys, and data types. The goal of database design is to structure data in stable, normalized tables that are efficient for storage and access.
1. Copyright 2002 Prentice-Hall, Inc.
Modern Systems Analysis
and Design
Third Edition
Jeffrey A. Hoffer
Joey F. George
Joseph S. Valacich
Chapter 12
Designing Databases
2. Learning Objectives
Define each of the following database terms
Relation
Primary key
Normalization
Functional dependency
Foreign key
Referential integrity
Field
Data type
Null value
Denormalization
File organization
Index
Secondary key
3. Learning Objectives
Discuss the role of designing databases in
the analysis and design of an information
system
Learn how to transform an Entity-Relationship
(ER) Diagram into an equivalent set of well-
structured relations
Learn how to merge normalized relations
from separate user views into a consolidated
set of well-structured relations
4. Learning Objectives
Explain choices of storage formats for
database fields
Learn how to transform well-structured
relations into efficient database tables
Discuss use of different types of file
organizations to store database files
Discuss indexes and their purpose
5. Purpose of Database Design
Structure the data in stable structures, called
normalized tables
Not likely to change over time
Minimal redundancy
Develop a logical database design that
reflects actual data requirements
Develop a logical database design from which
a physical database design can be developed
6. Purpose of Database Design
Translate a relational database model
into a technical file and database
design that balances several
performance factors
Choose data storage technologies that
will efficiently, accurately and securely
process database activities
7. Process of Database Design
Logical Design
Based upon the conceptual data model
Four key steps
1. Develop a logical data model for each known user interface
for the application using normalization principles
2. Combine normalized data requirements from all user
interfaces into one consolidated logical database model
3. Translate the conceptual E-R data model for the application
into normalized data requirements
4. Compare the consolidated logical database design with the
translated E-R model and produce one final logical database
model for the application
8. Process of Database Design
Physical Design
Based upon results of logical database design
Key decisions
1. Choosing storage format for each attribute from the
logical database model
2. Grouping attributes from the logical database model
into physical records
3. Arranging related records in secondary memory (hard
disks and magnetic tapes) so that records can be
stored, retrieved and updated rapidly
4. Selecting media and structures for storing data to
make access more efficient
9. Deliverables and Outcomes
Logical database design must account
for every data element on a system
input or output
Normalized relations are the primary
deliverable
Physical database design results in
converting relations into files
10. Relational Database Model
Data represented as a set of related tables or
relations
Relation
A named, two-dimensional table of data. Each
relation consists of a set of named columns and
an arbitrary number of unnamed rows
Properties
Entries in cells are simple
Entries in columns are from the same set of values
Each row is unique
The sequence of columns can be interchanged without
changing the meaning or use of the relation
The rows may be interchanged or stored in any
sequence
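The properties above can be sketched with a short example. SQLite, via Python's built-in sqlite3 module, stands in for the DBMS; the EMPLOYEE relation and its columns are invented for illustration, not taken from the chapter.

```python
import sqlite3

# A relation: a named, two-dimensional table with named columns and
# unnamed rows. The primary key keeps each row unique.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employee (
        emp_id INTEGER PRIMARY KEY,  -- uniquely identifies each row
        name   TEXT,
        dept   TEXT
    )
""")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [(1, "Ada", "ENG"), (2, "Grace", "ENG")])

# Column and row order carry no meaning; any projection returns the
# same facts regardless of how the rows happen to be stored.
rows = conn.execute("SELECT name, dept FROM employee ORDER BY emp_id").fetchall()
print(rows)  # [('Ada', 'ENG'), ('Grace', 'ENG')]
```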
11. Relational Database Model
Well-Structured Relation
A relation that contains a minimum amount
of redundancy and allows users to insert,
modify and delete the rows without errors
or inconsistencies
12. Normalization
The process of converting complex
data structures into simple, stable data
structures
Second Normal Form (2NF)
Each nonprimary key attribute is
identified by the whole key (called full
functional dependency)
13. Normalization
Third Normal Form (3NF)
Nonprimary key attributes do not depend
on each other (called transitive
dependencies)
The result of normalization is that
every nonprimary key attribute
depends upon the whole primary key
14. Functional Dependencies and
Primary Keys
Functional Dependency
A particular relationship between two attributes.
For a given relation, attribute B is functionally
dependent on attribute A if, for every valid value
of A, that value of A uniquely determines the value
of B
Instances (or sample data) in a relation do not
prove the existence of a functional dependency
Knowledge of problem domain is most reliable
method for identifying functional dependency
Primary Key
An attribute whose value is unique across all
occurrences of a relation
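A hypothetical check of a functional dependency against sample rows. As the slide notes, sample data can only disprove a dependency, never prove it; the violates_fd helper and the rows below are invented for illustration.

```python
# Does attribute B appear functionally dependent on attribute A in these
# sample rows? A conflict disproves A -> B; no conflict proves nothing.
def violates_fd(rows, a, b):
    seen = {}
    for row in rows:
        if row[a] in seen and seen[row[a]] != row[b]:
            return True   # one A value determines two different B values
        seen[row[a]] = row[b]
    return False

orders = [
    {"cust_id": 1, "cust_name": "Ada"},
    {"cust_id": 1, "cust_name": "Ada"},
    {"cust_id": 2, "cust_name": "Grace"},
]
print(violates_fd(orders, "cust_id", "cust_name"))  # False
```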
15. Functional Dependencies and
Primary Keys
Second Normal Form (2NF)
A relation is in second normal form (2NF) if
any of the following conditions apply:
The primary key consists of only one attribute
No nonprimary key attributes exist in the
relation
Every nonprimary key attribute is functionally
dependent on the full set of primary key
attributes
16. Functional Dependencies and
Primary Keys
Conversion to second normal form
(2NF)
To convert a relation into 2NF, decompose
the relation into new relations using the
attributes, called determinants, that
determine other attributes
The determinants become the primary key
of the new relation
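A minimal sketch of the conversion, assuming a hypothetical ORDER_LINE relation with primary key (order_id, product_id) in which product_desc depends on product_id alone — a partial dependency that violates 2NF. The determinant product_id becomes the primary key of a new PRODUCT relation.

```python
# ORDER_LINE(order_id, product_id, product_desc, qty): product_desc is
# repeated for every order of the same product.
order_lines = [
    (101, "P1", "Widget", 2),
    (101, "P2", "Gadget", 1),
    (102, "P1", "Widget", 5),
]

# Decompose on the determinant product_id -> product_desc.
product = {(pid, desc) for (_, pid, desc, _) in order_lines}          # PRODUCT(product_id, product_desc)
order_line = [(oid, pid, qty) for (oid, pid, _, qty) in order_lines]  # ORDER_LINE(order_id, product_id, qty)

print(sorted(product))  # [('P1', 'Widget'), ('P2', 'Gadget')] -- description stored once
```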
17. Functional Dependencies and
Primary Keys
Third Normal Form (3NF)
A relation is in third normal form (3NF) if it
is in second normal form (2NF) and there
are no functional (transitive) dependencies
between two (or more) nonprimary key
attributes
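A sketch of removing a transitive dependency, assuming a hypothetical EMPLOYEE relation in which the nonprimary key attribute dept_id determines another nonprimary key attribute, dept_name.

```python
# EMPLOYEE(emp_id, dept_id, dept_name): dept_id -> dept_name is a
# dependency between two nonprimary key attributes, so this is not 3NF.
employees = [
    (1, "D1", "Engineering"),
    (2, "D1", "Engineering"),
    (3, "D2", "Sales"),
]

# Split the transitive dependency into its own relation.
department = {(d, name) for (_, d, name) in employees}  # DEPARTMENT(dept_id, dept_name)
employee = [(e, d) for (e, d, _) in employees]          # EMPLOYEE(emp_id, dept_id)

print(sorted(department))  # [('D1', 'Engineering'), ('D2', 'Sales')]
```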
18. Functional Dependencies and
Primary Keys
Foreign Key
An attribute that appears as a nonprimary key
attribute in one relation and as a primary key
attribute (or part of a primary key) in another
relation
Referential Integrity
An integrity constraint specifying that the value (or
existence) of an attribute in one relation depends
on the value (or existence) of the same attribute in
another relation
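Both concepts can be demonstrated in SQL. This sketch uses SQLite through Python's sqlite3 module (where foreign-key enforcement must be switched on with a pragma); the CUSTOMER and ORDERS tables are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.execute("CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        cust_id  INTEGER REFERENCES customer(cust_id)  -- foreign key
    )
""")
conn.execute("INSERT INTO customer VALUES (1, 'Ada')")
conn.execute("INSERT INTO orders VALUES (10, 1)")       # valid: customer 1 exists

try:
    conn.execute("INSERT INTO orders VALUES (11, 99)")  # no such customer
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # referential integrity blocks the orphan row
```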
19. Transforming E-R Diagrams
into Relations
It is useful to transform the conceptual
data model into a set of normalized
relations
Steps
Represent entities
Represent relationships
Normalize the relations
Merge the relations
20. Transforming E-R Diagrams
into Relations
Represent Entities
Each regular entity is transformed into a relation
The identifier of the entity type becomes the
primary key of the corresponding relation
The primary key must satisfy the following two
conditions
a. The value of the key must uniquely identify every row in
the relation
b. The key should be nonredundant
21. Transforming E-R Diagrams
into Relations
Represent Relationships
Binary 1:N Relationships
Add the primary key attribute (or attributes) of the entity
on the one side of the relationship as a foreign key in the
relation on the many side
The one side migrates to the many side
Binary or Unary 1:1
Three possible options
a. Add the primary key of A as a foreign key of B
b. Add the primary key of B as a foreign key of A
c. Both of the above
22. Transforming E-R Diagrams into
Relations
Represent Relationships (continued)
Binary and Higher M:N relationships
Create a new relation and include the primary keys of all
participating relations as the composite primary key of the new relation
Unary 1:N Relationships
Relationship between instances of a single entity type
Utilize a recursive foreign key
A foreign key in a relation that references the primary key
values of that same relation
Unary M:N Relationships
Create a separate relation
Primary key of new relation is a composite of two
attributes that both take their values from the same
primary key
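A sketch of the M:N rule in SQLite, assuming hypothetical STUDENT and COURSE entities. The new ENROLLMENT relation takes a composite primary key built from the primary keys of both participants.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE student (stu_id INTEGER PRIMARY KEY);
    CREATE TABLE course  (crs_id INTEGER PRIMARY KEY);
    CREATE TABLE enrollment (            -- the M:N relationship as a relation
        stu_id INTEGER REFERENCES student(stu_id),
        crs_id INTEGER REFERENCES course(crs_id),
        PRIMARY KEY (stu_id, crs_id)     -- composite key from both sides
    );
""")
conn.execute("INSERT INTO enrollment VALUES (1, 10)")
conn.execute("INSERT INTO enrollment VALUES (1, 20)")      # same student, new course: fine

try:
    conn.execute("INSERT INTO enrollment VALUES (1, 10)")  # duplicate pair
except sqlite3.IntegrityError:
    print("rejected: duplicate (stu_id, crs_id) pair")
```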
24. Transforming E-R Diagrams
into Relations
Merging Relations (View Integration)
Purpose is to remove redundant relations
View Integration Problems
Synonyms
Two different names used for the same attribute
When merging, get agreement from users on a single,
standard name
Homonyms
A single attribute name that is used for two or more
different attributes
Resolved by creating a new name
Dependencies between nonkeys
Dependencies may be created as a result of view
integration
In order to resolve, the new relation must be normalized
25. Physical File and Database
Design
The following information is required
Normalized relations, including volume estimates
Definitions of each attribute
Descriptions of where and when data are used,
entered, retrieved, deleted and updated
(including frequencies)
Expectations or requirements for response time
and data integrity
Descriptions of the technologies used for
implementing the files and database
26. Designing Fields
Field
The smallest unit of named application data recognized by system software
Each attribute from each relation will be represented as one or more fields
Choosing data types
Data Type
A coding scheme recognized by system software for representing
organizational data
Four objectives
Minimize storage space
Represent all possible values of the field
Improve data integrity of the field
Support all data manipulations desired on the field
Calculated fields
A field that can be derived from other database fields
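A calculated field can be derived at retrieval time rather than stored, which avoids redundancy at the cost of recomputation. A minimal SQLite sketch with an invented invoice_line table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Only the base fields are stored...
conn.execute("CREATE TABLE invoice_line (qty INTEGER, unit_price REAL)")
conn.execute("INSERT INTO invoice_line VALUES (3, 2.5)")

# ...the calculated field line_total is derived in the query.
total = conn.execute(
    "SELECT qty * unit_price AS line_total FROM invoice_line").fetchone()[0]
print(total)  # 7.5
```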
27. Methods of Controlling Data
Integrity
Default Value
A value a field will assume unless an explicit value is
entered for that field
Range Control
Limits range of values which can be entered into field
Referential Integrity
An integrity constraint specifying that the value (or
existence) of an attribute in one relation depends on the
value (or existence) of the same attribute in another relation
Null Value
A special field value, distinct from 0, blank, or any other
value, that indicates that the value for the field is missing or
otherwise unknown
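These controls map naturally onto SQL column constraints: DEFAULT for the default value, CHECK for range control, and NOT NULL to forbid null values. A sketch in SQLite with an invented product table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE product (
        prod_id  INTEGER PRIMARY KEY,
        status   TEXT NOT NULL DEFAULT 'active',          -- default value
        discount REAL CHECK (discount BETWEEN 0 AND 0.5)  -- range control
    )
""")
conn.execute("INSERT INTO product (prod_id, discount) VALUES (1, 0.1)")
row = conn.execute("SELECT status FROM product").fetchone()
print(row[0])  # 'active' -- the default was applied

try:
    conn.execute("INSERT INTO product (prod_id, discount) VALUES (2, 0.9)")
except sqlite3.IntegrityError:
    print("rejected: discount out of range")
```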
28. Designing Physical Tables
Relational database is a set of related tables
Physical Table
A named set of rows and columns that specifies
the fields in each row of the table
Design Goals
Efficient use of secondary storage (disk space)
Disks are divided into units that can be read in one
machine operation
Space is used most efficiently when the physical length
of a table row divides close to evenly into the storage unit
Efficient data processing
Data are most efficiently processed when stored next to
each other in secondary memory
29. Designing Physical Tables
Denormalization
The process of splitting or combining normalized
relations into physical tables based on affinity of
use of rows and fields
Partitioning
Capability to split a table into separate sections
Oracle 8i implements three types
Range
Hash
Composite
Optimizes certain operations at the expense of
others
30. Designing Physical Tables
Denormalization
Three common situations where
denormalization may be used
1. Two entities with a one-to-one relationship
2. A many-to-many relationship with nonkey
attributes
3. Reference data
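A sketch of situation 1, assuming hypothetical EMPLOYEE and PARKING entities in a one-to-one relationship that are combined into a single physical table so that retrieval needs no join:

```python
# Two normalized relations in a 1:1 relationship.
employee = [(1, "Ada"), (2, "Grace")]  # EMPLOYEE(emp_id, name)
parking  = {1: "A-12", 2: "B-07"}      # PARKING(emp_id, space)

# Denormalized physical table: EMPLOYEE(emp_id, name, parking_space).
# One row read now answers questions that previously required a join.
combined = [(eid, name, parking.get(eid)) for (eid, name) in employee]
print(combined)  # [(1, 'Ada', 'A-12'), (2, 'Grace', 'B-07')]
```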
31. Designing Physical Tables
Arranging Table Rows
Physical File
A named set of table rows stored in a contiguous
section of secondary memory
Each table may be a physical file or whole
database may be one file, depending on
database management software utilized
32. Designing Physical Tables
File Organization
A technique for physically arranging the records
of a file
Objectives for choosing file organization
1. Fast data retrieval
2. High throughput for processing transactions
3. Efficient use of storage space
4. Protection from failures or data loss
5. Minimizing need for reorganization
6. Accommodating growth
7. Security from unauthorized use
33. Designing Physical Tables
Types of File Organization
Sequential
The rows in the file are stored in sequence according to a
primary key value
Updating and adding records may require rewriting the file
Deleting records results in wasted space
Indexed
The rows are stored either sequentially or nonsequentially
and an index is created that allows software to locate
individual rows
Index
A table used to determine the location of rows in a file that
satisfy some condition
Secondary Index
Index based upon a combination of fields for which more than
one row may have same combination of values
34. Designing Physical Tables
Guidelines for choosing indexes
Specify a unique index for the primary key of each
table
Specify an index for foreign keys
Specify an index for nonkey fields that are
referenced in qualification, sorting and grouping
commands for the purpose of retrieving data
Hashed File Organization
The address for each row is determined using an
algorithm
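The index guidelines can be sketched in SQLite, where the primary key gets a unique index automatically and the other two indexes are created explicitly; the table and index names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        order_id   INTEGER PRIMARY KEY,  -- unique index comes with the key
        cust_id    INTEGER,              -- foreign key into a CUSTOMER table
        order_date TEXT                  -- nonkey field used in sorting
    )
""")
conn.execute("CREATE INDEX idx_orders_cust ON orders (cust_id)")
conn.execute("CREATE INDEX idx_orders_date ON orders (order_date)")

# The planner can now satisfy a qualification on cust_id via the index
# instead of scanning the whole file.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE cust_id = 7").fetchall()
print(plan[0][-1])
```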
36. Designing Controls for Files
Backup Techniques
Periodic backup of files
Transaction log or audit trail
Change log
Data Security Techniques
Coding or encrypting
User account management
Prohibiting users from working directly with the
data; users work with a copy, which updates the
files only after validation checks
37. Summary
Key Terms
Relation
Primary key
Normalization
Functional dependency
Foreign key
Referential integrity
Field
Data type
Denormalization
File organization
Index
Secondary key
38. Summary
Transforming E-R diagram into well-
structured relations
View integration
Storage formats for database fields
Efficient database table design
Efficient use of secondary storage
Data processing speed
File organization
Indexes