Fundamentals of Database Systems - Database System Concepts and Architecture, by Mustafa Kamel Mohammadi
In this chapter you will learn
DBMS evolution
Data model
Three schema architecture
DBMS language
DBMS interfaces
DBMS components
Classification of DBMS
A database is a collection of information that is organized so that it can easily be accessed, managed, and updated. In one view, databases can be classified according to types of content: bibliographic, full-text, numeric, and images.
Introduction: Question Bank
Q1. What are the advantages of a database system? Explain them briefly. Jan – Feb 2005, Jul 2007
Controlling Redundancy: Storing the same data multiple times leads to several problems.
First, there is a need to perform a single logical update multiple times.
Second, storage space is wasted when the same data is stored repeatedly.
Third, files that represent the same data may become inconsistent. This may happen because an update is applied to some of the files but not to others.
In the database approach, the views of different user groups are integrated during database design so that each logical data item, such as a name or birth date, is stored in only one place in the database. This prevents inconsistency & saves storage space.
1) Restricting Unauthorized Access: When multiple users share a database, the DBA can define authorization checks, such as passwords, to be carried out whenever access to sensitive data is attempted. Different checks can be established for each type of access (retrieve, modify, delete, etc.) to each piece of information in the database.
2) Persistent Storage for Objects & Data Structures: Objects in an object-oriented programming language exist only during program execution. An object-oriented database provides capabilities so that objects can be created to exist permanently (persist) & be shared by numerous programs. Hence object-oriented databases (OODBs) store persistent objects permanently on secondary storage & allow the sharing of these objects among multiple programs & applications.
3) Database Inferencing Using Deduction Rules: Database systems provide capabilities for defining deduction rules for inferring new information from the stored database facts. Such systems are called deductive database systems.
Example: There may be complex rules for determining when a student is on probation. These can be specified declaratively as deduction rules which, when executed, can determine all students on probation.
4) Providing Multiple User Interfaces: Because many types of users use the database, a DBMS should provide a variety of user interfaces. These include query languages for casual users, programming language interfaces for application programmers, forms & command codes for parametric users & menu-driven interfaces for stand-alone users.
5) Representing Complex Relationships among Data: A database may include numerous varieties of data that are interrelated in many ways. A DBMS must have the capability to represent a variety of complex relationships among the data easily & efficiently.
6) Enforcing Integrity Constraints: Most database applications have certain integrity constraints that must hold for the data. The simplest type of integrity constraint involves specifying a data type for each data item.
7) Providing Back-up & Recovery: A DBMS must provide facilities for recovering from hardware & software failures. The backup & recovery subsystem of the DBMS is responsible for recovery.
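The enforcement of integrity constraints can be sketched with a small example. The following uses Python's built-in sqlite3 module; the table and column names are purely illustrative, not taken from the text:

```python
import sqlite3

# In-memory database; the schema declares typical integrity constraints.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE student (
        usn   TEXT PRIMARY KEY,                         -- uniqueness constraint
        name  TEXT NOT NULL,                            -- value must be present
        marks INTEGER CHECK (marks BETWEEN 0 AND 100)   -- domain constraint
    )
""")
conn.execute("INSERT INTO student VALUES ('1RV01', 'Asha', 92)")

# The DBMS rejects any data that violates a declared constraint.
try:
    conn.execute("INSERT INTO student VALUES ('1RV02', 'Ravi', 150)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

rows = conn.execute("SELECT COUNT(*) FROM student").fetchone()[0]
```

The invalid row (marks = 150) is refused by the DBMS itself, so no application program has to re-check the rule.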
Q2: Define DBMS. Dec – Jan 2008
Ans: A database management system (DBMS) is a collection of programs that enables users to create and maintain a database. The DBMS is hence a general-purpose software system that facilitates the processes of defining, constructing, manipulating, and sharing databases for various applications. Defining a database involves specifying the data types, structures, and constraints for the data to be stored in the database. Constructing the database is the process of storing the data itself on some storage medium that is controlled by the DBMS. Manipulating a database includes functions such as querying the database to retrieve specific data, updating the database, and generating reports from the data.
Q3. What are the functions of a DBA? Jan – Feb 2006, Jul 2007, Jun-Jul 2008
Ans: Database administrators: In any organization where many persons use the same resources, there is a need for a chief administrator to oversee & manage these resources. In a database environment, the primary resource is the database itself & the secondary resource is the DBMS & related software. Administering these resources is the responsibility of the Database Administrator (DBA). The DBA is responsible for authorizing access to the database, for coordinating & monitoring its use & for acquiring software & hardware resources as needed. The DBA is accountable for problems such as breaches of security or poor system response time.
Q4. Name the different types of end users.
Ans: End users are the persons who access the database for querying, updating & generating reports. The following are the distinct categories of end users:
a) Casual End Users: They occasionally access the database, but they may need different information each time.
Ex: Middle & high-level managers or other occasional browsers.
b) Naïve or Parametric End Users: They constantly query & update the database using standard queries & updates, called canned transactions, that have been carefully programmed & tested.
Ex: Bank tellers checking balances & posting withdrawals & deposits; reservation clerks for airlines, hotels, etc.
c) Sophisticated End Users: This category includes engineers, scientists, business analysts & others who thoroughly familiarize themselves with the facilities of the DBMS so as to meet their requirements.
d) Stand-Alone Users: They maintain personal databases using ready-made program packages that provide easy-to-use menu- & graphics-based interfaces.
Ex: The user of a tax package that stores a variety of personal financial data for tax purposes.
Q5: What are the characteristics of the database approach?
Ans: Following are the characteristics of the database approach:
1. Self-describing nature of a database system
2. Insulation between programs & data; data abstraction
3. Support for multiple views of the data
4. Sharing of data & multiuser transaction processing
- Self-describing nature of a database system: A DBMS catalog stores the description of the database. The description is called meta-data. This allows the DBMS software to work with different databases.
- Insulation between programs and data: Called program-data independence. Allows changing data storage structures and operations without having to change the DBMS access programs.
- Data abstraction: A data model is used to hide storage details and present the users with a conceptual view of the database.
- Support of multiple views of the data: Each user may see a different view of the database, which describes only the data of interest to that user.
- Sharing of data and multi-user transaction processing: allows a set of concurrent users to retrieve and to update the database. Concurrency control within the DBMS guarantees that each transaction is either correctly executed or completely aborted.
OLTP (Online Transaction Processing) is a major part of database applications.
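The "correctly executed or completely aborted" guarantee can be sketched with Python's built-in sqlite3 module (the account table and the simulated failure are illustrative assumptions, not from the text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [(1, 100), (2, 50)])
conn.commit()

# A transfer is one logical transaction: both updates succeed or neither does.
try:
    with conn:  # the context manager commits on success, rolls back on error
        conn.execute("UPDATE account SET balance = balance - 80 WHERE id = 1")
        conn.execute("UPDATE account SET balance = balance + 80 WHERE id = 2")
        raise RuntimeError("simulated failure before commit")
except RuntimeError:
    pass

# The half-finished transfer was rolled back; the balances are unchanged.
balances = dict(conn.execute("SELECT id, balance FROM account"))
```

Because the failure struck before the commit, the DBMS aborts the whole transaction rather than leaving one account debited and the other untouched.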
Q6. With a neat block diagram, explain the 3-schema architecture of a DBMS. Jul – Aug 2005, Jul 2006
Ans: The following describes the 3-schema architecture of a DBMS. (The figure shows, top to bottom: end users working with external views 1..n at the external level; an external/conceptual mapping down to the conceptual schema at the conceptual level; a conceptual/internal mapping down to the internal schema at the internal level; and the stored database below.)
1. The internal level has an internal schema, which describes the physical storage structure of the database. The internal schema uses a physical data model and describes the complete details of data storage & access paths for the database.
2. The conceptual level has a conceptual schema, which describes the structure of the whole database for a community of users. The conceptual schema hides the details of the physical storage structures & describes entities, data types, relationships, user operations & constraints.
3. The external or view level includes a number of external schemas or user views. Each external schema describes the part of the database that a particular user group is interested in & hides the rest of the database from that user group. A high-level data model or an implementation data model can be used at this level.
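In relational systems, an external schema is typically realized as a view over the conceptual schema. A minimal sketch with Python's built-in sqlite3 module (the employee table and the idea that salary is sensitive are illustrative assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Conceptual schema: the structure of the whole database.
conn.execute("""
    CREATE TABLE employee (
        emp_id INTEGER PRIMARY KEY,
        name   TEXT,
        dept   TEXT,
        salary INTEGER        -- sensitive: hidden from most user groups
    )
""")
conn.execute("INSERT INTO employee VALUES (1, 'Asha', 'Sales', 50000)")

# External schema: a view exposing only the part one user group needs.
conn.execute("CREATE VIEW emp_directory AS SELECT name, dept FROM employee")

cur = conn.execute("SELECT * FROM emp_directory")
directory_columns = [d[0] for d in cur.description]
```

Users of `emp_directory` see only names and departments; the rest of the database is hidden from that user group.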
Q7: Define data independence. Also explain its types.
Ans: Data independence can be defined as the capacity to change the schema at one level of the database system without having to change the schema at the next higher level.
There are 2 types of data independence.
1. Logical Data Independence:
It is the capacity to change the conceptual schema without having to change external schemas or application programs. The conceptual schema may change to expand the database or to reduce the database. In either case, only the view definitions & the mappings need to be changed in a DBMS that supports logical data independence. Application programs that reference the external schema constructs work as before after the conceptual schema undergoes a logical reorganization.
2. Physical Data Independence:
It is the capacity to change the internal schema without having to change the conceptual & external schemas. Changes to the internal schema may be needed because some physical files had to be reorganized.
Ex: Creating an additional access structure to improve the performance of retrieval or update.
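Physical data independence is easy to see in practice: adding an access structure changes the internal schema, but a query written against the conceptual schema is untouched. A sketch with Python's built-in sqlite3 module (table and index names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (usn TEXT, name TEXT)")
conn.executemany("INSERT INTO student VALUES (?, ?)",
                 [("1RV01", "Asha"), ("1RV02", "Ravi")])

query = "SELECT name FROM student WHERE usn = '1RV02'"
before = conn.execute(query).fetchall()

# Change the internal schema: add an access structure (an index).
conn.execute("CREATE INDEX idx_usn ON student(usn)")

# The same query still works and returns the same answer;
# only the retrieval path inside the DBMS has changed.
after = conn.execute(query).fetchall()
```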
Q8: Define the terms i) DDL ii) DML iii) Host Language iv) Data Sublanguage
Ans:
Data Definition Language (DDL): DDL is used (by the DBA & by database designers) to define both schemas. The DBMS will have a DDL compiler which processes DDL statements in order to identify descriptions of the schema constructs & to store the schema description in the DBMS catalog.
Data Manipulation Language (DML): There are two main types of DMLs.
• A high-level or non-procedural DML can be used on its own to specify complex database operations in a concise manner. High-level DML statements can either be entered interactively from a terminal or be embedded in a general-purpose programming language. High-level DMLs can specify & retrieve many records in a single statement & are hence called set-at-a-time or set-oriented DMLs.
• A low-level or procedural DML must be embedded in a general-purpose programming language. This type of DML retrieves individual records from the database & processes each record separately. Because of this property they are also called record-at-a-time DMLs.
Data Sublanguage & Host Language: Whenever DML commands (whether high level or low level) are embedded in a general-purpose programming language, that language is called the host language & the DML is called the data sublanguage. [In newer DBMSs, such as object-oriented systems, the host & data sublanguages typically form an integrated language, such as C++.]
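These terms can be illustrated in one short program, using Python's built-in sqlite3 module. Here Python plays the role of the host language and SQL is the data sublanguage; the course table is an illustrative assumption:

```python
import sqlite3  # Python = host language; SQL = data sublanguage

conn = sqlite3.connect(":memory:")

# DDL: defines the schema, which the DBMS records in its catalog.
conn.execute("CREATE TABLE course (code TEXT PRIMARY KEY, title TEXT)")

# High-level (set-oriented) DML embedded in the host language:
# one statement form handles many records at a time.
conn.executemany("INSERT INTO course VALUES (?, ?)",
                 [("CS01", "Databases"), ("CS02", "Networks")])
titles = [row[0] for row in
          conn.execute("SELECT title FROM course ORDER BY code")]
```

The single SELECT retrieves a whole set of records at once, which is exactly what makes it a set-at-a-time DML.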
Q9: List out the various interfaces supported by DBMS Jun-Jul 2008
Ans:
Menu-Based Interfaces:
These interfaces present the user with lists of options called menus, which lead the user through the formulation of a request.
Graphical Interfaces:
A graphical interface displays a schema to the user in diagrammatic form. The user can then specify a query by manipulating the diagram. In many cases, graphical interfaces are combined with menus.
Forms-Based Interfaces:
These display a form to each user. Users can fill out all of the form entries to insert new data, or they fill out only certain entries, in which case the DBMS will retrieve matching data for the remaining entries. Many DBMSs have special languages called forms specification languages.
Natural Language Interfaces:
These interfaces accept requests written in English or some other language & attempt to understand them. A natural language interface usually has its own schema, which is similar to the database conceptual schema. The interface refers to the words in its schema, as well as to a set of standard words, in interpreting the request. If the interpretation is successful, the interface generates a high-level query corresponding to the natural language request & submits it to the DBMS for processing. Otherwise, the user is asked to rephrase the request.
Interfaces for Parametric Users:
Parametric users, such as bank tellers, often have a small set of operations that they must perform repeatedly. Here, function keys on a terminal are usually programmed to initiate various commands; this allows the parametric user to proceed with a minimal number of keystrokes.
Interfaces for the DBA:
Most database systems contain privileged commands that can be used only by the DBA staff. These include commands for creating accounts, setting system parameters, granting account authorization, changing a schema & reorganizing the storage structures of a database.
Q10: Explain the DBMS component modules along with a diagram. Jan – Feb 2005, Jun-Jul 2008
Ans: The typical DBMS component modules are as follows. (The figure shows: DBA staff issuing DDL statements & privileged commands; casual users entering interactive queries handled by the query compiler; application programmers writing application programs processed by a precompiler, a DML compiler & a host language compiler into compiled (canned) transactions run by parametric users; the DDL compiler storing schema descriptions in the system catalog / data dictionary; and the runtime database processor working with the stored data manager & the concurrency control / backup / recovery subsystems to access the stored database. The circles marked A to E denote accesses controlled by the stored data manager.)
• The database & the DBMS catalog are stored on disk. Access to disk is controlled by the OS, which schedules disk input/output.
• The stored data manager of the DBMS controls access to DBMS information stored on disk, whether it is part of the database or of the catalog. The dotted lines & circles marked A, B, C, D & E in the figure illustrate accesses that are under the control of this stored data manager.
• The DDL compiler processes schema definitions specified in the DDL & stores meta-data in the DBMS catalog.
• The runtime database processor handles database accesses at run time. It receives different operations & carries them out on the database.
• The query compiler handles high-level queries that are entered interactively. It parses & analyzes a query & then generates calls to the runtime processor for executing the request.
• The precompiler extracts DML commands from an application program written in a host programming language. These commands are sent to the DML compiler for compilation into object code for database access. The rest of the program is sent to the host language compiler. The object code for the DML commands & the rest of the program are linked, forming a canned transaction whose executable code includes calls to the runtime database processor.
Q11: Define the terms i) Data model ii) Schema iii) Instance. Jan – Feb 2005
Ans:
Data Model: A data model is a set of concepts that can be used to describe the structure of a database. The structure of a database includes the data types, relationships & constraints that should hold for the data.
Schema: The description of a database is called the database schema, or meta-data, which is specified during database design and is not expected to change frequently. When a schema is displayed diagrammatically, it is called a schema diagram.
Instance: The data in the database at a particular moment in time is called a database state, occurrence, instance or snapshot.
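The schema/instance distinction can be seen directly in a running DBMS. In the sketch below (Python's built-in sqlite3 module; the student table is illustrative), the schema lives in the catalog while the instance changes with every update:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (usn TEXT, name TEXT)")

# The schema (meta-data) is stored in the catalog and rarely changes.
schema = conn.execute(
    "SELECT sql FROM sqlite_master WHERE name = 'student'").fetchone()[0]

# The instance (database state) is the data at one moment in time,
# and it changes with every insert, delete, or update.
state_before = conn.execute("SELECT COUNT(*) FROM student").fetchone()[0]
conn.execute("INSERT INTO student VALUES ('1RV01', 'Asha')")
state_after = conn.execute("SELECT COUNT(*) FROM student").fetchone()[0]
```

The insert produced a new database state, but the schema recorded in the catalog is the same before and after.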
Q12: Name the different database utilities.
Ans:
1. Loading:
A loading utility is used to load existing data files, such as text files or sequential files, into the database. Usually the source format of the data file & the target database file structure are specified to the utility, which then automatically reformats the data & stores it in the database.
2. Backup:
A backup utility creates a backup copy of the database, which can be used to restore the database in case of any failure.
3. File Reorganization:
This utility can be used to reorganize a database file into a different file organization to improve performance.
4. Performance Monitoring:
Such a utility monitors database usage & provides statistics to the DBA. The DBA uses the statistics in making decisions such as whether or not to reorganize files to improve performance.
5. Data Dictionary System:
It stores information such as schemas, constraints, design decisions, application descriptions & user information. A combined catalog / data dictionary which can be accessed by both users & the DBMS is called a data directory or an active data dictionary. A data dictionary that can be accessed by users & the DBA, but not by the DBMS, is called a passive data dictionary.
6. Communication Facilities:
The DBMS can also be interfaced with communication software, whose function is to allow users at locations remote from the database system site to access the database through computer terminals, workstations, etc. These are connected to the database site through data communications hardware such as phone lines, long-haul networks or satellite communication devices. The integrated DBMS & data communication system is called a DB/DC system.
Q13: Write a note on classification of DBMS.
Ans:
• DBMSs are categorized as relational, network, hierarchical & object-oriented, based on the data model that the DBMS is using.
• The second criterion used to classify DBMSs is the number of users supported by the system. Single-user systems support only one user at a time, while multi-user systems support many users concurrently.
• A third criterion is the number of sites over which the database is distributed.
Centralized DBMSs: their data is stored at a single computer site.
A distributed DBMS (DDBMS) can have the actual database & DBMS software distributed over many sites, connected by a computer network.
Homogeneous DDBMSs use the same DBMS software at multiple sites.
• DBMSs can also be classified on the basis of the types of access path options available for storing files.
Ex: A DBMS based on inverted file structures.
• When a DBMS is designed & built for a specific application, it is called a special-purpose DBMS. It cannot be used for other applications.
Ex: Airline reservations, telephone directory systems, etc.
In such cases, the DBMS is also called an on-line transaction processing (OLTP) system.
Q14: Write a short note on data models.
Ans: A data model is a set of concepts that can be used to describe the structure of a database. The structure of a database includes the data types, relationships & constraints that should hold for the data. Most data models also include a set of basic operations for specifying retrievals & updates on the database, and behavior, which refers to specifying a set of valid user-defined operations that are allowed on the database.
Data models are categorized based on the types of concepts they provide to describe the database structure.
High-level or conceptual data models provide concepts that are close to the way many users perceive data. High-level models use concepts such as entities, attributes & relationships.
Low-level or physical data models provide concepts that describe the details of how data is stored in the computer. Physical data models describe how data is stored in the computer by representing information such as record formats, record orderings & access paths.
Representational or implementation data models provide concepts that may be understood by end users but that are not too far removed from the way data is organized within the computer. Representational data models hide some details of data storage (but can be implemented on a computer system in a direct way).
Representational data models include the 3 most widely used data models: relational, network & hierarchical. They represent data by using record structures & are hence sometimes called record-based data models.
Q15: Write a note on Actors on the scene.
Ans:
• Database administrators: responsible for authorizing access to the database, for coordinating and monitoring its use, acquiring software and hardware resources, controlling its use and monitoring the efficiency of operations.
• Database designers: responsible for defining the content, the structure, the constraints, and the functions or transactions against the database. They must communicate with the end-users and understand their needs.
• End-users: they use the data for queries and reports, and some of them actually update the database content.
Q16: What are the implications of using the database approach?
Ans:
- Potential for enforcing standards: this is very crucial for the success of database applications in large organizations. Standards refer to data item names, display formats, screens, report structures, meta-data (descriptions of data), etc.
- Reduced application development time: the incremental time to add each new application is reduced.
- Flexibility to change data structures: the database structure may evolve as new requirements are defined.
- Availability of up-to-date information: very important for on-line transaction systems such as airline, hotel & car reservation systems.
- Economies of scale: by consolidating data and applications across departments, wasteful overlap of resources and personnel can be avoided.
Q17: 2-tier & 3-tier architecture