This document is an assignment submitted in partial fulfilment of the requirements for the degree of Master of Technology in Information Technology. It contains responses to 10 questions on advanced database management systems, together with a certification that the work was completed under supervision, an evaluation section, a table of contents, figures, and a bibliography.
Advanced DBMS
(Assignment –I)
Submitted in partial fulfilment of the requirements for the degree of
Master of Technology in Information Technology
by
Vijayananda D Mohire
(Enrolment No.921DMTE0113)
Information Technology Department
Karnataka State Open University
Manasagangotri, Mysore – 570006
Karnataka, India
(2009)
MT-14-I
CERTIFICATE
This is to certify that the Assignment-I entitled (Advanced DBMS, subject code: MT14), submitted by Vijayananda D Mohire, having Roll Number 921DMTE0113, for the partial fulfilment of the requirements of the Master of Technology in Information Technology degree of Karnataka State Open University, Mysore, embodies the bona fide work done by him under my supervision.
Place: ________________          Signature of the Internal Supervisor
                                 Name
Date: ________________           Designation
For Evaluation

Question Number    Maximum Marks    Marks Awarded    Comments, if any
1                  1
2                  1
3                  1
4                  1
5                  1
6                  1
7                  1
8                  1
9                  1
10                 1
TOTAL              10
Evaluator’s Name and Signature Date
Preface
This document has been prepared for the assignments of the M.Tech – IT, I Semester, and is intended mainly for their evaluation. I have made a sincere attempt to gather and study the best answers to the assignment questions and have attempted responses to each of them. I am confident that the evaluators will find this submission informative and will evaluate it based on the content provided.

For clarity and ease of use, a Table of Contents and an Evaluation section have been included to make navigation and the recording of marks easier. A list of references has been provided on the last page, in the Bibliography, giving the sources of information, both internal and external. Evaluators are welcome to provide comments against each response; suitable space has been provided at the end of each response.

I am grateful to the Infysys Academy, Koramangala, Bangalore, for making this a success. Many thanks for the timely help and attention in making this possible within the specified timeframe. Special thanks to Mr. Vivek and Mr. Prakash for their timely help and guidance.
Candidate’s Name and Signature Date
Question 1: What are the basics of DBMS?
Answer 1
A Database Management System (DBMS) is a software package designed to store and manage databases. The DBMS controls the organization, storage, retrieval, security and integrity of data in a database. It accepts requests from the application and instructs the operating system to transfer the appropriate data. The major DBMS vendors are Oracle, IBM, Microsoft and Sybase (see Oracle Database, DB2, SQL Server and ASE); MySQL is a very popular open-source product.
STRUCTURE OF DBMS (Goyal, 2008)
The DBMS acts as an interface between the user and the database. The user requests the DBMS to perform various operations (insert, delete, update and retrieval) on the database. The components of the DBMS perform these requested operations and provide the necessary data to the users. The various components of a DBMS are shown in Figure 1.
1. DDL Compiler - The Data Definition Language (DDL) compiler processes schema definitions specified in the DDL. It stores metadata such as the names of the files, the data items, the storage details of each file, mapping information, and constraints.
2. DML Compiler and Query Optimizer - DML commands such as insert, update, delete and retrieve from the application program are sent to the DML compiler for compilation into object code for database access. The object code is then optimized by the query optimizer, which determines the best way to execute the query, and sent to the data manager.
3. Data Manager - The Data Manager is the central software component of the DBMS, also known as the Database Control System.
4. Data Dictionary - The Data Dictionary is a repository of descriptions of the data in the database. It contains information about:
Figure 1: Structure of DBMS (Goyal, 2008)
• Data: the names of the tables, the names of the attributes of each table, the lengths of attributes, and the number of rows in each table.
• Relationships between database transactions and the data items referenced by them, which is useful in determining which transactions are affected when certain data definitions are changed.
• Constraints on data, i.e. the range of values permitted.
• Detailed information on the physical database design, such as storage structures, access paths, and file and record sizes.
• Access authorization: the description of database users, their responsibilities and their access rights.
• Usage statistics, such as the frequency of queries and transactions.
The data dictionary is used to control data integrity, database operation and accuracy. It is an important part of the DBMS.
5. Data Files - These contain the data portion of the database.
6. Compiled DML - The DML compiler converts high-level queries into low-level file access commands known as compiled DML.
7. End Users - These have already been discussed in a previous section.
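As a concrete illustration of the data dictionary, most DBMSs expose their catalog through ordinary queries. A minimal sketch using Python's built-in sqlite3 module (the employee table and its columns are invented for illustration):

```python
import sqlite3

# In-memory database with one hypothetical table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")

# SQLite's data dictionary is the sqlite_master catalog table: it records
# the name and defining DDL of every schema object.
for row in conn.execute("SELECT type, name, sql FROM sqlite_master"):
    print(row)

# PRAGMA table_info lists each column's name, type and constraints --
# the kind of metadata the DDL compiler stores in the dictionary.
for col in conn.execute("PRAGMA table_info(employee)"):
    print(col)
```

Other systems expose the same idea under different names, such as Oracle's data dictionary views or the SQL-standard INFORMATION_SCHEMA.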
Major Features of a DBMS
Data Security
The DBMS can prevent unauthorized users from viewing or updating the
database. Using passwords, users are allowed access to the entire database or a
subset of it known as a "subschema." For example, in an employee database,
some users may be able to view salaries while others may view only work history
and medical data.
Data Integrity
The DBMS can ensure that no more than one user can update the same record at
the same time. It can keep duplicate records out of the database; for example, no
two customers with the same customer number can be entered.
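The duplicate-record guarantee described above is typically enforced through a unique key. A small sketch in Python with sqlite3 (the customer table and values are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (cust_no INTEGER UNIQUE, name TEXT)")
conn.execute("INSERT INTO customer VALUES (100, 'Alice')")

# A second customer with the same customer number violates the
# UNIQUE constraint, so the DBMS rejects the insert.
try:
    conn.execute("INSERT INTO customer VALUES (100, 'Bob')")
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```

The application never has to check for duplicates itself; the constraint in the schema does it on every insert.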
Interactive Query
Most DBMSs provide query languages and report writers that let users
interactively interrogate the database and analyze its data. This important feature
gives users access to all management information as needed.
Interactive Data Entry and Updating
Many DBMSs provide a way to interactively enter and edit data, allowing you to
manage your own files and databases. However, interactive operation does not
leave an audit trail and does not provide the controls necessary in a large
organization. These controls must be programmed into the data entry and update
programs of the application.
A common misconception about desktop computer DBMSs is that complex business systems can be built with them without programming. Such systems can be developed, but programming is required; this is not the same as creating lists of data for your own record keeping.
Data Independence
With DBMSs, the details of the data structure are not stated in each application
program. The program asks the DBMS for data by field name; for example, a
coded equivalent of "give me customer name and balance due" would be sent to
the DBMS. Without a DBMS, the programmer must reserve space for the full
structure of the record in the program. Any change in data structure requires
changing all application programs.
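The "give me customer name and balance due" request above maps directly onto a query by column name; the program never states the record layout. A sketch with Python's sqlite3 (the schema is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # access columns by field name
conn.execute("CREATE TABLE customer (name TEXT, balance_due REAL, address TEXT)")
conn.execute("INSERT INTO customer VALUES ('Alice', 250.0, '12 Main St')")

# The application names only the fields it needs; adding or reordering
# other columns in the table does not require changing this code.
row = conn.execute("SELECT name, balance_due FROM customer").fetchone()
print(row["name"], row["balance_due"])
```

Without a DBMS, the equivalent program would have to hard-code the full record structure and change whenever the structure changed.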
Evaluator’s Comments if any:
Question 2: What is information integration?
Answer 2
Information integration is an important feature of the Oracle9i database. Oracle provides many features that allow customers to integrate their data synchronously and asynchronously, including Oracle Streams, Oracle Advanced Queuing, replication and distributed SQL. Oracle also provides gateways to non-Oracle database servers, providing seamless interoperation in heterogeneous environments.
Asynchronous integration with Oracle Streams
Oracle Streams enables the propagation and management of data, transactions and events in a data stream within a database, or from one database to another. The stream routes published information to subscribed destinations. The result is a feature that provides greater functionality and flexibility than traditional solutions for capturing and managing events and sharing them with other databases and applications. Oracle Streams provides both message queuing and replication functionality within the Oracle database. To simplify the deployment of these specialized information integration solutions, Oracle provides customized APIs and tools for message queuing, replication and data protection.
These include:
• Advanced Queuing for message queuing
• Replication with Oracle Streams
Asynchronous integration with Oracle Advanced Replication
Oracle also includes Advanced Replication, a powerful replication feature first introduced in Oracle 7. Advanced Replication is useful for replicating objects between the Oracle9i database and older versions of the Oracle server. It also provides Oracle materialized views, a replication solution targeted at mass deployments and disconnected replicas.
Synchronous Integration with distributed SQL
Oracle distributed SQL enables synchronous access from one Oracle database to other Oracle or non-Oracle databases. Distributed SQL supports location independence, making remote objects appear local and facilitating the movement of objects between databases without application code changes. Distributed SQL supports both queries and DML operations, and can intelligently optimize execution plans to access data in the most efficient manner.
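Oracle's distributed SQL and database links are product-specific, but the underlying idea of making a remote object appear local can be sketched, as a loose analogy, with SQLite's ATTACH DATABASE, which lets one connection query two databases as if they were one (file and table names are made up):

```python
import os
import sqlite3
import tempfile

# Create a second, "remote" database file (hypothetical name).
remote_path = os.path.join(tempfile.mkdtemp(), "remote.db")
remote = sqlite3.connect(remote_path)
remote.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
remote.execute("INSERT INTO orders VALUES (1, 99.5)")
remote.commit()
remote.close()

# The "local" database attaches the remote one; its tables can then be
# queried under a qualified name, much as a database link makes a
# remote table addressable from a local session.
local = sqlite3.connect(":memory:")
local.execute("ATTACH DATABASE ? AS remote_db", (remote_path,))
total = local.execute("SELECT SUM(amount) FROM remote_db.orders").fetchone()[0]
print(total)
```

The application code issues one query; where the data physically lives is hidden behind the qualified name.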
Evaluator’s Comments if any:
Question 3: What is the meaning of extended ER models?
Answer 3
• The Extended Entity-Relationship (EER) model is a conceptual (or semantic) data model, capable of describing the data requirements for a new information system in a direct and easy-to-understand graphical notation.
• Data requirements for a database are described in terms of a conceptual schema, using the EER model.
• EER schemata are comparable to UML class diagrams.
Figure 2: Extended ER Model Constructs (Mylopoulos, 2004)
The EER model mainly consists of entity supertypes, entity subtypes and entity clustering, which are modeled in an ER diagram.
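One common way to realize EER supertypes and subtypes in a relational schema is a shared primary key, with the subtype table referencing the supertype. A sketch in Python with sqlite3 (the vehicle/car entities are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Supertype: attributes common to every entity in the hierarchy.
conn.execute("CREATE TABLE vehicle (vid INTEGER PRIMARY KEY, make TEXT)")
# Subtype: shares the supertype's key and adds its own attributes.
conn.execute("""CREATE TABLE car (
    vid INTEGER PRIMARY KEY REFERENCES vehicle(vid),
    num_doors INTEGER)""")

conn.execute("INSERT INTO vehicle VALUES (1, 'Tata')")
conn.execute("INSERT INTO car VALUES (1, 4)")

# Joining on the shared key reassembles the complete subtype entity.
row = conn.execute("""SELECT v.make, c.num_doors
                      FROM vehicle v JOIN car c ON v.vid = c.vid""").fetchone()
print(row)
```

Other mappings exist (one table per subtype, or a single table with a discriminator column); which to choose depends on how disjoint and complete the subtypes are.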
Evaluator’s Comments if any:
Question 4: What are client-server systems?
Answer 4 (one response is similar to Answer 10; an alternate is described below)
Alternate response:
Client/Server technology is the computer architecture used in almost all
automated library systems now being offered to libraries. (KSOU, 2009)
Figure 3: Client-Server Architecture
This is a computer architecture that divides functions into client and server subsystems, with a standard communication method to facilitate the sharing of information between them. Among the characteristics of a client/server architecture, the following are the key ones:
• The client and server can be distinguished from one another by the
differences in tasks they perform.
• The client and server usually operate on different computer platforms
• Either the client or the server may be upgraded without affecting the
other
• Clients may connect to one or more servers; servers may connect to multiple clients concurrently. Clients always initiate the dialogue by requesting a service
• Client/server is most easily differentiated from hierarchical processing, which used a host and slave, by the way a PC functions within a system. In client/server the PC-based client communicates with the server as a computer; in hierarchical processing the PC emulates a dumb terminal to communicate with the host. In client/server the client controls part of the activity, but in hierarchical processing the host controls all activities. A client PC almost always does the following in a client/server environment: screen handling, menu or command interpretation, data entry, help processing, and error recovery.
The dividing line between the client and server can be anywhere along a broad continuum: at one end, only the user interface has been moved onto the client; at the other, almost all applications have been moved onto the client and the database may be distributed.
Evaluator’s Comments if any:
Question 5 What is normalization?
Answer 5
In the field of relational database design, normalization (Anonymous, 2009) is a
systematic way of ensuring that a database structure is suitable for general-
purpose querying and free of certain undesirable characteristics—insertion,
update, and deletion anomalies—that could lead to a loss of data integrity. E.F.
Codd, the inventor of the relational model, introduced the concept of
normalization and what we now know as the First Normal Form (1NF) in 1970. Codd went on to define the Second Normal Form (2NF) and Third Normal Form (3NF) in 1971, and Codd and Raymond F. Boyce defined the Boyce-Codd Normal Form in 1974. Higher normal forms were defined by other theorists in subsequent years, the most recent being the Sixth Normal Form (6NF) introduced by Chris Date, Hugh Darwen, and Nikos Lorentzos in 2002.
Informally, a relational database table (the computerized representation of a
relation) is often described as "normalized" if it is in the Third Normal Form. Most
3NF tables are free of insertion, update, and deletion anomalies, i.e. in most
cases 3NF tables adhere to BCNF, 4NF, and 5NF (but typically not 6NF).
The normal forms (abbrev. NF) of relational database theory provide criteria for
determining a table's degree of vulnerability to logical inconsistencies and
anomalies. The higher the normal form applicable to a table, the less vulnerable it
is to inconsistencies and anomalies. Each table has a "highest normal form"
(HNF): by definition, a table always meets the requirements of its HNF and of all
normal forms lower than its HNF; also by definition, a table fails to meet the
requirements of any normal form higher than its HNF.
The normal forms are applicable to individual tables; to say that an entire
database is in normal form n is to say that all of its tables are in normal form n.
Newcomers to database design sometimes suppose that normalization proceeds
in an iterative fashion, i.e. a 1NF design is first normalized to 2NF, then to 3NF,
and so on. This is not an accurate description of how normalization typically
works. A sensibly designed table is likely to be in 3NF on the first attempt;
furthermore, if it is 3NF, it is overwhelmingly likely to have an HNF of 5NF.
Achieving the "higher" normal forms (above 3NF) does not usually require an
extra expenditure of effort on the part of the designer, because 3NF tables usually
need no modification to meet the requirements of these higher normal forms.
The main normal forms are summarized below.
Table 1 Normal Forms

First normal form (1NF): E.F. Codd (1970). Table faithfully represents a relation and has no repeating groups.
Second normal form (2NF): E.F. Codd (1971). No non-prime attribute in the table is functionally dependent on a part (proper subset) of a candidate key.
Third normal form (3NF): E.F. Codd (1971). Every non-prime attribute is non-transitively dependent on every key of the table.
Boyce-Codd normal form (BCNF): Raymond F. Boyce and E.F. Codd (1974). Every non-trivial functional dependency in the table is a dependency on a superkey.
Fourth normal form (4NF): Ronald Fagin (1977). Every non-trivial multivalued dependency in the table is a dependency on a superkey.
Fifth normal form (5NF): Ronald Fagin (1979). Every non-trivial join dependency in the table is implied by the superkeys of the table.
Domain/key normal form (DKNF): Ronald Fagin (1981). Every constraint on the table is a logical consequence of the table's domain constraints and key constraints.
Sixth normal form (6NF): Chris Date, Hugh Darwen, and Nikos Lorentzos (2002). Table features no non-trivial join dependencies at all (with reference to a generalized join operator).
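As a hedged illustration of the anomalies described above (the schema and names below are hypothetical, not from the text), this sketch decomposes a table with a transitive dependency into 3NF using SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Unnormalized: supplier_city depends on supplier alone, not on the whole
# (supplier, part) key, so it is repeated and can become inconsistent.
cur.execute("CREATE TABLE orders_flat (supplier TEXT, supplier_city TEXT, part TEXT, qty INT)")
cur.executemany("INSERT INTO orders_flat VALUES (?,?,?,?)",
                [("S1", "Paris", "bolt", 10),
                 ("S1", "Paris", "nut", 20),
                 ("S2", "Oslo", "bolt", 5)])

# 3NF decomposition: the dependency supplier -> supplier_city moves into
# its own table keyed by supplier.
cur.execute("CREATE TABLE suppliers (supplier TEXT PRIMARY KEY, city TEXT)")
cur.execute("CREATE TABLE orders (supplier TEXT, part TEXT, qty INT, PRIMARY KEY (supplier, part))")
cur.execute("INSERT INTO suppliers SELECT DISTINCT supplier, supplier_city FROM orders_flat")
cur.execute("INSERT INTO orders SELECT supplier, part, qty FROM orders_flat")

# The city is now stored once per supplier, so a single UPDATE cannot
# leave two rows disagreeing about where S1 is located.
cur.execute("UPDATE suppliers SET city = 'Lyon' WHERE supplier = 'S1'")
rows = cur.execute("""SELECT o.supplier, s.city, o.part
                      FROM orders o JOIN suppliers s USING (supplier)
                      ORDER BY o.supplier, o.part""").fetchall()
print(rows)
```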
Evaluator’s Comments if any:
Question 6 What are views? Explain their various types.
Answer 6 (Ahmad, 1997)
In 1975, the American National Standards Institute (ANSI) Standards Planning and Requirements Committee (SPARC) developed a standard based on a three-level approach with a data dictionary:
1. External level or View
2. Conceptual level or View
3. Internal level or View
• The External level is the way users perceive the data
• The Internal level is the way the DBMS and the Operating System perceive the data
• The Conceptual level provides both the mapping and the independence between the external and the internal levels
Figure 4 Database Views (Ahmad, 1997)
External View:
• The user's view of the database
• Consists of different external views of the database:
1. entities
2. attributes
3. relationships
• Different representations of the same data, e.g. date
Conceptual View:
• Describes what data is stored in the database and their relationships
• Independent of any storage considerations e.g. no. of bytes occupied
by Staff No.
• Consists of the logical structure of the database as seen by the DBA
1. all entities, attributes and their relationships
2. constraints on data
3. meaning of data
4. security and integrity
Internal View:
• The physical representation of the database in the computer
• Describes how data is stored in the database to achieve optimal performance and storage space utilization:
1. storage allocation for data and indexes
2. record descriptions for storage
3. record placement
4. data compression and encryption techniques
Answer 6 (Alternate answer)
In database theory, a view consists of a stored query accessible as a virtual table
composed of the result set of a query. Unlike ordinary tables (base tables) in a
relational database, a view does not form part of the physical schema: it is a
dynamic, virtual table computed or collated from data in the database. Changing
the data in a table alters the data shown in subsequent invocations of the view.
Views can provide advantages over tables:
• Views can represent a subset of the data contained in a table
• Views can join and simplify multiple tables into a single virtual table
• Views can act as aggregated tables, where the database engine
aggregates data (sum, average etc) and presents the calculated results
as part of the data
• Views can hide the complexity of data; for example a view could appear
as Sales2000 or Sales2001, transparently partitioning the actual
underlying table
• Views take very little space to store; the database contains only the
definition of a view, not a copy of all the data it presents
• Depending on the SQL engine used, views can provide extra security
• Views can limit the degree of exposure of a table or tables to the outer
world
Just as functions (in programming) can provide abstraction, so database users
can create abstraction by using views. In another parallel with functions, database
users can manipulate nested views, thus one view can aggregate data from other
views. Without the use of views the normalization of databases above second
normal form would become much more difficult. Views can make it easier to
create lossless join decomposition.
Just as rows in a base table lack any defined ordering, rows available through a
view do not appear with any default sorting. A view is a relational table, and the
relational model defines a table as a set of rows. Since sets are not ordered - by
definition - the rows in a view are not ordered, either. Therefore, an ORDER BY
clause in the view definition is meaningless. The SQL standard (SQL:2003) does
not allow an ORDER BY clause in a subselect in a CREATE VIEW statement, just
as it is not allowed in a CREATE TABLE statement. However, sorted data can be obtained from a view, in the same way as from any other table, as part of a query statement. In Oracle 10g, a view can be created with an ORDER BY clause in a subquery.
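The behaviour described above can be sketched with SQLite (the table and view names below are illustrative): a view stores only its defining query, and changes to the base table show up in subsequent invocations of the view.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (year INT, region TEXT, amount REAL)")
cur.executemany("INSERT INTO sales VALUES (?,?,?)",
                [(2000, "east", 100.0), (2000, "west", 150.0), (2001, "east", 120.0)])

# A subset view (transparently partitioning the table by year) and an
# aggregated view; neither copies any data.
cur.execute("CREATE VIEW sales2000 AS SELECT region, amount FROM sales WHERE year = 2000")
cur.execute("""CREATE VIEW sales_by_year AS
               SELECT year, SUM(amount) AS total FROM sales GROUP BY year""")

# Changing the base table changes what later view queries return.
cur.execute("INSERT INTO sales VALUES (2000, 'north', 50.0)")
subset = cur.execute("SELECT COUNT(*) FROM sales2000").fetchone()[0]
totals = cur.execute("SELECT year, total FROM sales_by_year ORDER BY year").fetchall()
print(subset, totals)
```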
Evaluator’s Comments if any:
Question 7 What is Dynamic SQL?
Answer 7
Dynamic SQL is a programming technique that enables you to build SQL
statements dynamically at runtime. You can create more general purpose, flexible
applications by using dynamic SQL because the full text of a SQL statement may
be unknown at compilation. For example, dynamic SQL lets you create a
procedure that operates on a table whose name is not known until runtime.
Oracle includes two ways to implement dynamic SQL in a PL/SQL application:
• Native dynamic SQL, where you place dynamic SQL statements directly
into PL/SQL blocks.
• Calling procedures in the DBMS_SQL package.
You should use dynamic SQL only if you cannot use static SQL to accomplish
your goals, or if using static SQL is cumbersome compared to dynamic SQL.
However, static SQL has limitations that can be overcome with dynamic SQL.
You can use dynamic SQL to create applications that execute dynamic queries,
whose full text is not known until runtime. Many types of applications need to use
dynamic queries, including:
• Applications that allow users to input or choose query search or sorting
criteria at runtime
• Applications that allow users to input or choose optimizer hints at run time
• Applications that query a database where the data definitions of tables are
constantly changing
• Applications that query a database where new tables are created often
Language features:
• If you use C/C++, you can call dynamic SQL with the Oracle Call Interface
(OCI), or you can use the Pro*C/C++ precompiler to add dynamic SQL
extensions to your C code.
• If you use COBOL, you can use the Pro*COBOL precompiler to add
dynamic SQL extensions to your COBOL code.
• If you use Java, you can develop applications that use dynamic SQL with JDBC.
Factors to be considered while designing dynamic SQL; the items below and their effects on performance need careful analysis:
1. Permissions
2. Query execution plans
3. Network traffic
4. Using Output parameters
5. How logic is encapsulated
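A minimal sketch of the dynamic SQL idea, using Python and SQLite rather than PL/SQL (in PL/SQL, native dynamic SQL uses EXECUTE IMMEDIATE; the table and function names below are hypothetical): the table name is not known until run time, so the statement text is built dynamically, while values still go through bind parameters.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE archive_2024 (id INT, note TEXT)")
conn.execute("INSERT INTO archive_2024 VALUES (1, 'ok')")

def count_rows(conn, table_name):
    # Identifiers cannot be bound as parameters, so the statement text
    # itself must be assembled at run time; validate the name against
    # the catalog first to avoid injecting arbitrary SQL.
    exists = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type = 'table' AND name = ?",
        (table_name,)).fetchone()
    if exists is None:
        raise ValueError(f"unknown table: {table_name}")
    # Values, by contrast, should always go through bind parameters.
    return conn.execute(f'SELECT COUNT(*) FROM "{table_name}"').fetchone()[0]

print(count_rows(conn, "archive_2024"))
```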
Evaluator’s Comments if any:
Question 8 What is parsing?
Answer 8
Parsing is a subset activity of the database query compiler and does the following:
• Parses the SQL query into a parse tree
• Transforms the parse tree into an expression tree (logical query plan)
• Transforms the logical query plan into a physical query plan
The above sequence is pictorially depicted in the figures below.
Figure 5 Query Parsing
Figure 6 Query parsing steps
Figure 7 Parsing Tree
In Oracle, every statement, be it DDL, DML, or anything else, gets parsed. Parsing covers what Oracle understands about the statement and, based on that, how it decides to execute it. This process is also known as optimization of the query. As the name says, the idea is how best Oracle can process the query, or in other words, optimize its execution.
Parsing is of two types, hard parse and soft parse. If the query is found in Oracle's cache, query optimization is not needed; Oracle can pick up the optimized query and execute it. If the query is run for the first time, or the query's cached version is obsolete or has been flushed out of Oracle's cache, the query needs to be optimized, and this process is called a hard parse. A hard parse is, in general, unavoidable, as each query needs to be parsed at least once, the very first time. In subsequent executions, the query should simply be soft parsed and executed.
The mechanism that works in the backend to do all this is called the optimizer. There are two versions of it, rule based and cost based. The rule-based optimizer (RBO) was deprecated as of Oracle 10g; it was not very efficient, as it was "statement driven". The cost-based optimizer is now the only supported mode; as the name suggests, it is cost based and takes into account the resource consumption of the query.
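The end product of parsing and optimization, the physical query plan, can be inspected in most engines; a small SQLite sketch (the schema is illustrative, and the exact plan text varies by SQLite version):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, dept TEXT, salary REAL)")
conn.execute("CREATE INDEX emp_dept ON emp (dept)")

# EXPLAIN QUERY PLAN exposes the physical plan the engine chose after
# parsing and optimizing the statement; here it should pick the index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT salary FROM emp WHERE dept = ?", ("sales",)
).fetchall()
for row in plan:
    print(row)
```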
Evaluator’s Comments if any:
Question 9(i) Write a short note on:
Concurrency control by time stamps
Answer 9(i)
Transactions are processed so that their execution is equivalent to a serial execution in their timestamp order. Timestamp-based concurrency control allows a transaction to read or write a data item x only if x was last written by an older transaction. Otherwise, it rejects the operation and restarts the transaction.
Scheduler tasks:
• The scheduler assigns each transaction T a unique number, its timestamp TS(T).
• The timestamp ordering protocol ensures that any conflicting read and write operations are executed in timestamp order.
• Each transaction is issued a timestamp when it enters the system. If an old transaction T1 has timestamp TS(T1), a new transaction T2 is assigned timestamp TS(T2) such that TS(T1) < TS(T2).
Two approaches to generate Timestamps:
1. Use the value of the system clock as the timestamp; that is, a transaction's timestamp is equal to the value of the clock when the transaction enters the system.
2. Use a logical counter that is incremented after a new timestamp has been assigned; that is, a transaction's timestamp is equal to the value of the counter when the transaction enters the system.
The rules of timestamp-based scheduling cover four cases, depending on the request the scheduler receives:
• the scheduler receives a request RT(X)
• the scheduler receives a request WT(X)
• the scheduler receives a request to commit T
• the scheduler receives a request to abort T or decides to roll back T
Scheduler receives a request RT(X)
(a) If TS(T)>=WT(X), the read is physically realizable.
i. If C(X) is true, grant the request.
ii. If C(X) is false, delay T until C(X) becomes true or the transaction
that wrote X aborts.
(b) If TS(T)<WT(X), the read is physically unrealizable; abort T and restart it with a new, larger timestamp.
Scheduler receives a request WT(X)
(a) If TS(T)>=RT(X) and TS(T)>=WT(X), the write is physically realizable and must be performed.
i. Write the new value for X,
ii. Set WT(X):=TS(T), and
iii. Set C(X):= false.
(b) If TS(T)>=RT(X), but TS(T)<WT(X), then the write is physically realizable, but there is already a later value in X.
i. If C(X) is true, then the previous writer of X has committed, and we simply ignore the write by T.
ii. If C(X) is false, we must delay T.
(c) If TS(T)<RT(X), the write is physically unrealizable, and T must be rolled back.
The scheduler receives a request to commit T.
It must find all the database elements X written
by T, and set C(X):= true.
The scheduler receives a request to abort T or decides to roll back T.
Any transaction that was waiting on an element X that T wrote must repeat its attempt to read or write, and see whether the action is now legal after the aborted transaction's writes are cancelled.
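The read and write rules above can be sketched as follows (a simplified illustration that ignores the commit bit C(X) and the associated delays; the class and method names are hypothetical, not from the text):

```python
class Abort(Exception):
    pass

class Element:
    def __init__(self):
        self.rt = 0  # RT(X): largest timestamp of any reader of X
        self.wt = 0  # WT(X): largest timestamp of any writer of X

class Scheduler:
    def __init__(self):
        self.clock = 0

    def start(self):
        # Logical-counter timestamps: strictly increasing per transaction.
        self.clock += 1
        return self.clock

    def read(self, ts, x):
        if ts < x.wt:
            # X was written "in the future": physically unrealizable.
            raise Abort
        x.rt = max(x.rt, ts)

    def write(self, ts, x):
        if ts < x.rt:
            # A later transaction has already read X: roll back.
            raise Abort
        if ts < x.wt:
            return  # already a later value in X: skip the obsolete write
        x.wt = ts

sched = Scheduler()
x = Element()
t1, t2 = sched.start(), sched.start()
sched.read(t2, x)       # RT(X) becomes 2
sched.write(t2, x)      # WT(X) becomes 2
try:
    sched.write(t1, x)  # an older writer after a younger reader: abort
    outcome = "ok"
except Abort:
    outcome = "aborted"
print(outcome)
```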
Evaluator’s Comments if any:
Question 9(ii) Write a short note on:
Concurrency control by validation
Answer 9(ii)
• Another type of optimistic concurrency control
• Maintains a record of what active transactions are doing
• Just before a transaction starts to write, it goes through a "validation phase"
• If there is a risk of physically unrealizable behavior, the transaction is rolled back
Scheduler tasks:
• Keep track of each transaction T's
– Read set RS(T): the set of elements T reads
– Write set WS(T): the set of elements T writes
• Execute transactions in three phases:
– Read. T reads all the elements in RS(T)
– Validate. Validate T by comparing its RS(T) and WS(T) with those of other transactions. If the validation fails, T is rolled back
– Write. T writes its values for the elements in WS(T)
The scheduler maintains three sets:
• START: the set of transactions that have started, but not yet completed validation. For each T, maintain (T, START(T))
• VAL: the set of transactions that have been validated, but not yet finished. For each T, maintain (T, START(T), VAL(T))
• FIN: the set of transactions that have completed. For each T, maintain (T, START(T), VAL(T), FIN(T))
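The validation phase can be sketched as a comparison of read and write sets (a simplified illustration; the function and variable names are hypothetical, and the real protocol also distinguishes the timing cases via the START, VAL, and FIN sets above):

```python
def validate(rs_t, ws_t, overlapping_writes):
    # T fails validation if a transaction that validated while T was
    # running wrote an element that T read, or an element T is about
    # to write; either overlap risks physically unrealizable behavior.
    for ws_u in overlapping_writes:
        if rs_t & ws_u or ws_t & ws_u:
            return False
    return True

# T read {x, y} and will write {y}; U validated earlier and wrote {z}.
ok = validate({"x", "y"}, {"y"}, [{"z"}])    # no overlap: T validates
bad = validate({"x", "y"}, {"y"}, [{"x"}])   # U wrote x, which T read
print(ok, bad)
```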
Evaluator’s Comments if any:
Question 10 What are centralized and client-server architectures?
Answer 10
Centralized Systems:
• Run on a single computer system and do not interact with other computer
systems.
• Combines everything into a single system, including DBMS software, hardware, application programs, and user interface processing software.
• A user can still connect through a remote terminal; however, all processing is done at the centralized site.
• General-purpose computer system: one to a few CPUs and a number of
device controllers that are connected through a common bus that provides
access to shared memory.
• Single-user system (e.g., personal computer or workstation): desk-top unit,
single user, usually has only one CPU and one or two hard disks; the OS
may support only one user.
Multi-user system: more disks, more memory, multiple CPUs, and a multi-user OS. Serves a large number of users who are connected to the system via terminals. Often called server systems.
Figure 8 Centralized Architecture
Client-Server Systems:
• Server systems satisfy requests generated at m client systems, whose
general structure is shown below:
Figure 9 Client-Server Architecture
Specialized Servers with Specialized functions
• Print server
• File server
• DBMS server
• Web server
• Email server
Clients can access the specialized servers as needed
Clients features:
• Provide appropriate interfaces through a client software module to access
and utilize the various server resources.
• Clients may be diskless machines, or PCs or workstations with disks, with only the client software installed.
• Connected to the servers via some form of a network.
Server features:
• Provides database query and transaction services to the clients
• Relational DBMS servers are often called SQL servers, query servers, or transaction servers
• Applications running on clients utilize an Application Program Interface (API) to access server databases via a standard interface such as:
o ODBC: Open Database Connectivity standard
o JDBC: for Java programming access
• Client and server must install appropriate client module and server module
software for ODBC or JDBC
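The request-response pattern described above, where the client always initiates the dialogue and the server satisfies requests, can be sketched with plain sockets (a minimal illustration, not a real DBMS protocol):

```python
import socket
import threading

def serve_once(sock):
    # The server waits passively; it only responds to client requests.
    conn, _ = sock.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"echo: {request}".encode())

server = socket.socket()
server.bind(("127.0.0.1", 0))   # bind to any free local port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# The client initiates the dialogue by connecting and sending a request.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"SELECT 1")
reply = client.recv(1024).decode()
client.close()
print(reply)
```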
Evaluator’s Comments if any:
Bibliography
Ahmad, E. M. (1997). CMPB244: Database Design. Pahang, Malaysia: Universiti Tenaga Nasional.
Anonymous. (2009). Database normalization. Retrieved 2009, from Wikipedia:
http://en.wikipedia.org/wiki/Database_normalization
Goyal, A. (2008, Feb). STRUCTURE OF DBMS. Retrieved Oct 2009, from DBMSBASICS:
http://dbmsbasics.blogspot.com/2008/02/structure-of-dbms.html
KSOU. (2009). Advanced DBMS. Delhi: Virtual Education Trust.
Mylopoulos, J. (2004). Conceptual Modeling. Toronto, Canada: Department of Computer Science, University of Toronto.