The technology of object-oriented databases was introduced to system developers in
the late 1980s. Object DBMSs add database functionality to object programming languages. A
major benefit of this approach is the unification of application and database development into
a seamless data model and language environment. As a result, applications require less code, use
more natural data modeling, and their code bases are easier to maintain.
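The "unification" claim above can be illustrated with a small sketch. Python's standard-library shelve module is not an OODBMS, but it shows the same idea under that assumption: plain application objects are persisted and retrieved directly, with no separate relational schema or mapping layer.

```python
import shelve

class Customer:
    """A plain application object; no separate relational schema is defined."""
    def __init__(self, cid, name, orders):
        self.cid = cid
        self.name = name
        self.orders = orders  # nested structure is stored as-is

# Persist the application object directly, with no object-relational mapping.
with shelve.open("customers.db") as db:
    db["c1"] = Customer("c1", "Ada", [{"item": "book", "qty": 2}])

# Later, the object comes back with its structure intact.
with shelve.open("customers.db") as db:
    c = db["c1"]
    print(c.name, c.orders[0]["item"])  # Ada book
```

The class name and fields here are hypothetical; the point is only that the application's data model and the stored data model are one and the same.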
This presentation discusses the following topics:
Object-Oriented Databases
Object-Oriented Data Model (OODM)
Characteristics of Object-Oriented Databases
Objects, Attributes, and Identity
Object-Oriented Methodologies
Benefits of Object Orientation in Programming Languages
Object-Oriented Model vs. Entity Relationship Model
Advantages of OODB over RDBMS
This presentation discusses the following topics:
Introduction
Objectives
What is Data Mining?
Data Mining Applications
Data Warehousing
Advantages and disadvantages
Trends and Current Issues
Future Research Possibilities
Comparative Study on Graph-based Information Retrieval: the Case of XML Documents (IJAEMS Journal)
The processing of massive amounts of data has become indispensable, especially with the proliferation of big data. The volume of information available nowadays makes it difficult for users to find relevant information in a vast collection of documents. As a result, exploiting vast document collections requires automated technologies that enable appropriate and effective retrieval. In this paper, we examine the state of the art of IR in XML documents. We also discuss works that have used graphs to represent documents in the context of IR, paying particular attention to the relationships between the components of a graph.
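A graph representation of an XML document, as surveyed above, can be sketched in a few lines: element tags become nodes and parent-child containment becomes edges. The tiny document below is illustrative, not taken from the paper.

```python
import xml.etree.ElementTree as ET

# A tiny XML document; tags become nodes, containment becomes edges.
doc = "<article><title>Graph IR</title><body><sec>Intro</sec><sec>Method</sec></body></article>"

def xml_to_graph(xml_text):
    """Return (nodes, edges), where edges are (parent_tag, child_tag) pairs."""
    root = ET.fromstring(xml_text)
    nodes, edges = [], []
    def walk(elem):
        nodes.append(elem.tag)
        for child in elem:
            edges.append((elem.tag, child.tag))
            walk(child)
    walk(root)
    return nodes, edges

nodes, edges = xml_to_graph(doc)
print(nodes)   # ['article', 'title', 'body', 'sec', 'sec']
print(edges)   # [('article', 'title'), ('article', 'body'), ('body', 'sec'), ('body', 'sec')]
```

Once a document is in this form, relationships between components can be queried as ordinary graph traversals.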
Unstructured Multidimensional Array Multimedia Retrieval Model Based on XML Database (eSAT Journals)
Abstract: Drawing on ideas from data warehousing, data cubes, and XML, this paper presents a new database structure model that organizes unstructured data in a multidimensional data cube based on an XML database. In this XML data cube, clustered data are stored in an instance table, and the corresponding leading data are stored in a dimension table. The relational model is helpful for constructing a data model but lacks flexibility; the new data model can complement this defect of the relational model. When querying, a leading datum is obtained from the XML dimension table, and the unstructured data are then retrieved through XQuery. This increases the flexibility of the XML database. Keywords: XML, multimedia, multi-dimension, database, retrieval model, multidimensional array, unstructured data.
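The two-step lookup the abstract describes (dimension table first, then the instance table) can be sketched as follows. The XML layout and key names are invented for illustration, and ElementTree path queries stand in for XQuery.

```python
import xml.etree.ElementTree as ET

# A toy XML "cube": a dimension table of leading keys and an instance table
# of clustered data. The element and attribute names are illustrative.
cube = ET.fromstring("""
<cube>
  <dimension>
    <entry key="video42" ref="i1"/>
  </dimension>
  <instances>
    <item id="i1" type="video">raw multimedia payload</item>
  </instances>
</cube>
""")

def retrieve(key):
    # Step 1: look up the leading datum in the dimension table.
    ref = cube.find(f".//entry[@key='{key}']").get("ref")
    # Step 2: fetch the unstructured item from the instance table
    # (an XQuery stand-in using ElementTree's path syntax).
    return cube.find(f".//item[@id='{ref}']").text

print(retrieve("video42"))  # raw multimedia payload
```

The indirection through the dimension table is what lets the unstructured payloads stay clustered and opaque while queries remain structured.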
The huge volume of text documents available on the internet has made it difficult to find valuable
information for specific users. The need for efficient applications to extract knowledge of interest
from textual documents is therefore vitally important. This paper addresses the problem of responding to user
queries by fetching the most relevant documents from a clustered set of documents. For this purpose, a
cluster-based information retrieval framework is proposed to design and develop
a system for analysing and extracting useful patterns from text documents. In this approach, a pre-processing
step is first performed to find frequent and high-utility patterns in the data set. A Vector
Space Model (VSM) is then used to represent the dataset. The system was implemented in two main
phases. In phase 1, a clustering analysis process groups documents into
several clusters; in phase 2, an information retrieval process ranks clusters
against the user query in order to retrieve the relevant documents from the clusters deemed
relevant. The results are evaluated using Recall and Precision
(P@5, P@10): P@5 was 0.660 and P@10 was 0.655.
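The P@5 and P@10 figures reported above follow the standard precision-at-k definition, which is easy to state in code. The ranking and relevance judgements below are hypothetical, not the paper's data.

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    top_k = retrieved[:k]
    hits = sum(1 for doc in top_k if doc in relevant)
    return hits / k

# Hypothetical ranking and relevance judgements for one query.
ranking = ["d3", "d7", "d1", "d9", "d4", "d2", "d8", "d5", "d6", "d0"]
relevant = {"d3", "d1", "d4", "d2", "d5"}

print(precision_at_k(ranking, relevant, 5))   # 3 of the top 5 are relevant -> 0.6
print(precision_at_k(ranking, relevant, 10))  # 5 of the top 10 -> 0.5
```

In practice these values are averaged over all queries in the test set, which is how single summary numbers like 0.660 arise.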
This presentation discusses the following topics:
Architecture
Database System Components
Generating Reports
Database Components
Types of Data
The DBMS
Design Tools
DBMS Engine
Database Schema
Components of Applications
Database Development Terminology, followed by a Quiz
This presentation discusses the following topics:
What is XML?
Syntax of XML Document
DTD (Document Type Definition)
XML Schema
XML Query Language
XML Databases
Oracle JDBC
This presentation discusses the following topics:
Concepts
Component Architecture for a DDBMS
Distributed Processing
Parallel DBMS
Advantages of DDBMSs
Disadvantages of DDBMSs
Homogeneous DDBMS
Heterogeneous DDBMS
Open Database Access and Interoperability
Multidatabase System
Functions of a DDBMS
Reference Architecture for DDBMS
Components of a DDBMS
Fragmentation
Transparencies in a DDBMS
Date’s 12 Rules for a DDBMS
This presentation discusses the following topics:
Importance of Data Models
Basic Building Blocks
Business Rules
Translating Business Rules into Data Models
Evolution of Data Models
Hierarchical Data Model
Network Data Model
Relational Data Model
Entity Relationship Model
Object Model
Summary
Followed by a Quiz
An introduction to the architecture of an Object-Oriented Database Management System (OODBMS) and how it differs from a Relational Database Management System (RDBMS).
Object-Oriented Database Model for Effective Mining of Advanced Engineering M... (cscpconf)
Materials have become a very important aspect of our daily life, and the search for better and
new kinds of engineered materials has created opportunities for the information science
and technology fraternity to investigate the world of materials. This combination of
materials science and information science is nowadays known as Materials
Informatics. An Object-Oriented Database Model has been proposed for organizing advanced engineering materials datasets.
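An object-oriented model for materials records can be sketched directly as classes: attributes live on objects, and inheritance lets specialised materials extend a base record. The class names, attributes, and values below are illustrative, not the paper's actual schema.

```python
# A minimal sketch of an object-oriented data model for materials records.
class Material:
    def __init__(self, name, density_g_cm3):
        self.name = name
        self.density_g_cm3 = density_g_cm3

class Alloy(Material):
    """Inheritance lets a specialised material extend the base record."""
    def __init__(self, name, density_g_cm3, components):
        super().__init__(name, density_g_cm3)
        self.components = components  # nested data stored inside the object

# Illustrative values for a stainless steel.
steel = Alloy("AISI 304", 8.0, {"Fe": 0.70, "Cr": 0.19, "Ni": 0.09})
print(steel.name, steel.components["Cr"])
```

This is the kind of nesting (a composition map inside a record) that a flat relational row handles awkwardly but an object model stores naturally.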
A Comparative Study of RDBMSs and OODBMSs in Relation to Security of Data (inscit2006)
Mansaf Alam and Siri Krishan Wasan
Department of Computer Sciences, Jamia Millia Islamia, New Delhi, India.
Department of Mathematics, Jamia Millia Islamia, New Delhi, India.
The following topics are discussed in this presentation:
Data and Information
Database
Database Management System
Objectives
Advantages
Components
Architecture
USING ONTOLOGIES TO OVERCOMING DRAWBACKS OF DATABASES AND VICE VERSA: A SURVEY (cseij)
For the same domain, several databases (DBs) often exist. The evolution from the classical web to the semantic web has
contributed to the emergence of the notion of ontology, which provides a shared and consensual vocabulary. For a
given domain, it is more interesting to take advantage of existing databases to build an ontology, since most of the data
are already stored in these databases. Many DBs can thus be integrated to enable reuse of existing data for
the semantic web. Even for existing ontologies, the relevance of the information they contain requires
regular updating, and these databases can be useful sources for enriching them. On the other hand, for
these ontologies, the larger the ratio of instance size to working-memory size, the more difficult
it is to manage these instances in memory. Finding a way to store these instances in a
structured manner, satisfying the performance and reliability requirements of many applications,
becomes an obligation. As a consequence, defining query languages to support these structures becomes a
challenge for the SW community. We show in this paper how ontologies can benefit from DBs to
increase system performance and facilitate their design cycle. DBs, in their turn, suffer from several
drawbacks, namely the complexity of the design cycle and a lack of semantics. Since ontologies are rich in
semantics, DBs can profit from this to overcome their drawbacks.
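One common way databases feed ontologies, as surveyed above, is to map relational rows into subject-predicate-object triples. The table, column names, and predicates below are illustrative assumptions, not a schema from the survey.

```python
# A minimal sketch of enriching an ontology from relational rows:
# each row becomes (subject, predicate, object) triples.
rows = [
    {"id": "p1", "name": "Aspirin", "kind": "Drug"},
    {"id": "p2", "name": "Ibuprofen", "kind": "Drug"},
]

def rows_to_triples(table):
    """Map each row to an rdf:type triple and an rdfs:label triple."""
    triples = []
    for row in table:
        subject = row["id"]
        triples.append((subject, "rdf:type", row["kind"]))
        triples.append((subject, "rdfs:label", row["name"]))
    return triples

for t in rows_to_triples(rows):
    print(t)
```

Going the other direction (storing ontology instances back into tables for performance) is essentially this mapping run in reverse.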
Sentimental classification analysis of polarity multi-view textual data using... (IJECE/IAES)
The data and information available in most community environments are complex in nature. Sentimental data resources may consist of textual data collected from multiple information sources with different representations, usually handled by different analytical models. Data resources with these characteristics can form multi-view polarity textual data. Knowledge creation from this type of sentimental textual data, however, requires considerable analytical effort and capability. Data mining practices in particular can provide exceptional results in handling textual data formats. Moreover, when the textual data exist in multi-view or unstructured formats, hybrid and integrated text data mining algorithms are vital to obtain helpful results. The objective of this research is to enhance knowledge discovery from sentimental multi-view textual data, which can be considered an unstructured data format, by classifying the polarity information documents into two categories of useful information. A framework with integrated data mining algorithms is proposed in this paper, achieved through the application of the X-means algorithm for clustering and the HotSpot algorithm for association rules. The analysis results show improved accuracy in classifying the sentimental multi-view textual data into two categories when the proposed framework is applied to an online polarity user-review dataset on given topics.
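The paper's pipeline uses X-means clustering and HotSpot association rules; as a much simpler stand-in, the sketch below splits reviews into the same two polarity categories using a tiny illustrative lexicon. The word lists and reviews are invented.

```python
# Illustrative polarity lexicon (a stand-in for the paper's learned models).
POSITIVE = {"good", "great", "excellent", "love"}
NEGATIVE = {"bad", "poor", "terrible", "hate"}

def polarity(review):
    """Assign a review to one of two categories by lexicon word counts."""
    words = review.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score >= 0 else "negative"

reviews = ["Great camera, love it", "Terrible battery, poor build"]
print([polarity(r) for r in reviews])  # ['positive', 'negative']
```

A real system replaces the lexicon with clustering over document vectors, but the output shape (documents sorted into two polarity groups) is the same.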
A Survey on Graph Database Management Techniques for Huge Unstructured Data (IJECE/IAES)
Data analysis, data management, and big data have played a major role in both social and business perspectives over the last decade. Nowadays, the graph database is a hot and trending research topic. A graph database is preferred for dealing with the dynamic and complex relationships in connected data, and it offers better results. Every data element is represented as a node; for example, on a social media site, a person is represented as a node with properties such as name, age, likes, and dislikes, and the nodes are connected by relationships via edges. Graph databases are expected to be beneficial for business and social networking sites that generate huge volumes of unstructured data, since such big data requires proper and efficient computational techniques. This paper reviews existing graph data computational techniques and research work in order to lay out future research directions in graph database management.
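The node/edge structure described above (a property graph) can be sketched with plain dictionaries; real graph databases add indexing, persistence, and a query language on top of the same shape. The people and properties below are invented.

```python
# A minimal property-graph sketch: every person is a node with properties;
# relationships are labelled edges between nodes.
nodes = {
    "alice": {"age": 30, "likes": ["hiking"]},
    "bob": {"age": 27, "likes": ["chess"]},
}
edges = [("alice", "FRIENDS_WITH", "bob")]

def neighbours(name, rel):
    """Follow edges with a given label from one node, in either direction."""
    out = [dst for src, r, dst in edges if src == name and r == rel]
    out += [src for src, r, dst in edges if dst == name and r == rel]
    return out

print(neighbours("alice", "FRIENDS_WITH"))  # ['bob']
print(nodes["bob"]["age"])                  # 27
```

Traversing relationships is a list scan here; the point of a graph database engine is to make exactly this operation fast at scale.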
Structured and Unstructured Information Extraction Using Text Mining and Natu... (rahulmonikasharma)
Information on the web is increasing ad infinitum. The web has thus become an unstructured global area where information, even if available, cannot be directly used for desired applications. One is often faced with information overload and demands for some automated help. Information extraction (IE) is the task of automatically extracting structured information from unstructured and/or semi-structured machine-readable documents by means of text mining and Natural Language Processing (NLP) techniques. Extracted structured information can be used for a variety of enterprise- or personal-level tasks of varying complexity. IE can also draw on a set of knowledge in order to answer user consultations using natural language. The system is based on a fuzzy logic engine, which takes advantage of its flexibility for managing sets of accumulated knowledge; these sets may be built in hierarchic levels by a tree structure. Information extraction produces structured data or knowledge from unstructured text by identifying references to named entities as well as stated relationships between such entities. Data mining research assumes that the information to be "mined" is already in the form of a relational database, so IE can serve as an important technology for text mining. If the knowledge discovered is expressed directly in the documents to be mined, then IE alone can serve as an effective approach to text mining. However, if the documents contain concrete data in unstructured form rather than abstract knowledge, it may be useful to first use IE to transform the unstructured data in the document corpus into a structured database, and then use traditional data mining tools to identify abstract patterns in this extracted data. We propose a novel method for text mining with natural language processing techniques that extracts information from a database efficiently; extraction time and accuracy are measured and plotted in simulation, along with the attributes of entities and the relationships between entities from structured and semi-structured information. Results are compared with conventional methods.
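The core IE step described above, turning unstructured text into structured rows by spotting entities and their stated relationships, can be sketched with a single regular expression. The pattern and sentences are illustrative, far short of a full NLP pipeline.

```python
import re

# A minimal IE sketch: pull (person, organisation) affiliation pairs
# out of free text with a regular expression.
text = "Alice Smith works at Acme Corp. Bob Jones works at Globex Inc."

pattern = re.compile(r"([A-Z][a-z]+ [A-Z][a-z]+) works at ([A-Z][a-z]+ [A-Z][a-z]+)")
records = pattern.findall(text)  # structured rows extracted from unstructured text

print(records)  # [('Alice Smith', 'Acme Corp'), ('Bob Jones', 'Globex Inc')]
```

Once the text is reduced to rows like these, conventional data mining tools can operate on it as if it were a relational table, which is exactly the hand-off the abstract describes.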
Comparison of Relational Database and Object-Oriented Database (Editor IJMTER)
The object-oriented database (OODB) combines object-oriented
programming language (OOPL) systems and persistent systems. Object DBMSs add database
functionality to object programming languages, bringing much more than persistent
storage of programming language objects. A major benefit of this approach is the unification
of application and database development into a seamless data model and language
environment. This report presents a comparison between object-oriented databases and
relational databases, the advantages of OODBMSs over RDBMSs, and applications of
OODBMSs.
Data Management Issues and Study on Heterogeneous Data Storage in the Interne... (CSEIJ Journal)
The Internet of Things is a networking standard that connects various hardware, including digital,
physical, and virtual things that may communicate with one another and carry out user-requested tasks.
Traditional database management methods cannot be used here because of the variety, large
volume, and heterogeneity of the data generated. The rapid growth of heterogeneous data can only be
managed by distributed and parallel computer systems and databases. When it comes to handling vast
amounts of diverse data, most relational databases have a variety of drawbacks because they were designed
for a particular format. One of the main difficulties in data management is investigating such heterogeneous
data. Consequently, IoT data management system design has to be considered under some distinct
principles, and these guiding concepts enable various IoT data management system
strategies to be suggested. The solution should provide a unified format for converting the various
heterogeneous data generated by the sensors. The integration of generated data is made simple by some middleware
or architecture-oriented solutions; other methods offer effective storage of the unified data.
This paper surveys the challenges of IoT data management and provides a survey of the storage of
heterogeneous data and the types of data used.
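The "unified format" idea in the survey can be sketched as a small normalisation step: each sensor's native payload is converted into one common record shape before storage. The payload shapes and the unified field names below are illustrative assumptions.

```python
import json

# A sketch of converting heterogeneous sensor payloads into one unified record.
def unify(reading):
    if "temp_c" in reading:        # e.g. a temperature sensor's native shape
        return {"sensor": reading["id"], "kind": "temperature", "value": reading["temp_c"]}
    if "humidity_pct" in reading:  # e.g. a humidity sensor's native shape
        return {"sensor": reading["id"], "kind": "humidity", "value": reading["humidity_pct"]}
    raise ValueError("unknown payload shape")

raw = [{"id": "s1", "temp_c": 21.5}, {"id": "s2", "humidity_pct": 40}]
unified = [unify(r) for r in raw]
print(json.dumps(unified))
```

In a real deployment this conversion lives in the middleware layer the survey mentions, so every downstream store sees only the unified schema.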
A Study on Graph Storage Database of NoSQL (IJSCAI Journal)
Big data stores huge volumes of both structured and unstructured data that are too large and
hard to process using traditional database tools and software technologies. The goal of big data
storage management is to ensure a high level of data quality and availability for business intelligence and big
data analytics applications. The graph database is not yet the most popular NoSQL database compared to the
relational database, but it is a most powerful NoSQL database that can handle large volumes of data
very efficiently. It is very difficult to manage large volumes of data using traditional technology, and data
retrieval time may grow as database size increases; NoSQL databases are available as a solution.
This paper describes what big data storage management is, the dimensions of big data, types of data,
what structured and unstructured data are, what a NoSQL database is, types of NoSQL databases, the basic
structure of a graph database, its advantages, disadvantages, and application areas, and a comparison of various
graph databases.
Organizing Data in a Traditional File Environment
File Organization Terms and Concepts
Computer system organizes data in a hierarchy
Bit: Smallest unit of data; binary digit (0,1)
Byte: Group of bits that represents a single character
Field: Group of characters as word(s) or number
Record: Group of related fields
File: Group of records of same type
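The hierarchy above can be sketched in a few lines of Python; the student record below is an invented example for illustration, not data from the presentation.

```python
# Illustrative sketch of the data hierarchy: bit -> byte -> field -> record -> file.
# The record layout here is a made-up example.

record = {                      # Record: group of related fields
    "name": "Alice",            # Field: group of characters forming a word
    "course": "IS101",          # Field
    "grade": "A",               # Field
}

file_of_records = [record]      # File: group of records of the same type

# Each character in a field is one byte, and each byte is a group of bits.
byte = "A".encode("ascii")      # Byte: a single character, here 0x41
bits = format(byte[0], "08b")   # Bits: the binary digits that make up the byte
print(bits)                     # -> 01000001
```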
Muhammad Sharif (Database Systems Handbook), database administrator, SKMCHRC Lahore, Pakistan
I am writing this book. The Database Systems Handbook, by Muhammad Sharif, covers DBMS and RDBMS concepts and common database management system abbreviations. I have core knowledge of database systems, their structure, and database system administration. My thanks to all my readers.
The Database Systems Handbook is written by Muhammad Sharif, Software Engineer, SKMCHRC Lahore. It covers core RDBMS and file system content, from introductory database systems through to advanced topics such as DBA concepts.
A NEW DATA ENCODER AND DECODER SCHEME FOR NETWORK ON CHIP (Editor IJMTER)
System-on-chip (SoC) based systems suffer disadvantages in power dissipation and clock rate when data is transferred from one system to another on-chip. At the same time, a system operating at a higher frequency does not support a lower-speed bus network for data transfer. An alternative scheme has been proposed for high-speed data transfer, but it is limited to SoCs. Unlike the SoC, the network-on-chip (NoC) has many advantages for data transfer. It has a special feature for on-chip data transfer, a transitional encoder, whose operation is based on input transitions, and it supports systems operating at higher frequencies. In this project, a low-power encoding scheme is proposed. The proposed system yields lower dynamic power dissipation than the existing system due to reduced switching activity and coupling switching activity. Although many factors contribute to power dissipation, only dynamic power dissipation is considered, for reasonable advantage. The proposed system is synthesized using Quartus II 9.1 software. In addition, the proposed system will be extended to inter-PE communication with the help of routers and PEs performing various operations. To implement this system in real NoCs, the proposed encoders and decoders for data transfer under regular traffic scenarios should be considered.
A RESEARCH - DEVELOP AN EFFICIENT ALGORITHM TO RECOGNIZE, SEPARATE AND COUNT ... (Editor IJMTER)
Coins are an important part of our life. We use coins in places like stores, banks, buses, and trains, so it is a basic need that coins can be sorted and counted automatically, which in turn requires that coins be recognized automatically. We present an automated coin recognition system for the Indian coins of Rs. 1, 2, 5, and 10 with rotation invariance. We have taken images of both sides of each coin, so the system is capable of recognizing coins from either side. Features are extracted from the images using techniques such as the Hough transform and pattern averaging.
Analysis of VoIP Traffic in WiMAX Environment (Editor IJMTER)
Worldwide Interoperability for Microwave Access (WiMAX) is currently one of the hottest technologies in wireless communication. It is a standard based on IEEE 802.16 wireless technology that provides very high-throughput broadband connections over long distances. In parallel, Voice over Internet Protocol (VoIP) is a technology that provides voice communication over the Internet Protocol, and it has become an alternative to the public switched telephone network (PSTN) due to its capability of transmitting voice as packets over IP networks. A lot of research has been done on analyzing the performance of VoIP traffic over WiMAX networks. In this paper we review the analyses carried out by several authors for the most common VoIP codecs, G.711, G.723.1, and G.729, over a WiMAX network using various service classes. The objective is to compare the results for different types of service classes with respect to QoS parameters such as throughput, average delay, and average jitter.
A Hybrid Cloud Approach for Secure Authorized De-Duplication (Editor IJMTER)
Cloud backup is used for people's personal storage, reducing the maintenance burden and managing structure and storage space. The challenging step is the de-duplication process, in both local and global backup de-duplication. Prior work provides only local-storage de-duplication or, conversely, only global-storage de-duplication to improve storage capacity and processing time. In this paper, the proposed system, called ALG-Dedupe (Application-aware Local-Global source De-duplication), provides efficient de-duplication with low system load, a shortened backup window, and increased power efficiency in the user's personal storage. In the proposed system, large data is partitioned into smaller parts called chunks; any redundancy in the data is removed before it is stored into the storage area.
Aging protocols that could incapacitate the Internet (Editor IJMTER)
The biggest threat to the Internet is the fact that it was never really designed. Instead, it evolved in fits and starts, thanks to various protocols that were cobbled together to fulfill the needs of the moment. For example, the BGP protocol is used by Internet routers to exchange information about changes to the Internet's network topologies, yet it is among the most fundamentally broken, as Internet routing information can be poisoned with bogus routes. Few of these protocols were designed with security in mind, and those that were sported no more than was needed to keep out a nosy neighbor, not a malicious attacker. The result is a welter of aging protocols susceptible to exploits on an Internet scale. Here are six Internet protocols that could stand to be replaced sooner rather than later, or are (mercifully) on the way out.
A Cloud Computing design with Wireless Sensor Networks For Agricultural Appli... (Editor IJMTER)
The emergence of precision agriculture has been promoted by numerous developments in the field of wireless sensor and actor networks (WSANs). These WSANs provide important data for gathering, work management, crop development, and the limitation of crop diseases. The goal of this paper is to introduce cloud computing as a new technique to be used alongside WSANs to further enhance their application and benefits in the area of agriculture.
A CAR POOLING MODEL WITH CMGV AND CMGNV STOCHASTIC VEHICLE TRAVEL TIMES (Editor IJMTER)
Carpooling (also car-sharing, ride-sharing, or lift-sharing) is the sharing of car journeys so that more than one person travels in a car. It helps to resolve a variety of problems that continue to plague urban areas, ranging from energy demands and traffic congestion to environmental pollution. Most existing methods use stochastic disturbances arising from variations in vehicle travel times for carpooling, but they do not deal with unmet demand under uncertain vehicle demand. To address this, the proposed system uses a chance-constrained programming (CCP) formulation of the problem with stochastic demand and travel time parameters, under mild assumptions on the distribution of the stochastic parameters, and relates it to a robust optimization approach. We construct a stochastic carpooling model that considers the influence of stochastic travel times, formulated as an integer multiple-commodity network flow problem. Since real problem sizes can be large, it can be difficult to find optimal solutions within a reasonable period of time, so a solution algorithm using a tabu heuristic approach is developed to solve the model.
Sustainable Construction With Foam Concrete As A Green Building Material (Editor IJMTER)
A green building is an environmentally sustainable building, designed, constructed, and operated to minimise its total environmental impact. Carbon dioxide (CO2) is the primary greenhouse gas emitted through human activities, and it is claimed that 5% of the world's carbon dioxide emissions are attributed to the cement industry, cement being the vital constituent of concrete. Given this significant contribution to environmental pollution, there is a need for an optimal solution that still satisfies civil construction needs. Apart from normal concrete bricks and clay bricks, foam concrete is an innovative technology for sustainable building and civil construction that fulfills the criteria of a green material. This paper concludes that foam concrete can be an effective sustainable material for construction and also focuses on the cost effectiveness of using foam concrete as a building material in place of clay brick or other bricks.
USE OF ICT IN EDUCATION ONLINE COMPUTER BASED TEST (Editor IJMTER)
A good education system is required for the overall prosperity of a nation. Tremendous growth in the education sector has made the administration of education institutions complex. Many studies reveal that the integration of ICT helps to reduce this complexity and enhance the overall administration of education. This study was undertaken to identify the various functional areas in which ICT is deployed for information administration in education institutions, and to find the current extent of ICT usage in all these functional areas pertaining to information administration. The various factors that contribute to these functional areas were identified, and a theoretical model was derived and validated.
Textual Data Partitioning with Relationship and Discriminative Analysis (Editor IJMTER)
Data partitioning methods are used to partition data values by similarity, and similarity measures are used to estimate transaction relationships. Hierarchical clustering models produce tree-structured results, while partitional clustering produces results in a grid format. Text documents are unstructured data values with high-dimensional attributes. Document clustering groups unlabeled text documents into meaningful clusters. Traditional clustering methods require a cluster count (K) for the document grouping process, and clustering accuracy degrades drastically under an unsuitable cluster count.
Textual data elements are divided into two types: discriminative words and non-discriminative words. Only discriminative words are useful for grouping documents; the involvement of non-discriminative words confuses the clustering process and leads to a poor clustering solution. A variational inference algorithm is used to infer the document collection structure and the partition of document words at the same time. A Dirichlet Process Mixture (DPM) model is used to partition documents; the DPM clustering model uses both the data likelihood and the clustering property of the Dirichlet Process (DP). The Dirichlet Process Mixture Model for Feature Partition (DPMFP) is used to discover the latent cluster structure based on the DPM model, and DPMFP clustering is performed without requiring the number of clusters as input.
Document labels are used to guide the discriminative word identification process. Concept relationships are analyzed with ontology support, and a semantic weight model is used for document similarity analysis. The system improves scalability with the support of labels and concept relations for the dimensionality reduction process.
Testing of Matrices Multiplication Methods on Different Processors (Editor IJMTER)
There are many algorithms for matrix multiplication. The classical algorithm has complexity O(n^3), though further research has shown that this complexity can be decreased. This paper focuses on matrix multiplication algorithms and their complexity.
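As a generic textbook sketch of the classical algorithm the abstract refers to (not code from the paper), the triple-loop multiply below makes the O(n^3) cost visible in its three nested loops:

```python
# Naive O(n^3) matrix multiplication: for n x n matrices, the three nested
# loops perform n * n * n multiply-add operations.
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "inner dimensions must match"
    C = [[0] * p for _ in range(n)]
    for i in range(n):          # rows of A
        for j in range(p):      # columns of B
            for k in range(m):  # inner dimension
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # -> [[19, 22], [43, 50]]
```

Algorithms such as Strassen's, with complexity about O(n^2.807), are the kind of improvement "further research" refers to.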
Malware is a worldwide pandemic. It is designed to damage computer systems without the knowledge of the system's owner. Even software from reputable vendors can contain malicious code that affects the system or leaks information to remote servers. Malware includes computer viruses, spyware, dishonest adware, rootkits, Trojans, dialers, and more. Malware detectors are the primary tools in the defense against malware, and the quality of such a detector is determined by the techniques it uses. It is therefore imperative that we study malware detection techniques and understand their strengths and limitations. This survey examines different types of malware and malware detection methods.
SURVEY OF TRUST BASED BLUETOOTH AUTHENTICATION FOR MOBILE DEVICE (Editor IJMTER)
Practical requirements for securely demonstrating identities between two handheld devices are an important concern, as an adversary can mount a man-in-the-middle (MITM) attack to intrude on the protocol. Protocols that employ secret keys require the devices to share private information in advance, which is not feasible in this scenario. Apart from insecurely typing passwords into handheld devices or comparing long hexadecimal keys displayed on the devices' screens, many other human-verifiable protocols have been proposed in the literature to solve the problem. Unfortunately, most of these schemes do not scale to more users: even when only three entities attempt to agree on a session key, these protocols need to be rerun three times. The existing method therefore presents a bipartite and a tripartite authentication protocol using a temporary confidential channel, and further extends the system into a transitive authentication protocol that allows multiple handheld devices to establish a conference key securely and efficiently. However, this method detects only outsider attacks and does not consider insider attacks. The proposed method therefore introduces a trust-score-based scheme that computes trust values for the nodes and provides security. The computed trust score has a positive influence on the confidence with which an entity conducts transactions with a node. The behavior of each node in the network is monitored periodically and its trust value updated, so that, depending on a node's behavior in the network, a trust relation is established between two nodes.
Glaucoma is a chronic eye disease that can damage the optic nerve. According to the WHO it is the second leading cause of blindness, and it is predicted to affect around 80 million people by 2020. Development of the disease leads to loss of vision, which occurs gradually over a long period of time. Because symptoms only appear when the disease is quite advanced, glaucoma is called the silent thief of sight. Glaucoma cannot be cured, but its development can be slowed by treatment; therefore, detecting glaucoma in time is critical. However, many glaucoma patients are unaware of the disease until it has reached an advanced stage. In this paper, some manual and automatic methods for detecting glaucoma are discussed. Manual analysis of the eye is time consuming, and the accuracy of the parameter measurements varies between clinicians. To overcome these problems with manual analysis, the objective of this survey is to introduce a method to automatically analyze ultrasound images of the eye; automatic analysis of this disease is much more effective than manual analysis.
Survey: Multipath routing for Wireless Sensor Network (Editor IJMTER)
Reliability plays a vital role in some applications of wireless sensor networks, and multipath routing is one way to increase the probability of reliable delivery; moreover, energy consumption is a constraint. In this paper, we provide a survey of the state of the art in proposed multipath routing algorithms for wireless sensor networks. We study each design, analyze its trade-offs, and give an overview of several existing algorithms.
Step up DC-DC Impedance source network based PMDC Motor Drive (Editor IJMTER)
This paper is devoted to a quasi-Z-source network based DC drive. The cascaded (two-stage) quasi-Z-source network can be derived by adding one diode, one inductor, and two capacitors to the traditional quasi-Z-source inverter (qZSI). The proposed cascaded qZSI inherits all the advantages of the traditional solution (voltage boost and buck functions in a single stage, continuous input current, and improved reliability). Moreover, compared to the conventional qZSI, the proposed solution reduces the shoot-through duty cycle by over 30% at the same voltage boost factor. Theoretical analysis of the two-stage qZSI in the shoot-through and non-shoot-through operating modes is described, and the proposed and traditional qZSI networks are compared. A prototype of the quasi-Z-source network based DC drive was built to verify the theoretical assumptions; the experimental results are presented and analyzed.
SPIRITUAL PERSPECTIVE OF AUROBINDO GHOSH'S PHILOSOPHY IN TODAY'S EDUCATION (Editor IJMTER)
The paper reflects on the spiritual philosophy of Aurobindo Ghosh, which is helpful in today's education. He wrote about spirituality in the 19th century, yet it remains a core and vital part of today's education and is very much essential for today's children. Here I propose an overview of that philosophy. Regenerating those values in today's generation is, at the utmost, a great task for the education system. Developing values and spiritual education in the young is my great motive; in this materialistic world, redefining values among the young is a hard task, but not an impossible one.
Software Quality Analysis Using Mutation Testing Scheme (Editor IJMTER)
Software test coverage is used to measure safety assurance; here, safety-critical analysis is carried out for source code written in the Java language. Testing provides a primary means of assuring software in safety-critical systems. To demonstrate, particularly to a certification authority, that sufficient testing has been performed, it is necessary to achieve the test coverage levels recommended or mandated by safety standards and industry guidelines. Mutation testing provides an alternative or complementary method of measuring test sufficiency, but it has not been widely adopted in the safety-critical industry. The system provides an empirical evaluation of the application of mutation testing to airborne software systems that have already satisfied the coverage requirements for certification.
The system applies mutation testing to safety-critical software developed using high-integrity subsets of C and Ada, identifies the most effective mutant types, and analyzes the root causes of failures in test cases. Mutation testing can be effective where traditional structural coverage analysis and manual peer review have failed. The results also show that several testing issues have origins beyond the test activity, which suggests improvements to the requirements definition and coding process. The system also examines the relationship between program characteristics and mutant survival, and considers how program size can provide a means of targeting the test areas most likely to harbor dormant faults. Industry feedback is also provided, particularly on how mutation testing can be integrated into a typical verification life cycle of airborne software. The system also covers the safety and criticality levels of Java source code.
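To make the idea concrete, here is a minimal, hypothetical illustration of mutation testing (the surveyed work targets Java, C, and Ada; the `clamp` function and both test suites below are invented for this sketch). A mutant that survives a weak suite reveals a coverage gap, while a stronger suite kills it:

```python
# Original implementation: keep x within [lo, hi].
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

# Mutant: the upper-bound branch is mutated to return lo instead of hi.
def clamp_mutant(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return lo  # seeded fault
    return x

def weak_suite(f):
    """Passes without ever exercising the upper bound."""
    return f(5, 0, 10) == 5 and f(-1, 0, 10) == 0

def strong_suite(f):
    """Adds an upper-boundary case."""
    return weak_suite(f) and f(99, 0, 10) == 10

print(weak_suite(clamp_mutant))    # True  -> mutant survives: coverage gap found
print(strong_suite(clamp_mutant))  # False -> mutant killed: test is sufficient here
```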
Software Defect Prediction Using Local and Global Analysis (Editor IJMTER)
Software defect factors are used to measure the quality of software, and software effort estimation measures the effort required for the development process. Defect factors have an impact on software development effort, and development cost factors are also decided with reference to the defect and effort factors. Software defects are predicted with reference to module information, and module link information is used in the effort estimation process.
Data mining techniques are used in the software analysis process: clustering techniques group properties, and rule mining methods learn rules from the clustered data values. The WHERE clustering scheme and the WHICH rule mining scheme are used in the defect prediction and effort estimation process, and the system uses module information for both.
The proposed system is designed to improve the defect prediction and effort estimation process. A Single Objective Genetic Algorithm (SOGA) is used in the clustering process, and the rule learning operations are carried out using the Apriori algorithm. The system improves cluster accuracy levels, as well as defect prediction and effort estimation accuracy. The system is developed using the Java language and an Oracle relational database environment.
Software Cost Estimation Using Clustering and Ranking Scheme (Editor IJMTER)
Software cost estimation is an important task in the software design and development process. Planning and budgeting tasks are carried out with reference to the software cost values. A variety of software properties are used in the cost estimation process, including hardware, product, technology, and methodology factors, and the quality of a software cost estimate is measured with reference to its accuracy level.
Software cost estimation is carried out using three types of techniques: regression-based models, analogy-based models, and machine learning models. Each model has a set of techniques for the cost estimation process; eleven cost estimation techniques under these three categories are used in the system. The Attribute Relational File Format (ARFF) is used to maintain the software product property values, and the ARFF file is the main input to the system.
The proposed system is designed to perform clustering and ranking of software cost estimation methods. A non-overlapping clustering technique is enhanced with an optimal centroid estimation mechanism. The system improves the accuracy of the clustering and ranking process and produces efficient ranking results for software cost estimation methods.
Hybrid optimization of pumped hydro system and solar - Engr. Abdul-Azeez.pdf (fxintegritypublishin)
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL), a government-owned company of Bangladesh Chemical Industries Corporation under the Ministry of Industries.
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx (R&R Consult)
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
Sachpazis: Terzaghi Bearing Capacity Estimation in simple terms with Calculati... (Dr. Costas Sachpazis)
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. This theory provides a method to calculate the ultimate bearing capacity of soil, which is the maximum load per unit area that the soil can support without undergoing shear failure. The Calculation HTML Code included.
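For reference, Terzaghi's ultimate bearing capacity for a shallow strip footing under general shear failure is commonly written as follows (a standard statement of the theory, not taken from the slides themselves):

```latex
q_{ult} = c\,N_c + \gamma D_f\,N_q + \tfrac{1}{2}\,\gamma B\,N_\gamma
```

where $c$ is the soil cohesion, $\gamma$ the unit weight of the soil, $D_f$ the footing depth, $B$ the footing width, and $N_c$, $N_q$, $N_\gamma$ are dimensionless bearing capacity factors depending on the soil friction angle $\phi$.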
Hierarchical Digital Twin of a Naval Power SystemKerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two types of water scarcity: physical water scarcity and economic water scarcity.
Scientific Journal Impact Factor (SJIF): 1.711
International Journal of Modern Trends in Engineering
and Research
www.ijmter.com
@IJMTER-2014, All rights Reserved 205
e-ISSN: 2349-9745
p-ISSN: 2393-8161
Survey of Object Oriented Database
Pratiksha P. Gaurkhede¹, Prof. P. J. Pursani²
¹CSE Dept, HVPM COET, Amravati
²Assistant Professor, CSE Dept, HVPM COET, Amravati
Abstract- The technology of object oriented databases was introduced to system developers in the late 1980s. Object DBMSs add database functionality to object programming languages. A major benefit of this approach is the unification of application and database development into a seamless data model and language environment. As a result, applications require less code, use more natural data modeling, and code bases are easier to maintain.
I. Introduction
1.1. Database
We are all familiar with concepts like records and fields, so a stack of index cards is perhaps our mental image of a database. Thinking of a linear file of homogeneous records as the archetype for a database, however, is as reductive as thinking of a skateboard as the archetype for a roadworthy vehicle. A flat file is only a very restricted form of database.
A database is an organized collection of related data held in a computer or a data bank, designed to be accessible in various ways.
The data within a database is structured to model real-world structures and hierarchies, enabling conceptually convenient data storage, processing and retrieval mechanisms. Clients (services or applications) interact with databases through queries (remote or otherwise) to Create, Retrieve, Update and Delete (CRUD) data. This process is facilitated through a Database Management System (DBMS).
1.2. Relational databases
Relational databases can be accessed by means of object oriented programming languages. However, that does not turn a relational database into an object oriented one. In the relational database model, data are logically organized in two-dimensional tables, and each individual fact or type of information is stored in its own table. The relational model was developed using a branch of mathematics called set theory, in which a two-dimensional collection of information is called a relation. A relational database management system allows users to query the tables to obtain information from one or more tables in a very flexible way. The relational database is attractive from a user's standpoint because end users often think of the data they need as a table, and the capability of a relational database management system to handle complex queries is important. Although the relational database is considered a dramatic improvement over the hierarchical and network models, it has two disadvantages. First, a relational database requires much more computer memory and processing time than the earlier models; increases in computer processing speeds, as well as a steady decrease in hardware costs, have reduced the impact of this disadvantage. Second, the relational database allows only text and numerical information to be stored; it does not allow the inclusion of complex object types such as graphics, video, audio, or geographic information. The desire to include these complex objects in databases led to the development of object oriented databases.
1.3 Object-Oriented Databases:
An object oriented database is a database that subscribes to a model in which information is represented as objects. Object oriented databases are a niche offering in the database management system field and are not as successful or as well known as mainstream database engines. Object oriented database systems enable direct access to objects defined in the programming language in question, and the storage of such objects in the database without conversion. It is precisely this that is not possible with relational database systems, in which everything must be structured in tables. Both simple and complex objects can be stored in an object oriented database model, and other types of data can be stored as well. Object oriented databases include abstract data types that allow users to define the characteristics of the data to be stored when developing an application. This overcomes a limitation of relational databases, which restrict the types of data that can be stored in table columns. Instead of tables, an object oriented database stores the data in objects. An object can store attributes as well as instructions for actions that can be performed on the object or its attributes; these instructions are called encapsulated methods. Objects can be placed in a hierarchy and inherit attributes from objects higher in the hierarchy. Although many researchers have argued that object oriented databases are superior to relational databases, most organizations still use relational databases.
II. History:
Computerized databases evolved with DBMSs in the 1960s, when the availability of disks and drums provided an easy alternative for maintaining large amounts of diverse information. In the 1970s the main objective of database technology was to make the data independent of the logic of application programs, enabling concurrent access by different application programs. The first generations of databases were navigational: applications accessed data through record pointers, moving from one record to another. This was the precursor to IBM's hierarchical model (the IMS system) and the network model. These were followed by the relational model, which placed the emphasis on content rather than links for data retrieval; this kind of database remains the most widely used to date. Relational models were limiting in the kinds of data that could be held, the rigidity of the structure, and the lack of support for new data types such as graphics, XML, 2D and 3D data. In the 1980s, with the advent of object oriented methodologies and languages, the integration of database capabilities with object oriented programming languages provided a unified programming environment. This led to the development of OODBs and OODBMSs, where objects are stored in databases rather than data such as integers, strings or real numbers.
III. Features of Object Oriented Databases
3.1 Object identity
Every instance in the database has a unique object identifier (OID), a property that distinguishes the object from all other objects and remains constant for the lifetime of the object. In object-oriented systems, an object has an existence (identity) independent of its value. Object oriented database systems integrate the object identity in the database with the identity of objects in memory. If you store an object, the system knows whether it corresponds to an object in the database, and when you retrieve an object, it knows whether that object has already been loaded into program memory. There is no need for the programmer to maintain the relationship between database objects and objects in memory.
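The distinction between identity and value can be sketched in Python, where the built-in `is` comparison plays a role loosely analogous to comparing OIDs (the `Point` class is an illustrative assumption, not from the paper):

```python
# A minimal sketch: two objects can be equal by value yet remain
# distinct by identity, the property an OODBMS tracks via OIDs.

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        # Value equality: compare state, not identity.
        return (self.x, self.y) == (other.x, other.y)

p = Point(1, 2)
q = Point(1, 2)

print(p == q)   # True  - same value
print(p is q)   # False - different identities
```

An OODBMS preserves this distinction on disk: storing `p` and `q` yields two database objects, even though their state is identical.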
3.2 Encapsulation
Encapsulation means that code and data are packaged together to form objects, and that the implementation of these objects can be hidden from the rest of the program. Object-oriented models enforce encapsulation and information hiding: the state of objects can be manipulated and read only by invoking operations that are specified within the type definition and made visible through the public clause. In an object-oriented database system, encapsulation is achieved if only the operations are visible to the programmer while both the data and the implementation are hidden. There are two views of encapsulation: the programming language view (the original view, since the concept originated there) and the database adaptation of that view. The idea of encapsulation in programming languages comes from abstract data types. In this view, an object has an interface part and an implementation part. The interface part is the specification of the set of operations that can be performed on the object; it is the only visible part of the object. The implementation part has a data part and a procedural part. The data part is the representation or state of the object, and the procedural part describes, in some programming language, the implementation of each operation. The database translation of the principle is that an object encapsulates both program and data. In the database world, it is not clear whether the structural part of the type is part of the interface (this depends on the system), while in the programming language world, the data structure is clearly part of the implementation and not of the interface.
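The interface/implementation split described above can be sketched in Python (the `Account` class and its operations are illustrative assumptions):

```python
# Sketch: state is hidden behind operations; clients only use the
# interface part (deposit, balance), never the data part directly.

class Account:
    def __init__(self):
        self._balance = 0   # implementation part: hidden state (by convention)

    # Interface part: the only operations clients should invoke.
    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self._balance += amount

    def balance(self):
        return self._balance

a = Account()
a.deposit(50)
print(a.balance())   # 50
```

In an encapsulating OODBMS the same principle applies to stored objects: the database exposes the operations, not the raw representation.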
3.3. Inheritance
Inheritance is a powerful mechanism which lets a new class be built using code and data declared in other classes. It allows common features of a set of classes to be expressed in a base class and shared by the code of related classes. Classes which are derived from other classes can be stored with one operation; the database system must know the class hierarchy and manage object storage accordingly. Relational database systems cannot store functions, so you must ensure that the class hierarchy is restored when you read objects from the database. This usually requires type information to be stored in the database to enable your application to build objects appropriately before loading their data.
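A minimal sketch of the base-class/derived-class reuse described above (class names are illustrative assumptions):

```python
# Common features live in a base class; a derived class reuses its
# code and data instead of redeclaring them.

class Document:
    def __init__(self, title):
        self.title = title

    def describe(self):
        return f"Document: {self.title}"

class Report(Document):                  # inherits Document's code and data
    def __init__(self, title, author):
        super().__init__(title)          # reuse the base-class initializer
        self.author = author

r = Report("Q3 Results", "Ada")
print(r.describe())   # inherited method works on the derived class
```

An OODBMS that knows this hierarchy can store a `Report` in one operation; an RDBMS-backed application would also have to record the type information needed to rebuild the right class on retrieval.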
3.4. Polymorphism
Relational databases offer no support for polymorphism. We must be careful to store enough information about our data types to enable objects to be reconstructed properly before loading their data. Suppose you have a variety of Items, but the rules for computing their total price differ. For instance, medical items might be exempt from sales tax in an American database system, or a German database system might compute value added tax differently for different items. You still want all Items to have a function called TotalPrice(), but instead of one TotalPrice() function we now want different functions depending on the class.
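The TotalPrice() scenario can be sketched directly: each class supplies its own pricing rule, and callers invoke the same method name on any Item (the 8% tax rate and class names are illustrative assumptions):

```python
# Polymorphism: one method name, class-specific behavior.

class Item:
    def __init__(self, price):
        self.price = price

    def total_price(self):
        return round(self.price * 1.08, 2)   # flat sales tax by default

class MedicalItem(Item):
    def total_price(self):
        return self.price                    # tax-exempt in this sketch

items = [Item(100.0), MedicalItem(100.0)]
print([i.total_price() for i in items])      # [108.0, 100.0]
```

An OODBMS can store such objects and dispatch to the right total_price() on retrieval; with an RDBMS the application must store and check type tags to choose the rule itself.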
3.5. Combination of object oriented programming and database technology:
Object oriented database technology combines object oriented programming with database technology to provide an integrated application development system. There are many advantages to including the definition of operations with the definition of data. First, the defined operations apply universally and do not depend on the specific database application running at the moment. Second, the data types can be extended to support complex data.
3.6. Improves productivity: Inheritance allows programmers to develop solutions to complex problems by defining new objects in terms of previously defined objects. Polymorphism and dynamic binding allow programmers to define operations for one object and then share the specification of the operation with other objects. These objects can further extend this operation to provide behaviors that are unique to those objects.
3.7 Data access:
Object oriented databases represent relationships explicitly, supporting both navigational and associative access to information.
3.8 Extensibility:
OODBMSs allow new data types to be built from existing data types. The ability to factor out common properties of several classes and form them into a superclass that can be shared with subclasses can greatly reduce redundancy within a system, and is regarded as one of the main advantages of object orientation. Further, the reusability of classes promotes faster development and easier maintenance of the database and its applications.
3.9 Multiple large varieties of data types:
Unlike traditional databases (such as hierarchical, network or relational), object oriented databases are capable of storing different types of data, for example pictures, voice and video, in addition to text and numbers.
IV. Usage of object oriented databases:
Object oriented databases are rarely used today, but there are cases where they fit well. The need for an object oriented database arises where the data is very complex, in the sense that it does not consist of primitive data types, or where there is a need for storing behavior; for example, a company that has developed several software agents and wants to reuse them in different ways. Object databases should be used when there is complex data and/or complex data relationships, including many-to-many object relationships. They should not be used when there would be few join tables and there are large volumes of simple transactional data.
V. Advantages
5.1 Composite Objects and Relationships:
Objects in an OODBMS can store an arbitrary number of atomic types as well as other objects. It is thus possible to have a large class which holds many medium-sized classes which themselves hold many smaller classes, ad infinitum. In a relational database this has to be done either by having one huge table with lots of null fields or via a number of smaller, normalized tables which are linked via foreign keys. Having lots of smaller tables is still a problem, since a join has to be performed every time one wants to query data based on the "has-a" relationship between the entities. An object is also a better model of a real-world entity than relational tuples where complex objects are concerned. Because an OODBMS is better suited to handling complex, interrelated data than an RDBMS, an OODBMS can outperform an RDBMS by ten to a thousand times depending on the complexity of the data being handled.
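The composite "has-a" structure described above can be sketched as a small object graph that an OODBMS could store directly; in an RDBMS the same shape would need three tables linked by foreign keys (all class names are illustrative assumptions):

```python
# A composite object graph: a Car has-a Engine and has-a list of Wheels.

class Wheel:
    def __init__(self, position):
        self.position = position      # e.g. "FL" for front-left

class Engine:
    def __init__(self, cylinders):
        self.cylinders = cylinders

class Car:
    def __init__(self):
        self.engine = Engine(cylinders=4)
        self.wheels = [Wheel(p) for p in ("FL", "FR", "RL", "RR")]

c = Car()
print(len(c.wheels), c.engine.cylinders)   # 4 4
```

Navigating `c.engine` or `c.wheels` follows stored references directly, with no join against a wheels table.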
5.2. Class Hierarchy:
Data in the real world usually has hierarchical characteristics. The ever-popular Employee example used in most RDBMS texts is easier to describe in an OODBMS than in an RDBMS. An Employee can be a Manager or not; in an RDBMS this is usually modeled with a type identifier field, or by creating another table which uses foreign keys to indicate the relationship between Managers and Employees. In an OODBMS, the Employee class is simply a parent class of the Manager class.
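The Employee/Manager hierarchy from the text, sketched directly; no type-identifier column or extra join table is needed (the `reports` attribute is an illustrative assumption):

```python
# The "is-a" relationship lives in the class hierarchy itself.

class Employee:
    def __init__(self, name):
        self.name = name

class Manager(Employee):              # a Manager simply is-an Employee
    def __init__(self, name, reports=None):
        super().__init__(name)
        self.reports = reports or []  # Employees managed by this Manager

m = Manager("Ada", reports=[Employee("Bob"), Employee("Eve")])
print(isinstance(m, Employee), len(m.reports))   # True 2
```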
5.3. No Impedance Mismatch: In a typical application that uses an object oriented
programming language and an RDBMS, a significant amount of time is usually spent mapping
tables to objects and back. There are also various problems that can occur when the atomic types
in the database do not map cleanly to the atomic types in the programming language and vice
versa. This "impedance mismatch" is completely avoided when using an OODBMS.
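The mapping work described above can be sketched as the hand-written tuple-to-object code an RDBMS-backed application typically needs (the row layout and `Person` class are illustrative assumptions):

```python
# With an RDBMS, each fetched row must be mapped to an object by hand.

class Person:
    def __init__(self, pid, name, city):
        self.pid, self.name, self.city = pid, name, city

row = (42, "Ada", "London")   # a tuple as returned by a SQL query
p = Person(*row)              # mapping code an OODBMS avoids:
                              # the stored value already *is* an object
print(p.name)   # Ada
```

Multiply this pattern across every table and every write path and the mapping layer becomes a significant share of the application; storing objects directly removes it.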
5.4. No Primary Keys: The user of an RDBMS has to worry about uniquely identifying tuples
by their values and making sure that no two tuples have the same primary key values to avoid
error conditions. In an OODBMS, the unique identification of objects is done behind the scenes
via OIDs and is completely invisible to the user. Thus there is no limitation on the values that
can be stored in an object.
5.5. One Data Model: A data model should typically model entities and their relationships, constraints, and the operations that change the state of the data in the system. With an RDBMS it is not possible to model the dynamic operations or rules that change the state of the data, because this is beyond the scope of the database. Thus applications that use RDBMSs usually have an Entity Relationship diagram to model the static parts of the system and a separate model for the operations and behaviors of entities in the application. With an OODBMS there is no disconnect between the database model and the application model, because the entities are just other objects in the system. An entire application can thus be comprehensively modeled in one UML diagram.
5.6. Support for long transactions: Current relational DBMSs enforce serializability on concurrent transactions to maintain database consistency. OODBMSs support the types of long-duration transaction that are common in many database applications.
5.7. Better support for applications like software engineering or computer aided design (CAD), and arguably better performance, though benchmarks have mainly been applied in areas like engineering support, to which OODBMSs are better suited.
VI. Recent work on OODBs:
The following organizations currently use an OODBMS to handle mission-critical data:
1. The Chicago Stock Exchange manages stock trades via a Versant ODBMS.
2. Radio Computing Services is the world's largest radio software company. Its product, Selector, automates the needs of the entire radio station, from the music library to the newsroom to the sales department. RCS uses the POET ODBMS because it enabled RCS to integrate and organize various elements, regardless of data type, in a single program environment.
3. The Objectivity/DB ODBMS is used as a data repository for system component naming, satellite mission planning data, and orbital management data deployed by Motorola in the Iridium System.
4. The ObjectStore ODBMS is used in Southwest Airlines' Home Gate to provide self-service to travelers through the Internet.
5. Ajou University Medical Center in South Korea uses InterSystems' Caché ODBMS to support all hospital functions, including mission-critical departments such as pathology, laboratory, blood bank, pharmacy, and X-ray.
6. The Large Hadron Collider at CERN in Switzerland uses an Objectivity/DB. The database is currently being tested in the hundreds of terabytes at data rates up to 35 MB/second.
7. As of November 2000, the Stanford Linear Accelerator Center (SLAC) stored 169 terabytes of production data using Objectivity/DB. The production data is distributed across several hundred processing nodes and over 30 on-line servers.
VII. Literature Review
1. A Gentle Introduction to Relational and Object Oriented Databases [1]
This report is an exact reproduction of the author's 1995 material. It consists of three parts: a talk on relational databases, a talk on object oriented databases, and a commented bibliography on object oriented databases. The talks are intended as one-hour introductions for an audience of computer professionals, assumed to be technically competent but not familiar with the topics discussed.
2. Object Oriented Database [2]
This presentation gives a basic introduction to the concepts governing OODBs and looks at details including the architecture and the query languages used. A contrast between OODBs and RDBs is also presented. The reader gains insight into databases, data models, OODB architecture, Object Query Language, and OODBMSs.
3. Advantages of object oriented over relational database on real life application [3]
In this paper, the author introduces the advantages of the object oriented database.
4. Object oriented databases: a natural part of object oriented software development [4]
This presentation on Object Oriented Databases gives a basic introduction to the concepts governing OODBs and looks at details including the architecture and the query languages used. A contrast between OODBs and RDBs is also presented.
VIII. Conclusion
Nowadays the trend is toward relational databases, but as we move from traditional commerce to e-commerce there is a need to consider multimedia data, which is frequently found on the Internet and is not supported by relational databases. Thus there is a need for object oriented databases, which provide an alternative to relational databases for the representation, storage and access of the non-traditional data forms increasingly found in advanced applications like Geographic Information Systems and CAD/CAM. An object oriented database can handle large collections of complex data, including user-defined data types, and supports inheritance, polymorphism, etc.
References
[1] Hakan Butuner, Industrial Management and Engineering Co., Istanbul, Turkey, Vol. 5, 2012.
[2] Arjun Gopalkrishna and Bhavya Udayashankar, 2007.
[3] Won Kim, Microelectronics and Computer Technology Corporation, 3500 West Balcones Center Drive, ACM, 1990.
[4] IEEE POSIX 1003.4 Working Group, POSIX Std 1003.1b-1993 (P1003.4) Real-Time Extension, IEEE, 1994.
[5] Krupp, P., et al., Real-Time Processing and CORBA, presented at the OOPSLA '94 Conference Workshop on CORBA, Portland, OR, 1994.
[6] Kappel, G., Rausch-Schott, S., Retschitzegger, W., Bottom-Up Design of Active Object-Oriented Databases, Communications of the ACM, 2001.
[7] Fayad, Mohamed E., Schmidt, Douglas C., Object-Oriented Application Frameworks, Communications of the ACM, 1997; Loomis, Mary E. S., ODBMS versus Relational, JOOP Focus on ODBMS, 1992.
[8] The Committee for Advanced DBMS Function, Third Generation Database System Manifesto, Computer Standards and Interfaces 13 (1991), pages 41-54, North Holland; also appears in SIGMOD Record 19:3, September 1990.
[9] C. J. Date, An Introduction to Database Systems, Vol. 1, 6th ed., Addison-Wesley, 1995.
[10] W. Kim and F. Lochovsky (eds.), Object-Oriented Concepts, Databases, and Applications, Addison-Wesley, Reading, MA, 1989.
[11] Jeffrey D. Ullman, Principles of Database and Knowledge-Base Systems, Vol. 2, Computer Science Press, 1989 (second volume of a two-volume textbook).
[12] Kofler, Michael, Ph.D. Thesis, R-trees for Visualizing and Organizing Large 3D GIS Databases, Technische Universität Graz, July 1998, www.icg.tugraz.ac.at/kofler/thesis.
[13] Kuzniarz, Ludwik and Staron, Miroslaw, Customisation of Unified Modeling Language for Logical Database Modeling, Research report no. 2002:12, Department of Software Engineering and Computer Science, Blekinge Institute of Technology.