This document discusses an integrated approach to ontology development methodology and provides a case study using a shopping mall domain. It begins by reviewing existing ontology development methodologies and identifying their pitfalls. An integrated methodology is then proposed which aims to reduce these pitfalls. The key steps in the proposed methodology are: 1) capturing motivating user scenarios or keywords, 2) generating formal/informal questions and answers from the scenarios, 3) extracting terms and constraints, and 4) building the ontology using a top-down approach. The methodology is applied to developing an ontology for a shopping mall domain to provide multilingual information to visitors.
Ontology matching finds correspondences between similar entities of different ontologies. Two ontologies may be similar in several aspects, such as structure and semantics. Most ontology matching systems therefore integrate multiple matchers to capture all the similarities two ontologies may share, which raises a central problem: how to aggregate the different similarity scores.
Some matching systems use experimentally tuned weights to aggregate the similarities produced by the different matchers, while others use machine learning and optimization algorithms to find optimal weights to assign to each matcher. However, both approaches have their own deficiencies.
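A minimal sketch of the weight-based aggregation described above: each matcher scores an entity pair, and the scores are merged with a weighted average. The matcher names and weight values are illustrative, not taken from any particular system.

```python
# Hypothetical sketch: aggregating similarities from multiple matchers
# with fixed, experimentally chosen weights.

def aggregate(similarities, weights):
    """Combine per-matcher similarity scores for one entity pair
    into a single score via a weighted average."""
    total = sum(weights.values())
    return sum(weights[m] * similarities[m] for m in similarities) / total

# Scores produced by three hypothetical matchers for one entity pair:
# string-based, structural, and semantic.
scores = {"string": 0.10, "structure": 0.80, "semantic": 0.75}
weights = {"string": 0.2, "structure": 0.4, "semantic": 0.4}

print(round(aggregate(scores, weights), 3))
```

The learning-based alternative mentioned above would replace the fixed `weights` dictionary with values fitted on reference alignments.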
The growing potential of ontologies to reduce human intervention has a wide range of applications. This paper identifies requirements for an ontology development platform aimed at an artificially intelligent web. To facilitate this process, RDF and OWL have been developed as standard formats for sharing and integrating data and knowledge, the knowledge taking the form of rich conceptual schemas called ontologies. Based on this framework, an architectural paradigm is put forward for ontology engineering and the development of ontology applications, together with a development portal designed to support ontology engineering, content authoring and application development, with a view to maximal scalability in the size and complexity of semantic knowledge and flexible reuse of ontology models and ontology application processes in a distributed, collaborative engineering environment.
A Semi-Automatic Ontology Extension Method for Semantic Web Services (IDES Editor)
This paper provides a novel semi-automatic ontology extension method for Semantic Web Services (SWS). This is significant since the ontology extension methods existing in the literature mostly deal with the semantic description of static Web resources such as text documents. Hence, there is a need for methods that can serve dynamic Web resources such as SWS. The method developed in this paper avoids redundancy and preserves consistency so as to assure the high quality of the resulting shared ontologies.
A new approach based on the detection of opinion by SentiWordNet for automati... (csandit)
In this paper, we propose a new approach based on the detection of opinion with SentiWordNet for producing text summaries, using a score-extraction technique adapted to opinion detection. The texts are decomposed into sentences and then represented by a vector of opinion scores for those sentences. The summary is produced by eliminating the sentences whose opinion differs from that of the original text, the difference being expressed by an opinion threshold. The following hypothesis: "textual units that do not share the opinion of the text are ideas used for development or comparison, and their absence does not affect the semantics of the abstract" has been verified with the chi-square (χ²) statistical measure, which we used to calculate the dependence between a textual unit and the text. Finally, we found an opinion threshold interval that generates the optimal assessments.
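The threshold-based elimination step can be sketched as follows. The tiny opinion lexicon is hypothetical, standing in for per-word SentiWordNet polarities (positive minus negative score); the threshold value is likewise illustrative, not the interval found in the paper.

```python
# Illustrative sketch of the threshold-based opinion summarizer.
LEXICON = {"good": 0.6, "great": 0.8, "bad": -0.7, "poor": -0.6}

def opinion_score(sentence):
    """Mean polarity of the words in a sentence (0 for unknown words)."""
    words = sentence.lower().split()
    return sum(LEXICON.get(w, 0.0) for w in words) / len(words)

def summarize(sentences, threshold=0.1):
    """Keep the sentences whose opinion is close to the text's overall opinion."""
    text_score = sum(opinion_score(s) for s in sentences) / len(sentences)
    return [s for s in sentences
            if abs(opinion_score(s) - text_score) <= threshold]

sentences = ["the film is good", "the plot is great", "the sound is bad"]
print(summarize(sentences))
```

Sentences whose score deviates from the whole-text score by more than the threshold are dropped, mirroring the elimination rule described above.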
New Generation Routing Protocol over Mobile Ad Hoc Wireless Networks based on... (ijasuc)
There is a vast amount of research literature available on route finding and link establishment in MANET protocols based on various concepts such as "pro-active", "reactive", "power awareness", "cross-layering", etc. Most of these techniques are rather restrictive, taking into account only a few of the several aspects that go into effective route establishment. When we look at practical implementations of MANETs, we have to take the various factors into account in totality, not in isolation. The several factors that decide and influence routing have to be considered as a whole in the difficult task of finding the best solution to route finding and optimization. The inputs to the system are manifold and apparently unrelated, and most of the parameters are imprecise, or non-crisp, in nature. This uncertainty and imprecision suggest that intelligent routing techniques are essential for evolving robust and dependable solutions to route finding. The obvious way to achieve this is the deployment of soft computing techniques such as neural networks, fuzzy logic and genetic algorithms. Neural networks help solve the complex problem of transforming inputs to outputs without a priori knowledge of the relationship between them. Fuzzy logic helps deal with imprecise and ill-conditioned data. Genetic algorithms help select the best possible solution from the solution space in an optimal sense. The paper presented here seeks to explore new horizons in this direction. The results of our experimentation have been very satisfactory, and we have largely achieved the goal of optimal route finding. There is, of course, considerable room for further refinement.
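To make the fuzzy-logic ingredient concrete, here is a minimal sketch of fuzzy route evaluation over imprecise link parameters. The membership functions, the two input parameters and the single rule are hypothetical illustrations, not the protocol the paper proposes.

```python
# Minimal fuzzy evaluation of a candidate route from imprecise inputs.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def route_quality(delay_ms, battery):
    """Fuzzy score in [0, 1]: low delay AND high battery -> good route."""
    low_delay = tri(delay_ms, -1, 0, 100)      # full membership at 0 ms
    high_batt = tri(battery, 0.0, 1.0, 2.0)    # full membership at 100%
    # Single illustrative rule, using min as the fuzzy AND:
    return min(low_delay, high_batt)
```

A router would compute such a score per candidate route and pick the maximum; a genetic algorithm, as mentioned above, could then search the space of membership-function parameters.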
Sentiment classification aims to detect information such as opinions and explicit or implicit feelings expressed in text. Most existing approaches are able to detect either explicit or implicit expressions of sentiment in text, but only separately. The proposed framework detects both implicit and explicit expressions in meeting transcripts. It classifies positive, negative and neutral words and also identifies the topic of a particular meeting transcript by using fuzzy logic. This paper aims to add some additional features to improve the classification method. The quality of the sentiment classification is improved with the proposed fuzzy logic framework, which includes features such as fuzzy rules and the fuzzy c-means algorithm. The quality of the output is evaluated using parameters such as precision, recall and F-measure; the fuzzy c-means clustering is measured in terms of purity and entropy. The data set was validated using 10-fold cross validation, and a 95% confidence interval was observed between the accuracy values. Finally, the proposed fuzzy logic method produced more than 85% accurate results, with a much lower error rate than existing sentiment classification techniques.
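The fuzzy c-means step mentioned above can be sketched compactly in one dimension. The sentiment scores, initial centers and fuzzifier value are illustrative; the standard FCM update rules (membership by inverse distance ratios, centers by membership-weighted means) are what the example demonstrates.

```python
# Compact fuzzy c-means sketch (1-D, two clusters).

def fcm(xs, centers, m=2.0, iters=20):
    """Return final cluster centers and the membership matrix u[i][j]."""
    for _ in range(iters):
        u = []
        for x in xs:
            d = [abs(x - c) + 1e-9 for c in centers]  # avoid zero distance
            row = [1.0 / sum((d[j] / d[k]) ** (2 / (m - 1))
                             for k in range(len(centers)))
                   for j in range(len(centers))]
            u.append(row)
        # Membership-weighted mean update for each center.
        centers = [sum(u[i][j] ** m * xs[i] for i in range(len(xs))) /
                   sum(u[i][j] ** m for i in range(len(xs)))
                   for j in range(len(centers))]
    return centers, u

# Sentiment scores of documents: negatives near -1, positives near +1.
scores = [-0.9, -0.8, -0.7, 0.7, 0.8, 0.9]
centers, u = fcm(scores, centers=[-0.5, 0.5])
```

Unlike hard k-means, each document keeps a graded membership in both clusters, which is what makes the purity and entropy measures above meaningful.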
Importance of the neutral category in fuzzy clustering of sentiments (ijfls)
Social media is said to have an impact on public discourse and communication in society, and it is increasingly being used in a political context. Social network sites such as Facebook, Twitter and other microblogging services give the public an opportunity to express opinions on issues of interest. Twitter is an ideal platform for users to spread not only information in general but also political opinions, whereas Facebook provides the capability for direct dialogue. Many studies have shown that stakeholders need to collect, monitor, analyze, summarize and visualize these social media views. Some authors have tended to categorize such comments as either positive or negative, ignoring the neutral category. In this paper, we demonstrate the importance of the neutral category in the clustering of sentiments from social media, and we then demonstrate the use of fuzzy clustering for this kind of task.
Association Rule Mining Based Extraction of Semantic Relations Using Markov L... (IJwest)
An ontology is a conceptualization of a domain into a human-understandable but machine-readable format consisting of entities, attributes, relationships and axioms. Ontologies formalize the intensional aspects of a domain, whereas the extensional part is provided by a knowledge base that contains assertions about instances of concepts and relations. With semantic relations it would be possible, for example, to extract the whole family tree of a prominent personality using a resource like Wikipedia. In a way, relations describe the semantic relationships among the entities involved, which is useful for a better understanding of human language. Relations can be identified from the result of concept hierarchy extraction; however, the existing ontology learning process only produces the concept hierarchy and does not produce the semantic relations between concepts. Here, we construct predicates and first-order logic formulas, and find inference and learning weights using a Markov Logic Network. To improve the relations of every input, and the relations between the contents, we propose the concept of ARSRE. This method can find the frequent items between concepts and convert existing lightweight ontologies into formal ones. The experimental results show good extraction of semantic relations compared to the state-of-the-art method.
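The "frequent items between concepts" step is the classic association-rule support computation; a small sketch follows. The transactions here are hypothetical documents listing the ontology concepts they mention, and the support threshold is illustrative.

```python
# Mining frequent concept pairs by support, the first step of
# association rule mining over concept co-occurrences.
from itertools import combinations

def frequent_pairs(transactions, min_support=0.5):
    """Return concept pairs whose support (co-occurrence rate) meets the threshold."""
    n = len(transactions)
    counts = {}
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            counts[pair] = counts.get(pair, 0) + 1
    return {p: c / n for p, c in counts.items() if c / n >= min_support}

docs = [{"car", "engine", "wheel"},
        {"car", "engine"},
        {"car", "wheel"},
        {"engine", "wheel"}]
print(frequent_pairs(docs))
```

Pairs that survive the support threshold become candidate semantic relations, to be weighted and checked by the Markov Logic Network stage described above.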
A survey on phrase structure learning methods for text classification (ijnlc)
Text classification is the task of automatically assigning text to one of a set of predefined categories. The problem has been widely studied in different communities, such as natural language processing, data mining and information retrieval. Text classification is an important constituent of many information management tasks, including topic identification, spam filtering, email routing, language identification, genre classification and readability assessment. The performance of text classification improves notably when phrase patterns are used: phrase patterns help capture non-local behaviour and thus improve the text classification task. Phrase structure extraction is the first step towards phrase pattern identification. In this survey, a detailed study of phrase structure learning methods has been carried out. This will enable future work in several NLP tasks that use syntactic information from phrase structure, such as grammar checking, question answering, information extraction, machine translation and text classification. The paper also provides different levels of classification and a detailed comparison of the phrase structure learning methods.
Taxonomy extraction from automotive natural language requirements using unsup... (ijnlc)
In this paper we present a novel approach to semi-automatically learning concept hierarchies from natural language requirements in the automotive industry. The approach is based on the distributional hypothesis and the special characteristics of domain-specific German compounds. We extract taxonomies by using clustering techniques in combination with general thesauri. Such a taxonomy can support requirements engineering in early stages by providing a common system understanding and an agreed-upon terminology. This work is part of an ontology-driven requirements engineering process that builds on top of the taxonomy. Evaluation shows that this taxonomy extraction approach outperforms common hierarchical clustering techniques.
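The compound-based cue mentioned above can be illustrated simply: in German, the head (final element) of a compound is typically a hypernym of the compound, e.g. "Scheibenbremse" (disc brake) is-a "Bremse" (brake). The suffix matching below is a deliberate simplification of real compound splitting, and the term list is made up.

```python
# Sketch of taxonomy extraction from German compound heads.

def taxonomy_from_compounds(terms):
    """Map each term that acts as a compound head to the compounds it subsumes."""
    taxonomy = {}
    for head in terms:
        children = [t for t in terms
                    if t != head and t.lower().endswith(head.lower())]
        if children:
            taxonomy[head] = sorted(children)
    return taxonomy

terms = ["Bremse", "Scheibenbremse", "Handbremse", "Pedal", "Bremspedal"]
print(taxonomy_from_compounds(terms))
```

In the paper's pipeline, this lexical cue is combined with distributional clustering and thesauri; on its own it only recovers the is-a links that the compounds encode directly.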
Language Combinatorics: A Sentence Pattern Extraction Architecture Based on C... (Waqas Tariq)
A "sentence pattern" in modern Natural Language Processing is often taken to be a contiguous string of words (an n-gram). However, in many branches of linguistics, such as pragmatics and corpus linguistics, it has been noticed that simple n-gram patterns are not sufficient to reveal the full sophistication of grammar patterns. We present a language-independent architecture for extracting from sentences patterns more sophisticated than n-grams. In this architecture, a "sentence pattern" is considered an n-element ordered combination of sentence elements. Experiments showed that the method extracts significantly more frequent patterns than the usual n-gram approach.
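The contrast between contiguous n-grams and n-element ordered combinations can be shown in a few lines (an illustration of the idea, not the paper's implementation):

```python
# Contiguous n-grams vs. order-preserving n-element combinations.
from itertools import combinations

def ngrams(words, n):
    """Contiguous word windows of length n."""
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

def ordered_combinations(words, n):
    """All n-element subsequences that preserve word order,
    including non-contiguous ones (a superset of the n-grams)."""
    return list(combinations(words, n))

sent = "the cat sat down".split()
print(ngrams(sent, 2))                # 3 contiguous bigrams
print(ordered_combinations(sent, 2))  # 6 ordered pairs
```

Because the combinations include non-contiguous pairs such as ("the", "down"), patterns that are interrupted by intervening words can still accumulate frequency, which is why the method finds more frequent patterns than plain n-grams.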
Neural perceptual model to global local vision for the recognition of the log... (ijaia)
This paper defines the Transparent Neural Network (TNN) for the simulation of global-local vision and its application to the segmentation of administrative document images. We have developed and adapted a recognition method that models the contextual effects reported in studies in experimental psychology. We then evaluated and tested the TNN against the multi-layer perceptron (MLP), which has shown its effectiveness in the field of recognition, in order to show that the TNN is clearer for the user and more powerful at recognition. Indeed, the TNN is the only system that makes it possible to recognize both the document and its structure.
TEXT SENTIMENTS FOR FORUMS HOTSPOT DETECTION (ijistjournal)
User-generated content on the web grows rapidly in this emerging information age, and evolutionary changes in technology make use of such information to capture the user's essence, so that only the useful information is exposed to information seekers. Most existing research on text information processing focuses on the factual domain rather than the opinion domain. In this paper we detect online hotspot forums by computing sentiment analysis over the text data available in each forum. The approach analyses the forum text data and computes a value for each word of text. The proposed approach combines K-means clustering and a Support Vector Machine with PSO (SVM-PSO) classification algorithm to group the forums into two clusters, hotspot and non-hotspot forums, within the current time span. The proposed system's accuracy is compared with other classification algorithms such as Naïve Bayes, decision trees and SVM. The experiments show that K-means and SVM-PSO together achieve highly consistent results.
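The two-cluster K-means step can be sketched in one dimension over per-forum sentiment intensities. The scores and initial centers are made up; the update loop is standard Lloyd's algorithm.

```python
# Minimal 1-D K-means: split forums into hotspot / non-hotspot clusters.

def kmeans_1d(xs, centers, iters=10):
    """Lloyd's algorithm on scalars; returns final centers and groups."""
    groups = [[] for _ in centers]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for x in xs:
            j = min(range(len(centers)), key=lambda k: abs(x - centers[k]))
            groups[j].append(x)
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers, groups

# Aggregate sentiment intensity per forum: hotspots score high.
forum_scores = [0.1, 0.2, 0.15, 0.8, 0.9, 0.85]
centers, groups = kmeans_1d(forum_scores, centers=[0.0, 1.0])
```

In the paper's pipeline the cluster labels produced this way would then feed the SVM-PSO classifier; here the clustering alone already separates the high-sentiment forums.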
A DOMAIN INDEPENDENT APPROACH FOR ONTOLOGY SEMANTIC ENRICHMENT (cscpconf)
Automatic ontology enrichment consists of automatically adding new concepts and/or new relations to an initial ontology built manually from basic domain knowledge. Concretely, enrichment means first extracting concepts and relations from textual sources, then putting them in their right places in the initial ontology. The main issue in that process, however, is how to preserve the coherence of the ontology after this operation. For this purpose, we consider the semantic aspect in the enrichment process by using similarity techniques between terms. Contrary to other approaches, our approach is domain independent, and the enrichment process is based on a semantic analysis. Another advantage of our approach is that it takes into account both types of relations, taxonomic and non-taxonomic.
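The similarity-driven placement step can be illustrated as follows: attach a newly extracted term under the most similar existing concept. The Jaccard measure over character trigrams is a crude stand-in for the semantic similarity techniques the paper uses, and the concept names are hypothetical.

```python
# Placing a new term in an ontology by maximum term similarity.

def trigrams(s):
    """Set of character trigrams of a string."""
    s = s.lower()
    return {s[i:i + 3] for i in range(len(s) - 2)}

def jaccard(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def best_placement(new_term, concepts):
    """Return the existing concept most similar to the new term."""
    return max(concepts, key=lambda c: jaccard(new_term, c))

ontology_concepts = ["vehicle", "bicycle", "motorcycle"]
print(best_placement("motorcycling", ontology_concepts))
```

A real enrichment system would add a coherence check after placement, per the discussion above, rather than trusting the maximum-similarity choice blindly.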
Keystone Summer School 2015: Mauro Dragoni, Ontologies For Information Retrieval (Mauro Dragoni)
The presentation provides an overview of what an ontology is and how it can be used for representing information and retrieving data, with a particular focus on the linguistic resources available for supporting this kind of task. It also surveys semantic-based retrieval approaches, highlighting the pros and cons of semantic approaches with respect to classic ones. Use cases are presented and discussed.
Ontology Construction from Text: Challenges and Trends (CSCJournals)
Ontology is one of the most popular representation models used for knowledge representation, sharing and reuse. In light of the importance of ontology, different methodologies for building ontologies have been proposed. Ontology construction is a difficult and time-consuming process, and many efforts have been made to help ontology engineers construct ontologies and to overcome the bottleneck of knowledge acquisition. The aim of this paper is to give a brief overview of ontology learning approaches and to review some ontology extraction systems and tools, followed by a summarizing comparison of them. Some current issues and main trends in ontology construction from texts are also discussed.
Recruitment Based On Ontology with Enhanced Security Features (theijes)
A Comparative Study of Recent Ontology Visualization Tools with a Case of Dia... (IJORCS)
Ontology is a conceptualization of a domain into a machine-readable format. Ontologies are becoming increasingly popular modelling schemas for knowledge management services and applications, and focus on developing tools to graphically visualise ontologies is rising, to aid their assessment and analysis. Graph visualisation helps users browse and comprehend the structure of ontologies, and a number of ontology visualizations have been embedded in ontology management tools. The primary goal of this paper is to analyze recently implemented ontology visualization tools and their contribution to enriching users' cognitive support. This work also presents the preliminary results of an evaluation of three visualization tools to determine the suitability of each method for end-user applications where ontologies are used as browsing aids, with a case of diabetes data.
SEMANTIC INTEGRATION FOR AUTOMATIC ONTOLOGY MAPPING cscpconf
In the last decade, ontologies have played a key technology role for information sharing and agents interoperability in different application domains. In semantic web domain, ontologies are efficiently used toface the great challenge of representing the semantics of data, in order to bring the actual web to its full
power and hence, achieve its objective. However, using ontologies as common and shared vocabularies requires a certain degree of interoperability between them. To confront this requirement, mapping ontologies is a solution that is not to be avoided. In deed, ontology mapping build a meta layer that allows different applications and information systems to access and share their informations, of course, after resolving the different forms of syntactic, semantic and lexical mismatches. In the contribution presented in this paper, we have integrated the semantic aspect based on an external lexical resource, wordNet, to design a new algorithm for fully automatic ontology mapping. This fully automatic character features the
main difference of our contribution with regards to the most of the existing semi-automatic algorithms of ontology mapping, such as Chimaera, Prompt, Onion, Glue, etc. To better enhance the performances of our algorithm, the mapping discovery stage is based on the combination of two sub-modules. The former
analysis the concept’s names and the later analysis their properties. Each one of these two sub-modules is
it self based on the combination of lexical and semantic similarity measures.
Implementation of a Knowledge Management Methodology based on Ontologies :Cas...rahulmonikasharma
in this paper, we suggest a methodology of knowledge management that makes use of the new possibilities offered by semantic web technologies and covers the various stages of the project life cycle. In fact, with this new vision of ontologies and semantic web, it is important to provide a strong methodological support in order to develop complex ontology-based systems.
A Survey of Ontology-based Information Extraction for Social Media Content An...ijcnes
The amount of information generated in the Web has grown enormously over the years. This information is significant to individuals, businesses and organizations. If analyzed, understood and utilized, it will provide a valuable insight to its stakeholders. However, many of these information are semi-structured or unstructured which makes it difficult to draw in-depth understanding of the implications behind those information. This is where Ontology-based Information Extraction (OBIE) and social media content analysis come into play. OBIE has now become a popular way to extract information coming from machine-readable sources. This paper presents a survey of OBIE, Ontology languages and tools and the process to build an ontology model and framework. The author made a comparison of two ontology building frameworks and identified which framework is complete.
A Comparative Study Ontology Building Tools for Semantic Web Applications IJwest
Ontologies have recently received popularity in the area of knowledge management and knowledge sharing,
especially after the evolution of the Semantic Web and its supporting technologies. An ontology defines the terms
and concepts (meaning) used to describe and represent an area of knowledge.The aim of this paper is to identify all
possible existing ontologies and ontology management tools (Protégé 3.4, Apollo, IsaViz & SWOOP) that are freely
available and review them in terms of: a) interoperability, b) openness, c) easiness to update and maintain, d)
market status and penetration. The results of the review in ontologies are analyzed for each application area, such
as transport, tourism, personal services, health and social services, natural languages and other HCI-related
domains. Ontology Building/Management Tools are used by different groups of people for performing diverse tasks.
Although each tool provides different functionalities, most of the users just use only one, because they are not able
to interchange their ontologies from one tool to another. In addition, we considered the compatibility of different
ontologies with different development and management tools. The paper is also concerns the detection of
commonalities and differences between the examined ontologies, both on the same domain (application area) and
among different domains.
A Comparative Study Ontology Building Tools for Semantic Web Applications dannyijwest
Ontologies have recently received popularity in the area of knowledge management and knowledge sharing, especially after the evolution of the Semantic Web and its supporting technologies. An ontology defines the terms and concepts (meaning) used to describe and represent an area of knowledge.The aim of this paper is to identify all possible existing ontologies and ontology management tools (Protégé 3.4, Apollo, IsaViz & SWOOP) that are freely available and review them in terms of: a) interoperability, b) openness, c) easiness to update and maintain, d) market status and penetration. The results of the review in ontologies are analyzed for each application area, such as transport, tourism, personal services, health and social services, natural languages and other HCI-related domains. Ontology Building/Management Tools are used by different groups of people for performing diverse tasks. Although each tool provides different functionalities, most of the users just use only one, because they are not able to interchange their ontologies from one tool to another. In addition, we considered the compatibility of different ontologies with different development and management tools. The paper is also concerns the detection of commonalities and differences between the examined ontologies, both on the same domain (application area) and among different domains.
A Comparative Study of Ontology building Tools in Semantic Web Applications dannyijwest
Ontologies have recently received popularity in the area of knowledge management and knowledge sharing,
especially after the evolution of the Semantic Web and its supporting technologies. An ontology defines the terms
and concepts (meaning) used to describe and represent an area of knowledge.The aim of this paper is to identify all
possible existing ontologies and ontology management tools (Protégé 3.4, Apollo, IsaViz & SWOOP) that are freely
available and review them in terms of: a) interoperability, b) openness, c) easiness to update and maintain, d)
market status and penetration. The results of the review in ontologies are analyzed for each application area, such
as transport, tourism, personal services, health and social services, natural languages and other HCI-related
domains. Ontology Building/Management Tools are used by different groups of people for performing diverse tasks.
Although each tool provides different functionalities, most of the users just use only one, because they are not able
to interchange their ontologies from one tool to another. In addition, we considered the compatibility of different
ontologies with different development and management tools. The paper is also concerns the detection of
commonalities and differences between the examined ontologies, both on the same domain (application area) and
among different domains.
Evaluating Scientific Domain Ontologies for the Electromagnetic Knowledge Dom...dannyijwest
The adoption of ontologies as a formalized approach to information codification is a constant-growing
phenomenon in scientific research. Moreover, knowledge sharing and reuse can be improved by adopting
hierarchical and modular frameworks and therein embedding available ontologies. Unfortunately, merging
procedures may bring about severe, time-consuming problems if a careful selection process is not carried
out. Based on these considerations, we propose a methodology for evaluating and selecting higher-level
ontologies, given the lower-level ones.
A N E XTENSION OF P ROTÉGÉ FOR AN AUTOMA TIC F UZZY - O NTOLOGY BUILDING U...ijcsit
The process of building ontology is a very
complex and time
-
consuming process
especially when dealing
with huge amount of data. Unfortunately current
marketed
tools are very limited and don’t meet
all
user
needs.
Indeed, t
hese software build the core of the ontology from initial data that generates
a
big number of
information.
In this paper, we
aim to resolve these problems
by adding an extension to the well known
ontology editor Protégé in order to work towards a complete
FCA
-
based framework
which resolves the
limitation of other tools in
building fuzzy
-
ontology
.
W
e will give
, in this paper
, some
details on
our
sem
i
-
automat
ic collaborative tool
called FOD Tab Plug
-
in
which
takes into consideration another degree of
granularity in the process of generation
.
In fact, i
t follows a bottom
-
up strategy based on conceptual
clustering, fuzzy logic and Formal Concept Analysis (FCA) a
nd it defines ontology between classes
resulting from a preliminary classification of data and not from the initial large amount of data
.
A Review on Evolution and Versioning of Ontology Based Information Systemsiosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Explore the innovative world of trenchless pipe repair with our comprehensive guide, "The Benefits and Techniques of Trenchless Pipe Repair." This document delves into the modern methods of repairing underground pipes without the need for extensive excavation, highlighting the numerous advantages and the latest techniques used in the industry.
Learn about the cost savings, reduced environmental impact, and minimal disruption associated with trenchless technology. Discover detailed explanations of popular techniques such as pipe bursting, cured-in-place pipe (CIPP) lining, and directional drilling. Understand how these methods can be applied to various types of infrastructure, from residential plumbing to large-scale municipal systems.
Ideal for homeowners, contractors, engineers, and anyone interested in modern plumbing solutions, this guide provides valuable insights into why trenchless pipe repair is becoming the preferred choice for pipe rehabilitation. Stay informed about the latest advancements and best practices in the field.
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two type of water scarcity. One is physical. The other is economic water scarcity.
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Dr.Costas Sachpazis
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. This theory provides a method to calculate the ultimate bearing capacity of soil, which is the maximum load per unit area that the soil can support without undergoing shear failure. The Calculation HTML Code included.
COLLEGE BUS MANAGEMENT SYSTEM PROJECT REPORT.pdfKamal Acharya
The College Bus Management system is completely developed by Visual Basic .NET Version. The application is connect with most secured database language MS SQL Server. The application is develop by using best combination of front-end and back-end languages. The application is totally design like flat user interface. This flat user interface is more attractive user interface in 2017. The application is gives more important to the system functionality. The application is to manage the student’s details, driver’s details, bus details, bus route details, bus fees details and more. The application has only one unit for admin. The admin can manage the entire application. The admin can login into the application by using username and password of the admin. The application is develop for big and small colleges. It is more user friendly for non-computer person. Even they can easily learn how to manage the application within hours. The application is more secure by the admin. The system will give an effective output for the VB.Net and SQL Server given as input to the system. The compiled java program given as input to the system, after scanning the program will generate different reports. The application generates the report for users. The admin can view and download the report of the data. The application deliver the excel format reports. Because, excel formatted reports is very easy to understand the income and expense of the college bus. This application is mainly develop for windows operating system users. In 2017, 73% of people enterprises are using windows operating system. So the application will easily install for all the windows operating system users. The application-developed size is very low. The application consumes very low space in disk. Therefore, the user can allocate very minimum local disk space for this application.
Cosmetic shop management system project report.pdfKamal Acharya
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it's thought to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. It includes various function programs to do the above mentioned tasks.
Data file handling has been effectively used in the program.
The automated cosmetic shop management system should deal with the automation of general workflow and administration process of the shop. The main processes of the system focus on customer's request where the system is able to search the most appropriate products and deliver it to the customers. It should help the employees to quickly identify the list of cosmetic product that have reached the minimum quantity and also keep a track of expired date for each cosmetic product. It should help the employees to find the rack number in which the product is placed.It is also Faster and more efficient way.
Quality defects in TMT Bars, Possible causes and Potential Solutions.PrashantGoswami42
Maintaining high-quality standards in the production of TMT bars is crucial for ensuring structural integrity in construction. Addressing common defects through careful monitoring, standardized processes, and advanced technology can significantly improve the quality of TMT bars. Continuous training and adherence to quality control measures will also play a pivotal role in minimizing these defects.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
Forklift Classes Overview by Intella PartsIntella Parts
Discover the different forklift classes and their specific applications. Learn how to choose the right forklift for your needs to ensure safety, efficiency, and compliance in your operations.
For more technical information, visit our website https://intellaparts.com
Student information management system project report ii.pdfKamal Acharya
Our project explains about the student management. This project mainly explains the various actions related to student details. This project shows some ease in adding, editing and deleting the student details. It also provides a less time consuming process for viewing, adding, editing and deleting the marks of the students.
Student information management system project report ii.pdf
0810ijdms02
International Journal of Database Management Systems (IJDMS), Vol. 2, No. 3, August 2010
DOI: 10.5121/ijdms.2010.2302
INTEGRATED APPROACH TO ONTOLOGY DEVELOPMENT METHODOLOGY WITH CASE STUDY

Sandeep Chaware¹, Srikantha Rao²
¹,²MPSTME, NMIMS University, Mumbai, INDIA
{¹smchaware@gmail.com, ²dr_s_rao@yahoo.com}
Abstract
Knowledge can be represented by ontologies. In an enterprise context, they reflect the relevant knowledge based on enterprise-specific concepts and their relations. Various methodologies exist for developing ontologies, and each may have pitfalls depending on the context. In this paper, based on an analysis of existing methodologies, we propose an integrated model for developing ontologies, which can be used to build any kind of ontology. Our main intention is to reduce development time and effort. We apply the proposed system to the shopping mall domain, where ontologies can be prepared dynamically to deliver information faster and more accurately; these ontologies can further be used for mapping. We compare our model with the existing development methodologies and try to remove the possible pitfalls of the existing techniques.
Keywords
Ontology, Skeleton, Sensus, Methontology, Gruninger and Fox, WordNet, Shopping Mall.
1. INTRODUCTION
There are various domains that need applications catering to people of diverse linguistic backgrounds in order to provide better services. For example, at holy places, millions of multilingual devotees arrive and demand services in their local languages. Another example is government services, where each service should be offered in the local language for common people. A popular example is the shopping mall, visited by millions of people from communities speaking various languages. In such cases it is very difficult to provide better and faster service, especially when people demand service in their local language. Of the several solutions suggested, the best is the use of ontologies. An ontology can represent the information in a better way, which can then be used to provide the service to all. Ontology can be defined as "an explicit specification of a conceptualization". An ontology is arranged as a lattice or taxonomy of concepts in classes and subclasses. Each concept is typically associated with various properties describing its features and attributes, as well as various restrictions on them. It is a shared conceptualization of knowledge in a particular domain. Top-level ontologies describe very general concepts like space, time, matter, object, event, and action, which are independent of a particular problem or domain. Other ontologies are related to a specific domain, task, or activity. Examples of existing ontologies include: top-level ontologies such as SUO (Standard Upper Ontology), which provides definitions for general-purpose terms; SENSUS, a natural-language-based ontology developed by NLG at ISI to provide a broad conceptual structure for working in machine translation; WordNet, a large lexical database for English created at Princeton University (with an IITB counterpart for Indian languages); and medical ontologies such as Gene, Galen, and Menelas [1]. These ontologies can be used in many applications, such as information and knowledge management, military, education, small and large enterprises, industrial risk analysis, medicine, communication, and the construction of emergency plans.
In this paper, we survey various ontology development methodologies and identify some of their pitfalls. We then propose an integrated approach to the development of ontologies. A case study of a Shopping Mall ontology demonstrates the merits of the approach.
2. INTRODUCTION TO BUILDING ONTOLOGIES
The basic steps in building an ontology are straightforward. Various methodologies exist to guide the theoretical approach taken, and numerous ontology building tools are available. An ontology is typically built in more-or-less the following manner [1]:
1. Acquire domain knowledge: Assemble appropriate information resources and expertise
that will define, with consensus and consistency, the terms used formally to describe
things in the domain of interest. These definitions must be collected so that they can be
expressed in a common language selected for the ontology.
2. Organize the ontology: Design the overall conceptual structure of the domain. This will
likely involve identifying the domain's principal concrete concepts and their properties,
identifying the relationships among the concepts, creating abstract concepts as organizing
features, referencing or including supporting ontologies, distinguishing which concepts
have instances, and applying other guidelines of your chosen methodology.
3. Flesh out the ontology: Add concepts, relations, and individuals to the level of detail
necessary to satisfy the purposes of the ontology.
4. Check your work: Reconcile syntactic, logical, and semantic inconsistencies among the
ontology elements. Consistency checking may also involve automatic classification that
defines new concepts based on individual properties and class relationships.
5. Commit the ontology: Incumbent on any ontology development effort is a final
verification of the ontology by domain experts and the subsequent commitment of the
ontology by publishing it within its intended deployment environment.
3. ONTOLOGY METHODOLOGY: A SURVEY
Basically, a series of approaches have been reported for developing ontologies. In 1990, Lenat and Guha published the general steps and some interesting points about the Cyc development. Initially, the Enterprise Ontology and the TOVE (TOronto Virtual Enterprise) project ontology were proposed in the domain of enterprise modeling. Bernaras et al. presented a method used to build an ontology in the domain of electrical networks as part of the Esprit KACTUS project. The METHONTOLOGY methodology appeared at the same time. In 1997, a new method was proposed for building ontologies based on the SENSUS ontology. Some years later, the On-To-Knowledge methodology appeared as a result of the project with the same name. However, none of these methods and methodologies considers the collaborative and distributed construction of ontologies. In this paper, we describe some of these methodologies for building ontologies [1].
3.1 Skeleton Methodology
This methodology is based on the experience of developing the Enterprise Ontology, an ontology for enterprise modeling processes. A plan or draft for a project, along with its activities, can be represented as an ontology. The steps are: first, identify the main purpose of the ontology; second, build the ontology, capturing the key concepts and their relationships; and third, code it in a proper language, possibly integrating it with existing ontologies. Either a top-down or a bottom-up approach can be used to represent the ontology [1][2]. This method is simple to implement but limited in scope.
3.2 Gruninger And Fox Methodology
Gruninger and Fox proposed a methodology inspired by the development of knowledge-based systems using first-order logic. This methodology was applied to the TOVE project ontology within the domain of business process and activity modeling, and it represents a logical model of knowledge. The steps are: 1) Capture the motivating scenarios. 2) Formulation of informal competency questions, where the scope of the ontology can be decided. 3) Formulation
of formal competency questions, which specify the terminology with definitions and constraints. 4) Specification of axioms and definitions within the formal language. 5) Finally, specification of the conditions under which the solutions to the questions are complete. In this methodology, the ontology is built using questions and answers for the motivating scenarios, which represent the main concepts, properties, relations, and axioms of the ontology [1][2]. This methodology extends the scope, but the procedure is complex.
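The competency-question idea can be illustrated with a small sketch: a question lies within the ontology's scope only when every term it needs is in the vocabulary. The vocabulary and questions below are hypothetical shopping-domain examples of our own:

```python
# A sketch of scoping an ontology by informal competency questions:
# a question is "covered" when every domain term it mentions is defined.
# The vocabulary and questions are illustrative assumptions.

def uncovered_terms(question_terms, vocabulary):
    """Return the terms a competency question needs but the ontology lacks."""
    return [t for t in question_terms if t not in vocabulary]

vocabulary = {"shop", "product", "price", "brand"}

questions = {
    "What products does a shop sell?": ["shop", "product"],
    "What is the price of a branded product?": ["price", "brand", "product"],
    "When does the shop open?": ["shop", "opening_hours"],
}

for question, terms in questions.items():
    missing = uncovered_terms(terms, vocabulary)
    if missing:
        # The third question reveals a gap: extend the ontology.
        print(f"extend ontology for: {question} -> {missing}")
```

In the full methodology the formal competency questions would additionally be posed as first-order queries against the axioms, not just against the vocabulary.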
3.3 Methontology Methodology
This methodology supports the construction of ontologies at the knowledge level. The ontology development process is: 1) Determine the tasks to be performed when building the ontology, i.e. scheduling, control, quality assurance, specification, knowledge acquisition, conceptualization, integration, formalization, implementation, evaluation, maintenance, documentation, and configuration management. 2) Determine the life cycle of the ontology as a number of stages. This represents the activities to be performed in each stage and how the stages are related. 3) Determine the techniques used in each activity, the products that each activity outputs, and how they have to be evaluated [1][2]. This methodology draws on software engineering concepts and handles all the activities in detail.
3.4 Sensus Methodology
The method based on Sensus is a top-down approach for deriving domain specific ontologies
from huge ontologies. The steps are: 1) A series of terms are taken as seed. 2) These seed terms
are linked by hand to SENSUS. 3) All the concepts in the path from the seed terms to the root of
SENSUS are included. 4) Terms that could be relevant within the domain and have not yet
appeared are added. 5) Finally, for those nodes that have a large number of paths through them,
the entire subtree under the node is sometimes added, based on the idea that if many of the nodes
in a subtree have been found to be relevant, then the other nodes in the subtree are likely to be
relevant as well [2]. This methodology uses the existing ontology, where the merging will be
complex due to different structures.
3.5 WordNet Methodology
WordNet is a lexical database for the English language. It groups English words into sets of
synonyms called synsets, provides short, general definitions, and records the various semantic
relations between these synonym sets. The purpose is twofold: to produce a combination of
dictionary and thesaurus that is more intuitively usable, and to support automatic text analysis and
artificial intelligence applications.
The hypernym/hyponym relationships among the noun synsets can be interpreted as
specialization relations between conceptual categories. In other words, WordNet can be
interpreted and used as a lexical ontology in the computer science sense. However, such an ontology
should be corrected before being used since it contains hundreds of basic semantic
inconsistencies such as (i) the existence of common specializations for exclusive categories and
(ii) redundancies in the specialization hierarchy. Furthermore, transforming WordNet into a
lexical ontology usable for knowledge representation should normally also involve
(i) distinguishing the specialization relations into subtypeOf and instanceOf relations, and
(ii) associating intuitive unique identifiers to each category. WordNet has also been converted to
a formal specification by means of a hybrid bottom-up top-down methodology to automatically
extract association relations from WordNet, and interpret these associations in terms of a set of
conceptual relations, formally defined in the DOLCE foundational ontology [3].
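The correction of splitting WordNet's single specialization link into subtypeOf and instanceOf relations can be sketched as follows; the heuristic (treat known proper-name synsets as individuals) and the data are illustrative assumptions, not WordNet's actual contents:

```python
# A sketch of splitting a single specialization relation into
# subtypeOf (category under category) and instanceOf (individual
# under category). Heuristic and data are illustrative assumptions.

def classify_links(pairs, individuals):
    """pairs: (narrower, broader) links; individuals: proper-name entries."""
    subtype_of, instance_of = [], []
    for narrower, broader in pairs:
        if narrower in individuals:
            instance_of.append((narrower, broader))
        else:
            subtype_of.append((narrower, broader))
    return subtype_of, instance_of

pairs = [("dog", "canine"), ("canine", "mammal"), ("Lassie", "dog")]
individuals = {"Lassie"}

subtype_of, instance_of = classify_links(pairs, individuals)
print(subtype_of)   # category-under-category links
print(instance_of)  # individual-under-category links
```

A real transformation would of course need a principled test for individuals rather than a hand-curated set, plus the unique identifiers mentioned above.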
3.6 Pitfalls of Existing Ontology Development Methodologies
1) Some of the methodologies are too formal and only useful for small-scale applications or contexts.
2) Some methodologies, like Methontology, are more mature and detailed, whereas some steps can be either integrated or rejected depending on context [5].
3) Integration of existing ontologies may be difficult due to changes in structure or plan.
4) For each scenario, we cannot always decide the competency questions, which represent the definitions and constraints of the terms used in the ontology.
5) Some of the methodologies are complex to apply, take a long time, and utilize large resources.
4. PROPOSED INTEGRATED ONTOLOGY DEVELOPMENT METHODOLOGY
When ontology technologies emerged in the 1990s, the focus on knowledge acquisition
influenced the way new capabilities were put to use in the field. Early ontology methodologies
adopted the method for developing knowledge bases. This orientation is not as evident in today's
tools. There is also increasing support for common upper level ontologies like WordNet, Cyc, and
others.
Figure 1 shows the proposed integrated ontology development methodology. Here, we integrate the existing ontology methodologies in order to remove the pitfalls and hence improve the overall procedure for building ontologies. The modules are: the motivating user scenarios module, the formal/informal question-and-answer generation module, the terms-and-constraints extraction module, and the ontology building module. Each module is described below in brief.
1) Motivating User Scenarios Module/Keyword: This module is responsible for capturing
the motivating user scenarios for particular domain. In this module, a keyword can be
entered and processed to extract the abstract concept. This module can be maintained manually, or UML diagrams can be used to represent the scenarios. With these scenarios, we can formulate the exact purpose of and need for the ontology for the domain.
2) Formulation of Formal/Informal Questions and Answer Module: Within this module, the
possible informal and formal questions and answers can be generated for the motivating
scenarios. These questions and answers can be generated either manually or by the
system according to scenario or from abstract concept of entered keyword. This module
will determine the scope of the ontology. These questions and answers may be different for different users and scenarios; no single ontology structure will satisfy the needs of every user.
3) Extraction of Terms and Constraints Module: Once the scope of the ontology is known, the terms and constraints can be extracted to identify the concepts and their relationships for the domain. This step can be carried out manually or by parsing the keywords from the questions and answers.
4) Build Ontology Module: Finally, we can build the ontology by looking at these concepts
and their relationships. We can use any approach, but the top-down approach is better since it extracts the terms from abstract to specific concepts.
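The four modules can be sketched as a simple pipeline. In the sketch below, the canned question-and-answer store and the flat top-down build are illustrative stand-ins of our own for the manual or system-driven steps described above:

```python
# A hypothetical end-to-end sketch of the four modules:
# keyword -> Q/A -> term extraction -> top-down build.
# The Q/A store and the term names are illustrative assumptions.

def generate_qa(keyword):
    """Module 2: look up canned Q/A for the keyword's abstract concept."""
    qa_store = {
        "shopping mall": [
            ("Which shops are in the mall?", "shop"),
            ("What products does each shop sell?", "product"),
            ("What is the price range?", "price"),
        ]
    }
    return qa_store.get(keyword.lower(), [])

def extract_terms(qa_pairs):
    """Module 3: pull the concept terms out of the question/answer pairs."""
    return [term for _question, term in qa_pairs]

def build_ontology(root, terms):
    """Module 4: top-down build -- hang each extracted term under the root."""
    return {term: root for term in terms}

qa = generate_qa("Shopping Mall")
terms = extract_terms(qa)
ontology = build_ontology("shopping mall", terms)
print(ontology)
```

In the full system, Module 2 would search the database to form the questions and answers rather than use a canned store, and Module 4 would build a deeper taxonomy than this single-level one.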
Figure 1: Proposed Integrated Ontology Development Methodology
5. THE DEVELOPMENT OF SHOPPING MALL ONTOLOGY: A CASE STUDY
5.1 Purpose and Scope
The purpose is to design and develop an ontology for an area where millions of users with multilingual backgrounds visit the shopping mall every day. The scope is limited to a number of areas. The visitors are looking for information about a shopping mall and its shops. Each one will look for the details of shops, available products, and the various schemes and services from the shops within the shopping mall. The ontology will play an important role in serving all this information to the visitors. The entire ontology will provide all the relevant information irrespective of language [4].
5.2 Domain and Source
Before addressing design issues, the first task was to decide upon an area to investigate as the domain of interest. A shopping mall database was chosen as the domain. It is a broad subject area that was likely to yield a large number of concepts and associated relationships. These could be used to test the initial hypothesis that the 'is-a' relationship is sufficient to express the semantics. It is a mature discipline within computing with an agreed body of core knowledge that is readily available. A shopping mall was used as the source: "Raghulila – The Mall, Kandivali (West)". There are advantages to using a shopping mall as the source of ontology concepts. First, coverage of the domain of interest is extensive, giving visitors a good grounding in the subject. Second, when each new shop or product is introduced, new terms are explained, thus providing the basis for concept definitions.
5.3 A Systematic Approach to Ontology Modeling
We propose the following steps to build an ontology for a shopping mall, according to the
proposed integrated methodology.
5.3.1 Step 1 => Motivating User Scenarios/Keyword
We can capture various motivating user scenarios: first, getting the details about the Raghulila
shopping mall, i.e. its location, address, phone numbers, etc.; second, which movies this
shopping mall is running, and their details; third, a user wishes to buy jeans of a particular
brand and wants to know the details. A keyword can also be entered to get the information as a
service. For example, with the keyword "Raghulila Shopping Mall", the abstract concept
Raghulila Shopping Mall is generated, and further information can be served from possible
questions and answers.
5.3.2 Step 2 => Formulation of Formal/Informal Questions and Answers
In this step, formal and informal questions and answers for the various scenarios can be
formulated, such as: What is the location of the shopping mall you are looking for? What details
do you want? Which movie, the rate of tickets, the availability of tickets, the date and time?
Which brand of jeans? What is the price range? These questions and answers can be formulated
either manually or automatically from the keyword entered.
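One simple way to formulate such questions automatically from an entered keyword is template filling. A minimal sketch follows; the template strings and function name are illustrative assumptions:

```python
# Hypothetical sketch: generating formal/informal questions from a keyword
# by filling question templates. The templates below are illustrative only.
QUESTION_TEMPLATES = [
    "What is the location of {kw}?",
    "What details of {kw} do you want?",
    "Which movies are running at {kw}?",
]

def formulate_questions(keyword):
    """Fill each template with the entered keyword."""
    return [t.format(kw=keyword) for t in QUESTION_TEMPLATES]

questions = formulate_questions("Raghulila Shopping Mall")
```

The answers to the filled templates would then come from the underlying database, as described in the proposed algorithm of Section 5.4.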
5.3.3 Step 3 => Extraction of Terms and Constraints
The various terms and their constraints can be extracted from the answers of Step 2. The terms
can be the shopping mall name, address, phone numbers, and location; the name of the movie,
the availability of tickets, and the date and time of the movie; or the availability of jeans of a
particular brand along with the name of the shop and its details. These terms lead to the concepts
and their relationships. For example, a concept may be the movie ‘Ravan’; its attributes will be
the date and time of the show, the price of tickets, availability, etc. The relationships can be is-a,
has, etc.
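Using the movie example above, an extracted concept with its attributes and relationships might be represented as follows; the data-structure layout and the attribute values are assumptions for illustration:

```python
# Hypothetical sketch: one extracted concept with its attributes and
# relationships, using the 'Ravan' movie example. Values are illustrative.
concept = {
    "name": "Ravan",
    "attributes": {
        "date_and_time": "2010-07-20 18:30",  # illustrative value
        "ticket_price": 150,                  # illustrative value
        "availability": True,
    },
    "relationships": [
        ("Ravan", "is-a", "Movie"),
        ("Raghulila Shopping Mall", "has", "Movie"),
    ],
}
```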
5.3.4 Step 4 => Build Ontology
For building the ontology, we will use the top-down approach, since we know the abstract view
of the shopping mall and can further derive the specialization and generalization of the concepts
and their relationships. To build the ontology, we can use a tree- or graph-like structure.
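A tree-like structure for the top-down build can be sketched as below; the class and method names, and the child concepts chosen, are assumptions:

```python
# Hypothetical sketch: a simple tree for top-down ontology building,
# from the abstract root down to specific concepts.
class Node:
    def __init__(self, name):
        self.name = name
        self.children = []  # more specific concepts under this one

    def add_child(self, name):
        child = Node(name)
        self.children.append(child)
        return child

# Top-down: start from the abstract concept, then specialize.
root = Node("Raghulila Shopping Mall")
shops = root.add_child("Shops")
movies = root.add_child("Movies")
movies.add_child("Ravan")
```

A graph-like structure would differ only in allowing a concept to have more than one parent.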
Figure 2: Summary of Ontology Development for Shopping Mall
[Figure 2 summarizes the development: purpose and scope; domain and source (source: shopping
mall); then the four steps – motivating user scenarios/keyword, formal/informal questions and
answers, extraction of terms and constraints, and build ontology (approach: top-down; elements:
concepts and relationships; notation: tree or graph).]
5.4 Proposed Algorithm for Ontology Development
We propose an algorithm for the development of an ontology for the shopping mall domain.
Figure 3 shows the steps.
Figure 3: Proposed Algorithm for Building Ontology for Shopping Mall
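A minimal sketch of the Figure 3 algorithm follows, assuming an in-memory stand-in for the mall database; the table names, attribute names, and row values are illustrative assumptions:

```python
# Hypothetical sketch of the Figure 3 algorithm against a toy in-memory
# "database". Table and attribute names are illustrative assumptions.
DATABASE = {
    "shop": [   # table name -> list of rows (dicts)
        {"name": "Levis Store", "product": "jeans", "floor": 2},
    ],
    "movie": [
        {"name": "Ravan", "ticket_price": 150, "show_time": "18:30"},
    ],
}

def formulate_qa(keyword):
    """Steps 2-6: resolve the keyword as a table name or an
    attribute/value, and formulate Q/A pairs from the matching rows."""
    kw = keyword.lower()
    qa = []
    if kw in DATABASE:                          # Step 4: table name
        for row in DATABASE[kw]:
            for attr, value in row.items():
                qa.append((f"What is the {attr} of this {kw}?", value))
    else:                                       # Step 5: attribute or value
        for table, rows in DATABASE.items():
            for row in rows:
                if kw in row or kw in [str(v).lower() for v in row.values()]:
                    qa.append((f"Which {table} matches '{keyword}'?", row["name"]))
    return qa

qa_pairs = formulate_qa("movie")    # keyword resolves to a table name
qa_pairs += formulate_qa("jeans")   # keyword resolves to a value inside a table
```

Step 7 would then feed these Q/A pairs into the tree- or graph-building stage described in Section 5.3.4.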
6. CONCLUSIONS
There are many ontology building methodologies suggested for various domains. With the
proposed methodology, we can effectively build the ontology from all possible user scenarios or
from simple and complex keywords. The concepts and their relationships are developed
dynamically. Also, this methodology is faster than the earlier ones, since we use a top-down
approach to build the ontology.
REFERENCES
[1] Fernández López, M., Overview of Methodologies for Building Ontologies.
[2] Oscar Corcho et al., Methodologies, Tools and Languages for Building Ontologies. Where is their
Meeting Point?, Data and Knowledge Engineering 46 (2003), 41-64.
[3] WordNet: Wikipedia, the free Encyclopedia.
[4] Sinead Boyce and Claus Pahl, Developing Domain Ontologies for Course Content, Educational
Technology and Society, 10(3), 275-288, 2007.
[5] Annika Öhgren, Kurt Sandkuhl, Towards a Methodology for Ontology Development in Small
and Medium-sized Enterprises, IADIS International Conference on Applied Computing 2005.
Step 1: Enter a keyword, i.e. a proper name for the domain.
Step 2: If the keyword is simple, go to Step 3; if it is complex, parse the keyword.
Step 3: Look up the keyword in the database as either a table name or an attribute name.
Step 4: If it is a table name, formulate questions and answers from all the values of its
attributes; otherwise go to Step 5.
Step 5: If it is an attribute or a value inside a table, formulate the questions and
answers from the relevant tuple.
Step 6: Additional questions and answers can be formulated from the dependencies of the
table’s attributes.
Step 7: From the answers, all the concepts and relationships can be structured to build the
ontology in a tree- or graph-like structure. These can be obtained from the databases
maintained for the domain.
Authors
1
He is working as Asst. Prof. at D.J. Sanghvi College of Engineering, Mumbai. He is pursuing a PhD
(Engineering) from NMIMS University, Mumbai, INDIA.
2
He is Director at Late Hiray College of Master in Computer Applications, Bandra (E), Mumbai. He is also
guiding PhD students at MPSTME, NMIMS University, Mumbai, INDIA.