The vast amount of information on the Web has made accurate search and integration of that information nearly impossible. One attractive approach to dealing with this redundancy is the Semantic Web (SW). To structure information, improve search, and expose the meaning of content, a technology is needed that creates relationships among the information already present on the World Wide Web (WWW) and makes its meaning explicit. The SW establishes meaningful relationships among pieces of information and has the potential to revolutionize Information Retrieval (IR) on the Web. It extends the existing Web by equipping it with semantic elements and content mining, producing a combination of consistent and accurate information. The SW provides a way to make information understandable to machines; it can be viewed as an effective means of presenting information on the Web, or as a global, universal link to an information database. Because the Web's information resources are heterogeneous and decentralized, tools are needed for integrating information and techniques for processing it. Ontology is a suitable solution for fast and correct access to shared information. The SW uses ontologies to provide a conceptual and relational structure, enabling information to be accessed by users and retrieved intelligently. In this paper we discuss the characteristics, advantages, architecture, and problems of the SW, and the need to implement it in the WWW.
NATURE: A TOOL RESULTING FROM THE UNION OF ARTIFICIAL INTELLIGENCE AND NATURA... (ijaia)
This paper presents the final results of a research project aimed at constructing a tool that uses Artificial Intelligence, through an Ontology combined with a model trained with Machine Learning, and Natural Language Processing to support semantic search over the research projects of the Research System of the University of Nariño. To construct NATURE, as this tool is called, a methodology with the following stages was used: appropriation of knowledge; installation and configuration of tools, libraries, and technologies; collection, extraction, and preparation of research projects; and design and development of the tool. The work produced three main results: a) the complete construction of the Ontology, with classes, object properties (predicates), data properties (attributes), and individuals (instances) in Protégé, SPARQL queries with Apache Jena Fuseki, and the corresponding coding with Owlready2 in Jupyter Notebook with Python inside an Anaconda virtual environment; b) the successful training of the model, using Machine Learning and specifically Natural Language Processing tools such as spaCy, NLTK, Word2vec, and Doc2vec, also carried out in Jupyter Notebook with Python inside the Anaconda environment, together with Elasticsearch; and c) the creation of NATURE itself, by managing and unifying the queries to the Ontology and to the Machine Learning model. In testing, NATURE returned satisfactory results for all the searches performed.
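The unification step in result (c) can be sketched roughly as follows. Everything here is an invented illustration, not data or code from the actual NATURE tool: the project identifiers, the toy triples, and the similarity scores are hypothetical stand-ins for the Ontology store and the Doc2vec model.

```python
# Sketch: unify hits from an ontology query with hits from a trained
# similarity model. All identifiers and scores below are hypothetical.

triples = [
    ("proj1", "hasKeyword", "ontology"),
    ("proj2", "hasKeyword", "agriculture"),
    ("proj3", "hasKeyword", "ontology"),
]

def ontology_query(keyword):
    """Return projects whose hasKeyword predicate matches the term."""
    return {s for (s, p, o) in triples if p == "hasKeyword" and o == keyword}

# Hypothetical document-similarity scores, e.g. produced by a Doc2vec model.
model_scores = {"proj1": 0.91, "proj2": 0.12, "proj3": 0.47}

def unified_search(keyword, threshold=0.4):
    """Union the two result sets, then rank by model score."""
    hits = ontology_query(keyword) | {d for d, s in model_scores.items()
                                      if s >= threshold}
    # Ontology-only hits with no model score fall back to 0.0.
    return sorted(hits, key=lambda d: model_scores.get(d, 0.0), reverse=True)

print(unified_search("ontology"))  # → ['proj1', 'proj3']
```

A real implementation would issue the first query over Fuseki/Owlready2 and the second against Elasticsearch; the union-then-rank shape stays the same.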
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Mining in Ontology with Multi-Agent System in Semantic Web: A Novel Approach (ijma)
A large amount of data is present on the Web. It contains a huge number of web pages, and finding suitable information among them is a very cumbersome task. Data needs to be organized formally so that users can easily access and use it. Many Information Retrieval (IR) techniques exist for retrieving information from documents, but current IR techniques are not advanced enough to exploit the semantic knowledge within documents and give precise results. IR technology is a major factor in handling annotations in Semantic Web (SW) languages. With the growth of the Web and the huge amount of information available on it, which may be unstructured, semi-structured, or structured, it has become increasingly difficult to identify the relevant pieces of information on the Internet. Knowledge representation languages are used for retrieving information, so an ontology needs to be built using a well-defined methodology; the process of developing an ontology is called Ontology Development. Secondly, cloud computing and data mining have become prominent in the current application of information technology. With changing trends and new concepts emerging in the IT sector, data mining and knowledge discovery have proved to be of significant importance. Data mining can be defined as the process of extracting information from a database that is not explicitly represented in it, and which can be used to draw generalized conclusions from the trends found in the data. A database may be described as a collection of formally structured data. Multi-agent data mining may be defined as the use of various agents that cooperatively interact with the environment to achieve a specified objective. Multi-agents act on behalf of users and coordinate, cooperate, negotiate, and exchange data with each other; an agent may be a software agent, a robot, or a human being. Knowledge discovery can be defined as the process of systematically searching large collections of data with the aim of finding patterns that can be used to draw generalized conclusions; these patterns are sometimes referred to as knowledge about the data. Cloud computing can be defined as the delivery of computing services in which shared resources, information, and software are provided over a network, for example the information superhighway; it is normally provided as a web-based service that hosts all the required resources. Knowledge mining is used in many fields of study, such as science and medicine, finance, education, manufacturing, and commerce. In this paper, the Semantic Web addresses the first part of this challenge by trying to make the data machine-understandable in the form of an Ontology, while Multi-Agen
A Survey on Ontology-Based Web Personalization (eSAT Journals)
Abstract: Over the last decade, the data on the World Wide Web has been growing exponentially. According to Google, the data is growing at a rate of a billion pages per day [24], and the Internet has around 2 million users accessing the World Wide Web for various information [25]. These numbers raise serious concern about information-overload challenges for users. Many researchers have been working to overcome this challenge with web personalization, and many are looking at ontology-based web personalization as an answer to information overload, since each individual is unique. In this paper we present an overview of ontology-based web personalization, its challenges, and a survey of the work in the area. The paper also points to future work in web personalization. Index Terms: Web Personalization, Ontology, User Modeling, Web Usage Mining.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
The Semantic Web is an evolving development of the World Wide Web in which the word "semantic" stands for "meaning": the semantics of something is its meaning. The Semantic Web, often referred to as Web 3.0, is a "Web of data" that enables machines to understand the semantics, or meaning, of information on the World Wide Web. It extends the network of hyperlinked, human-readable web pages by inserting machine-readable metadata about pages and how they are related to each other, enabling automated agents to access the Web more intelligently and perform tasks on behalf of users. The term was coined by Tim Berners-Lee, the inventor of the World Wide Web and director of the World Wide Web Consortium (W3C), which oversees the development of proposed Semantic Web standards. He defines the Semantic Web as "a web of data that can be processed directly and indirectly by machines."
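As a toy illustration of such machine-readable metadata about pages and their relations, the following sketch serializes a few page descriptions as RDF-style Turtle triples. The page URIs under example.org and the choice of Dublin Core properties are hypothetical, chosen only for the example:

```python
# Hypothetical namespace; nothing here refers to a real site.
PREFIX = "http://example.org/"

# (subject, predicate, object) tuples describing pages and their relations.
triples = [
    ("page1", "dc:title", '"Semantic Web Overview"'),
    ("page1", "dc:relation", "<http://example.org/page2>"),
    ("page2", "dc:title", '"Ontology Basics"'),
]

def to_turtle(triples):
    """Serialize the tuples as minimal Turtle an agent could parse."""
    lines = ["@prefix dc: <http://purl.org/dc/elements/1.1/> ."]
    for s, p, o in triples:
        lines.append(f"<{PREFIX}{s}> {p} {o} .")
    return "\n".join(lines)

print(to_turtle(triples))
```

An automated agent that parses this output knows not just the text of page1 but that it is explicitly related to page2, which is what makes navigation on behalf of a user possible.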
An introduction to the Semantic Web, covering its goals, features, motivation, related frameworks, RDF, advantages, Uniform Resource Locators, the Web Ontology Language, and microformats.
Multi-Mode Conceptual Clustering Algorithm Based Social Group Identification ... (inventionjournals)
The problem of web-search time complexity and accuracy has been visited in many research papers, and authors have discussed many approaches to improving search performance. Still, those approaches do not produce noticeable improvement and also struggle with time complexity. To overcome these issues, this paper discusses an efficient multi-mode conceptual clustering algorithm that identifies groups of users with similar interests by clustering their search context according to different conceptual queries. The identified user groups share the related conceptual queries and their results, reducing time complexity. Multi-mode conceptual clustering groups search queries and users according to the number of users and their search patterns. The concept behind a search is identified using Natural Language Processing methods and the web logs produced by the default web search engines. The authors designed a dedicated web interface to collect logs of user searches, and the same data was used to cluster social groups according to the conceptual queries. Search results are shared between the users of the identified social groups, which reduces search time complexity and improves the efficiency of web search.
Design and Implementation of Meetings Document Management and Retrieval System (CSCJournals)
A meetings-management system has components to capture, store/archive, retrieve, browse, and distribute documents, along with security to protect documents from unauthorized access. The lack of proper organization, storage, and easy access to meeting documents, the bottleneck of keeping paper documents, slow distribution, and misplacement of documents necessitated this work. Document-management software that can be used to organize and maintain the records of meetings has been developed. The system, developed as a web application, is based on the use of objects and Web technologies. A search facility supports rapid location of topics of interest, and navigation is enabled through hyperlinks. The system was implemented using ASP.NET. This document-management system enables users to follow the development of any topic through several meetings of a particular body or committee, and members of the body can have instant, full access to what has been discussed and decided about a given issue, no matter how long ago that was.
WEB SEARCH ENGINE BASED SEMANTIC SIMILARITY MEASURE BETWEEN WORDS USING PATTE... (cscpconf)
Semantic similarity measures play an important role in information retrieval, natural language processing, and various tasks on the Web such as relation extraction, community mining, document clustering, and automatic metadata extraction. In this paper, we propose a Pattern Retrieval Algorithm (PRA) to compute the semantic similarity between words by combining a page-count method and a web-snippets method. Four association measures are used to find the semantic similarity between words in the page-count method using web search engines. We use Sequential Minimal Optimization (SMO) for support vector machines (SVM) to find the optimal combination of page-count-based similarity scores and top-ranking patterns from the web-snippets method. The SVM is trained to classify synonymous and non-synonymous word pairs. The proposed approach aims to improve correlation values, precision, recall, and F-measures compared with existing methods, and achieves a correlation value of 89.8%.
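Page-count association measures of the kind this abstract relies on are typically defined over three hit counts: the count for each word alone and the count for the conjunctive query. The sketch below shows four commonly used web variants (Jaccard, Overlap, Dice, and PMI); note that the abstract does not name its four measures, so these particular formulas are an assumption, and the hit counts and index size N are invented placeholders:

```python
import math

N = 10_000_000_000  # assumed total number of pages indexed by the engine

def web_jaccard(p, q, pq):
    """|P ∩ Q| / |P ∪ Q| over page counts p, q, and conjunction pq."""
    return 0.0 if pq == 0 else pq / (p + q - pq)

def web_overlap(p, q, pq):
    """Conjunction count normalized by the rarer word's count."""
    return 0.0 if pq == 0 else pq / min(p, q)

def web_dice(p, q, pq):
    """Dice coefficient: twice the conjunction over the sum of counts."""
    return 0.0 if pq == 0 else 2 * pq / (p + q)

def web_pmi(p, q, pq):
    """Pointwise mutual information, treating counts/N as probabilities."""
    return 0.0 if pq == 0 else math.log2((pq / N) / ((p / N) * (q / N)))

# Hypothetical page counts for a word pair and its conjunctive query.
p, q, pq = 400_000_000, 120_000_000, 80_000_000
print(web_jaccard(p, q, pq), web_dice(p, q, pq), web_pmi(p, q, pq))
```

In a PRA-style pipeline, these four scores would become features alongside snippet patterns, and the SMO-trained SVM would learn how to weight them.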
The World Wide Web is booming and vibrant thanks to well-established standards and a widely accepted framework that guarantee interoperability at various levels of applications and of society as a whole. So far, the Web has functioned largely through human intervention and manual processing, but the next-generation Web, which researchers call the Semantic Web, is moving toward automatic processing and machine-level understanding. The Semantic Web will become possible only if further levels of interoperability exist among applications and networks. To achieve this interoperability and greater functionality among applications, the W3C has already released well-defined standards such as RDF/RDF Schema and OWL. Using XML alone as a tool for semantic interoperability has not proved effective and has failed to bring interconnection at a larger scale. This leads to the inclusion of an inference layer at the top of the Web architecture, which paves the way for a common design for encoding ontology representation languages in data models such as RDF/RDFS. In this research article, we give a clear account of the roots of Semantic Web research and its ontological background, which may help deepen the understanding of named entities on the Web.
In this world of information technology, everyone tends to do business electronically. With so much business happening on the World Wide Web (WWW) today, it is very important for website owners to provide a platform that attracts more customers to their sites. Presenting information in a better way is the key to bringing in more customers or users; the customer is the end user, who accesses the information in a way that yields credit to the website owner. In this paper we define web mining and present a method to use web mining to understand user and website behaviour, which in turn enhances the website's information so as to attract more users. The paper also presents an overview of the research done on pattern extraction and web content mining, and shows how web mining can act as a catalyst for e-business.
Improve Information Retrieval and E-Learning Using Mobile Agent Based on Semantic Web Technology (IJwest)
Web-based education and e-learning have become a very important branch of new educational technology. E-learning and Web-based courses benefit learners by making access to resources and learning objects fast, just-in-time, relevant, and available at any time or place. Web-based Learning Management Systems should focus on satisfying e-learners' needs, and may advise a learner on the most suitable resources and learning objects. Because of the many limitations of Web 2.0 for building e-learning management systems, Web 3.0, known as the Semantic Web, is now used; it is a platform for e-learning management systems that overcomes the limitations of Web 2.0. In this paper we present "improve information retrieval and e-learning using mobile agent based on semantic web technology". The paper focuses on the design and implementation of knowledge-based, reusable, interactive, web-based training activities for the seaports and logistics sector, using an e-learning system and the Semantic Web to deliver learning objects to learners in an interactive, adaptive, and flexible manner. We use the Semantic Web and mobile agents to improve library and course search. The architecture presented here is an adaptation model that converts syntactic search into semantic search. We apply the training at Damietta Port in Egypt as a real-world case study, and present one possible application of mobile-agent technology based on the Semantic Web to the management of Web Services; this model improves information retrieval and the e-learning system.
Information Organisation for the Future Web: with Emphasis to Local CIRs inventionjournals
Semantic Web is evolving as meaningful extension of present web using ontology. Ontology can play an important role in structuring the content in the current web to lead this as new generation web. Domain information can be organized using ontology to help machine to interact with the data for the retrieval of exact information quickly. Present paper tries to organize community information resources covering the area of local information need and evaluate the system using SPARQL from the developed ontology.
International Journal of Engineering Research and Development (IJERD)IJERD Editor
journal publishing, how to publish research paper, Call For research paper, international journal, publishing a paper, IJERD, journal of science and technology, how to get a research paper published, publishing a paper, publishing of journal, publishing of research paper, reserach and review articles, IJERD Journal, How to publish your research paper, publish research paper, open access engineering journal, Engineering journal, Mathemetics journal, Physics journal, Chemistry journal, Computer Engineering, Computer Science journal, how to submit your paper, peer reviw journal, indexed journal, reserach and review articles, engineering journal, www.ijerd.com, research journals,
yahoo journals, bing journals, International Journal of Engineering Research and Development, google journals, hard copy of journal
Semantically enriched web usage mining for predicting user future movementsIJwest
Explosive and quick growth of the World Wide Web has resulted in intricate Web sites, demanding
enhanced user skills and sophisticated tools to help the Web user to find the desi
red information. Finding
desired information on the Web has become a critical ingredient of everyday personal, educational, and
business life. Thus, there is a demand for more sophisticated tools to help the user to navigate a Web site
and find the desired
information. The users must be provided with information and services specific to
their needs, rather than an undiffere
ntiated mass of information.
For discovering interesting and frequent
navigation patterns from Web server logs many Web usage mining te
chniques have been applied. The
recommendation accuracy of solely usage based techniques can be improved by integrating Web site
content and site structure in the personalization process.
Herein, we propose Semantically enriched Web Usage Mining method (S
WUM), which combines the fields
of Web Usage Mining and Semantic Web. In the proposed method, the undirected graph derived from
usage data is enriched with rich semantic information extracted from the Web pages and the Web site
structure. The experimental
results show that the SWUM generates accurate recommendations with
integration of usage, semantic data and Web site structure. The results shows that proposed method is able
to achieve 10
-
20%
better accuracy than the solely usage based model, and 5
-
8% bet
ter than an ontology
based model.
Semantic Information Retrieval Using Ontology in University Domain dannyijwest
Today’s conventional search engines hardly do provide the essential content relevant to the user’s search
query. This is because the context and semantics of the request made by the user is not analyzed to the full
extent. So here the need for a semantic web search arises. SWS is upcoming in the area of web search
which combines Natural Language Processing and Artificial Intelligence. The objective of the work done
here is to design, develop and implement a semantic search engine- SIEU(Semantic Information
Extraction in University Domain) confined to the university domain. SIEU uses ontology as a knowledge
base for the information retrieval process. It is not just a mere keyword search. It is one layer above what
Google or any other search engines retrieve by analyzing just the keywords. Here the query is analyzed
both syntactically and semantically. The developed system retrieves the web results more relevant to the
user query through keyword expansion. The results obtained here will be accurate enough to satisfy the
request made by the user. The level of accuracy will be enhanced since the query is analyzed semantically.
The system will be of great use to the developers and researchers who work on web. The Google results are
re-ranked and optimized for providing the relevant links. For ranking an algorithm has been applied which
fetches more apt results for the user query.
Intelligent Semantic Web Search Engines: A Brief Survey dannyijwest
The World Wide Web (WWW) allows the people to share the information (data) from the large database repositories globally. The amount of information grows billions of databases. We need to search the information will specialize tools known generically search engine. There are many of search engines available today, retrieving meaningful information is difficult. However to overcome this problem in search engines to retrieve meaningful information intelligently, semantic web technologies are playing a major role. In this paper we present survey on the search engine generations and the role of search engines in intelligent web and semantic search technologies.
MULTIFACTOR NAÏVE BAYES CLASSIFICATION FOR THE SLOW LEARNER PREDICTION OVER M...ijcsa
The high school students must be observed for their slow learning or quick learning abilities to provide
them with the best education practices. Such analysis can be perfectly performed over the student
performance data. The high school student data has been obtained from the schools from the various
regions in Punjab, a pivotal state of India. The complete student data and the selective data of almost 1300
students obtained from one school in the regions has been undergone the test using the proposed model in
this paper. The proposed model is based upon the naïve bayes classification model for the data
classification using the multi-factor features obtained from the input dataset. The subject groups have been
divided into the two primary groups: difficult and normal. The classification algorithm has been applied
individually over data grouped in the various subject groups. Both of the early stage classification events
have produced the almost similar results, whereas the results obtained from the classification events over
the averaging factors and the floating factors told the different story than the early stage classification. The
proposed model results have shown that the deep analysis of the data tells the in-depth facts from the input
data. The proposed model can be considered as the effective classification model when evaluated from the
results described in the earlier sections.
MULTIFACTOR NAÏVE BAYES CLASSIFICATION FOR THE SLOW LEARNER PREDICTION OVER M...ijcsa
The high school students must be observed for their slow learning or quick learning abilities to provide
them with the best education practices. Such analysis can be perfectly performed over the student
performance data. The high school student data has been obtained from the schools from the various
regions in Punjab, a pivotal state of India. The complete student data and the selective data of almost 1300
students obtained from one school in the regions has been undergone the test using the proposed model in
this paper. The proposed model is based upon the naïve bayes classification model for the data
classification using the multi-factor features obtained from the input dataset. The subject groups have been
divided into the two primary groups: difficult and normal. The classification algorithm has been applied
individually over data grouped in the various subject groups. Both of the early stage classification events
have produced the almost similar results, whereas the results obtained from the classification events over
the averaging factors and the floating factors told the different story than the early stage classification. The
proposed model results have shown that the deep analysis of the data tells the in-depth facts from the input
data. The proposed model can be considered as the effectiv
Intelligent Semantic Web Search Engines: A Brief Survey dannyijwest
The World Wide Web (WWW) allows the people to share the information (data) from the large database repositories globally. The amount of information grows billions of databases. We need to search the information will specialize tools known generically search engine. There are many of search engines available today, retrieving meaningful information is difficult. However to overcome this problem in search engines to retrieve meaningful information intelligently, semantic web technologies are playing a major role. In this paper we present survey on the search engine generations and the role of search engines in intelligent web and semantic search technologies.
Effective Performance of Information Retrieval on Web by Using Web Crawling dannyijwest
World Wide Web consists of more than 50 billion pages online. It is highly dynamic [6] i.e. the web
continuously introduces new capabilities and attracts many people. Due to this explosion in size, the
effective information retrieval system or search engine can be used to access the information. In this paper
we have proposed the EPOW (Effective Performance of WebCrawler) architecture. It is a software agent
whose main objective is to minimize the overload of a user locating needed information. We have designed
the web crawler by considering the parallelization policy. Since our EPOW crawler has a highly optimized
system it can download a large number of pages per second while being robust against crashes. We have
also proposed to use the data structure concepts for implementation of scheduler & circular Queue to
improve the performance of our web crawler. (Abstract)
WEB MINING: PATTERN DISCOVERY ON THE WORLD WIDE WEB - 2011Mustafa TURAN
Web Mining: Pattern Discovery on the World Wide Web
The study covers auto Turkish text content gathering from most popular social media platforms (twitter, facebook, blogs, news, etc...) and auto classifying sentimentally those contents.
Linked Data Generation for the University Data From Legacy Database dannyijwest
Web was developed to share information among the users through internet as some hyperlinked documents.
If someone wants to collect some data from the web he has to search and crawl through the documents to
fulfil his needs. Concept of Linked Data creates a breakthrough at this stage by enabling the links within
data. So, besides the web of connected documents a new web developed both for humans and machines, i.e.,
the web of connected data, simply known as Linked Data Web. Since it is a very new domain, still a very
few works has been done, specially the publication of legacy data within a University domain as Linked
Data.
Maximum Spanning Tree Model on Personalized Web Based Collaborative Learning ...ijcseit
Web 3.0 is an evolving extension of the current web environme bnt. Information in web 3.0 can be
collaborated and communicated when queried. Web 3.0 architecture provides an excellent learning
experience to the students. Web 3.0 is 3D, media centric and semantic. Web based learning has been on
high in recent days. Web 3.0 has intelligent agents as tutors to collect and disseminate the answers to the
queries by the students. Completely Interactive learner’s query determine the customization of the
intelligent tutor. This paper analyses the Web 3.0 learning environment attributes. A Maximum spanning
tree model for the personalized web based collaborative learning is designed.
Maximum Spanning Tree Model on Personalized Web Based Collaborative Learning ...ijcseit
Web 3.0 is an evolving extension of the current web environme bnt. Information in web 3.0 can be collaborated and communicated when queried. Web 3.0 architecture provides an excellent learning experience to the students. Web 3.0 is 3D, media centric and semantic. Web based learning has been on
high in recent days. Web 3.0 has intelligent agents as tutors to collect and disseminate the answers to the queries by the students. Completely Interactive learner’s query determine the customization of the intelligent tutor. This paper analyses the Web 3.0 learning environment attributes. A Maximum spanning
tree model for the personalized web based collaborative learning is designed.
Majority of the computer or mobile phone enthusiasts make use of the web for searching
activity. Web search engines are used for the searching; The results that the search engines get
are provided to it by a software module known as the Web Crawler. The size of this web is
increasing round-the-clock. The principal problem is to search this huge database for specific
information. To state whether a web page is relevant to a search topic is a dilemma. This paper
proposes a crawler called as “PDD crawler” which will follow both a link based as well as a
content based approach. This crawler follows a completely new crawling strategy to compute
the relevance of the page. It analyses the content of the page based on the information contained
in various tags within the HTML source code and then computes the total weight of the page. The page with the highest weight, thus has the maximum content and highest relevance.
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering& Science are selected through rigorous peer reviews to ensure originality, timeliness, relevance, and readability.
Classification-based Retrieval Methods to Enhance Information Discovery on th...IJMIT JOURNAL
The widespread adoption of the World-Wide Web (the Web) has created challenges both for society as a whole and for the technology used to build and maintain the Web. The ongoing struggle of information retrieval systems is to wade through this vast pile of data and satisfy users by presenting them with information that most adequately it’s their needs. On a societal level, the Web is expanding faster than we can comprehend its implications or develop rules for its use. The ubiquitous use of the Web has raised important social concerns in the areas of privacy, censorship, and access to information. On a technical level, the novelty of the Web and the pace of its growth have created challenges not only in the development of new applications that realize the power of the Web, but also in the technology needed to scale applications to accommodate the resulting large data sets and heavy loads. This thesis presents searching algorithms and hierarchical classification techniques for increasing a search service's understanding of web queries. Existing search services rely solely on a query's occurrence in the document collection to locate relevant documents. They typically do not perform any task or topic-based analysis of queries using other available resources, and do not leverage changes in user query patterns over time. Provided within are a set of techniques and metrics for performing temporal analysis on query logs. Our log analyses are shown to be reasonable and informative, and can be used to detect changing trends and patterns in the query stream, thus providing valuable data to a search service.
Similar to SEMANTIC WEB: INFORMATION RETRIEVAL FROM WORLD WIDE WEB (20)
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
International Journal on Cybernetics & Informatics (IJCI), Vol. 2, No. 6, December 2013

SEMANTIC WEB: INFORMATION RETRIEVAL FROM WORLD WIDE WEB
Javad Pashaei Barbin and Isa Maleki
Department of Computer Engineering, Naghadeh Branch, Islamic Azad University, Naghadeh, Iran
ABSTRACT
The vast amount of information on the web has made accurate search and integration of information increasingly difficult. One attractive approach to coping with this redundancy of information is the Semantic Web (SW). To structure information, improve search, and expose the meaning of content, a technology is needed that creates relationships among the information already on the World Wide Web (WWW) and makes its meaning explicit. The SW establishes meaningful relationships among pieces of information and has the potential to revolutionize Information Retrieval (IR) on the web. The SW extends the existing web by equipping it with semantic elements and content mining, so that coherent and accurate information can be produced. It provides a mechanism by which information becomes understandable to machines. The SW can be viewed as an effective way of representing information on the web, or as a universal link to a global information database. Because information resources on the web are heterogeneous and decentralized, tools are needed for integrating information, along with techniques for processing it. Ontology is a suitable solution for fast and correct access to shared information. The SW uses ontology to provide a conceptual and relational structure, enabling information to be accessed by users and retrieved intelligently. In this paper we discuss the characteristics, advantages, architecture, and problems of the SW, and the need to implement it in the WWW.
KEYWORDS
Semantic Web, Ontology, Information Retrieval, World Wide Web
1. INTRODUCTION
The rapid growth of resources on the WWW, the heterogeneity of subject matter, the sheer volume of resources, and the diversity of users and their information needs are among the problems that make IR on the web more complex every day [1]. Although features of the web environment such as its discontinuity and the lack of explicit relations among different pieces of information are among its most notable characteristics, they can also lead to problems in IR. For instance, users often cannot easily find the information they need when searching the web. The number of web users and searchers grows daily, and IR in this environment becomes more complex over time. IR on the web faces challenges in retrieving large amounts of both wanted and unwanted information, and overcoming these challenges requires finding ways to simplify the search process for all users [2].

DOI: 10.5121/ijci.2013.2602
With the development of the WWW and the huge amount of information stored on it, techniques are now used to retrieve data from the web and deliver the results of these retrieval processes to users. The web is one of the largest general information resources and consists of both structured and unstructured information [3]. Search engines give users information from which they can draw conclusions about the data and the logical relations among them, but when the amount of data is large, even expert and experienced users cannot find the useful information within it [4]. Because of the instability and unbounded breadth of resources in the web environment, retrieval returns very large amounts of information. When we search for a text that can be described in different ways, the search results differ and different concepts are retrieved by the search engines [5], because different people may use different words to express the same concept. The SW, however, takes factors such as the relations among data into account and tries to make the retrieval process both fast and highly accurate [6, 7]. The emergence of the SW is a new revolution in accessing and retrieving useful information on the web: it makes possible automatic access to data based on meaning that machines can process [8, 9].
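The synonym problem described above can be made concrete with a small sketch. The following toy example (not from the paper; all document texts and the concept map are hypothetical) shows how plain keyword matching misses a document that expresses the same concept in different words, and how even a minimal word-to-concept mapping recovers it:

```python
# Toy illustration of keyword search vs. concept-aware search.
docs = {
    "d1": "cheap flights to Paris",
    "d2": "inexpensive airfare to Paris",
}

def keyword_search(term, docs):
    """Return documents whose text literally contains the term."""
    return [d for d, text in docs.items() if term in text]

# A minimal concept map: different words, one shared concept.
concept_of = {"cheap": "LowCost", "inexpensive": "LowCost", "affordable": "LowCost"}

def concept_search(term, docs):
    """Return documents containing any word that maps to the term's concept."""
    target = concept_of.get(term, term)
    hits = []
    for d, text in docs.items():
        if any(concept_of.get(w, w) == target for w in text.split()):
            hits.append(d)
    return hits

print(keyword_search("cheap", docs))   # ['d1']
print(concept_search("cheap", docs))   # ['d1', 'd2']
```

Real SW systems achieve this through ontologies rather than a flat synonym table, but the underlying gain is the same: retrieval by shared meaning instead of shared characters.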
In the web environment, storing and disseminating information is easy, but the information retrieved may not match users' needs. IR is a field concerned with the structure, analysis, organization, storage, and searching of information. Web pages contain a large amount of structured data that appears on the page without any particular format or structure [10]. Search engines must be able to retrieve this structured data from the web in order to present richer results to users. Today's search engines return any page containing a word that matches the user's query term, but with the daily growth of information on the web and of users' information needs, the existing search engines can no longer meet all of those needs. Retrieval from the web must therefore be structured and understandable for all users. Accordingly, the goal of the SW is not to turn the web into a network of redundant information but into a network of useful information [11, 12]. The SW aims to make IR depend not on scattered resources but on the concepts made explicit within those resources; here, "semantic" means meaning that can be processed by a machine. To reach this goal, the SW must reconcile representations with the realities of web resources. The SW is a solution to the problems of storing and retrieving data: it aims to make information on the web shared intelligently, so that it can be understood not only by human beings but also by machines [13, 14].
One of the most important goals of the SW is to revolutionize IR. At present, IR on the web generally works by matching the searched word or words against the keywords occurring in web pages, whereas the SW searches according to the subject, the relationships between pieces of information, and the type of information. The SW is the third generation of the web: it retrieves web information that is related in order to present focused search results [15], which is very effective for IR. The goal of the SW is therefore retrieval on a broad and efficient scale that is not only fast and accurate in finding the information users need, but that also enables machines to serve users automatically, finding information intelligently and passing it to search engines for retrieval [16, 17].
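The contrast drawn above between keyword matching and meaning-aware retrieval can be illustrated with a minimal sketch. The documents and the synonym table below are invented for illustration; in a real SW system such term relations would come from an ontology rather than a hand-made dictionary.

```python
# Sketch: literal keyword matching vs. synonym-aware matching.
# DOCS and SYNONYMS are hypothetical examples, not real data.

DOCS = {
    "d1": "the car dealership sells new vehicles",
    "d2": "an automobile is parked outside",
    "d3": "the museum exhibits ancient pottery",
}

# Hand-made synonym sets standing in for ontology-derived relations.
SYNONYMS = {
    "car": {"car", "automobile", "vehicle", "vehicles"},
}

def keyword_search(query, docs):
    """Return ids of documents containing the literal query word."""
    return sorted(d for d, text in docs.items() if query in text.split())

def semantic_search(query, docs):
    """Expand the query with ontology-style synonyms before matching."""
    terms = SYNONYMS.get(query, {query})
    return sorted(d for d, text in docs.items() if terms & set(text.split()))

print(keyword_search("car", DOCS))   # misses the "automobile" document
print(semantic_search("car", DOCS))  # finds it via the expanded terms
```

The keyword search returns only the document that literally contains "car", while the expanded search also retrieves the synonymous "automobile" document, which is the gap the SW aims to close.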
International Journal on Cybernetics & Informatics (IJCI), Vol. 2, No. 6, December 2013
The most important challenge posed by the large amount of information stored in digital web libraries concerns its classification, placement and retrieval. From the moment they were first built, digital libraries have tried to classify information, keep it up to date and keep it accessible. In fact, many digital libraries today cannot meet users' needs and are unable to search or retrieve information efficiently. Digital libraries are a means of collecting scholarly web content and producers' multimedia elements. In [18], a semantic digital library capable of smart searching and of accessing better information is designed on the basis of the SW. The SW-based specifications proposed for digital libraries make IR more accurate; using the SW in digital libraries can therefore strongly affect the structure and synthesis of information.
One application of the SW is in Question Answering (QA) systems. In a QA system, no fixed grammatical structure is required to pose a question; retrieval in these systems relies on natural language processing techniques. A QA system is a complex form of IR that must confront a large amount of information in order to process material related to a specific field. The goal of a QA system is to answer users' questions accurately while reducing the amount of irrelevant retrieved information. Research [19] shows that with the SW, a QA system can identify and analyze users' questions more accurately. The results show that the SW can be a suitable basis for grammar-free question-based search, with more accurate retrieval.
In this paper, the effect of the SW on IR in the web environment is studied. The paper is organized as follows: Section 2 explains the SW; Section 3 discusses the SW and ontology on the web; and Section 4 presents the conclusion and future work.
2. SEMANTIC WEB
The SW is a more complete form of the present web. It was proposed, under the World Wide Web Consortium (W3C), to create a semantic environment for web information and to achieve goals such as searching, retrieving, combining and transferring information in the existing web environment [20]. There are many definitions of the SW; the following three are among the most important [21, 22]:
- The SW is a global technology for relating information in a way that can be understood and processed by computers.
- The SW is a global-scale network of information organized so that machines can easily process it.
- The SW is a technology for making web information intelligent, so that it can be processed and understood by machines.
The web environment is important from two points of view: quality and quantity [23, 24]. Quantitatively, the amount of information hidden in the web is far larger than the surface, visible part; qualitatively, that hidden information is often a valuable resource. For instance, many of the online databases available on the web are not accessible to search engines. Given that search engines are not the only way to reach information, the SW should be used to reduce this level of hiddenness from search engines and to increase the web's utilization.
2.1. Semantic Web Architecture
In SW the layered architecture is implemented and designed in a way that any layer must be able
to understand the others and process the others data [20]. In Figure (1), the SW architecture is
shown.
Figure 1: Semantic Web Architecture.
The SW is a new architecture for the WWW that places the concepts of the existing web alongside a formal meaning understandable to machines [20]. The layers of the SW architecture are shown in Figure 1; each layer has its own responsibilities.
Unicode and Uniform Resource Identifier (URI): This layer represents text and the way it is transmitted to the web environment; it also makes all resources in the web environment addressable and describable. HTML pages are documents implemented at this layer. A URI is used to identify concepts in the SW; in effect, it identifies resources on the web. Unicode provides multi-language support.
Extensible Markup Language (XML): This layer provides a standard language for serializing data using tags. Like HTML it is a markup language and to some extent follows the same approach [25, 26], but it differs in that its information is stored in a syntactic form that is easy to access and to link. This lets the different elements of a web page be given their own distinct attributes.
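The point that XML carries information in explicit, machine-addressable tags can be shown with a small sketch using Python's standard library. The document fragment is hypothetical; what matters is that each element can be addressed by name rather than by its visual position, unlike presentation-oriented HTML.

```python
import xml.etree.ElementTree as ET

# A hypothetical XML fragment: the same facts HTML would only render
# visually are carried here by explicit, machine-readable tags.
doc = """
<article>
  <title>Semantic Web and IR</title>
  <author>A. Researcher</author>
  <year>2013</year>
</article>
"""

root = ET.fromstring(doc)
# Elements are addressed by tag name, so a program can pick out
# exactly the piece of information it needs.
title = root.findtext("title")
year = int(root.findtext("year"))
print(title, year)
```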
Resource Description Framework (RDF) and RDF Schema (RDFS): RDF is an XML-based language created to describe concepts and build documents in the SW. Using a set of mathematical and semantic relations, it can express the relations between data in a logical, directly accessible way [27]. RDF documents can describe terms in a machine-understandable manner. Today, languages such as HTML and XML are used to present the content of web pages, but they all fall short in representing the semantic structure of documents and cannot be used for ontologies. Standard languages have therefore been created on the web for encoding, exposing and relating data. These languages are logic-based, frame-based or web-based [28], and the W3C developed RDF along these lines. RDF is a language for encoding the information in web pages; it uses XML to describe resources and provides a new basis for processing metadata [29]. This model is widely used in the creation of metadata. Metadata can link the textual information in documents to the concepts in an ontology, and can also link the information in a document to instances of those concepts. These instances are stored in a knowledge base; thus the ontology bears the same relationship to the knowledge base as a relational database schema bears to the information in the database.
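The heart of the RDF model described above is the subject-predicate-object triple. The following minimal sketch stores triples as Python tuples and answers wildcard pattern queries; the `ex:` identifiers are invented stand-ins for real URIs, and a production system would use an RDF library and SPARQL instead.

```python
# RDF statements are subject-predicate-object triples. The URIs
# below are illustrative placeholders, not a real vocabulary.

triples = {
    ("ex:Tehran", "ex:capitalOf", "ex:Iran"),
    ("ex:Iran", "rdf:type", "ex:Country"),
    ("ex:Tehran", "rdf:type", "ex:City"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return sorted(t for t in triples
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

# "What do we know about Tehran?" - every statement with that subject.
print(match(s="ex:Tehran"))
```

Because every fact has the same three-part shape, queries over relations (all `rdf:type` statements, everything about one resource) fall out of a single pattern-matching routine, which is what makes RDF data directly machine-processable.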
Ontology: Ontologies form one of the SW layers and are its foundation; ontology is the main and basic element of the SW. The SW links real-world meaning, understandable by humans, to the web environment. In the SW, the relationship between the concepts in web documents and the ontology is made explicit [30]. Using ontologies, machines can understand and process documents and relate them to one another [31]. The main components of the SW are therefore ontologies, which relate the document tags of the SW to the real things those documents describe [32]. This layer plays an important role in the SW structure. Creating a shared understanding of the terms used in specific web domains is also the responsibility of the ontology; in other words, the ontology identifies the relation between the concepts in web documents and the real world. In this way, documents can be understood and processed by machines, and data sharing becomes easier. On the web, ontologies provide a common conceptualization of domains. Such a shared understanding is needed to solve the problem of multiple meanings, since two application programs may use two different words for the same meaning, or the same word for different meanings. Ontologies in effect create semantic relations: an ontology establishes a common understanding of a domain to ease interaction between people and between the heterogeneous, distributed systems on the web [33].
An ontology is a conceptual model that formally and explicitly models the real entities of a specific domain and the relations between them. Using ontologies in the SW leads to the structural organization of information and provides a consistent format for it. Ontologies not only help identify information elements but also contribute to identifying descriptions across different disciplines. Ontology is therefore a method for organizing information and improving retrieval in the web environment. The language of ontology is the Web Ontology Language (OWL). OWL can be used as the language for representing and describing the formal concepts of an ontology [34]; it specifies formal methods for using and processing meanings alongside their symbolic representation. In this language, searching for and discovering relations between concepts, finding inconsistencies in an ontology, and processing the information inside documents can be done easily, and the explicit meaning of terms and the relations between them can be expressed. The huge increase in the amount and complexity of information reachable on the WWW has created strong demand for tools and techniques that can handle data semantically. Current practice in IR mostly relies on keyword-based search over full-text data, which is modeled as a bag of words; such a model, however, misses the actual semantic information of the text. To deal with this issue, ontologies have been proposed [35] for knowledge representation and are nowadays the backbone of SW applications. Both information extraction and IR can benefit from such metadata, which gives semantics to plain text.
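The bag-of-words limitation mentioned above can be made concrete in a few lines: a text becomes an unordered word-count vector, so two texts that share no surface words score zero overlap even when they mean the same thing. The sentences are invented examples.

```python
from collections import Counter

# Bag-of-words sketch: word order and meaning are discarded; only
# surface word counts remain.

def bag_of_words(text):
    return Counter(text.lower().split())

def overlap(a, b):
    """Number of shared word occurrences between two bags."""
    return sum((a & b).values())

doc = bag_of_words("The physician treated the patient")
q1 = bag_of_words("physician")
q2 = bag_of_words("doctor")  # a synonym, but a different surface word

print(overlap(doc, q1))  # literal match found
print(overlap(doc, q2))  # zero: the shared meaning is invisible
```

The synonymous query scores zero against the document; supplying the missing relation between "doctor" and "physician" is exactly the role the ontology layer plays.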
Logic: The logic layer provides a defined format and standard rules for the search engines that are responsible for producing the final information in the SW. This layer, located above the ontology layer, is used to describe terms at a machine-understandable level. At the ontology layer a search engine can understand the basic SW concepts; the logic layer is then used to strengthen the semantic processing power of search engines.
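The kind of rule-based reasoning the logic layer enables can be sketched as forward chaining: applying a rule to known facts until no new facts follow. Here the single rule is the transitivity of a "subClassOf" relation over a toy class hierarchy; the class names are illustrative only, and real SW reasoners handle far richer rule sets.

```python
# Forward-chaining sketch: derive implied subClassOf facts from a
# transitivity rule. The facts are an invented toy hierarchy.

facts = {
    ("Dog", "subClassOf", "Mammal"),
    ("Mammal", "subClassOf", "Animal"),
}

def closure(facts):
    """Apply 'a sub b and b sub c implies a sub c' until fixpoint."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, _, b) in list(derived):
            for (c, _, d) in list(derived):
                if b == c and (a, "subClassOf", d) not in derived:
                    derived.add((a, "subClassOf", d))
                    changed = True
    return derived

inferred = closure(facts)
print(("Dog", "subClassOf", "Animal") in inferred)  # derived, not stated
```

The fact that a dog is an animal was never stated; the machine derived it, which is what lets a semantic search engine answer queries about concepts the page author never wrote down explicitly.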
Proof: Proof of rules means that machines must be able to rely on the rules when retrieving information from a large body of data. These rules are in practice scripts and software programs.
Trust: One of the goals of the SW is to establish trust. Since search operations in the SW are performed by search engines, users must feel confident about the security, correctness and quality of storing and retrieving information. This layer is particularly important for online shopping, from the security and trust points of view.
2.2. Advantages of Semantic Web
Search techniques on the existing web, in contrast to the SW, focus on finding documents via keywords and ignore the semantic relations among resources. Some of the advantages of the SW are as follows [36, 37, 38]:
- Expanding search and visualizing the relations between descriptions
- Automating query formulation in searches
- Classifying subjects according to users' information searches
- Solving the problems of keyword search
- Creating a semantic structure in a specific field, together with the relationships within it
- Simplifying the process of discovering and retrieving resources
- Supporting structured learning and presentation of information
- Languages and standards that are more advanced and complete than HTML
- Intelligent retrieval of information in a formal, meaningful format
In the SW, as in the present web, the HTTP protocol is used for transferring data, but XML is used instead of HTML, and URIs instead of URLs to identify resources. In the SW, every piece of information is described according to its specifications, which simplifies the processing and retrieval of information. Figure 2 presents an IR system for the web environment based on the SW.
Figure 2: IR from the web environment based on the SW.
The main driver of the development and growth of the SW is the production of ontologies in different domains. Ontology production is usually time-consuming, error-prone and dependent on domain-engineering knowledge in a specific field; one of its main problems is obtaining valid and complete terms. In [39], an ontology is proposed that uses web-page sampling, natural language processing, statistical analysis and IR techniques. Its main goal is to build a large set of terms and basic concepts automatically, making ontology production easier and faster. The web pages relevant to a user's search are then retrieved by a crawler guided by the terms of the ontology. The crawler behaves much like a search engine's crawler, except that when it examines a web page, it saves and indexes the page only if it is relevant; otherwise it skips it.
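The selective indexing idea attributed to [39] can be sketched with a relevance filter: a page is indexed only if it mentions enough ontology terms. The pages, the term list and the threshold below are all invented for illustration; a real focused crawler would fetch live URLs and follow links.

```python
# Sketch of ontology-guided crawling: index a page only when its
# coverage of ontology terms passes a threshold. All data is simulated.

ONTOLOGY_TERMS = {"ontology", "rdf", "semantic", "metadata"}

def relevance(page_text, terms=ONTOLOGY_TERMS):
    """Fraction of ontology terms that appear in the page."""
    words = set(page_text.lower().split())
    return len(words & terms) / len(terms)

def crawl(pages, threshold=0.25):
    """Index only the pages whose term coverage passes the bar."""
    index = {}
    for url, text in pages.items():
        if relevance(text) >= threshold:
            index[url] = text
    return index

pages = {
    "http://a.example": "rdf metadata for the semantic web",
    "http://b.example": "cooking recipes and restaurant reviews",
}
print(sorted(crawl(pages)))  # only the on-topic page is indexed
```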
Castells et al. [40] showed how a comprehensive, personalized retrieval framework can be used to build user requirements into an ontology-based user profile. The requirements for the ontology are extracted by applying clustering algorithms to terms in the documents the user selects for viewing; the profile enhances personalized search results in terms of precision and recall. In [41], a personalized browsing method is proposed using a personal ontology that builds a user profile from the web pages the user has visited. The similarities between the terms in web pages and the concepts of a reference ontology are calculated to define the personal ontology, which reflects the user's interests and produces moderate improvements when applied to search results. Mittal et al. [42] suggested a hybrid approach to personalized IR using the user's ontology, a dynamic user profile, and collaborative filtering. The role of the ontology is to expand the user's query and to identify the user's context as expressed in the user profile, which is divided into short-term and long-term preferences and is updated often. Moreover, the search engine can reorganize the personalized search results to be more useful, since relevant documents are recommended from similar users, a list of whom is generated by a collaborative filtering algorithm.
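The common thread of these personalization approaches, scoring documents against a profile of the user's interests, can be sketched with cosine similarity over concept weights. The profile and document vectors below are hypothetical; the cited systems derive such weights from ontologies and browsing history rather than by hand.

```python
import math

# Sketch of profile-based re-ranking in the spirit of [40-42]:
# each document is scored against a user-interest vector.

def cosine(a, b):
    """Cosine similarity between two sparse weight vectors (dicts)."""
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical profile built from pages the user has viewed.
profile = {"ontology": 0.9, "retrieval": 0.7}

docs = {
    "d1": {"ontology": 1.0, "retrieval": 1.0},
    "d2": {"football": 1.0},
}

ranked = sorted(docs, key=lambda d: cosine(profile, docs[d]), reverse=True)
print(ranked)  # the profile-relevant document comes first
```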
In [43], a novel approach to SW search is presented that allows web search queries to be processed semantically relative to an underlying ontology, and allows ontology-based complex web search queries involving reasoning over the web to be evaluated. The authors showed how the approach can be implemented on top of standard web search engines and ontological inference technologies. They developed the formal model behind the approach and generalized the PageRank technique to it. They then provided a technique for processing SW search queries, consisting of an offline ontological inference step and an online reduction to standard web search queries (which can be implemented using efficient relational database technology), and proved it ontologically correct (and in many cases also ontologically complete). The offline inference compiles terminological knowledge into completed annotations, which have polynomial size, can be computed in polynomial time, and are also rather small in practice. They reported on two prototype implementations in desktop search and provided very positive experimental results on the running time of the online query processing step and on the precision and recall of their approach to SW search. They also showed that SW search can readily be applied to existing web pages without annotations.
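For readers unfamiliar with the PageRank technique that [43] generalizes, a plain power-iteration version is sketched below. The three-page link graph is made up; the ontological generalization in [43] goes well beyond this basic form.

```python
# Plain PageRank by power iteration over a tiny invented link graph.
# links maps each page to the set of pages it links to.

def pagerank(links, damping=0.85, iters=50):
    pages = sorted(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {}
        for p in pages:
            # A page inherits rank from every page linking to it,
            # split evenly across that page's outgoing links.
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            new[p] = (1 - damping) / n + damping * incoming
        rank = new
    return rank

links = {"a": {"b"}, "b": {"a", "c"}, "c": {"a"}}
rank = pagerank(links)
print(max(rank, key=rank.get))  # the most heavily linked-to page wins
```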
3. DISCUSSION
Ontologies are among the tools of the SW. An ontology provides a way to represent knowledge so that machines can interpret information; in fact, an ontology is a precise specification of a shared conceptualization. An ontology models a domain of knowledge using a set of primitives such as classes, properties and relations, thereby producing descriptions of the data for application programs. Many ontologies already exist in fields such as the medical sciences, and their use grows daily. Since most SW ontologies overlap one another across different areas, pre-built ontologies can be reused in building new ones. Many of these ontologies, however, are very large: maintaining ontologies that sometimes contain thousands of concepts is hard, and mining and examining a whole ontology for reuse is difficult, costly and time-consuming. It is, however, possible to modularize ontologies and reuse only the part that applies; this reduces the complexity of the ontology and makes understanding, and then data management, easier.
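The modularization idea can be sketched simply: instead of loading a whole ontology, extract only the concepts reachable from a seed concept. The tiny medical hierarchy below is invented; real module-extraction methods over OWL ontologies are considerably more involved.

```python
# Sketch of ontology modularization: take the seed concept and its
# ancestors instead of the whole hierarchy. The hierarchy is a toy.

SUBCLASS = {  # child -> parent edges of an invented medical ontology
    "Flu": "ViralDisease",
    "ViralDisease": "Disease",
    "Fracture": "Injury",
    "Disease": "Condition",
    "Injury": "Condition",
}

def module(seed):
    """Collect the seed concept and all of its ancestors."""
    out = [seed]
    while out[-1] in SUBCLASS:
        out.append(SUBCLASS[out[-1]])
    return out

print(module("Flu"))  # a small, self-contained slice of the ontology
```

An application that only needs to reason about influenza loads four concepts rather than the whole hierarchy, which is the complexity reduction the paragraph above describes.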
An ontology is a powerful tool for representing and describing the knowledge of a specific field in a formal, machine-processable format. With ontologies it is possible to connect heterogeneous systems and to improve the interaction among programs, machines and heterogeneous systems. In practice, however, relying too heavily on ontologies can be problematic. In SW discussions, ontology is a core infrastructure: if two systems use one ontology to interact, that ontology naturally serves as a common language, shared and understandable by both. But given that two ontologies built independently take two different perspectives, they cannot be expected to conform to each other completely, and a system using one of them cannot simply use the other. In other words, although ontology is the solution for heterogeneous systems, it adds a layer of heterogeneity of its own to the web; the lack of conformity between ontologies is therefore a point that must be considered. Furthermore, if all parts of an ontology relate to a specific area, they must also relate to each other. This is required not only from a logical and conceptual point of view, but also so that the computations and descriptions performed over the ontology graph can be controlled.
Ontology description languages, inference systems and engines, as well as implementations with reasoning capabilities, have been developed to improve IR and, where possible, to enable knowledge discovery. Examining existing approaches reveals that no well-established, standardized methodology is generally followed. In addition, the expressiveness achieved varies and does not always fulfill the expressiveness needs of the SW. Finally, current approaches for composing and performing intelligent queries do not seem satisfyingly declarative: most of the time, the user is burdened with the task of collecting the information needed to construct the query.
Although ontology has many advantages, it has not yet been fully adopted on the web, for the following reasons:
- Accessibility: SW content resembles present web content but is built using ontologies over concepts. The present web content must therefore be transferred to the SW environment, and given the volume of information this is currently very difficult.
- Expandability: Ontologies usable across all domains have not been developed yet, and the methodologies and standard tools for creating ontologies are still evolving.
- Scalability: No firm, optimized solution has been created for storing and indexing SW pages, or for searching the SW environment.
- Instability: The languages used in the SW are still evolving, which makes using the SW difficult.
- Ambiguity: The SW contains ambiguous terms for which no unique interpretation exists. Combining these terms to create new concepts makes the entry of the SW into the web environment more difficult and adds to the ambiguity.
Many efforts are under way to overcome these challenges, and it is likely that the need for the SW will be felt more strongly and that the present web will be transformed into the SW in the future.
The present web contains an enormous amount of information and serves many different users, yet it still delivers information through tools such as hypertext links and keyword search. This means that current web technologies cannot execute complex search processes in an integrated manner or meaningfully simplify information processing. In addition, the diversity and wide dispersal of information, the growth of web content, and error-prone link specifications have all created problems for retrieving information through search. Even search engines cannot fully reflect the information in the web environment. SW solutions should therefore be used in designing search engines to organize the sharing, transfer and retrieval of data. In the SW, the structure of data is described in a way that makes retrieval possible according to syntactic and conceptual identification, properties, and the relations among pieces of information. On this basis, the SW produces the data structure and enables data transfer across the fields assumed to be related.
Technology and the web's capabilities for integrating information in specific fields have created new tools and situations for the IR process. The SW is a suitable tool for organizing information, for classification and for semantic relations. Ontology, which provides a set of formal descriptions of the specific concepts of a field and defines the relations among them, is used in the SW and also paves the way for forming a knowledge base in the related fields. The SW is a tool for removing the problems of keyword search by using semantic structure to simplify discovery and retrieval, expand queries, visualize the relations among words, and search by concepts. The main goal of information organization is to discover hidden data and its links to other data and concepts stored in printed texts, databases or online repositories. To access this hidden knowledge, the data and its links to other data or concepts must be defined in an integrated manner. In addition, the rapid growth of and fast changes in the web hamper useful IR and have made web mining one of the main shortcomings of search engines; the SW is therefore a good solution for increasing the efficiency of the IR process in search engines. Another need in the information organization process is standard languages that allow structured information to be transferred and linked; today, XML and RDF are working to simplify information in the web environment.
SW-based information retrieval is carried out through ontology. Because such varied information is presented in the web environment, the SW cannot easily retrieve useful information automatically on its own. To solve this problem and to discover SW services, an intelligent component, namely the ontology, must be used; as a result, the ontology can understand how pieces of information are related.
4. CONCLUSION AND FUTURE WORKS
Web emergence is an information resource, which holds challenges and chances for retrieving the
useful information. The large amount of the different information like information databases,
journals, electronic resources and different documents in WWW has made information searching
be an inaccessible challenge. So, although web is the most vast information resource, a big part of
the information on it holds no favorite organization and finally it faces problems. Even the parts
of web like continuous information databases which are organized in saving and retrieving
system of themselves lack the relation with each other. IR tools like search engines are not
efficient in optimized retrieval of this large amount of information. So, SW is the best method for
effective management of the information and integration of information and conceptual
information processing. SW has produced a solution by which needed information for the users
are efficiently retrieved and the large amount of information is not faced. SW could be imagined
as a global space of intelligent calculations in which the data centers, information databases or
web pages or any other resources of information are located next to each other in a meaningful
manner and holding the ability of understanding each other. The most important layer in SW is
the ontology. Ontology produces a dynamic relation of the existing information in web resources
for special fields. Ontology can retrieve a vast volume of correct and continuous information
using the semantic relationships in a short time. By this paper we hope will be able to design a
system in future which will be able to retrieve the information in web environment accurately and
related to the search commands of the users.
REFERENCES
[1] K. Selvakumar, S. Sendhilkumar, "Challenges and Recent Trends in Personalized Web Search: A Survey", Third International Conference on Advanced Computing, IEEE, Chennai, pp. 333-339, 2011.
[2] A. Laender, B.A.R. Neto, A.S. da Silva, J.S. Teixeira, "A Brief Survey of Web Data Extraction Tools", ACM Sigmod Record, Vol. 31, No. 2, pp. 84-93, 2002.
[3] G. Chengxia, H. Dongmei, "Research on Domain Ontology Based Information Retrieval Model", International Symposium on Intelligent Ubiquitous Computing and Education, IEEE, Chengdu, pp. 541-543, 2009.
[4] H. Dong, F.K. Hussain, E. Chang, "A Survey in Semantic Web Technologies-Inspired Focused Crawlers", Third International Conference on Digital Information Management, IEEE, London, pp. 934-936, 2008.
[5] H. Sheng, H. Chen, T. Yu, Y. Feng, "Linked Data based Semantic Similarity and Data Mining", In Proceedings of the IEEE International Conference on Information Reuse & Integration, pp. 104-108, 2010.
[6] W. Li, X. Zhang, X. Wei, "Semantic Web-Oriented Intelligent Information Retrieval System", International Conference on BioMedical Engineering and Informatics, IEEE, Sanya, Vol. 1, pp. 357-361, 2008.
[7] K. Saruladha, G. Aghila, S. Raj, "A Survey of Semantic Similarity Methods for Ontology Based Information Retrieval", Second International Conference on Machine Learning and Computing, IEEE, Bangalore, pp. 297-301, 2010.
[8] U. Shah, T. Finin, A. Joshi, R.S. Cost, J. Mayfield, "Information Retrieval on the Semantic Web", In 10th International Conference on Information and Knowledge Management, 2003.
[9] J. Huiping, "Information Retrieval and the Semantic Web", International Conference on Educational and Information Technology (ICEIT), Chongqing, Vol. 3, pp. 461-463, 2010.
[10] S. Raman, V.K. Chaurasiya, S. Venkatesan, "Performance Comparison of Various Information Retrieval Models used in Search Engines", International Conference on Communication, Information & Computing Technology, IEEE, Mumbai, pp. 1-4, 2012.
[11] J. Jiang, Z. Wang, C. Liu, Z. Tan, X. Chen, M. Li, "The Technology of Intelligent Information Retrieval Based on the Semantic Web", 2nd International Conference on Signal Processing Systems (ICSPS), IEEE, Dalian, China, Vol. 2, pp. 824-827, 2010.
[12] W. Yong-Gui, J. Zhen, "Research on Semantic Web Mining", International Conference on Computer Design and Applications (ICCDA), IEEE, Qinhuangdao, Vol. 1, pp. 67-70, 2010.
[13] G. Madhu, A. Govardhan, T.V. Rajinikanth, "Intelligent Semantic Web Search Engines: A Brief Survey", International Journal of Web & Semantic Technology (IJWesT), Vol. 2, No. 1, pp. 34-42, 2011.
[14] S. Panigrahi, S. Biswas, "Next Generation Semantic Web and Its Application", IJCSI International Journal of Computer Science Issues, Vol. 8, Issue 2, pp. 385-392, 2011.
[15] J. Mustafa, S. Khan, K. Latif, "Ontology Based Semantic Information Retrieval", 4th International IEEE Conference on Intelligent Systems, IEEE, Varna, Vol. 3, pp. 14-19, 2008.
[16] O. Dridi, M.B. Ahmed, "Building an Ontology-Based Framework for Semantic Information Retrieval: Application to Breast Cancer", 3rd International Conference on Information and Communication Technologies: From Theory to Applications, IEEE, Damascus, pp. 1-6, 2008.
[17] V. Jain, M. Singh, "Ontology Based Information Retrieval in Semantic Web: A Survey", International Journal of Information Technology and Computer Science, Vol. 10, pp. 62-69, 2013.
[18] J. Qiu, Y. Song, F. Ma, "Digital Library System Integration Model Construction Based on Semantic Web Service", International Conference on Management and Service Science (MASS), IEEE, Wuhan, pp. 1-4, 2010.
[19] Q. Guo, M. Zhang, "Semantic Information Integration and Question Answering Based on Pervasive Agent Ontology", Expert Systems with Applications, Vol. 36, pp. 10068-10077, Elsevier Ltd, 2009.
[20] L. Zhiqiang, “Research on Information Search Mechanism Based on Semantic Web Services”,
International Symposium on Computer Science and Society, IEEE, Kota Kinabalu, pp. 142-145,
2011.
[21] B. Kapoor, S. Sharma, "A Comparative Study of Ontology Building Tools for Semantic Web Applications", International Journal of Web & Semantic Technology (IJWesT), Vol. 1, No. 3, pp. 1-13, 2010.
[22] K.S. Esmaili, H. Abolhassani, “A Categorization Scheme for Semantic Web Search Engines”, In
IEEE Computer Systems and Applications, pp. 171-178, 2006.
[23] W. Liangshen, H. Jie, X. Zaiyu, W. Xiaochen, Q. Caiyue, L. Hui, “Problems and Solutions of Web
Search Engines”, International Conference on Consumer Electronics Communications and Networks,
IEEE, XianNing, pp. 5134-5137, 2011.
[24] J.A. Josephine, S. Sathiyadevi, “Ontology Based Relevance Criteria for Semantic Web Search
Engine”, 3rd International Conference on Electronics Computer Technology (ICECT), IEEE,
Kanyakumari, Vol. 3, pp. 60-64, 2011.
[25] S. Cohen, J. Mamou, Y. Kanza, Y. Sagiv, “XSEarch: A Semantic Search Engine for XML”, Proceedings
of the International Conference on Very Large Databases, pp. 45-56, 2003.
[26] N. Yahia, S.A. Mokhtar, A. Ahmed, “Automatic Generation of OWL Ontology from XML Data
Source”, International Journal of Computer Science Issues, Vol. 9, Issue 2, No. 2, pp. 77-83, 2012.
[27] W. Zhang, Z. Deng, “Searching Semantic Relation Paths Using an RDF Digraph Retrieval Model”,
11th International Symposium on Computing and Applications to Business, Engineering & Science,
IEEE, Guilin, pp. 486-490, 2012.
[28] S. Decker, P. Mitra, S. Melnik, “Framework for the Semantic Web: an RDF Tutorial”, Internet
Computing, IEEE, Vol. 4, Issue 6, pp. 68-73, 2000.
[29] S.L. Phyue, M.M. Thein, T.T. Win, M.M.S. Thwin, “Semantic Web Information Retrieval in XML by
Mapping to RDF Schema”, International Conference on Networking and Information Technology,
IEEE, Manila, pp. 500-503, 2010.
[30] Z. Ying-Hui, F. Feng-Rui, Y. Chao, “The Method of Ontology Composition on Web”, 3rd
International Conference on Advanced Computer Theory and Engineering (ICACTE), IEEE,
Chengdu, Vol. 3, pp. 339-343, 2010.
[31] M. Sir, Z. Bradac, M.Kupcik, “Ontology and Diagnostics in Automation Technology”, 13th
International Carpathian Control Conference (ICCC), IEEE, High Tatras, pp. 649-652, 2012.
[32] M. Shamsfard, A.A. Barforoush, “Learning Ontologies from Natural Language Texts”, Human-Computer Studies, Vol. 60, No. 1, pp. 17-63, 2004.
[33] M.N. Ahmad, R.M. Colomb, J. Cole, “An Ontology Server: A Knowledge Tool for Systems
Interoperability in Semantic Web Context”, Journal of Information Technology, Vol. 18, No. 1, pp. 1-23, 2006.
[34] D.V. Gasevic, D.O. Djuric, V.B. Devedzic, “Bridging MDA and OWL Ontologies”, Journal of Web
Engineering, Vol. 4, No. 2, pp. 118-143, 2005.
[35] T.R. Gruber, “Toward Principles for the Design of Ontologies used for Knowledge Sharing”,
International Journal of Human Computer Studies, Vol. 43, pp. 907-928, 1995.
[36] F. Zeshan, R. Mohamad, “Semantic Web Service Composition Approaches: Overview and
Limitations”, International Journal on New Computer Architectures and Their Applications
(IJNCAA), Vol. 1, No. 3, pp. 640-651, 2011.
[37] R.J. Aroma, M. Kurian, “A Semantic Web: Intelligence in Information Retrieval”, International
Conference on Emerging Trends in Computing, Communication and Nanotechnology, IEEE,
Tirunelveli, pp. 203-206, 2013.
[38] N.Y. Ibrahim, S.A. Mokhtar, H.M. Harb, “Towards an Ontology Based Integrated Framework for
Semantic Web”, International Journal of Computer Science and Information Security, Vol. 10, No. 9,
pp. 91-99, 2012.
[39] C. Nédellec, A. Nazarenko, “Ontologies and Information Extraction”, Handbook on Ontologies in
Information Systems, Springer-Verlag, pp. 1-23, 2004.
[40] P. Castells, M. Fernandez, D. Vallet, P. Mylonas, Y. Avrithis, “Self-tuning Personalized Information
Retrieval in an Ontology-based Framework”, In Proceedings of the First International Workshop on
Web Semantics, LNCS, Vol. 3762, pp. 977-986, 2005.
[41] S. Gauch, J. Chaffee, A. Pretschner, “Ontology-Based Personalized Search and Browsing”, Web
Intelligence and Agent Systems, Vol. 1, pp. 219-234, 2003.
[42] N. Mittal, R. Nayak, M.C. Govil, K.C. Jain, “Evaluation of a Hybrid Approach of Personalized Web
Information Retrieval using the FIRE Data Set”, In Proceedings of the First Amrita ACM-W
Celebration on Women in Computing in India (A2CWiC '10), ACM, New York, NY, USA, 2010.
[43] B. Fazzinga, G. Gianforme, G. Gottlob, T. Lukasiewicz, “Semantic Web Search Based on
Ontological Conjunctive Queries”, Web Semantics: Science, Services and Agents on the World Wide
Web, Vol. 9, pp. 453-473, 2011.
Authors
Javad Pashaei Barbin is currently a Ph.D. candidate in the Department of Computer
Engineering, Islamic Azad University, Qazvin, Iran. He is a Lecturer and a member
of the Research Committee of the Department of Computer Engineering,
Naghadeh Branch, Islamic Azad University, Naghadeh, Iran. His research
interests are in the software project scheduling problem, machine
learning, data mining, and wireless networks.
Isa Maleki is a Lecturer in the Department of Computer Engineering,
Naghadeh Branch, Islamic Azad University, Naghadeh, Iran. He received his
M.Sc. degree from the Department of Computer Engineering, Science and Research
Branch, Islamic Azad University, West Azerbaijan, Iran, in 2013. He is a
member of the editorial and review boards of several international journals
and national conferences. He has published over 30 papers in international
journals and conference proceedings. His research interests are in
machine learning, data mining, optimization, and artificial intelligence.