The World Wide Web (WWW) allows people around the globe to share information held in large database repositories. The amount of information has grown into billions of databases, so searching it requires specialized tools known generically as search engines. Although many search engines are available today, retrieving meaningful information with them remains difficult. To overcome this problem and enable search engines to retrieve meaningful information intelligently, semantic web technologies are playing a major role. In this paper we present a survey of the generations of search engines and the role of search engines in the intelligent web and semantic search technologies.
NATURE: A TOOL RESULTING FROM THE UNION OF ARTIFICIAL INTELLIGENCE AND NATURA... (ijaia)
This paper presents the final results of a research project that aimed to construct a tool aided by Artificial Intelligence, through an Ontology with a model trained with Machine Learning, and by Natural Language Processing, to support the semantic search of research projects of the Research System of the University of Nariño. For the construction of NATURE, as this tool is called, a methodology was used that includes the following stages: appropriation of knowledge; installation and configuration of tools, libraries and technologies; collection, extraction and preparation of research projects; and design and development of the tool. The main results of the work were three: a) the complete construction of the Ontology, with classes, object properties (predicates), data properties (attributes) and individuals (instances), in Protégé, SPARQL queries with Apache Jena Fuseki, and the respective coding with Owlready2 using Jupyter Notebook with Python within the Anaconda virtual environment; b) the successful training of the model, for which Machine Learning algorithms were used, specifically Natural Language Processing tools such as spaCy, NLTK, Word2vec and Doc2vec; this was also performed in Jupyter Notebook with Python within the Anaconda virtual environment and with Elasticsearch; and c) the creation of NATURE by managing and unifying the queries for the Ontology and for the Machine Learning model. The tests showed that NATURE was successful in all the searches that were performed, as its results were satisfactory.
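The unification of ontology queries and model queries described above can be illustrated, in very reduced form, by ranking documents against a query vector. This is a minimal sketch using bag-of-words cosine similarity; the project names and descriptions are hypothetical stand-ins, not NATURE's actual corpus or its Doc2vec model.

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term-frequency vector for a short text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical project descriptions standing in for the indexed corpus.
projects = {
    "P1": "ontology based semantic search of research projects",
    "P2": "neural networks for image classification",
}

def search(query):
    """Rank projects by similarity to the query, most similar first."""
    q = vectorize(query)
    return sorted(projects, key=lambda p: cosine(q, vectorize(projects[p])),
                  reverse=True)

print(search("semantic search ontology"))  # P1 ranks first
```

A trained Doc2vec model would replace `vectorize` with learned dense vectors, but the ranking step stays the same.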
Semantic Query Optimisation with Ontology Simulation (dannyijwest)
The Semantic Web is, without a doubt, gaining momentum in both industry and academia. The word "semantic" refers to "meaning": a semantic web is a web of meaning. In this fast-changing, result-oriented practical world, gone are the days when an individual had to struggle to find information on the Internet and knowledge management was the major issue. The semantic web has a vision of linking, integrating and analysing data from various data sources and forming a new information stream: a web of databases connected with each other, and machines interacting with other machines, to yield results which are user-oriented and accurate. With the emergence of the Semantic Web framework, the naïve approach of searching for information on the syntactic web has become cliché. This paper proposes an optimised semantic search over keywords, exemplified by simulating an ontology of Indian universities with a proposed algorithm that enables effective semantic retrieval of information which is easy to access and time-saving.
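The core of such an ontology simulation is matching triple patterns against a fact base, as SPARQL does with variables. Here is a toy sketch where `None` plays the role of a SPARQL variable; the university facts are invented for illustration.

```python
# A list of (subject, predicate, object) facts; None in the query
# pattern acts as a wildcard, mimicking a SPARQL variable.
triples = [
    ("DelhiUniv", "locatedIn", "Delhi"),
    ("DelhiUniv", "type", "University"),
    ("IITBombay", "locatedIn", "Mumbai"),
    ("IITBombay", "type", "University"),
]

def query(s=None, p=None, o=None):
    """Return all triples matching the pattern (None matches anything)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Equivalent of SPARQL: SELECT ?s WHERE { ?s locatedIn "Mumbai" }
print(query(p="locatedIn", o="Mumbai"))
```

A real ontology store adds indexing and inference on top, but every SPARQL basic graph pattern reduces to matches of this shape.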
Comparison of Semantic and Syntactic Information Retrieval System on the basi... (Waqas Tariq)
In this paper an information retrieval system for local databases is discussed. The approach is to search the web both semantically and syntactically. The proposal handles search queries from a user who is interested in focused results for a product with some specific characteristics. The objective of the work is to find and retrieve accurate information from the available information warehouse, which contains related data sharing common keywords. This information retrieval system can eventually be used for accessing the internet as well. Accuracy in information retrieval, that is, achieving both high precision and high recall, is difficult. Both the semantic and the syntactic search engine are therefore compared for information retrieval using two parameters, i.e. precision and recall.
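The two comparison parameters have standard definitions, which a few lines make concrete. The document ids below are hypothetical.

```python
def precision_recall(retrieved, relevant):
    """Precision = fraction of retrieved documents that are relevant;
    recall = fraction of relevant documents that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical run: the engine returned d1..d4; d1, d2 and d5 are relevant.
p, r = precision_recall(["d1", "d2", "d3", "d4"], ["d1", "d2", "d5"])
print(p, r)  # 0.5 and 2/3
```

A semantic engine typically trades some precision for recall (or vice versa) relative to a syntactic one, which is exactly what these two numbers let the paper compare.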
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Design and Implementation of Meetings Document Management and Retrieval System (CSCJournals)
A meetings management system has components to capture, store/archive, retrieve, browse and distribute documents, and security to protect documents from unauthorized access. Lack of proper organization, storage and easy access of meeting documents, the bottleneck of keeping paper documents, slow distribution and misplacement of documents necessitated this work. Document management software that can be used to organize and maintain the records of meetings has been developed. The system, developed as a web application, is based on the use of objects and Web technologies. A search facility is included to support rapid location of topics of interest, and navigation is enabled by the use of hyperlinks. The system was implemented using ASP.NET. This document management system enables users to follow the development of any topic through several meetings of a particular body or committee; members of the body are able to have instant and full access to what has been discussed and decided about a given issue, no matter how long ago that was.
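The "follow a topic through several meetings" facility amounts to an inverted index from terms to meetings, returned in date order. A minimal sketch, with invented committee names and minutes (the actual system is in ASP.NET; Python is used here only to show the idea):

```python
from collections import defaultdict

# Hypothetical meeting minutes keyed by (committee, date).
minutes = {
    ("Senate", "2023-01-10"): "budget approval and library funding",
    ("Senate", "2023-02-14"): "library funding follow-up and staffing",
    ("Board", "2023-01-20"): "staffing policy review",
}

# Build an inverted index: term -> meetings whose minutes contain it.
index = defaultdict(set)
for meeting, text in minutes.items():
    for term in text.lower().split():
        index[term].add(meeting)

def topic_history(term):
    """All meetings that discussed the term, in chronological order."""
    return sorted(index.get(term.lower(), set()), key=lambda m: m[1])

print(topic_history("library"))
```

Each returned meeting would be rendered as a hyperlink, giving the navigation the paper describes.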
Enhanced Web Usage Mining Using Fuzzy Clustering and Collaborative Filtering ... (inventionjournals)
Information on the Internet is overloaded due to its unstable growth, which makes information search a complicated process. A Recommendation System (RS) is a tool now widely used in many areas to surface items of interest to users. With the development of e-commerce and information access, recommender systems have become a popular technique to prune large information spaces so that users are directed toward those items that best meet their needs and preferences. With the exponential explosion of content generated on the Web, recommendation techniques have become increasingly indispensable. Web recommendation systems assist users in getting exact information and make information search easier. Web recommendation is one of the techniques of web personalization, which recommends web pages or items to the user based on previous browsing history. However, the tremendous growth in the amount of available information and in the number of visitors to web sites in recent years poses some key challenges for recommender systems. Recent recommender systems struggle to produce high-quality recommendations from large amounts of information, returning unwanted items instead of the targeted item or product, and to perform many recommendations per second for millions of users and items. To meet these challenges, new recommender system technologies are needed that can quickly produce high-quality recommendations even for very large-scale problems. To address these issues we use a two-stage recommender process based on fuzzy clustering and collaborative filtering algorithms. Fuzzy clustering is used to predict the items or products that will be accessed in the future, based on the previous browsing behavior of the user. The collaborative filtering recommendation process is then used to produce the results the user expects from the output of the fuzzy clustering and the collection of Web database items.
Using this new recommendation system, the user obtains the expected product or item in minimum time. The system reduces unrelated and unwanted items in the results and provides results within the user's domain of interest.
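The collaborative filtering stage can be sketched in its simplest user-based form: find the most similar user and recommend what they liked. The users, pages and ratings below are invented for illustration, and the fuzzy clustering stage is omitted.

```python
import math

# Hypothetical user-item matrix (1 = visited/liked, 0 = not).
ratings = {
    "alice": {"pageA": 1, "pageB": 1, "pageC": 0},
    "bob":   {"pageA": 1, "pageB": 1, "pageC": 1},
    "carol": {"pageA": 0, "pageB": 0, "pageC": 1},
}

def similarity(u, v):
    """Cosine similarity between two users' rating vectors."""
    items = set(ratings[u]) | set(ratings[v])
    dot = sum(ratings[u].get(i, 0) * ratings[v].get(i, 0) for i in items)
    nu = math.sqrt(sum(r * r for r in ratings[u].values()))
    nv = math.sqrt(sum(r * r for r in ratings[v].values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user):
    """Recommend items liked by the most similar other user, unseen by us."""
    others = [v for v in ratings if v != user]
    nearest = max(others, key=lambda v: similarity(user, v))
    return [i for i, r in ratings[nearest].items()
            if r and not ratings[user].get(i)]

print(recommend("alice"))  # bob is nearest, so pageC is suggested
```

In the paper's pipeline the candidate set fed to this step would come from the fuzzy clusters of browsing behavior rather than the raw matrix.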
Context Driven Technique for Document Classification (IDES Editor)
In this paper we present an innovative hybrid Text Classification (TC) system that bridges the gap between statistical and context-based techniques. Our algorithm harnesses contextual information at two stages. First, it extracts a cohesive set of keywords for each category by using lexical references, implicit context as derived from LSA, and word-vicinity-driven semantics. Second, each document is represented by a set of context-rich features whose values are derived by considering both lexical cohesion and the extent of coverage of salient concepts via lexical chaining. After keywords are extracted, a subset of the input documents is apportioned as the training set. Its members are assigned categories based on their keyword representation. These labeled documents are used to train binary SVM classifiers, one for each category. The remaining documents are supplied to the trained classifiers in the form of their context-enhanced feature vectors. Each document is finally ascribed its appropriate category by an SVM classifier.
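The bootstrap step, labeling the training subset from per-category keywords before any SVM is trained, can be sketched with a simple overlap count. The categories and keyword sets below are hypothetical stand-ins for those the paper derives via LSA and lexical chaining.

```python
# Hypothetical per-category keyword sets.
keywords = {
    "sports":  {"match", "team", "score"},
    "finance": {"stock", "market", "profit"},
}

def bootstrap_label(document):
    """Assign the category whose keyword set overlaps the document most;
    these labels then seed training of one binary SVM per category."""
    tokens = set(document.lower().split())
    return max(keywords, key=lambda c: len(keywords[c] & tokens))

print(bootstrap_label("the team won the match with a late score"))
```

The paper's context-rich feature vectors would replace the raw token set here, but the bootstrapping logic is the same.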
Metadata: Towards Machine-Enabled Intelligence (dannyijwest)
The World Wide Web has revolutionized the means of data availability, but with its current structural model it is becoming increasingly difficult to retrieve relevant information, with reasonable precision and recall, using the major search engines. However, the use of metadata, combined with improved searching techniques, helps to enhance relevant information retrieval. The design of structured descriptions of Web resources enables greater search precision and a more accurate relevance ranking of retrieved information. One such effort towards standardization is the Dublin Core standard, which has been developed as a metadata standard, alongside other standards which enhance retrieval of a wide range of information resources. This paper discusses the importance of metadata, various metadata schemas and elements, and the need for standardization of metadata. The paper further discusses how metadata can be generated using various tools which assist intelligent agents in efficient retrieval.
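A Dublin Core description is just a small set of named elements; one common way to publish it is as `DC.*` meta tags in a page's HTML head. A minimal generator sketch (the record values are invented; only the `DC.` element-name convention is Dublin Core's):

```python
# Hypothetical metadata record for a web resource.
record = {
    "title": "Metadata: Towards Machine-Enabled Intelligence",
    "creator": "Example Author",
    "subject": "metadata; semantic web",
    "date": "2012",
}

def dublin_core_meta(rec):
    """Render a record as HTML <meta> tags using Dublin Core element names."""
    return "\n".join(
        f'<meta name="DC.{k}" content="{v}">' for k, v in rec.items()
    )

print(dublin_core_meta(record))
```

Tags like these are what give an intelligent agent structured fields to match against, instead of free text.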
Cluster Based Web Search Using Support Vector Machine (CSCJournals)
Nowadays, searches for the web pages of a person with a given name constitute a notable fraction of queries to Web search engines. This method exploits a variety of semantic information extracted from web pages. The rapid growth of the Internet has made the Web a popular place for collecting information. Today, Internet users access billions of web pages online using search engines. Information on the Web comes from many sources, including the websites of companies, organizations and communities, personal homepages, etc. Effective representation of Web search results remains an open problem in the Information Retrieval community. For ambiguous queries, a traditional approach is to organize search results into groups (clusters), one for each meaning of the query. These groups are usually constructed according to the topical similarity of the retrieved documents, but it is possible for documents to be totally dissimilar and still correspond to the same meaning of the query. To overcome this problem, the approach exploits the observation that relevant Web pages are often located close to each other in the Web graph of hyperlinks. The paper presents a graphical approach to entity resolution and complements the traditional methodology with analysis of the entity-relationship (ER) graph constructed for the dataset being analyzed. It also demonstrates a technique that measures the degree of interconnectedness between various pairs of nodes in the graph, which can significantly improve the quality of entity resolution. Support vector machines (SVMs), a set of related supervised learning methods used for classification, are used to distribute the load of user queries from the server machine to different client machines so that the system remains stable; the system clusters web pages based on their capacities and stores the whole database on the server machine. Keywords: SVM, cluster, ER.
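One crude measure of how interconnected two nodes of the hyperlink/ER graph are is their shortest-path distance: closer pages are more likely to refer to the same entity. A BFS sketch over an invented graph (the paper's actual interconnectedness measure is richer than plain distance):

```python
from collections import deque

# Hypothetical hyperlink graph: page -> pages it links to (undirected here).
graph = {
    "p1": {"p2", "p3"},
    "p2": {"p1", "p3"},
    "p3": {"p1", "p2", "p4"},
    "p4": {"p3"},
}

def distance(a, b):
    """BFS shortest-path length between two pages; smaller suggests the
    pages are more interconnected (more likely the same entity)."""
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            return d
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None  # unreachable

print(distance("p1", "p4"))  # 2, via p3
```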
The web has become a resourceful tool for almost all domains today. Search engines prominently use the inverted indexing technique to locate the web pages matching a user's query. The performance of an inverted index fundamentally depends on searching for the keyword in the list maintained by the search engine. Text matching is done with the help of a string matching algorithm, and it is important for any string matching algorithm to locate quickly the occurrences of a user-specified pattern in a large text. In this paper a new string matching algorithm for keyword searching is proposed. The proposed algorithm relies on a new technique based on pattern length and FML (First-Middle-Last) character matching. The proposed algorithm is analysed and implemented, with extensive testing and comparisons against Boyer-Moore, Naïve, Improved Naïve, Horspool and Zhu-Takaoka. The results show that the proposed algorithm takes less time than the existing algorithms.
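The First-Middle-Last idea can be sketched directly: before doing a full comparison at an alignment, check only the first, middle and last characters, which rejects most mismatching positions after three comparisons. This is a simple reading of the FML filter, not the paper's exact algorithm (which also exploits pattern length for shifting).

```python
def fml_search(text, pattern):
    """Return all indices where pattern occurs in text, filtering each
    alignment by its First, Middle and Last characters before the full
    substring comparison."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    mid = m // 2
    hits = []
    for i in range(n - m + 1):
        if (text[i] == pattern[0]               # First
                and text[i + mid] == pattern[mid]    # Middle
                and text[i + m - 1] == pattern[m - 1]  # Last
                and text[i:i + m] == pattern):       # full verification
            hits.append(i)
    return hits

print(fml_search("inverted index for keyword search", "index"))  # [9]
```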
Clustering of Deep Web Pages: A Comparative Study (ijcsit)
The internet has a massive amount of information, stored in the form of zillions of webpages. The information that can be retrieved by search engines is huge, and this information constitutes the 'surface web'. But the remaining information, which is not indexed by search engines, the 'deep web', is much bigger in size than the 'surface web' and remains unexploited as yet. Several machine learning techniques have been commonly employed to access deep web content. Within machine learning, topic models provide a simple way to analyze large volumes of unlabeled text. A 'topic' is a cluster of words that frequently occur together, and topic models can connect words with similar meanings and distinguish between words with multiple meanings. In this paper, we cluster deep web databases employing several methods, and then perform a comparative study. In the first method, we apply Latent Semantic Analysis (LSA) over the dataset. In the second method, we use a generative probabilistic model called Latent Dirichlet Allocation (LDA) for modeling content representative of deep web databases. Both these techniques are implemented after preprocessing the set of web pages to extract page contents and form contents. Further, we propose another version of Latent Dirichlet Allocation (LDA) for the dataset. Experimental results show that the proposed method outperforms the existing clustering methods.
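The LSA method mentioned above reduces, at its core, to a truncated SVD of a term-document matrix, after which documents can be compared in a low-rank "topic" space. A tiny sketch with an invented four-term, three-document matrix (real deep-web corpora are of course far larger and weighted, e.g. with tf-idf):

```python
import numpy as np

# Hypothetical term-document count matrix (rows = terms, cols = documents).
# Docs 0 and 1 share vocabulary; doc 2 uses different terms.
terms = ["database", "query", "image", "pixel"]
X = np.array([[2, 1, 0],
              [1, 2, 0],
              [0, 0, 2],
              [0, 0, 1]], dtype=float)

# LSA: truncated SVD projects documents into a rank-k topic space.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
docs_topic = (np.diag(s[:k]) @ Vt[:k]).T   # one k-dim vector per document

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Documents 0 and 1 end up far more similar than documents 0 and 2.
print(cos(docs_topic[0], docs_topic[1]), cos(docs_topic[0], docs_topic[2]))
```

Clustering the `docs_topic` vectors (e.g. with k-means) then groups the deep web databases by topic, which is the comparison baseline the paper's LDA variants are measured against.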
Context Based Indexing in Search Engines Using Ontology: Review (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
SEMANTIC INFORMATION EXTRACTION IN UNIVERSITY DOMAIN (cscpconf)
Today's conventional search engines hardly provide the essential content relevant to the user's search query, because the context and semantics of the request made by the user are not analyzed to the full extent. Hence the need for semantic web search arises. SWS is an upcoming area of web search which combines Natural Language Processing and Artificial Intelligence. The objective of the work done here is to design, develop and implement a semantic search engine, SIEU (Semantic Information Extraction in University Domain), confined to the university domain. SIEU uses an ontology as the knowledge base for the information retrieval process. It is not a mere keyword search: it is one layer above what Google or any other search engine retrieves by analyzing just the keywords. Here the query is analyzed both syntactically and semantically. The developed system retrieves web results more relevant to the user query through keyword expansion. The results obtained are accurate enough to satisfy the request made by the user, and the level of accuracy is enhanced since the query is analyzed semantically. The system will be of great use to developers and researchers who work on the web. The Google results are re-ranked and optimized to provide the relevant links; for ranking, an algorithm has been applied which fetches more apt results for the user query.
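The keyword-expansion step can be sketched with a small synonym table standing in for the ontology: each query term is widened with related domain terms before retrieval. The terms and synonyms below are hypothetical, not SIEU's actual ontology.

```python
# Hypothetical synonym table standing in for ontology-backed expansion.
synonyms = {
    "professor": {"faculty", "lecturer"},
    "course": {"subject", "module"},
}

def expand_query(query):
    """Return the original query terms plus their known synonyms."""
    expanded = set()
    for term in query.lower().split():
        expanded.add(term)
        expanded |= synonyms.get(term, set())
    return expanded

print(sorted(expand_query("professor course list")))
```

In SIEU the expansion would come from ontology relations (sub-classes, equivalent terms) rather than a flat table, but the effect on recall is the same.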
META SEARCH ENGINE WITH AN INTELLIGENT INTERFACE FOR INFORMATION RETRIEVAL ON... (ijcseit)
This paper analyses the features of Web Search Engines, Vertical Search Engines and Meta Search Engines, and proposes a Meta Search Engine for searching and retrieving documents across multiple domains in the World Wide Web (WWW). A web search engine searches for information in the WWW. A Vertical Search Engine provides the user with results for queries on a single domain. Meta Search Engines send the user's search queries to various search engines and combine the search results. This paper introduces intelligent user interfaces for selecting the domain, category and search engines for the proposed Multi-Domain Meta Search Engine. An intelligent user interface is also designed to accept the user query and send it to the appropriate search engines. A few algorithms are designed to combine the results from the various search engines and to display them.
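One simple, standard way to combine ranked lists from several engines is a Borda count: each list awards its top item the most points, and the totals decide the merged order. This is an illustrative merging strategy, not necessarily the algorithm the paper designs; the result ids are invented.

```python
from collections import defaultdict

def borda_merge(result_lists):
    """Merge ranked result lists by Borda count: in a list of length n,
    rank 0 earns n points, rank 1 earns n-1, and so on."""
    scores = defaultdict(int)
    for results in result_lists:
        n = len(results)
        for rank, url in enumerate(results):
            scores[url] += n - rank
    return sorted(scores, key=lambda u: -scores[u])

engine_a = ["u1", "u2", "u3"]
engine_b = ["u2", "u1", "u4"]
print(borda_merge([engine_a, engine_b]))  # u1, u2 lead with 5 points each
```

Duplicates retrieved by several engines naturally float to the top, which is the main benefit a meta search engine offers over any single source.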
Extracting and Reducing the Semantic Information Content of Web Documents to ... (ijsrd.com)
Ranking and optimization of web service compositions represent challenging areas of research with significant implications for realizing the "Web of Services" vision. On the semantic web, semantic information is expressed in machine-processable languages such as the Web Ontology Language (OWL). "Semantic web services" use formal semantic descriptions of web service functionality and enable automated reasoning over web service compositions; such services can be automatically discovered, composed into more complex services, and executed. Automating web service composition with semantic technologies involves calculating the semantic similarities between the outputs and inputs of connected constituent services, and aggregating these values into a measure of semantic quality for the composition. This paper proposes a novel and extensible model balancing the new dimension of semantic quality (as a functional quality metric) with QoS metrics, using them together as ranking and optimization criteria. It also demonstrates the utility of Genetic Algorithms for optimization in the context of the large number of services foreseen by the "Web of Services" vision. Finally, it reduces the semantics of web documents to support semantic document retrieval, using a Network Ontology Language (NOL), and to improve QoS-based ranking and optimization.
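The ranking criterion described, semantic quality balanced against QoS, can be sketched as a weighted sum over candidate compositions. The composition names, scores and 0.5/0.5 weights below are purely illustrative, not the paper's model (which optimizes with a Genetic Algorithm rather than exhaustive sorting).

```python
# Each candidate composition has a semantic-quality score (how well the
# connected services' outputs match inputs, 0..1) and a QoS score (0..1).
compositions = {
    "c1": {"semantic": 0.9, "qos": 0.6},
    "c2": {"semantic": 0.7, "qos": 0.9},
    "c3": {"semantic": 0.4, "qos": 0.8},
}

def rank(comps, w_sem=0.5, w_qos=0.5):
    """Order compositions by a weighted sum of semantic quality and QoS."""
    score = lambda c: w_sem * comps[c]["semantic"] + w_qos * comps[c]["qos"]
    return sorted(comps, key=score, reverse=True)

print(rank(compositions))  # c2 edges out c1 under equal weights
```

Shifting the weights toward `w_sem` models a user who values functional correctness of the composition over raw performance.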
Information Storage and Retrieval: A Case Study (Bhojaraju Gunjal)
Bhojaraju.G, M.S.Banerji and Muttayya Koganurmath (2004). Information Storage and Retrieval: A Case Study, In Proceedings of International Conference on Digital Libraries (ICDL 2004), New Delhi, Feb 24-27, 2004.
(Best Poster Presentation Award)
Decision Support for E-Governance: A Text Mining Approach (IJMIT JOURNAL)
Information and communication technology has the capability to improve the process by which governments involve citizens in formulating public policy and public projects. Even though many government regulations may now be in digital form (and often available online), due to their complexity and diversity, identifying the ones relevant to a particular context is a non-trivial task. Similarly, with the advent of a number of electronic online forums, social networking sites and blogs, the opportunity to gather citizens' petitions and stakeholders' views on government policy and proposals has increased greatly, but the volume and complexity of the unstructured data make this analysis difficult. Text mining, on the other hand, has come a long way from simple keyword search and matured into a discipline capable of dealing with much more complex tasks. In this paper we discuss how text-mining techniques can help in the retrieval of information and relationships from textual data sources, thereby assisting policy makers in discovering associations between policies and citizens' opinions expressed in electronic public forums, blogs, etc. We also present an integrated text-mining-based architecture for e-governance decision support, along with a discussion of the Indian scenario.
Semantic Information Retrieval Using Ontology in University Domain (dannyijwest)
Improve information retrieval and e-learning using... (IJwest)
Web-based education and e-learning have become a very important branch of new educational technology. E-learning and web-based courses offer advantages for learners by making access to resources and learning objects fast, just-in-time, and relevant, at any time or place. Web-based Learning Management Systems should focus on how to satisfy e-learners' needs and may advise a learner on the most suitable resources and learning objects. Because of the many limitations of using Web 2.0 for creating an e-learning management system, nowadays we use Web 3.0, known as the Semantic Web: a platform for building e-learning management systems that overcomes the limitations of Web 2.0. In this paper we present "improve information retrieval and e-learning using mobile agent based on semantic web technology". The paper focuses on the design and implementation of knowledge-based, industrial, reusable, interactive, web-based training activities in the sea ports and logistics sector, using an e-learning system and the semantic web to deliver learning objects to learners in an interactive, adaptive and flexible manner. We use the semantic web and mobile agents to improve library and course search. The architecture presented in this paper is considered an adaptation model that converts syntactic search into semantic search. We apply the training at Damietta port in Egypt as a real-world case study, and present one possible application of mobile agent technology based on the semantic web to the management of Web Services; this model improves the information retrieval and e-learning system.
Enhanced Web Usage Mining Using Fuzzy Clustering and Collaborative Filtering ...inventionjournals
Information on the Internet is overloaded due to its rapid growth, which makes information search a complicated process. A Recommendation System (RS) is a tool now widely used in many areas to suggest items of interest to users. With the development of e-commerce and information access, recommender systems have become a popular technique for pruning large information spaces so that users are directed toward the items that best meet their needs and preferences. With the exponential explosion of content generated on the Web, recommendation techniques have become increasingly indispensable. Web recommendation systems help users find exact information and make search easier. Web recommendation is a web personalization technique that recommends web pages or items to the user based on previous browsing history. However, the tremendous growth in available information and in the number of visitors to web sites in recent years poses key challenges for recommender systems: producing high-quality recommendations over large amounts of information without returning unwanted items in place of targeted ones, and performing many recommendations per second for millions of users and items. To meet these challenges, new recommender technologies are needed that can quickly produce high-quality recommendations even for very large-scale problems. To address these issues we combine two processes: fuzzy clustering and collaborative filtering. Fuzzy clustering predicts the items or products that will be accessed in the future based on the user's previous browsing behavior, and collaborative filtering then produces the results the user expects from the fuzzy clusters and the collection of items in the Web database.
Using this new recommendation system, the user obtains the expected product or item in minimum time. The system reduces unrelated and unwanted items and returns results within the user's domain of interest.
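The two-stage pipeline this abstract describes, forming a neighbourhood of similar users and then making a collaborative-filtering prediction, can be sketched in miniature. The ratings data and function names below are invented for illustration, and for brevity the neighbourhood is taken to be all other users rather than the output of the fuzzy-clustering step:

```python
from math import sqrt

# Toy user-item ratings; user "u1" has not seen item "d" yet.
# All names here are invented for illustration.
ratings = {
    "u1": {"a": 5, "b": 3, "c": 4},
    "u2": {"a": 4, "b": 3, "c": 5, "d": 4},
    "u3": {"a": 1, "b": 5, "c": 2, "d": 1},
}

def cosine(r1, r2):
    # Similarity computed over the items both users rated.
    common = set(r1) & set(r2)
    num = sum(r1[i] * r2[i] for i in common)
    den = (sqrt(sum(v * v for v in r1.values()))
           * sqrt(sum(v * v for v in r2.values())))
    return num / den if den else 0.0

def predict(user, item):
    # Similarity-weighted average of other users' ratings for the item.
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine(ratings[user], r)
        num += s * r[item]
        den += s
    return num / den if den else 0.0
```

Since u1's ratings track u2's more closely than u3's, the prediction for item "d" is pulled toward u2's rating of 4 rather than u3's rating of 1.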
Context Driven Technique for Document Classification - IDES Editor
In this paper we present an innovative hybrid Text
Classification (TC) system that bridges the gap between
statistical and context based techniques. Our algorithm
harnesses contextual information at two stages. First it extracts
a cohesive set of keywords for each category by using lexical
references, implicit context as derived from LSA, and word-vicinity-driven
semantics. Secondly, each document is
represented by a set of context rich features whose values are
derived by considering both lexical cohesion as well as the extent
of coverage of salient concepts via lexical chaining. After
keywords are extracted, a subset of the input documents is
apportioned as training set. Its members are assigned categories
based on their keyword representation. These labeled
documents are used to train binary SVM classifiers, one for
each category. The remaining documents are supplied to the
trained classifiers in the form of their context-enhanced feature
vectors. Each document is finally ascribed its appropriate
category by an SVM classifier.
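The pseudo-labelling step described above, assigning a category to a training document from its keyword representation before the SVMs are trained, can be approximated by a simple keyword-overlap rule. The categories and keyword sets below are invented for illustration; the paper derives its keyword sets from lexical references, LSA and word-vicinity semantics:

```python
# Hypothetical category keyword sets (the paper builds these
# automatically; here they are hand-written toy examples).
category_keywords = {
    "sports": {"match", "team", "score", "league"},
    "finance": {"stock", "market", "bank", "profit"},
}

def label_document(text):
    """Assign the category whose keyword set overlaps the document most."""
    tokens = set(text.lower().split())
    best, best_overlap = None, 0
    for cat, kws in category_keywords.items():
        overlap = len(tokens & kws)
        if overlap > best_overlap:
            best, best_overlap = cat, overlap
    return best  # None if no keyword matches at all
```

Documents labelled this way would then serve as the training set for one binary SVM per category, as the abstract describes.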
Metadata: Towards Machine-Enabled Intelligence - dannyijwest
The World Wide Web has revolutionized data availability, but with its current structural model it is becoming increasingly difficult to retrieve relevant information, with reasonable precision and recall, using the major search engines. The use of metadata, combined with improved searching techniques, helps enhance relevant information retrieval. Structured descriptions of Web resources enable greater search precision and more accurate relevance ranking of retrieved information. One such standardization effort is the Dublin Core metadata standard, alongside other standards that enhance retrieval of a wide range of information resources. This paper discusses the importance of metadata, various metadata schemas and elements, and the need for metadata standardization. It further discusses how metadata can be generated using various tools that assist intelligent agents in efficient retrieval.
Cluster Based Web Search Using Support Vector Machine - CSCJournals
Nowadays, searches for the web pages of a person with a given name constitute a notable fraction of queries to Web search engines. This method exploits a variety of semantic information extracted from web pages. The rapid growth of the Internet has made the Web a popular place for collecting information; today, Internet users access billions of web pages online using search engines. Information on the Web comes from many sources, including websites of companies, organizations and communities, personal homepages, etc. Effective representation of Web search results remains an open problem in the Information Retrieval community. For ambiguous queries, a traditional approach is to organize search results into groups (clusters), one for each meaning of the query. These groups are usually constructed according to the topical similarity of the retrieved documents, but documents can be topically dissimilar and still correspond to the same meaning of the query. To overcome this problem, we exploit the observation that relevant Web pages are often located close to each other in the Web graph of hyperlinks. The paper presents a graphical approach to entity resolution that complements the traditional methodology with analysis of the entity-relationship (ER) graph constructed for the dataset being analyzed. It also demonstrates a technique that measures the degree of interconnectedness between pairs of nodes in the graph, which can significantly improve the quality of entity resolution. Support vector machines (SVMs), a family of supervised learning methods, are used to classify and distribute the load of user queries from the server machine to different client machines so that the system remains stable; the system clusters web pages based on machine capacities and stores the whole database on the server machine. Keywords: SVM, cluster, ER.
The web has become a resourceful tool for almost all domains today. Search engines prominently use the inverted-index technique to locate the web pages matching a user's query. The performance of an inverted index fundamentally depends on searching for the keyword in the list maintained by the search engine. Text matching is done with the help of a string matching algorithm, and it is important for any string matching algorithm to quickly locate occurrences of the user-specified pattern in large text. In this paper a new string matching algorithm for keyword searching is proposed. The proposed algorithm relies on a new technique based on pattern length and FML (First-Middle-Last) character match. The algorithm is analysed and implemented, and extensive testing and comparisons are done with the Boyer-Moore, Naïve, Improved Naïve, Horspool and Zhu-Takaoka algorithms. The results show that the proposed algorithm takes less time than the other existing algorithms.
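The abstract does not give the FML algorithm itself, so the following is only a guessed reconstruction of the idea it names: before paying for a full window comparison, check the first, middle and last characters of the pattern against the current text window:

```python
def fml_search(text, pat):
    """Return the start indices where pat occurs in text.

    First-Middle-Last heuristic: compare the first, last and middle
    characters of the window before the full slice comparison, so most
    mismatching windows are rejected after at most three checks.
    """
    n, m = len(text), len(pat)
    if m == 0 or m > n:
        return []
    first, mid, last = pat[0], pat[m // 2], pat[-1]
    hits = []
    for i in range(n - m + 1):
        if (text[i] == first
                and text[i + m - 1] == last
                and text[i + m // 2] == mid
                and text[i:i + m] == pat):  # full check only if FML passed
            hits.append(i)
    return hits
```

The published algorithm additionally uses pattern length to choose shift distances; this sketch keeps the naive one-position shift for clarity.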
Clustering of Deep Web Pages: A Comparative Study - ijcsit
The internet has a massive amount of information, stored in the form of billions of webpages. The information that can be retrieved by search engines is huge, and constitutes the ‘surface web’. But the remaining information, which is not indexed by search engines – the ‘deep web’ – is much bigger than the surface web, and remains largely unexploited.
Several machine learning techniques have been employed to access deep web content. Among them, topic models provide a simple way to analyze large volumes of unlabeled text: a ‘topic’ is a cluster of words that frequently occur together, and topic models can connect words with similar meanings and distinguish between words with multiple meanings. In this paper, we cluster deep web databases using several methods and then perform a comparative study. In the first method, we apply Latent Semantic Analysis (LSA) over the dataset. In the second, we use a generative probabilistic model called Latent Dirichlet Allocation (LDA) to model content representative of deep web databases. Both techniques are applied after preprocessing the set of web pages to extract page contents and form contents. Further, we propose another version of LDA for the dataset. Experimental results show that the proposed method outperforms the existing clustering methods.
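The LSA step the abstract describes amounts to a truncated SVD of the term-document matrix. A minimal sketch with a toy corpus (the documents and the choice of two latent topics are invented for illustration):

```python
import numpy as np

# Toy corpus: two documents about search engines, two about ontologies.
docs = [
    "web search engine index",
    "search engine crawler index",
    "ontology semantic web reasoning",
    "semantic ontology reasoning query",
]
vocab = sorted({w for d in docs for w in d.split()})

# Term-document count matrix (rows: terms, columns: documents).
A = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

# LSA: truncated SVD keeps only the top-k latent topics.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_topics = (np.diag(s[:k]) @ Vt[:k]).T  # each row: a document in topic space
```

In the two-dimensional latent space the two search-engine documents land near each other and far from the ontology documents, which is exactly the grouping a clustering step would then pick up.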
Context Based Indexing in Search Engines Using Ontology: Review - iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
SEMANTIC INFORMATION EXTRACTION IN UNIVERSITY DOMAIN - cscpconf
Today’s conventional search engines hardly provide content relevant to the user’s search query, because the context and semantics of the user’s request are not analyzed to the full extent. Hence the need for semantic web search (SWS), an upcoming area of web search that combines Natural Language Processing and Artificial Intelligence.
The objective of this work is to design, develop and implement a semantic search engine, SIEU (Semantic Information Extraction in University Domain), confined to the university domain. SIEU uses an ontology as the knowledge base for the information retrieval process. It is not a mere keyword search: it operates one layer above what Google or other search engines retrieve by analyzing just keywords, since here the query is analyzed both syntactically and semantically.
The developed system retrieves web results more relevant to the user query through keyword expansion. The results are accurate enough to satisfy the user's request, and accuracy is enhanced because the query is analyzed semantically. The system will be of great use to developers and researchers who work on the web. Google results are re-ranked and optimized to provide the relevant links; for ranking, an algorithm has been applied that fetches more apt results for the user query.
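The keyword-expansion step mentioned above can be sketched as a synonym lookup. The synonym map below is a hand-written toy standing in for the university-domain ontology that SIEU actually consults:

```python
# Toy synonym map standing in for an ontology lookup; the terms and
# synonyms here are invented for illustration.
synonyms = {
    "professor": {"faculty", "lecturer"},
    "course": {"subject", "class"},
}

def expand_query(query):
    """Return the original query terms plus their known synonyms."""
    terms = query.lower().split()
    expanded = set(terms)
    for t in terms:
        expanded |= synonyms.get(t, set())
    return expanded
```

Searching with the expanded set lets a page that says "faculty" match a query that says "professor", which is the kind of recall gain keyword expansion buys over plain keyword search.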
META SEARCH ENGINE WITH AN INTELLIGENT INTERFACE FOR INFORMATION RETRIEVAL ON... - ijcseit
This paper analyses the features of Web Search Engines, Vertical Search Engines and Meta Search Engines, and proposes a Meta Search Engine for searching and retrieving documents across multiple domains on the World Wide Web (WWW). A web search engine searches for information in the WWW; a vertical search engine provides results for queries on a specific domain; meta search engines send the user's search queries to various search engines and combine the search results. This paper introduces intelligent user interfaces for selecting the domain, category and search engines for the proposed Multi-Domain Meta Search Engine. An intelligent user interface is also designed to capture the user query and send it to the appropriate search engines. A few algorithms are designed to combine results from the various search engines and to display the results.
Extracting and Reducing the Semantic Information Content of Web Documents to... - ijsrd.com
Ranking and optimization of web service compositions represent challenging areas of research with significant implications for realizing the "Web of Services" vision. On the semantic web, semantic information is expressed in machine-processable languages such as the Web Ontology Language (OWL). Semantic web services use formal semantic descriptions of web service functionality and enable automated reasoning over web service compositions; such services can then be automatically discovered, composed into more complex services, and executed. Automating web service composition with semantic technologies involves calculating the semantic similarities between the outputs and inputs of connected constituent services, and aggregating these values into a measure of semantic quality for the composition. We propose a novel and extensible model balancing this new dimension of semantic quality (as a functional quality metric) with QoS metrics, using them together as ranking and optimization criteria. We also demonstrate the utility of Genetic Algorithms for optimization in the context of the large number of services foreseen by the "Web of Services" vision, reduce the semantics of web documents to support semantic document retrieval using Network Ontology Language (NOL), and improve QoS as a ranking and optimization criterion.
Information Storage and Retrieval: A Case Study - Bhojaraju Gunjal
Bhojaraju.G, M.S.Banerji and Muttayya Koganurmath (2004). Information Storage and Retrieval: A Case Study, In Proceedings of International Conference on Digital Libraries (ICDL 2004), New Delhi, Feb 24-27, 2004.
(Best Poster Presentation Award)
Decision Support for E-Governance: A Text Mining Approach - IJMIT JOURNAL
Information and communication technology has the capability to improve the process by which governments involve citizens in formulating public policy and public projects. Even though much of government regulations may now be in digital form (and often available online), due to their complexity and diversity, identifying the ones relevant to a particular context is a non-trivial task. Similarly, with the advent of a number of electronic online forums, social networking sites and blogs, the opportunity of gathering citizens’ petitions and stakeholders’ views on government policy and proposals has increased greatly, but the volume and the complexity of analyzing unstructured data makes this difficult. On the other hand, text mining has come a long way from simple keyword search, and matured into a discipline capable of dealing with much more complex tasks. In this paper we discuss how text-mining techniques can help in retrieval of information and relationships from textual data sources, thereby assisting policy makers in discovering associations between policies and citizens’ opinions expressed in electronic public forums and blogs etc. We also present here, an integrated text mining based architecture for e-governance decision support along with a discussion on the Indian scenario.
Semantic Information Retrieval Using Ontology in University Domain - dannyijwest
Today’s conventional search engines hardly provide content relevant to the user’s search query, because the context and semantics of the user’s request are not analyzed to the full extent. Hence the need for semantic web search (SWS), an upcoming area of web search that combines Natural Language Processing and Artificial Intelligence. The objective of this work is to design, develop and implement a semantic search engine, SIEU (Semantic Information Extraction in University Domain), confined to the university domain. SIEU uses an ontology as the knowledge base for the information retrieval process. It is not a mere keyword search: it operates one layer above what Google or other search engines retrieve by analyzing just keywords, since here the query is analyzed both syntactically and semantically. The developed system retrieves web results more relevant to the user query through keyword expansion. The results are accurate enough to satisfy the user's request, and accuracy is enhanced because the query is analyzed semantically. The system will be of great use to developers and researchers who work on the web. Google results are re-ranked and optimized to provide the relevant links; for ranking, an algorithm has been applied that fetches more apt results for the user query.
Improve information retrieval and e learning using... - IJwest
Web-based education and e-learning have become a very important branch of new educational technology. E-learning and Web-based courses offer advantages for learners by making access to resources and learning objects fast, just-in-time, relevant, and available at any time or place. Web-based Learning Management Systems should focus on satisfying e-learners' needs and may advise a learner on the most suitable resources and learning objects. Because of the many limitations of Web 2.0 for building e-learning management systems, we now use Web 3.0, known as the Semantic Web, a platform for e-learning management systems that overcomes the limitations of Web 2.0. In this paper we present "improve information retrieval and e-learning using mobile agent based on semantic web technology". The paper focuses on the design and implementation of knowledge-based, reusable, interactive, web-based industrial training activities in the sea ports and logistics sector, using an e-learning system and the semantic web to deliver learning objects to learners in an interactive, adaptive and flexible manner. We use the semantic web and mobile agents to improve library and course search. The architecture presented here is an adaptation model that converts syntactic search into semantic search. We apply the training at Damietta port in Egypt as a real-world case study, and present one possible application of mobile agent technology based on the semantic web to the management of Web Services; this model improves the information retrieval and e-learning system.
The World Wide Web is booming and vibrant thanks to well-established standards and a widely accepted framework that guarantees interoperability at various levels of applications and of society as a whole. So far, the web has largely functioned on the basis of human intervention and manual processing, but the next-generation web, which researchers call the semantic web, is edging toward automatic processing and machine-level understanding. The semantic web will become possible only if further levels of interoperability prevail among applications and networks. To achieve this interoperability and greater functionality among applications, the W3C has already released well-defined standards such as RDF/RDF Schema and OWL. Using XML alone as a tool for semantic interoperability has not achieved anything effective, and has failed to bring interconnection at a larger level. This leads to the inclusion of an inference layer at the top of the web architecture, and paves the way for a common design encoding ontology representation languages in data models such as RDF/RDFS. In this research article, we give a clear account of the roots of semantic web research and its ontological background, which may help deepen the understanding of named entities on the web.
Design Issues for Search Engines and Web Crawlers: A Review - IOSR Journals
Abstract: The World Wide Web is a huge source of hyperlinked information contained in hypertext documents.
Search engines use web crawlers to collect these web documents from web for storage and indexing. The prompt
growth of the World Wide Web has posed unparalleled challenges for the designers of search engines and web
crawlers that help users retrieve web pages in a reasonable amount of time. In this paper, a review of the need
for and working of a search engine, and the role of a web crawler, is presented.
Key words: Internet, www, search engine, types, design issues, web crawlers.
HIGWGET - A Model for Crawling Secure Hidden Web Pages - ijdkp
The conventional search engines on the internet are effective at finding openly available information, but they face constraints in reaching information sought from certain sources. Web crawlers are directed along particular paths of the web, and are limited in moving along other paths because those paths are protected or restricted out of concern about threats. It is possible to build a web crawler with the ability to traverse paths of the web not reachable by the usual web crawlers, so as to get a better answer in terms of information, time and relevancy for a given search query. The proposed web crawler is designed to visit Hyper Text Transfer Protocol Secure (HTTPS) websites, including web pages that require authentication to view and index.
Semantic Search of E-Learning Documents Using Ontology Based System - ijcnes
The keyword searching mechanism is traditionally used for information retrieval from Web-based systems. However, this approach fails to meet the requirements of Web searching over expert knowledge bases built on popular semantic systems. Semantic search of e-learning documents based on ontology is increasingly adopted in information retrieval systems. An ontology-based system simplifies the task of finding correct information on the Web by building a search system based on the meaning of a keyword instead of the keyword itself. The major function of the ontology-based system is the development of a specification of conceptualization, which strengthens the connection between the information present in Web pages and the background knowledge. The semantic gap between keywords found in documents and those in queries can be bridged using an ontology-based system. This paper provides a detailed account of the semantic search of e-learning documents using ontology-based systems by comparing various ontology systems. Based on this comparison, the survey attempts to identify possible directions for future research.
Effective Performance of Information Retrieval on Web by Using Web Crawling - dannyijwest
The World Wide Web consists of more than 50 billion pages online. It is highly dynamic [6], i.e. the web continuously introduces new capabilities and attracts many people. Due to this explosion in size, an effective information retrieval system or search engine is needed to access the information. In this paper we propose the EPOW (Effective Performance of Web Crawler) architecture, a software agent whose main objective is to minimize the overload on a user locating needed information. We have designed the web crawler with a parallelization policy: since the EPOW crawler is highly optimized, it can download a large number of pages per second while being robust against crashes. We also propose using data-structure concepts, a scheduler and a circular queue, to improve the performance of the web crawler.
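The scheduler-plus-queue role the EPOW abstract assigns to its crawler can be sketched as a crawl frontier: a FIFO queue of URLs with duplicate filtering. The class and method names below are invented for illustration, not taken from the paper:

```python
from collections import deque

class Frontier:
    """Minimal FIFO crawl frontier with duplicate-URL filtering,
    a toy stand-in for a crawler's scheduler and circular queue."""

    def __init__(self):
        self.queue = deque()  # URLs waiting to be fetched
        self.seen = set()     # every URL ever enqueued

    def add(self, url):
        # Enqueue only URLs the crawler has never seen before.
        if url not in self.seen:
            self.seen.add(url)
            self.queue.append(url)

    def next(self):
        # Hand the next URL to a fetcher thread; None when empty.
        return self.queue.popleft() if self.queue else None
```

A parallel crawler like EPOW would share a structure of this kind between many fetcher workers, which is why the paper stresses the scheduler and queue implementation.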
Abstract: In many fields, such as industry, commerce, government, and education, knowledge discovery and data mining can be immensely valuable to Artificial Intelligence. Because of the recent increase in demand for KDD techniques, drawing on machine learning, databases, statistics, knowledge acquisition, data visualisation, and high performance computing, knowledge discovery and data mining have grown in importance. By employing standard formulas for computational correlations, we aim to create an integrated technique that can filter social information on the web and find parallels between the similar tastes of diverse users in a variety of settings.
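The "standard formulas for computational correlations" the abstract mentions presumably include Pearson's correlation coefficient, a common choice for comparing two users' preference vectors. A minimal version, with the example vectors invented for illustration:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences:
    covariance divided by the product of standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sqrt(sum((x - mx) ** 2 for x in xs))
           * sqrt(sum((y - my) ** 2 for y in ys)))
    return num / den if den else 0.0
```

Values near +1 indicate users with parallel tastes, near -1 opposite tastes, and near 0 no linear relationship.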
Information Organisation for the Future Web: with Emphasis to Local CIRs - inventionjournals
The Semantic Web is evolving as a meaningful extension of the present web using ontology. Ontology can play an important role in structuring the content of the current web to lead it toward the new generation web. Domain information can be organized using ontology to help machines interact with the data and retrieve exact information quickly. The present paper organizes community information resources covering local information needs and evaluates the system using SPARQL queries over the developed ontology.
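The SPARQL evaluation described above boils down to matching triple patterns against an RDF graph. The toy triples below are invented for illustration; a real system would run SPARQL through an engine such as Apache Jena or rdflib, but the pattern-matching idea can be shown in plain Python:

```python
# Tiny triple store standing in for the community-information RDF graph.
# All subjects, predicates and objects here are invented examples.
triples = [
    ("Library", "locatedIn", "Ward5"),
    ("Clinic", "locatedIn", "Ward5"),
    ("Library", "type", "CommunityResource"),
]

def match(s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a
    wildcard, like a variable in a SPARQL basic graph pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]
```

The SPARQL query `SELECT ?x WHERE { ?x :locatedIn :Ward5 }` corresponds to `match(p="locatedIn", o="Ward5")` here.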
Mining in Ontology with Multi Agent System in Semantic Web: A Novel Approach - ijma
A large amount of data is present on the web. It contains a huge number of web pages, and finding suitable information among them is a very cumbersome task. There is a need to organize data in a formal manner so that users can easily access and use it. To retrieve information from documents there are many Information Retrieval (IR) techniques, but current IR techniques are not advanced enough to exploit semantic knowledge within documents and give precise results. IR technology is a major factor responsible for handling annotations in Semantic Web (SW) languages. With the growth of the web and the huge amount of information available in unstructured, semi-structured or structured form, it has become increasingly difficult to identify the relevant pieces of information on the internet. Knowledge representation languages are used for retrieving information, so there is a need to build an ontology using a well-defined methodology; the process of developing an ontology is called Ontology Development. Secondly, cloud computing and data mining have become famous phenomena in the current application of information technology. With the changing trends and emerging concepts in the information technology sector, data mining and knowledge discovery have proved to be of significant importance. Data mining can be defined as the process of extracting information from a database that is not explicitly defined by the database, and can be used to reach generalized conclusions based on the trends obtained from the data; a database may be described as a collection of formerly structured data. Multi-agent data mining is the use of various agents that cooperatively interact with the environment to achieve a specified objective. Multi-agents always act on behalf of users and coordinate, cooperate, negotiate and exchange data with each other; an agent basically refers to a software agent, a robot or a human being. Knowledge discovery can be defined as the process of critically searching large collections of data with the aim of finding patterns that can be used to draw generalized conclusions; these patterns are sometimes referred to as knowledge about the data. Cloud computing can be defined as the delivery of computing services in which shared resources, information and software are provided over a network, for example the information superhighway; it is normally provided as a web-based service that hosts all the required resources. Knowledge mining is used in many fields of study such as science and medicine, finance, education, manufacturing and commerce. In this paper, the Semantic Web addresses the first part of this challenge by trying to make the data machine-understandable in the form of an Ontology, while Multi-Agen
Classification-based Retrieval Methods to Enhance Information Discovery on th... - IJMIT JOURNAL
The widespread adoption of the World-Wide Web (the Web) has created challenges both for society as a whole and for the technology used to build and maintain the Web. The ongoing struggle of information retrieval systems is to wade through this vast pile of data and satisfy users by presenting them with information that most adequately fits their needs. On a societal level, the Web is expanding faster than we can comprehend its implications or develop rules for its use; its ubiquitous use has raised important social concerns in the areas of privacy, censorship, and access to information. On a technical level, the novelty of the Web and the pace of its growth have created challenges not only in the development of new applications that realize the power of the Web, but also in the technology needed to scale applications to the resulting large data sets and heavy loads. This thesis presents searching algorithms and hierarchical classification techniques for increasing a search service's understanding of web queries. Existing search services rely solely on a query's occurrence in the document collection to locate relevant documents; they typically do not perform any task- or topic-based analysis of queries using other available resources, and do not leverage changes in user query patterns over time. Provided within are a set of techniques and metrics for performing temporal analysis on query logs. Our log analyses are shown to be reasonable and informative, and can be used to detect changing trends and patterns in the query stream, thus providing valuable data to a search service.
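The temporal log analysis the abstract describes starts from per-period query frequencies. A minimal sketch, with the log entries and function name invented for illustration:

```python
from collections import Counter

# Toy query log: (day, query) pairs; a real log would carry
# timestamps, user sessions, clicks, etc.
log = [
    (1, "weather"), (1, "news"), (2, "weather"),
    (2, "weather"), (2, "football"), (3, "weather"),
]

def trend(query):
    """Per-day frequency of a query: the basic series on which
    temporal trend and pattern detection operates."""
    counts = Counter(day for day, q in log if q == query)
    days = sorted({d for d, _ in log})
    return [counts.get(d, 0) for d in days]
```

Spikes or drifts in such a series are what a search service would flag as a changing trend in the query stream.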
Similar to Intelligent Semantic Web Search Engines: A Brief Survey
Cybercrimes in the Darknet and Their Detections: A Comprehensive Analysis and... - dannyijwest
Although the Dark web was originally used to maintain privacy-sensitive communication for business, or intelligence services for defence, government and business organizations, and to fight censorship and blocked content, the technologies behind it were later abused by criminals to conduct crimes ranging from drug dealing to contract assassinations on a widespread scale. Since the communication remains secure and untraceable, criminals can easily use dark web services via The Onion Router (TOR), hide their illegal motives and conceal their criminal activities. This makes it very difficult to monitor and detect cybercrimes on the dark web. With the evolution of machine learning, natural language processing techniques, computational big data applications and hardware, there is growing interest in exploiting dark web data to monitor and detect criminal activities. Due to the anonymity provided by the Dark Web, and the rapid disappearance and change of the uniform resource locators (URLs) of its resources, it is not as easy to crawl the Dark web and obtain data as it is on the surface web, which limits researchers and law enforcement agencies in analysing the data. There is therefore an urgent need to study the technology behind the Dark web, its widespread abuse, and its impact on society and existing systems, and to identify the sources of drug dealing or terrorist activities. In this research, we analysed the predominant darker sides of the world wide web (WWW), their volumes, contents and ratios. We analysed the larger malicious or hidden activities that occupy major portions of the Dark net, and the tools and techniques used to identify cybercrimes that happen inside the dark web. We applied a systematic literature review (SLR) approach to resources where actual dark net data have been used for research purposes in several areas.
From this SLR, we identified the approaches (tools and algorithms) that have been applied to analyse Dark net data, as well as the key gaps and the key contributions of the existing works in the literature. In our study, we find that the main challenges in crawling the dark web and collecting forum data are: scalability of the crawler, the content-selection trade-off, social obligations for a TOR crawler, and the limitations of the techniques used in automatic sentiment analysis to understand criminals' forums and thereby monitor them. From a comprehensive analysis of existing tools, our study summarizes the most relevant ones. However, forum topics change rapidly as their sources change, and criminals inject noise to obfuscate a forum's main topic and thus remain undetectable; supervised techniques therefore fail to address the above challenges, and semi-supervised techniques would be an interesting research direction.
FFO: Forest Fire Ontology and Reasoning System for Enhanced Alert and Managem... - dannyijwest
Forest fires or wildfires pose a serious threat to property, lives, and the environment. Early detection and mitigation of such emergencies, therefore, play an important role in reducing the severity of the impact caused by wildfire. Unfortunately, there is often an improper or delayed mechanism for forest fire detection which leads to destruction and losses. These anomalies in detection can be due to defects in sensors or a lack of proper information interoperability among the sensors deployed in forests. This paper presents a lightweight ontological framework to address these challenges. Interoperability issues are caused due to heterogeneity in technologies used and heterogeneous data created by different sensors. Therefore, through the proposed Forest Fire Detection and Management Ontology (FFO), we introduce a standardized model to share and reuse knowledge and data across different sensors. The proposed ontology is validated using semantic reasoning and query processing. The reasoning and querying processes are performed on real-time data gathered from experiments conducted in a forest and stored as RDF triples based on the design of the ontology. The outcomes of queries and inferences from reasoning demonstrate that FFO is feasible for the early detection of wildfire and facilitates efficient process management subsequent to detection.
Call For Papers-10th International Conference on Artificial Intelligence and ...dannyijwest
** Registration is currently open **
Call for Research Papers!!!
Free – Extended Paper will be published as free of cost.
10th International Conference on Artificial Intelligence and Applications (AI 2024)
July 20 ~ 21, 2024, Toronto, Canada
https://csty2024.org/ai/index
Submission Deadline: May 11, 2024
Contact Us
Here's where you can reach us : ai@csty2024.org or ai.conference@yahoo.com
Submission System
https://csty2024.org/submission/index.php
#artificialintelligence #softcomputing #machinelearning #technology #datascience #python #deeplearning #tech #robotics #innovation #bigdata #coding #iot #computerscience #data #dataanalytics #engineering #robot #datascientist #software #automation #analytics #ml #pythonprogramming #programmer #digitaltransformation #developer #promptengineering #generativeai #genai #chatgpt
CALL FOR ARTICLES...! IS INDEXING JOURNAL...! International Journal of Web &...dannyijwest
Paper Submission
Authors are invited to submit papers for this journal through Email: ijwest@aircconline.com or through Submission System. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this Journal.
Important Dates
• Submission Deadline: March 16, 2024
• Notification : April 13, 2024
• Final Manuscript Due : April 20, 2024
• Publication Date : Determined by the Editor-in-Chief
Contact Us
Here's where you can reach us
ijwestjournal@yahoo.com or ijwestjournal@airccse.org or ijwest@aircconline.com
Submission URL : https://airccse.com/submissioncs/home.html
ENHANCING WEB ACCESSIBILITY - NAVIGATING THE UPGRADE OF DESIGN SYSTEMS FROM W...dannyijwest
ENHANCING WEB ACCESSIBILITY - NAVIGATING THE UPGRADE OF DESIGN SYSTEMS FROM WCAG 2.0 TO WCAG 2.1
Hardik Shah
Department of Information Technology, Rochester Institute of Technology, USA
ABSTRACT
In this research, we explore the vital transition of Design Systems from Web Content Accessibility
Guidelines (WCAG) 2.0 to WCAG 2.1, emphasizing its role in enhancing web accessibility and inclusivity
in digital environments. The study outlines a comprehensive strategy for achieving WCAG 2.1 compliance,
encompassing assessment, strategic planning, implementation, and testing, with a focus on collaboration
and user involvement. It also addresses the challenges in using web accessibility tools, such as their
complexity and the dynamic nature of accessibility standards. The paper looks forward to the integration
of emerging technologies like AI, ML, NLP, VR, and AR in accessibility tools, advocating for universal
design and user-centered approaches. This research acts as a crucial guide for organizations aiming to
navigate the changing landscape of web accessibility, underscoring the importance of continuous learning
and adaptation to maintain and enhance accessibility in digital platforms.
KEYWORDS
Web accessibility, WCAG 2.1, Design Systems, Web accessibility tools, Artificial Intelligence
PDF LINK:https://aircconline.com/ijwest/V15N1/15124ijwest01.pdf
VOLUME LINK:https://www.airccse.org/journal/ijwest/vol15.html
OTHER INFORMATION:https://www.airccse.org/journal/ijwest/ijwest.html
Introduction to AI for Nonprofits with Tapp NetworkTechSoup
Dive into the world of AI! Experts Jon Hill and Tareq Monaur will guide you through AI's role in enhancing nonprofit websites and basic marketing strategies, making it easy to understand and apply.
A Strategic Approach: GenAI in EducationPeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
2024.06.01 Introducing a competency framework for languag learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
Acetabularia Information For Class 9 .docxvaibhavrinwa19
Acetabularia acetabulum is a single-celled green alga that in its vegetative state is morphologically differentiated into a basal rhizoid and an axially elongated stalk, which bears whorls of branching hairs. The single diploid nucleus resides in the rhizoid.
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
Operation “Blue Star” is the only event in the history of Independent India where the state went into war with its own people. Even after about 40 years it is not clear if it was culmination of states anger over people of the region, a political game of power or start of dictatorial chapter in the democratic setup.
The people of Punjab felt alienated from main stream due to denial of their just demands during a long democratic struggle since independence. As it happen all over the word, it led to militant struggle with great loss of lives of military, police and civilian personnel. Killing of Indira Gandhi and massacre of innocent Sikhs in Delhi and other India cities was also associated with this movement.
Read| The latest issue of The Challenger is here! We are thrilled to announce that our school paper has qualified for the NATIONAL SCHOOLS PRESS CONFERENCE (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
Intelligent Semantic Web Search Engines: A Brief Survey
International Journal of Web & Semantic Technology (IJWesT) Vol.2, No.1, January 2011
DOI : 10.5121/ijwest.2011.2103
G. Madhu, Dr. A. Govardhan, Dr. T.V. Rajinikanth

G. Madhu, Sr. Asst. Professor, Dept. of Information Technology, VNR VJIET, Hyderabad-90, A.P., INDIA.
E-mail: madhu_g@vnrvjiet.in
Dr. A. Govardhan, Principal, J.N.T.U College of Engineering, Jagityal, Karimnagar Dist-505 452, A.P., INDIA.
E-mail: govardhan_cse@yahoo.co.in
Dr. T.V. Rajinikanth, Professor & HOD, Dept. of Information Technology, GRIET, Hyderabad-500072, A.P., INDIA.
E-mail: rajinitv_03@yahoo.co.in
ABSTRACT
The World Wide Web (WWW) allows people to share information (data) from large database repositories globally. The amount of information has grown to span billions of databases, so searching it requires specialized tools known generically as search engines. Although many search engines are available today, retrieving meaningful information with them is difficult. To overcome this problem and let search engines retrieve meaningful information intelligently, semantic web technologies play a major role. In this paper we present a survey of the search engine generations and the role of search engines in the intelligent web and semantic search technologies.
KEYWORDS
Information retrieval, Intelligent Search, Search Engine, Semantic web.
1. INTRODUCTION
The Semantic Web is an extension of the current Web [1] that allows the meaning of information to be precisely described in terms of well-defined vocabularies that are understood by both people and computers. On the Semantic Web, information is described using a W3C standard called the Resource Description Framework (RDF). Semantic Web Search is a search engine for the Semantic Web: current Web sites can be used by both people and computers to precisely locate and gather information published on the Semantic Web. Ontology [2] is one of the most important concepts in the semantic web infrastructure, and RDF(S) (Resource Description Framework/Schema) and OWL (Web Ontology Language) are the two W3C-recommended data representation models used to represent ontologies. The Semantic Web will support more efficient discovery, automation, integration and reuse of data, and will address interoperability problems that cannot be resolved with current web technologies. Research on semantic web search engines is currently in its beginning stage, as traditional search engines such as Google, Yahoo, and Bing (MSN) still dominate the search engine market.
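At its core, RDF describes resources as subject-predicate-object triples. As a minimal stdlib-only illustration (the vocabulary and facts below are invented, and a real system would use an RDF library and SPARQL rather than this toy), triples and wildcard pattern queries can be modelled directly:

```python
# Sketch of the RDF triple model: facts are (subject, predicate,
# object) tuples, and a query is a pattern in which None acts as a
# wildcard. The vocabulary and data are invented for illustration.

triples = [
    ("ex:Madhu",  "rdf:type",      "ex:Researcher"),
    ("ex:Madhu",  "ex:worksAt",    "ex:VNRVJIET"),
    ("ex:Survey", "ex:authoredBy", "ex:Madhu"),
]

def match(pattern, store):
    """Return every triple matching the pattern (None = wildcard)."""
    return [t for t in store
            if all(p is None or p == v for p, v in zip(pattern, t))]

# What did ex:Madhu author?
print(match((None, "ex:authoredBy", "ex:Madhu"), triples))
# Everything known about ex:Madhu as a subject:
print(match(("ex:Madhu", None, None), triples))
```

Because the meaning is carried by explicit predicates rather than free text, a machine can answer such queries precisely instead of matching keywords.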
Most search engines match keywords to answer user queries. They usually search web pages for the required information, filtering out unnecessary pages with advanced algorithms. These engines can answer topic-wise queries efficiently and effectively using state-of-the-art algorithms. However, they are weak at answering intelligent queries from the user, because their results depend entirely on the information available in web pages. The main focus of these engines is to answer such queries with close-to-accurate results in a short time using well-researched algorithms, yet with this approach they either return inaccurate results or accurate but potentially unreliable ones. With keyword-based searches they usually provide
results from blogs (if available) or other discussion boards. Users cannot be satisfied with such results because of a lack of trust in blogs and similar sources. To overcome this problem and retrieve relevant and meaningful information intelligently, semantic web technology plays a great role [3]. Intelligent semantic technology brings search engine results nearer to what the user desires.
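The keyword matching described above can be sketched with a toy inverted index; the documents and the conjunctive (AND) semantics here are simplifying assumptions for illustration, not any particular engine's algorithm:

```python
from collections import defaultdict

# Toy documents standing in for crawled web pages (invented).
docs = {
    1: "semantic web search engines use ontologies",
    2: "keyword search engines index web pages",
    3: "blogs discuss search results",
}

# Build an inverted index: term -> set of document ids.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def keyword_search(query):
    """Return ids of documents containing every query term."""
    sets = [index[t] for t in query.split()]
    return sorted(set.intersection(*sets)) if sets else []

print(keyword_search("search engines"))   # docs 1 and 2
print(keyword_search("semantic search"))  # doc 1 only
```

The index knows nothing about meaning: "semantic search" matches only literal occurrences of both words, which is precisely the limitation the paper attributes to keyword engines.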
In this paper, we make a preliminary survey of the existing literature on intelligent semantic search engines and semantic web search. By classifying the literature into a few main categories, we review their characteristics respectively. In addition, the issues within the reviewed intelligent semantic search methods and engines are analyzed and summarized from several perspectives.
2. BACKGROUND
Searching for information on the web is not a fresh idea, but it poses different challenges compared to general information retrieval. Different search engines return different results because of variations in their indexing and search processes. Google, Yahoo, and Bing handle queries by processing keywords, and they only search the information given on web pages. Recently, some research groups have started delivering results from semantics-based search engines, though most of these are in their initial stages. To date, no search engine comes close to indexing the entire web content, much less the entire Internet.
The current web is the biggest global database, but it lacks a semantic structure, which makes it difficult for machines to understand the information provided by the user. With information distributed across the web, we face two kinds of research problems in search engines:

• How can a search engine map a query to documents where the information is available, yet fail to retrieve it as intelligent and meaningful information?
• Query results produced by search engines are distributed across different documents that may be connected by hyperlinks. How can a search engine efficiently recognize such distributed results?

The Semantic web [4] [5] can solve the first problem with semantic annotations that produce intelligent and meaningful information through a query interface mechanism and ontologies. The second can be solved by graph-based query models [6]. The Semantic web, however, requires solving extraordinarily difficult problems in the areas of knowledge representation and natural language understanding. The following figure depicts the semantic web framework, also referred to as the semantic web layer cake by the W3C.
Fig. 1. Semantic Web Framework
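The graph-based query model of [6] (BANKS) joins keyword hits that are spread across hyperlinked documents. A much-simplified, stdlib-only sketch (the pages, texts, and links are invented for illustration) finds the shortest hyperlink path connecting a page containing one query term to a page containing the other:

```python
from collections import deque

# Pages and the hyperlinks between them (invented for illustration).
pages = {"A": "semantic annotation", "B": "ontology engineering",
         "C": "query interface", "D": "search results"}
links = {"A": ["B"], "B": ["C"], "C": ["D"], "D": []}

def shortest_path(start, goal):
    """Plain BFS over the hyperlink graph."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def connect(term1, term2):
    """Join a page containing term1 to a page containing term2."""
    starts = [p for p, text in pages.items() if term1 in text]
    goals = [p for p, text in pages.items() if term2 in text]
    best = None
    for s in starts:
        for g in goals:
            path = shortest_path(s, g)
            if path and (best is None or len(path) < len(best)):
                best = path
    return best

print(connect("semantic", "query"))  # shortest path A -> B -> C
```

Real graph-based engines build answer trees over many keyword nodes at once; the shortest-path pairing above only conveys the core idea of assembling a distributed result from linked documents.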
2.1 Current Web & Limitations
The present World Wide Web is the largest global database, but it lacks a semantic structure, and hence it becomes difficult for machines to understand the information provided by the user in the form of search strings. As a result, search engines return ambiguous or partially ambiguous result sets. The Semantic web is being developed to overcome the following problems of the current web:
• The web content lacks a proper structure for the representation of information.
• Ambiguity of information resulting from poor interconnection of information.
• Automatic information transfer is lacking.
• Inability to deal with an enormous number of users and content while ensuring trust at all levels.
• Incapability of machines to understand the provided information due to the lack of a universal format.
Hakia [7] is a general-purpose semantic search engine that searches structured text such as Wikipedia. Hakia calls itself a "meaning-based (semantic) search engine" [8]: it tries to provide search results based on meaning match rather than on the popularity of search terms. The news, blog, credible-source, and gallery results it presents are processed by Hakia's proprietary core semantic technology called QDEXing [7]. It can process any kind of digital artifact with its SemanticRank technology using third-party API feeds [9].
3. INTELLIGENT SEMANTIC WEB
3.1 Intelligent Search Engines
Currently, several intelligent search engines have been designed and implemented for different working environments, and the mechanisms that realize these search engines are distinct.

Fu-Ming Huang and Jenn-Hwa Yang present an intelligent search engine built with semantic technologies. Their research combines a description logic inference system with a digital library ontology to realize an intelligent search engine [10]. They analyze the search engine mechanism, formulate the demands of an intelligent search engine, and evaluate how the related technologies can meet those demands and promote search engine efficiency. The system uses the description logic inference system to integrate the digital library ontology and proceed
with the inference of user requirements, and combines the content search mechanism with knowledge inference to accomplish the intelligent search engine.
Inamdar and Shinde [11] discuss an agent-based intelligent search engine system for web mining. Most web search engines make use of only the text on a web page. Agents are used to perform some action or activity on behalf of a user of a computer system. Each user is assisted by his or her own personal agent to search the web. The major goal of each personal agent is to propose to its user, and to other agents, links to web pages that are considered relevant to their search. Personal agents can use different internal and external sources of information; they are software agents running on the server [12].
Patrick Lambrix, Nahid Shahmehri and Niclas Wahllof [13] describe a search engine that tackles the problem of enhancing precision and recall in document retrieval. The main techniques they apply are the use of subsumption information and the use of default information. Subsumption information allows the retrieval of documents that include information about the desired topic as well as about more specific topics. Default information allows the retrieval of documents that include typical content information about a topic. The strict and default information are represented in an extension of description logics that can deal with defaults. They have tested the system on small-scale databases with promising results.
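The subsumption idea can be sketched with a toy topic taxonomy (the topics and documents below are invented, and the real system [13] works inside a description logic rather than a plain tree): a query on a topic also retrieves documents indexed under its more specific subtopics.

```python
# Topic taxonomy: topic -> its more specific subtopics (invented).
subtopics = {
    "databases": ["relational", "xml"],
    "relational": [],
    "xml": [],
}

# Documents indexed by their topic (invented).
docs_by_topic = {
    "databases": ["intro_to_databases.pdf"],
    "relational": ["sql_tutorial.pdf"],
    "xml": ["xsearch_paper.pdf"],
}

def retrieve(topic):
    """Return docs on the topic and, via subsumption, its subtopics."""
    results = list(docs_by_topic.get(topic, []))
    for sub in subtopics.get(topic, []):
        results.extend(retrieve(sub))  # recurse into narrower topics
    return results

print(retrieve("databases"))
# also returns the documents indexed under relational and xml
```

A query for "databases" thus recalls documents that a literal keyword match on that term would miss, which is exactly the recall improvement subsumption is meant to deliver.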
Satya Sai Prakash et al. present architecture and design specifications for new-generation search engines, highlighting the need for intelligence in search engines, and give a knowledge framework to capture intuition. A simulation methodology to study search engine behavior and performance is described. Simulation studies are conducted using a fuzzy satisfaction function and a heuristic search criterion after modeling client behavior and web dynamics [14].
Dan Meng and Xu Huang discuss an interactive intelligent search engine model based on user information preference [15]. The model can be an effective and useful way to realize individualized information search for different user information preferences. The framework uses artificial intelligence methods and technologies to improve the quality and effectiveness of information retrieval.
Xiajiong Shen, Yan Xu, Junyang Yu and Ke Zhang put forward an intelligent search engine whose information retrieval model is founded on the formal context of FCA (formal concept analysis) and which incorporates a browsing mechanism based on the concept lattice. Test data validates its feasibility, and the implementation of the FCA search engine indicates that the concept lattice of FCA is a useful way of supporting flexible management of documents according to conceptual relations [16].
4. TYPES OF SEMANTIC SEARCH ENGINES
Semantics is the process of communicating enough meaning to result in an action. A sequence of symbols can be used to communicate meaning, and this communication can then affect behavior. Semantics has been driving the next generation of the Web as the Semantic Web, where the focus is on the role of semantics in automated approaches to exploiting Web resources. 'Semantic' also indicates that the meaning of data on the web can be discovered not just by people but also by computers. The Semantic Web was thus created to extend the web and make data easy to reuse everywhere.
The Semantic web is being developed to overcome the following main limitations of the current Web [17]:
• The web content lacks a proper structure regarding the representation of information.
• Ambiguity of information resulting from poor interconnection of information.
• Automatic information transfer is lacking.
• Inability to deal with an enormous number of users and content ensuring trust at all levels.
• Incapability of machines to understand the provided information due to lack of a universal format.
4.1 Semantic search engines
Currently, many semantic search engines have been developed and implemented in different working environments, and these mechanisms can be put into use to realize present search engines.

Alcides Calsavara and Glauco Schmidt propose and define a novel kind of service for the semantic search engine. A semantic search engine stores semantic information about Web resources and is able to solve complex queries, also considering the context at which a Web resource is targeted; they show how a semantic search engine may be employed to let clients obtain information about commercial products and services, as well as about sellers and service providers, which can be hierarchically organized [18]. Semantic search engines may seriously contribute to the development of electronic business applications, since they are based on strong theory and widely accepted standards.
Sara Cohen, Jonathan Mamou et al. presented XSEarch, a semantic search engine for XML [19]. It has a simple query language suitable for a naïve user and returns semantically related document fragments that satisfy the user's query. Query answers are ranked using extended information-retrieval techniques and are generated in an order similar to the ranking. Advanced indexing techniques were developed to facilitate an efficient implementation of XSEarch. The performance of the different techniques, as well as recall and precision, were measured experimentally. These experiments indicate that XSEarch is efficient, scalable, and ranks quality results highly.
Bhagwat and Polyzotis propose Eureka, a semantic-based file system search engine which uses an inference model to build links between files and a File Rank metric to rank the files according to their semantic importance [20]. Eureka has two main parts: a) a crawler extracts files from the file system and generates two kinds of indices: keyword indices that record the keywords from crawled files, and a rank index that records the File Rank metrics of the files; b) when search terms are entered, the query engine matches them against the keyword indices and determines the matched file sets and their ranking order by information retrieval-based metrics and the File Rank metrics.
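Reference [20] does not publish the File Rank formula, so the following is only a plausible stand-in: a PageRank-style power iteration over an invented file link graph, where a file is important if important files link to it.

```python
# Inferred links between files (invented): file -> files it points to.
links = {"a.txt": ["b.txt"], "b.txt": ["c.txt"],
         "c.txt": ["a.txt", "b.txt"]}
files = list(links)

def rank(damping=0.85, iters=50):
    """PageRank-style power iteration (an assumed stand-in for File Rank)."""
    r = {f: 1.0 / len(files) for f in files}
    for _ in range(iters):
        # Each file keeps a baseline share and receives link mass.
        new = {f: (1 - damping) / len(files) for f in files}
        for f, outs in links.items():
            share = damping * r[f] / len(outs)
            for out in outs:
                new[out] += share
        r = new
    return r

scores = rank()
print(max(scores, key=scores.get))  # the most linked-to file wins
```

Precomputing such scores at crawl time, as Eureka's rank index does, lets the query engine combine them cheaply with keyword match scores at search time.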
Wang et al. propose a semantic search methodology to retrieve information from normal tables, which has three main steps: identifying semantic relationships between table cells; converting tables into data in database form; and retrieving the target data with query languages [21]. The research objective defined by the authors is how to use a given table and given domain knowledge to convert the table into a database table with semantics. The authors' approach is to denote the layout by a layout syntax grammar and match these
denotations against given templates, which can be used to analyze the semantics of the table cells. A semantics-preserving transformation is then used to transform the tables into database format.
Kandogan et al. develop Avatar, a semantic search engine which combines traditional text search with the use of ontology annotations [22]. Avatar has two main functions: a) extraction and representation, by means of the UIMA framework, a workflow consisting of a chain of annotators whose annotations are extracted from documents and stored in the annotation store; b) interpretation, a process of automatically transforming a keyword search into several precise searches. Avatar consists of two main parts: a semantic optimizer and a user interaction engine. When a query is entered into the former, it outputs a list of ranked interpretations for the query; the top-ranked interpretations are then passed to the latter, which displays the interpretations and the documents retrieved from them.
4.2 Ontology search engines
Maedche et al. designed an integrated approach for ontology searching, reuse and update [23]. In its architecture, an ontology registry stores metadata about ontologies, while ontology servers store the ontologies themselves. The ontologies in the distributed ontology servers can be created, replicated and evolved. Ontology metadata in the registry can be queried, and is registered when a new ontology is created. An ontology search in the registry is executed under two conditions: query-by-example restricts the search fields and search terms, and query-by-term restricts the hyponyms of the terms for search.
Georges Gardarin et al. discuss SEWISE [24], an ontology-based Web information system to support Web information description and retrieval. Using domain ontologies, SEWISE can map text information from various Web sources into one uniform XML structure, making the semantics hidden in text accessible to programs. The textual information of interest is automatically extracted by Web wrappers from the various Web sources, and text mining techniques such as categorization and summarization are then used to process the retrieved text.
5. SOME COMMON ISSUES
We have presented a preliminary survey of the existing and dynamic area of intelligent semantic search engines and methods. Although we do not claim this survey is comprehensive, some common issues in current semantic search engines and methods can be summarized as follows:
a) Low precision and high recall
Some intelligent semantic search engines cannot show significant performance in improving precision and lowering recall. In Ding's semantic flash search engine, for example, the resource of the search engine is based on the top-50 results returned by Google, which is not a semantic search engine, and which could therefore yield low precision and high recall [25].
b) Identifying the intention of the user
User intention identification plays an important role in intelligent semantic search engines. For example, Chiung-Hon Leon Lee introduced a method for analyzing the request terms to fit the user's intention, so that the service provided will be more suitable for the user [26].
c) Extrapolating individual user patterns to global users
An early search engine offered disambiguation of search terms: a user could enter an ambiguous term (e.g., Java) and the search engine would return a list of alternatives (coffee, the programming language, the island in the South Seas).

d) Inaccurate queries
Users typically lack domain-specific knowledge and do not include all potential synonyms and variations in the query; often a user has a problem but is not sure how to phrase it.
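One common remedy for such inaccurate queries is automatic query expansion. A minimal sketch with an invented synonym table (real systems would draw on a thesaurus such as WordNet rather than a hand-made dictionary):

```python
# Tiny hand-made synonym table, invented for illustration only.
synonyms = {
    "car": ["automobile", "vehicle"],
    "buy": ["purchase"],
}

def expand(query):
    """Append each query term's known synonyms to the query."""
    terms = []
    for t in query.split():
        terms.append(t)
        terms.extend(synonyms.get(t, []))
    return " ".join(terms)

print(expand("buy car"))
# buy purchase car automobile vehicle
```

The expanded query can then be fed to an ordinary keyword engine, recovering documents that use a synonym the user did not think to type.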
6. CONCLUSIONS
In this paper, we make a brief survey of the existing literature on intelligent semantic search technologies and review its characteristics. In addition, the issues within the reviewed intelligent semantic search methods and engines are summarized from four perspectives: differences between designers' and users' perceptions, static knowledge structure, low precision and high recall, and lack of experimental tests.
In the future, our work will focus on deeper and broader research in the field of intelligent semantic search, with the purpose of assessing the current state of the field and promoting the further development of intelligent semantic search engine technologies.
REFERENCES
[1] Berners-Lee, T., Hendler, J. and Lassila, O. “The Semantic Web”, Scientific
American, May 2001.
[2] Deborah L. McGuinness, "Ontologies Come of Age", in Dieter Fensel, Jim Hendler, Henry Lieberman, and Wolfgang Wahlster, editors, Spinning the Semantic Web: Bringing the World Wide Web to Its Full Potential, MIT Press, 2002.
[3] Ramprakash et al “Role of Search Engines in Intelligent Information Retrieval on Web”,
Proceedings of the 2nd National Conference; INDIACom-2008.
[4] T. Berners-Lee and M. Fischetti, Weaving the Web, chapter "Machines and the Web", pp. 177-198, 1999.
[5] D. Fensel, W. Wahlster, H. Lieberman, Spinning the Semantic Web: Bringing the World Wide Web to Its Full Potential, MIT Press, 2003.
[6] G. Bhalotia et al., "Keyword searching and browsing in databases using BANKS", 18th Intl. Conf. on Data Engineering (ICDE 2002), San Jose, USA, 2002.
[7] D. Tümer, M. A. Shah, and Y. Bitirim, An Empirical Evaluation on Semantic Search
Performance of Keyword-Based and Semantic Search Engines: Google, Yahoo, Msn and
Hakia, 2009 4th International Conference on Internet Monitoring and Protection (ICIMP
’09) 2009.
[8] "Top 5 Semantic Search Engines".http://www.pandia.com/.
[9] H. Dietze and M. Schroeder, GoWeb: a semantic search engine for the life
science web. BMC bioinformatics,Vol. 10, No. Suppl 10, pp. S7, 2009.
[10] Fu-Ming Huang et al. “Intelligent Search Engine with Semantic Technologies”
[11] S. A. Inamdar1 and G. N. Shinde “An Agent Based Intelligent Search Engine System for
Web mining” Research, Reflections and Innovations in Integrating ICT in education.
2008.
[12] Li Zhan, Liu Zhijing, "Web Mining Based on Multi-Agents", IEEE Computer Society, 2003.
[13] Patrick Lambrix et al., "Dwebic: An Intelligent Search Engine based on Default Description Logics", 1997.
[14] K. Satya Sai Prakash and S. V. Raghavan “Intelligent Search Engine: Simulation to
Implementation”, In the proceedings of 6th International conference on Information
Integration and Web-based Applications and Services (iiWAS2004), pp. 203-212,
September 27 - 29, 2004, Jakarta, Indonesia, ISBN 3-85403-183-01.
[15] Dan Meng, Xu Huang “An Interactive Intelligent Search Engine Model Research Based
on User Information Preference”, 9th International Conference on Computer Science and
Informatics, 2006 Proceedings, ISBN 978-90-78677-01-7.
[16] Xiajiong Shen Yan Xu Junyang Yu Ke Zhang “Intelligent Search Engine Based on
Formal Concept Analysis” IEEE International Conference on Granular Computing, pp
669, 2-4 Nov, 2007.
[17] Sanjib Kumar, Sanjay Kumar Malik, "Towards Semantic Web Based Search Engines", National Conference on Advances in Computer Networks & Information Technology (NCACNIT-09), March 24-25,
[18] F. F. Ramos, H. Unger, V. Larios (Eds.): LNCS 3061, pp. 145–157, Springer-Verlag
Berlin Heidelberg 2004.
[19] Cohen, S. Mamou, J. Kanza, Y. Sagiv, Y “XSEarch: A Semantic Search Engine for
XML” proceedings of the international conference on very large databases, pages 45-56,
2003.
[20] D. Bhagwat and N. Polyzotis, "Searching a file system using inferred semantic links," in
Proceedings of HYPERTEXT '05 Salzburg, 2005,
pp. 85-87.
[21] H. L. Wang, S. H. Wu, I. C. Wang, C. L. Sung, W. L. Hsu, and W. K. Shih, "Semantic
search on Internet tabular information extraction for answering queries," in Proceedings
of CIKM '00 McLean, 2000, pp.243-249.
[22] E. Kandogan, R. Krishnamurthy, S. Raghavan, S. Vaithyanathan, and H. Zhu, "Avatar
semantic search: a database approach to information retrieval," in Proceedings of
SIGMOD '06 Chicago, 2006, pp. 790-792.
[23] A. Maedche, B. Motik, L. Stojanovic, R. Studer, and R. Volz, "An infrastructure for
searching, reusing and evolving distributed ontologies," in Proceedings of WWW '03,
Budapest, 2003, pp. 439-448.
[24] www.georges.gardarin.free.fr/Articles/Sewise_NLDB2003.pdf.
[25] D. Ding, J. Yang, Q. Li, L. Wang, and W. Liu, "Towards a flash search engine based on
expressive semantics," in Proceedings of WWW Alt.'04 New York, 2004, pp. 472-473.
[26] Chiung-Hon Leon Lee, Alan Liu, "Toward Intention Aware Semantic Web Service
Systems," in Proceedings of the 2005 IEEE International Conference on Services
Computing (SCC'05), Vol. 1, pp. 69-76, 2005.
International Journal of Web & Semantic Technology (IJWesT) Vol.2, No.1, January 2011
Authors
G. Madhu completed his Master's degree in Mathematics from J.N.T. University, Hyderabad, in 2000 and
his M.Tech degree in Computer Science & Engineering from J.N.T. University, Hyderabad, India, in 2008.
He is now pursuing a PhD in Computer Science and Engineering at J.N.T. University, Hyderabad. He is
presently working as Sr. Assistant Professor in the Information Technology Department at VNR VJIET,
Hyderabad. His current research interests include ANN, data mining, rough sets, and the semantic web. He
is a professional member of the Indian Society for Rough Sets and ISTE.
Dr. A. Govardhan did his B.E. in Computer Science and Engineering at Osmania University College of
Engineering, Hyderabad, his M.Tech at Jawaharlal Nehru University, Delhi, and his Ph.D at Jawaharlal
Nehru Technological University, Hyderabad. He is presently working as Principal, JNTU Jagtial,
Karimnagar, A.P., India. He has 63 research publications in international and national journals and
conferences, and is also a reviewer of research papers for various conferences. His areas of interest include
databases, data warehousing & mining, information retrieval, computer networks, image processing
and object-oriented technologies.
Dr. T. V. K. Rajinikanth received his M.Tech from Osmania University, Hyderabad, in 2001 and his PhD
from Osmania University, Hyderabad, in 2007. He is currently working as HOD, Dept. of IT, at GRIET,
Hyderabad, A.P., India. He has several national and international journal and conference publications. His
research interests include data warehousing & mining, the semantic web, spatial data mining, and ANN.