The document describes an agent-based personalized e-catalog service system that uses ontologies to provide personalized catalog search and retrieval. Several agents work together: a user agent manages the user interface; a query generation agent formats search queries; a reasoning agent expands queries against a user-profile ontology; and a search agent retrieves catalog results, which are filtered before a ranking agent personalizes them for the individual user. The system aims to improve on traditional keyword-based catalog search by using ontologies to represent user interests and product domains semantically.
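The agent pipeline described above can be sketched as a chain of plain functions, one per agent. This is an illustrative toy, not the system's implementation: the ontology, catalog, profile, and scoring rule are all invented here to show how ontology-based query expansion and profile-based ranking compose.

```python
# Hypothetical sketch of the described agent pipeline; all data are invented.
ONTOLOGY = {"laptop": ["notebook", "ultrabook"], "camera": ["dslr"]}
CATALOG = ["notebook 13-inch", "ultrabook pro", "dslr kit", "office chair"]
USER_PROFILE = {"preferred_terms": {"ultrabook": 2.0, "notebook": 1.0}}

def generate_query(raw_input):                 # query generation agent
    return raw_input.strip().lower()

def expand_query(query, ontology):             # reasoning agent: ontology expansion
    return [query] + ontology.get(query, [])

def search(terms, catalog):                    # search agent: keyword retrieval
    return [item for item in catalog if any(t in item for t in terms)]

def rank(results, profile):                    # ranking agent: profile-weighted order
    weight = lambda item: sum(w for t, w in profile["preferred_terms"].items() if t in item)
    return sorted(results, key=weight, reverse=True)

results = rank(search(expand_query(generate_query("Laptop"), ONTOLOGY), CATALOG), USER_PROFILE)
print(results)  # ['ultrabook pro', 'notebook 13-inch']
```

The point of the sketch is the data flow: the raw query never reaches the catalog directly; it is normalized, semantically expanded, matched, and only then personalized.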
IJRET : International Journal of Research in Engineering and Technology - Improv... (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
With the rapid growth of scientific databases and web information systems, databases are becoming very large and complex in nature. They hold extensive and heterogeneous information, with huge numbers of relations and attributes, so it is very hard to design a set of static query forms that can answer the varied ad-hoc queries posed against these modern databases. There is therefore a need for a system that generates query forms dynamically according to the user's needs at run time. The proposed Dynamic Query Form (DQF) system provides such a query interface for large and complex databases. Its core idea is to capture user interests throughout user interactions and to adapt the query form iteratively. Each iteration consists of two types of user interaction: Query Form Enrichment and Query Execution. In Query Form Enrichment, DQF recommends a ranked list of query form components to the user, who selects the desired components into the current query form. In Query Execution, the user fills in the current query form and submits the query; DQF displays the results and collects the user's feedback on them. The user can fill in the form and submit queries to see results at every iteration, so the query form is progressively refined until the user is satisfied with the query results.
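The two interaction types can be illustrated with a minimal sketch, under the assumption that form components are ranked simply by how often they appeared in past queries; the actual DQF ranking model is more sophisticated, and all names and data here are hypothetical.

```python
# Toy sketch of one DQF iteration: enrichment (rank components) + execution.
PAST_QUERY_LOG = [["price", "brand"], ["price", "rating"], ["brand"]]

def rank_components(candidates, log):
    """Query Form Enrichment: rank candidate components by historical usage."""
    freq = {c: sum(c in q for q in log) for c in candidates}
    return sorted(candidates, key=lambda c: freq[c], reverse=True)

def execute(form, database):
    """Query Execution: project the chosen components out of each row."""
    return [{k: row[k] for k in form} for row in database]

db = [{"price": 10, "brand": "A", "rating": 4},
      {"price": 20, "brand": "B", "rating": 5}]

form = []                                   # the query form starts empty
ranked = rank_components(["price", "brand", "rating"], PAST_QUERY_LOG)
form.append(ranked[0])                      # the user picks the top suggestion
print(form, execute(form, db))
```

Each pass through this loop would grow `form` by one user-selected component, which is the iterative refinement the abstract describes.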
Framework for Product Recommendation for Review Dataset (rahulmonikasharma)
In the social networking era, product reviews have a significant influence on customers' purchase decisions, and the market has recognized this. The problem is that customers do not know how these systems work, which leads to trust issues. A different kind of system is therefore needed, one that helps customers process the information contained in product reviews. There are various approaches and algorithms for data filtering and recommendation; most existing recommender systems were developed for commercial domains with millions of users. In this paper we discuss recommendation systems and their related research, and implement different techniques of the recommender system.
Classification-based Retrieval Methods to Enhance Information Discovery on th... (IJMIT JOURNAL)
The widespread adoption of the World-Wide Web (the Web) has created challenges both for society as a whole and for the technology used to build and maintain the Web. The ongoing struggle of information retrieval systems is to wade through this vast pile of data and satisfy users by presenting them with information that most adequately fits their needs. On a societal level, the Web is expanding faster than we can comprehend its implications or develop rules for its use. The ubiquitous use of the Web has raised important social concerns in the areas of privacy, censorship, and access to information. On a technical level, the novelty of the Web and the pace of its growth have created challenges not only in the development of new applications that realize the power of the Web, but also in the technology needed to scale applications to accommodate the resulting large data sets and heavy loads. This thesis presents searching algorithms and hierarchical classification techniques for increasing a search service's understanding of web queries. Existing search services rely solely on a query's occurrence in the document collection to locate relevant documents. They typically do not perform any task- or topic-based analysis of queries using other available resources, and do not leverage changes in user query patterns over time. Provided within are a set of techniques and metrics for performing temporal analysis on query logs. Our log analyses are shown to be reasonable and informative, and can be used to detect changing trends and patterns in the query stream, thus providing valuable data to a search service.
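The temporal analysis idea above can be made concrete with a small sketch: count a query's daily volume and flag rising queries. The log format and the "rising" rule here are assumptions for illustration, not the thesis's actual metrics.

```python
# Illustrative temporal query-log analysis on an invented log.
from collections import Counter

log = [("2024-01-01", "flu symptoms"), ("2024-01-01", "weather"),
       ("2024-01-02", "flu symptoms"), ("2024-01-02", "flu symptoms")]

def daily_counts(log, query):
    """How many times the query was issued on each day."""
    return Counter(day for day, q in log if q == query)

def is_trending(log, query):
    """Crude trend test: latest day's volume exceeds the earliest day's."""
    counts = sorted(daily_counts(log, query).items())   # chronological order
    return len(counts) >= 2 and counts[-1][1] > counts[0][1]

print(is_trending(log, "flu symptoms"))  # True: 1 hit on day 1, 2 hits on day 2
```

A real system would smooth over longer windows and normalize by total traffic, but the core signal, change in per-query volume over time, is the same.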
A New Algorithm for Inferring User Search Goals with Feedback Sessions (IJERA Editor)
Different users may have different search goals when they submit the same query to a search engine. Inferring and analysing user search goals can be very useful for improving search engine relevance and the user experience. This work proposes a novel approach to infer user search goals by analysing search engine query logs. Once the user enters a query, the resulting URLs are filtered and pseudo-documents are generated; the server then applies a clustering mechanism to the URLs so that they are listed under different categories. First, feedback sessions are constructed from user click-through logs and can efficiently reflect users' information needs. Second, a novel approach generates pseudo-documents to better represent the feedback sessions for clustering. Third, a new criterion, Classified Average Precision (CAP), is proposed to evaluate the performance of inferring user search goals. Experimental results using user click-through logs from a commercial search engine validate the effectiveness of the proposed methods. The distributions of user search goals can also be useful in applications such as re-ranking web search results that cover different search goals.
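The clustering step can be illustrated with a toy stand-in: treat each clicked URL's pseudo-document as a bag of words and group documents greedily by word overlap. The paper's actual clustering and similarity measure differ; the threshold and data below are invented.

```python
# Toy clustering of pseudo-documents by Jaccard word overlap.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def cluster(pseudo_docs, threshold=0.3):
    """Greedily assign each document to the first cluster whose seed it matches."""
    clusters = []
    for doc in pseudo_docs:
        for c in clusters:
            if jaccard(doc, c[0]) >= threshold:   # compare against the cluster seed
                c.append(doc)
                break
        else:
            clusters.append([doc])                # no match: start a new cluster
    return clusters

docs = [["apple", "fruit", "vitamin"],
        ["apple", "iphone", "mac"],
        ["fruit", "vitamin", "diet"]]
print(len(cluster(docs)))  # 2: the food-related docs group apart from the Apple-product doc
```

The two resulting clusters mirror the two distinct search goals hiding behind the single ambiguous query term "apple".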
Context Driven Technique for Document Classification (IDES Editor)
In this paper we present an innovative hybrid Text Classification (TC) system that bridges the gap between statistical and context-based techniques. Our algorithm harnesses contextual information at two stages. First, it extracts a cohesive set of keywords for each category by using lexical references, implicit context as derived from LSA, and word-vicinity-driven semantics. Second, each document is represented by a set of context-rich features whose values are derived by considering both lexical cohesion and the extent of coverage of salient concepts via lexical chaining. After keywords are extracted, a subset of the input documents is apportioned as a training set; its members are assigned categories based on their keyword representation. These labeled documents are used to train binary SVM classifiers, one for each category. The remaining documents are supplied to the trained classifiers in the form of their context-enhanced feature vectors, and each document is finally ascribed its appropriate category by an SVM classifier.
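The bootstrap-labelling stage, assigning categories from keyword representations to build the SVM training set, can be sketched as follows. The keyword sets and scoring rule are invented examples; the paper derives its keywords from LSA and lexical chains rather than by hand.

```python
# Toy keyword-based bootstrap labeller producing a training set for per-category SVMs.
CATEGORY_KEYWORDS = {
    "sports": {"match", "goal", "team"},
    "finance": {"stock", "market", "shares"},
}

def label(document):
    """Assign the category whose keyword set overlaps the document most."""
    words = set(document.lower().split())
    scores = {cat: len(words & kws) for cat, kws in CATEGORY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None   # leave unlabelled if no keyword hits

print(label("The team scored a late goal"))       # sports
print(label("Shares fell as the market closed"))  # finance
```

Documents labelled this way would then serve as positive/negative examples for one binary classifier per category, as the abstract describes.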
User search goal inference and feedback session using fast generalized – fuzz... (eSAT Publishing House)
This work describes a new system, User Profile Relevant Results (UProRevs), which filters the results given by a search engine based on the user's profile. "UProRevs - User Profile Relevant Results" has been published by the IEEE Computer Society in the proceedings of the 10th International Conference on Information Technology.
Classifying web users in a personalised search setup is cumbersome due to the inherently dynamic nature of user browsing history. This fluctuating nature of user behaviour and user interest can be well interpreted within a fuzzy setting. Before user behaviour can be analysed, data on the nature of user interests has to be collected. This work proposes a fuzzy user classification model to suit a personalised web search environment. The user browsing data is collected using a customised browser designed for personalisation. The data are fuzzified, and fuzzy rules are generated by applying decision trees. Using the fuzzy rules, search pages are labelled to aid the grouping of user search interests. Evaluation shows that the proposed approach performs better than a Bayesian classifier.
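The fuzzification step can be sketched with triangular membership functions over a single browsing feature. The feature ("time spent on a page") and the breakpoints below are assumptions for illustration; the paper's actual features come from its customised browser.

```python
# Toy fuzzification of page dwell time into low/medium/high fuzzy sets.
def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy set rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_dwell_time(seconds):
    return {
        "low":    triangular(seconds, -1, 0, 30),
        "medium": triangular(seconds, 10, 60, 120),
        "high":   triangular(seconds, 60, 180, 301),
    }

print(fuzzify_dwell_time(45))  # {'low': 0.0, 'medium': 0.7, 'high': 0.0}
```

Vectors of such membership degrees, rather than crisp values, are what a decision-tree learner would then turn into fuzzy rules.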
Identity Resolution across Different Social Networks using Similarity Analysis (rahulmonikasharma)
Today, social networking sites have become very popular and are used by most people, because they play different roles in different fields and serve their users' needs over time. The most common reason people join these websites is to connect with others and share information. An individual may be signed up on more than one social networking site, so identifying the same individual across different sites is a challenging task. To accomplish this task, the proposed system applies a similarity analysis method to the available profile information.
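One plausible form of such similarity analysis is sketched below: a string similarity on names combined with the overlap of other profile attributes. The weights and the 0.5 decision threshold are illustrative assumptions, not the paper's values.

```python
# Toy profile-similarity score across two social-network profiles.
from difflib import SequenceMatcher

def profile_similarity(p1, p2):
    name_sim = SequenceMatcher(None, p1["name"].lower(), p2["name"].lower()).ratio()
    shared = set(p1["interests"]) & set(p2["interests"])
    union = set(p1["interests"]) | set(p2["interests"])
    interest_sim = len(shared) / len(union) if union else 0.0
    return 0.6 * name_sim + 0.4 * interest_sim   # assumed weighting

a = {"name": "John A. Smith", "interests": ["hiking", "chess", "python"]}
b = {"name": "John Smith",    "interests": ["chess", "python", "movies"]}
score = profile_similarity(a, b)
print(score > 0.5)  # True: likely the same individual under this threshold
```

Real systems add more evidence (profile photos, friend lists, locations), but the principle, fusing several weak similarity signals into one score, is the same.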
Intelligent Semantic Web Search Engines: A Brief Survey (dannyijwest)
The World Wide Web (WWW) allows people to share information globally from large database repositories, and the amount of information has grown into billions of documents. Searching this information requires specialized tools known generically as search engines. Although many search engines are available today, retrieving meaningful information remains difficult. To help search engines retrieve meaningful information intelligently, semantic web technologies are playing a major role. In this paper we present a survey of the search engine generations and the role of search engines in the intelligent web and semantic search technologies.
Designing of Semantic Nearest Neighbor Search: Survey (Editor IJCATR)
Conventional spatial queries, such as range search and nearest neighbor retrieval, involve only conditions on objects' geometric properties. Today, many modern applications call for novel forms of queries that aim to find objects satisfying both a spatial predicate and a predicate on their associated texts. For example, instead of considering all restaurants, a nearest neighbor query could instead ask for the restaurant that is closest among those whose menus contain "steak, spaghetti, brandy" all at the same time. Currently the best solution to such queries is based on the IR2-tree, which, as shown in this paper, has a few deficiencies that seriously impact its efficiency. Motivated by this, we develop a new access method called the spatial inverted index, which extends the conventional inverted index to cope with multidimensional data and comes with algorithms that can answer nearest neighbor queries with keywords in real time. As verified by experiments, the proposed techniques outperform the IR2-tree in query response time significantly, often by orders of magnitude.
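The query semantics can be pinned down with a naive baseline: scan every object, keep those whose text contains all query keywords, and return the closest match. The spatial inverted index exists precisely to avoid this full scan; the restaurant data below is invented.

```python
# Naive (full-scan) baseline for a nearest-neighbour query with keywords.
import math

restaurants = [
    {"name": "Bella", "pos": (1, 1), "menu": {"steak", "spaghetti", "brandy"}},
    {"name": "Grill", "pos": (0, 1), "menu": {"steak", "burger"}},
    {"name": "Vino",  "pos": (5, 5), "menu": {"steak", "spaghetti", "brandy"}},
]

def keyword_nn(query_pos, keywords, objects):
    """Closest object whose text contains every query keyword."""
    matches = [o for o in objects if keywords <= o["menu"]]   # subset test
    return min(matches, key=lambda o: math.dist(query_pos, o["pos"]), default=None)

best = keyword_nn((0, 0), {"steak", "spaghetti", "brandy"}, restaurants)
print(best["name"])  # Bella: Grill is nearer but fails the keyword predicate
```

Note how "Grill" is spatially closer to the query point yet excluded: the textual predicate prunes it, which is exactly the combined behaviour these queries require.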
Design of STT-RAM cell in 45nm hybrid CMOS/MTJ process (Editor IJCATR)
This paper evaluates the performance of Spin-Torque Transfer Random Access Memory (STT-RAM) basic memory cell configurations in a 45nm hybrid CMOS/MTJ process. The switching speed and current drawn by the cells have been calculated and compared. Cell design was done using Cadence tools, and the results obtained show good agreement with theoretical results.
A Review on a Web-based Punjabi to English Machine Transliteration System (Editor IJCATR)
The paper presents the transliteration of noun phrases from Punjabi to English using a statistical machine translation approach. Transliteration maps the letters of a source script to the letters of another language. Forward transliteration converts an original word or phrase in the source language into a word in the target language; backward transliteration is the reverse process that converts the transliterated word or phrase back into its original form. Transliteration is an important part of research in NLP. Natural Language Processing (NLP) is the ability of a computer program to understand human speech as it is spoken, and is an important component of AI. Artificial Intelligence is a branch of science that deals with helping machines find solutions to complex problems in a human-like fashion. The transliteration system is to be developed using SMT. Statistical Machine Translation (SMT) is a data-oriented statistical framework for translating text from one natural language to another based on the knowledge
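The statistical idea behind SMT-based transliteration can be illustrated in miniature: from aligned source/target letter pairs, estimate the most probable target letter for each source letter by counting. Latin letters stand in below for the actual Gurmukhi-to-Roman mappings, which are not reproduced here; the aligned data is invented.

```python
# Toy argmax-P(target | source) transliteration from counted letter alignments.
from collections import Counter, defaultdict

aligned_pairs = [("k", "k"), ("k", "c"), ("k", "k"),
                 ("sh", "sh"), ("sh", "sh"), ("sh", "s")]

counts = defaultdict(Counter)
for src, tgt in aligned_pairs:
    counts[src][tgt] += 1            # count co-occurrences of letter pairs

def transliterate_letter(src):
    """Pick the target letter with the highest count for this source letter."""
    return counts[src].most_common(1)[0][0] if src in counts else src

print(transliterate_letter("k"), transliterate_letter("sh"))  # k sh
```

A full SMT system combines such translation probabilities with a target-language model and a decoder; this sketch shows only the data-driven mapping at its core.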
Dielectric and Thermal Characterization of Insulating Organic Varnish Used in... (Editor IJCATR)
In recent years, much attention has been drawn to polymer nanocomposites for use in electrical applications, owing to the encouraging results obtained for their dielectric properties. Polymer nanocomposites are commonly defined as a combination of a polymer matrix and additives that have at least one dimension on the nanometer scale. Carbon nanotubes are of special interest as a possible organic component in such a composite coating: the carbon atoms are arranged in a hexagonal network rolled up to form a seamless cylinder that measures several nanometers across but can be thousands of nanometers long. There are many different types, but the two main categories are single-walled nanotubes (SWNTs) and multi-walled nanotubes (MWNTs), which are made from multiple layers of graphite. Carbon nanotubes are an example of a nanostructure varying in size from 1-100 nanometers (the scale of atoms and molecules), and nanocomposites are one of the fastest growing fields in nanotechnology. An extensive literature survey has been done on nanocomposites and on the synthesis and preparation of nano fillers. Based on this survey and an understanding of the technology, the following objectives were set:
Complete study of Organic varnish and CNT
Chemical properties
Electrical properties
Thermal properties
Mechanical properties
Synthesis and characterization of carbon nanotubes
Preparation of polymer nanocomposites
Study of characteristics of the nanocomposite insulation
Dimensioning an insulation system requires exact knowledge of the type, magnitude and duration of the electric stress while simultaneously considering the ambient conditions. On the other hand, the properties of the insulating materials in question must also be known, so that in addition to the proper material, the optimum, e.g. the most economical, design of the insulation system can be chosen.
A Literature Review on Synthesis and Characterization of enamelled copper wi... (Editor IJCATR)
This paper surveys various magazines, conference papers and journals to understand the properties of enamelled copper wires mixed with nano fillers, and the fundamental methods for the synthesis and characterization of carbon nanotubes. From these papers, it was noted that the research carried out on enamelled copper wires filled with nano fillers has shown good results. It was also noted that the research has been carried out mostly with single-metal catalysts, and that very little work has been done on the synthesis of carbon nanotubes using bimetallic catalysts.
Building a recommendation system based on the job offers extracted from the w... (IJECEIAES)
Recruitment, or job search, is increasingly conducted throughout the world by a large population of users through various channels, such as websites, platforms, and professional networks. Given the large volume of information in job descriptions and user profiles, it is complicated to appropriately match a user's profile with a job description, and vice versa. The traditional job search approach has drawbacks, since the job seeker needs to search for job offers on each recruitment platform, manage multiple accounts, and apply for the relevant vacancies, which wastes considerable time and effort. The contribution of this research work is the construction of a recommendation system based on job offers extracted from the web and on the e-portfolios of job seekers. After the data are extracted, natural language processing is applied so that the structured data are ready for filtering and analysis. The proposed system is content-based: it measures the degree of correspondence between the attributes of the e-portfolio and those of each job offer within the same list of competence specialties using the Euclidean distance, and the results are ranked in decreasing order of relevance so that the most relevant job offers are displayed first.
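The Euclidean-distance matching step can be sketched by encoding both the e-portfolio and each job offer as numeric skill vectors and ranking offers by distance, closest first. The skill names, weights, and offers below are invented for illustration.

```python
# Toy content-based matching: rank job offers by Euclidean distance to a portfolio.
import math

portfolio = {"python": 0.9, "sql": 0.7, "ml": 0.5}
offers = {
    "Data Engineer": {"python": 0.8, "sql": 0.9, "ml": 0.2},
    "ML Researcher": {"python": 0.9, "sql": 0.3, "ml": 0.9},
    "Web Developer": {"python": 0.4, "sql": 0.2, "ml": 0.0},
}

def distance(profile, offer):
    """Euclidean distance over the profile's skill dimensions (missing skills count as 0)."""
    return math.sqrt(sum((profile[k] - offer.get(k, 0.0)) ** 2 for k in profile))

ranking = sorted(offers, key=lambda name: distance(portfolio, offers[name]))
print(ranking)  # ['Data Engineer', 'ML Researcher', 'Web Developer']
```

Smaller distance means a closer attribute match, so sorting ascending yields exactly the "most relevant first" ordering the abstract describes.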
An Improvised Fuzzy Preference Tree Of CRS For E-Services Using Incremental A...IJTET Journal
Abstract—Web mining is the amalgamation of information accumulated by traditional data mining methodologies and techniques with information collected over the World Wide Web. A recommendation system is an application that supports the user in a decision-making process where they lack the personal experience to choose an item from a confounding set of alternative products or services. The key challenge in the development of recommender systems is to overcome problems such as single-level and static recommendation, which exist in real-world e-services. The goal is to enhance the prediction algorithm to discover the frequent items that are likely to be purchased. We examine the prior buying patterns of customers and use the knowledge thus procured to derive an item set that matches the purchasing mentality of a particular set of customers. Potential recommendations are modelled as a link structure among the items within an e-commerce website, which helps new customers find related products quickly. In the existing system, a fuzzy set consists of user preferences and item features alone, so the recommendations to customers are irrelevant and anonymous. In this paper, we suggest a recommendation technique that exploits the rapid spreading and data-sharing capability of a large customer network. The method follows a fuzzy tree-structured model, in which fuzzy set techniques express user preferences and purchased items are clustered to develop user-convenient recommendations. An incremental association rule mining algorithm is employed to find interesting relations between variables in a large database.
Machine learning based recommender system for e-commerceIAESIJAI
Nowadays, e-commerce is becoming an essential part of business for many reasons, including the simplicity, availability, richness and diversity of products and services, flexibility of payment methods and the convenience of shopping remotely without losing time. These benefits have greatly optimized the lives of users, especially with the technological development of mobile devices and the availability of the Internet anytime and anywhere. Because of their direct impact on the revenue of e-commerce companies, recommender systems are considered a must in this field. Recommender systems detect items that match the customer's needs based on the customer's previous actions and make them appear in an interesting way. Such a customized experience helps to increase customer engagement and purchase rates as the suggested items are tailored to the customer's interests. Therefore, perfecting recommendation systems that allow for more personalized and accurate item recommendations is a major challenge in the e-marketing world. In our study, we succeeded in developing an algorithm to suggest personal recommendations to customers using association rules via the Frequent-Pattern-Growth algorithm. Our technique generated good results with a high average probability of purchasing the next product suggested by the recommendation system.
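The association rules this study mines via FP-Growth can be illustrated with a brute-force support and confidence count (FP-Growth reaches the same frequent itemsets more efficiently via an FP-tree; the transactions below are invented for illustration):

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(transactions, min_support):
    """Support of every item pair that clears min_support.
    FP-Growth finds the same frequent itemsets via an FP-tree;
    this exhaustive count just shows what the output looks like."""
    n = len(transactions)
    counts = Counter()
    for basket in transactions:
        counts.update(combinations(sorted(set(basket)), 2))
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

def confidence(pair_support, transactions, antecedent, consequent):
    """Confidence of the rule antecedent -> consequent:
    supp(antecedent, consequent) / supp(antecedent)."""
    supp_a = sum(1 for b in transactions if antecedent in b) / len(transactions)
    key = tuple(sorted((antecedent, consequent)))
    return pair_support.get(key, 0.0) / supp_a
```

A recommender would then suggest, for a customer who just bought the antecedent, the consequents of the highest-confidence rules.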
A novel method for generating an elearning ontologyIJDKP
The Semantic Web provides a common framework that allows data to be shared and reused across
applications, enterprises, and community boundaries. The existing web applications need to express
semantics that can be extracted from users' navigation and content, in order to fulfill users' needs. Elearning
has specific requirements that can be satisfied through the extraction of semantics from learning
management systems (LMS) that use relational databases (RDB) as backend. In this paper, we propose
transformation rules for building owl ontology from the RDB of the open source LMS Moodle. It allows
transforming all possible cases in RDBs into ontological constructs. The proposed rules are enriched by
analyzing stored data to detect disjointness and totalness constraints in hierarchies, and calculating the
participation level of tables in n-ary relations. In addition, our technique is generic; hence it can be applied
to any RDB.
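A minimal sketch of the kind of transformation rule described above, assuming an illustrative mapping of tables to classes and foreign-key columns to object properties (the paper's actual rules are richer and also analyse stored data for disjointness and totalness):

```python
def table_to_owl(table, columns, foreign_keys):
    """Map one relational table to OWL constructs: the table becomes an
    owl:Class, plain columns become datatype properties, and foreign-key
    columns become object properties linking to the referenced class."""
    triples = [f":{table} rdf:type owl:Class ."]
    for col in columns:
        kind = "owl:ObjectProperty" if col in foreign_keys else "owl:DatatypeProperty"
        triples.append(f":{table}_{col} rdf:type {kind} .")
    return triples
```

For example, a Moodle-like `course` table with a `teacher_id` foreign key yields one class, one object property, and one datatype property.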
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Recommendation System Using Social Networking ijcseit
With the proliferation of electronic commerce and knowledge economy environment both organizations and
individuals generate and consume a large amount of online information. With the huge availability of
product information on websites, it often becomes difficult for a consumer to locate the item he wants to
buy. Recommendation Systems [RS] provide a solution to this. Many websites such as YouTube, e-Bay,
Amazon have come up with their own versions of Recommendation Systems. However, issues like lack of
data, changing data, changing user preferences and unpredictable items are faced by these
recommendation systems. In this paper we propose a model of Recommendation systems in e-commerce
domain which will address issues of cold start problem and change in user preference problem. Our work
proposes a novel recommendation system which incorporates user profile parameters obtained from Social
Networking website. Our proposed model SNetRS is a collaborative filtering based algorithm, which
focuses on user preferences obtained from FaceBook. We have taken domain of books to illustrate our
model.
Mining the Web Data for Classifying and Predicting Users’ RequestsIJECEIAES
Consumers are the most important asset of any organization. The commercial activity of an organization booms with the presence of loyal customers who are visibly content with the products and services being offered. In a dynamic market, understanding variations in clients' behavior can help executives establish effective promotional campaigns. A good number of new consumers are frequently picked up by traders during promotions. Though several of these consumers are one-time deal seekers, the promotions undeniably leave a positive impact on sales. It is crucial for traders to identify who can be converted into loyal consumers and then have them patronize products and services, to reduce promotion cost and increase the return on investment. This study integrates a classifier that allows prediction of the type of purchase that a customer would make, as well as the number of visits that he/she would make during a year. The proposed model also creates profiles of users and the brands or items they use. These profiles are useful not only for this particular prediction task, but also for other important tasks in e-commerce, such as client segmentation, product recommendation and client base growth for brands.
Text Mining in Digital Libraries using OKAPI BM25 ModelEditor IJCATR
The emergence of the internet has made vast amounts of information available and easily accessible online. As a result, most libraries have digitized their content in order to remain relevant to their users and to keep pace with the advancement of the internet. However, these digital libraries have been criticized for using inefficient information retrieval models that do not perform relevance ranking on the retrieved results. This paper proposes the use of the Okapi BM25 model in text mining as a means of improving the relevance ranking of digital libraries. Okapi BM25 was selected because it is a probability-based relevance ranking algorithm. A case study was conducted, and the model design was based on standard information retrieval processes. The performance of the Boolean, vector space, and Okapi BM25 models was compared for data retrieval. Relevant ranked documents were retrieved and displayed on the OPAC framework search page. The results revealed that Okapi BM25 outperformed the Boolean and vector space models. Therefore, this paper proposes using the Okapi BM25 model to reward terms according to their relative frequencies in a document so as to improve the performance of text mining in digital libraries.
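The Okapi BM25 scoring applied in this paper can be sketched with the standard BM25 formula and its usual defaults k1 = 1.5 and b = 0.75 (the corpus and documents below are illustrative):

```python
import math

def bm25_score(query_terms, doc, corpus, k1=1.5, b=0.75):
    """Okapi BM25: rewards a term by its frequency in the document,
    damped by document length and weighted by its rarity in the corpus.
    Documents are lists of tokens."""
    n = len(corpus)
    avgdl = sum(len(d) for d in corpus) / n      # average document length
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)         # document frequency
        idf = math.log((n - df + 0.5) / (df + 0.5) + 1)  # smoothed, non-negative
        tf = doc.count(term)                             # term frequency
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score
```

The saturation term in the denominator is what distinguishes BM25 from plain TF-IDF: repeated occurrences of a term add progressively less to the score.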
Green Computing, eco trends, climate change, e-waste and eco-friendlyEditor IJCATR
This study focused on the practice of using computing resources more efficiently while maintaining or increasing overall performance. Sustainable IT services require the integration of green computing practices such as power management, virtualization, improving cooling technology, recycling, electronic waste disposal, and optimization of the IT infrastructure to meet sustainability requirements. Studies have shown that costs of power utilized by IT departments can approach 50% of the overall energy costs for an organization. While there is an expectation that green IT should lower costs and the firm’s impact on the environment, there has been far less attention directed at understanding the strategic benefits of sustainable IT services in terms of the creation of customer value, business value and societal value. This paper provides a review of the literature on sustainable IT, key areas of focus, and identifies a core set of principles to guide sustainable IT service design.
Policies for Green Computing and E-Waste in NigeriaEditor IJCATR
Computers today are an integral part of individuals’ lives all around the world, but unfortunately these devices are toxic to the environment given the materials used, their limited battery life and technological obsolescence. Individuals are concerned about the hazardous materials ever present in computers, even if the importance of various attributes differs, and that a more environment -friendly attitude can be obtained through exposure to educational materials. In this paper, we aim to delineate the problem of e-waste in Nigeria and highlight a series of measures and the advantage they herald for our country and propose a series of action steps to develop in these areas further. It is possible for Nigeria to have an immediate economic stimulus and job creation while moving quickly to abide by the requirements of climate change legislation and energy efficiency directives. The costs of implementing energy efficiency and renewable energy measures are minimal as they are not cash expenditures but rather investments paid back by future, continuous energy savings.
Performance Evaluation of VANETs for Evaluating Node Stability in Dynamic Sce...Editor IJCATR
Vehicular ad hoc networks (VANETs) are a promising area of research that enables interconnection among moving vehicles and between vehicles and road side units (RSUs). In VANETs, mobile vehicles can be organized into clusters to promote interconnection links. The cluster arrangement, in terms of size and geographical extent, has a serious influence on the quality of communication. VANETs are a subclass of mobile ad hoc networks involving more complex mobility patterns; because of this mobility the topology changes very frequently, raising a number of technical challenges including network stability. There is thus a need for a cluster configuration that leads to a more stable, realistic network. The paper investigates various simulation scenarios in which clusters are generated using the k-means algorithm and their number is varied to find the more stable configuration in a realistic road scenario.
Optimum Location of DG Units Considering Operation ConditionsEditor IJCATR
The optimal sizing and placement of Distributed Generation units (DG) are becoming very attractive to researchers these days. In this paper a two stage approach has been used for allocation and sizing of DGs in distribution system with time varying load model. The strategic placement of DGs can help in reducing energy losses and improving voltage profile. The proposed work discusses time varying loads that can be useful for selecting the location and optimizing DG operation. The method has the potential to be used for integrating the available DGs by identifying the best locations in a power system. The proposed method has been demonstrated on 9-bus test system.
Analysis of Comparison of Fuzzy Knn, C4.5 Algorithm, and Naïve Bayes Classifi...Editor IJCATR
Early detection of diabetes mellitus (DM) can prevent or inhibit complications. Several laboratory tests must be done to detect DM, and the results of these tests are then converted into training data. The training data used in this study were generated from the UCI Pima database, with 6 attributes used to classify diabetes as positive or negative. There are various classification methods in common use, and in this study three of them were compared on one identical case: fuzzy KNN, the C4.5 algorithm, and the Naïve Bayes Classifier (NBC). The objective was to create software to classify DM using the tested methods and to compare the three methods on accuracy, precision, and recall. The results showed that the best method was fuzzy KNN, with average and maximum accuracy reaching 96% and 98%, respectively. In second place, NBC had respective average and maximum accuracy of 87.5% and 90%. Lastly, the C4.5 algorithm had average and maximum accuracy of 79.5% and 86%, respectively.
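The accuracy, precision, and recall on which the three classifiers are compared can be computed as in this small sketch (the labels are illustrative, not the Pima data):

```python
def metrics(y_true, y_pred, positive=1):
    """Accuracy, precision and recall for a binary classifier's output."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)  # true positives
    fp = sum(t != positive and p == positive for t, p in pairs)  # false positives
    fn = sum(t == positive and p != positive for t, p in pairs)  # false negatives
    accuracy = sum(t == p for t, p in pairs) / len(pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall
```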
Web Scraping for Estimating new Record from Source SiteEditor IJCATR
Research in the field of competitive intelligence and research in the field of web scraping have a mutually symbiotic relationship. In today's information age, websites serve as a main data source. This research focuses on how to get data from websites and how to slow down the download intensity. One problem is that source websites are autonomous, so the structure of their content is liable to change at any time. Another is that the Snort intrusion detection system installed on the server detects crawler bots. The researchers therefore propose using the Mining Data Records (MDR) method together with exponential smoothing, so that the scraper adapts to changes in content structure and fetches automatically following the pattern of news occurrences. In the tests, with a threshold of 0.3 for MDR and a similarity threshold score of 0.65 for STM, recall and precision values produce an average f-measure of 92.6%. The exponential smoothing estimation using α = 0.5 produces an MAE of 18.2 duplicate data records, slowing the download to 3.6 data records from the 21.8 data records of a fixed download/fetch schedule in the average time of news occurrence.
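The exponential smoothing estimate with α = 0.5 mentioned in the tests follows the standard recurrence S_t = αx_t + (1 − α)S_{t−1}; a minimal sketch:

```python
def exponential_smoothing(series, alpha=0.5):
    """Simple exponential smoothing: S_t = alpha*x_t + (1 - alpha)*S_{t-1}.
    Returns the smoothed level after the last observation, which also
    serves as the one-step-ahead forecast (e.g. of the next fetch time)."""
    level = series[0]                    # initialize with the first observation
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level
```

With α = 0.5 each new observation and the accumulated history are weighted equally, which is why the scheduler adapts quickly to shifts in the news-publication pattern while still damping one-off spikes.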
Evaluating Semantic Similarity between Biomedical Concepts/Classes through S...Editor IJCATR
Most existing semantic similarity measures that use ontology structure as their primary source can measure semantic similarity between concepts/classes within a single ontology. The ontology-based semantic similarity techniques, namely structure-based techniques (the Path Length measure, Wu and Palmer's measure, and Leacock and Chodorow's measure), information content-based techniques (Resnik's measure, Lin's measure), and biomedical domain ontology techniques (Al-Mubaid and Nguyen's measure (SimDist)), were evaluated relative to human experts' ratings and compared on sets of concepts using the ICD-10 "V1.0" terminology within the UMLS. The experimental results validate the efficiency of the SemDist technique in a single ontology, and demonstrate that, compared with the existing techniques, SemDist gives the best overall correlation with experts' ratings.
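One of the structure-based measures evaluated, Wu and Palmer's, can be sketched over a toy is-a taxonomy (the concept names below are invented for illustration, not ICD-10 codes):

```python
def ancestors(taxonomy, concept):
    """The concept followed by its parents up to the root of the is-a tree.
    `taxonomy` maps each non-root concept to its single parent."""
    chain = [concept]
    while concept in taxonomy:
        concept = taxonomy[concept]
        chain.append(concept)
    return chain

def wu_palmer(taxonomy, c1, c2):
    """Wu & Palmer similarity: 2*depth(LCS) / (depth(c1) + depth(c2)),
    where LCS is the lowest common subsumer of the two concepts."""
    a1, a2 = ancestors(taxonomy, c1), set(ancestors(taxonomy, c2))
    lcs = next(a for a in a1 if a in a2)          # first shared ancestor
    depth = lambda c: len(ancestors(taxonomy, c))  # root has depth 1
    return 2 * depth(lcs) / (depth(c1) + depth(c2))
```

The measure is 1.0 for identical concepts and shrinks as the common subsumer moves toward the root, which is what makes it usable as a concept-to-concept distance inside a single ontology.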
Semantic Similarity Measures between Terms in the Biomedical Domain within f...Editor IJCATR
Techniques and tests are tools used to define how to measure the goodness of an ontology or its resources. Measuring the similarity between biomedical classes/concepts is an important task for biomedical information extraction and knowledge discovery. Most semantic similarity techniques can be adopted for use in the biomedical domain (UMLS), and many experiments have been conducted to check the applicability of these measures. In this paper, we investigate measuring the semantic similarity between two terms within a single ontology or multiple ontologies in ICD-10 "V1.0" as the primary source, and compare our results to human experts' scores using the correlation coefficient.
A Strategy for Improving the Performance of Small Files in Openstack Swift Editor IJCATR
Adding an aggregated storage module is an effective way to improve the storage access performance of small files in Openstack Swift. Because Swift incurs too many disk operations when querying metadata, the transfer performance for large numbers of small files is low. In this paper, we propose an aggregated storage strategy (ASS) and implement it in Swift. ASS comprises two parts: merge storage and index storage. In the first stage, ASS arranges the write request queue in chronological order and stores objects in volumes, which are the large files actually stored in Swift. In the second stage, the object-to-volume mapping information is stored in a key-value store. The experimental results show that ASS can effectively improve Swift's small-file transfer performance.
Integrated System for Vehicle Clearance and RegistrationEditor IJCATR
Efficient management and control of government's cash resources rely on government banking arrangements. Nigeria, like many low-income countries, employed fragmented systems in handling government receipts and payments. In 2016, Nigeria implemented a unified structure as recommended by the IMF, in which collecting all government funds in one account would reduce borrowing costs, extend credit and improve government's fiscal policy, among other benefits. This situation motivated us to design and implement an integrated system for vehicle clearance and registration. The system complies with the new Treasury Single Account policy to enable proper interaction and collaboration among the five agencies (NCS, FRSC, SBIR, VIO and NPF) saddled with vehicular administration and activities in Nigeria. Since the system is web based, the Object Oriented Hypermedia Design Methodology (OOHDM) is used, with tools such as PHP, JavaScript, CSS, HTML, AJAX and other web development technologies. The result is a web-based system that gives proper information about a vehicle, from the exact date of importation to registration and licence renewal. Vehicle owner information, custom duty information, plate number registration details, etc. can also be efficiently retrieved from the system by any of the agencies without contacting another agency. The number plate will also no longer be the only means of vehicle identification, as is presently the case in Nigeria: the unified system automatically generates and assigns a Unique Vehicle Identification Pin Number (UVIPN) to the vehicle on payment of duty, and the UVIPN is linked to the various agencies in the management information system.
Assessment of the Efficiency of Customer Order Management System: A Case Stu...Editor IJCATR
The Supermarket Management System deals with the automation of buying and selling of goods and services. It includes both the sale and purchase of items. The Supermarket Management System project is developed with the objective of making the system reliable, easier, faster, and more informative.
Energy-Aware Routing in Wireless Sensor Network Using Modified Bi-Directional A*Editor IJCATR
Energy is a key component in a Wireless Sensor Network (WSN)[1]: the system cannot run without adequate power units, and limited energy is one of the defining characteristics of WSNs[2]. Much research has been done to develop strategies to overcome this problem; one of them is the clustering technique. A popular clustering technique is Low Energy Adaptive Clustering Hierarchy (LEACH)[3], in which clustering determines a Cluster Head (CH) that is then assigned to forward packets to the Base Station (BS). In this research, we propose another clustering technique, which applies the Betweenness Centrality (BC) measure from Social Network Analysis in the setup phase, while in the steady-state phase a heuristic search algorithm, Modified Bi-Directional A* (MBDA*), is implemented. The experiment deployed 100 static nodes in a 100x100 area, with one Base Station at coordinates (50,50), and ran for 5000 rounds to establish the reliability of the system. The designed routing protocol strategy was evaluated on network lifetime, throughput, and residual energy. The results show that BC-MBDA* is better than LEACH. This is explained by LEACH's dynamic way of determining the CH, which changes in every data transmission process and therefore consumes energy on repeated CH computation. In contrast, BC-MBDA* determines the CH statically, so it can decrease energy usage.
Security in Software Defined Networks (SDN): Challenges and Research Opportun...Editor IJCATR
In networks, the rapidly changing traffic patterns of search engines, Internet of Things (IoT) devices, Big Data and data centers have thrown up new challenges for legacy networks and prompted the need for a more intelligent and innovative way to dynamically manage traffic and allocate limited network resources. Software Defined Networking (SDN), which decouples the control plane from the data plane through network virtualization, aims to address these challenges. This paper explores the SDN architecture and its implementation with the OpenFlow protocol. It also assesses some of SDN's benefits over traditional network architectures, its security concerns, and how these can be addressed in future research and related work in emerging economies such as Nigeria.
Measure the Similarity of Complaint Document Using Cosine Similarity Based on...Editor IJCATR
Report handling in the "LAPOR!" (Laporan, Aspirasi dan Pengaduan Online Rakyat) system depends on the system administrator manually reading every incoming report [3]. Manual reading can lead to errors in handling complaints [4]; when the data flow is huge and grows rapidly, it takes at least three days to prepare a confirmation and is sensitive to inconsistencies [3]. In this study, the authors propose a model that measures the similarity of an incoming Query against archived Documents. The authors employed a class-based indexing term weighting scheme and cosine similarity to analyse document similarity. The CoSimTFIDF, CoSimTFICF and CoSimTFIDFICF values were used as features for a K-Nearest Neighbour (K-NN) classifier. The optimum evaluation result uses 75% of the data for training and 25% for testing with the CoSimTFIDF feature, delivering a high accuracy of 84%; the value k = 5 obtains the highest accuracy of 84.12%.
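The TF-IDF weighting and cosine similarity underlying the CoSimTFIDF feature can be sketched as follows (a plain TF-IDF variant over token lists, not the paper's class-based indexing scheme; the complaint texts are invented):

```python
import math
from collections import Counter

def tfidf(doc, corpus):
    """Sparse TF-IDF weight vector for one tokenized document, with IDF
    computed against the archive `corpus` (a list of token lists)."""
    n = len(corpus)
    tf = Counter(doc)
    return {t: tf[t] * math.log(n / sum(1 for d in corpus if t in d))
            for t in tf if any(t in d for d in corpus)}

def cosine(v1, v2):
    """Cosine similarity of two sparse term-weight vectors."""
    dot = sum(w * v2.get(t, 0.0) for t, w in v1.items())
    norm = lambda v: math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm(v1) * norm(v2)) if v1 and v2 else 0.0
```

An incoming complaint would be vectorized the same way and compared against every archived report; the K-NN classifier then votes over the k most similar archive entries.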
Hangul Recognition Using Support Vector MachineEditor IJCATR
The recognition of Hangul images is more difficult than that of Latin script because of the structural arrangement: Hangul is arranged in two dimensions, while Latin runs only from left to right. The current research creates a system to convert Hangul images into Latin text for use as learning material for reading Hangul. In general, the image recognition system is divided into three steps. The first is preprocessing, which includes binarization, segmentation through the connected-component labeling method, and thinning with the Zhang-Suen algorithm to reduce pattern information. The second is extracting features from every image, identified through the chain code method. The third is recognition using a Support Vector Machine (SVM) with several kernels, applied to both letter images and Hangul word recognition. The data consist of 34 letters, each with 15 different patterns, giving 510 patterns in total, divided into 3 data scenarios. The highest result achieved is 94.7%, using the SVM polynomial and radial basis function kernels; the recognition rate is influenced by the amount of training data. The recognition of Hangul words applies to type-2 Hangul words with 6 different patterns, where the difference arises from the change of font type. The fonts chosen for training data are Batang, Dotum, Gaeul, Gulim, and Malgun Gothic, with Arial Unicode MS used to test the data. The lowest accuracy, 69%, is obtained with the SVM radial basis function kernel, while the linear and polynomial kernels both give 72%.
Application of 3D Printing in EducationEditor IJCATR
This paper provides a review of literature concerning the application of 3D printing in the education system. The review identifies that 3D Printing is being applied across the Educational levels [1] as well as in Libraries, Laboratories, and Distance education systems. The review also finds that 3D Printing is being used to teach both students and trainers about 3D Printing and to develop 3D Printing skills.
Survey on Energy-Efficient Routing Algorithms for Underwater Wireless Sensor ...Editor IJCATR
In the underwater environment, routing mechanisms are used for information retrieval. Three to four types of nodes are used: sink nodes, deployed on the water surface to collect information; courier/super/AUV (dolphin) nodes, powerful nodes deployed in the middle of the water to forward packets; ordinary forwarder nodes, which can be deployed from the bottom to the surface of the water; and source nodes deployed on the seabed to extract valuable information from the bottom of the sea. Underwater, the battery power of the nodes is limited, and node lifetime can be extended through better selection of the routing algorithm. This paper surveys energy-efficient routing algorithms and their routing mechanisms for prolonging node battery power, and analyses their performance to identify the better route selection mechanisms.
Comparative analysis on Void Node Removal Routing algorithms for Underwater W...Editor IJCATR
The design of routing algorithms faces many challenges in the underwater environment, such as propagation delay, acoustic channel behaviour, limited bandwidth, high bit error rate, limited battery power, underwater pressure, node mobility, 3D localization and deployment, and underwater obstacles (voids). This paper focuses on underwater voids, which affect the overall performance of the entire network. Most researchers have addressed void removal through alternate path selection mechanisms, but the approaches still need improvement. This paper also examines the architecture and operation of the existing algorithms through their merits and demerits, and presents an analytical performance comparison through which we identify the better approach for removing voids.
Decay Property for Solutions to Plate Type Equations with Variable CoefficientsEditor IJCATR
In this paper we consider the initial value problem for a plate type equation with variable coefficients and memory in R^n (n ≥ 1), which is of the regularity-loss type. Using spectral resolution, we study the pointwise estimates in the spectral space of the fundamental solution to the corresponding linear problem. Appealing to these pointwise estimates, we obtain the global existence and decay estimates of solutions to the semilinear problem by employing the fixed point theorem.
Agent based Personalized e-Catalog Service System
International Journal of Computer Applications Technology and Research
Volume 3, Issue 9, 564-569, 2014
M. Thangaraj
Department of Computer Science
Madurai Kamaraj University, Madurai,
Tamil Nadu
M. Chamundeeswari
Department of Computer Science
V.V.V College for Women
(Affiliated to Madurai Kamaraj University)
Virudhunagar, Tamil Nadu
Abstract: With the emergence of the e-Catalog, commodity queries in distributed environments have found increasingly wide application in e-commerce. However, e-Catalogs are often autonomous and heterogeneous, so effectively integrating and querying them is a delicate and time-consuming task. Electronic catalogs contain rich semantics associated with products and serve as a challenging domain for ontology application. Ontology is concerned with the nature and relations of being, and it can play a crucial role in e-commerce as a formalization of the e-Catalog. User personalized catalog ontology aims at capturing a user's interests in a working domain, which forms the basis of providing personalized e-Catalog services. This paper describes a prototype of an ontology-based information retrieval agent: we present an ontological model of e-Catalogs and design an Agent based Personalized e-Catalog Service System (ABPECSS), which matches the user personalized catalog ontology against the domain e-Catalog ontology through ontology integration.
Keywords: personalization, semantic web, information retrieval, ontology, re-ranking algorithms, knowledge base, user profile, e-Catalog
1. INTRODUCTION
As Internet technologies develop rapidly, companies
are shifting their business activities to e-Business on the
Internet. Worldwide competition among corporations
accelerates the reorganization of corporate sections and
partner groups, breaking up conventional, steady business
relationships. For instance, a marketplace would lower the
barriers between industries and business categories and then
connect their enterprise systems.
Electronic catalogs contain the data of parts and products
information used in the heavy electric machinery industry.
They contain not only the commercial specifications for
parts (manufacturer name, price, etc.), but also the technical
specifications (physical size, performance, quality, etc.).
Clearly defined product information is a necessary
foundation for collaborative business processes.
Furthermore, semantically enriched product information may
enhance the quality and effectiveness of business
transactions. As a multifunctional applied system, the
electronic catalog serves advertisement, marketing, selling,
and client support, and at the same time acts as a retail channel.
As the number of Internet users and accessible Web
pages grows, it is becoming more and more difficult for users
to find the documents among e-Catalogs that are relevant to
their particular needs. Users can turn to a search engine,
entering keywords to retrieve e-Catalogs containing those
keywords. But both the navigation policy and keyword search
have their own problems; indeed, approximately one half of
all retrieved documents have been reported to be irrelevant.
The main reasons for poor search results are that (1) many
words have multiple meanings; (2) keywords are not enough
to express the rich concepts and natural semantics of
customers' queries; (3) property queries lack semantic
support, making it difficult to search for knowledge; and (4)
related merchandise cannot be returned. What is needed is a
solution that personalizes e-Catalog selection for each user. A
semantically rich user model and an efficient way of
processing semantics are the keys to providing personalized
e-Catalog services. In view of these limitations, we develop a
personalized user-model ontology, called the user
personalized catalog ontology, which has the same level of
semantics as the domain ontology.
The rest of this paper is structured as follows.
Section 2 describes related work. Section 3 explains the
theory of the proposed system. Section 4 puts forward our
modeling methodology for generating the user personalized
catalog ontology and the product domain ontology. Section 5
presents the implementation of the system and its evaluation.
Conclusions and future work are drawn in Section 6.
2. RELATED WORK
E-catalogues play a critical role in e-procurement
marketplaces. They can be used in both the tendering
(pre-award) and the purchasing (post-award) processes.
Companies use e-catalogues to exchange product information
with business partners. Suppliers use e-catalogues to describe
the goods or services they offer for sale, while buyers may use
e-catalogues to specify the items they want to buy [1, 2].
Matching a buyer's product request with the product
e-catalogs provided by suppliers helps companies reduce the
effort needed to find partners in e-marketplaces [5, 7].
2.1 E-Catalog Ontology Design
Research in recent years shows that applying
ontology to e-commerce scenarios brings benefits such as
solving the interoperability problems between different
e-commerce systems [3, 4]. In particular, the e-Catalog, a key
component of e-commerce systems, seems to be the most
suitable domain within e-commerce where ontology can
express the e-Catalog on a semantic level. It is possible for
e-business systems to offer diverse interoperable services by
sharing a well-defined e-Catalog model containing rich
semantics. Fensel [5] described in principle how ontologies
can support the integration of heterogeneous and distributed
information in e-commerce scenarios, which is mainly based
on product catalogs, and what tasks need to be mastered. The
e-Catalog ontology model is defined as ECO(concepts,
relationships, properties, axioms, individuals).
www.ijcat.com 564
International Journal of Computer Applications Technology and Research, Volume 3, Issue 9, 564-569, 2014
The traditional keyword-based retrieval method
cannot satisfy massive, heterogeneous, personalized catalog
services. [8] therefore introduced meta search engines, but
this method provides only passive service. [9] provided an
intelligent catalog recommendation method that maps
customer requirements to product categories. [10] brought
forward a personalized e-Catalog model based on customer
interests, and [11] is a personalized catalog service
community; WebCatalog [12] designed an enterprise
e-Catalog based on customer behavior. The knowledge
representation and acquisition of the client catalog thus
become the key problems. Toward an effective method, a
K-clustering algorithm and an e-Catalog segmentation
approach are described in [13], and [14] described a customer
segmentation method based on brand, product, and price. In
[15] the authors researched personalized catalog service with
one-to-one marketing using association rules and CART. In
recent years, personalized ontologies (also known as private
ontologies, such as [9]) have been introduced into e-Catalog
service; Peter Haase put forward a personalized ontology
learning theory based on user access and interest coordination
[16]. In distributed systems, there are shared domain
ontologies and personalized knowledge ontologies [17].
Therefore, applying personalized ontologies to personalized
e-Catalog service has important theoretical and practical
significance.
3. PROPOSED ARCHITECTURE
The personalized information retrieval system is
based on multi-agent cooperation: multiple agents collaborate
and communicate with one another to accomplish the task.
The system consists of a User Agent, a Query
Generation Agent, a Reasoning and Expanding Agent, a
Searching Agent and a Filtering Agent, a Personalized
Ranking Agent, and a Knowledge Base, as shown in Figure 1
[23]. All agents are monitored as a whole to fulfill the
system's functions, including information retrieval and
Knowledge Base updating.
[Figure placeholder: the User interacts with the User Agent; queries pass through the Query Generation Agent, the Reasoning and Expanding Agent, and the Searching & Filtering Agent to distributed catalog databases, and results return through the Personalized Ranking Agent. A semantic matching module runs a semantic matching algorithm between the personalized user catalog, built from the user profile, and the domain e-Catalog ontology.]
Figure 1 Architecture of the Agent Based Personalized e-Catalog Service System (ABPECSS)
(1) User Agent: The User Agent is the interface between the
user and the system, providing a friendly platform for users.
The User Agent also receives results from the Personalized
Ranking Agent and presents them personally to the user. The
user's browsing and rating behavior can be stored and learned
by the User Agent, so the user interest model is updated and
improved over time.
(2) Query Generation Agent: The QGA receives the user's
retrieval request, transforms it into a prescribed format, and
transmits the formatted request to the Reasoning and
Expanding Agent.
(3) Reasoning and Expanding Agent: This agent receives the
formatted user request from the QGA and expands it
according to the user interest model. Afterwards, the refined
request is transmitted to the Searching & Filtering Agent.
(4) Searching Agent and Filtering Agent: The Searching
Agent collects all data from the initiative Searching Agent or
meta-Searching Agent, removes invalid links, deletes
redundant information, and transmits the processed data
onward. The Filtering Agent analyses the data returned by the
Searching Agent, filters out useless information, and sends
the processed results to the Personalized Re-ranking Agent.
It also compiles search result statistics, user browsing
statistics, retrieval keyword statistics, etc.; the various
statistics are stored in the Knowledge Base.
Algorithm: e-Catalog searching and filtering
Input: keyword set KS = {k1, k2, ..., kn}; domain ontology DO
Output: semantic result set SR
Search(KS, DO)
Begin
  for each ki in KS
    find the DO concept that ki maps to, according to the semantic mapping table;
  for each sub-ontology s in DO
    if Rel_wd(Oi, s) >= m and s is not yet in SR
      take s as a result of the semantic query;
      copy the components of s into SR;
  return SR;
End
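The algorithm above can be sketched in Python. This is a minimal illustration under assumed data structures, not the authors' implementation: the `SEMANTIC_MAP` table, the relevance weights standing in for Rel_wd, and the threshold `m = 0.5` are all hypothetical.

```python
# Minimal sketch of the e-Catalog searching-and-filtering step.
# SEMANTIC_MAP and DOMAIN_ONTOLOGY are illustrative stand-ins for the
# paper's semantic mapping table and domain e-Catalog ontology.

SEMANTIC_MAP = {          # keyword -> domain-ontology concept
    "laptop": "Notebook",
    "camera": "DigitalCamera",
}

# sub-ontology -> {concept: relevance weight, playing the role of Rel_wd}
DOMAIN_ONTOLOGY = {
    "Computers": {"Notebook": 0.9, "Desktop": 0.4},
    "Photography": {"DigitalCamera": 0.8, "Tripod": 0.3},
}

def search(keywords, threshold=0.5):
    """Map keywords to ontology concepts, then collect every sub-ontology
    whose relevance to a mapped concept meets the threshold
    (the Rel_wd(Oi, s) >= m test in the algorithm)."""
    concepts = [SEMANTIC_MAP[k] for k in keywords if k in SEMANTIC_MAP]
    results = []
    for sub, weights in DOMAIN_ONTOLOGY.items():
        score = max((weights.get(c, 0.0) for c in concepts), default=0.0)
        if score >= threshold and sub not in results:
            results.append(sub)   # copy the sub-ontology into SR
    return results

print(search(["laptop"]))   # -> ['Computers']
```

A real system would map each keyword to several candidate concepts and aggregate their weights; the single-concept mapping here only keeps the sketch short.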
(5) Personalized Re-ranking Agent: This agent is the
decision-making center of the multi-agent personalized
information retrieval system, coordinating data
communication and task assignment. The Personalized
Re-ranking Agent uses a re-ranking algorithm to compute a
new score for each result based on the user's interests.
PR(uid)
Begin
  if uid exists
    Re-ranking(CP, uid, interest)
  else
    for each newly entered user
      userProfiledb() -> uid, uinterest, keyword weight
    for each search in the result set
      usersearchdb() -> uid, keyword, interest
      apply AssociationAlg(uid, keyword, interest)
      CP() <- keyword, interest
End
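A minimal Python sketch of the re-ranking idea follows: each result's base score is boosted by the weight the user's profile assigns to matching interest keywords. The profile contents, tag names, and weights are hypothetical.

```python
# Sketch of the Personalized Re-ranking Agent: boost each result's
# base score by the user-profile weight of matching interest tags.
# USER_PROFILE is an illustrative assumption, not the paper's schema.

USER_PROFILE = {                      # uid -> interest keyword weights
    "u1": {"gaming": 0.8, "dslr": 0.1},
}

def re_rank(uid, results):
    """results: list of (item, base_score, tags).
    Returns item names ordered by base score plus interest boost."""
    interests = USER_PROFILE.get(uid, {})
    def score(entry):
        item, base, tags = entry
        return base + sum(interests.get(t, 0.0) for t in tags)
    return [item for item, _, _ in sorted(results, key=score, reverse=True)]

hits = [("Alienware 18", 0.5, ["gaming"]),
        ("Inspiron 15R", 0.6, ["budget"])]
print(re_rank("u1", hits))   # the gaming boost lifts the Alienware first
```

For an unknown user the boost is zero, so the ordering falls back to the base scores, which matches the agent's fallback branch of building a profile before personalizing.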
(6) Knowledge Base: This is an auxiliary component used by
the integration mechanism. It contains semantically enhanced
inter-domain and intra-domain knowledge bases representing
dependencies and relationships among various user, item, and
context features. The stored data facilitate resolving
heterogeneities in the obtained user modeling data: for
example, they allow reconciliation of the ontologies exploited
by various recommender systems, conversion of the terms
used by particular systems into a standard representation, and
even machine translation tools for resolving cross-lingual
dependencies.
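The term reconciliation the Knowledge Base performs can be sketched as a simple normalization table; the mapping entries below (including the cross-lingual one) are invented for illustration only.

```python
# Sketch of the Knowledge Base's term reconciliation: terms used by
# different recommender systems are mapped to one standard
# representation before profiles are merged. All entries are
# illustrative assumptions.
STANDARD_TERMS = {
    "notebook": "laptop",
    "portatile": "laptop",     # cross-lingual entry (Italian)
    "cell phone": "smartphone",
}

def reconcile(term):
    """Return the standard representation of a system-specific term;
    unknown terms are passed through lowercased."""
    return STANDARD_TERMS.get(term.lower(), term.lower())

print(reconcile("Notebook"))   # -> laptop
```

A production system would back this table with the ontology mappings themselves rather than a flat dictionary; the flat table only shows where the reconciliation step sits.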
(7) Semantic ontology: This contains the product knowledge
used to generate queries. It was designed as a hierarchical tree
with a frame-based representation. The ontology must be
context-free to some degree, but it has to point to elements of
the search engines used by the Query Generation module.
4. METHODOLOGY
4.1 Method of Designing the User Catalog Ontology
To satisfy a customer's personalized requirements,
we need to know more about the customer. Customers
sometimes cannot describe their own thoughts; to understand
their latent intent, we need a user e-Catalog ontology. Based
on consumer behavior, we propose a personalized approach to
building a personalized catalog ontology (PCO).
The PCO is formed in three steps:
First, build the user personal ontology (PCO) from the
user's personal information and preferences.
Second, extract user catalog information from the user's
purchase history, search keywords, browsed catalogs,
and feedback.
Third, organize web resources according to the user
catalog ontology information.
The agent-based e-Catalog organizes a group of
keywords expressing the user's interests through the PCO.
When a user issues a semantic query, it is no longer a simple
keyword match: the system considers the user's personal
preferences and information and tightly integrates users and
products, so that it can improve the precision and recall of
semantic queries and better sort the query results.
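The PCO-driven query step can be sketched as query expansion: before the keyword query reaches the search agent, the user's weighted preferences are appended to it. The preference weights and the `min_weight` cutoff below are hypothetical.

```python
# Sketch: a PCO stores user preferences as weighted hasPreference
# links; a keyword query is expanded with the strongly preferred
# terms before it reaches the search agent. Weights are assumptions.
PCO = {
    "hasPreference": {"Dell": 0.9, "Nikon": 0.6},
}

def expand_query(keywords, pco, min_weight=0.5):
    """Append every preference whose weight meets the cutoff,
    skipping terms already present in the query."""
    expanded = list(keywords)
    for pref, weight in pco["hasPreference"].items():
        if weight >= min_weight and pref not in expanded:
            expanded.append(pref)
    return expanded

print(expand_query(["laptop"], PCO))   # -> ['laptop', 'Dell', 'Nikon']
```

This is the sense in which the query is "no longer a simple keyword match": the expanded term list carries the user's preferences into the search.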
Figure 2 Framework of the user Personalized Catalog Ontology
Figure 2 shows the user catalog ontology framework, in which
we describe user interest information, user preferences, and
the product concepts, properties, and individuals that users are
interested in, including product area, brand, and quality
certification. Users are associated with products by the
property hasPreference, and we reserve a weight slot in
hasPreference to indicate how much attention users pay to the
different properties of a product, as shown in Figure 3.
Figure 3 The Relationships of the user Personalized Catalog
Ontology
Generating the Semantic Catalog Ontology (SCO):
Generating the domain e-Catalog ontology is divided into
three steps:
First, extract the core concepts and properties of the
domain e-Catalog ontology according to the UNSPSC
standard, WordNet, and a semantic catalog dictionary.
Second, construct the SCO model.
Third, obtain the standardized DECO through the
e-Catalog ontology pruning subsystem, combining
WordNet and the semantic catalog dictionary.
4.2 Semantic Match Based on Ontology
One critical step of semantic matching is calculating
the semantic match degree between the terms of ontology
concepts. Many methods exist for calculating conceptual
semantic match in e-commerce scenarios [18]. Common
calculation methods and models are: (1) the identifier-based
method [19], which uses word-building to find the semantic
match degree between concepts and primarily reflects the
linguistic similarity of the two concepts; (2) the
synonym-dictionary-based method [20], which organizes all
concepts into a tree hierarchy according to a synonym
dictionary, where there is only one path between any two
nodes and the path length is taken as a measure of the
semantic distance between the two concepts; (3) the
feature-match-based model [21], which calculates the
semantic match of concepts from their collections of
properties; and (4) the semantic-relationship-based model
[22], also known as the semantic-distance-based model,
which calculates the semantic match of concepts from
hierarchy information and is mainly used within a single
ontology. In this paper, we calculate the semantic match of
the UPCO and the DECO using individual-based semantic
match methods.
4.3 Individual-based Semantic Match
To query products matching user preferences, we must find
the products similar to those preferences, i.e., calculate the
instance similarity between SCO individuals and PCO
individuals. We compute the semantic match of individuals by
the property-value-based method. When we calculate the
semantic match degree of two property values, we use a
linguistic measure of the form

sim(C1, C2) = ed(C1, C2) / max(|C1|, |C2|)

where |C1| is the length of string C1, |C2| is the length of
string C2, and ed(C1, C2) is the number of characters
common to C1 and C2. Strings C1 and C2 are the input
parameters, here the property values of two products. The
individual semantic match of two products is obtained by
comparing the semantic match degrees of several groups of
properties.
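The property-value match can be sketched in Python following the definitions above, with ed counting the characters common to the two strings and the individual-level match averaged over shared properties (the averaging rule and the sample product records are assumptions):

```python
# Sketch of the property-value match: ed(C1, C2) counts characters
# common to both strings (with multiplicity), and the match degree is
# ed divided by the longer length. The individual-level match averages
# the per-property degrees over shared properties (an assumed rule).
from collections import Counter

def common_chars(c1, c2):
    """Number of characters shared by c1 and c2, with multiplicity."""
    return sum((Counter(c1) & Counter(c2)).values())

def property_match(c1, c2):
    if not c1 or not c2:
        return 0.0
    return common_chars(c1, c2) / max(len(c1), len(c2))

def individual_match(props1, props2):
    """Average the semantic match over properties present in both."""
    shared = set(props1) & set(props2)
    if not shared:
        return 0.0
    return sum(property_match(props1[p], props2[p]) for p in shared) / len(shared)

a = {"brand": "Nikon", "type": "DSLR"}   # hypothetical SCO individual
b = {"brand": "Nikon", "type": "SLR"}    # hypothetical PCO individual
print(individual_match(a, b))
```

Here "DSLR" and "SLR" share three characters out of a maximum length of four, giving 0.75 for that property, and the identical brand gives 1.0, so the individuals match at 0.875.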
4.4 Basic function of ABPECSS
To implement the agent-based e-service, first,
personalized user catalog ontologies (PCO) are customized
according to the consumers; second, we build the domain
e-Catalog ontologies (SCO); third, we match the two kinds of
ontologies with the match algorithm, through the semantic
reasoning and expanding agent, which generates match result
sets.
The basic theory of distributed semantic query based
on the e-Catalog ontology is as follows: users input keywords,
phrases, sentences, or paragraphs (the user query, Uq) in the
user query interface; the query generator module translates
Uq into an ontology description; the query reasoning and
expanding module reasons over and expands the description
using the semantic match result set, then outputs semantic
queries (Sq) in SPARQL form and finally extracts data from
the distributed e-Catalog databases. The Searching and
Filtering module combines the distributed results and filters
repetitive and invalid results; the Personalized Ranking Agent
rearranges the result sets and recommends them to the user.
4.5 Results Personalization
Personalization helps in getting relevant results for
the user's query. As shown in the query-processing steps,
personalization starts with the query enrichment step, where
we use the user profile to expand the query and to fill in
incomplete query templates. Here we go into more detail on
the results personalization steps and show how we capture the
user's feedback.
Results personalization steps
Personalizing the results means presenting them in
the most effective way possible, through several steps. The
first step is answering the user's query in the same language it
was asked in, regardless of the language of the ontology and
of the knowledge base holding the annotated data. The second
step is answering the user's query with syntax appropriate to
the question type: a confirmation question differs from a
subjective question, since the user expects a "yes" or "no"
answer in the first case and a list of items in the second. The
answer is thus personalized to express the understanding of
the query and to be familiar to the user. The third step is
ranking the results based on the user's preferences and
interests. Finally, the system filters out non-relevant
information based on the user profile.
4.6 User feedback
Continuous feedback collection is required to
sharpen the user experience. Feedback is not only explicit but
also implicit, as it can be collected through different measures.
Many measures can reflect implicit feedback, such as the time
spent browsing the results, clicks on the data sources, clicks
on result facets related to the search results, etc. All
interactions and feedback are recorded in the usage log, which
is analyzed after each query to determine how effective the
results are and how future recommendations can be improved.
This is reflected in the user profile ontology.
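The implicit-feedback loop described above can be sketched as a usage log folded back into interest weights. The field names, the dwell-time cap, and the boost coefficients are all assumptions made for illustration:

```python
# Sketch of implicit-feedback capture: each interaction is appended to
# a usage log, and after a query the log is folded back into the
# profile's interest weights. Field names and the 0.1/0.01 boost
# coefficients are assumptions, not the paper's parameters.
import time

USAGE_LOG = []

def log_interaction(uid, keyword, dwell_seconds, clicked):
    USAGE_LOG.append({"uid": uid, "keyword": keyword,
                      "dwell": dwell_seconds, "clicked": clicked,
                      "ts": time.time()})

def update_profile(profile, uid):
    """Clicks and long dwell times raise a keyword's interest weight;
    dwell time is capped at 60 s so one long session cannot dominate."""
    for entry in USAGE_LOG:
        if entry["uid"] != uid:
            continue
        boost = 0.1 * entry["clicked"] + 0.01 * min(entry["dwell"], 60)
        profile[entry["keyword"]] = profile.get(entry["keyword"], 0.0) + boost
    return profile

log_interaction("u1", "dslr", 45, clicked=True)
print(update_profile({}, "u1"))
```

The cap and coefficients are exactly the kind of tuning knobs the usage-log analysis after each query would adjust.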
5. IMPLEMENTATION AND
EXPERIMENTATION
This section discusses experiments carried out to
evaluate the performance of the proposed system from a
quantitative point of view, running several experiments to
evaluate the precision of the results. The basic idea of the
experiment is to compare the search results of a
keyword-based search engine with those of the proposed
system on the same category and the same keywords.
The proposed system ABPECSS is implemented in
C#.NET as a Web-based system using Visual Studio 2008,
.NET Framework 3.5, and SQL Server 2005. The system was
evaluated by having 20 users use the system to create personal
ontologies. Each user was given a query interface to
input his/her query parameters and to view each of their
concepts together with every SCO concept that had been
matched to the personalized catalog concept. The user could
also decide which concepts or properties were not needed
when the query was reasoned over and expanded. In the
experiment, we take different electronic items as examples.
Users were asked to compare the semantic query results with
those from keyword-based search engines and decide whether
ABPECSS was better. For this purpose, we manually created
the domain e-Catalog ontology (SCO) and the user
personalized catalog ontology (PCO) and calculated the
semantic match degree in the system.
Table 1 Experimental results statistics for query manipulation

Concept                              Total found concepts  Concepts found correct  Correct concepts (manual)  Precision  Recall
Dell Inspiron 15R i3531-1200BK                 89                    71                      74               91.36%     80.43%
Dell Alienware 18 Gaming Laptop                89                    71                      93               71.00%      5.54%
Canon EOS 6D Black SLR Digital                 50                    16                      56               78.00%     84.21%
Nikon D810 DSLR Camera (Body Only)             90                    53                      78               90.00%     81.54%
Nikon 1 AW1 14.2MP Waterproof                  50                    10                      13               89.00%     86.92%
Bargains Depot USB Cable Lead Cord             45                    19                      39               93.00%      8.95%
We evaluated the system with two measures, precision and
relevance, shown in Figure 4. Precision measures the number
of relevant pages seen versus the total number of pages seen.
Relevance measures the number of relevant pages seen plus
the number of irrelevant pages not seen, versus the total
number queried.
Figure 4 Precision vs. Recall graph for the proposed system
vs. Google
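The two measures defined above reduce to simple ratios; the numeric example below is invented to show the arithmetic, not taken from Table 1:

```python
# The two evaluation measures as defined in the text:
# precision = relevant pages seen / total pages seen;
# "relevance" = (relevant seen + irrelevant not seen) / total queried,
# i.e. accuracy over the whole queried set.
def precision(relevant_seen, total_seen):
    return relevant_seen / total_seen

def relevance(relevant_seen, irrelevant_not_seen, total_queried):
    return (relevant_seen + irrelevant_not_seen) / total_queried

# Hypothetical counts: 10 pages seen, 8 relevant; 20 pages queried,
# of which 2 irrelevant ones were correctly never shown.
print(precision(8, 10))        # -> 0.8
print(relevance(8, 2, 20))     # -> 0.5
```

Note that the paper's "relevance" credits the system both for relevant pages it shows and for irrelevant pages it hides, which is why it uses the total queried set as the denominator.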
The next experiment aims at determining the importance of
personalization using the dynamic user model generated while
the system is in use. The user model is used to re-rank the
retrieved documents to match the user's interests.
Personalization time: the time to retrieve information depends
on the type of search engine, the size of the data set, the
relevancy between query and document, the user history, and
the re-ranking algorithm used.
Figure 5 Performance efficiency of the new system
Figure 5 shows the performance efficiency of the system
when it is used to retrieve results. It is observed that 80% of
the 30 users in our data set found improved precision with the
proposed approach compared to standard search engine
(Google) results, while 34% of users achieved equal precision
with both approaches. Users who posed queries in an
unpopular context rather than a well-liked context obtained
better performance. In addition, when the system can extract
the exact context of the user's need, precision and recall are
better than with other search engines.
6. CONCLUSION AND FUTURE WORK
In this paper, we propose a framework for semantic query
manipulation and personalization of electronic catalog service
systems. We present the user profile ontology and its relation
to other domain ontologies. We then explain the semantic
query processing steps and present the result personalization
steps. A complete scenario is illustrated to visualize the
framework, followed by experimental results. The empirical
evaluation shows promising improvements in the relevancy of
the retrieved results and in user satisfaction. The framework
can be used in other domains by editing the domain ontology
through the export option of the new system and building the
domain concept weight table. In future work, we will focus
on: (1) automatically learning e-Catalog ontological concepts,
properties, and relationships from the web to build the PCO;
(2) adding business properties besides general properties to
the SCO; (3) constructing the Reasoning and Expanding
Module of ABPECSS, to set rules on the SCO.
7. REFERENCES
[1]. J. de Bruijn, D. Fensel, and M. Kerrigan, Modeling
Semantic Web Services, Heidelberg: Springer-Verlag, 2008,
pp. 30-52.
[2]. E. Casasola, ProFusion personal assistant: An agent for
personalized Information filtering on the WWW, M.S. thesis,
The University of Kansas, Kansas, KCK, U.S.A., 1998.
[3]. I. Chen, J. Ho, and C. Yang, On hierarchical web catalog
integration with conceptual relationships in
thesaurus, in Proceedings of the 29th Annual International
ACM SIGIR Conference on Research and
Development in Information Retrieval, Washington, 2006, pp.
635-636.
[4]. O. Corcho, A. Gómez-Pérez, Solving integration
problems of e-Commerce standards and initiatives through
ontological mappings, in Proceedings of the 17th International
Joint Conference on Artificial Intelligence,
Seattle, 2001.
[5]. R. Cyganiak, A relational algebra for SPARQL. Digital
Media Systems Laboratory HP Laboratories Bristol.
HPL-2005-170, September 28, 2005.
[6]. Z. Cui, D. Jones, and P. O'Brien, Semantic B2B
Integration: Issues in Ontology-based Approaches, SIGMOD
Record, vol. 31, no. 11, 2002.
[7]. S. Gauch, J. Chaffee, and A. Pretschner, Ontology-based
personalized search and browsing, Web Intelligence and
Agent Systems, vol. 1, no. 3-4, pp. 219-234, 2003.
[8]. L. Kwon and C. O. Kim, Recommendation of e-commerce
sites by matching category-based buyer query and
product e-Catalogs, Computers in Industry, vol. 59, no. 4, pp.
380-394, 2008.
[9]. J. Lee and T. Lee, Massive catalog index based search for
e-Catalog matching, in Proceedings of the 9th IEEE
International Conference on e-Commerce Technology, Tokyo,
IEEE Computer Society, 2007, pp. 341-348.
[10]. H. Lee, J. Shim, S. Lee, and S. Lee, Modeling
considerations for product ontology, in Lecture Notes in
Computer Science, Advances in Conceptual Modeling:
Theory and Practice, vol. 4231, Tucson, AZ: Springer, 2006,
pp. 291-300.
[11]. J. Leukel, V. Schmitz, and F. Dorloff, A modeling
approach for product classification systems, in Proceedings of
13th International Conference on the Database and Expert
Systems Applications. Aix-en-Provence, 2002, pp. 868-874.
[12]. H. Li, XML and industrial standards for electronic
commerce, Knowledge and Information Systems, vol. 2, no. 4,
pp. 487-497, 2000.
[13]. S. Liao, C. Chen, C. Hsieh, and S. Hsiao, Mining
information users’ knowledge for one-to-one marketing on
information appliance, Expert Systems with Applications, vol.
36, no. 3, pp. 4967-4979, 2009.
[14]. L. Lim and M. Wang, Managing e-Commerce catalogs
in a DBMS with native XML support, in Proceedings of the
IEEE International Conference on e-Business Engineering,
Beijing, 2005, pp. 564-571.
[15]. C. Lin and C. Hong, Using customer knowledge in
designing electronic catalog, Expert Systems with
Applications, vol. 34, no. 1, pp. 119-127, 2008.
[16]. D. Liu, Y. Lin, and C. Chen, Deployment of
personalized e-Catalogues: An agent-based framework
integrated with XML metadata and user models, Journal of
Network and Computer Applications, vol. 24, no. 3, pp. 201-
228, 2001.
[17]. K. Masanobu, D. Kobayashi, D. Xiaoyong, and I.
Naohiro, Evaluating word similarity in a semantic network,
Informatics, vol. 24, no. 1, pp. 192-202, 2000.
[18]. H. Paik and B. Benatallah, Personalised organisation of
dynamic e-Catalogs, in Web Services, e-Business, and the
Semantic Web (C. Bussler, R. Hull, S. McIlraith, M. E.
Orlowska, B. Pernici and J. Yang, Eds.), Heidelberg, Berlin:
Springer-Verlag, 2002, pp. 139-152.
[19]. E. Prud'hommeaux and A. Seaborne, SPARQL Query
Language for RDF, W3C Working Draft, July 2005. [Online].
Available: http://www.w3.org/TR/2005/WD-rdf-sparql-query-20050721/.
[20]. R. Rada, H. Mili, E. Bicknell, and M. Blettner,
Development and application of a metric on semantic nets,
IEEE Transaction on System, Man and Cybernetics, vol. 19,
no. 1, pp. 17-30, 1989.
[21]. H. Sun-Young and K. Eun-Gyung, A study on the
improvement of query processing performance of OWL
data based on Jena, in Proceedings of the International
Conference on Convergence and Hybrid
Information Technology, Daejeon, 2008, pp. 678-681.
[22]. A. Tversky, Feature of similarity, Psychological
Review, vol. 84, no. 4, pp. 327-352, 1977.
[23]. M. Thangaraj and M. Chamundeeswari, Agent Based
Personalized Semantic Web Information Retrieval System,
(IJACSA) International Journal of Advanced Computer
Science and Applications, vol. 5, no. 8, 2014.