Web server logs contain abundant information about the users accessing a site. Analyzing users' current interests based on their navigational behavior may help organizations guide users in their browsing activity and deliver relevant information in a shorter span of time [1]. Web usage mining is used to discover interesting user navigation patterns and can be applied to many real-world problems, such as improving Web sites/pages, making additional topic or product recommendations, and studying user/customer behavior [23]. Web usage mining, in conjunction with standard approaches to personalization, helps to address some of the shortcomings of these techniques, including reliance on subjective user ratings, lack of scalability, poor performance, and sparse data [2,3,4,5,6]. However, discovering patterns from usage data is not by itself sufficient for performing personalization tasks. It is also necessary to derive good-quality aggregate usage profiles, which in turn help to devise efficient recommendations for web personalization [11, 12, 13]. Unsupervised and competitive learning algorithms have also helped to efficiently cluster user access patterns mined from web logs [19, 20, 24].
This paper presents and experimentally evaluates a technique for finely tuning user clusters, grouped by similar web access patterns in their usage profiles, by approximating the clusters through a least-squares approach. Each cluster contains users with similar browsing patterns. These clusters are useful in web personalization, enabling a site to communicate better with its users. Experimental results indicate that using the generated aggregate usage profiles, with clusters approximated through the least-squares approach, effectively personalizes a site at early stages of user visits, without deeper knowledge about the users.
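As a rough illustration of the idea (the helper names and data layout here are ours, not the paper's), the least-squares aggregate profile of a cluster of page-visit vectors has a closed form, the component-wise mean, and clusters can be tuned by alternating profile fitting with nearest-profile reassignment:

```python
# Hypothetical sketch: refine user clusters by least-squares profile fitting.
# Each user is a vector of page-visit frequencies; the aggregate usage
# profile of a cluster is the vector p minimizing sum ||u - p||^2 over its
# members, whose least-squares solution is the component-wise mean.

def ls_profile(users):
    """Least-squares aggregate profile = component-wise mean vector."""
    n, dim = len(users), len(users[0])
    return [sum(u[i] for u in users) / n for i in range(dim)]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def refine(users, clusters, rounds=5):
    """Alternate profile fitting and nearest-profile reassignment."""
    for _ in range(rounds):
        profiles = [ls_profile([users[i] for i in c]) for c in clusters]
        new = [[] for _ in profiles]
        for idx, u in enumerate(users):
            best = min(range(len(profiles)),
                       key=lambda k: sq_dist(u, profiles[k]))
            new[best].append(idx)
        clusters = [c for c in new if c]  # drop emptied clusters
    return clusters, [ls_profile([users[i] for i in c]) for c in clusters]
```

Starting from a deliberately bad split, two users with similar visit vectors end up in the same cluster after a few rounds.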
This document summarizes an article from the International Journal of Engineering Research and Applications titled "Explicit User Profiles for Semantic Web Search Using XML" by C. Srinvas.
The article discusses how to build explicit user profiles using XML to personalize semantic web searches. It covers collecting user data, inferring user profiles, and representing profiles. The key approach is to define an ontology of topics and annotate concepts with interest scores that are continuously updated based on user behavior using spreading activation methods. This builds ontological user profiles that represent user interests to improve search accuracy.
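A minimal sketch of the spreading-activation idea described above, assuming a toy child-to-parent ontology and a fixed decay factor (both our own illustrative choices, not taken from the article):

```python
# Illustrative sketch: spreading activation over a small topic ontology.
# When the user views a topic, its interest score rises and a decayed
# share of the activation propagates up to its ancestors.

ontology = {                  # child -> parent edges (assumed example)
    "python": "programming",
    "programming": "computing",
    "computing": None,
}
scores = {t: 0.0 for t in ontology}

def activate(topic, energy=1.0, decay=0.5):
    """Add energy to a topic and spread a decayed amount up the ontology."""
    while topic is not None and energy > 1e-6:
        scores[topic] += energy
        energy *= decay
        topic = ontology[topic]

activate("python")            # user reads a Python page
```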
This document provides a survey of semantic web personalization techniques. It begins by defining semantic web personalization and its advantages over traditional web personalization. It then classifies semantic web personalization approaches into several categories, including ontology-based, context-based, and hybrid recommendation systems. For each category, it provides examples of approaches and compares their methods and steps for personalization. The goal of the survey is to analyze and compare different techniques used for personalization in the semantic web.
SUPPORTING PRIVACY PROTECTION IN PERSONALIZED WEB SEARCH (nikhil421080)
Personalized web search (PWS) has demonstrated its effectiveness in improving the quality of various search services on the Internet. However, evidence shows that users’ reluctance to disclose their private information during search has become a major barrier for the wide proliferation of PWS.
We study privacy protection in PWS applications that model user preferences as hierarchical user profiles. We propose a PWS framework called UPS that can adaptively generalize profiles by queries while respecting user-specified privacy requirements. Our runtime generalization aims at striking a balance between two predictive metrics that evaluate the utility of personalization and the privacy risk of exposing the generalized profile.
We present two greedy algorithms, namely GreedyDP and GreedyIL, for runtime generalization. We also provide an online prediction mechanism for deciding whether personalizing a query is beneficial. Extensive experiments demonstrate the effectiveness of our framework. The experimental results also reveal that GreedyIL significantly outperforms GreedyDP in terms of efficiency.
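The paper's GreedyDP/GreedyIL metrics are not reproduced here; as a hedged sketch of runtime generalization in the same spirit, one can greedily drop the topic with the worst utility-per-risk trade-off until the profile's total privacy risk fits a user-specified budget:

```python
# Hedged sketch in the spirit of UPS (the real GreedyDP/GreedyIL metrics
# are more elaborate). The profile is a set of leaf topics with per-topic
# privacy risk and personalization utility scores; we greedily remove the
# topic with the lowest utility-per-risk ratio until the risk budget holds.

def generalize(profile, risk, utility, budget):
    """profile: set of topics; risk/utility: per-topic scores."""
    prof = set(profile)
    while sum(risk[t] for t in prof) > budget and len(prof) > 1:
        victim = min(prof, key=lambda t: utility[t] / risk[t])
        prof.discard(victim)
    return prof
```

A highly sensitive, low-utility topic (e.g. a health condition) is pruned first, while low-risk interests survive.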
Web search engines (e.g. Google, Yahoo, Microsoft Live Search) are widely used to find certain data among a huge amount of information in a minimal amount of time. However, these useful tools also pose a privacy threat to users: web search engines profile their users on the basis of the past searches they submit. To address this privacy threat, current solutions propose new mechanisms that introduce a high cost in terms of computation and communication. In the proposed system, we implement the String Similarity Match (SSM) Algorithm to improve the quality of search results. Personalized search is a promising way to improve the accuracy of web searches. However, effective personalized search requires collecting and aggregating user information, which often raises serious privacy concerns for many users. Indeed, these concerns have become one of the main barriers to deploying personalized search applications, and how to do privacy-preserving personalization is a great challenge. We also try to resist adversaries with broader background knowledge, such as richer relationships among topics: the user-profile results are generalized using background knowledge stored in the search history. Through this mechanism, the user's search results can be hidden and privacy can be achieved.
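The abstract names the String Similarity Match (SSM) algorithm without giving its details; as a stand-in, the sketch below scores string similarity with Jaccard overlap of character bigrams (our choice, not necessarily what SSM does):

```python
# Illustrative string-similarity measure (assumed, not the published SSM
# algorithm): Jaccard overlap of character-bigram sets, scored in [0, 1].

def bigrams(s):
    s = s.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def similarity(a, b):
    """Jaccard similarity of character-bigram sets."""
    x, y = bigrams(a), bigrams(b)
    return len(x & y) / len(x | y) if x | y else 1.0
```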
IRJET- A Novel Technique for Inferring User Search using Feedback Sessions (IRJET Journal)
This document proposes a novel technique to infer user search goals using feedback sessions. It aims to address limitations in existing approaches like noisy search results, small numbers of clicked URLs, and lack of consideration of user feedback. The proposed approach generates feedback sessions from user click logs, pre-processes the data, extracts keywords from restructured results, re-ranks the results based on keywords and user history, and categorizes the re-ranked results using predefined categories. The technique is evaluated using Average Precision, which compares it to other clustering and classification algorithms. The goal is to improve information retrieval by better representing user search interests and needs.
A New Algorithm for Inferring User Search Goals with Feedback Sessions (IJERA Editor)
Different users may have different search goals when they submit a query to a search engine. The inference and analysis of user search goals can be very useful in improving search engine relevance and user experience. This work proposes a novel approach to infer user search goals by analyzing search engine query logs. Once the user enters a query, the resultant URLs are filtered and pseudo-documents are generated; the server then applies a clustering mechanism to the URLs so that they are listed under different categories. First, feedback sessions are constructed from user click-through logs and can efficiently reflect the information needs of users. Second, a novel approach generates pseudo-documents to better represent the feedback sessions for clustering. Finally, a new criterion, "Classified Average Precision (CAP)," is proposed to evaluate the performance of inferring user search goals. Experimental results using user click-through logs from a commercial search engine validate the effectiveness of the proposed methods. The distributions of user search goals can also be useful in applications such as re-ranking web search results that contain different user search goals.
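The pseudo-document and clustering steps might be sketched as follows, under our own simplifying assumptions: the titles clicked in a feedback session are merged into one pseudo-document, vectorized by term counts, and grouped greedily by cosine similarity:

```python
# Rough sketch (our simplification, not the published pipeline): each
# feedback session's clicked titles form one pseudo-document; sessions
# whose term vectors are cosine-similar are grouped into one search goal.
from collections import Counter
from math import sqrt

def vectorize(titles):
    return Counter(w for t in titles for w in t.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group(sessions, threshold=0.3):
    """Greedy clustering: join a session to the first similar cluster."""
    clusters = []  # list of (representative vector, member indices)
    for i, s in enumerate(sessions):
        v = vectorize(s)
        for rep, members in clusters:
            if cosine(v, rep) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((v, [i]))
    return [m for _, m in clusters]
```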
Personalized web search using browsing history and domain knowledge (Rishikesh Pathak)
This document proposes a framework for improving personalized web search by constructing an enhanced user profile using both the user's browsing history and domain knowledge. The enhanced user profile is used to better suggest relevant web pages to the user based on their search query. An experiment found that suggestions made using the enhanced user profile performed better than using a standard user profile alone. The framework involves modeling the user, re-ranking search results, and displaying personalized results based on the enhanced user profile.
IRJET- Personalized Travel Recommendation based on Facebook Data (IRJET Journal)
The document summarizes a proposed system for personalized travel recommendation based on Facebook data. It begins by discussing existing challenges with cold start recommendations and existing recommendation techniques. It then proposes a new framework called Implicit-feedback based Content-aware Collaborative Filtering (ICCF) that incorporates semantic content from social networks to address cold start recommendations without negative sampling. Finally, it evaluates ICCF on a large location-based social network dataset and finds it outperforms existing baselines particularly for cold start scenarios by leveraging user profile information.
Human Interactions in Mixed Service-Oriented Systems (Daniel Schall)
An overview of human provided services and mixed service oriented systems is given. The core theme of this research is to support dynamic interactions between humans and services based on evolving interaction preferences.
The application of such a system can be found in crowdsourcing scenarios.
This document summarizes research on ontology-based web personalization. It discusses how web personalization aims to personalize content based on a user's navigational behavior. Ontology-based approaches use formal domain knowledge to build more accurate user profiles than traditional web mining methods alone. The document surveys recent works applying ontologies to areas like user modeling, recommendation systems, and information retrieval. It also outlines challenges in developing personalized systems, such as building accurate user profiles and addressing privacy and scalability issues. Future work opportunities include better integrating ontology and web mining techniques to improve personalization over time as a user's interests evolve.
A survey on ontology based web personalization (eSAT Journals)
Abstract: Over the last decade the data on the World Wide Web has been growing in an exponential manner. According to Google, the data is accelerating at a speed of a billion pages per day [24]. The Internet has around 2 million users accessing the World Wide Web for various information [25]. These numbers certainly raise a severe concern over information-overload challenges for users. Many researchers have been working to overcome this challenge with web personalization, and many are looking at ontology-based web personalization as an answer to the information overload, as each individual is unique. In this paper we present an overview of ontology-based web personalization, its challenges, and a survey of the work. The paper also points to future work in web personalization. Index Terms: Web Personalization, Ontology, User Modeling, Web Usage Mining.
A NOVEL WEB IMAGE RE-RANKING APPROACH BASED ON QUERY SPECIFIC SEMANTIC SIGNAT... (Journal For Research)
Image re-ranking is an effective way to improve the results of web-based image search. Given a query keyword, a pool of images is initially retrieved based primarily on textual data; the images are then re-ranked based on their visual similarities with the query image corresponding to the user input. A major challenge is that the similarities of visual features do not correlate well with images' semantic meanings, which capture users' search intention. Recently, researchers have proposed matching pictures in a semantic space that uses attributes or reference categories closely associated with the semantic meanings of images as its basis. However, learning a universal visual semantic space to characterize the extremely diverse images on the internet is troublesome and inefficient. In this thesis, we propose a distinctive image re-ranking framework that automatically learns different semantic spaces for different query keywords. The visual features of images are projected into their corresponding semantic spaces to induce semantic signatures. At the online stage, images are re-ranked by comparing their semantic signatures obtained from the semantic space specified by the query keyword. The proposed query-specific semantic signatures considerably improve both the accuracy and efficiency of image re-ranking.
IJRET : International Journal of Research in Engineering and Technology Improv... (eSAT Publishing House)
This document summarizes techniques for improving web search results through web personalization. It discusses how web usage mining can be used to optimize information by monitoring user interaction histories and profiles. The proposed system aims to reduce manual user feedback by implicitly gathering preferences from behaviors like click-through rates and dwell times. It introduces an algorithm that calculates new ranking values for websites based on keyword matches and time spent on pages, and swaps ranks accordingly. This system provides personalized search results by continuously updating rankings based on implicit user interactions.
User search goal inference and feedback session using fast generalized – fuzz... (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Comparative Analysis of Collaborative Filtering Technique (IOSR Journals)
The document compares different collaborative filtering techniques for making recommendations. It finds that hybrid collaborative filtering, which combines memory-based, model-based and other techniques, generally performs better than memory-based or model-based alone in terms of scalability, accuracy and memory consumption. It also proposes adding a normalization step to traditional collaborative filtering to improve accuracy by addressing the uneven distribution of ratings across items.
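The normalization step mentioned above can be illustrated by mean-centering each user's ratings before combining neighbor opinions; uniform neighbor weights are our simplification, not the paper's method:

```python
# Sketch of mean-centered (normalized) collaborative filtering: each
# neighbor contributes the offset of their rating from their own mean,
# compensating for users who rate uniformly high or low.

def mean(xs):
    return sum(xs) / len(xs)

def predict(ratings, user, item, neighbors):
    """Mean-centered neighborhood prediction with uniform weights."""
    base = mean(list(ratings[user].values()))
    offsets = [ratings[n][item] - mean(list(ratings[n].values()))
               for n in neighbors if item in ratings[n]]
    return base + (mean(offsets) if offsets else 0.0)
```

A generous rater and a harsh rater who both like an item then push the prediction up by the same amount.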
Following the user’s interests in mobile context aware recommender systems (Bouneffouf Djallel)
The wide development of mobile applications provides a considerable amount of data of all types (images, texts, sounds, videos, etc.). In this sense, Mobile Context-aware Recommender Systems (MCRS) suggest the user suitable information depending on her/his situation and interests. Two key questions have to be considered 1) how to recommend the user information that follows his/her interests evolution? 2) how to model the user’s situation and its related interests? To the best of our knowledge, no existing work proposing a MCRS tries to answer both questions as we do. This paper describes an ongoing work on the implementation of a MCRS based on the hybrid-ε-greedy algorithm we propose, which combines the standard ε-greedy algorithm and both content-based filtering and case-based reasoning techniques.
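The core ε-greedy loop that the hybrid builds on looks like this; the content-based filtering and case-based reasoning parts of the paper's hybrid are omitted:

```python
# Minimal epsilon-greedy sketch: with probability eps explore a random
# item, otherwise exploit the item with the best current score.
import random

def eps_greedy(scores, eps=0.1, rng=random):
    items = list(scores)
    if rng.random() < eps:
        return rng.choice(items)       # explore
    return max(items, key=scores.get)  # exploit
```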
Recommendation generation by integrating sequential pattern mining and semantics (eSAT Journals)
Abstract: As Internet usage keeps increasing, the number of web sites, and hence the number of web pages, also keeps increasing. A recommendation system can be used to provide personalized web service by suggesting the pages that are likely to be accessed in the future. Most recommendation systems are based on association rule mining or on keywords. With association rule mining the prediction rate is lower, as it does not take into account the order in which users access web pages, while keyword-based recommendation systems provide less relevant results. This paper proposes a recommendation system that uses the advantages of sequential pattern mining and semantics over association-rule-based and keyword-based systems respectively. Keywords: Sequential Pattern Mining, Taxonomy, Apriori-All, CS-Mine, Semantic, Clustering
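To see why access order matters, a toy sequential-support measure (not the paper's Apriori-All/CS-Mine pipeline) counts only the sessions where one page precedes another, which plain association rules would ignore:

```python
# Toy illustration of sequential support: the fraction of sessions in
# which page a is visited before page b. Ordered pairs retain the access
# order that unordered association rules discard.

def sequential_support(sessions, a, b):
    hits = 0
    for s in sessions:
        if a in s and b in s and s.index(a) < s.index(b):
            hits += 1
    return hits / len(sessions)
```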
This document summarizes a research paper that proposes a bi-objective recommendation framework (BORF) for venue recommendation on mobile social networks. The BORF uses multiple objective optimization techniques to generate personalized recommendations. It addresses issues like cold start and data sparsity by preprocessing data using a Hub-Average inference model. Both scalar optimization using a Weighted Sum Approach and vector optimization using a NSGA-II evolutionary algorithm are implemented to provide optimal venue recommendations to users. Experimental results on a large real-world dataset confirm the accuracy of the proposed recommendation system.
Advance Clustering Technique Based on Markov Chain for Predicting Next User M... (idescitation)
According to surveys, India is one of the leading countries in the world for technical and management education, with the number of students increasing day by day at a growth rate of 45% per annum. Advancement in technology has a special effect on the education system and helps in upgrading higher education; some universities and colleges are already using such technologies, and the weblog is one of them. The main aim of this paper is to represent web logs using a clustering technique for predicting the next user movement and for user behavior analysis. The paper centers on a web log clustering technique based on Markov chain results, presenting an approach to web clustering (clustering web site users) and predicting their behavior for the next visit. Methodology: For generating effective results, web usage data from approximately 14 engineering colleges is used, and an advanced clustering approach is presented after optimizing other clustering approaches. Results: User behavior is predicted with the help of the advanced clustering approach based on FPCM and k-means, and the proposed algorithm is used to mine and predict users' preferred paths. Existing approaches to predicting user behavior are not sufficient because of their sensitivity to noise; with the help of ACM, noise is reduced, providing more accurate results for predicting user behavior. Implementation: The algorithm was implemented in MATLAB, DTRG, and Java. The experimental results have validated the method's effectiveness in predicting user behavior in comparison with some previous studies.
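A first-order Markov model over page visits, the starting point of the approach above, can be sketched as follows; the FPCM/k-means refinement described in the abstract is not shown:

```python
# Illustrative first-order Markov step: build transition counts from
# logged sessions, then predict the next page as the most frequently
# observed successor of the current page.
from collections import defaultdict

def build_transitions(sessions):
    counts = defaultdict(lambda: defaultdict(int))
    for s in sessions:
        for cur, nxt in zip(s, s[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(counts, page):
    """Most frequently observed successor of `page`, or None."""
    if page not in counts or not counts[page]:
        return None
    return max(counts[page], key=counts[page].get)
```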
Similarity based Dynamic Web Data Extraction and Integration System from Sear... (IDES Editor)
There is an explosive growth of information on the World Wide Web, posing a challenge to Web users who must extract essential knowledge from it. Search engines help us narrow down the search in the form of Search Engine Result Pages (SERPs), and web content mining is one of the techniques that helps users extract useful information from these SERPs. In this paper, we propose two similarity-based mechanisms: WDES, to extract desired SERPs and store them in a local repository for offline browsing, and WDICS, to integrate the requested contents and enable the user to perform the intended analysis and extract the desired information. Our experimental results show that WDES and WDICS outperform DEPTA [1] in terms of Precision and Recall.
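For reference, Precision and Recall over extracted versus expected records are computed as:

```python
# Precision = fraction of extracted records that are relevant;
# Recall    = fraction of relevant records that were extracted.

def precision_recall(extracted, relevant):
    extracted, relevant = set(extracted), set(relevant)
    tp = len(extracted & relevant)
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall
```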
The document describes a proposed fuzzy logic-based model for classifying web users in a personalized search system. The model collects user browsing data using a customized browser. It then fuzzifies the data and generates fuzzy rules using decision trees. These rules are used to label search pages and group users according to their search interests. The model is evaluated against a Bayesian classifier and shown to perform better. The goal is to handle the dynamic and fluctuating nature of user behavior and interests that exist in a personalized web search environment.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document summarizes a research paper that proposes a framework for personalized web search using query log and clickthrough data. The framework implements a re-ranking approach that combines user search context and browsing behavior to generate personalized search results with high relevance. The framework consists of five components: a request handler, query processor, result handler, event handler, and response handler. The result handler applies a re-ranking approach using query log and clickthrough data to personalize search results before returning them to the user. An evaluation found the framework and re-ranking approach to be effective for personalized web search and information retrieval.
This document discusses a navigation cost modeling technique based on ontology for effective navigation of query results from large datasets. It presents an approach that uses concept hierarchies built from annotated data to categorize results and reduce the navigation cost for users. An initial navigation tree is constructed from the dataset ontology and refined by removing empty nodes. The tree is then dynamically expanded at certain points to minimize the user's navigation cost and quickly reach the desired results.
Communication by Whispers Paradigm for Short Range Communication in Cognitive... (IDES Editor)
With the ever-increasing demand for efficiency and increased throughput over the wireless domain, we have now reached a point where a method for efficient utilization of the electromagnetic spectrum is in demand. Cognitive radio provides a way to meet this demand: it uses devices that autonomously adjust their communication parameters to adapt to the external environment. However, the most critical aspect of this technology is the parameter relating to spectrum sensing. In order for cognitive radio devices to properly configure themselves and identify the presence of a primary carrier in a communication mode, spectrum sensors and efficient sensing algorithms are required. Many algorithms have been developed in this field; however, the available spectrum sensing algorithms are prohibitively expensive to implement in wireless devices that cater only to short-distance communication. In this paper, we propose a parsimonious spectrum sensing algorithm that deals with sensing and couples it with short-distance wireless devices over WiFi networks operating in the 2.4 GHz band. The efficiency parameters are also shown using demonstrations.
Extraction of Circle of Willis from 2D Magnetic Resonance Angiograms (IDES Editor)
Magnetic resonance angiogram is a way to study
cerebrovascular structures. It helps to obtain information
regarding blood flow in a non-invasive fashion. Magnetic
resonance angiograms are examined basically for detection
of vascular pathologies, neurosurgery planning, and vascular
landmark detection. In certain cases it becomes complicated
for the doctors to assess the cerebral vessels or Circle of Willis
from the two-dimensional (2D) brain magnetic resonance
angiograms. In this paper an attempt has been made to extract
the Circle of Willis from 2D magnetic resonance angiograms,
so as to overcome such difficulties. The proposed method preprocesses
the magnetic resonance angiograms and
subsequently extracts the Circle of Willis. The extraction has
been done by color-based segmentation using K-means
clustering algorithm. As the developed method successfully
extracts the vasculature from the brain magnetic resonance
angiograms, it will help doctors with diagnosis
and serve as a step in the prevention of stroke. The algorithms
are developed on MATLAB 7.6.0 (R2008a) programming
platform.
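The colour-based segmentation step named above can be sketched with a plain k-means over pixel colour vectors. The toy data and the farthest-point seeding heuristic below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def kmeans_colour(pixels, k, iters=20, seed=0):
    """Plain k-means over an (N, 3) array of colour vectors.
    Centroids are seeded with a farthest-point heuristic so the
    bright vessel colours get their own cluster."""
    rng = np.random.default_rng(seed)
    centroids = [pixels[rng.integers(len(pixels))]]
    for _ in range(k - 1):
        # next seed: the pixel farthest from all chosen centroids
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centroids], axis=0)
        centroids.append(pixels[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        # assign each pixel to its nearest centroid, then recompute means
        labels = np.linalg.norm(pixels[:, None] - centroids[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean(axis=0)
    return labels, centroids

# toy "angiogram": many dark background pixels, a few bright vessel pixels
rng = np.random.default_rng(1)
background = rng.uniform(0.0, 0.2, (500, 3))
vessel = rng.uniform(0.8, 1.0, (50, 3))
pixels = np.vstack([background, vessel])
labels, centroids = kmeans_colour(pixels, k=2)
vessel_cluster = centroids.sum(axis=1).argmax()   # the brighter centroid
```

With k=2 the brighter cluster corresponds to the vessel pixels, mirroring how the Circle of Willis is separated from the darker background in the angiograms.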
A Traffic-Aware Key Management Architecture for Reducing Energy Consumption i...IDES Editor
In Wireless Sensor Networks (WSNs), most
of the existing key management schemes establish shared
keys for all pairs of neighbor sensor nodes without
considering the communication between these nodes.
As the number of sensor nodes in a WSN increases,
each node must be loaded with a bulky set of keys.
In practice, a sensor node may communicate with only a
small set of neighbor sensor nodes. Based on this fact, in
this paper, an energy efficient Traffic-Aware Key
Management (TKM) scheme is developed for WSNs,
which only establishes shared keys for active sensors
which participate in direct communication. The proposed
scheme offers an efficient Re-keying mechanism to
broadcast keys without the need for retransmission or
acknowledgements. Numerical results show that proposed
key management scheme achieves high connectivity. In
the simulation experiments, the proposed key
management scheme is applied for different routing
protocols. The performance evaluation shows that the
proposed scheme gives stronger resilience, lower energy
consumption and lesser end-to-end delay.
Optimally Learnt, Neural Network Based Autonomous Mobile Robot Navigation SystemIDES Editor
Neural network based systems have been used in
past years for robot navigation applications because of their
ability to learn human expertise and to utilize this knowledge
to develop autonomous navigation strategies. In this paper,
neural based systems are developed for mobile robot reactive
navigation. The proposed systems transform sensors’ input to
yield wheel velocities. A novel algorithm is proposed for optimal
training of the neural network. To ascertain the efficacy
of the proposed system, the developed neural system's performance
is compared to other neural and fuzzy based approaches.
Simulation results show the effectiveness of the proposed system in
all kinds of obstacle environments.
Conditional Averaging a New Algorithm for Digital FilterIDES Editor
This paper aims at designing a new algorithm for
digital filters. The traditional methods like FIR, IIR have been
improved in recent times with new approaches. However, the
developments have used complex arithmetic calculation and
dedicated DSP processors. In this research project, effort has
been made to reduce such complexities using a procedure
based on the technique of Conditional Averaging. The entire
algorithm is developed using more of conditional statements
and less of arithmetic calculations.
Digital signals are filtered at different stages of
signal processing, and high-speed processors are used for the
calculations associated with the filtering process.
Averaging is one such scheme, used in a simple FIR filter
to perform low-pass filtering. Conditional Averaging
is a new technique that improves on continuous-time
averaging. The Conditional Averaging algorithm
is explained with different examples for the
design of a low-pass filter. This algorithm has been successfully
tested using a digital starter kit with a TMS3206416v DSP
processor. Using Code Composer Studio, the entire algorithm is
written in C/C++ and compiled into assembly
language. Conditional averaging can be implemented with any
general purpose processor to arrive at other types of filters
with certain necessary modifications.
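The abstract does not give the exact averaging rule, so the following is only one plausible reading, stated as an assumption: a sample is averaged with a neighbour only when the two agree within a tolerance, so the decision is made with comparisons rather than fixed filter arithmetic, and sharp transitions survive the smoothing:

```python
def conditional_average(signal, tol):
    """Hypothetical conditional-averaging rule (an assumption, not the
    paper's algorithm): replace each sample by the mean of itself and
    any immediate neighbour lying within 'tol' of it."""
    out = []
    n = len(signal)
    for i, x in enumerate(signal):
        acc, cnt = x, 1
        for j in (i - 1, i + 1):
            # comparison decides participation; no fixed multiply-accumulate
            if 0 <= j < n and abs(signal[j] - x) <= tol:
                acc += signal[j]
                cnt += 1
        out.append(acc / cnt)
    return out

# small noise around a step: the noise is smoothed, the step is kept
noisy_step = [0.0, 0.1, -0.1, 5.0, 5.2, 4.9]
smoothed = conditional_average(noisy_step, tol=1.0)
```

Under this reading the inner loop uses only comparisons and a single division per sample, which matches the abstract's claim of "more conditional statements and less arithmetic".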
The Effectiveness of PIES Compared to BEM in the Modelling of 3D Polygonal Pr...IDES Editor
The paper presents the effectiveness of the
parametric integral equation system (PIES) in solving 3D
boundary problems defined by Navier-Lame equations on the
example of the polygonal boundary and in comparison with
the boundary element method (BEM). Analysis was performed
using the commercial software “BEASY”, which is based on the
BEM, whilst PIES was analyzed using the authors' own
software. On the basis of two examples, the amount of data
necessary to model the boundary geometry, the number
of solved algebraic equations, and the stability and accuracy
of the obtained numerical results were compared.
This document summarizes research on ontology-based web personalization. It discusses how web personalization aims to personalize content based on a user's navigational behavior. Ontology-based approaches use formal domain knowledge to build more accurate user profiles than traditional web mining methods alone. The document surveys recent works applying ontologies to areas like user modeling, recommendation systems, and information retrieval. It also outlines challenges in developing personalized systems, such as building accurate user profiles and addressing privacy and scalability issues. Future work opportunities include better integrating ontology and web mining techniques to improve personalization over time as a user's interests evolve.
A survey on ontology based web personalizationeSAT Journals
Abstract Over the last decade the data on the World Wide Web has been growing in an exponential manner. According to Google, the data is accelerating at a speed of a billion pages per day [24]. The Internet has around 2 million users accessing the World Wide Web for various information [25]. These numbers raise a severe concern over information overload challenges for users. Many researchers have been working to overcome this challenge with web personalization; since each individual is unique, many are looking at ontology based web personalization as an answer to the information overload. In this paper we present an overview of ontology based web personalization, its challenges, and a survey of the work. This paper also points to future work in web personalization. Index Terms: Web Personalization, Ontology, User modeling, web usage mining.
A NOVEL WEB IMAGE RE-RANKING APPROACH BASED ON QUERY SPECIFIC SEMANTIC SIGNAT...Journal For Research
Image re-ranking is an effective way to improve the results of web-based image search. Given a query keyword, a pool of images is initially retrieved primarily based on textual data; the remaining images are then re-ranked based on their visual similarities to the query image corresponding to the user input. A major challenge is that the similarities of visual features do not correlate well with the images' semantic meanings, which interpret users' search intention. Recently it has been proposed to match pictures in a semantic space that uses attributes or reference categories closely associated with the semantic meanings of images as its basis. Even so, learning a universal visual semantic space to characterize the extremely diverse images on the internet is troublesome and inefficient. In this thesis, we propose a distinctive image re-ranking framework that automatically learns different semantic spaces for different query keywords at the online stage. The visual features of images are projected into their corresponding semantic spaces to induce semantic signatures. At the online stage, images are re-ranked by comparing their semantic signatures obtained from the semantic spaces specified by the query keyword. The proposed query-specific semantic signatures considerably improve both the accuracy and efficiency of image re-ranking.
IJRET : International Journal of Research in Engineering and TechnologyImprov...eSAT Publishing House
This document summarizes techniques for improving web search results through web personalization. It discusses how web usage mining can be used to optimize information by monitoring user interaction histories and profiles. The proposed system aims to reduce manual user feedback by implicitly gathering preferences from behaviors like click-through rates and dwell times. It introduces an algorithm that calculates new ranking values for websites based on keyword matches and time spent on pages, and swaps ranks accordingly. This system provides personalized search results by continuously updating rankings based on implicit user interactions.
User search goal inference and feedback session using fast generalized – fuzz...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
Comparative Analysis of Collaborative Filtering TechniqueIOSR Journals
The document compares different collaborative filtering techniques for making recommendations. It finds that hybrid collaborative filtering, which combines memory-based, model-based and other techniques, generally performs better than memory-based or model-based alone in terms of scalability, accuracy and memory consumption. It also proposes adding a normalization step to traditional collaborative filtering to improve accuracy by addressing the uneven distribution of ratings across items.
Following the user’s interests in mobile context aware recommender systemsBouneffouf Djallel
The wide development of mobile applications provides a considerable amount of data of all types (images, texts, sounds, videos, etc.). In this sense, Mobile Context-aware Recommender Systems (MCRS) suggest to the user suitable information depending on her/his situation and interests. Two key questions have to be considered: 1) how to recommend information that follows the user's evolving interests? 2) how to model the user's situation and its related interests? To the best of our knowledge, no existing work proposing an MCRS tries to answer both questions as we do. This paper describes ongoing work on the implementation of an MCRS based on the hybrid-ε-greedy algorithm we propose, which combines the standard ε-greedy algorithm with both content-based filtering and case-based reasoning techniques.
Recommendation generation by integrating sequential pattern mining and semanticseSAT Journals
Abstract As Internet usage keeps increasing, the number of web sites, and hence the number of web pages, also keeps increasing. A recommendation system can be used to provide a personalized web service by suggesting the pages that are likely to be accessed in the future. Most recommendation systems are based on association rule mining or on keywords. With association rule mining, the prediction rate is low because it does not take into account the order in which users access web pages. Keyword-based recommendation systems provide less relevant results. This paper proposes a recommendation system that uses the advantages of sequential pattern mining and semantics over association rule mining and keyword-based systems respectively. Keywords: Sequential Pattern Mining, Taxonomy, Apriori-All, CS-Mine, Semantic, Clustering
This document summarizes a research paper that proposes a bi-objective recommendation framework (BORF) for venue recommendation on mobile social networks. The BORF uses multiple objective optimization techniques to generate personalized recommendations. It addresses issues like cold start and data sparsity by preprocessing data using a Hub-Average inference model. Both scalar optimization using a Weighted Sum Approach and vector optimization using a NSGA-II evolutionary algorithm are implemented to provide optimal venue recommendations to users. Experimental results on a large real-world dataset confirm the accuracy of the proposed recommendation system.
Advance Clustering Technique Based on Markov Chain for Predicting Next User M...idescitation
According to surveys, India is one of the
leading countries in the world for technical education and
management education. The number of students is increasing
day by day at a growth rate of 45% per annum. Advancement
in technology has a special effect on the education system and
helps in upgrading higher education. Some universities and
colleges are using these technologies; the weblog is one of them.
The main aim of this paper is to represent web logs using a clustering
technique for predicting the next user movement and for user
behavior analysis. The paper centers on a web log
clustering technique based on Markov chain results. We
present an approach to web clustering
(clustering web site users) and predicting their behavior for
the next visit. Methodology: To generate effective results, web
usage data from approximately 14 engineering colleges is used,
and an advanced clustering approach is presented after optimizing
other clustering approaches. Results: The user behavior is predicted
with the help of the advanced clustering approach based on
FPCM and k-means. The proposed algorithm is used to mine and
predict users' preferred paths. Existing approaches to predicting
user behavior are not sufficient because of their sensitivity to
noise. Thus, with the help of ACM, noise is reduced, providing
a more accurate result for predicting user behavior.
Implementation: The algorithm was implemented in MATLAB,
DTRG and Java. The experimental results prove that
this method is very effective in predicting user behavior and
validate its effectiveness in comparison with some previous studies.
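The next-movement prediction that this kind of web log mining feeds on can be illustrated with a first-order Markov model over page transitions. The session data below is invented for illustration and is not from the paper's college dataset:

```python
from collections import defaultdict

def build_transitions(sessions):
    """First-order Markov model: count page-to-page transitions
    observed in user sessions, then normalise to probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            counts[cur][nxt] += 1
    return {
        page: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
        for page, nxts in counts.items()
    }

def predict_next(model, page):
    """Most probable next page, or None for an unseen page."""
    nxts = model.get(page)
    return max(nxts, key=nxts.get) if nxts else None

# hypothetical college-site sessions
sessions = [
    ["home", "courses", "fees"],
    ["home", "courses", "faculty"],
    ["home", "courses", "fees"],
    ["admissions", "fees"],
]
model = build_transitions(sessions)
```

Here `predict_next(model, "courses")` picks "fees", since two of the three observed transitions out of "courses" go there; clustering users first (as the paper does with FPCM/k-means) amounts to fitting one such model per cluster instead of one global model.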
Similarity based Dynamic Web Data Extraction and Integration System from Sear...IDES Editor
There is an explosive growth of information in
the World Wide Web thus posing a challenge to Web users
to extract essential knowledge from the Web. Search
engines help us to narrow down the search in the form of
Search Engine Result Pages (SERP). Web Content Mining
is one of the techniques that help users to extract useful
information from these SERPs. In this paper, we propose
two similarity-based mechanisms: WDES, to extract desired
SERPs and store them in the local depository for offline
browsing and WDICS, to integrate the requested contents
and enable the user to perform the intended analysis and
extract the desired information. Our experimental results
show that WDES and WDICS outperform DEPTA [1] in
terms of Precision and Recall.
The document describes a proposed fuzzy logic-based model for classifying web users in a personalized search system. The model collects user browsing data using a customized browser. It then fuzzifies the data and generates fuzzy rules using decision trees. These rules are used to label search pages and group users according to their search interests. The model is evaluated against a Bayesian classifier and shown to perform better. The goal is to handle the dynamic and fluctuating nature of user behavior and interests that exist in a personalized web search environment.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Random Valued Impulse Noise Removal in Colour Images using Adaptive Threshold...IDES Editor
To remove random valued impulse noise from
colour images, an efficient impulse detection and filtering
scheme is presented. The locally adaptive threshold for
impulse detection is derived from the pixels of the filtering
window. The restoration of the noisy pixel is done on the basis
of brightness and chromaticity information obtained from the
neighbouring pixels in the filtering window. Experimental
results demonstrate that the proposed scheme yields much
superior performance in comparison with other colour image
filtering methods.
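The abstract does not give the exact threshold formula, so the sketch below is an assumption-laden illustration of the general idea on a grayscale image: derive a locally adaptive threshold from the filtering window (here, via the median absolute deviation, a hypothetical choice) and restore flagged pixels from their neighbours:

```python
import numpy as np

def remove_impulses(img, win=3, k=2.0):
    """Flag a pixel as impulse noise when it deviates from the window
    median by more than k times the window's median absolute deviation
    (an illustrative threshold, not the paper's), then replace flagged
    pixels with the window median."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + win, j:j + win]
            med = np.median(window)
            mad = np.median(np.abs(window - med)) + 1e-6
            if abs(img[i, j] - med) > k * mad:   # locally adaptive threshold
                out[i, j] = med
    return out

img = np.full((8, 8), 100.0)
img[3, 4] = 255.0                 # isolated random-valued impulse
clean = remove_impulses(img)
```

The paper's method additionally uses brightness and chromaticity from the neighbouring pixels for colour images; the grayscale version above only shows the detect-then-restore structure.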
Analysis of GPSR and its Relevant Attacks in Wireless Sensor NetworksIDES Editor
Most of the routing protocols proposed for ad-hoc
networks and sensor networks are not designed with security
as a goal. Hence, many routing protocols are vulnerable to an
attack by an adversary who can disrupt the network or harness
valuable information from the network. Routing Protocols
for wireless sensor networks are classified into three types
depending on their network structure: flat routing protocols,
hierarchical routing protocols and geographic routing
protocols. We mainly concentrate on a location-based or
geographic routing protocol like Greedy Perimeter Stateless
Routing Protocol (GPSR). The Sybil attack and the selective
forwarding attack are the two attacks feasible in GPSR. These
attacks are implemented in GPSR and the losses they cause to
the network are analysed.
Detecting Malicious Dropping Attack in the InternetIDES Editor
The current interdomain routing protocol, Border
Gateway Protocol, is limited in implementations of universal
security. Because of this, it is vulnerable to many attacks at
the AS to AS routing infrastructure. Initially, the major
concern about BGP security is that malicious BGP routers
can arbitrarily falsify BGP routing messages and spread
incorrect routing information. Recently, some authors have
pointed out the impact of a type of attack, namely the selective
dropping or malicious dropping attack, that has not been studied
before. The malicious dropping attack can result in data traffic
being blackholed or trapped in a loop. However, the authors
did not elaborate on how one can detect such attacks. In this
paper, we discuss and analyse a method that can be used to
detect malicious dropping attacks in the Internet.
Design and Analysis of Adaptive Neural Controller for Voltage Source Converte...IDES Editor
Usually a STATCOM is installed to support power
system networks that have a poor power factor and often poor
voltage regulation. It is based on a power-electronics
voltage-source converter. Various PWM techniques make selective
harmonic elimination possible, which effectively control the
harmonic content of voltage source converters. The distribution
systems have to supply unbalanced nonlinear loads transferring
oscillations to the DC-side of the converter in a realistic
operating condition. Thus, additional harmonics are modulated
through the STATCOM at the point of common coupling
(PCC). This requires more attention when switching angles are
calculated offline using the optimal PWM technique. This
paper, therefore, presents the artificial neural network model
for defining the switching criterion of the VSC for the
STATCOM in order to reduce the total harmonic distortion
(THD) of the injected line current at the PCC. The model takes
into account the DC capacitor effect and the effects of other
possible varying parameters, such as voltage unbalance, as well as
network harmonics. A reference is developed for offline
prediction and then implemented with the help of back
propagation technique.
Design of a Pseudo-Random Binary Code Generator via a Developed Simulation ModelIDES Editor
This paper presents a developed tool for a Pseudo-Random
Binary Code Generator (PRBCG). Based on an extensive
study of LFSR theory, we developed a simulation model of the
PRBCG. The developed model is fast and simulates the
process for very long Linear Feedback Shift Registers
(LFSRs). We tested our model for the value n = 300, where n is
the length of the LFSR. The developed software model is also
capable of providing the transition states of different bits of
LFSRs. Further, the model has the capability of switching to any
possible characteristic polynomial (feedback connections) of
an n-bit LFSR. Also, the model is designed such that it can
accommodate all 2^n possible initial conditions of the LFSR.
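The LFSR mechanism behind such a generator is compact enough to sketch directly. The code below is a generic Fibonacci LFSR, not the paper's model; the 4-bit example uses the primitive polynomial x^4 + x^3 + 1, whose maximal-length sequence repeats every 2^4 - 1 = 15 bits:

```python
def lfsr_sequence(taps, state, steps):
    """Fibonacci LFSR: 'taps' are the feedback stage numbers
    (1-indexed), 'state' the initial register contents as bits.
    Returns one output bit per shift."""
    n = len(state)
    reg = list(state)
    out = []
    for _ in range(steps):
        fb = 0
        for t in taps:                 # XOR of the tapped stages
            fb ^= reg[t - 1]
        out.append(reg[n - 1])         # output the last stage
        reg = [fb] + reg[:-1]          # shift; feedback enters stage 1
    return out

# 4-bit maximal-length LFSR: polynomial x^4 + x^3 + 1 -> taps (4, 3)
bits = lfsr_sequence(taps=(4, 3), state=[1, 0, 0, 0], steps=30)
```

A maximal-length n-bit sequence visits every nonzero state once per period and contains 2^(n-1) ones and 2^(n-1) - 1 zeros; scaling this loop to n = 300, and switching `taps` to other characteristic polynomials, is essentially what the described simulation model automates.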
Design of Equitable Dominating Set Based Semantic Overlay Networks with Optim...IDES Editor
Content Distribution Networks (CDNs) are overlay
networks for placing the content near the end clients with the
aim at reducing the delay, network congestion and balancing
the workload, hence improving the service quality perceived
by the end clients. The main objective of this work is to
construct a semantic overlay network of surrogate servers
based on an equitable dominating set. This enables any replication
algorithm to replicate the contents to a minimum number
of surrogate servers within the SON. Such servers can be
accessed from anywhere. Then we propose a content
distribution algorithm named Optimal Fast Replica (O-FR)
and apply our proposed algorithm to distribute the content
over the Equitable Dominating set based Semantic Overlay
Networks (EDSON). We analyze the performance of the
proposed Optimal Fast Replica (O-FR) in terms of average
replication time and maximum replication time, and compare
its performance with the existing content distribution algorithms
Fast Replica and Resilient Fast Replica. The result of
such approach improves the service quality perceived by the
end clients. This paper also analyzes the use of equitable
dominating set for the construction of semantic overlay
networks and also investigates how it is useful for maintaining
the uniform utilization of the surrogate servers.
Performance Measures of Hierarchical Cellular Networks on Queuing Handoff CallsIDES Editor
This document summarizes a research paper that proposes a Markov model to analyze the performance of two low layers (picocell and femtocell) of a hierarchical cellular network with a FIFO queue. It describes modeling the femtocell layer with and without a queue, and the picocell layer without a queue. The blocking probabilities of new and handoff calls are calculated for different configurations. Results show that having a queue in the femtocell layer increases blocking probability compared to configurations without queues, since it introduces additional queuing delays. The paper contributes to understanding the impact of queues on performance in multi-layer cellular networks.
Novel Voltage Mode Multifunction Filter based on Current Conveyor Transconduc...IDES Editor
This paper presents a novel voltage mode (VM) first-order
single-input three-output multifunction filter employing a
second-generation current conveyor transconductance
amplifier (CCII-TA). The proposed circuit employs only one
active element, one grounded capacitor and three resistors.
The angular pole frequency of the proposed circuits can be
tuned electronically with the help of bias current. The proposed
circuit is very appropriate to further develop into an integrated
circuit. Sensitivity study is provided and SPICE simulations
have been included which verify the workability of the circuit
An Efficient PMG based Wind Energy Conversion System with Power Quality Impro...
This paper presents an efficient small scale wind
energy conversion system using permanent magnet generators
(PMGs) and power electronic converters. In the proposed
system, the variable-magnitude, variable-frequency output of the
PMG is converted to constant DC using a full-bridge rectifier and
a closed-loop boost converter. This constant DC output is then
converted to AC by a grid-interfacing inverter. The inverter
is controlled in such a manner that it can perform as a
conventional inverter as well as an active power filter. The
inverter can thus be utilized as a power converter injecting
power generated from renewable energy source to the grid
and as a shunt active power filter to compensate for power
quality disturbances and load reactive power demand.
Mitigation of Harmonic Current through Hybrid Power Filter in Single Phase Re...
This document discusses a proposed hybrid power filter for harmonics and reactive power compensation in a single-phase rectifier system. The hybrid power filter combines a passive filter and active power filter. The passive filter is tuned to compensate for the 3rd and 5th order harmonics, while the active power filter compensates for the remaining harmonic components not addressed by the passive filter. This allows for a reduction in the rating and size of the active power filter components. The effectiveness of the hybrid power filter configuration is demonstrated through simulation results showing reduced harmonic distortion in the source current and improved power factor.
Performance Evaluation of ad-hoc Network Routing Protocols using ns2 Simulation
Ad-hoc networks are basically peer-to-peer multi-hop
mobile wireless networks in which information packets
are transmitted in a ‘store and forward’ manner from a source
to an arbitrary destination via intermediate nodes. The main
objective of this paper is to evaluate the performance of various
ad-hoc network routing protocols, viz. DSDV (Destination
Sequence Distance Vector), DSR (Dynamic Source Routing)
and AODV (Ad-hoc On Demand Distance Vector). The
comparison of these protocols is based on different
performance metrics: throughput, packet delivery ratio,
routing overhead, packet drops and average end-to-end delay.
The performance evaluation has been carried out using the NS2
(Network Simulator) simulation tool.
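The metrics listed above are derived from counters extracted from the simulator trace files; a minimal sketch with hypothetical counter values (the real numbers come from parsing NS2 traces):

```python
# Per-protocol metrics from raw trace counters (hypothetical numbers).
def routing_metrics(sent, received, routing_pkts,
                    total_delay_s, sim_time_s, pkt_bytes):
    pdr = received / sent                        # packet delivery ratio
    overhead = routing_pkts / received           # normalized routing overhead
    drops = sent - received                      # packets dropped
    avg_delay = total_delay_s / received         # average end-to-end delay (s)
    throughput = received * pkt_bytes * 8 / sim_time_s  # bits per second
    return {"pdr": pdr, "overhead": overhead, "drops": drops,
            "avg_delay": avg_delay, "throughput_bps": throughput}

m = routing_metrics(sent=1000, received=950, routing_pkts=400,
                    total_delay_s=47.5, sim_time_s=200, pkt_bytes=512)
# m["pdr"] is 0.95 and m["avg_delay"] is 0.05 s for these inputs
```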
Partial Shading Detection and MPPT Controller for Total Cross Tied Photovolta...
This paper presents a Maximum Power Point Tracking
(MPPT) controller for solving partial shading problems in
photovoltaic (PV) systems. Partial shading is a well-known PV
system issue with many consequences. In this research, the PV
array is connected in a TCT (total cross-tied) configuration,
with sensors to measure voltages and currents. The sensors
provide inputs to the MPPT controller in order to achieve
optimum output power. An Adaptive Neuro-Fuzzy Inference System
(ANFIS) is used as the controller method. The output of the
MPPT controller is the optimum duty cycle (α) to drive the
DC-DC converter. Simulations show that the proposed MPPT
controller can hold the PV voltage near the maximum power
point voltage (VMPP). The accuracy of the proposed method is
measured by a performance index, the Mean Absolute Percentage
Error (MAPE). In addition, the main purpose of this work is to
present a new method for detecting the partial shading condition
of a photovoltaic TCT configuration using only 3 sensors, which
saves time and reduces operating costs.
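The accuracy index named above, MAPE, has a standard definition; a minimal sketch with hypothetical voltage values:

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent.

    Assumes no actual value is zero (each term divides by it).
    """
    terms = [abs((a - p) / a) for a, p in zip(actual, predicted)]
    return 100.0 * sum(terms) / len(terms)

# e.g. true MPP voltages vs. controller output (hypothetical values)
err = mape([30.0, 29.5, 31.0], [29.4, 29.5, 31.62])
```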
A Review on Pattern Discovery Techniques of Web Usage Mining
In recent years, with the development of Internet technology, the growth of the World Wide Web has exceeded all expectations. A lot of information is available in different formats, and retrieving interesting content has become a very difficult task. One possible approach to this problem is Web Usage Mining (WUM), an important application of Web Mining. Extracting the hidden knowledge in the log files of a web server, recognizing the various interests of web users, and discovering customer behavior while at the site are commonly referred to as applications of web usage mining. In this paper we provide an updated, focused survey of web usage mining techniques.
Enactment of Firefly Algorithm and Fuzzy C-Means Clustering For Consumer Requ...
The document proposes a novel methodology for predicting consumer demand and future requests on web pages using a hybrid approach. It first classifies consumers as potential or non-potential using a firefly-based neural network with Levenberg-Marquardt algorithm. Potential consumer data is then clustered using an improved fuzzy C-means clustering algorithm. Finally, upcoming consumer demand is predicted by analyzing patterns and recommending web pages with higher weights. The proposed approach is implemented in Java and CloudSim and aims to overcome limitations of existing recommendation systems by providing more accurate and efficient predictions in shorter time.
This document proposes a new technique to enhance the learning capabilities and reduce the computation intensity of a competitive learning multi-layered neural network using the K-means clustering algorithm. The proposed model uses a multi-layered network architecture with backpropagation learning to analyze web log data. Data preprocessing steps like cleaning, user identification, and transaction identification are applied to prepare the enterprise proxy log data for analysis. The proposed framework aims to discover useful patterns from web log data through a combination of K-means clustering and a feedforward neural network.
Performance of Real Time Web Traffic Analysis Using Feed Forward Neural Netw...
This document discusses using feed forward neural networks and K-means clustering to analyze real-time web traffic. It proposes a technique to enhance the learning capabilities and reduce the computation intensity of a competitive learning multi-layered neural network using the K-means clustering algorithm. The model uses a multi-layered network architecture with backpropagation learning to discover and analyze knowledge from web log data. It also discusses preprocessing the web log data through cleaning, user identification, filtering, session identification and transaction identification before applying the neural network and K-means algorithms.
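The clustering stage described above groups web-log sessions with k-means; as a simplified stand-in (the papers pair it with a multi-layered neural network, not reproduced here, and the session vectors below are hypothetical page-visit indicators), a plain k-means sketch:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means over fixed-length numeric vectors.

    Simplified sketch: Euclidean assignment, mean update,
    stop when centroids no longer move or iters is exhausted.
    """
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign p to the nearest centroid (squared distance)
            i = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        new = []
        for c, members in enumerate(clusters):
            if members:
                dim = len(members[0])
                new.append(tuple(sum(m[d] for m in members) / len(members)
                                 for d in range(dim)))
            else:
                new.append(centroids[c])  # keep empty cluster's centroid
        if new == centroids:
            break
        centroids = new
    return centroids, clusters

# Sessions as page-visit indicator vectors (hypothetical log data)
sessions = [(1, 1, 0, 0), (1, 1, 1, 0), (0, 0, 1, 1), (0, 1, 1, 1)]
cents, groups = kmeans(sessions, k=2)
```

Each resulting cluster centroid can be read as an aggregate usage profile: the per-page values estimate how often that page appears in sessions of the cluster.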
Certain Issues in Web Page Prediction, Classification and Clustering in Data ...
Nowadays web mining, a branch of data mining, plays a vital role in various applications that extract information based on the user's query: search engines, health care (extracting an individual patient's details from a huge database, analyzing disease based on basic criteria), education systems (comparing performance levels across systems), social networking, e-commerce, knowledge management, and so on. The open issues are the time taken to mine the target content or webpage from the search engines, space complexity, and predicting the frequent webpage for the next user based on users' behaviour.
Semantically enriched web usage mining for predicting user future movements
Explosive and quick growth of the World Wide Web has resulted in intricate Web sites, demanding
enhanced user skills and sophisticated tools to help the Web user find the desired information. Finding
desired information on the Web has become a critical ingredient of everyday personal, educational, and
business life. Thus, there is a demand for more sophisticated tools to help the user navigate a Web site
and find the desired information. Users must be provided with information and services specific to
their needs, rather than an undifferentiated mass of information.
Many Web usage mining techniques have been applied to discover interesting and frequent
navigation patterns from Web server logs. The recommendation accuracy of solely usage-based
techniques can be improved by integrating Web site content and site structure in the personalization
process.
Herein, we propose a Semantically enriched Web Usage Mining method (SWUM), which combines the fields
of Web Usage Mining and the Semantic Web. In the proposed method, the undirected graph derived from
usage data is enriched with rich semantic information extracted from the Web pages and the Web site
structure. The experimental results show that SWUM generates accurate recommendations by
integrating usage data, semantic data and Web site structure: the proposed method achieves 10-20%
better accuracy than the solely usage-based model, and 5-8% better than an ontology-based model.
An effective search on web log from most popular downloaded content
A Web page recommender system effectively predicts the best related web page to search. While searching
for a word, a search engine may display unnecessary links and unrelated data to the user; to avoid this
problem, the conceptual prediction model combines both web usage and domain knowledge. The
proposed conceptual prediction model automatically generates a semantic network of the semantic Web
usage knowledge, which is the integration of domain knowledge and web usage information. Web usage
mining aims to discover interesting and frequent user access patterns from web browsing data. The
discovered knowledge can then be used for many practical web applications such as web
recommendations, adaptive web sites, and personalized web search and surfing.
The World Wide Web is a huge repository of information, and there is a tremendous increase in the volume of
information daily. The number of users is also increasing day by day, and a lot of research has taken place
to reduce users' browsing time. Web Usage Mining is a type of web mining in which mining techniques are
applied to log data to extract the behaviour of users. Clustering plays an important role in a broad range
of applications like Web analysis, CRM, marketing, medical diagnostics, computational biology, and many
others. Clustering is the grouping of similar instances or objects; the key factor for clustering is some sort
of measure that can determine whether two objects are similar or dissimilar. In this paper a novel
clustering method to partition user sessions into accurate clusters is discussed. The accuracy and various
performance measures of the proposed algorithm show that it is a better method for
web log mining.
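The similarity measure mentioned above is, for session vectors, very commonly taken to be cosine similarity; a minimal sketch (the paper's exact measure is not specified here, and the visit counts are hypothetical):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two session vectors, e.g. per-page
    visit counts: 1.0 means identical direction, 0.0 no shared pages."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

s1 = [3, 1, 0, 0]   # hypothetical visit counts over four pages
s2 = [2, 2, 0, 0]
s3 = [0, 0, 1, 4]
# s1 is far closer to s2 than to s3, so s1 and s2 would cluster together
```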
Semantic web page recommender system
This document discusses a semantic web-page recommender system that aims to improve upon traditional recommender systems. It proposes two novel knowledge representation models: 1) a semantic network that represents domain knowledge through terms, webpages, and relationships automatically constructed from website data, and 2) a conceptual prediction model that integrates semantic knowledge with web usage data to create a weighted semantic network for making recommendations. The system seeks to overcome limitations of prior systems by automating knowledge base construction and addressing the "new-item problem" through incorporation of semantic information. Evaluation shows the proposed approach yields better performance than existing web usage-based recommender systems.
AN EXTENSIVE LITERATURE SURVEY ON COMPREHENSIVE RESEARCH ACTIVITIES OF WEB US...
This document summarizes an extensive literature review on web usage mining. It begins by defining web mining and its three types: web content mining, web structure mining, and web usage mining. It then describes the main stages of web usage mining in detail: pre-processing, storage models, pattern discovery, optimization techniques, and pattern analysis. Finally, it discusses the characteristics of web server log data, including that it is unstructured, heterogeneous, distributed, contains different data types and dynamic content, and is voluminous, non-scalable, and incremental in nature.
Optimization of Search Results with Duplicate Page Elimination using Usage Data
The performance and scalability of search engines
are greatly affected by the presence of enormous amount of
duplicate data on the World Wide Web. The flooded search
results containing a large number of identical or near identical
web pages affect the search efficiency and seek time of the users
to find the desired information within the search results. When
navigating through the results, the only information left behind
by the users is the trace through the pages they accessed. This
data is recorded in the query log files and usually referred to
as Web Usage Data. In this paper, a novel technique for
optimizing search efficiency by removing duplicate data from
search results is proposed, which utilizes the usage data
stored in the query logs. Duplicate data detection is
performed by the proposed Duplicate Data Detection (D3)
algorithm, which works offline on the basis of favored user
queries found by pre-mining the logs with query clustering.
The proposed result optimization technique is expected to
enhance search engine efficiency and effectiveness to a large
extent.
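As a generic illustration of near-duplicate page detection (not the authors' D3 algorithm, which works on query logs), pages can be compared by the Jaccard overlap of their word shingles; the texts and threshold below are hypothetical:

```python
def shingles(text, k=3):
    """Set of k-word shingles (contiguous word windows) of a text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a, b):
    """Jaccard overlap of two sets: |a & b| / |a | b|."""
    return len(a & b) / len(a | b) if a | b else 1.0

def near_duplicates(pages, threshold=0.5):
    """Return index pairs of pages whose shingle sets overlap heavily."""
    sets = [shingles(t) for t in pages]
    return [(i, j)
            for i in range(len(pages))
            for j in range(i + 1, len(pages))
            if jaccard(sets[i], sets[j]) >= threshold]

pages = [
    "web usage mining discovers user access patterns from server logs",
    "web usage mining discovers user access patterns from web server logs",
    "power system state estimation with weighted least squares",
]
dups = near_duplicates(pages)  # flags only the first two pages as a pair
```

Production systems replace the exact shingle sets with MinHash or SimHash sketches so that comparison stays cheap at web scale.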
IRJET-A Survey on Web Personalization of Web Usage Mining
S. Jagan, Dr. S. P. Rajagopalan, "A Survey on Web Personalization of Web Usage Mining", International Research Journal of Engineering and Technology (IRJET), Volume 2, Issue 01, Mar-2015. e-ISSN: 2395-0056, p-ISSN: 2395-0072. www.irjet.net, published by Fast Track Publications.
Abstract
Nowadays, the World Wide Web (WWW) is a rich and powerful source of information. Day by day it expands in size as more information is published online, and it is becoming a more complex and critical task to retrieve the exact information expected by its users. One powerful concept to deal with this problem is personalization. Personalization is a subclass of information filtering that seeks to predict the 'ratings' or 'preferences' a user would give to items they have not yet considered, using a model built from the characteristics of an item (content-based approaches) or from the behaviour of similar users (collaborative filtering approaches). Web mining is an emerging field of data mining used to provide personalization on the web. It consists of three major categories: Web Content Mining, Web Usage Mining, and Web Structure Mining. This paper focuses on web usage mining and the algorithms used for providing personalization on the web.
IRJET- An Integrated Recommendation System using Graph Database and QGIS
This document presents a recommendation system that uses a graph database and QGIS to find the shortest path for a user to shop at the nearest mall. It analyzes product reviews stored in the Neo4j graph database to determine which products the user may be interested in. It then uses QGIS to calculate the shortest distance from the user's current location to the nearest mall with the recommended products. The system aims to minimize the time a user spends shopping by providing personalized recommendations and routing them to the closest appropriate location. It discusses how graph databases and hybrid recommendation approaches can be used to integrate different recommendation techniques for improved performance.
IRJET- Text-based Domain and Image Categorization of Google Search Engine usi...
This document discusses a proposed system for categorizing search engine results using conceptual clustering. The system analyzes the content of search results to extract relevant concepts, then uses a personalized conceptual clustering algorithm to generate a decision tree of query clusters. This tree can be used to identify categories for web pages and provide topically relevant results to users. The system aims to improve on traditional ranked search results by categorizing results based on the conceptual preferences and interests of individual users.
IRJET- A Literature Review and Classification of Semantic Web Approaches for ...
This document discusses using semantic web approaches for web personalization. It begins with an abstract that outlines how web personalization can help address the problem of information overload by recommending and filtering web pages according to a user's interests. The document then reviews related work on using ontologies and semantic web technologies for personalized e-learning, recommender systems, and other applications. It categorizes different semantic web approaches that have been used for web personalization, including their pros and cons. The overall purpose is to survey semantic web techniques for personalization and how they have been applied in previous research.
Classification of User & Pattern discovery in WUM: A Survey
This document summarizes research on web usage mining techniques. It discusses how web usage mining involves discovering patterns from web server logs to understand how users interact with websites. The document reviews several papers on preprocessing log data, pattern discovery methods like clustering and classification, and classifying users based on patterns. It also provides an overview of the web usage mining process, which typically involves preprocessing, pattern discovery from cleaned logs, and using patterns to classify users. The goal is to help website administrators better understand users and personalize websites.
Mixed Recommendation Algorithm Based on Content, Demographic and Collaborativ...
The document describes a proposed hybrid recommendation algorithm that incorporates content filtering, collaborative filtering, and demographic filtering. It begins with an overview of recommendation systems and different filtering techniques. Then, it discusses related work incorporating various filtering approaches. The methodology section outlines the original algorithm, which develops user profiles based on browsing history and ratings. It provides recommendations by calculating similarities between user and item profiles. The proposed methodology enhances this by incorporating demographic attributes into user profiles and using fuzzy logic to validate recommendations. It claims this integrated approach can provide more accurate and personalized recommendations.
This document summarizes research on techniques for improving database performance under heavy query loads. It discusses using fuzzy logic to classify queries by priority and batch similar queries to execute together. The document reviews related work on detecting misuse and insider threats, collaborative querying, risk-adaptive access control models, query recommendation, and ranking query results. The proposed approach uses fuzzy logic to identify high priority queries in a priority queue based on extracted features. Queries are clustered and batched by type to commit together, aiming to yield the best database performance even under high loads.
Web Page Recommendation Using Web Mining
On the World Wide Web, various kinds of content are generated in huge amounts, so web recommendation has become an important part of web applications in order to give relevant results to the user. Different kinds of web recommendations are made available to users every day, including images, video, audio, query suggestions and web pages. In this paper we aim at providing a framework for web page recommendation. 1) First we describe the basics of web mining and the types of web mining. 2) We detail each web mining technique. 3) We propose an architecture for personalized web page recommendation.
A detail survey of page re ranking various web features and techniques
This document discusses techniques for page re-ranking on websites based on user behavior analysis. It describes how web usage mining involves analyzing web server logs to extract patterns in user behavior. Common techniques discussed for page re-ranking include Markov models, data mining approaches like clustering and association rule mining, and analyzing linked web page structures. The goal is to better understand user interests and predict future page access to improve information retrieval and optimize website design.
Similar to A Least Square Approach To Analyze Usage Data For Effective Web Personalization
Power System State Estimation - A Review
This document provides a review of power system state estimation techniques. It discusses both static and dynamic state estimation algorithms. For static state estimation, it covers weighted least squares, decoupled, and robust estimation methods. Weighted least squares is commonly used but can have numerical instability issues. Decoupled state estimation approximates the gain matrix for faster computation. Robust estimation uses M-estimators and other techniques to handle outliers and bad data. Dynamic state estimation applies Kalman filtering, leapfrog algorithms, and other methods to continuously monitor system states over time.
Artificial Intelligence Technique based Reactive Power Planning Incorporating...
This document summarizes a research paper that proposes using artificial intelligence techniques and FACTS controllers for reactive power planning in real-time power transmission systems. The paper formulates the reactive power planning problem and incorporates flexible AC transmission system (FACTS) devices like static VAR compensators (SVC), thyristor controlled series capacitors (TCSC), and unified power flow controllers (UPFC). Evolutionary algorithms like evolutionary programming (EP) and differential evolution (DE) are applied to find the optimal locations and settings of the FACTS controllers to minimize losses and costs. Simulation results on IEEE 30-bus and 72-bus Indian test systems show that UPFC performs best in reducing losses compared to SVC and TCSC.
Design and Performance Analysis of Genetic based PID-PSS with SVC in a Multi-...
Damping of power system oscillations with the help
of the proposed optimal Proportional Integral Derivative Power
System Stabilizer (PID-PSS) and Static Var Compensator
(SVC)-based controllers is thoroughly investigated in this
paper. This study presents robust tuning of PID-PSS and
SVC-based controllers using Genetic Algorithms (GA) in
multi-machine power systems, considering a detailed model
of the generators (model 1.1). The effectiveness of FACTS-based
controllers in general, and the SVC-based controller in
particular, depends upon their proper location. Modal
controllability and observability are used to locate the SVC-based
controller. The performance of the proposed controllers is
compared with a conventional lead-lag power system stabilizer
(CPSS) and demonstrated on the 10-machine, 39-bus New England
test system. Simulation studies show that the proposed genetic-based
PID-PSS with SVC-based controller provides better
performance.
Optimal Placement of DG for Loss Reduction and Voltage Sag Mitigation in Radi...
The need to operate the power system economically and with
optimum voltage levels has led to an increase in interest in
Distributed Generation. In order to reduce power losses and
improve voltage in the distribution system, distributed
generators (DGs) are connected to load buses. To reduce the
total power losses in the system, the most important step is to
identify the proper locations and sizes of the DGs. This paper
presents a new methodology using a population-based meta-heuristic
approach, the Artificial Bee Colony (ABC) algorithm, for
the placement of Distributed Generators (DGs) in radial
distribution systems to reduce real power losses, improve the
voltage profile, and mitigate voltage sags. Power loss reduction
is an important factor for utility companies because
it is directly proportional to company benefits in a
competitive electricity market, while meeting power
quality standards is also important as it has a vital effect on
customer orientation. In this paper an ABC algorithm is
developed to attain these goals together. In order to evaluate
the sag mitigation capability of the proposed algorithm, the
voltage at voltage-sensitive buses is investigated. An existing
20 kV radial distribution network has been chosen as the test
network for evaluating the proposed method.
Line Losses in the 14-Bus Power System Network using UPFC
Controlling power flow in modern power systems
can be made more flexible by the use of recent developments
in power electronic and computing control technology. The
Unified Power Flow Controller (UPFC) is a Flexible AC
transmission system (FACTS) device that can control all the
three system variables namely line reactance, magnitude and
phase angle difference of voltage across the line. The UPFC
provides a promising means to control power flow in modern
power systems. Essentially the performance depends on proper
control setting achievable through a power flow analysis
program. This paper presents a reliable method to meet the
requirements by developing a Newton-Raphson based load
flow calculation through which control settings of UPFC can
be determined for the pre-specified power flow between the
lines. The proposed method keeps Newton-Raphson Load Flow
(NRLF) algorithm intact and needs (little modification in the
Jacobian matrix). A MATLAB program has been developed to
calculate the control settings of UPFC and the power flow
between the lines after the load flow is converged. Case studies
have been performed on IEEE 5-bus system and 14-bus system
to show that the proposed method is effective. These studies
indicate that the method maintains the basic NRLF properties
such as fast computational speed, high degree of accuracy and
good convergence rate.
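The load-flow calculation above rests on the Newton-Raphson update x ← x − f(x)/f′(x); the one-variable sketch below illustrates it on a toy power-balance-style equation with hypothetical per-unit values (the real NRLF solves a multivariate system using the full Jacobian matrix):

```python
import math

def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Scalar Newton-Raphson iteration: x <- x - f(x)/df(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Toy equation: find the angle delta with P = V1*V2*B*sin(delta)
# (all quantities hypothetical, in per-unit)
P, V1, V2, B = 0.8, 1.0, 1.0, 2.0
f = lambda d: V1 * V2 * B * math.sin(d) - P
df = lambda d: V1 * V2 * B * math.cos(d)
delta = newton_raphson(f, df, 0.1)
# at the solution, sin(delta) = P / (V1 * V2 * B) = 0.4
```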
Study of Structural Behaviour of Gravity Dam with Various Features of Gallery...
The size and shape of an opening in a dam causes stress
concentration; it also causes stress variation in the rest of the
dam cross-section. The gravity method of analysis considers
neither the size of the opening nor the elastic properties of the
dam material. The objective of this study is therefore to apply
the Finite Element Method, which accounts for the size of the
opening, the elastic properties of the material, and the stress
distribution caused by geometric discontinuity in the dam
cross-section. Stress concentration inside the dam increases
with the opening, which can result in failure of the dam; hence
it is necessary to analyze large openings inside the dam. Keeping
the percentage area of the opening constant while varying its
size and shape, the analysis is carried out on a section of the
Koyna Dam. The dam is defined as a plane-strain element in FEM,
based on geometry and loading conditions, so a 2D plane-strain
analysis is carried out. The results obtained are then compared
mutually to find the most efficient way of providing a large
opening in the gravity dam.
Assessing Uncertainty of Pushover Analysis to Geometric Modeling
Pushover analysis is a popular tool for seismic performance
evaluation of existing and new structures. It is a nonlinear
static procedure in which monotonically increasing loads are
applied to the structure until it is unable to resist further
load. The strength of concrete and steel adopted for the
analysis may not match the real structure as constructed, and
pushover results are very sensitive to the material model, the
geometric model, the locations of plastic hinges, and in general
to the procedure followed by the analyzer. In this paper an
attempt has been made to assess uncertainty in pushover analysis
results by considering user-defined hinges, with the frame
modeled both as a bare frame and as a frame with the slab
modeled as a rigid diaphragm. The uncertain parameters
considered include the strength of concrete, the strength of
steel, and the cover to the reinforcement, which are randomly
generated and incorporated into the analysis. The results are
then compared with experimental observations.
Secure Multi-Party Negotiation: An Analysis for Electronic Payments in Mobile...
This document summarizes and analyzes secure multi-party negotiation protocols for electronic payments in mobile computing. It presents a framework for secure multi-party decision protocols using lightweight implementations. The main focus is on synchronizing security features to avoid agreement manipulation and reduce user traffic. The paper describes negotiation between an auctioneer and bidders, showing multiparty security is better than existing systems. It analyzes the performance of encryption algorithms like ECC, XTR, and RSA for use in the multiparty negotiation protocols.
Selfish Node Isolation & Incentivation using Progressive Thresholds
The problems associated with selfish nodes in
MANETs are addressed by a collaborative watchdog approach,
which reduces the detection time for selfish nodes and thereby
improves the performance and accuracy of watchdogs [1].
Related works make use of credit-based systems, reputation-based
mechanisms, and pathrater and watchdog mechanisms to detect
such selfish nodes. In this paper we follow a collaborative
watchdog approach that reduces the detection time for selfish
nodes and also removes such nodes based on progressively
assessed thresholds. The thresholds give nodes a chance to stop
misbehaving before they are permanently deleted from the
network; a node passes through several isolation stages before
it is permanently removed. A modified version of the AODV
protocol is used, which allows the simulation of selfish nodes
in NS2 by adding or modifying log files in the protocol.
Various OSI Layer Attacks and Countermeasure to Enhance the Performance of WS...
Wireless sensor networks are networks with a wireless
infrastructure and a dynamic topology. In the OSI model, each
layer is prone to various attacks, which degrade the performance
of a network. In this paper, several attacks on four layers of the
OSI model are discussed, and a security mechanism is described
to prevent a network-layer attack, namely the wormhole attack. In
a wormhole attack, two or more malicious nodes create a covert
channel that attracts traffic towards itself by advertising a
low-latency link, and then start dropping and replaying packets
in the multi-path route. This paper proposes a promiscuous-mode
method to detect and isolate the malicious node during a
wormhole attack using the Ad hoc On-demand Distance Vector
(AODV) routing protocol with an omnidirectional antenna. In the
implemented methodology, the nodes that are not participating
in multi-path routing generate an alarm message when delay is
observed, after which the malicious node is detected and isolated
from the network. We also note that not only
the same kinds of attacks but also the same kinds of
countermeasures can appear in multiple layers. For example,
misbehavior detection techniques can be applied to almost all
the layers we discussed.
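The promiscuous-mode idea above, where a node overhears its neighbor's traffic and flags it when packets stop being forwarded, can be sketched as a simple forwarding-ratio check. The 0.8 threshold and packet counts are hypothetical assumptions, not figures from the paper.

```python
# Illustrative promiscuous-mode check: a neighbor overhears how many of the
# packets it handed to a node were actually forwarded, and flags the node
# when the forwarding ratio drops (as when a wormhole endpoint starts
# dropping packets). The 0.8 threshold is an assumed value.
def is_suspect(packets_sent: int, forwards_overheard: int,
               threshold: float = 0.8) -> bool:
    """Flag a next-hop node that forwards too small a fraction of traffic."""
    if packets_sent == 0:
        return False  # nothing observed yet, no basis for an alarm
    return (forwards_overheard / packets_sent) < threshold

print(is_suspect(100, 95), is_suspect(100, 20))  # → False True
```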
Responsive Parameter based an AntiWorm Approach to Prevent Wormhole Attack in...IDES Editor
The recent advancements in wireless technology
and its widespread deployment have brought remarkable
efficiency gains in the corporate, industrial,
and military sectors. The increasing popularity and usage of
wireless technology is creating a need for more secure wireless
ad hoc networks. This paper presents a new protocol that
prevents wormhole attacks on an ad hoc
network. A few existing protocols detect wormhole attacks, but
they require highly specialized equipment not found on most
wireless devices. This paper develops a defense against
wormhole attacks, an anti-worm protocol based on
responsive parameters, that requires no significant
amount of specialized equipment, no tight clock synchronization,
and no GPS dependencies.
Cloud Security and Data Integrity with Client Accountability FrameworkIDES Editor
This document summarizes a proposed cloud security and data integrity framework that provides client accountability. The framework aims to address issues like lack of user control over cloud data, need for data transparency and tracking, and ensuring data integrity. It proposes using JAR (Java Archive) files for data sharing due to benefits like portability. The framework incorporates client-side verification using MD5 hashing, digital signature-based authentication of JAR files, and use of HMAC to ensure data integrity. It also uses password-based encryption of log files to keep them tamper-proof. The framework is intended to provide both accountability and security for data sharing in cloud environments.
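The client-side verification and HMAC integrity checks mentioned above can be sketched with Python's standard library. This is a minimal illustration of the general technique, not the framework's actual implementation, which operates on JAR files and log records; the key and payload here are invented.

```python
# Minimal sketch of client-side integrity checking: an MD5 checksum computed
# before upload, plus a keyed HMAC tag so the storage provider cannot
# silently alter the data.
import hashlib
import hmac

def md5_digest(data: bytes) -> str:
    """Client-side checksum of the data before upload."""
    return hashlib.md5(data).hexdigest()

def hmac_tag(key: bytes, data: bytes) -> str:
    """Keyed tag verifiable only by a holder of the shared secret."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify(key: bytes, data: bytes, tag: str) -> bool:
    """Constant-time comparison to check integrity on retrieval."""
    return hmac.compare_digest(hmac_tag(key, data), tag)

payload = b"log entry: user A accessed record 42"   # illustrative log record
tag = hmac_tag(b"shared-secret", payload)
print(verify(b"shared-secret", payload, tag))        # → True
print(verify(b"shared-secret", payload + b"!", tag)) # → False
```

Any tampering with the stored data (or the log) changes the tag, so the client detects it on retrieval, which is the accountability property the framework targets.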
Genetic Algorithm based Layered Detection and Defense of HTTP BotnetIDES Editor
An HTTP botnet uses the HTTP protocol
for the creation of a chain of botnets, thereby compromising
other systems. By using the HTTP protocol and port number 80,
attacks can not only be hidden but can also pass through the
firewall without being detected. DPR-based detection
leads to better analysis of botnet attacks [3]. However, it
provides only probabilistic detection of the attacker and is also
time-consuming and error-prone. This paper proposes a genetic-algorithm-based
layered approach for detecting as well as
preventing botnet attacks. The paper reviews a p2p firewall
implementation, which forms the basis of filtering.
Performance evaluation is done based on precision, F-value,
and probability. The layered approach reduces the computation
and the overall time requirement [7], and the genetic algorithm promises
a low false positive rate.
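The precision and F-value metrics used for the evaluation above are standard; a minimal sketch follows. The detection counts in the example are hypothetical, not the paper's results.

```python
# Standard detection metrics: precision, recall, and the F-value
# (harmonic mean of precision and recall) used to score botnet detection.
def precision(tp: int, fp: int) -> float:
    """Fraction of raised alerts that were real attacks."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of real attacks that were detected."""
    return tp / (tp + fn)

def f_value(tp: int, fp: int, fn: int) -> float:
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

# e.g. 90 true botnet detections, 10 false alarms, 30 missed attacks
print(round(f_value(90, 10, 30), 3))  # → 0.818
```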
Enhancing Data Storage Security in Cloud Computing Through SteganographyIDES Editor
This document summarizes a research paper that proposes a method for enhancing data security in cloud computing through steganography. The method hides user data in digital images stored on cloud servers. When data needs to be accessed, it is extracted from the images. The document outlines the cloud architecture and security issues addressed. It then describes the proposed system architecture, security model, and data storage and retrieval process. Data is partitioned and hidden in multiple images to improve security. The goal is to prevent unauthorized access to user data stored on cloud servers.
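The hide/extract cycle described above can be illustrated with least-significant-bit embedding, a common steganographic technique. This is a toy sketch under stated assumptions: a flat byte array stands in for one image's pixel data, whereas the actual system partitions data across multiple images on cloud servers.

```python
# Toy LSB steganography: each data bit replaces the least significant bit
# of one "pixel" byte, so the visible values change by at most 1.
def embed(pixels: bytearray, data: bytes) -> bytearray:
    out = bytearray(pixels)
    bits = (bit for byte in data
            for bit in ((byte >> k) & 1 for k in range(8)))
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the LSB only
    return out

def extract(pixels: bytearray, n_bytes: int) -> bytes:
    bits = [pixels[i] & 1 for i in range(n_bytes * 8)]
    return bytes(sum(bits[b * 8 + k] << k for k in range(8))
                 for b in range(n_bytes))

image = bytearray(range(64))       # stand-in for an image's pixel data
stego = embed(image, b"secret")
print(extract(stego, 6))  # → b'secret'
```

Retrieval reverses the embedding, matching the paper's extract-on-access design; real systems would also record how the data is split across the cover images.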
The main tasks of a Wireless Sensor Network
(WSN) are data collection from its nodes and communication
of this data to the base station (BS). The protocols used for
communication among the WSN nodes and between the WSN
and the BS must consider the resource constraints of the nodes:
battery energy, computational capabilities, and memory.
WSN applications involve unattended operation of the network
over an extended period of time, so efficient routing protocols
need to be adopted in order to extend the lifetime of a WSN. The
proposed low-power routing protocol, based on a tree-based
network structure, reliably forwards the measured data towards
the BS using TDMA. An energy consumption analysis of the
WSN making use of this protocol is also carried out. It is
found that the network is energy efficient, with an average
duty cycle of 0.7% for the WSN nodes. The OMNeT++
simulation platform along with the MiXiM framework is used.
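The duty-cycle energy analysis above can be illustrated with a back-of-the-envelope calculation. Only the 0.7% average duty cycle comes from the abstract; the current draws and battery capacity below are illustrative assumptions, not the paper's values.

```python
# Back-of-the-envelope node lifetime from a TDMA duty cycle: the node draws
# active current only during its slot and sleep current otherwise.
def avg_current_ma(duty_cycle: float, active_ma: float, sleep_ma: float) -> float:
    """Time-weighted average current draw for a duty-cycled node."""
    return duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma

def lifetime_days(battery_mah: float, avg_ma: float) -> float:
    """Battery capacity divided by average draw, in days."""
    return battery_mah / avg_ma / 24

# Hypothetical radio: 20 mA active, 5 µA asleep, two AA cells (~2400 mAh)
avg = avg_current_ma(0.007, active_ma=20.0, sleep_ma=0.005)
print(round(lifetime_days(2400, avg), 1))  # lifetime in days, roughly two years
```

The point of the sketch is that at a 0.7% duty cycle the sleep current, not the active current, starts to matter, which is why such protocols extend network lifetime so dramatically.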
Permutation of Pixels within the Shares of Visual Cryptography using KBRP for...IDES Editor
The security of authentication for internet-based
co-banking services should not be susceptible to high risks.
Passwords are highly vulnerable to virus attacks due to
the lack of high-end embedding of security methods. For
passwords to be more secure, people are generally
compelled to select jumbled-up character-based passwords,
which are not only less memorable but are also equally prone
to insecurity. The use of multiple distributed shares has been
studied to solve the problem of authentication, with algorithms
based on the thresholding of pixels in image processing and visual
cryptography, where a subset of shares is considered
for the recovery of the original image for authentication using a
correlation function [1][2]. The main disadvantage of the above
study is the plain storage of the shares; moreover, one of the shares
is supplied to the customer, which opens the
possibility of misuse by a third party. This paper proposes a
technique for scrambling the pixels by key-based random
permutation (KBRP) within the shares before
authentication is attempted. The total number of shares to
be created depends on the multiplicity of ownership of
the account. This method minimizes the customers'
uncertainty with regard to the security, storage, and
retrieval of their half of the shares.
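The key-based scrambling idea can be sketched as follows: the key deterministically seeds a permutation of pixel positions, so only a holder of the same key can invert it. Deriving the seed via SHA-256 and shuffling with `random.Random` is an illustrative assumption; the paper's KBRP constructs its permutation differently.

```python
# Sketch of key-based pixel scrambling: the same key always yields the same
# permutation, so scrambling is invertible only with the key.
import hashlib
import random

def key_permutation(key: str, n: int) -> list:
    """Deterministic permutation of n positions derived from the key."""
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest(), "big")
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def scramble(pixels: list, key: str) -> list:
    perm = key_permutation(key, len(pixels))
    return [pixels[p] for p in perm]

def unscramble(pixels: list, key: str) -> list:
    perm = key_permutation(key, len(pixels))
    out = [0] * len(pixels)
    for i, p in enumerate(perm):
        out[p] = pixels[i]   # put each pixel back at its original position
    return out

share = list(range(16))            # stand-in for one share's pixel values
print(unscramble(scramble(share, "k1"), "k1") == share)  # → True
```

Because the shares are stored only in scrambled form, a stolen share reveals nothing useful without the key, addressing the plain-storage weakness noted above.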
This paper presents a trifocal Rotman lens design
approach. The effects of focal ratio and element spacing on
the performance of a Rotman lens are described. A three-beam
prototype feeding a 4-element antenna array working in the L-band
has been simulated using the RLD v1.7 software. Simulated
results show that the lens has a return loss of
-12.4 dB at 1.8 GHz. The variation of beam-to-array-port phase error
with changes in the focal ratio and element spacing has also
been investigated.
Band Clustering for the Lossless Compression of AVIRIS Hyperspectral ImagesIDES Editor
Hyperspectral images can be efficiently compressed
through a linear predictive model, such as the one
used in the SLSQ algorithm. In this paper we exploit this
predictive model on the AVIRIS images by identifying,
through an off-line approach, a common subset of bands which
are not spectrally related to any other bands. These bands
are not useful as prediction references for the SLSQ 3-D
predictive model, and we need to encode them via other
prediction strategies that consider only spatial correlation.
We have obtained this subset by clustering the AVIRIS bands
via the clustering-by-compression approach. The main result
of this paper is the list of the bands, not related to the
others, for the AVIRIS images. The clustering trees obtained for
AVIRIS, and the relationships among bands they depict, are also
an interesting starting point for future research.
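Clustering by compression typically relies on a compression-based similarity such as the normalized compression distance (NCD), sketched below with `zlib` as the compressor. Using zlib and byte strings as stand-ins for spectral bands is an assumption for illustration; the paper's exact compressor and setup may differ.

```python
# Normalized compression distance: two objects are similar when compressing
# their concatenation costs little more than compressing the larger alone.
import os
import zlib

def ncd(x: bytes, y: bytes) -> float:
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Spectrally related "bands" compress well together (low NCD);
# an unrelated, incompressible band does not.
a = bytes(range(64)) * 8
b = bytes(range(64)) * 8     # same pattern as a: highly related
c = os.urandom(512)          # random bytes: unrelated "band"
print(ncd(a, b) < ncd(a, c))  # → True
```

Bands whose NCD to every other band stays high end up isolated in the clustering tree, exactly the "not spectrally related" subset the paper extracts.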
Microelectronic Circuit Analogous to Hydrogen Bonding Network in Active Site ...IDES Editor
A microelectronic circuit of block-elements
functionally analogous to two hydrogen bonding networks is
investigated. The hydrogen bonding networks are extracted
from the β-lactamase protein and are formed in its active site.
Each hydrogen bond of the network is described in the equivalent
electrical circuit by a three- or four-terminal block-element.
Each block-element is coded in Matlab. Static and dynamic
analyses are performed. The resultant microelectronic circuit
analogous to the hydrogen bonding network operates as a
current mirror, a sine pulse source, a triangular pulse source, as
well as a signal modulator.
Texture Unit based Monocular Real-world Scene Classification using SOM and KN...IDES Editor
In this paper a method is proposed to discriminate
real-world scenes into natural and man-made scenes of similar
depth. The global roughness of a scene image varies as a function
of image depth. An increase in image depth leads to increased
roughness in man-made scenes; on the contrary, natural scenes
exhibit smooth behavior at higher image depths. This particular
arrangement of pixels in the scene structure can be well explained
by the local texture information in a pixel and its neighborhood.
Our proposed method analyses the local texture information of a
scene image using a texture unit matrix. For the final classification
we have used both supervised and unsupervised learning, using
the K-Nearest Neighbor (KNN) classifier and the Self-Organizing
Map (SOM) respectively. This technique is useful for online
classification due to its very low computational complexity.
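The texture unit representation the method builds on can be sketched as follows: each of a pixel's 8 neighbors is coded 0, 1, or 2 depending on whether it is darker than, equal to, or brighter than the center, giving a base-3 texture unit number whose image-wide histogram characterizes local roughness. The neighbor ordering here is an illustrative assumption.

```python
# Texture unit of one pixel: 8 neighbors coded 0/1/2 against the center,
# combined as a base-3 number in [0, 6560].
def texture_unit(center: int, neighbors: list) -> int:
    assert len(neighbors) == 8, "expects the 8-neighborhood of one pixel"
    code = 0
    for i, v in enumerate(neighbors):
        e = 0 if v < center else (1 if v == center else 2)
        code += e * (3 ** i)
    return code

# A perfectly flat patch codes every neighbor as "equal" (all 1s in base 3)
print(texture_unit(5, [5] * 8))  # → 3280
```

Smooth natural regions concentrate their texture units near this "all-equal" value, while man-made structures spread across many units, which is the roughness cue the KNN and SOM classifiers exploit.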