A novel and innovative method for designing of RF MEMS devices (eSAT Journals)
Abstract
The design complexity of RF MEMS devices is increasing at a fast rate, which requires more accurate design and simulation techniques. With upcoming technologies, accurate simulation capability is a necessity for designing smaller chips. Designing devices at the nano scale presents a number of unique challenges. As the scale of the individual device decreases and the complexity of the physical structure increases, the device characteristics depart from those predicted by many classically held modeling concepts. Furthermore, the difficulty of performing measurements on these devices means more emphasis must be placed on theoretically obtained results. Modeling also allows new device structures to be rigorously investigated prior to fabrication. This paper reports a novel method for the design and simulation of a MEMS-based inductor by co-simulation.
Keywords: Micro Electro Mechanical System, Radio Frequency, Co-Simulation, COMSOL Multiphysics, SOLIDWORKS, Computer Aided Design, Integrated Circuit
An efficient approach on spatial big data related to wireless networks and it... (eSAT Journals)
Abstract
Spatial big data plays an important role in wireless network applications. Spatial and spatio-temporal problems occupy a distinct role in big data compared with common relational problems. We describe three applications for spatial big data, each of which imposes specific design requirements, and develop highly scalable parallel processing for spatial big data on Hadoop frameworks using the MapReduce computational model. Our results show that Hadoop enables highly scalable implementations of algorithms for spatial data processing problems, although developing these implementations requires specialized knowledge.
Keywords: Spatial Big Data, Hadoop, Wireless Networks, Map reduce
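As an illustrative sketch of the MapReduce model mentioned above, the pure-Python snippet below bins spatial points into grid cells in a map phase and counts them per cell in a reduce phase. The cell size and points are hypothetical, and a real deployment would run the two phases on Hadoop rather than in-process:

```python
from collections import defaultdict

CELL = 10.0  # hypothetical grid-cell size in coordinate units

def map_point(point):
    """Map phase: emit a (grid-cell, point) pair for one (x, y) coordinate."""
    x, y = point
    return (int(x // CELL), int(y // CELL)), point

def reduce_counts(mapped):
    """Reduce phase: group mapped pairs by cell and count points per cell."""
    counts = defaultdict(int)
    for cell, _ in mapped:
        counts[cell] += 1
    return dict(counts)

points = [(1.0, 2.0), (3.0, 4.0), (12.0, 2.0), (15.0, 18.0)]
cell_counts = reduce_counts(map(map_point, points))
```

The key property Hadoop exploits is that the map phase is embarrassingly parallel and the reduce phase only needs points sharing a cell key.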
Mobile Location Indexing Based On Synthetic Moving Objects (IJECE, IAES)
Today, research based on data that moves, known as moving-object indexing, has grown out of traditional static indexing. Several indexing approaches exist to handle complicated moving positions; one suitable idea is to pre-order the objects before building the index structure. In this paper, a presorted-nearest index tree algorithm is proposed that allows maintaining, updating, and range-querying mobile objects within a desired period. Besides, it gives the advantages of an index structure, easy data access and fast queries, along with retrieval of the nearest locations to a given point from the index structure. A synthetic mobile position dataset is also proposed for performance evaluation, so that it is free from location privacy and confidentiality concerns. Detailed experimental results are discussed together with a performance evaluation against a KD-tree-based index structure. Both approaches are similarly efficient in range searching; however, the proposed approach saves much more time for nearest neighbor search within a range than the KD-tree-based calculation.
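The pre-ordering idea above can be sketched in one dimension: keeping positions sorted lets both range queries and nearest-neighbor lookups use binary search. This is a toy model under that assumption, not the paper's presorted-nearest index tree:

```python
import bisect

class PresortedIndex:
    """Toy sketch: keep 1-D positions sorted so queries use binary search."""
    def __init__(self, positions):
        self.positions = sorted(positions)

    def insert(self, p):
        # Maintain sorted order on update.
        bisect.insort(self.positions, p)

    def range_query(self, lo, hi):
        # All positions in [lo, hi], found via two binary searches.
        i = bisect.bisect_left(self.positions, lo)
        j = bisect.bisect_right(self.positions, hi)
        return self.positions[i:j]

    def nearest(self, q):
        # Only the two neighbors of the insertion point can be closest.
        i = bisect.bisect_left(self.positions, q)
        cands = self.positions[max(0, i - 1):i + 1]
        return min(cands, key=lambda p: abs(p - q))

idx = PresortedIndex([2.0, 9.0, 4.0, 7.0])
```

A KD-tree generalizes this to higher dimensions, which is where the paper's comparison takes place.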
A novel incremental clustering for information extraction from social networks (eSAT Journals)
Abstract
This project concentrates on summarizing the comment stream attached to a particular message on social media. Because of the popularity of social media, the number of comments can grow rapidly once a message is published. Since users want to gain a detailed understanding of a comment stream without reading the entire comment set, we cluster comments with similar content and produce a succinct summary for the message. Because users may request a summary at any time, existing clustering strategies cannot satisfy this requirement. We formulate an incremental clustering problem for comment-stream summarization on social media and propose an incremental clustering method that updates clustering results as new comments arrive. We also design a presentation interface comprising basic data, key terms, and representative comments; this at-a-glance interface helps users quickly grasp an outline of the comment stream. Experimental results show that the incremental clustering method is more efficient than K-Means and batch clustering methods.
Keywords: Clustering, Summarization, Comment Streams, SNS (Social Network Services).
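A minimal sketch of incremental clustering in the spirit described above, assuming comments are already embedded as numeric vectors; the threshold and distance function are hypothetical choices, not the paper's:

```python
def euclid(a, b):
    """Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def incremental_cluster(clusters, comment_vec, threshold, dist):
    """Assign a new comment vector to the nearest cluster centroid, or start
    a new cluster if every centroid is farther than `threshold`.
    `clusters` is a list of [centroid, count] pairs, updated in place;
    returns the index of the cluster the comment joined."""
    if clusters:
        best = min(clusters, key=lambda c: dist(c[0], comment_vec))
        if dist(best[0], comment_vec) <= threshold:
            centroid, n = best
            # Running-mean update keeps the centroid exact without re-scanning.
            best[0] = [(ci * n + xi) / (n + 1)
                       for ci, xi in zip(centroid, comment_vec)]
            best[1] = n + 1
            return clusters.index(best)
    clusters.append([list(comment_vec), 1])
    return len(clusters) - 1

clusters = []
i0 = incremental_cluster(clusters, [0.0, 0.0], 1.0, euclid)
i1 = incremental_cluster(clusters, [0.5, 0.0], 1.0, euclid)
i2 = incremental_cluster(clusters, [5.0, 5.0], 1.0, euclid)
```

Unlike batch K-Means, each arriving comment costs one pass over the current centroids, which is what makes on-demand summaries cheap.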
Vertical Fragmentation of Location Information to Enable Location Privacy in ... (IJASA)
The aim of the development of pervasive computing was to simplify our lives by integrating communication technologies into real life. Location-aware computing, which evolved from pervasive computing, performs services that depend on the location of the user or his communication device. The advancements in this area have led to major revolutions in various application areas, especially mass advertisements. It has long been evident that privacy of personal information, in this case the location of the user, is rather a touchy subject with most people. This paper explores the location privacy issue in location-aware computing. Vertical fragmentation of the stored location information of users is proposed as an effective solution for this issue.
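A toy illustration of vertical fragmentation, assuming identity attributes and location attributes are split into fragments linked only by an opaque key, so that neither fragment alone reveals who was where. All field names are made up for the sketch:

```python
import uuid

def fragment(record):
    """Split one user record into an identity fragment and a location
    fragment that share only a random opaque key."""
    key = uuid.uuid4().hex
    identity_frag = {"key": key, "name": record["name"], "phone": record["phone"]}
    location_frag = {"key": key, "lat": record["lat"], "lon": record["lon"]}
    return identity_frag, location_frag

def rejoin(identity_frag, location_frag):
    """Only a party holding both fragments can reconstruct the record."""
    assert identity_frag["key"] == location_frag["key"]
    return {**identity_frag, **location_frag}

ident, loc = fragment({"name": "alice", "phone": "555",
                       "lat": 40.7, "lon": -74.0})
```

Storing the two fragments on separately administered servers is the usual deployment assumption behind this kind of scheme.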
An adaptive algorithm for task scheduling for computational grid (eSAT Journals)
Abstract
Grid computing is a collection of computing and storage resources drawn from multiple administrative domains that can be applied to reach a common goal. Since computational grids enable the sharing and aggregation of a wide variety of geographically distributed computational resources, effective task scheduling is vital for managing the tasks. Efficient scheduling algorithms are needed to achieve efficient utilization of the unused CPU cycles distributed across various locations. The existing job scheduling algorithms in grid computing mainly concentrate on the system's performance rather than user satisfaction. This research work presents a new algorithm that focuses on better meeting the deadlines of statically available jobs as expected by the users, and also concentrates on better utilization of the available heterogeneous resources.
Keywords: Task Scheduling, Computational Grid, Adaptive Scheduling and User Deadline.
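The paper's algorithm is not reproduced here, but a standard deadline-oriented baseline for statically known jobs is Earliest Deadline First, sketched below for a single resource:

```python
import heapq

def schedule_edf(jobs):
    """Earliest-Deadline-First sketch: run jobs in deadline order on one
    resource. Each job is (name, runtime, deadline); returns the execution
    order and the list of jobs that missed their deadlines."""
    heap = [(deadline, runtime, name) for name, runtime, deadline in jobs]
    heapq.heapify(heap)  # min-heap keyed on deadline
    t, order, missed = 0, [], []
    while heap:
        deadline, runtime, name = heapq.heappop(heap)
        t += runtime
        order.append(name)
        if t > deadline:
            missed.append(name)
    return order, missed
```

A grid scheduler additionally chooses among heterogeneous resources, but the deadline-first ordering above is the common starting point.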
Modeling the Adaption Rule in Context-aware Systems (IJASUC)
Context awareness is increasingly gaining applicability in interactive ubiquitous mobile computing
systems. Each context-aware application has its own set of behaviors to react to context modifications. This
paper is concerned with the context modeling and the development methodology for context-aware systems.
We propose a rule-based approach and use an adaption tree to model the adaption rules of context-aware systems. We illustrate this idea with an arithmetic game application.
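An adaption tree of the kind described can be sketched as a nested lookup from context attributes to behaviors. The attributes and behaviors below are invented for illustration, not taken from the paper's game:

```python
# Interior nodes test one context attribute; leaves name a behavior.
ADAPTION_TREE = {
    "attr": "location",
    "children": {
        "classroom": {"attr": "noise", "children": {
            "quiet": {"behavior": "enable_audio_hints"},
            "loud": {"behavior": "visual_only"},
        }},
        "outdoors": {"behavior": "pause_game"},
    },
}

def adapt(tree, context):
    """Walk the adaption tree using the current context dictionary
    until a behavior leaf is reached."""
    while "behavior" not in tree:
        tree = tree["children"][context[tree["attr"]]]
    return tree["behavior"]
```

Each root-to-leaf path corresponds to one adaption rule, which is why the tree form makes the rule set easy to check for coverage and conflicts.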
IRJET- Factoid Question and Answering System (IRJET Journal)
This document describes a factoid question answering system that uses neural networks and the Tensorflow framework. The system takes in a text document and question as input. It then processes the input using techniques like gated recurrent units and support vector machines to classify the question. The system calculates attention between facts and the question, modifies its memory, and identifies the word closest to the answer to output as the response. Key aspects of the system include training a question answering engine with Tensorflow, storing and retrieving data, and generating the final answer.
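The attention step between facts and the question can be sketched as a softmax over dot products, assuming both are already encoded as vectors (for instance by the gated recurrent units mentioned above):

```python
import math

def attention_weights(question_vec, fact_vecs):
    """Softmax over dot products between the question vector and each
    fact vector; higher weight means the fact matters more for the answer."""
    scores = [sum(q * f for q, f in zip(question_vec, fv)) for fv in fact_vecs]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

In the full system these weights gate the memory update; here they just show which fact the question attends to.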
This document discusses approaches to resolving ambiguity in user queries to search engines. It first introduces the problem of ambiguity, where a single search term can have multiple meanings. It then overviews existing ranking algorithms used by search engines. The document proposes a dynamic page ranking algorithm to better disambiguate user intent by considering the context of the query. It evaluates the approach using mean average precision on the ambiguous term "beat". Experimental results show the dynamic approach outperforms a baseline page ranking algorithm on this task.
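Mean average precision, the evaluation metric used above, can be computed as follows; the document ids are hypothetical:

```python
def average_precision(ranked, relevant):
    """AP for one query: average of precision@k taken at each rank k
    where a relevant document appears, divided by the number of
    relevant documents."""
    hits, precisions = 0, []
    for k, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """`runs` is a list of (ranked result list, relevant set) pairs,
    one per query; MAP is the mean of the per-query APs."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)
```

For an ambiguous term like "beat", each intended sense would contribute its own relevant set, so a ranking that surfaces the right sense earlier scores a higher MAP.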
Digital image hiding algorithm for secret communication (eSAT Journals)
Abstract
With the growing popularity of digital media on the world wide web, it is important to keep images secret to avoid geometric attacks and image theft. A new digital image hiding algorithm is proposed that provides security of the hidden image, imperceptibility, robustness, and anti-attack capability. To achieve this, two different algorithms are used in the frequency domain: the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD). Wavelet coefficients of the secret and carrier images are found using the DWT, which divides each image into four frequency parts called LL, LH, HL, and HH. The low-frequency component carries the main information, so only the low-frequency part is processed with SVD to increase imperceptibility. The processed secret image is embedded into the transformed carrier image. With this algorithm, an image can be stored and transmitted without loss even after being affected by external factors such as rotation and noise. The parameter used to test robustness is the peak signal-to-noise ratio (PSNR). The experimental results show that the proposed method is more robust against different kinds of attacks.
Keywords: Image hiding, DWT, SVD, Frequency domain, Attacks.
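The robustness metric named above, peak signal-to-noise ratio, is computed from the mean squared error between two images; in this sketch an image is simply a flat list of 8-bit pixel values:

```python
import math

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images
    given as flat pixel lists; higher means less visible distortion."""
    mse = sum((a - b) ** 2 for a, b in zip(original, distorted)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(peak ** 2 / mse)
```

Embedding schemes like the DWT+SVD one above are judged by how high the PSNR of the stego image stays before and after attacks.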
Survey on Supervised Method for Face Image Retrieval Based on Euclidean Dist... (Editor IJCATR)
This document summarizes various supervised methods for face image retrieval based on Euclidean distance. It discusses literature on active shape models, principal component analysis, linear discriminant analysis, locality-constrained linear coding, bag-of-words models, local binary patterns, and support vector machines. It evaluates support vector machines as the best classifier for face image retrieval systems due to its ability to significantly reduce the need for labeled training data and accurately classify faces, proteins, and characters. The document concludes that a content-based face retrieval system using support vector machines improves detection performance by retrieving similar faces from a database based on Euclidean distance calculations between local binary pattern features of the query and database images.
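The retrieval step described above, ranking database faces by Euclidean distance between feature vectors, reduces to a sort. The names and two-dimensional features below are placeholders for real local-binary-pattern histograms:

```python
def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def retrieve(query_feature, database, top_k=2):
    """Rank (name, feature) pairs by distance to the query feature and
    return the names of the top_k closest faces."""
    ranked = sorted(database, key=lambda item: euclidean(item[1], query_feature))
    return [name for name, _ in ranked[:top_k]]

faces = [("anna", [0.9, 0.1]), ("ben", [0.1, 0.9]), ("carl", [0.8, 0.2])]
```

The classifier (e.g. an SVM) narrows the candidate set; the distance ranking above then orders what survives.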
Anomaly detection in the services provided by multi cloud architectures: a survey (eSAT Publishing House)
This document summarizes various anomaly detection techniques that can be used in multi-cloud architectures. It discusses statistical, data mining, and machine learning based techniques. A table compares 11 different anomaly detection models or frameworks, outlining their advantages and disadvantages. The document concludes that combining multiple techniques may generate better results for anomaly detection in clouds. Future work could optimize existing techniques or use unsupervised "black box" approaches without human intervention.
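As one concrete instance of the statistical techniques surveyed, a z-score detector flags values that deviate from the mean by more than a chosen number of standard deviations; the threshold of 3.0 is a common but arbitrary choice:

```python
def zscore_anomalies(values, threshold=3.0):
    """Return indices of values whose z-score (distance from the mean in
    standard deviations) exceeds `threshold`."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5
    if std == 0:
        return []  # constant series: nothing to flag
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]
```

Data-mining and machine-learning detectors replace this global mean/variance model with learned structure, but the flag-what-deviates principle is the same.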
PREDICTIVE MAINTENANCE AND ENGINEERED PROCESSES IN MECHATRONIC INDUSTRY: AN I... (IJAIA)
This document summarizes a case study on implementing predictive maintenance processes in a mechatronic industry using machine learning algorithms. A company installed sensors on a cutting machine to monitor blade status in real-time. A software platform was developed to analyze sensor data using k-Means clustering and LSTM algorithms to predict blade break conditions. The platform classified risk maps and predicted alert levels based on recent variable values. This approach aimed to optimize maintenance and reduce machine downtime for customers.
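The k-Means step can be sketched on scalar sensor readings. This one-dimensional toy, with invented vibration values, alternates assignment and center updates exactly as full k-Means does; it is not the platform's implementation:

```python
def kmeans_1d(values, centers, iters=10):
    """Tiny k-Means sketch on scalar sensor readings: repeatedly assign
    each value to its nearest center, then move each center to the mean
    of its assigned values."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda k: abs(v - centers[k]))
            groups[i].append(v)
        # Empty groups keep their previous center.
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

readings = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]  # two regimes: normal vs. near-break
centers = kmeans_1d(readings, [0.0, 10.0])
```

In a predictive-maintenance pipeline the resulting cluster centers label operating regimes, and a sequence model such as the LSTM mentioned above predicts transitions toward the risky one.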
COMPARATIVE ANALYSIS OF MINUTIAE BASED FINGERPRINT MATCHING ALGORITHMSijcsit
Biometric matching involves finding similarity between fingerprint images.The accuracy and speed of the
matching algorithmdetermines its effectives. This researchaims at comparing two types of matching
algorithms namely(a) matching using global orientation features and (b) matching using minutia triangulation.The comparison is done using accuracy, time and number of similar features. The experiment is conducted on a datasets of 100 candidates using four (4) fingerprints from each candidate. The data is sampled from a mass registration conducted by a reputable organization in Kenya.Theresearch reveals that fingerprint matching based on algorithm (b) performs better in speed with an average of 38.32 milliseconds
as compared to matching based on algorithm (a) with an average of 563.76 milliseconds. On accuracy,algorithm(a) performs better with an average accuracy of 0.142433 as compared to algorithm (b) with an average accuracy score of 0.004202.
IRJET- Quantify Mutually Dependent Privacy Risks with Locality Data (IRJET Journal)
This document discusses how co-location information shared on social networks can threaten users' location privacy by enabling more accurate localization of users' locations over time. It formalizes the problem of quantifying privacy risks from co-location data and location information, and proposes optimal and approximate localization attack algorithms to incorporate co-location data. Experimental evaluations on mobility trace data show that considering a single friend's co-locations can decrease a user's median location privacy by up to 62%. Differential privacy perspectives are also discussed. The study aims to quantify the effect of co-location information on location privacy risks.
Activity Context Modeling in Context-Aware... (Editor IJCATR)
The explosion of mobile devices has fuelled the advancement of pervasive computing to provide personal assistance in this
information-driven world. Pervasive computing takes advantage of context-aware computing to track, use and adapt to contextual
information. The context that has attracted the attention of many researchers is the activity context. There are six major techniques that
are used to model activity context. These techniques are key-value, logic-based, ontology-based, object-oriented, mark-up schemes and
graphical. This paper analyses these techniques in detail, describing how each technique is implemented and reviewing their pros and cons. The paper ends with a hybrid modeling method that fits heterogeneous environments while considering the entire modeling process through the data acquisition and utilization stages. The modeling stages of activity context are data sensation, data abstraction, and reasoning and planning. The work revealed that mark-up schemes and object-oriented techniques are best applicable at the data sensation stage. Key-value and object-oriented techniques fairly support the data abstraction stage, whereas the logic-based and ontology-based techniques are ideal for the reasoning and planning stage. In a distributed system, mark-up schemes are very useful for data communication over a network, and the graphical technique should be used when saving context data into a database.
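The key-value technique, the simplest of the six, can be sketched as a flat store populated at the data sensation stage. Its weakness, no semantics or reasoning support, is visible in how little structure it carries; the method names here are illustrative:

```python
class KeyValueContext:
    """Flat key-value activity-context model: trivial to populate from
    sensors, but no ontology semantics and no reasoning support."""
    def __init__(self):
        self._store = {}

    def sense(self, key, value):
        """Data sensation stage: record one raw contextual attribute."""
        self._store[key] = value

    def get(self, key, default=None):
        """Look up an attribute; missing context yields the default."""
        return self._store.get(key, default)

ctx = KeyValueContext()
ctx.sense("activity", "walking")
ctx.sense("location", "park")
```

Ontology-based models would instead type these attributes and relate them, which is what enables the reasoning-and-planning stage the survey assigns to them.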
The document describes a new algorithm for long-term robust visual tracking of moving objects in video sequences. The algorithm aims to overcome challenges such as geometric deformations, partial or total occlusions, and recovery after the target leaves the field of vision. It does not rely on a probabilistic process or require prior detection data. Experimental results on difficult video sequences demonstrate advantages over recent trackers. The algorithm can be used in applications like video surveillance, active vision, and industrial visual servoing.
This document discusses using data mining classification approaches and genetic algorithms to predict student grades in an online educational system. It proposes using student data like homework submissions and web usage to classify them and predict their final grades. A genetic algorithm is used to optimize combinations of different classifiers like k-nearest neighbor and k-means on a dataset from an online learning system called LON-CAPA. The performance of several classification algorithms is evaluated to predict student grades based on their online activities. Sample student and faculty datasets are also presented.
Adaptive Privacy Policy Prediction of User Uploaded Images on Content sharing... (IJERA Editor)
Maintaining privacy on image-sharing social sites has become a major problem, as demonstrated by a recent wave of publicized incidents in which users inadvertently shared personal information. In light of these incidents, the need for tools to help users control access to their shared content is apparent. Toward addressing this need, an Adaptive Privacy Policy Prediction (A3P) system is proposed to help users compose privacy settings for their images. The solution relies on an image classification framework that associates image categories with similar policies, and on a policy prediction algorithm that automatically generates a policy for each newly uploaded image, also according to the user's social features. Image sharing takes place both among previously established groups of known people or social circles and, increasingly, with people outside the user's social circles for purposes of social discovery, to help users identify new peers and learn about peers' interests and social surroundings. Sharing images within online content sharing sites may therefore quickly lead to unwanted disclosure. The aggregated information can result in unexpected exposure of one's social environment and lead to abuse of one's personal information.
A survey on location based search using spatial inverted index method (eSAT Journals)
Abstract
Conventional spatial queries, such as nearest neighbor retrieval and range search, involve conditions only on objects' geometric properties. Today, however, many modern applications support a new kind of query that aims to find objects satisfying both spatial conditions and conditions on their associated text. For example, instead of considering all hotels, a nearest neighbor query could instead ask for the hotel that is nearest among those offering services such as a pool and internet at the same time. For this kind of query, a variant of the inverted index that is effective for multidimensional points is employed, combined with an R-tree built on each inverted list; using a minimum bounding method, it can answer nearest neighbor queries with keywords in real time.
Keywords: Spatial database, nearest neighbor search, spatial index, keyword search.
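The query described above can be sketched by combining an inverted index (keyword to object ids) with a distance ranking over the surviving candidates. The R-tree acceleration is omitted, so this linear-scan toy only shows the semantics; the hotel data is invented:

```python
def nearest_with_keywords(query_point, keywords, objects):
    """Spatial-keyword query sketch: build an inverted index from keyword
    to object ids, keep only objects matching all query keywords, then
    return the id of the candidate closest to `query_point`."""
    inverted = {}
    for oid, (pos, words) in objects.items():
        for w in words:
            inverted.setdefault(w, set()).add(oid)
    candidates = set(objects)
    for w in keywords:
        candidates &= inverted.get(w, set())
    def sq_dist(oid):
        (x, y), _ = objects[oid]
        return (x - query_point[0]) ** 2 + (y - query_point[1]) ** 2
    return min(candidates, key=sq_dist) if candidates else None

hotels = {
    "h1": ((0.0, 0.0), {"pool"}),
    "h2": ((5.0, 5.0), {"pool", "internet"}),
    "h3": ((1.0, 1.0), {"internet"}),
}
```

The surveyed method avoids this full scan by storing each inverted list in an R-tree, so the nearest matching object is found via bounding-box pruning instead.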
The document proposes a new feature descriptor called Local Bit-Plane Wavelet Pattern (LBWP) to improve content-based retrieval of biomedical images like CT and MRI scans. LBWP encodes relationships between pixel intensities in different bit planes and applies a wavelet function, capturing more fine-grained image details than prior methods. Evaluation on a dataset from The Cancer Imaging Archive showed LBWP outperformed existing approaches like Local Wavelet Pattern with higher average retrieval precision, rate, and F-score.
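The bit-plane decomposition on which LBWP builds can be illustrated directly: plane b of an 8-bit pixel is simply bit b of its binary value (the wavelet and pattern-encoding steps of LBWP are not shown):

```python
def bit_plane(pixels, plane):
    """Extract one bit plane from 8-bit pixel values (plane 0 = LSB,
    plane 7 = MSB)."""
    return [(p >> plane) & 1 for p in pixels]

row = [0, 1, 2, 255]
```

Higher planes carry coarse structure and lower planes carry fine detail, which is the intuition behind encoding relationships across planes.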
With the surge in modern research focus towards pervasive computing, many techniques and challenges need to be addressed to effectively create smart spaces and achieve miniaturization. In the process of scaling down to compact devices, the real things to ponder are the information retrieval challenges. In this work, we discuss the aspects of multimedia that make information access challenging. An example pattern recognition scenario is presented, and the mathematical techniques that can be used to model uncertainty are also presented, for developing a system that can sense, compute, and communicate in a way that makes human life easy, with smart objects assisting from the surroundings.
Information systems and databases are vital and critical for decision making in business, enterprise, governance, and similar arenas. Further, information systems are designed to reach the maximum number of people for interaction with the system. This intent of system usage appears resourceful, but identifying reliable users of the system has challenged technocrats.
Most applications use a relational database management system (RDBMS) for database support, but through intelligent exploitation of vulnerabilities in the Structured Query Language (SQL) of an RDBMS, dishonest people can manipulate or process the data to reduce or damage its business value.
This paper attempts to endow a relational database system with security by including a newly designed semiotic, the keyword SECURE, in the SQL lexicon. The semiotic is then articulated as a relational algebraic expression for execution by the current query engine.
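The SECURE keyword itself is the paper's proposal and is not reproduced here. As a related, standard mitigation for the SQL vulnerabilities described, parameterized queries keep attacker-controlled input out of the SQL text, shown below with an in-memory SQLite database and made-up table contents:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name):
    """The ? placeholder passes `name` as data, never as SQL, so an
    injection payload cannot change the query's structure."""
    cur = conn.execute("SELECT role FROM users WHERE name = ?", (name,))
    return cur.fetchall()

malicious = "alice' OR '1'='1"
```

Had `name` been concatenated into the SQL string, the `malicious` value would have matched every row; as a bound parameter it matches none.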
The company is growing rapidly and needs to hire more employees to keep up with demand. New offices will be opened in other cities to support the growth. A long-term plan is being developed to ensure the company continues to grow sustainably in the coming years.
Best Fotos is a company that offers photography services. It specializes in event photography, such as weddings, graduations, and parties. It has been in the market for over 6 years providing excellent photography services to its clients.
Shopping Recife ran an educational campaign to encourage customers to clean their tables after meals, using illustrations by the artist Rafa Mattos promoting love and respect for others. The campaign was successful, with more than 50% of tables left clean afterwards, compared with less than 10% before.
This document discusses approaches to resolving ambiguity in user queries to search engines. It first introduces the problem of ambiguity, where a single search term can have multiple meanings. It then overviews existing ranking algorithms used by search engines. The document proposes a dynamic page ranking algorithm to better disambiguate user intent by considering the context of the query. It evaluates the approach using mean average precision on the ambiguous term "beat". Experimental results show the dynamic approach outperforms a baseline page ranking algorithm on this task.
Digital image hiding algorithm for secret communication (eSAT Journals)
Abstract
With the growing popularity of digital media on the World Wide Web, it is important to keep images secret and to protect them from geometric attacks and theft. A new digital image hiding algorithm is proposed that provides security of the hidden image, imperceptibility, robustness, and anti-attack capability. To achieve this, two algorithms are combined in the frequency domain: the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD). The wavelet coefficients of the secret and carrier images are found using the DWT, which divides each image into four frequency sub-bands called LL, LH, HL, and HH. The low-frequency component carries the main information, so only the LL sub-band is processed with SVD to increase imperceptibility. The processed secret image is embedded into the transformed carrier image. With this algorithm, an image can be stored and transmitted without loss even after being affected by external factors such as rotation and noise. The parameter used to test robustness is the peak signal-to-noise ratio (PSNR). Experimental results show that the proposed method is more robust against different kinds of attacks.
Keywords: Image hiding, DWT, SVD, Frequency domain, Attacks.
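The DWT-plus-SVD embedding described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: it uses a one-level Haar transform and additive embedding of singular values, with a hypothetical strength factor `alpha`:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns the LL, LH, HL, HH sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def embed_svd(carrier_ll, secret_ll, alpha=0.05):
    """Additively embed the secret's singular values into the carrier's LL band."""
    u, s, vt = np.linalg.svd(carrier_ll, full_matrices=False)
    _, s_sec, _ = np.linalg.svd(secret_ll, full_matrices=False)
    s_marked = s + alpha * s_sec              # strength factor alpha is illustrative
    return u @ np.diag(s_marked) @ vt
```

For a constant image the LL band equals the image value and the detail bands vanish, which is a quick sanity check on the transform.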
Survey on Supervised Method for Face Image Retrieval Based on Euclidean Dist... (Editor IJCATR)
This document summarizes various supervised methods for face image retrieval based on Euclidean distance. It discusses literature on active shape models, principal component analysis, linear discriminant analysis, locality-constrained linear coding, bag-of-words models, local binary patterns, and support vector machines. It evaluates support vector machines as the best classifier for face image retrieval systems due to its ability to significantly reduce the need for labeled training data and accurately classify faces, proteins, and characters. The document concludes that a content-based face retrieval system using support vector machines improves detection performance by retrieving similar faces from a database based on Euclidean distance calculations between local binary pattern features of the query and database images.
Anomaly detection in the services provided by multi cloud architectures: a survey (eSAT Publishing House)
This document summarizes various anomaly detection techniques that can be used in multi-cloud architectures. It discusses statistical, data mining, and machine learning based techniques. A table compares 11 different anomaly detection models or frameworks, outlining their advantages and disadvantages. The document concludes that combining multiple techniques may generate better results for anomaly detection in clouds. Future work could optimize existing techniques or use unsupervised "black box" approaches without human intervention.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
PREDICTIVE MAINTENANCE AND ENGINEERED PROCESSES IN MECHATRONIC INDUSTRY: AN I... (ijaia)
This document summarizes a case study on implementing predictive maintenance processes in a mechatronic industry using machine learning algorithms. A company installed sensors on a cutting machine to monitor blade status in real-time. A software platform was developed to analyze sensor data using k-Means clustering and LSTM algorithms to predict blade break conditions. The platform classified risk maps and predicted alert levels based on recent variable values. This approach aimed to optimize maintenance and reduce machine downtime for customers.
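The clustering step of such a platform can be illustrated with a bare-bones k-Means over sensor feature vectors. This is a generic sketch of Lloyd's algorithm, not the company's platform; the deterministic initialization is an assumption made for reproducibility:

```python
import numpy as np

def kmeans(x, k, iters=50):
    """Plain Lloyd's k-Means: returns centroids and per-sample labels."""
    # deterministic init: spread the starting centroids over the samples
    centroids = x[np.linspace(0, len(x) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        # assign every sample to its nearest centroid
        dists = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its cluster (skip empty clusters)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = x[labels == j].mean(axis=0)
    return centroids, labels
```

In a risk-map setting, each resulting cluster would be labeled with an alert level by a domain expert.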
COMPARATIVE ANALYSIS OF MINUTIAE BASED FINGERPRINT MATCHING ALGORITHMS (ijcsit)
Biometric matching involves finding the similarity between fingerprint images. The accuracy and speed of the matching algorithm determine its effectiveness. This research compares two types of matching algorithms, namely (a) matching using global orientation features and (b) matching using minutia triangulation. The comparison is done on accuracy, time, and the number of similar features. The experiment is conducted on a dataset of 100 candidates using four (4) fingerprints from each candidate, sampled from a mass registration conducted by a reputable organization in Kenya. The research reveals that fingerprint matching based on algorithm (b) performs better in speed, with an average of 38.32 milliseconds, compared with matching based on algorithm (a), which averages 563.76 milliseconds. On accuracy, algorithm (a) performs better, with an average accuracy of 0.142433 compared with an average accuracy score of 0.004202 for algorithm (b).
IRJET- Quantify Mutually Dependent Privacy Risks with Locality Data (IRJET Journal)
This document discusses how co-location information shared on social networks can threaten users' location privacy by enabling more accurate localization of users' locations over time. It formalizes the problem of quantifying privacy risks from co-location data and location information, and proposes optimal and approximate localization attack algorithms to incorporate co-location data. Experimental evaluations on mobility trace data show that considering a single friend's co-locations can decrease a user's median location privacy by up to 62%. Differential privacy perspectives are also discussed. The study aims to quantify the effect of co-location information on location privacy risks.
Activity Context Modeling in Context-Aware... (Editor IJCATR)
The explosion of mobile devices has fuelled the advancement of pervasive computing to provide personal assistance in this information-driven world. Pervasive computing takes advantage of context-aware computing to track, use, and adapt to contextual information. The context that has attracted the attention of many researchers is the activity context. There are six major techniques used to model activity context: key-value, logic-based, ontology-based, object-oriented, mark-up schemes, and graphical. This paper analyses these techniques in detail by describing how each technique is implemented while reviewing their pros and cons. The paper ends with a hybrid modeling method that fits heterogeneous environments while considering the entire modeling process, from data acquisition through the utilization stages. The modeling stages of activity context are data sensation, data abstraction, and reasoning and planning. The work revealed that mark-up schemes and object-oriented techniques are best applicable at the data sensation stage. Key-value and object-oriented techniques fairly support the data abstraction stage, whereas the logic-based and ontology-based techniques are the ideal techniques for the reasoning and planning stage. In a distributed system, mark-up schemes are very useful for data communication over a network, and the graphical technique should be used when saving context data into a database.
The document describes a new algorithm for long-term robust visual tracking of moving objects in video sequences. The algorithm aims to overcome challenges such as geometric deformations, partial or total occlusions, and recovery after the target leaves the field of vision. It does not rely on a probabilistic process or require prior detection data. Experimental results on difficult video sequences demonstrate advantages over recent trackers. The algorithm can be used in applications like video surveillance, active vision, and industrial visual servoing.
This document discusses using data mining classification approaches and genetic algorithms to predict student grades in an online educational system. It proposes using student data like homework submissions and web usage to classify them and predict their final grades. A genetic algorithm is used to optimize combinations of different classifiers like k-nearest neighbor and k-means on a dataset from an online learning system called LON-CAPA. The performance of several classification algorithms is evaluated to predict student grades based on their online activities. Sample student and faculty datasets are also presented.
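A genetic search over classifier-combination weights, in the spirit of (but not taken from) the described system, might look like the minimal numpy sketch below. The population size, mutation scale, and truncation-selection scheme are all illustrative choices:

```python
import numpy as np

def ga_weights(probs, y, pop=20, gens=30, seed=1):
    """Evolve per-classifier weights that maximize weighted-vote accuracy.

    probs: array (n_classifiers, n_samples, n_classes) of class
    probabilities predicted by the individual classifiers; y: true labels.
    """
    rng = np.random.default_rng(seed)
    n_clf = probs.shape[0]
    population = rng.random((pop, n_clf))

    def fitness(w):
        combined = np.tensordot(w, probs, axes=1)   # weighted sum of probabilities
        return float((combined.argmax(axis=1) == y).mean())

    for _ in range(gens):
        scores = np.array([fitness(w) for w in population])
        parents = population[scores.argsort()[::-1][: pop // 2]]  # truncation selection
        # crossover: average two random parents, then apply Gaussian mutation
        idx = rng.integers(0, len(parents), (pop, 2))
        children = (parents[idx[:, 0]] + parents[idx[:, 1]]) / 2
        children += rng.normal(0, 0.1, children.shape)
        population = np.clip(children, 0, None)
    best = max(population, key=fitness)
    return best / best.sum()
```

With one always-correct and one always-wrong classifier, the search quickly concentrates weight on the reliable one.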
Adaptive Privacy Policy Prediction of User Uploaded Images on Content sharing... (IJERA Editor)
Maintaining privacy on image-sharing social sites has become a major problem, as demonstrated by a recent wave of publicized incidents in which users inadvertently shared personal information. In light of these incidents, the need for tools that help users control access to their shared content is apparent. Toward addressing this need, an Adaptive Privacy Policy Prediction (A3P) system helps users compose privacy settings for their images. The solution relies on an image classification framework that groups images into categories likely to be associated with similar policies, and on a policy prediction algorithm that automatically generates a policy for each newly uploaded image, also according to the user's social features. Image sharing takes place both among previously established groups of known people or social circles and, increasingly, with people outside a user's social circles for purposes of social discovery, to help users identify new peers and learn about peers' interests and social surroundings. Sharing images within online content-sharing sites may therefore quickly lead to unwanted disclosure: the aggregated information can result in unexpected exposure of one's social environment and lead to abuse of one's personal information.
IEEE Projects 2013 For ME CSE Seabirds (Trichy, Thanjavur, Karur, Perambalur) (SBGC)
A survey on location based search using spatial inverted index method (eSAT Journals)
Abstract: Conventional spatial queries, such as nearest neighbor retrieval and range search, involve conditions only on objects' geometric properties. Today, however, many modern applications support a new kind of query that aims to find objects satisfying both a spatial condition and a condition on their associated text. For example, instead of considering all hotels, a nearest neighbor query could ask for the hotel that is nearest among those offering services such as a pool and internet at the same time. For this kind of query, a variant of the inverted index that is effective for multidimensional points is employed: an R-tree is built on each inverted list, and a minimum-bounding method is used so that nearest neighbor queries with keywords can be answered in real time. Keywords: Spatial database, nearest neighbor search, spatial index, keyword search.
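A brute-force stand-in for such a spatial-keyword query shows the semantics the index accelerates. There is no R-tree or inverted list here, and the hotel data is invented for illustration:

```python
import math

# toy data set standing in for an indexed collection of spatial objects
hotels = [
    {"name": "A", "loc": (0.0, 0.0), "tags": {"pool"}},
    {"name": "B", "loc": (1.0, 1.0), "tags": {"pool", "internet"}},
    {"name": "C", "loc": (0.2, 0.1), "tags": {"internet"}},
]

def nn_with_keywords(query_loc, keywords, objects):
    """Return the nearest object whose tag set covers all query keywords."""
    qualifying = [o for o in objects if keywords <= o["tags"]]
    return min(qualifying,
               key=lambda o: math.dist(query_loc, o["loc"]),
               default=None)
```

An IR-tree-style index answers the same question without scanning every object, by pruning subtrees whose bounding boxes are too far away or whose keyword summaries cannot match.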
The document proposes a new feature descriptor called Local Bit-Plane Wavelet Pattern (LBWP) to improve content-based retrieval of biomedical images like CT and MRI scans. LBWP encodes relationships between pixel intensities in different bit planes and applies a wavelet function, capturing more fine-grained image details than prior methods. Evaluation on a dataset from The Cancer Imaging Archive showed LBWP outperformed existing approaches like Local Wavelet Pattern with higher average retrieval precision, rate, and F-score.
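The bit-plane decomposition that LBWP-style descriptors start from can be shown in a few lines of numpy. This is only the generic bit-plane split, not the full LBWP encoding:

```python
import numpy as np

def bit_planes(img, bits=8):
    """Split an unsigned-integer image into its binary bit planes.

    Returns an array of shape (bits, H, W); plane b holds bit b of
    every pixel, with plane 0 being the least significant bit.
    """
    img = np.asarray(img, dtype=np.uint8)
    return np.stack([(img >> b) & 1 for b in range(bits)])
```

Summing the planes back with their place values reconstructs the original pixel, which confirms the split is lossless.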
With the surge of modern research focus toward pervasive computing, many techniques and challenges need to be addressed in order to effectively create smart spaces and achieve miniaturization. In the process of scaling down to compact devices, the real issues to ponder are the information retrieval challenges. In this work, we discuss the aspects of multimedia that make information access challenging. An example pattern recognition scenario is presented, and the mathematical techniques that can be used to model uncertainty are also presented, toward developing a system that can sense, compute, and communicate in a way that makes human life easier, with smart objects assisting from the surroundings.
Information systems and databases are vital and critical for decision making in the arenas of business, enterprise, governance, and beyond. Furthermore, information systems are designed to reach the maximum number of people for interaction with the system. This intent of system usage appears resourceful, but identifying the reliable users of a system has challenged technologists.
Most applications use a relational database management system (RDBMS) for database support, but through intelligent exploitation of vulnerabilities in the Structured Query Language (SQL) of the RDBMS, dishonest people can manipulate or process the data to reduce or damage its business value.
This paper attempts to endow a relational database system with security through the inclusion of a newly designed semiotic for the keyword SECURE in the lexicon of SQL. The semiotic is then articulated as a relational algebraic expression for execution by the current query engine.
The company is growing rapidly and needs to hire more employees to keep up with demand. New offices will be opened in other cities to support the growth. A long-term plan is being developed to ensure the company continues to grow sustainably over the coming years.
Best Fotos is a company that offers photography services. It specializes in event photography such as weddings, graduations, and parties. It has been in the market for more than 6 years, providing excellent photography services to its clients.
Shopping Recife ran an educational campaign to encourage customers to clean their tables after meals, using illustrations by the artist Rafa Mattos preaching love and respect for others. The campaign was successful, with more than 50% of tables left clean afterwards, compared with less than 10% before.
Esposende created a Facebook campaign for Mother's Day in which users could create personalized tribute videos for their mothers. More than 11 thousand videos were created and viewed by around 100 thousand people. An interactive banner on news sites was also successful, with 16 thousand users playing it.
No single language can solve all the problems faced when developing applications. Each language has its own semantics and, without doubt, its own application. For some years now, the Java platform has provided support for numerous languages, many of them created specifically for the platform and others brought in and adapted so they can exploit the strengths, tools, and libraries that have been part of the Java ecosystem for many years.
In this talk we will show some of the most representative and most widely used languages on the Java platform: Jython, JRuby, Scala, and Groovy. We will look at a bit of their history and how they were integrated into the platform, as well as some success stories of their use. We will also review some tools available for working with them.
By the end of the talk, attendees will know that there are many alternative programming languages on the Java platform for developing highly scalable applications, along with some of the market trends.
Harry the cat presents possible bilateral nephrolithiasis and mild bilateral interstitial nephritis. A protein- and sodium-restricted diet is recommended, along with medication to support kidney function, antioxidants, and analgesics. Additional blood tests and ultrasound monitoring are required to determine the course of treatment, which could include surgery if the condition progresses.
Hello!
I thought a lot before sharing this, since the images are very strong and I was a bit worried about what people would say or about upsetting someone, but after a while I decided to do it, because we are often blind to the very sad reality this world is living through. Some will say, "What do we gain by traumatizing ourselves with these images if we can do nothing to stop this?" I say this because I thought it myself, but then I remembered the testimony of a person who was in the presence of God: the Lord showed him a scene from his life in which he saw news of this kind on TV and felt sad and distressed, but did nothing, and God told him, "Son, that sadness that wrung your heart when you saw this news was Me, crying out to you that you could do something about it! Prayer, sacrifice, fasting, offerings, charity! There are so many things that can help in this situation!" Remembering this, I felt I could not let it pass, and I dared to upload this presentation made by my husband, since perhaps more than one of us may be moved and decide to do something. It is not about casting blame, stirring controversy, or taking any side according to history or religion, but about what I mentioned at the beginning, because I believe that together we could make some difference; it is up to you to share it. Although these images abound on the web, I apologize if I have offended anyone with this. And if, on the contrary, one more prayer rises to heaven, I will be more than satisfied. Thank you.
The document provides instructions for making a vegetable garden, including the necessary materials such as water, sun, seeds, and soil. It explains that tools such as a shovel and a rake are needed to dig and turn the soil. It also defines the gardener's trade as that of the person who works planting and sowing in the garden or vegetable plot. It then details the 7 steps for making the garden, which include selecting seeds, making holes in the pots, and adding soil and seeds.
The document discusses human resource management, including methods for evaluating employee performance, compensation and benefits, occupational health and safety, and employment termination. It provides practical guidance for companies in managing their employees.
O documento promove o site www.animalog.com.br, que oferece animes e mangás online gratuitamente. Ele lista o nome do mangá "Death Note" e o capítulo 14 especificamente.
The first civilizations arose around 6000 BC in the Near East and in southern and eastern Asia, characterized by the use of metals, the division of labor, the growth of urban cities, and the organization of efficient governments. The constituent elements of a civilization include a geographic location around rivers that provide resources, a large population living in big cities, and a hierarchical social organization with different classes and a division of labor.
The document promotes the website Animalog.com.br, which offers anime and manga online for free. It also lists a specific manga available on the site, Death Note chapter 05.
The document promotes the website www.animalog.com.br, which offers anime and manga online for free. It lists the manga "Death Note" and chapter 15 as available on the site.
The document describes a physical education lesson on the shot put. Students split into groups to record themselves throwing a medicine ball and analyze their technique on video, with the goal of learning the basic shot put technique and using different camera angles. The lesson concludes with an assessment based on participation and the video analysis.
This document describes a Relaxed Context-Aware Machine Learning Middleware (RCAMM) for Android that was developed by students and a professor at V.E.S. Institute of Technology in Mumbai, India. RCAMM collects and stores context information from mobile devices and uses machine learning to provide personalized suggestions to users. It aims to reduce redundancy for developers by handling context collection and storage in a middleware, allowing apps to simply consume context data. The middleware uses a hybrid context model with JSON encoding and a relational database to store historical context values from devices.
Abstract: Smartphones have become very common, with applications in all fields, from private use to government sectors such as defence and the military. They have become common because of the huge number of available applications. Smartphones carry a large amount of personal material, such as the user's private details, contacts, messages, emails, and credit card information. This sensitive material needs security and privacy. The project proposes securing private information such as contacts, browsing history, cookies, and credit card information present in the user profile of a personalized search engine. Security is provided by locking and wiping the phone's data from a remote server in the event of the phone being lost or stolen. To achieve this, a message authentication code (MAC) technique is used: a short piece of information proves the authenticity and integrity of a message. Integrity guards against accidental and intentional message changes, and authenticity establishes the message's origin; this stops malicious users from launching denial-of-service attacks. A Secure Hash Algorithm is used to produce a hash value, converting the plaintext to an encoded message and outputting a MAC. The MAC defends both the message's data integrity and its authenticity by allowing receivers who also possess the secret key to detect any changes to the message content. Keywords: Clickthrough data, abstraction, location search, mobile search engine, ontology, user profiling.
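The MAC scheme the abstract describes (a SHA-based tag verified with a shared secret key) is conventionally realized as an HMAC. The snippet below is a generic standard-library sketch, not the project's actual protocol:

```python
import hmac
import hashlib

def make_mac(key: bytes, message: bytes) -> str:
    """Compute an HMAC-SHA-256 tag over the message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify_mac(key: bytes, message: bytes, tag: str) -> bool:
    """Constant-time comparison defeats timing attacks on the tag check."""
    return hmac.compare_digest(make_mac(key, message), tag)
```

A receiver holding the same key recomputes the tag and compares; any change to the message or the tag makes verification fail.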
Title: Private Mobile Search Engine Using RSVM Training
Author: Ms. Mayuri A. Auti, Dr. S. V. Gumaste
ISSN 2350-1022
International Journal of Recent Research in Mathematics Computer Science and Information Technology
Paper Publications
This document describes a place recommendation system developed as an Android application. The application uses a user's current GPS location and collaborative filtering to recommend nearby places of interest based on the user's preferences and previous check-ins. It analyzes similar users and places they have both visited to find places the active user may like. The application recommendations are generated using a self-maintained database of places that can be updated over time. The system aims to reduce confusion for users by tailored recommendations based on their behaviors and location.
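The collaborative-filtering step can be illustrated with a tiny user-based recommender. The check-in data and the choice of Jaccard similarity are illustrative, not taken from the application:

```python
# toy check-in data: user -> set of visited places
checkins = {
    "alice": {"cafe", "park", "museum"},
    "bob":   {"cafe", "park", "stadium"},
    "carol": {"gym"},
}

def recommend(user, data):
    """Rank unseen places by similarity-weighted votes of the other users."""
    mine = data[user]
    scores = {}
    for other, places in data.items():
        if other == user:
            continue
        sim = len(mine & places) / len(mine | places)   # Jaccard similarity
        if sim == 0:
            continue                                     # ignore dissimilar users
        for place in places - mine:                      # only places not yet visited
            scores[place] = scores.get(place, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)
```

A real system would additionally filter the ranked list by distance from the user's current GPS location.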
Human Activity Recognition Using Smartphone (IRJET Journal)
The document discusses human activity recognition using smartphone sensors. It proposes using a CNN-LSTM model to classify activities like walking, running, and sitting based on accelerometer and gyroscope sensor data from a smartphone. The CNN extracts features from the sensor data, while the LSTM recognizes sequences of activities over time. The model is implemented in an Android application that recognizes activities in real-time and also counts steps, distance, and calories burned. The application uses built-in smartphone sensors like accelerometer, gyroscope, and pedometer to recognize activities affordably and with high availability without external devices. The CNN-LSTM model achieves accurate activity recognition compared to other machine learning techniques.
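Before a CNN-LSTM sees the sensor stream, it is typically segmented into fixed-size overlapping windows. The window and step sizes below are common defaults for accelerometer data, not values taken from the paper:

```python
import numpy as np

def sliding_windows(signal, size=128, step=64):
    """Segment a (T, channels) sensor stream into overlapping windows.

    Returns an array of shape (n_windows, size, channels), the usual
    input layout for a CNN-LSTM activity classifier.
    """
    starts = range(0, len(signal) - size + 1, step)
    return np.stack([signal[s:s + size] for s in starts])
```

With a 50% overlap (step = size / 2), each sample contributes to two windows, which smooths the predicted activity sequence.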
Text classification based on gated recurrent unit combines with support vecto... (IJECEIAES)
As the amount of unstructured text data that humanity produces grows, and ever more text accumulates on the Internet, intelligent techniques are required to process it and extract different types of knowledge from it. Gated recurrent units (GRU) and support vector machines (SVM) have been successfully applied to natural language processing (NLP) systems with comparable, remarkable results. GRU networks perform well in sequential learning tasks and overcome the vanishing- and exploding-gradient problems of standard recurrent neural networks (RNNs) when capturing long-term dependencies. In this paper, we propose a text classification model that improves on this norm by using a linear support vector machine (SVM) as the replacement for Softmax in the final output layer of a GRU model. Furthermore, the cross-entropy function is replaced with a margin-based function. Empirical results show that the proposed GRU-SVM model achieves comparatively better results than the baseline approaches BLSTM-C and DABN.
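Replacing Softmax and cross-entropy with an SVM output layer amounts to training the final linear layer with a margin-based loss. The following one-vs-rest squared hinge is one common formulation, shown as a sketch rather than the paper's exact loss:

```python
import numpy as np

def squared_hinge(scores, y):
    """Multiclass squared hinge (L2-SVM) loss on raw output scores.

    scores: (n, k) raw outputs of the final linear layer; y: (n,)
    integer class labels. Labels are cast to +/-1 one-vs-rest targets,
    as when an L2-SVM layer replaces Softmax.
    """
    n, k = scores.shape
    t = -np.ones((n, k))
    t[np.arange(n), y] = 1.0                       # +1 for the true class
    margins = np.maximum(0.0, 1.0 - t * scores)    # per-class hinge margins
    return float(np.mean(np.sum(margins ** 2, axis=1)))
```

Unlike cross-entropy, the loss is exactly zero once every score clears the margin, so confident correct predictions stop contributing gradient.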
MODELING THE ADAPTION RULE IN CONTEXT-AWARE SYSTEMS (ijasuc)
Context awareness is increasingly gaining applicability in interactive ubiquitous mobile computing systems. Each context-aware application has its own set of behaviors for reacting to context modifications. This paper is concerned with context modeling and the development methodology for context-aware systems. We propose a rule-based approach and use an adaption tree to model the adaption rules of context-aware systems. We illustrate the idea with an arithmetic game application.
This document discusses improving the usability of GIS (geographic information system) services by applying semantic ontology. It aims to develop a more user-friendly system that can understand user questions in natural language and provide accurate answers. The key aspects discussed are preprocessing user questions, using an ontology tool like Protégé to map questions to concepts and relationships in an OWL file, and then extracting answers from GIS data and services. The goal is to allow users to easily access and understand geographic information without extensive technical knowledge of GIS systems.
Study and Implementation of a Personalized Mobile Search Engine for Secure Se... (IRJET Journal)
This document describes a proposed personalized mobile search engine (PMSE) that learns users' content and location preferences from clickthrough data and GPS locations to provide personalized search results. The PMSE uses an ontology-based user profile learned from clicks and locations, without requiring extra user effort. It has a client-server architecture where the client handles interactions and stores privacy-sensitive click data, while the server performs computationally intensive tasks like concept extraction, training, and result reranking. The PMSE aims to improve search personalization by separately considering content and location concepts based on user interests.
This document summarizes a research paper on developing a feature-based product recommendation system. It begins by introducing recommender systems and their importance for e-commerce. It then describes how the proposed system takes basic product descriptions as input, recognizes features using association rule mining and k-nearest neighbor algorithms, and outputs recommended additional features to improve the product profile. The paper evaluates the system's performance on recommending antivirus software features.
Vertical intent prediction approach based on Doc2vec and convolutional neural... (IJECEIAES)
Vertical selection is the task of selecting the verticals most relevant to a given query in order to improve the diversity and quality of web search results. This task requires not only predicting relevant verticals but also ensuring that these verticals are the ones the user expects to be relevant for his particular information need. Most existing works have focused on using traditional machine learning techniques to combine multiple types of features for selecting several relevant verticals. Although these techniques are very efficient, handling vertical selection with high accuracy is still a challenging research task. In this paper, we propose an approach for improving vertical selection in order to satisfy the user's vertical intent and reduce the user's browsing time and effort. First, it generates query embedding vectors using the doc2vec algorithm, which preserves the syntactic and semantic information within each query. Second, these vectors are used as input to a convolutional neural network model, enriching the representation of the query with multiple levels of abstraction, including rich semantic information, and then creating a global summarization of the query features. We demonstrate the effectiveness of our approach through comprehensive experimentation using various datasets. Our experimental findings show that our system achieves significant accuracy and makes accurate predictions on new, unseen data.
A survey on context aware system & intelligent middleware (IOSR Journals)
Abstract: A context-aware system, or sentient system, is one of the most profound concepts in ubiquitous computing. In cloud systems and in distributed computing, building a context-aware system is a difficult task, and the programmer should use a more generic programming framework. On the basis of a layered conceptual design, we introduce context-aware systems together with context-aware middleware. On the basis of the presented system, we analyze different approaches to context-aware computing. There are many components in a distributed system, and these components should interact with each other because many applications require it. Plenty of context middleware has been built, but it provides only partial solutions. In this paper we give an analysis of different middleware and a comprehensive application of it in context caching.
Keywords: Context-aware system, Context-aware middleware, Context cache
A Reconfigurable Component-Based Problem Solving Environment (Sheila Sinclair)
This technical report describes a reconfigurable component-based problem solving environment called DISCWorld. The key features discussed are:
1) DISCWorld uses a data flow model represented as directed acyclic graphs (DAGs) of operators to integrate distributed computing components across networks.
2) It supports both long running simulations and parameter search applications by allowing complex processing requests to be composed graphically or through scripting and executed on heterogeneous platforms.
3) Operators can be simple "pure Java" implementations or wrappers to fast platform-specific implementations, and some operators may represent sub-graphs that can be reconfigured to run across multiple servers for faster execution.
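Executing such a DAG of operators reduces to a topological walk over the dependency graph. This generic sketch uses Python's standard-library `graphlib`, not DISCWorld's engine; the operator and dependency names are invented:

```python
from graphlib import TopologicalSorter

def run_dag(operators, deps, inputs):
    """Execute a DAG of operators in dependency order.

    operators: name -> callable taking a dict of upstream results
    deps: name -> set of upstream node names
    inputs: externally supplied leaf values (name -> value)
    """
    results = dict(inputs)
    for name in TopologicalSorter(deps).static_order():
        if name in results:          # an externally supplied input node
            continue
        upstream = {d: results[d] for d in deps.get(name, ())}
        results[name] = operators[name](upstream)
    return results
```

In a distributed setting, each operator call would instead be dispatched to whichever server hosts the fastest implementation, but the ordering constraint is the same.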
This document describes a proposed multi-agent system for searching distributed data. The system uses three types of agents - coordinator agents, search agents, and local agents. Coordinator agents coordinate the retrieval process by creating search agents and collecting results. Search agents carry queries to nodes containing relevant databases. Local agents reside at nodes with databases, accept queries from search agents, search the databases for answers, and return results to the search agents. The system aims to retrieve data from distributed databases with minimum network bandwidth consumption using this multi-agent approach.
This document discusses domain specific search and ranking model adaptation using a binary classifier. It proposes using a ranking adaptation support vector machine (RA-SVM) to adapt an existing broad-based ranking model to a new targeted domain with limited labeled data. The RA-SVM utilizes predictions from the existing model as well as domain specific features to build a robust ranking model for the target domain. It evaluates the adapted model using precision, recall, average precision and Kendall's Tau metrics. The goal is to minimize the amount of labeled data needed for the target domain while still achieving good performance.
This document summarizes a research paper that proposes a profile management system for Android mobile devices based on location. The system uses Global Positioning System (GPS) localization to detect when the mobile device reaches predefined locations and automatically changes the user's profile, such as switching to silent mode. When the device detects it is near a saved location, it calculates the distance to that location and triggers a notification and profile change. This allows profiles to be managed automatically based on the user's location rather than manually or based only on time. The paper outlines the existing solutions, proposed system design, and modules for creating profiles, accessing GPS location, and changing wallpapers and ringtones.
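The distance check that triggers the profile change can be sketched with the haversine formula. This is only an illustration of the idea, not the paper's code; the coordinates, radius, and profile names are invented:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two GPS fixes.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def profile_for(fix, saved_places, radius_m=200):
    # Return the profile of the first saved place within radius_m, else "normal".
    for place in saved_places:
        if haversine_m(fix[0], fix[1], place["lat"], place["lon"]) <= radius_m:
            return place["profile"]
    return "normal"

# Hypothetical saved location (e.g., a library) mapped to silent mode.
places = [{"lat": 10.7870, "lon": 79.1378, "profile": "silent"}]
print(profile_for((10.7871, 79.1378), places))  # within ~11 m -> "silent"
```

On a real device this check would run on each GPS fix, switching the ringer profile when the threshold is crossed.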
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Cloud Architecture for Search Engine Application
A. L. Saranya and B. Senthil Murugan
Efficient Numerical Integration and Table Lookup Techniques for Real Time Flight Simulation
P. Lathasree and Abhay A. Pashilkar
A Review of Literature on Cloud Brokerage Services
Dr. J. Akilandeswari and C. Sushanth
Improving Recommendation Quality with Enhanced Correlation Similarity in Modified Weighted Sum
Khin Nila Win and Thiri Haymar Kyaw
Bounded Area Estimation of Internet Traffic Share Curve
Dr. Sharad Gangele, Kapil Verma and Dr. Diwakar Shukla
Information Systems Projects for Sustainable Development and Social Change
James K. Ho and Isha Shah
Software Architectural Pattern to Improve the Performance and Reliability of a Business Application using the Model View Controller
G. Manjula and Dr. G. Mahadevan
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer's life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability at the expense of security. This best practices guide outlines steps users can take to better protect their personal devices and information.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only the fact that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the case of the XZ backdoor have much more in common than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role that contributors play in a sustainable open source community.
BIO: Advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several events, migrations, and training activities related to LibreOffice. She previously worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and for Geeko she cultivates her curiosity about astronomy (which is where her nickname deneb_alpha comes from).
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
A. Praveena et al., Int. Journal of Engineering Research and Applications, www.ijera.com
ISSN: 2248-9622, Vol. 4, Issue 5 (Version 4), May 2014, pp. 11-14
Android Based Effective Search Engine Retrieval System Using Ontology
A. Praveena [1], R. Kalaiselvan [2], R. Anitha Rajalakshmi [3]
[1] M.E. Computer Science and Engineering, Parisutham Institute of Technology and Science, Affiliated to Anna University, Chennai, India
[2] Assistant Professor, Department of Computer Science and Engineering, Parisutham Institute of Technology and Science, Affiliated to Anna University, Chennai, India
[3] M.E. Computer Science and Engineering, Parisutham Institute of Technology and Science, Affiliated to Anna University, Chennai, India
ABSTRACT
In the proposed model, users search for a query based on either a specified area or the user's location; the server retrieves all the data to the user's PC, where ontology is applied. After applying the ontology, the results are classified into two types of concepts, location based or content based. The user's PC displays all the relevant keywords on the user's mobile, so that the user can select the exact requirement. The client collects and stores clickthrough data locally to protect privacy, whereas tasks such as concept extraction, training, and reranking are performed at the search engine server. Ranking occurs, and finally the exactly mapped information is delivered to the user's mobile. The privacy problem is addressed by restricting the information in the user profile exposed to the search engine server with two privacy parameters. Finally, the UDD algorithm is applied to eliminate duplicate records, which helps to minimize the number of URLs listed to the user.
INDEX TERMS- Clickthrough data, concept based, location based search, mobile search engine, ontology, UDD.
I. INTRODUCTION
Android is made up of several essential and interdependent parts: a hardware reference design that defines the capabilities required for a mobile device to support the software stack; a Linux operating system kernel that provides a low-level interface to the hardware, memory management, and process control, all optimized for mobile devices; open-source libraries for application development, including SQLite, WebKit, OpenGL, and a media framework; a runtime used to execute and host Android applications, including the Dalvik virtual machine and the core libraries that provide Android-specific functionality (the runtime is designed to be small and efficient for use on mobile devices); an application framework that agnostically exposes system services to the application layer, including the window manager, location manager, content providers, telephony, and sensors; a user interface framework used to host and launch applications; and preinstalled applications shipped as part of the stack.
A major problem in mobile search is that the interactions between users and search engines are limited by the small form factors of mobile devices. As a result, mobile users tend to submit shorter, and hence more ambiguous, queries compared to their web search counterparts. In order to return highly relevant results to users, mobile search engines must be able to profile the users' interests and personalize the search results according to the users' profiles. Recognizing the need for different types of concepts, this project presents a personalized mobile search engine (PMSE) which characterizes different types of concepts in different ontologies. In particular, knowing the importance of location information in mobile search, we separated the concepts into location concepts and content concepts. For example, a user who is planning to visit Thanjavur may submit the query "hotel" and click on the search results about hotels in Thanjavur. From the clickthroughs of the query "hotel," PMSE can learn the user's content preferences (e.g., "room rate" and "facilities") and location preferences ("Thanjavur"). Consequently, PMSE will favor results that are concerned with hotel information in Thanjavur for future queries on "hotel." The inclusion of location preferences offers PMSE an additional dimension for capturing a user's interests and an opportunity to improve search quality for users. To incorporate context information revealed by user mobility, we also take into account the visited physical locations of users in PMSE. Since this information can be conveniently obtained by GPS devices, it is referred to as GPS locations. GPS locations play a vital role in mobile web search.
This project exploits the unique features of content and location concepts, and delivers a coherent
scheme using a client-server architecture to integrate them into a uniform solution for the mobile environment. The proposed personalized mobile search engine is an advanced approach to personalizing web search results. By extracting content and location concepts for user profiling, it utilizes both the content and location preferences to personalize search results for a user. PMSE incorporates a user's physical locations into the personalization process. We conduct experiments to study the effect of a user's GPS locations on personalization. The results show that GPS locations help increase retrieval effectiveness for location queries (i.e., queries that retrieve lots of location information).
Privacy preservation is a challenging issue in PMSE, where users send their user profiles along with queries to the PMSE server to obtain personalized search results. PMSE addresses the privacy issue by allowing users to control their privacy levels with two privacy parameters, minDistance and expRatio. Experimental results show that our proposal facilitates smooth privacy-preserving control while maintaining good ranking quality. It is also used mainly to decrease the duplication of results for user queries.
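The idea of exposing only part of the profile can be illustrated with a toy filter. Note that this is a simplified reading of the two parameters, not PMSE's actual definitions: here minDistance is read as a minimum concept depth in the ontology and expRatio as the fraction of eligible concepts exposed, and the concept names and weights are invented:

```python
def expose_profile(concepts, min_distance, exp_ratio):
    # Keep only concepts at least min_distance deep in the ontology
    # (shallow concepts are withheld), then expose only the top
    # exp_ratio fraction of the remainder, ordered by weight.
    eligible = [c for c in concepts if c["depth"] >= min_distance]
    eligible.sort(key=lambda c: c["weight"], reverse=True)
    k = int(len(eligible) * exp_ratio)
    return [c["name"] for c in eligible[:k]]

# Hypothetical user-profile concepts with ontology depth and click weight.
concepts = [
    {"name": "hotel", "depth": 1, "weight": 0.9},
    {"name": "room rate", "depth": 2, "weight": 0.7},
    {"name": "spa", "depth": 3, "weight": 0.2},
]
print(expose_profile(concepts, min_distance=2, exp_ratio=0.5))  # ['room rate']
```

Raising min_distance or lowering exp_ratio shrinks what the server sees, trading ranking quality for privacy, which is the control the paper describes.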
II. PROPOSED SYSTEM
In the proposed model, users search the net for a query, based on either a specified area or the user's location; the server retrieves all the data to the user's PC, where the ontology technique is applied. The user's PC shows all the relevant keywords on the user's mobile, so that the user can select the particular requirement. Ranking occurs, and finally the exactly mapped data is delivered to the user's mobile. The UDD algorithm is applied to eliminate duplicate entries, and we also obtain feedback from the mobile, which helps to rank the data. In this way we can provide an effective search mechanism.
A) Algorithms and Techniques
1) ONTOLOGY:
An ontology represents information as a set of concepts within a domain and the relations between those concepts. It can be used to model a domain and support reasoning about concepts and the study of their relationships. If the user provides the query "hotel," the system will automatically gather the related answers for the hotel, such as hotel location, reservation, facilities, and restaurant. An example of an ontology is given below.
Fig 1: Ontology technique
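The expansion of a query like "hotel" into related content and location concepts can be sketched as a simple lookup over a hand-built ontology. The vocabulary below is illustrative (chosen to match the paper's hotel example), not an actual ontology from the system:

```python
# Toy ontology relating "hotel" to content and location concepts.
ontology = {
    "hotel": {
        "content": ["reservation", "room rate", "facilities", "restaurant"],
        "location": ["Thanjavur", "Chennai"],
    }
}

def expand(query):
    # Return the related concepts for a query, split by concept type;
    # unknown queries expand to nothing.
    return ontology.get(query, {"content": [], "location": []})

related = expand("hotel")
print(related["content"])   # content concepts for "hotel"
print(related["location"])  # location concepts
```

In the full system this split drives the two separate ontologies (content and location) that PMSE maintains for each user.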
2) RSVM (REDUCED SUPPORT VECTOR MACHINE)
An algorithm is proposed which generates a nonlinear kernel-based separating surface that requires as little as 1% of a large dataset for its explicit evaluation. To generate this nonlinear surface, the entire dataset is used as a constraint in an optimization problem with very few variables, corresponding to the 1% of the data kept. The remainder of the data can be thrown away after solving the optimization problem. This is achieved by making use of a rectangular kernel that greatly decreases the size of the quadratic program to be solved and simplifies the characterization of the nonlinear separating surface. Here, the m rows of A represent the original m data points, while a greatly reduced number of rows represent the retained data points. Computational results show that test set correctness for the reduced support vector machine (RSVM), with a nonlinear separating surface that depends on a small randomly selected portion of the dataset, is better than that of a conventional support vector machine (SVM) with a nonlinear surface that depends explicitly on the entire dataset, and much better than a conventional SVM using a small random sample of the data. Computational time, as well as memory usage, is much smaller for RSVM than for a conventional SVM using the entire dataset. Support vector machines have come to play a very dominant role in data classification via a kernel-based linear or nonlinear classifier. Two major problems that challenge large data classification by a nonlinear kernel are the sheer size of the mathematical programming problem that needs to be solved and the time it takes to solve, even for moderately sized datasets.
The dependence of the nonlinear separating surface on the entire dataset creates heavy storage problems that prevent the use of nonlinear kernels for anything but a small dataset. For example, even for a thousand-point dataset, one is confronted by a fully dense quadratic program with 1001 variables and 1000 constraints, resulting in a constraint matrix with over a million entries. In contrast, our proposed approach would typically reduce the problem to one with 101 variables and 1000 constraints, which is readily solved by a smoothing technique as an unconstrained 101-dimensional minimization problem. This generates a nonlinear separating surface which depends on a hundred data points only, instead of the conventional nonlinear kernel surface which would depend on the entire 1000 points. In [24], an approximate kernel has been proposed which is based on an eigenvalue decomposition of a randomly selected subset of the training set. However, unlike our approach, the whole kernel matrix is generated within an iterative linear equation solution procedure. We note that our data-reduction method should work just as well for 1-norm based support vector machines, chunking methods, and Platt's sequential minimal optimization (SMO). We briefly outline the contents of the paper now. In this section we describe kernel-based classification for linear and nonlinear kernels. In Section 3 we outline our reduced SVM approach. Section 4 gives computational and graphical results that display the effectiveness and power of RSVM.
Ranking SVM is employed in our personalization approach to learn the user's preferences. For a given query, a set of content concepts and a set of location concepts are extracted from the search results as the document features. Since each document can be represented by a feature vector, it can be treated as a point in the feature space. Using clickthrough data as the input, RSVM aims at finding a linear ranking function which holds for as many document preference pairs as possible.
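Learning a linear ranking function from document preference pairs can be sketched as follows. A perceptron-style update on difference vectors stands in here for the actual SVM optimization, and the two-dimensional feature vectors are invented, so this is only an illustration of the pairwise-ranking idea:

```python
def train_ranker(preference_pairs, dim, epochs=50, lr=0.1):
    # Learn weights w so that score(better) > score(worse) for each pair.
    w = [0.0] * dim
    for _ in range(epochs):
        for better, worse in preference_pairs:
            diff = [b - c for b, c in zip(better, worse)]
            margin = sum(wi * di for wi, di in zip(w, diff))
            if margin <= 0:  # misordered pair: nudge w toward diff
                w = [wi + lr * di for wi, di in zip(w, diff)]
    return w

def score(w, x):
    # Linear ranking function: dot product of weights and features.
    return sum(wi * xi for wi, xi in zip(w, x))

# Clicked documents (first of each pair) are preferred over skipped ones.
pairs = [([1.0, 0.2], [0.1, 0.9]), ([0.8, 0.1], [0.2, 0.7])]
w = train_ranker(pairs, dim=2)
print(score(w, [1.0, 0.2]) > score(w, [0.1, 0.9]))  # True
```

The learned weights then rank unseen documents for the same user, which is how clickthrough data personalizes future result lists.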
3) UDD (UNSUPERVISED DUPLICATE DETECTION) ALGORITHM
Data of high quality is a prerequisite for the success of data mining. Data cleaning is vital for improving data quality in data integration. Detecting and removing duplicate records is a key stage of data cleaning, and is also a vital problem for improving data quality. Duplicate records are records that represent the same object in the real world but are not recognized as such by the DBMS due to different data formats or misspellings. The purpose of duplicate record detection is to match, merge, and remove the duplicated database records that denote the same entity but have dissimilar data representations, and to produce exact data to the user without replication, as the user wishes. It is mainly used in search engine systems.
Duplicate record detection has received much attention and research. Rule-based techniques manually craft rules according to the domain-specific application and users. To achieve high accuracy, a large amount of human effort is required. To overcome the limitations of rules, some methods based on supervised learning have been offered, which detect duplicate records by learning hidden rules and knowledge from data samples. The drawback of this approach is that it is hard to obtain sample data. Unsupervised learning is the answer to this problem, and it has been discussed in related works. M. Elfeky proposed the TAILOR method, which uses unsupervised k-means to produce three groups (matched, possibly matched, and unmatched) and builds a decision tree classifier model from the matched and unmatched clusters. The model is then used to process the remaining records to find all matched record sets.
This method cannot guarantee the precision of the training models for the decision tree, which affects the results of duplicate record detection. Liang Gu proposed a better clustering decision model to detect duplicate records, which adds uncertain regions for users to control the clustering result of matched and unmatched record sets, yielding better results for the three clusters. However, this approach only considers the clustering and does not further combine a classifier for duplicate record recognition.
The point of using the web search engine example is to highlight the need for an algorithm that can handle huge amounts of data and derive a single result set that is most relevant to the user query.
Fig 2: UDD algorithm
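The duplicate-elimination step can be sketched with a simple field-similarity rule: two records are treated as duplicates when their fields are nearly identical strings. This is an illustration of the idea, not UDD's actual matching procedure; the threshold and the sample records are invented:

```python
from difflib import SequenceMatcher

def similarity(r1, r2):
    # Average per-field string similarity between two records (0..1).
    scores = [SequenceMatcher(None, a.lower(), b.lower()).ratio()
              for a, b in zip(r1, r2)]
    return sum(scores) / len(scores)

def dedupe(records, threshold=0.85):
    # Keep a record only if it is not near-identical to one already kept.
    kept = []
    for r in records:
        if all(similarity(r, k) < threshold for k in kept):
            kept.append(r)
    return kept

records = [
    ("Hotel Gnanam", "Thanjavur"),
    ("Hotel Gnanam", "Thanjavur "),   # same entity, stray whitespace
    ("Hotel Oriental Towers", "Thanjavur"),
]
print(len(dedupe(records)))  # 2 distinct hotels survive
```

Applied to a result list, such a filter shrinks the set of URLs shown to the user, which is the role UDD plays in the proposed system.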
B. System Architecture
Fig3: system architecture for proposed system
This is the architecture of a search engine based on the Android system. When the user submits a query, it is sent to the server, which collects the relevant data from the database. The server then poses a clarifying question back to the mobile user. After receiving the user's answer, the system classifies the query as concept based or location based. For concept-based queries, the ontology is applied to the concepts and the matching data are retrieved from the database; the RSVM technique is then applied to the result to train the ranking of answers. Location-based queries are further divided into two types: based on a user-specified location or GPS based. If the user mentions a location, results are produced for that location; otherwise, the system obtains the user's GPS location and returns results for the query accordingly.
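The routing logic described above can be sketched as follows. All names here (`LOCATION_TERMS`, `classify`, `handle_query`) are illustrative assumptions, not identifiers from the paper: a query is labeled concept based or location based, and location queries fall back to the device's GPS fix when the user names no place.

```python
# Toy vocabulary for spotting location intent; a real system would use
# the ontology and trained classifier described in the text.
LOCATION_TERMS = {"near", "in", "restaurant", "hotel", "station"}

def classify(query):
    """Label a query as 'location' or 'concept' based."""
    return "location" if LOCATION_TERMS & set(query.lower().split()) else "concept"

def handle_query(query, user_place=None, gps_fix=None):
    """Route a query: concept queries go to the ontology path;
    location queries use the user-specified place, else the GPS fix."""
    if classify(query) == "concept":
        return ("concept", None)              # ontology lookup + RSVM ranking
    place = user_place if user_place else gps_fix  # GPS fallback
    return ("location", place)

handle_query("hotel near airport", gps_fix=(12.97, 77.59))
```

The two-way fallback mirrors the text: an explicit location in the query wins, and the dragged GPS location is used only when the user mentions none.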
III. CONCLUSION
In this paper, PMSE was proposed to extract and learn a user's content and location preferences from the user's clickthrough data. To adapt to user mobility, we incorporated the user's GPS locations into the personalization process. We observed that GPS locations help to improve retrieval effectiveness, especially for location queries. We also proposed two privacy parameters, min-distance and expRatio, to address privacy issues in PMSE by allowing users to control the amount of personal information exposed to the PMSE server. The privacy parameters enable smooth control of privacy exposure while preserving good ranking quality. For future work, we will investigate methods to exploit regular travel patterns and query patterns from the GPS and clickthrough data to further enhance the personalization effectiveness of PMSE.
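The min-distance parameter can be illustrated with a small sketch. The haversine helper and the `release_location` API below are assumptions for illustration, not PMSE's actual code: the exact GPS fix is released to the server only when it lies farther than min-distance from every location the user marks as sensitive.

```python
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def release_location(fix, sensitive_places, min_distance_km):
    """Return the GPS fix only if it is at least min_distance_km away
    from every sensitive place; otherwise withhold it (return None)."""
    if all(haversine_km(fix, p) >= min_distance_km for p in sensitive_places):
        return fix
    return None  # withheld: too close to a protected location

home = (13.0827, 80.2707)
release_location((13.0830, 80.2710), [home], 1.0)  # a few metres from home -> None
```

A larger min-distance widens the withheld zone around each sensitive place, trading some location-query quality for stronger privacy, which matches the smooth privacy/ranking trade-off described above.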
IV. FUTURE WORK
To adapt to user mobility, the user's GPS locations are merged into the personalization process. We will explore methods to exploit regular travel patterns and query patterns from the GPS and clickthrough data to further increase the personalization efficiency of PMSE. Voice-based recognition of user input, instead of manually typing queries on the mobile device, can further improve the efficiency of user input and make the system more user friendly. The system should also be able to return results while the user is in traveling mode.