This document summarizes two algorithms - MFA and ATRA - for processing top-k spatial preference queries. MFA is a threshold-based algorithm that partitions queries into three features - spatial, preference, and text - and retrieves objects with the highest aggregate scores. ATRA uses a hybrid indexing structure called AIR-tree to more efficiently retrieve only relevant objects without revisiting the same data. The paper then proposes using an R-tree index structure combined with an enhanced branch-and-bound search algorithm to answer preference-based top-k spatial keyword queries by ranking objects based on feature quality in their neighborhoods.
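The scoring idea these algorithms share can be sketched in a few lines. This is a minimal illustration, not either paper's algorithm: the hotel/restaurant data, the fixed radius, the max-quality neighborhood score, and the sum aggregate are all assumptions made for the example.

```python
import heapq
import math

def preference_score(obj, features, radius):
    """Score an object by the best feature quality found in its neighborhood."""
    best = 0.0
    for (fx, fy), quality in features:
        if math.dist(obj, (fx, fy)) <= radius:
            best = max(best, quality)
    return best

def topk_spatial_preference(objects, feature_sets, radius, k):
    """Rank objects by the sum of their per-feature neighborhood scores."""
    scored = []
    for obj in objects:
        agg = sum(preference_score(obj, fs, radius) for fs in feature_sets)
        scored.append((agg, obj))
    return heapq.nlargest(k, scored)

# Example: hotels scored by the quality of nearby restaurants and cafes.
hotels = [(0.0, 0.0), (5.0, 5.0), (9.0, 9.0)]
restaurants = [((0.5, 0.5), 0.9), ((5.2, 5.1), 0.4)]
cafes = [((0.2, 0.1), 0.7), ((8.8, 9.1), 0.95)]
print(topk_spatial_preference(hotels, [restaurants, cafes], radius=1.0, k=2))
```

The threshold and index-based techniques in the papers exist precisely to avoid this exhaustive scan over all objects and features.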
Spatial databases have become increasingly popular in recent years, and there is growing
commercial and research interest in location-based search over spatial databases. Spatial keyword search
has been well studied for years due to its importance to commercial search engines. Specifically, a spatial
keyword query takes a user location and user-supplied keywords as arguments and returns objects that are
spatially and textually relevant to these arguments. Geo-textual indices play an important role in spatial
keyword querying. A number of geo-textual indices have been proposed in recent years, most of which
combine the R-tree (or one of its variants) with an inverted file. This paper proposes a new index structure
that combines a k-d tree with an inverted file for spatial range keyword queries, ranking objects by their
spatial and textual relevance to the query point within a given range.
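As a baseline for the semantics of a spatial range keyword query, the sketch below pairs an inverted file with a brute-force range filter; a real implementation would prune the range check with the proposed k-d tree rather than scan every candidate. All data and names here are illustrative.

```python
from collections import defaultdict

def build_inverted_file(objects):
    """Map each keyword to the ids of objects whose text contains it."""
    inv = defaultdict(set)
    for oid, (_, _, text) in enumerate(objects):
        for word in text.split():
            inv[word].add(oid)
    return inv

def range_keyword_query(objects, inv, center, r, keywords):
    """Ids of objects within distance r of center that contain all keywords."""
    candidates = set.intersection(*(inv[w] for w in keywords))
    cx, cy = center
    return sorted(
        oid for oid in candidates
        if (objects[oid][0] - cx) ** 2 + (objects[oid][1] - cy) ** 2 <= r * r
    )

objects = [
    (1.0, 1.0, "coffee shop wifi"),
    (2.0, 2.0, "coffee roastery"),
    (9.0, 9.0, "coffee shop"),
]
inv = build_inverted_file(objects)
print(range_keyword_query(objects, inv, center=(0.0, 0.0), r=3.0, keywords=["coffee", "shop"]))
```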
VChunkJoin: An Efficient Algorithm for Edit Similarity Joins (Vijay Koushik)
Similarity join is an important technique underlying many applications such as data integration,
record linkage, and pattern recognition. This work introduces a new algorithm for similarity joins
with edit distance constraints. Existing methods extract overlapping grams from strings and consider
as candidates only strings that share a certain number of grams. The proposed approach instead
extracts non-overlapping substrings, or chunks, from strings, using a chunking scheme based on a
tail-restricted chunk boundary dictionary (CBD). The approach integrates existing similarity filters
with several new filters unique to chunk-based methods, and a greedy algorithm automatically selects
a good chunking scheme for a given data set. Experimental results show that the method occupies
less space and computes results faster than existing approaches.
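The chunking idea can be illustrated with a toy CBD. This is a simplified sketch, not the paper's VChunkJoin: the vowel-based dictionary and the loose 2·tau count filter are assumptions made for the example (the paper's filters are considerably tighter).

```python
def chunk(s, cbd):
    """Split s into non-overlapping chunks, ending a chunk whenever its
    tail matches an entry of the chunk boundary dictionary (CBD)."""
    chunks, start = [], 0
    for i in range(1, len(s) + 1):
        if any(s[start:i].endswith(b) for b in cbd):
            chunks.append(s[start:i])
            start = i
    if start < len(s):
        chunks.append(s[start:])
    return chunks

def candidate_pair(s, t, cbd, tau):
    """Prune pairs that cannot be within edit distance tau: one edit can
    destroy only a bounded number of chunks, so near matches must still
    share some chunks (a deliberately loose count filter)."""
    cs, ct = set(chunk(s, cbd)), set(chunk(t, cbd))
    return len(cs & ct) >= max(len(cs), len(ct)) - 2 * tau

cbd = {"a", "e", "o"}  # toy dictionary: chunks end at these vowels
print(chunk("database", cbd))
print(candidate_pair("database", "databose", cbd, tau=1))
```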
The document discusses query optimization in database management systems. It covers converting SQL queries to logical and physical query plans, improving logical plans through algebraic transformations, and choosing the optimal physical query plan by considering the order of operations and join trees. The goal is to select the most efficient physical plan by estimating the size of relations and intermediate results.
The document discusses Oracle system catalogs which contain metadata about database objects like tables and indexes. System catalogs allow accessing information through views with prefixes like USER, ALL, and DBA. Examples show how to query system catalog views to get information on tables, columns, indexes and views. Query optimization and evaluation are also covered, explaining how queries are parsed, an execution plan is generated, and the least cost plan is chosen.
The document discusses algorithms and techniques for query processing and optimization in relational database management systems. It covers translating SQL queries into relational algebra, algorithms for operations like selection, projection, join and sorting, using heuristics and cost estimates for optimization, and an overview of query optimization in Oracle databases.
Scalable Keyword Cover Search using Keyword NNE and Inverted Indexing (IRJET Journal)
This document discusses scalable keyword cover search using keyword nearest neighbor expansion (keyword-NNE) and inverted indexing. It proposes a more efficient algorithm called keyword-NNE to address the performance issues of existing baseline algorithms for closest keyword search as the number of query keywords increases. Keyword-NNE significantly reduces the number of candidate keyword covers generated compared to the baseline. The algorithm and inverted indexing techniques are analyzed and shown to outperform alternatives through experiments on real datasets.
The document discusses cost estimation in query optimization. It explains that the query optimizer should estimate the cost of different execution strategies and choose the strategy with the minimum estimated cost. The cost functions used are estimates and depend on factors like selectivity. The main cost components include access cost to storage, storage cost, computation cost, memory use cost, and communication cost. For different types and sizes of databases, the emphasis may be on minimizing different cost components, such as access cost for large databases. The document provides examples of cost functions for select and join operations that consider factors like index levels, block sizes, and selectivity.
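A worked example of such select cost functions, using the usual textbook formulas; every statistic below (relation size, blocking factor, index levels, selectivity) is invented for illustration.

```python
import math

# Illustrative catalog statistics for one relation (all values assumed).
r, b, bfr = 10_000, 2_000, 5       # tuples, blocks, blocking factor
x = 2                              # B+-tree index levels
sl = 0.01                          # selectivity of the predicate

s = sl * r                         # estimated number of matching tuples
sB = math.ceil(s / bfr)            # blocks holding them (clustering index)

# Estimated block accesses for three access paths.
costs = {
    "linear scan": b,
    "binary search": math.ceil(math.log2(b)) + sB - 1,
    "clustering index": x + sB,
}
best = min(costs, key=costs.get)
print(costs, "->", best)
```

With these numbers the optimizer would pick the clustering index, which is exactly the kind of comparison the cost model exists to make.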
Hey friends, here is my "query tree" assignment. :-) I searched a lot to put this piece together, and I can guarantee that, In Sha ALLAH, it will help you more than any other document on the subject. Have a good day :-)
The document presents an optimization algorithm called System Rank Ordering Heuristic (System RO-H) for queries with a conjunction of predicates. System RO-H extends the traditional System R optimization algorithm as follows:
1. It uses a heuristic called the h-metric to order predicates for joining relations.
2. The h-metric orders predicates in ascending order by either the predicate's rank or the ratio of its selectivity to its cost per tuple, whichever is lower.
3. By ordering predicates with the h-metric, System RO-H finds optimal plans in both left-deep and bushy join trees in polynomial time relative to the number of predicates.
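The ordering step can be sketched as follows, assuming rank, selectivity, and cost per tuple are precomputed per predicate. The values and the exact form of the ratio are illustrative; the paper's precise definitions may differ.

```python
def h_metric(pred):
    """Order key: the smaller of the predicate's rank and its
    selectivity-to-cost-per-tuple ratio (both assumed precomputed)."""
    return min(pred["rank"], pred["selectivity"] / pred["cost_per_tuple"])

# Hypothetical predicates of a conjunctive query.
preds = [
    {"name": "p1", "rank": 0.8, "selectivity": 0.10, "cost_per_tuple": 1.0},
    {"name": "p2", "rank": 0.2, "selectivity": 0.90, "cost_per_tuple": 2.0},
    {"name": "p3", "rank": 0.5, "selectivity": 0.06, "cost_per_tuple": 0.5},
]
# Ascending order by h-metric: cheap, selective predicates come first.
order = [p["name"] for p in sorted(preds, key=h_metric)]
print(order)
```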
This document discusses document clustering. It begins with an introduction that defines document clustering as aiming to minimize within-cluster distances and maximize between-cluster distances. It then shows a block diagram of the clustering process, which includes preprocessing documents by removing stop words and stemming, extracting relevant features, and performing document clustering. The document clustering techniques are then described in three parts: converting heterogeneous documents to homogeneous plain text, extracting features like n-grams and part-of-speech tags, and performing k-means clustering on the feature space to group the documents.
Scoring, term weighting and the vector space (Ujjawal)
The document discusses document indexing and retrieval. It notes that documents have different fields and zones, with fields having finite values like dates and zones having arbitrary text. Separate indexes are built for each field/zone, with the dictionary based on the vocabulary in that zone. This allows for smaller dictionaries and efficient retrieval using weighted zone scoring. Weights can be assigned manually, empirically, or learned from training data to optimize how well retrieved documents match queries.
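Weighted zone scoring can be sketched directly. Here the per-zone score is a simple Boolean "zone contains every query term", and the zone weights are invented for the example; in practice, as the summary notes, they may be learned from training data.

```python
def weighted_zone_score(query_terms, doc_zones, weights):
    """Score = sum over zones of weight_z * s_z, where s_z is 1 if the
    zone matches the query (here: contains every query term) else 0."""
    score = 0.0
    for zone, text in doc_zones.items():
        words = set(text.lower().split())
        if all(t in words for t in query_terms):
            score += weights[zone]
    return score

weights = {"title": 0.5, "abstract": 0.3, "body": 0.2}  # must sum to 1
doc = {
    "title": "Vector space retrieval",
    "abstract": "Scoring documents in the vector space model",
    "body": "We discuss term weighting and ranked retrieval",
}
print(weighted_zone_score(["vector", "space"], doc, weights))
```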
1. The document proposes representing text documents as graphs (graph-of-words) instead of bag-of-words and using frequent subgraph mining to extract features for text categorization.
2. It describes using the gSpan algorithm to efficiently mine frequent subgraphs from the graph-of-words representations to generate features.
3. An elbow method is used to select an optimal minimum support threshold that balances feature set size and accuracy. Representing documents as graphs and mining subgraph features is shown to improve accuracy over traditional bag-of-words on four text categorization datasets.
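The graph-of-words construction (before any subgraph mining) can be sketched with a sliding co-occurrence window. The window size and the unweighted, undirected edges are assumptions for illustration; the frequent-subgraph step (e.g., gSpan) is left out.

```python
from collections import defaultdict

def graph_of_words(text, window=3):
    """Build an undirected co-occurrence graph: connect each term to the
    terms that follow it within a sliding window."""
    tokens = text.lower().split()
    edges = defaultdict(int)
    for i, u in enumerate(tokens):
        for v in tokens[i + 1 : i + window]:
            if u != v:
                edges[tuple(sorted((u, v)))] += 1
    return dict(edges)

g = graph_of_words("the quick brown fox jumps over the lazy dog")
print(g[("brown", "quick")])
```

Mining frequent subgraphs of such graphs then yields features that capture word order and proximity, which bag-of-words discards.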
Query Distributed RDF Graphs: The Effects of Partitioning Paper (DBOnto)
Abstract: Web-scale RDF datasets are increasingly processed using distributed RDF data stores built on top of a cluster of shared-nothing servers. Such systems critically rely on their data partitioning scheme and query answering scheme, the goal of which is to facilitate correct and efficient query processing. Existing data partitioning schemes are
commonly based on hashing or graph partitioning techniques. The latter techniques split a dataset in a way that minimises the number of connections between the resulting subsets, thus reducing the need for communication between servers; however, to facilitate efficient query answering,
considerable duplication of data at the intersection between subsets is often needed. Building upon the known graph partitioning approaches, in this paper we present a novel data partitioning scheme that employs minimal duplication and keeps track of the connections between partition elements; moreover, we propose a query answering scheme that
uses this additional information to correctly answer all queries. We show experimentally that, on certain well-known RDF benchmarks, our data partitioning scheme often allows more answers to be retrieved without distributed computation than the known schemes, and we show that our query answering scheme can efficiently answer many queries.
This document presents an approach for using document clustering algorithms to improve forensic analysis of seized computers. It discusses the limitations of existing approaches and proposes using algorithms like K-means and hierarchical clustering to group related documents without predefining the number of clusters. The system architecture involves preprocessing documents, calculating similarity, forming clusters, and evaluating results. Modules include preprocessing, calculating the number of clusters, clustering techniques, and removing outliers. The approach aims to enhance computer inspection by grouping relevant documents for experts to examine.
The document discusses query compilation and optimization. It covers parsing a SQL query into a parse tree, converting the parse tree into a logical query plan using relational algebra, and estimating the costs of different logical and physical query plans to select the most efficient plan. The key steps are parsing, rewriting the logical query plan using algebraic laws to improve it, estimating sizes of intermediate results, and selecting a physical query plan based on cost estimates.
This document discusses various techniques for document clustering and retrieval, including cosine similarity, k-means clustering, hierarchical clustering, and the EM algorithm. Cosine similarity measures the similarity between document vectors based on the angle between them. K-means clustering partitions documents into k clusters to maximize intra-cluster similarity, while hierarchical clustering merges clusters in a dendrogram based on similarity. The EM algorithm computes maximum likelihood estimates of document distributions. Evaluation of clustering assesses quality based on intra-class and inter-class similarity.
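Cosine similarity, the measure underlying most of these clustering techniques, takes only a few lines; the term-count vectors below are a toy example.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two term-frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

d1 = [2, 1, 0, 1]   # toy term counts over a four-word vocabulary
d2 = [1, 1, 0, 1]
d3 = [0, 0, 3, 0]
print(round(cosine_similarity(d1, d2), 3))  # near-duplicate documents
print(round(cosine_similarity(d1, d3), 3))  # documents sharing no terms
```

Because it normalizes by vector length, a long document and a short one about the same topic still score as similar, which is why it is preferred over raw dot products for text.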
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
The document discusses various steps and algorithms for processing database queries. It covers parsing and optimizing queries, estimating query costs, and algorithms for operations like selection, sorting, and joins. Selection algorithms include linear scans, binary searches, and using indexes. Sorting can use indexes or external merge sort. Join algorithms include nested loops, merge join, and hash join.
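Of the join algorithms listed, hash join is the easiest to sketch: build a hash table on one relation, then probe it with each tuple of the other. The relations and key positions below are illustrative.

```python
from collections import defaultdict

def hash_join(r, s, r_key, s_key):
    """Equijoin r and s: build a hash table on r's join key,
    then probe it with every tuple of s."""
    build = defaultdict(list)
    for row in r:
        build[row[r_key]].append(row)
    return [(m, row) for row in s for m in build[row[s_key]]]

employees = [(1, "Ada"), (2, "Grace")]         # (dept_id, name)
departments = [(1, "Research"), (3, "Sales")]  # (dept_id, dept_name)
print(hash_join(employees, departments, r_key=0, s_key=0))
```

In a real engine the build side is chosen as the smaller relation so the hash table fits in memory, falling back to partitioned (grace) hash join when it does not.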
1) The document describes writing an MPI program to calculate a quantity called coverage from data files in a distributed manner across a cluster.
2) MPI (Message Passing Interface) is a standard for writing programs that can run in parallel on multiple processors. The program should distribute the computation efficiently across the cluster nodes and yield the same results as a serial code.
3) The MPI program structure involves initialization, processes running concurrently on nodes, communication between processes, and finalization. Communicators define which processes can communicate.
Analysis of different similarity measures: SimRank (Abhishek Mungoli)
SimRank exploits object-to-object relationships and measures the similarity between two objects.
We used it in our project to find similar research papers in the DBLP dataset (DBLP provides a comprehensive list of research papers in the computer science domain).
SimRank is a generic approach, and its basic idea can be applied to other domains of interest as well.
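A minimal iterative SimRank, following the standard recurrence s(a,b) = C/(|I(a)||I(b)|) · Σ s(I_i(a), I_j(b)) with s(a,a) = 1, where I(v) are v's in-neighbors. The toy citation graph is invented for the example; this dense O(n²) formulation is only practical for small graphs.

```python
def simrank(graph, c=0.8, iters=10):
    """Iterative SimRank on a directed graph given as {node: set(successors)}:
    two nodes are similar if the nodes pointing at them are similar."""
    nodes = list(graph)
    preds = {v: [u for u in nodes if v in graph[u]] for v in nodes}
    sim = {(a, b): 1.0 if a == b else 0.0 for a in nodes for b in nodes}
    for _ in range(iters):
        new = {}
        for a in nodes:
            for b in nodes:
                if a == b:
                    new[(a, b)] = 1.0
                elif preds[a] and preds[b]:
                    total = sum(sim[(u, v)] for u in preds[a] for v in preds[b])
                    new[(a, b)] = c * total / (len(preds[a]) * len(preds[b]))
                else:
                    new[(a, b)] = 0.0
        sim = new
    return sim

# Toy citation graph: p1 cites both p2 and p3.
graph = {"p1": {"p2", "p3"}, "p2": set(), "p3": set()}
sim = simrank(graph)
print(round(sim[("p2", "p3")], 2))
```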
The document discusses query optimization in databases. Query optimization is the process of selecting the most efficient query evaluation plan to minimize costs and maximize performance. An optimized query will be executed faster using less system resources. Key factors considered during optimization include join size estimation, estimating the number of distinct values, and catalog information about relations.
The document discusses query optimization by describing how a database system estimates the cost of different query evaluation plans using statistical information about relations. It covers topics like estimating the size of selections, joins, aggregations and other operations to choose the lowest cost plan using transformations and equivalence rules.
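For instance, the standard equijoin size estimate uses the catalog's distinct-value counts; the statistics below are invented for illustration.

```python
def estimate_join_size(n_r, n_s, v_r, v_s):
    """Standard equijoin estimate on a shared attribute A:
    |r join s| ~= n_r * n_s / max(V(A, r), V(A, s)),
    where V(A, x) is the number of distinct A-values in x."""
    return n_r * n_s // max(v_r, v_s)

# Catalog statistics (illustrative): 10,000 orders and 500 customers
# joined on customer_id; orders has 400 distinct ids, customers 500.
print(estimate_join_size(10_000, 500, 400, 500))
```

The estimate assumes A-values are uniformly distributed and that each value in the smaller domain appears in the larger one; histograms in real catalogs refine both assumptions.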
This document summarizes research on constructing a phylogenetic tree for COX genes using multiple sequence alignments with ClustalW. It begins by introducing phylogenetic analysis and the COX gene. It then describes the methodology used, which involved obtaining nucleotide sequences from a COX protein sequence in mice, performing a tBLASTn search to find related genes, aligning the sequences with ClustalW, and constructing rooted and unrooted phylogenetic trees. The results include the input protein sequence, tBLASTn output, ClustalW alignment, and the rooted and unrooted phylogenetic trees produced. It concludes that phylogenetic analysis is important for understanding gene and protein evolution.
The document presents an optimization algorithm called System Rank Ordering Heuristic (System RO-H) for queries with conjunction of predicates. System RO-H extends the traditional System R optimization algorithm by:
1. Using a heuristic called h-metric to order predicates for joining relations.
2. The h-metric orders predicates in ascending order based on either the predicate's rank or a ratio of selectivity and cost per tuple, whichever is lower.
3. By ordering predicates based on h-metric, System RO-H finds optimal plans in both left-deep and bushy join trees in polynomial time relative to the number of predicates.
This document discusses document clustering. It begins with an introduction that defines document clustering as aiming to minimize within-cluster distances and maximize between-cluster distances. It then shows a block diagram of the clustering process, which includes preprocessing documents by removing stop words and stemming, extracting relevant features, and performing document clustering. The document clustering techniques are then described in three parts: converting heterogeneous documents to homogeneous plain text, extracting features like n-grams and part-of-speech tags, and performing k-means clustering on the feature space to group the documents.
Scoring, term weighting and the vector spaceUjjawal
The document discusses document indexing and retrieval. It notes that documents have different fields and zones, with fields having finite values like dates and zones having arbitrary text. Separate indexes are built for each field/zone, with the dictionary based on the vocabulary in that zone. This allows for smaller dictionaries and efficient retrieval using weighted zone scoring. Weights can be assigned manually, empirically, or learned from training data to optimize how well retrieved documents match queries.
1. The document proposes representing text documents as graphs (graph-of-words) instead of bag-of-words and using frequent subgraph mining to extract features for text categorization.
2. It describes using the gSpan algorithm to efficiently mine frequent subgraphs from the graph-of-words representations to generate features.
3. An elbow method is used to select an optimal minimum support threshold that balances feature set size and accuracy. Representing documents as graphs and mining subgraph features is shown to improve accuracy over traditional bag-of-words on four text categorization datasets.
Query Distributed RDF Graphs: The Effects of Partitioning PaperDBOnto
Abstract: Web-scale RDF datasets are increasingly processed using distributed RDF data stores built on top of a cluster of shared-nothing servers. Such systems critically rely on their data partitioning scheme and query answering scheme, the goal of which is to facilitate correct and ecient query processing. Existing data partitioning schemes are
commonly based on hashing or graph partitioning techniques. The latter techniques split a dataset in a way that minimises the number of connections between the resulting subsets, thus reducing the need for communication between servers; however, to facilitate ecient query answering,
considerable duplication of data at the intersection between subsets is often needed. Building upon the known graph partitioning approaches, in this paper we present a novel data partitioning scheme that employs minimal duplication and keeps track of the connections between partition elements; moreover, we propose a query answering scheme that
uses this additional information to correctly answer all queries. We show experimentally that, on certain well-known RDF benchmarks, our data partitioning scheme often allows more answers to be retrieved without distributed computation than the known schemes, and we show that our query answering scheme can eciently answer many queries.
This document presents an approach for using document clustering algorithms to improve forensic analysis of seized computers. It discusses the limitations of existing approaches and proposes using algorithms like K-means and hierarchical clustering to group related documents without predefining the number of clusters. The system architecture involves preprocessing documents, calculating similarity, forming clusters, and evaluating results. Modules include preprocessing, calculating the number of clusters, clustering techniques, and removing outliers. The approach aims to enhance computer inspection by grouping relevant documents for experts to examine.
The document discusses query compilation and optimization. It covers parsing a SQL query into a parse tree, converting the parse tree into a logical query plan using relational algebra, and estimating the costs of different logical and physical query plans to select the most efficient plan. The key steps are parsing, rewriting the logical query plan using algebraic laws to improve it, estimating sizes of intermediate results, and selecting a physical query plan based on cost estimates.
This document discusses various techniques for document clustering and retrieval, including cosine similarity, k-means clustering, hierarchical clustering, and the EM algorithm. Cosine similarity measures the similarity between document vectors based on the angle between them. K-means clustering partitions documents into k clusters to minimize intra-cluster similarity, while hierarchical clustering merges clusters in a dendogram based on similarity. The EM algorithm computes maximum likelihood estimates of document distributions. Evaluation of clustering assesses the quality based on intra-class and inter-class similarity.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
The document discusses various steps and algorithms for processing database queries. It covers parsing and optimizing queries, estimating query costs, and algorithms for operations like selection, sorting, and joins. Selection algorithms include linear scans, binary searches, and using indexes. Sorting can use indexes or external merge sort. Join algorithms include nested loops, merge join, and hash join.
1) The document describes writing an MPI program to calculate a quantity called coverage from data files in a distributed manner across a cluster.
2) MPI (Message Passing Interface) is a standard for writing programs that can run in parallel on multiple processors. The program should distribute the computation efficiently across the cluster nodes and yield the same results as a serial code.
3) The MPI program structure involves initialization, processes running concurrently on nodes, communication between processes, and finalization. Communicators define which processes can communicate.
Analysis of different similarity measures: SimRank - Abhishek Mungoli
SimRank exploits object-to-object relationships very well and finds the similarity between two objects.
We have used it in our project to find similar research papers in the DBLP dataset (the DBLP dataset provides a comprehensive list of research papers in the computer science domain).
SimRank is a generic approach, and its basic idea can be applied to other domains of interest as well.
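The basic SimRank iteration can be sketched as follows (a standard textbook formulation, assuming a small in-neighbor graph; the function name and decay constant are my own choices, not from the project):

```python
def simrank(in_neighbors, c=0.8, iterations=10):
    """Iterative SimRank: two nodes are similar if their in-neighbors
    are similar. `in_neighbors` maps each node to its list of in-neighbors."""
    nodes = list(in_neighbors)
    sim = {a: {b: 1.0 if a == b else 0.0 for b in nodes} for a in nodes}
    for _ in range(iterations):
        new = {a: {b: 1.0 if a == b else 0.0 for b in nodes} for a in nodes}
        for a in nodes:
            for b in nodes:
                if a == b or not in_neighbors[a] or not in_neighbors[b]:
                    continue  # s(a, a) = 1; nodes without in-neighbors stay 0
                total = sum(sim[u][v]
                            for u in in_neighbors[a] for v in in_neighbors[b])
                new[a][b] = c * total / (len(in_neighbors[a]) * len(in_neighbors[b]))
        sim = new
    return sim
```

For a citation graph, two papers cited by the same paper get similarity c after one iteration, which matches the intuition "similar objects are referenced by similar objects".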
The document discusses query optimization in databases. Query optimization is the process of selecting the most efficient query evaluation plan to minimize costs and maximize performance. An optimized query will be executed faster using less system resources. Key factors considered during optimization include join size estimation, estimating the number of distinct values, and catalog information about relations.
The document discusses query optimization by describing how a database system estimates the cost of different query evaluation plans using statistical information about relations. It covers topics like estimating the size of selections, joins, aggregations and other operations to choose the lowest cost plan using transformations and equivalence rules.
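The join size estimate mentioned above is typically the textbook formula based on catalog statistics (a sketch of the standard estimate; the function and parameter names are my own):

```python
def join_size_estimate(n_r, n_s, v_r_a, v_s_a):
    """Estimate |R join S| on a common attribute A as
    n_r * n_s / max(V(A, R), V(A, S)), where V(A, X) is the number
    of distinct A-values in X, taken from catalog statistics."""
    return n_r * n_s / max(v_r_a, v_s_a)

# e.g. 1000 tuples joined with 500 tuples, 50 vs 100 distinct key values:
# each of the 500 tuples matches about 1000/100 = 10 tuples.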
This document summarizes research on constructing a phylogenetic tree for COX genes using multiple sequence alignments with ClustalW. It begins by introducing phylogenetic analysis and the COX gene. It then describes the methodology used, which involved obtaining nucleotide sequences from a COX protein sequence in mice, performing a tBLASTn search to find related genes, aligning the sequences with ClustalW, and constructing rooted and unrooted phylogenetic trees. The results include the input protein sequence, tBLASTn output, ClustalW alignment, and the rooted and unrooted phylogenetic trees produced. It concludes that phylogenetic analysis is important for understanding gene and protein evolution.
This paper presents a method to improve transient stability and damping of low frequency oscillations in a multi-machine power system using adaptive neuro-fuzzy control of FACTS devices. A Simulink model of a three generator power system equipped with a UPFC is developed. Simulation results show that a UPFC controlled using an adaptive neuro-fuzzy inference system controller more effectively improves transient stability and damps power oscillations compared to using SSSC. The neuro-fuzzy controller is trained using a hybrid learning algorithm to tune its parameters online based on generator speed deviation and acceleration as inputs.
Influence of social media on the academic performance of the undergraduate st... - Alexander Decker
1) The document examines the influence of social media on the academic performance of undergraduate students at Kogi State University in Nigeria. It finds that students have high levels of access to social media, especially Facebook.
2) The study also finds that exposure to social media has a negative effect on students' academic performance. Students spend more time on social media than studying, and rely on social media instead of course materials.
3) Based on these findings, the document concludes that social media exposure negatively impacts academic performance for undergraduates at Kogi State University. It recommends that students minimize social media use and focus more on academic activities.
Relationship between Personality Traits, Academic Achievement and Salary: An ... - iosrjce
Most of the B-Schools in India are facing problems in placing their students. Recruiters claim that the reason for this is the absence of the required skill-sets in the students. The challenge is in identifying the skills or personality traits which lead to good placements. In this study, personality traits were borrowed from the psychological concept of OCEAN. Ten traits were short-listed, and the objective was to find out whether there is a correlation between them and CGPA (academic achievement) and the salary obtained during placements. The study, which was carried out in a reputed B-school in Bangalore (India), revealed that out of these 10 traits, only confidence has a correlation with salary. The traits which have a correlation with CGPA are self-motivation and confidence. Another aspect studied was the efficacy of a program called the personality enhancement program, which requires students to learn from activities like public speaking, presentations, etc. It was found that this program helps students build their confidence levels, and confidence affects both CGPA and salary. The study also found that there is no correlation between CGPA and salary. SEM corroborates the above results, which were obtained through regression analysis and ANOVA.
This document provides a summary of Nidhi Sharma's educational qualifications, internship experience, skills, academic achievements, and extracurricular activities. She has a MSc in Chemistry from IIT Roorkee with high marks and was a university topper in her BSc from Delhi University. Her internships focused on separation of lanthanides and actinides using HPLC and synthesis of antimicrobial polymers. She has strong skills in chemistry software and languages. She received numerous academic awards and scholarships. Her extracurriculars include leadership positions and competition wins in quizzes, presentations, and seminars on topics like energy and hydrocarbons.
Academic as an associated factor of stress among students - Nur Atikah Amira
This document summarizes research on academic stress among university students. It identifies several key factors that can cause academic stress, such as academic overload, unclear evaluation criteria, project deadlines, absence of faculty, and searching for course references. Studies found that exams, fear of failure, competition with peers, and lack of time were also primary stressors. Excessive stress can negatively impact students' academic performance and health. The literature review discusses research showing that academic factors like assignments, workloads, and examinations are major sources of stress for university students. Managing stress is important, as too much unmanaged stress can lead students to drop out or have other adverse outcomes.
Knowledge and perception of students regarding islamic banking in Sindh, Pakistan - sanaullah noonari
Abstract
This research investigated the relationship between university students' perception and knowledge of the concepts and terms used in Islamic banking and of the products and services offered. The impact of age, gender, area of study, area of residence, CGPA, and family's monthly income on students' perception and knowledge of Islamic banking was also analyzed. Data were collected from 60 postgraduate students selected randomly from two public-sector universities (Sindh Agriculture University Tandojam and University of Sindh) along with one private-sector university (ISRA) in Hyderabad. Simple linear regressions were used to check the impact of socioeconomic characteristics on students' knowledge and perception. University students were surveyed to assess the knowledge and perception of the country's intellectual cream regarding Islamic banking. Results showed that religious sincerity, not better knowledge of Islamic banking, was the strongest predictor of personal banking choices. Overall perception and knowledge of students were significantly different from zero; students had a good perception of Islamic banking but poor knowledge of it. It was found that the Arabic terms used to specify the products and services hindered the students' understanding. The coefficients of age and income showed a positive relation with students' perception and knowledge of Islamic banking in both the public-sector universities and the private-sector university, as did area of study. Gender, area of residence, and CGPA were not statistically significant, meaning they did not significantly affect perception and knowledge of Islamic banking; in the private-sector university, however, CGPA was found to be a significant factor affecting students' perception.
Keywords: Islamic banking, perception, knowledge, products and services.
This document summarizes a research paper that proposes a new Position Based Opportunistic Routing Protocol (POR) to improve reliable data delivery in mobile ad hoc networks. Existing geographic routing protocols have issues with route failures and delays in discovering new routes when nodes move. The proposed POR protocol selects multiple forwarding candidate nodes to opportunistically forward packets. If the primary forwarder fails, backup candidates can forward packets to avoid transmission interruptions. Simulation results show the POR protocol has lower delay and higher packet delivery ratio compared to existing protocols.
This study analyzed the effect of focus on the intensity of disyllabic words in Mandarin Chinese. The results show that:
1) The average intensity of rhymes is significantly greater than that of onsets in both syllables.
2) Under focused conditions, the average intensity of the first syllable rhyme is significantly greater than the second syllable rhyme.
3) Focus has a significant effect of increasing average intensity across onsets and rhymes in both syllables.
This study investigated the effects of different grading policies (lenient vs. strict) on engineering students' cumulative grade point averages (CGPAs) in Pakistan. A sample of 1578 students was analyzed, with around half graded under a lenient policy with 5 letter grades and the other half under a strict policy with 7 letter grades. Results showed that students graded under the strict policy had statistically higher CGPAs on average compared to those under the lenient policy. Low-performing students benefited more from the strict policy in terms of improved CGPA. The study provides evidence that stricter grading policies can positively motivate students to achieve higher performance levels.
A RESEARCH ON EFFECT OF STRESS AMONG KMPh STUDENTS - Natrah Abd Rahman
Stress is the feeling that is created when we react to particular events. It can make you feel threatened or upset. It is a combination of psychological, physiological and behavioral reactions that people have in response to events that threaten or challenge them.
The document presents a theoretical framework for analyzing the impact of internet usage on student performance. It hypothesizes that education, social status, cooperation from teachers, and reliability of online information positively impact internet usage, while risks/uncertainties and expenses negatively impact usage. A regression model is developed to measure the relationship between these independent variables and the dependent variable of internet usage.
THE EFFECTS OF SOCIAL NETWORKING SITES ON THE ACADEMIC PERFORMANCE OF STUDENT... - Kasthuripriya Nanda Kumar
This document is a research paper that examines the effects of social networking sites on the academic performance of college students. It begins with background information on the rise of social networking and introduces the research problem of whether these sites impact students' grades. The purpose is to determine this impact through a study of 30 students at Taj International College. A literature review discusses previous research, which has found mixed results on whether time spent on social networking correlates with academic performance.
This document summarizes a research study on factors affecting mathematics performance of high school students at Laguna State Polytechnic University in the 2009-2010 academic year. The study examines student-related factors like interest in mathematics, study habits, and teacher-related factors such as personality traits, teaching skills, and instructional materials. It provides background information on the importance of mathematics and reviews previous related studies. The research methodology, data collection instruments, and statistical analysis plan are also outlined.
Ranking Preferences to Data by Using R-Trees - IOSR Journals
This document discusses algorithms for efficiently processing top-k spatial preference queries in databases containing spatial and non-spatial data. It defines top-k spatial preference queries as ranking objects based on quality and features in their nearest neighborhoods. It presents the branch and bound and feature join algorithms for computing the top-k results without having to calculate scores for all objects. It also discusses using R-trees to index spatial data and feature data to accelerate query processing.
Aggregation of data by using top k spatial query preferences - Alexander Decker
This document summarizes a research paper on efficient techniques for processing top-k spatial preference queries in a database. It discusses how such queries allow users to rank spatial objects based on the aggregated qualities of nearby features. It proposes two algorithms - branch-and-bound and feature join - to efficiently process these queries by pruning search space. The paper also studies extensions of the algorithms for different aggregate functions and for queries using an influence score to weight nearby features.
Convincing a customer is always considered as a challenging task in every business. But when it comes to online business, this task becomes even more difficult. Online retailers try everything possible to gain the trust of the customer. One of the solutions is to provide an area for existing users to leave their comments. This service can effectively develop the trust of the customer however normally the customer comments about the product in their native language using Roman script. If there are hundreds of comments this makes difficulty even for the native customers to make a buying decision. This research proposes a system which extracts the comments posted in Roman Urdu, translate them, find their polarity and then gives us the rating of the product. This rating will help the native and non-native customers to make buying decision efficiently from the comments posted in Roman Urdu.
Ranking spatial data by quality preferences ppt - Saurav Kumar
A spatial preference query ranks objects based on the qualities of features in their spatial neighborhood. For example, using a real estate agency database of flats for lease, a customer may want to rank the flats with respect to the appropriateness of their location, defined after aggregating the qualities of other features (e.g., restaurants, cafes, hospital, market, etc.) within their spatial neighborhood. Such a neighborhood concept can be specified by the user via different functions. It can be an explicit circular region within a given distance from the flat. Another intuitive definition is to assign higher weights to the features based on their proximity to the flat. In this paper, we formally define spatial preference queries and propose appropriate indexing techniques and search algorithms for them. Extensive evaluation of our methods on both real and synthetic data reveals that an optimized branch-and-bound solution is efficient and robust with respect to different parameters
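The range-score variant described above can be sketched with a brute-force scan (an illustrative simplification of what the paper's R-tree indexing and branch-and-bound search accelerate; the function names and the max-quality aggregation are my own assumptions):

```python
import heapq
import math

def range_score(obj, features, radius):
    """Component score of an object for one feature set: the best quality
    among features lying within `radius` of the object (0 if none)."""
    best = 0.0
    for (fx, fy, quality) in features:
        if math.hypot(obj[0] - fx, obj[1] - fy) <= radius:
            best = max(best, quality)
    return best

def top_k_preference(objects, feature_sets, radius, k):
    """Rank objects (e.g. flats) by the sum of their range scores over
    all feature sets (e.g. restaurants, hospitals, markets)."""
    scored = [(sum(range_score(o, fs, radius) for fs in feature_sets), o)
              for o in objects]
    return heapq.nlargest(k, scored)
```

The point of the paper's branch-and-bound solution is to prune most of these distance computations using upper bounds from the R-tree nodes rather than scoring every object as this sketch does.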
Web users and content are increasingly being geo-positioned, and increased focus is being given to serving local content in response to web queries. This development calls for spatial keyword queries that take into account both the locations and textual descriptions of content. We study the efficient, joint processing of multiple top-k spatial keyword queries. Such joint processing is attractive during high query loads and also occurs when multiple queries are used to obfuscate a user's true query. We propose a novel algorithm and index structure for the joint processing of top-k spatial keyword queries. Empirical studies show that the proposed solution is efficient on real datasets; analytical studies on synthetic datasets further demonstrate its efficiency.
A Study on Optimization of Top-k Queries in Relational Databases - IOSR Journals
This document discusses optimization of top-k queries in relational databases. It begins with an introduction to top-k queries and their increasing importance. It then discusses extending relational algebra and query optimization to integrate rank-join operators that can efficiently evaluate top-k queries. Specifically, it outlines enlarging the search space of plans to include those using rank-join operators, and providing a cost model for these operators to help the optimizer select the most efficient plan. The goal is to fully support efficient evaluation of top-k queries within relational database systems.
The document describes implementing various sorting algorithms in C including insertion sort, shell sort, selection sort, quick sort, merge sort, and heap sort. Code snippets are provided showing the implementation of each algorithm. The main functions take an integer array and size as input, apply the sorting algorithm, and return the sorted array. Testing involves inputting an array size and values, selecting an algorithm from a menu, and outputting the sorted array.
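As one example of the algorithms listed, a minimal merge sort (the document's implementations are in C; this is an illustrative Python equivalent of the same split-sort-merge scheme, not the original code):

```python
def merge_sort(arr):
    """Top-down merge sort: split the array, sort each half
    recursively, then merge the two sorted halves."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]      # append the leftover tail
```

The `<=` in the merge step keeps equal elements in their original order, making the sort stable.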
The document proposes an extension to the M-tree family of index structures called M*-tree. M*-tree improves upon M-tree by maintaining a nearest-neighbor graph within each node. The nearest-neighbor graph stores, for each entry in a node, a reference and distance to its nearest neighbor among the other entries in that node. This additional structure allows for more efficient filtering of non-relevant subtrees during search queries through the use of "sacrifice pivots". The experiments showed that M*-tree can perform searches significantly faster than M-tree while keeping construction costs low.
IRJET- Review of Existing Methods in K-Means Clustering Algorithm - IRJET Journal
The document reviews existing methods for the k-means clustering algorithm. It discusses how k-means clustering works and some of its limitations when dealing with large datasets, such as being dependent on the initial choice of centroids. It then proposes using Hadoop to overcome big data challenges and calculate preliminary centroids for k-means clustering in a distributed manner. Finally, it reviews different techniques that have been proposed in other research to improve k-means clustering, such as methods for selecting better initial centroids or determining the optimal number of clusters.
A Study of Efficiency Improvements Technique for K-Means Algorithm - IRJET Journal
This document discusses techniques to improve the efficiency of the K-means clustering algorithm. It begins with an introduction to K-means clustering and discusses some of its limitations, such as high computational time. It then proposes using a ranking method to help assign data points to clusters more efficiently. The key steps of standard K-means clustering and the proposed ranking-based approach are described. Experimental results on sample datasets show that the ranking method leads to faster clustering compared to standard K-means, with comparable accuracy. Therefore, the ranking method can help enhance the performance of K-means clustering for large datasets.
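For reference, the standard K-means (Lloyd's) iteration that both reviews above start from can be sketched as follows (a 2-D illustrative version, not the ranking-based variant the paper proposes; function name and fixed seed are my own):

```python
import random

def k_means(points, k, iterations=20, seed=0):
    """Lloyd's algorithm: repeatedly assign each point to its nearest
    centroid, then recompute each centroid as the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # initial centroids: random points
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:                       # assignment step
            nearest = min(range(k),
                          key=lambda i: (p[0] - centroids[i][0]) ** 2
                                      + (p[1] - centroids[i][1]) ** 2)
            clusters[nearest].append(p)
        for i, cluster in enumerate(clusters): # update step
            if cluster:                        # keep old centroid if cluster empty
                centroids[i] = (sum(p[0] for p in cluster) / len(cluster),
                                sum(p[1] for p in cluster) / len(cluster))
    return centroids, clusters
```

The sensitivity to the initial `rng.sample` choice is exactly the limitation the reviewed techniques (better initial centroids, distributed preliminary centroids) try to address.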
This document proposes a hypergraph-based approach for optimizing SPARQL queries on RDF data. It first transforms an RDF graph into a hypergraph by grouping subjects and objects connected by the same predicate under a hyperedge for that predicate. It then rearranges the patterns in a SPARQL query based on the size of corresponding hyperedges to build a query path for efficient processing. The query is executed by looping through the rearranged patterns and extracting required subjects and objects from the hypergraph representation to find matching triples. Experimental results show this approach performs better than existing systems like RDF-3x, Jena and AllegroGraph.
Efficient Filtering Algorithms for Location-Aware Publish/subscribe - IJSRD
Location-based services have been widely used in many systems. Previous systems use a pull (user-initiated) model, in which a user issues a query to a server that responds with location-aware answers. To deliver results to users with fast response times, a push (server-initiated) model is becoming an important computing model for next-generation location-based services. In the push model, subscribers register spatio-textual subscriptions to capture their interests, and publishers post spatio-textual messages. A high-performance location-aware publish/subscribe system is needed to deliver publishers' messages to the relevant subscribers. In this paper, we address the research challenges that arise in designing a location-aware publish/subscribe system. We propose an R-tree based index that integrates textual descriptions into R-tree nodes, and we design efficient filtering algorithms and effective pruning techniques to achieve high performance. The method supports both conjunctive queries and ranking queries.
Skyline Query Processing using Filtering in Distributed Environment - IJMER
This document summarizes a research paper about skyline query processing in distributed databases. Skyline queries return multidimensional data points that are not dominated by other points. In distributed databases, skyline queries must be processed across multiple data sites. The paper proposes using multiple filtering points selected from each local skyline result to reduce the number of false positive results and communication costs between sites. Two heuristics called MaxSum and MaxDist are described for selecting filtering points that maximize their combined dominating potential across sites to improve distributed skyline query processing performance.
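The dominance test at the heart of any skyline computation can be sketched as follows (a naive single-site block-nested-loop version, assuming smaller values are better in every dimension; this is background for the paper, not its distributed MaxSum/MaxDist method):

```python
def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly
    better in at least one (smaller is better here)."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def skyline(points):
    """Naive skyline: keep the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

In the distributed setting the paper targets, each site computes a local skyline like this, and the filtering points are chosen from those local results to prune dominated candidates before they are shipped between sites.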
This document proposes and defines a top-k spatial preference query that ranks spatial objects based on the qualities of features in their neighborhood. It introduces existing systems that do not support this type of query. The proposed system uses a scoring function to calculate a score for each object based on features within its neighborhood, defined by either a range score or influence score. It describes algorithms to perform a multiway join on feature trees to obtain qualified feature combinations and search the object tree to retrieve relevant objects.
1) The document discusses a review of semantic approaches for nearest neighbor search. It describes using an ontology to add a semantic layer to an information retrieval system to relate concepts using query words.
2) A technique called spatial inverted index is proposed to locate multidimensional information and handle nearest neighbor queries by finding the hospitals closest to a given address.
3) Several semantic approaches are described including using clustering measures, specificity measures, link analysis, and relation-based page ranking to improve search and interpret hidden concepts behind keywords.
The document summarizes research on performing spatio-textual similarity joins. It discusses:
1) Developing a filter-and-refine framework to efficiently find similar object pairs from two datasets using signatures.
2) Generating spatial and textual signatures for objects and building inverted indexes on the signatures to find candidate pairs.
3) Refining the candidate pairs to obtain the final result pairs that satisfy spatial and textual similarity thresholds.
A Novel Method for Prevention of Bandwidth Distributed Denial of Service Attacks - IJERD Editor
Distributed Denial of Service (DDoS) attacks have become a massive threat to the Internet, and the traditional architecture of the Internet is vulnerable to attacks like DDoS. An attacker first acquires an army of zombies; that army is then instructed by the attacker when to start an attack and whom to attack. In this paper, the different techniques used to perform DDoS attacks, the tools used to carry them out, and the countermeasures for detecting attackers and eliminating Bandwidth Distributed Denial of Service (B-DDoS) attacks are reviewed. DDoS attacks are carried out using various flooding techniques.
The main purpose of this paper is to design an architecture that can reduce Bandwidth Distributed Denial of Service attacks and keep the victim site or server available for normal users by eliminating the zombie machines. Our primary focus is to discuss how normal machines turn into zombies (bots), how an attack is initiated, the DDoS attack procedure, and how an organization can save its server from becoming a DDoS victim. To demonstrate this, we implemented a simulated environment with Cisco switches, routers, a firewall, some virtual machines, and some attack tools to display a real DDoS attack. By using time scheduling, resource limiting, system logs, access control lists, and a Modular Policy Framework, we stopped the attack and identified the attacker (bot) machines.
Hearing loss is one of the most common human impairments. It is estimated that by 2015 more than 700 million people will suffer mild deafness. Most can be helped by hearing aid devices, depending on the severity of their hearing loss. This paper describes the implementation and characterization details of a dual-channel transmitter front end (TFE) for digital hearing aid (DHA) applications that uses novel micro-electromechanical-systems (MEMS) audio transducers and ultra-low power-scalable analog-to-digital converters (ADCs), which enable a very low form factor, energy-efficient implementation for next-generation DHAs. The contribution of the design is the implementation of the dual-channel MEMS microphones and the power-scalable ADC system.
Influence of tensile behaviour of slab on the structural Behaviour of shear c... - IJERD Editor
A composite beam is composed of a steel beam and a slab connected by means of shear connectors
like studs installed on the top flange of the steel beam to form a structure behaving monolithically. This study
analyzes the effects of the tensile behavior of the slab on the structural behavior of the shear connection like slip
stiffness and maximum shear force in composite beams subjected to hogging moment. The results show that the
shear studs located in the crack-concentration zones due to large hogging moments sustain significantly smaller
shear force and slip stiffness than the other zones. Moreover, the reduction of the slip stiffness in the shear
connection appears also to be closely related to the change in the tensile strain of rebar according to the increase
of the load. Further experimental and analytical studies shall be conducted considering variables such as the
reinforcement ratio and the arrangement of shear connectors to achieve efficient design of the shear connection
in composite beams subjected to hogging moment.
Gold prospecting using Remote Sensing 'A case study of Sudan' - IJERD Editor
Gold has been extracted from northeast Africa for more than 5000 years, and this may be the first
place where the metal was extracted. The Arabian-Nubian Shield (ANS) is an exposure of Precambrian
crystalline rocks on the flanks of the Red Sea. The crystalline rocks are mostly Neoproterozoic in age. ANS
includes the nations of Israel, Jordan. Egypt, Saudi Arabia, Sudan, Eritrea, Ethiopia, Yemen, and Somalia.
Arabian Nubian Shield Consists of juvenile continental crest that formed between 900 550 Ma, when intra
oceanic arc welded together along ophiolite decorated arc. Primary Au mineralization probably developed in
association with the growth of intra oceanic arc and evolution of back arc. Multiple episodes of deformation
have obscured the primary metallogenic setting, but at least some of the deposits preserve evidence that they
originate as sea floor massive sulphide deposits.
The Red Sea Hills Region is a vast span of rugged, harsh and inhospitable sector of the Earth with
inimical moon-like terrain, nevertheless since ancient times it is famed to be an abode of gold and was a major
source of wealth for the Pharaohs of ancient Egypt. The Pharaohs old workings have been periodically
rediscovered through time. Recent endeavours by the Geological Research Authority of Sudan led to the
discovery of a score of occurrences with gold and massive sulphide mineralizations. In the nineties of the
previous century the Geological Research Authority of Sudan (GRAS) in cooperation with BRGM utilized
satellite data of Landsat TM using spectral ratio technique to map possible mineralized zones in the Red Sea
Hills of Sudan. The outcome of the study mapped a gossan-type gold mineralization. The band ratio technique was
applied to the Arbaat area and a signature of an alteration zone was detected. Such alteration zones are commonly
associated with mineralization. A field check confirmed the existence of a stockwork of gold-bearing quartz in
the alteration zone. Another type of gold
mineralization that was discovered using remote sensing is the gold associated with metachert in the Atmur
Desert.
Reducing Corrosion Rate by Welding DesignIJERD Editor
This document summarizes a study on reducing corrosion rates in steel through welding design. The researchers tested different welding groove designs (X, V, 1/2X, 1/2V) and preheating temperatures (400°C, 500°C, 600°C) on ferritic malleable iron samples. Testing found that X and V groove designs with 500°C and 600°C preheating had corrosion rates of 0.5-0.69% weight loss after 14 days, compared to 0.57-0.76% for 400°C preheating. Higher preheating reduced residual stresses which decreased corrosion. Residual stresses were 1.7 MPa for optimal X groove and 600°C
Router 1X3 – RTL Design and VerificationIJERD Editor
Routing is the process of moving a packet of data from source to destination and enables messages
to pass from one computer to another and eventually reach the target machine. A router is a networking device
that forwards data packets between computer networks. It is connected to two or more data lines from different
networks (as opposed to a network switch, which connects data lines from one single network). This paper
mainly emphasizes the study of the router device, its top-level architecture, and how the various sub-modules of
the router, i.e. the register, FIFO, FSM, and synchronizer, are synthesized, simulated, and finally connected to
the top module.
Active Power Exchange in Distributed Power-Flow Controller (DPFC) At Third Ha...IJERD Editor
This paper presents a component within the flexible ac-transmission system (FACTS) family, called
distributed power-flow controller (DPFC). The DPFC is derived from the unified power-flow controller (UPFC)
with an eliminated common dc link. The DPFC has the same control capabilities as the UPFC, which comprise
the adjustment of the line impedance, the transmission angle, and the bus voltage. The active power exchange
between the shunt and series converters, which is through the common dc link in the UPFC, is now through the
transmission lines at the third-harmonic frequency. The DPFC employs multiple small-size single-phase
converters, which reduces equipment cost, requires no voltage isolation between phases, and increases
redundancy and thereby reliability. The principle and analysis of the DPFC are presented in this paper, and the corresponding
simulation results that are carried out on a scaled prototype are also shown.
Mitigation of Voltage Sag/Swell with Fuzzy Control Reduced Rating DVRIJERD Editor
Power quality has become an increasingly pivotal issue from the industrial electricity consumer's point of
view in recent times. Modern industries employ sensitive power electronic equipment, control devices, and
non-linear loads as part of automated processes to increase energy efficiency and productivity. Voltage
disturbances are the most common power quality problem, owing to the growing use of large numbers of
sophisticated and sensitive electronic devices in industrial systems. This paper discusses the design and
simulation of a dynamic voltage restorer (DVR) for improving power quality and reducing the harmonic
distortion experienced by sensitive loads. Power quality problems arise from non-standard voltage, current, and
frequency, and electronic devices are very sensitive loads. In a power system, voltage sag, swell, flicker, and
harmonics are some of the problems affecting such loads. The compensation capability
of a DVR depends primarily on the maximum voltage injection ability and the amount of stored
energy available within the restorer. This device is connected in series with the distribution feeder at
medium voltage. A fuzzy logic control is used to produce the gate pulses for control circuit of DVR and the
circuit is simulated by using MATLAB/SIMULINK software.
Study on the Fused Deposition Modelling In Additive ManufacturingIJERD Editor
Additive manufacturing process, also popularly known as 3-D printing, is a process where a product
is created in a succession of layers. It is based on a novel materials incremental manufacturing philosophy.
Unlike conventional manufacturing processes, where material is removed from a given workpiece to derive the
final shape of a product, 3-D printing builds the product from scratch, obviating the need to cut away
material and preventing wastage of raw materials. Commonly used raw materials for the process are ABS
plastic, PLA, and nylon; recently the use of gold, bronze, and wood has also been implemented. The process
imposes practically no complexity constraint, in that an object of any shape and size can be manufactured.
Spyware triggering system by particular string valueIJERD Editor
This computer programme can be used for good or bad purposes, in hacking or in general use. It can be
seen as the next step beyond hacking techniques such as keyloggers and spyware. In this system, once a
user or hacker stores a particular string as input, the software continually compares the user's typing activity
with that stored string and, if they match, launches a spyware programme.
A Blind Steganalysis on JPEG Gray Level Image Based on Statistical Features a...IJERD Editor
This paper presents a blind steganalysis technique to effectively attack the JPEG steganographic
schemes Jsteg, F5, Outguess, and DWT-based. The proposed method exploits the correlations between
block-DCT coefficients from intra-block and inter-block relations, and the statistical moments of the
characteristic functions of the test image are selected as features. The features are extracted from the BDCT
JPEG 2-array. A Support Vector Machine with cross-validation is implemented for classification. The proposed
scheme gives improved outcomes in attacking these schemes.
Secure Image Transmission for Cloud Storage System Using Hybrid SchemeIJERD Editor
Data over the cloud is transferred between servers and users. The privacy of that data is very
important, as it contains personal information. If the data is hacked, it can be used to defame a person's
social standing. Delays can also occur during data transmission, e.g. in mobile communication where
bandwidth is low. Hence compression algorithms are proposed for fast and efficient transmission,
encryption is used for security, and blurring provides an additional layer of security. These algorithms are
hybridized to achieve robust and efficient security and transmission over a cloud storage system.
Application of Buckley-Leverett Equation in Modeling the Radius of Invasion i...IJERD Editor
A thorough review of existing literature indicates that the Buckley-Leverett equation only analyzes
waterflood practices directly without any adjustments on real reservoir scenarios. By doing so, quite a number
of errors are introduced into these analyses. Also, for most waterflood scenarios, a radial investigation is more
appropriate than a simplified linear system. This study investigates the adoption of the Buckley-Leverett
equation to estimate the radius invasion of the displacing fluid during waterflooding. The model is also adopted
for a Microbial flood and a comparative analysis is conducted for both waterflooding and microbial flooding.
Results from the analysis not only record success in determining the radial distance of the leading
edge of water during the flooding process, but also give a clearer understanding of the applicability of
microbes to enhance oil production through in-situ generation of bio-products such as biosurfactants, biogenic
gases, and bio-acids.
Gesture Gaming on the World Wide Web Using an Ordinary Web CameraIJERD Editor
Gesture gaming is a method by which users with a laptop/PC/Xbox play games using natural or
bodily gestures. This paper presents a way of playing free flash games on the internet using an ordinary webcam
with the help of open-source technologies. Emphasis in human activity recognition is placed on pose
estimation and consistency in the pose of the player. These are estimated with the help of an ordinary web
camera at various resolutions from VGA upward. Our work involved showing the user a 10-second documentary
on how to play a particular game using gestures and the various kinds of gestures that can be performed in
front of the system. The initial RGB values for the gesture component are obtained by instructing the user to
place the component in a red box for about 10 seconds after the short documentary finishes. The system then
opens the chosen game on popular flash game sites such as Miniclip, Games Arcade, or GameStop, loads it by
clicking in the appropriate places, and brings it to the state where the user only needs to perform gestures to
start playing. At any point the user can call off the game by hitting the Esc key, whereupon the program
releases all controls and returns to the desktop. The results obtained using an ordinary webcam were noted to
match those of the Kinect, and users could relive the gaming experience of free flash games on the net.
Effective in-game advertising could therefore also be achieved, resulting in disruptive growth for advertising firms.
Hardware Analysis of Resonant Frequency Converter Using Isolated Circuits And...IJERD Editor
The LLC resonant frequency converter is basically a combination of a series and a parallel resonant circuit.
The LCC resonant converter has the disadvantage that, although it has two resonant frequencies, the lower one
lies in the ZCS region [5], so for this application the converter cannot be designed to operate at that resonant
frequency. The LLC resonant converter has existed for a very long time, but because its characteristics were not
well understood it was used as a series resonant converter with an essentially passive (resistive) load. Here, it is
designed to operate at a switching frequency higher than the resonant frequency of the series resonant tank of
Lr and Cr, where the converter behaves very much like a series resonant converter. The benefit of the LLC
resonant converter is its narrow switching-frequency range under light load [6]. The control circuit plays a very
important role: the 555 timer used here provides a clean square wave, since the control circuit introduces no
slew rate, making the square wave sharp and robust. The dead-band circuit provides a dead band of a few
microseconds to avoid the simultaneous firing of the two pairs of IGBTs when one pair switches off and the
other switches on within a very short interval. An isolator circuit accompanies every board, acting as a driver,
and each IGBT is isolated by its own transformer supply [3]. The IGBTs are fired with the appropriate signals
from the preceding boards, and finally a high-frequency rectifier circuit with a filtering capacitor is used to
obtain a clean dc waveform. The basic goal of this analysis is to observe the waveforms and characteristics of
converters with differently positioned passive elements in the form of tank circuits.
Simulated Analysis of Resonant Frequency Converter Using Different Tank Circu...IJERD Editor
The LLC resonant frequency converter is basically a combination of a series and a parallel resonant circuit.
The LCC resonant converter has the disadvantage that, although it has two resonant frequencies, the lower one
lies in the ZCS region [5], so for this application the converter cannot be designed to operate at that resonant
frequency. The LLC resonant converter has existed for a very long time, but because its characteristics were not
well understood it was used as a series resonant converter with an essentially passive (resistive) load. Here, it is
designed to operate at a switching frequency higher than the resonant frequency of the series resonant tank of
Lr and Cr, where the converter behaves very much like a series resonant converter. The benefit of the LLC
resonant converter is its narrow switching-frequency range under light load [6]. The control circuit plays a very
important role: the 555 timer used here provides a clean square wave, since the control circuit introduces no
slew rate, making the square wave sharp and robust. The dead-band circuit provides a dead band of a few
microseconds to avoid the simultaneous firing of the two pairs of IGBTs when one pair switches off and the
other switches on within a very short interval. An isolator circuit accompanies every board, acting as a driver,
and each IGBT is isolated by its own transformer supply [3]. The IGBTs are fired with the appropriate signals
from the preceding boards, and finally a high-frequency rectifier circuit with a filtering capacitor is used to
obtain a clean dc waveform. The basic goal of this analysis is to observe the waveforms and characteristics of
converters with differently positioned passive elements in the form of tank circuits. The supporting simulation
is done with the PSIM 6.0 software tool.
An amateur radio operator, also known as a HAM, communicates with other HAMs through radio
waves. Wireless communication that uses the Moon as a natural satellite is called Moon-bounce or EME
(Earth-Moon-Earth). Long-distance communication (DXing) using Very High Frequency (VHF) amateur HAM
radio used to be difficult. Yet even with a modest setup comprising a good transceiver, a power amplifier, and a
high-gain antenna with high directivity, VHF DXing is possible. Generally a 2X11 YAGI antenna, along with a
rotor to set the horizontal and vertical angles, is used. Moon-tracking software gives the exact location and
visibility of the Moon at both stations, and other vital data, to acquire the real-time position of the Moon.
“MS-Extractor: An Innovative Approach to Extract Microsatellites on „Y‟ Chrom...IJERD Editor
Simple Sequence Repeats (SSR), also known as Microsatellites, have been extensively used as
molecular markers due to their abundance and high degree of polymorphism. The nucleotide sequences of
polymorphic forms of the same gene should be 99.9% identical, so microsatellite extraction from the gene is
crucial. When microsatellite repeat counts are compared, a large difference can indicate a disorder. The Y
chromosome likely contains 50 to 60 genes that provide instructions for making proteins. Because only males
have the Y chromosome, the genes on this chromosome tend to be involved in male sex determination and
development. Several Microsatellite Extractors exist and they fail to extract microsatellites on large data sets of
giga bytes and tera bytes in size. The proposed tool “MS-Extractor: An Innovative Approach to extract
Microsatellites on „Y‟ Chromosome” can extract both Perfect as well as Imperfect Microsatellites from large
data sets of human genome „Y‟. The proposed system uses string matching with sliding window approach to
locate Microsatellites and extracts them.
Importance of Measurements in Smart GridIJERD Editor
Driven by the need for reliable supply, independence from fossil fuels, and the capability to provide clean
energy at a fixed and lower cost, the existing power grid is transforming into the Smart Grid. The
development of a smart energy distribution grid is a current goal of many nations. A Smart Grid should have
new capabilities such as self-healing, high reliability, energy management, and real-time pricing. This new era
of smart future grid will lead to major changes in existing technologies at generation, transmission and
distribution levels. The incorporation of renewable energy resources and distribution generators in the existing
grid will increase the complexity, optimization problems and instability of the system. This will lead to a
paradigm shift in the instrumentation and control requirements of Smart Grids for a high-quality, stable, and
reliable supply of electric power. Monitoring of the grid's state and stability relies on the availability of
reliable measurement data. This paper discusses the measurement areas that present new measurement
challenges, the development of Smart Meters, and the critical parameters of electric energy to be monitored
for improving the reliability of power systems.
Study of Macro level Properties of SCC using GGBS and Lime stone powderIJERD Editor
The document summarizes a study on the use of ground granulated blast furnace slag (GGBS) and limestone powder to replace cement in self-compacting concrete (SCC). Tests were conducted on SCC mixes with 0-50% replacement of cement with GGBS and 0-20% replacement with limestone powder. The results showed that replacing 30% of cement with GGBS and 15% with limestone powder produced SCC with the highest compressive strength of 46MPa, meeting fresh property requirements. The study concluded that this ternary blend of cement, GGBS and limestone powder can improve SCC properties while reducing costs.
International Journal of Engineering Research and Development
e-ISSN: 2278-067X, p-ISSN: 2278-800X, www.ijerd.com
Volume 10, Issue 7 (July 2014), PP.68-75
Ranking Spatial Data by Quality Preferences
Satyanarayana Maddala1, Avala Atchyuta Rao2
1M.Tech II Year, Gokul Institute of Technology and Sciences, Bobbili, Vizianagaram, India
2Asst. Prof., Department of Computer Science and Engineering,
Gokul Institute of Technology and Sciences, Bobbili, Vizianagaram, India
Abstract:- Objects in the real world can be ranked based on the features in their spatial neighborhood using a
preference-based top-k spatial query. In this paper, a two-purpose query structure for satisfying user
requirements is implemented. For example, a user may wish to find a 3-star hotel that serves seafood and is
near an airport. This can be achieved by developing a system that takes a particular query as input and displays
a ranked set of the top-k best objects that satisfy the user's requirements. For that, an R-tree index and a
branch-and-bound (BB) search method are used for efficiently processing top-k spatial preference queries. The
R-tree is the first index specifically designed to handle multidimensional extended objects, and the branch and
bound (BB) algorithm makes searching easier, faster, and more accurate. The key idea is to compute upper-bound
scores for non-leaf entries in the object tree and prune those that cannot lead to better results. The
advantage of using this algorithm is that it reduces the number of objects to be examined.
Index Terms:- Query processing, spatial databases, R-tree.
I. INTRODUCTION
The management of large collections of geographical entities is possible with the help of spatial
database systems. Apart from spatial attributes, spatial database systems also contain non-spatial values such as
size, type, and price. This paper studies an interesting type of preference query, which selects the best spatial
object with respect to the quality of features in its spatial neighborhood. Given a set D of interesting objects, a
top-k spatial preference query [1] retrieves the k objects in D with the highest scores. The score of an object is
defined by the quality of features (facilities or services) in its spatial neighborhood. As an example, a user (e.g.,
a tourist) who wishes to find a food facility, be it a hotel or a restaurant, together with different types of
transport facility, can input these requirements as a spatial query. The score of each hotel 'P' is defined in terms
of (i) the maximum quality of each feature in the neighborhood region of the position of 'P' and (ii) the
aggregation of those feature qualities over the user's requirements.
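The score definition above can be made concrete with a small sketch. This is not the paper's implementation; the SUM aggregate, the fixed neighborhood radius, and the sample coordinates and qualities are illustrative assumptions:

```python
from math import hypot

def range_score(obj, feature_points, radius):
    """Best quality of any feature point within `radius` of the object."""
    qualities = [q for (x, y, q) in feature_points
                 if hypot(x - obj[0], y - obj[1]) <= radius]
    return max(qualities, default=0.0)

def aggregate_score(obj, feature_sets, radius):
    """Aggregate (here: SUM) the per-feature range scores of an object."""
    return sum(range_score(obj, fs, radius) for fs in feature_sets)

# One hotel at (1, 1); two feature sets, restaurants and transport stops,
# each point carrying a hypothetical quality in [0, 1].
restaurants = [(1.5, 1.0, 0.8), (5.0, 5.0, 0.9)]
transport = [(0.5, 1.5, 0.6)]
score = aggregate_score((1.0, 1.0), [restaurants, transport], radius=2.0)
# Only the nearby restaurant (quality 0.8) and the transport stop (0.6)
# fall inside the radius, so the SUM score aggregates those two.
```

The same structure supports MAX or MIN aggregation by swapping `sum` for `max` or `min`.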
This paper considers four types of ranking: (i) spatial ranking, where objects are ranked by their
distance from a reference point; (ii) non-spatial ranking, where objects are ranked by aggregating all non-spatial
values (size, price, type, etc.); (iii) neighbor retrieval: in spatial databases, ranking is often associated with nearest
neighbor (NN) retrieval [2]; given a query location, we are interested in retrieving the set of nearest objects to it
that satisfy a condition (e.g., restaurants), and assuming the set of interesting objects is indexed by an R-tree, we
apply distance bounds and traverse the index in a branch-and-bound fashion to obtain the answer; and (iv)
spatial query evaluation on the R-tree, the most popular spatial access method, which indexes the minimum
bounding rectangles (MBRs) of objects. The R-tree can efficiently process the main spatial query types, including
spatial range queries, nearest neighbor queries [3], and spatial joins. The top-k spatial preference query
integrates these four types of ranking in an intuitive way.
II. LITERATURE SURVEY
Two query processing algorithms were proposed to answer queries in the existing system: a threshold-based
method and one based on a hybrid index structure. They are MFA, a threshold-based algorithm, and ATRA,
an AIR-tree-based algorithm. In this paper, two search methods are proposed, the R-tree and an enhanced
branch and bound algorithm, building on "Preference based Top-k Spatial Keyword Queries", published in
2011 by Jinzeng Zhang, Dongqi Liu, and Xiaofeng Meng [4].
MFA (Multiple Feature Algorithm):
A PTkSK query processing method called the multiple feature algorithm, denoted MFA [2], is proposed,
which is based on the threshold algorithm, denoted TA. The TA algorithm is a typical method for addressing top-k
queries, returning the k tuples with the highest scores according to a monotone function. The PTkSK query is
partitioned into three features: the query location represents the spatial feature sf, the fuzzy constraints
correspond to the user-preference feature pf, and the query keywords correspond to the text feature tf. The
submitted query Q is thus transformed into a set of three features, and can be adjusted to some extent [5].
ALGORITHM: MFA
Input: Q: a PTkSK query;
k: a positive number of returned results;
Variables: GT: a global threshold;
BaScore: best aScore, aggregating the score of the current best object.
Output: R: the top-k objects satisfying Q;
1: Qf ← Transform(Q);
2: GT ← 0;
3: BaScore ← ∞;
4: R ← Null;
5: for each feature qi in Qf do
6:   ti ← 0;
7: for i from 1 to k do
8:   while (GT < BaScore) do
9:     for i from 1 to 3 do
10:      select query feature qi;
11:      get the match oj of qi;
12:      ti ← score of oj on feature qi;
13:      update GT;
14:      if GT ≥ BaScore then
15:        break;
16:    compute aScore(oj, Qf)
17:    if aScore(oj, Qf) < BaScore then
18:      cur-bestresult ← oj;
19:      BaScore ← aScore(oj, Qf);
20:    R ← R ∪ {oj};
21: return R
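For illustration, the threshold-algorithm pattern underlying MFA can be sketched as follows. This is a generic TA sketch, not the authors' exact MFA: each feature is assumed to expose a list sorted by descending score, sorted access proceeds round-robin, the dict lookups stand in for random access, and the aggregate is a SUM.

```python
import heapq

def threshold_topk(feature_lists, k):
    """Generic threshold algorithm (TA) sketch.

    Each feature list is sorted by descending score: [(object_id, score), ...].
    Sorted access proceeds round-robin over the lists; the dict lookups play
    the role of random access, fetching an object's remaining scores so that
    its aggregate (SUM) is exact the first time the object is seen. The loop
    stops once the threshold, the sum of the last-seen sorted-access scores,
    can no longer beat the current k-th best aggregate.
    """
    lookup = [dict(fl) for fl in feature_lists]   # random-access tables
    topk = []                                     # min-heap of (aggregate, id)
    seen = set()
    depth = 0
    while any(depth < len(fl) for fl in feature_lists):
        threshold = 0.0
        for fl in feature_lists:
            if depth >= len(fl):
                continue
            oid, s = fl[depth]
            threshold += s
            if oid not in seen:
                seen.add(oid)
                total = sum(tbl.get(oid, 0.0) for tbl in lookup)
                heapq.heappush(topk, (total, oid))
                if len(topk) > k:
                    heapq.heappop(topk)            # keep only the k best
        if len(topk) == k and topk[0][0] >= threshold:
            break                                  # early termination
        depth += 1
    return sorted(topk, reverse=True)              # best first
```

The early-termination test on line `if len(topk) == k and ...` is the "GT ≥ BaScore" check of the pseudocode above: no unseen object can score above the threshold, so the current top-k is final.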
ATRA (ATR Algorithm):
To reduce computational overhead, the ATRA algorithm, based on an effective hybrid indexing
structure called the AIR-tree (Attribute Inverted file R-tree), is used for query processing [2]. The MFA
algorithm may incur multiple accesses to the same nodes and retrieve the same data point through different
queries. To overcome this drawback, the AIR-tree is used to retrieve only those objects that contain some query
keywords and satisfy the user preference, which avoids checking objects irrelevant to the query (Fig 1). The
AIR-tree clusters spatially close objects together and carries textual information and attribute vectors in one
node. The attribute vectors are used in computing user-preference similarity. The AIR-tree can therefore
improve search efficiency for PTkSK queries [2].
Fig 1: AIR-tree
ALGORITHM: ATRA
Input: Q: a PTkSK query;
T: an AIR-tree;
k: a positive number of returned results.
Output: R: the top-k objects satisfying Q;
1: Qf ← Quant(Q);
2: U ← new min-priority queue;
3: U.Enqueue(T.root, 0);
4: while U is not empty do
5:   E ← U.Dequeue();
6:   if E is an object then
7:     R ← R ∪ E;
8:     if |R| = k then
9:       goto 16;
10:  else if E is an intermediate node then
11:    for each entry e in E do
12:      U.Enqueue(e, MINaScore(e, Qf));
13:  else if E is a leaf node then
14:    for each object o in E do
15:      U.Enqueue(o, aScore(o, Qf));
16: return R;
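The best-first traversal that ATRA performs can be sketched generically: a min-priority queue holds tree nodes keyed by a lower bound (MINaScore in the pseudocode) and objects keyed by their exact aScore. The node and object classes below are hypothetical stand-ins for AIR-tree entries, not the actual AIR-tree layout.

```python
import heapq
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class TNode:
    """Hypothetical AIR-tree-style node; `bound` must lower-bound the
    aScore of every object stored below this node."""
    bound: float
    children: List[Union['TNode', 'TObj']] = field(default_factory=list)

@dataclass
class TObj:
    name: str
    score: float    # the object's exact aScore

def best_first_topk(root, k):
    """Always expand the queue entry with the smallest key. Because node
    keys lower-bound their subtrees, an object popped from the queue cannot
    be beaten by anything still queued, so the first k objects popped are
    the final top-k (smallest aScore is best here, as in ATRA)."""
    queue = [(root.bound, 0, root)]
    tie = 1                          # tie-breaker: heapq must not compare nodes
    results = []
    while queue and len(results) < k:
        key, _, entry = heapq.heappop(queue)
        if isinstance(entry, TObj):
            results.append((entry.name, key))
            continue
        for child in entry.children:
            child_key = child.score if isinstance(child, TObj) else child.bound
            heapq.heappush(queue, (child_key, tie, child))
            tie += 1
    return results
```

Subtrees whose bound exceeds the keys of k pending objects are simply never dequeued, which is how the irrelevant objects are skipped.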
Limitations of this system:
1) The number of objects examined by these algorithms is large.
2) The upper-bound scores computed for non-leaf entries in the object tree are not accurate.
3) Building the AIR-tree structure takes more time.
4) The algorithm is more difficult to implement using this tree structure.
III. RESEARCH ELABORATION
A preference-based top-k spatial keyword query is proposed that returns a ranked set of the k best data
objects based on the scores of feature objects in their spatial neighborhood, satisfying the user's requirements
and needs. In order to answer PTkSK queries efficiently, an R-tree index structure is used, which combines
location proximity with preference similarity and textual relevance. A search algorithm called enhanced BB
(branch and bound) is also presented; spatial objects can be searched using this algorithm [6]. Here, a data
partitioning method, the R-tree index, is used.
A. R-TREE
R-trees are tree data structures used as a spatial access method, i.e., for indexing multi-dimensional
information such as geographical coordinates and rectangles. The R-tree was proposed by Antonin Guttman in
1984 and has found significant use in both theoretical and applied contexts [7]. A common real-world usage of
an R-tree is to store spatial objects such as restaurant locations or buildings and then quickly answer queries
such as "find all museums within 2 km of my current location", "retrieve all road segments within 2 km of my
location", or "find the nearest gas station". It essentially modifies the ideas of the B-tree to accommodate
extended spatial objects. The key idea of the R-tree is to group nearby objects and represent them with their
minimum bounding rectangle (MBR) at the next higher level of the tree.
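The MBR grouping at the heart of the R-tree reduces to a coordinate-wise min/max over the grouped points; a minimal sketch:

```python
# Minimal sketch: computing the MBR (minimum bounding rectangle) of a set
# of 2-D points, as used when grouping entries in an R-tree node.
def mbr(points):
    """Return ((xmin, ymin), (xmax, ymax)) enclosing all points."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys)), (max(xs), max(ys))
```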
B. Time Complexity of R-TREE
(i) If no MBRs overlap the query point q, the complexity is O(log_m N), where m is the node fan-out and N the number of objects.
(ii) If MBRs overlap q, the search may not be logarithmic; in the worst case, when all MBRs overlap q, it is O(N).
Fig 2: Spatial queries on R-Tree
For example, to get the ranked nearest object from the point q in an R-tree, minimum bounding
rectangles (MBRs) are first formed over the element collection p1 to p8. After ranking, p7 is the object that has
the user-specified features and the shortest distance from the position q, as shown in Fig 2.
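The ranking in Fig 2 follows the standard best-first search over MBRs: nodes are visited in order of mindist, the lower bound on the distance from q to anything inside a rectangle. A sketch over a one-level stand-in for an R-tree (the flat `leaves` list is a simplification for brevity, not the paper's index):

```python
# Best-first nearest-neighbour search over MBR leaves, using the standard
# mindist lower bound. `leaves` is a flat list of (rect, points) pairs,
# a one-level stand-in for a full R-tree.
import heapq
import math

def mindist(q, rect):
    """Lower bound on the distance from point q to any point inside rect,
    where rect = ((xmin, ymin), (xmax, ymax))."""
    (xmin, ymin), (xmax, ymax) = rect
    dx = max(xmin - q[0], 0, q[0] - xmax)
    dy = max(ymin - q[1], 0, q[1] - ymax)
    return math.hypot(dx, dy)

def nearest(q, leaves):
    """Return the point in `leaves` closest to q."""
    heap, tick = [], 0
    for rect, pts in leaves:
        heap.append((mindist(q, rect), tick, None, pts))
        tick += 1
    heapq.heapify(heap)
    while heap:
        d, _, pt, pts = heapq.heappop(heap)
        if pt is not None:
            return pt              # points pop in true distance order
        for p in pts:              # expand a leaf: push its actual points
            heapq.heappush(
                heap, (math.hypot(p[0] - q[0], p[1] - q[1]), tick, p, None))
            tick += 1
```

Because mindist never over-estimates, a leaf whose mindist exceeds the best distance found so far is never expanded, which is what makes the non-overlapping case logarithmic.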
C. Search Algorithms
The spatial objects can be searched using a search algorithm. In this algorithm, a data-partitioning
method, the R-tree index, is used. The basic search algorithm on R-trees, similar to search operations on B-trees,
traverses the tree from the root to its leaf nodes [8].
Branch and Bound algorithm:
The key idea is to compute, for each non-leaf entry E in the object tree D, a component score and an upper
bound T(E) of the score T(p) for any point p in the subtree of E. The algorithm uses two global variables: Wk, a
min-heap for managing the top-k results, and γ, the k-th best score so far (i.e., the lowest score in Wk). The
pseudo-code of the branch and bound algorithm (BB) is called with N being the root node of D. If N is a
non-leaf node, the scores T(E) for its non-leaf entries E can be computed concurrently. Recall that T(E) is an
upper-bound score for any point in the subtree of E. The techniques for computing T(E) will be discussed
shortly. The component score Tc(E) is the range score, taking the maximum quality of points. With the component
scores Tc(E) known so far, we can derive T+(E), an upper bound of T(E). If T+(E) ≤ γ, then the subtree of E
cannot contain better results than those in Wk, and it is removed from the set V. To obtain points with high
scores early, the entries are sorted in descending order of T(E) before the above procedure is invoked
recursively on the child nodes pointed to by the entries in V. If N is a leaf node, the scores of all points in N are
computed concurrently and the set Wk of the top-k results is updated. Since both Wk and γ are global variables,
the value of γ is updated during the recursive calls of BB. To improve the performance of the branch and bound
algorithm, an enhanced branch and bound algorithm is developed as follows.
Enhanced Branch and Bound algorithm
Algorithm: Enhanced Branch and Bound
Wk := new min-heap of size k (initially empty);
γ := 0;    // k-th score in Wk
1:  call the search algorithm;
    // take as input the search result E from the search algorithm
2:  V := {E | E ∈ N};    // V denotes the set in which entries are stored
3:  if N is non-leaf then
4:      for c := 1 to m do
5:          compute Tc(E) for all E ∈ V concurrently;
6:          remove entries E in V such that T+(E) ≤ γ;
7:      for each entry E ∈ V such that T(E) > γ do
8:          read the child node N pointed to by E;
9:          continue from step 2;
10: else
11:     for c := 1 to m do
12:         compute Tc(E) for all E ∈ V concurrently;
13:         remove entries E in V such that T+(E) ≤ γ;
14:     sort entries E ∈ V in descending order of T(E);
15:     update Wk (and γ) by the entries in V;
In the branch and bound algorithm, changes have been made in how the input values are obtained and in how
the entries are sorted, resulting in the enhanced branch and bound algorithm. The input values of enhanced BB
are the output of the search algorithm. Instead of sorting each node individually among its child nodes, the
entire set of tree nodes is sorted after this process is over. This reduces the time effectively and improves the
performance.
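The pruning idea shared by BB and enhanced BB (discard any subtree whose upper bound cannot exceed the current k-th best score γ) can be sketched as follows, with a hypothetical tree interface and score/upper_bound helpers standing in for the paper's T(p) and T+(E):

```python
# Sketch of branch-and-bound pruning over an object tree (hypothetical
# interface). upper_bound(node) must over-estimate score(p) for every
# point p under node, otherwise true results may be pruned away.
import heapq

def bb_topk(root, k, score, upper_bound):
    """Return the k highest-scoring (score, point) pairs under root."""
    wk = []                      # min-heap holding the best k (score, point)
    gamma = float('-inf')        # k-th best score found so far (the paper's γ)

    def visit(node):
        nonlocal gamma
        if node.is_leaf:
            for p in node.points:
                s = score(p)
                if len(wk) < k:
                    heapq.heappush(wk, (s, p))
                elif s > wk[0][0]:
                    heapq.heapreplace(wk, (s, p))
                if len(wk) == k:
                    gamma = wk[0][0]          # tighten the pruning threshold
        else:
            # descend into the most promising children first, so gamma
            # rises early and prunes later siblings
            for child in sorted(node.children, key=upper_bound, reverse=True):
                if upper_bound(child) > gamma:
                    visit(child)

    visit(root)
    return sorted(wk, reverse=True)
```

Note that gamma is re-read inside the loop over children, mirroring the text's observation that γ, being global, is updated during the recursive calls and so prunes siblings visited later.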
D. Model View Controller
Model-View-Controller, shown in Fig. 3, is a classical design pattern used in applications that need a
clean separation between their business logic and the views that present data. The MVC design pattern isolates
the application logic from the user interface and permits the individual development, testing, and maintenance of
each component. The pattern is divided into three parts called model, view, and controller. Model - This
component manages the information and notifies its observers when the information changes. It represents the
data on which the application operates. The model provides the persistent storage of data, which is manipulated
by the controller. In other words, the model is an object carrying data; it can also have logic to update the
controller if its data changes. In this project, Java classes are used to implement this component.
View - The view displays the data and also takes input from the user. The view represents the visualization of
the data that the model contains; it renders the model data into a form displayed to the user. Several views can
be associated with a single model. It is actually a representation of the model data. This component is
implemented in this project using Java Server Pages (JSP).
Controller - The controller handles all requests coming from the view or user interface. The data flow of the
whole application is controlled by the controller, which forwards each request to the appropriate handler. Only
the controller is responsible for accessing the model and rendering it into various UIs. The controller acts on
both model and view: it controls the data flow into the model object and updates the view whenever the data
changes, keeping view and model separate. This component is implemented using Java Server Pages and is
maintained in a package named process.
Fig 3: MVC Architecture
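The paper implements this pattern with Java classes and JSP; a language-neutral sketch in Python (class names and the rendered string are illustrative only) shows the separation of concerns described above:

```python
# Minimal MVC sketch (hypothetical classes, not the paper's Java/JSP code):
# the model holds data and notifies observers, the view renders, and the
# controller mediates all user input.
class Model:
    def __init__(self):
        self._data, self._observers = {}, []

    def attach(self, observer):
        self._observers.append(observer)

    def set(self, key, value):
        self._data[key] = value
        for obs in self._observers:   # notify views when the data changes
            obs.refresh(self)

    def get(self, key):
        return self._data.get(key)

class View:
    def __init__(self):
        self.last_render = None

    def refresh(self, model):
        # render model data into a displayable form
        self.last_render = "hotel = %s" % model.get("hotel")

class Controller:
    def __init__(self, model):
        self.model = model

    def handle_request(self, key, value):
        # all input flows through the controller into the model
        self.model.set(key, value)
```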
IV. SYSTEM ANALYSIS
System analysis is a process by which we attribute purposes or goals to a human activity, determine
how well those purposes are being achieved, and specify the requirements of the various tools and techniques
to be used within the system if the required performance is to be achieved.
A. EXPERIMENTAL EVALUATION
The efficiency of the proposed algorithms is compared using real and synthetic datasets. Each dataset is
indexed by an R-tree with a 4 KB page size. An LRU memory buffer is used whose default size is set to 0.5%
of the sum of the tree sizes (for the object and feature trees used). The algorithms were implemented in C++ and
the experiments were run on a Pentium D 2.8 GHz PC with 1 GB of RAM. In all experiments, both the I/O cost
(in number of page faults) and the total execution time (in seconds) of the algorithms are measured.
B. EXPERIMENTAL SETTINGS
Both real and synthetic data are used for the experiments. For each synthetic dataset, the coordinates of
points are random values uniformly and independently generated for the different dimensions. For a feature
dataset Fc, the qualities of its points are generated such that they simulate a real-world scenario: facilities close
to (far from) a town center often have high (low) quality. To this end, a single anchor point s* is selected such
that its neighborhood region contains a high number of points. Let distmin and distmax be the minimum and
maximum distance of a point in Fc from the anchor s*. Then, the quality of a feature point s is generated as:
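The source omits the exact generation formula. One plausible instantiation (an assumption, not necessarily the paper's formula) lets the quality decay linearly from 1 at distance distmin to 0 at distance distmax from the anchor:

```python
# Hypothetical synthetic-quality generator (assumed linear form, not taken
# from the source): quality is 1 at dist_min from the anchor, 0 at dist_max.
import math

def quality(s, anchor, dist_min, dist_max):
    """Quality of feature point s relative to the anchor point s*."""
    d = math.hypot(s[0] - anchor[0], s[1] - anchor[1])
    return 1.0 - (d - dist_min) / (dist_max - dist_min)
```

Any monotonically decreasing function of distance would serve the same purpose of making facilities near the town center high-quality.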
Range of parameter values
C. Performance on Queries with Range Scores
This section studies the performance of the algorithms for top-k spatial preference queries on range
scores [9]. The cost of the methods is mainly influenced by the effectiveness of pruning. BB employs an
effective technique to prune unqualified non-leaf entries in the object tree, so it outperforms GP. The
optimized score computation method enables BB* to save, on average, 20% of the I/O and 30% of the time of BB.
The four symbols used in the graphical representations stand for, from top to bottom, GP (Group Probing
algorithm), BB (Branch and Bound algorithm), BB* (Enhanced Branch and Bound algorithm), and FJ (Feature
Join algorithm).
Diagram 1:
Diagram 1 plots the cost of the algorithms as a function of the buffer size. As the buffer size increases, the I/O
of all algorithms drops. FJ remains the best method, BB* the second, and BB the third; all of them outperform
GP by a wide margin. Since the buffer size does not affect the pruning effectiveness of the algorithms, it has a
small impact on the execution time.
Diagram 2:
Diagram 2 compares the cost of the algorithms with respect to the object data size |D|. Since the cost of
FJ is dominated by the cost of joining feature datasets, it is insensitive to |D|. On the other hand, the cost of the
other methods (GP, BB, BB*) increases with |D|.
Diagram 3:
Diagram 3 plots the I/O cost of the algorithms with respect to the feature data size. As the size of the dataset
increases, the cost of GP, BB, and FJ increases. In contrast, BB* experiences a slight cost reduction, as its
optimized score computation method (for objects and non-leaf entries) is able to prune early at large dataset sizes.
V. FUTURE SCOPE
As a future scope, a study can be made on top-k spatial preference query on a road network, in which
the distance between two points is defined by their shortest path distance rather than their Euclidean distance.
The challenge is to develop alternative methods for computing the upper bound scores for a group of points on a
road network.
The other future developments for additional improvements are as follows:
1) User friendly interfaces can be improved.
2) Security features can be improved: By using additional authentication mechanisms for authenticating users
and registered objects.
VI. CONCLUSION
This paper presents a comprehensive study of top-k spatial preference queries, which provide a novel
type of ranking for spatial objects based on the qualities of features in their neighborhood. The neighborhood of
an object p is captured by the scoring function: (i) the range score restricts the neighborhood to a crisp region
centered at p, whereas (ii) the influence score relaxes the neighborhood to the whole space and assigns higher
weights to locations closer to p. An R-tree index structure and an enhanced branch and bound algorithm are
used to process top-k spatial preference queries, easily ranking the spatial data according to these qualities. The
proposed system is helpful for tourism development and travel management. Using the site, a user can search
for hotels, restaurants, and transport facilities in a city that satisfy his requirements, and he can choose a hotel or
a restaurant from a ranked list. This ranking method is effective and efficient in various applications.
REFERENCES
[1] M. L. Yiu, X. Dai, N. Mamoulis, and M. Vaitis, "Top-k Spatial Preference Queries," in ICDE, 2007.
[2] K. S. Beyer, J. Goldstein, R. Ramakrishnan, and U. Shaft, "When Is 'Nearest Neighbor' Meaningful?" in ICDT, 1999.
[3] Y. Chen and J. M. Patel, "Efficient Evaluation of All-Nearest-Neighbor Queries," in ICDE, 2007.
[4] J. Zhang, D. Liu, and X. Meng, "Preference Based Top-k Spatial Keyword Queries," in ACM, 2011.
[5] Y.-Y. Chen, T. Suel, and A. Markowetz, "Efficient Query Processing in Geographic Web Search Engines," in SIGMOD, 2006.
[6] E. Dellis, B. Seeger, and A. Vlachou, "Nearest Neighbor Search on Vertically Partitioned High-Dimensional Data," in DaWaK, 2005.
[7] A. Guttman, "R-Trees: A Dynamic Index Structure for Spatial Searching," in SIGMOD, 1984.
[8] S. Hong, B. Moon, and S. Lee, "Efficient Execution of Range Top-k Queries in Aggregate R-Trees," IEICE Transactions, 2005.
[9] I. F. Ilyas, W. G. Aref, and A. Elmagarmid, "Supporting Top-k Join Queries in Relational Databases," in VLDB, 2003.