Research Inventy : International Journal of Engineering and Science is published by a group of young academic and industrial researchers, with 12 issues per year. It is an open-access journal, available both online and in print, that provides rapid (monthly) publication of articles in all areas of the subject, such as civil, mechanical, chemical, electronic, and computer engineering, as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers are published within 20 days of acceptance, and the peer-review process takes only 7 days. All articles published in Research Inventy are peer-reviewed.
Ranking spatial data by quality preferences (ppt) - Saurav Kumar
A spatial preference query ranks objects based on the qualities of features in their spatial neighborhood. For example, using a real estate agency database of flats for lease, a customer may want to rank the flats with respect to the appropriateness of their location, defined after aggregating the qualities of other features (e.g., restaurants, cafes, hospitals, markets) within their spatial neighborhood. Such a neighborhood concept can be specified by the user via different functions. It can be an explicit circular region within a given distance from the flat. Another intuitive definition is to assign higher weights to the features based on their proximity to the flat. In this paper, we formally define spatial preference queries and propose appropriate indexing techniques and search algorithms for them. Extensive evaluation of our methods on both real and synthetic data reveals that an optimized branch-and-bound solution is efficient and robust with respect to different parameters.
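The range-based neighborhood described above can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: the data, the 2-kilometre-style radius, and the choice of max as the aggregate are all hypothetical, and a real system would use an index rather than a linear scan.

```python
import math

# Hypothetical data: flats and nearby features with quality scores in [0, 1].
flats = {"flat_a": (0.0, 0.0), "flat_b": (5.0, 5.0)}
features = [((0.5, 0.5), 0.9), ((1.0, 0.0), 0.6), ((5.2, 5.1), 0.4)]

def range_score(flat_xy, features, radius=2.0):
    """Aggregate (here: take the max of) the quality of features
    within `radius` of the flat; flats with no nearby feature score 0."""
    fx, fy = flat_xy
    in_range = [q for (x, y), q in features
                if math.hypot(x - fx, y - fy) <= radius]
    return max(in_range, default=0.0)

# Rank flats by the aggregated quality of their neighborhoods.
ranked = sorted(flats, key=lambda f: range_score(flats[f], features),
                reverse=True)
```

Swapping `max` for `sum`, or weighting each quality by its distance to the flat, gives the proximity-weighted variant mentioned above.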
Data mining is the process of discovering interesting patterns and knowledge from large amounts of data. Spatial databases store large amounts of space-related data, such as maps and preprocessed remote sensing or medical imaging data.
Modern mobile phones and mobile devices are equipped with GPS receivers, which is why location-based services have gained significant attention. These services generate large amounts of spatio-textual data, which contain both a spatial location and a textual description. The same spatio-textual object can have different representations because of GPS deviations or differing user descriptions. This calls for efficient methods to integrate spatio-textual data, a need met by the spatio-textual similarity join: given two sets of spatio-textual objects, it finds all similar pairs. A filter-and-refine framework will be developed to devise the algorithms. The prefix-filter technique will be extended to generate spatial and textual signatures, and inverted indexes will be built on top of these signatures. Candidate pairs will be found using these indexes; finally, the candidate pairs will be refined to obtain the result. An MBR-prefix-based signature will be used to prune dissimilar objects, and a hybrid signature will be used to support spatial and textual pruning simultaneously.
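The textual half of the filter step above follows the classic prefix-filter idea: for a Jaccard threshold t, two token sets can only be similar if their fixed-order prefixes share a token. The sketch below shows that filter alone (no spatial or MBR signature, and lexicographic order standing in for the usual frequency order); the record data is made up for illustration.

```python
import math
from collections import defaultdict

def prefix(tokens, t):
    """Prefix filter: any pair with Jaccard >= t must share a token in each
    set's prefix of length |x| - ceil(t * |x|) + 1, under a fixed global
    token order (lexicographic here; frequency order is typical)."""
    toks = sorted(tokens)
    k = len(toks) - math.ceil(t * len(toks)) + 1
    return toks[:k]

def candidate_pairs(records, t=0.6):
    """Build an inverted index over prefix tokens and collect candidate
    pairs; a real join would then refine these by exact similarity."""
    index = defaultdict(set)        # prefix token -> record ids seen so far
    cands = set()
    for rid, tokens in records.items():
        for tok in prefix(tokens, t):
            for other in index[tok]:
                cands.add((other, rid))
            index[tok].add(rid)
    return cands

cands = candidate_pairs({"a": {"cafe", "bar", "pub"},
                         "b": {"cafe", "bar", "tea"},
                         "c": {"zoo"}})
```

Only the pair ("a", "b") survives the filter; "c" shares no prefix token with either, so it is pruned without any similarity computation.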
Spatial databases are used to store geographic information. Queries on such databases include range queries, nearest neighbor queries, and spatial joins. Many indexing techniques are used for faster retrieval of data, of which R-trees are the most widely used; others include quad-trees and grid files. Spatial data is used in GIS applications.
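The two basic query types named above can be stated very compactly. This brute-force sketch (over made-up points) defines the semantics that an R-tree or quad-tree merely accelerates:

```python
import math

points = [(1, 2), (3, 4), (10, 10), (2, 1)]

def range_query(points, center, radius):
    """Range query: all points within `radius` of `center`."""
    cx, cy = center
    return [p for p in points if math.hypot(p[0] - cx, p[1] - cy) <= radius]

def nearest_neighbor(points, query):
    """Nearest neighbor query: the point minimizing Euclidean distance."""
    qx, qy = query
    return min(points, key=lambda p: math.hypot(p[0] - qx, p[1] - qy))
```

An index answers the same questions while visiting only the tree nodes whose bounding boxes can possibly contain a result.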
TYBSC IT PGIS Unit II Chapter I: Data Management and Processing Systems - Arti Parab Academics
Data Management and Processing Systems: Hardware and software trends. Geographic Information Systems: GIS software; GIS architecture and functionality; Spatial Data Infrastructure (SDI). Stages of spatial data handling: spatial data handling and preparation; spatial data storage and maintenance; spatial query and analysis; spatial data presentation. Database Management Systems: reasons for using a DBMS; alternatives for data management; the relational data model; querying the relational database. GIS and Spatial Databases: linking GIS and DBMS; spatial database functionality.
Data Entry and Preparation. Spatial data input: direct spatial data capture; indirect spatial data capture; obtaining spatial data elsewhere. Data quality: accuracy and positioning; positional accuracy; attribute accuracy; temporal accuracy; lineage; completeness; logical consistency. Data preparation: data checks and repairs; combining data from multiple sources. Point data transformation: interpolating discrete data; interpolating continuous data.
Spatial databases have become more and more popular in recent years, and there is growing commercial and research interest in location-based search over spatial databases. Spatial keyword search has been well studied for years due to its importance to commercial search engines. Specifically, a spatial keyword query takes a user location and user-supplied keywords as arguments and returns objects that are spatially and textually relevant to these arguments. Geo-textual indices play an important role in spatial keyword querying. A number of geo-textual indices have been proposed in recent years, mainly combining the R-tree (and its variants) with the inverted file. This paper proposes a new index structure that combines a k-d tree with an inverted file for spatial range keyword queries, returning the objects most spatially and textually relevant to the query point within a given range.
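The query the abstract describes can be illustrated with a deliberately simplified sketch. The objects and keywords are hypothetical, and the spatial step is a linear scan; in the proposed index that scan is what the k-d tree would prune.

```python
import math
from collections import defaultdict

# Hypothetical objects: id -> ((x, y), keywords).
objects = {
    1: ((1.0, 1.0), {"pizza", "cafe"}),
    2: ((2.0, 2.0), {"steak"}),
    3: ((9.0, 9.0), {"pizza"}),
}

# Inverted file: keyword -> ids of objects containing that keyword.
inverted = defaultdict(set)
for oid, (_, kws) in objects.items():
    for kw in kws:
        inverted[kw].add(oid)

def range_keyword_query(q_xy, keyword, radius):
    """Filter by text via the inverted file, then check the spatial range.
    A geo-textual index would prune the spatial check with a k-d tree."""
    qx, qy = q_xy
    hits = []
    for oid in inverted.get(keyword, ()):
        (x, y), _ = objects[oid]
        if math.hypot(x - qx, y - qy) <= radius:
            hits.append(oid)
    return sorted(hits)
```

Here a query for "pizza" within radius 3 of the origin returns only object 1; object 3 matches textually but falls outside the range.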
TYBSC IT PGIS Unit I Chapter I: Introduction to Geographic Information Systems - Arti Parab Academics
A Gentle Introduction to GIS. The nature of GIS: some fundamental observations; defining GIS; GISystems, GIScience, and GIApplications; spatial data and geoinformation. The real world and representations of it: models and modelling; maps; databases; spatial databases and spatial analysis.
Spatial data comprises objects in multi-dimensional space.
Storing spatial data in a standard database would require excessive amounts of space. Queries to retrieve and analyze spatial data from a standard database would be long and cumbersome, leaving a lot of room for error.
Spatial databases provide much more efficient storage, retrieval, and analysis of spatial data.
An extended database reverse engineering – a key for database forensic invest... - eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of Engineering and Technology.
An overlay operation is much more than a simple merging of linework; all the attributes of the features taking part in the overlay are carried through. In general, there are two methods for performing overlay analysis—feature overlay (overlaying points, lines, or polygons) and raster overlay. Some types of overlay analysis lend themselves to one or the other of these methods. Overlay analysis to find locations meeting certain criteria is often best done using raster overlay (although you can do it with feature data). Of course, this also depends on whether your data is already stored as features or raster. It may be worthwhile to convert the data from one format to the other to perform the analysis.
Weighted Overlay
Overlays several raster files using a common measurement scale and weights each according to its importance.
The weighted overlay table allows a multiple-criteria analysis to be calculated across several raster files.
Raster- The raster of the criteria being weighted.
Influence- The influence of the raster relative to the other criteria, as a percentage; the influences of all input rasters sum to 100.
Field- The field of the criteria raster to use for weighting.
Remap- The scaled weights for the criterion.
In addition to numerical values for the scaled weights in Remap, the following options are available:
Restricted- Assigns the restricted value (the minimum value of the evaluation scale set, minus one) to cells in the output, regardless of whether other input raster files have a different scale value set for that cell.
No data - Assigns No Data to cells in the output, regardless of whether other input raster files have a different scale value set for that cell.
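Setting aside the Restricted and No Data special cases, the core weighted-overlay computation is a cell-by-cell weighted sum of the remapped rasters. The sketch below uses two tiny hypothetical rasters already remapped to a common 1-5 scale:

```python
# Two criteria already remapped to a common 1-5 suitability scale
# (values are hypothetical).
slope    = [[5, 3], [1, 4]]
land_use = [[2, 4], [5, 1]]

def weighted_overlay(rasters, influences):
    """Cell-by-cell weighted sum of equally sized rasters.
    `influences` are percentages and must sum to 100."""
    assert sum(influences) == 100
    rows, cols = len(rasters[0]), len(rasters[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for raster, infl in zip(rasters, influences):
        for r in range(rows):
            for c in range(cols):
                out[r][c] += raster[r][c] * infl / 100.0
    return out

# Slope matters most here: 70% influence vs. 30% for land use.
result = weighted_overlay([slope, land_use], [70, 30])
```

For the top-left cell this gives 5 x 0.7 + 2 x 0.3 = 4.1; a GIS tool would additionally round to the output scale and apply the Restricted/No Data overrides per cell.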
This presentation is intended to help you perform the task step by step.
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering & Science are selected through rigorous peer reviews to ensure originality, timeliness, relevance, and readability.
International Journal of Engineering Research and Development - IJERD Editor
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture Engineering,
Aerospace Engineering.
Advanced Database Management System Concepts & Architecture - Vikas Jagtap
The data that indicates the Earth location (latitude and longitude, or height and depth) of rendered objects is known as spatial data.
When a map is rendered, this spatial data is used to project the locations of the objects onto a 2-dimensional piece of paper.
Spatial data management systems are designed to make the storage, retrieval, and manipulation of spatial data (i.e., points, lines, and polygons) easier and more natural for users of systems such as GIS.
While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types.
These additional types are typically called geometry or feature types.
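What such a geometry type adds, beyond plain numbers and strings, is spatial predicates and operations. The minimal sketch below (hypothetical class names; an axis-aligned rectangle standing in for a general polygon) shows the kind of containment predicate a spatial extension contributes to a DBMS:

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

@dataclass
class Rectangle:
    """A minimal polygon stand-in: an axis-aligned bounding rectangle."""
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def contains(self, p: Point) -> bool:
        # The kind of spatial predicate (cf. "contains"/"within")
        # that geometry types add on top of numeric and text columns.
        return self.xmin <= p.x <= self.xmax and self.ymin <= p.y <= self.ymax
```

In a spatially enabled database the same predicate appears as a query operator over geometry columns rather than as application code.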
Convincing a customer is always a challenging task in every business, and in online business it becomes even more difficult. Online retailers try everything possible to gain the customer's trust. One solution is to provide an area where existing users can leave their comments. This service can effectively build customer trust; however, customers normally comment on the product in their native language using Roman script. When there are hundreds of comments, this makes it difficult even for native customers to make a buying decision. This research proposes a system which extracts comments posted in Roman Urdu, translates them, finds their polarity, and then produces a rating of the product. This rating will help native and non-native customers make buying decisions efficiently from comments posted in Roman Urdu.
Efficiently searching nearest neighbor in documents using keywords - eSAT Journals
Abstract: Conventional spatial queries, such as range search and nearest neighbor retrieval, involve only conditions on objects' numerical properties. Today, many modern applications call for novel forms of queries that aim to find objects satisfying both a spatial predicate and a predicate on their associated texts. For example, instead of considering all the restaurants, a nearest neighbor query would instead ask for the restaurant that is the closest among those whose menus contain "steak, spaghetti, brandy" all at the same time. Currently the best solution to such queries is based on the IR2-tree, which has a few deficiencies that seriously impact its efficiency. Motivated by this, a new access method called the spatial inverted index is developed, which extends the conventional inverted index to cope with multidimensional data and comes with algorithms that can answer nearest neighbor queries with keywords in real time. As verified by experiments, the proposed techniques outperform the IR2-tree in query response time significantly, often by orders of magnitude. Keywords: information retrieval, spatial index, keyword search.
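The restaurant example can be sketched directly. This is a toy version of the idea only: real posting lists carry spatial information so distance can prune the scan, whereas here the keyword filter and the distance step are done naively over hypothetical data.

```python
import math
from collections import defaultdict

# Hypothetical restaurants: id -> ((x, y), menu keywords).
restaurants = {
    "r1": ((1.0, 1.0), {"steak", "spaghetti", "brandy"}),
    "r2": ((0.5, 0.5), {"steak"}),
    "r3": ((4.0, 4.0), {"steak", "spaghetti", "brandy"}),
}

# A much-simplified inverted index: keyword -> posting list of ids.
postings = defaultdict(set)
for rid, (_, kws) in restaurants.items():
    for kw in kws:
        postings[kw].add(rid)

def nn_with_keywords(q_xy, keywords):
    """Intersect posting lists for all keywords, then return the
    spatially nearest surviving object (or None if none qualifies)."""
    ids = set(restaurants)
    for kw in keywords:
        ids &= postings[kw]
    qx, qy = q_xy
    return min(ids, default=None,
               key=lambda i: math.hypot(restaurants[i][0][0] - qx,
                                        restaurants[i][0][1] - qy))
```

Queried from the origin with all three keywords, "r2" is pruned by the text predicate despite being closest, and "r1" wins among the survivors.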
IEEE PROJECTS 2015
1 crore projects is a leading Guide for ieee Projects and real time projects Works Provider.
It has been provided Lot of Guidance for Thousands of Students & made them more beneficial in all Technology Training.
Dot Net
DOTNET Project Domain list 2015
1. IEEE based on datamining and knowledge engineering
2. IEEE based on mobile computing
3. IEEE based on networking
4. IEEE based on Image processing
5. IEEE based on Multimedia
6. IEEE based on Network security
7. IEEE based on parallel and distributed systems
Java Project Domain list 2015
1. IEEE based on datamining and knowledge engineering
2. IEEE based on mobile computing
3. IEEE based on networking
4. IEEE based on Image processing
5. IEEE based on Multimedia
6. IEEE based on Network security
7. IEEE based on parallel and distributed systems
ECE IEEE Projects 2015
1. Matlab project
2. Ns2 project
3. Embedded project
4. Robotics project
Eligibility
Final Year students of
1. BSc (C.S)
2. BCA/B.E(C.S)
3. B.Tech IT
4. BE (C.S)
5. MSc (C.S)
6. MSc (IT)
7. MCA
8. MS (IT)
9. ME(ALL)
10. BE(ECE)(EEE)(E&I)
TECHNOLOGY USED AND FOR TRAINING IN
1. DOT NET
2. C sharp
3. ASP
4. VB
5. SQL SERVER
6. JAVA
7. J2EE
8. STRINGS
9. ORACLE
10. VB dotNET
11. EMBEDDED
12. MAT LAB
13. LAB VIEW
14. Multi Sim
CONTACT US
1 CRORE PROJECTS
Door No: 214/215,2nd Floor,
No. 172, Raahat Plaza, (Shopping Mall) ,Arcot Road, Vadapalani, Chennai,
Tamil Nadu, INDIA - 600 026
Email id: 1croreprojects@gmail.com
website:1croreprojects.com
Phone : +91 97518 00789 / +91 72999 51536
International Journal of Engineering Research and DevelopmentIJERD Editor
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
Performance Evaluation of Trajectory Queries on Multiprocessor and Clustercsandit
In this study, we evaluate the performance of trajectory queries that are handled by Cassandra, MongoDB, and PostgreSQL. The evaluation is conducted on a multiprocessor and a cluster. Telecommunication companies collect a lot of data from their mobile users. These data must be analysed in order to support business decisions, such as infrastructure planning. The optimal choice of hardware platform and database can differ from one query to another. We use data collected from Telenor Sverige, a telecommunication company that operates in Sweden. These data are collected every five minutes for an entire week in a medium-sized city. The execution time results show that Cassandra performs much better than MongoDB and PostgreSQL for queries that do not have spatial features. Stratio's Cassandra Lucene index incorporates a geospatial index into Cassandra, thus enabling Cassandra to perform similarly to MongoDB when handling spatial queries. Across four use cases, namely distance query, k-nearest neighbor query, range query, and region query, Cassandra performs much better than MongoDB and PostgreSQL for two, namely range query and region query. The scalability is also good for these two use cases.
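The four query types named in the abstract can be stated precisely as a brute-force correctness baseline, independent of any database. The point schema and data below are hypothetical, standing in for trajectory samples (user id, timestamp, x, y):

```python
import math

# Hypothetical trajectory points: (user_id, timestamp, x, y).
POINTS = [
    ("u1", 0, 0.0, 0.0),
    ("u1", 5, 1.0, 1.0),
    ("u2", 0, 5.0, 5.0),
    ("u2", 5, 2.0, 2.0),
]

def distance_query(q, d):
    """All points within Euclidean distance d of location q."""
    return [p for p in POINTS if math.dist(q, (p[2], p[3])) <= d]

def knn_query(q, k):
    """The k points nearest to location q."""
    return sorted(POINTS, key=lambda p: math.dist(q, (p[2], p[3])))[:k]

def range_query(t0, t1):
    """All points recorded in the time interval [t0, t1]."""
    return [p for p in POINTS if t0 <= p[1] <= t1]

def region_query(xmin, ymin, xmax, ymax):
    """All points inside an axis-aligned rectangular region."""
    return [p for p in POINTS if xmin <= p[2] <= xmax and ymin <= p[3] <= ymax]
```

A spatial index (such as the geospatial index the abstract mentions) changes how such queries are evaluated, not what they return, so definitions like these are useful for validating results across systems.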
Web users and content are increasingly being geo-positioned, and increased focus is being given to serving local content in response to web queries. This development calls for spatial keyword queries that take into account both the locations and textual descriptions of content. We study the efficient, joint processing of multiple top-k spatial keyword queries. Such joint processing is attractive during high query loads and also occurs when multiple queries are used to obfuscate a user's true query. We propose a novel algorithm and index structure for the joint processing of top-k spatial keyword queries. Empirical studies show that the proposed solution is efficient on real datasets. We also offer analytical studies on synthetic datasets to demonstrate the efficiency of the proposed solution.
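A minimal sketch of the joint-processing idea, assuming one shared scan of the objects serving every pending query and a simple linear combination of text relevance and spatial proximity; the scoring function, weights, and data here are illustrative, not the paper's:

```python
import math, heapq

# Hypothetical objects: (id, location, keyword set).
OBJECTS = [
    (1, (0.0, 0.0), {"cafe", "wifi"}),
    (2, (3.0, 4.0), {"cafe"}),
    (3, (1.0, 1.0), {"wifi", "bar"}),
]

def score(obj, q_loc, q_words, alpha=0.5):
    """Combined score: text overlap weighted against spatial proximity."""
    _, loc, words = obj
    text = len(words & q_words) / len(q_words)
    prox = 1.0 / (1.0 + math.dist(loc, q_loc))
    return alpha * text + (1 - alpha) * prox

def joint_topk(queries, k):
    """Answer several (location, keywords) queries in one shared scan."""
    heaps = [[] for _ in queries]
    for obj in OBJECTS:                      # one pass over the data for ALL queries
        for h, (q_loc, q_words) in zip(heaps, queries):
            heapq.heappush(h, (score(obj, q_loc, q_words), obj[0]))
            if len(h) > k:
                heapq.heappop(h)             # keep only the k best per query
    return [sorted(h, reverse=True) for h in heaps]
```

The point of joint processing is that the data (or index) is traversed once on behalf of all queries, which is what makes it attractive under high query loads.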
Study of the Class and Structural Changes Caused By Incorporating the Target ...ijceronline
When high-dimensional data are processed using various machine learning and pattern recognition techniques, they undergo several changes. Dimensionality reduction is one such successfully used pre-processing technique to analyze and represent high-dimensional data, and it causes several structural changes to occur in the data through the process. When high-dimensional data are used to extract just the target class from among several classes that are spatially scattered, the philosophy of dimensionality reduction is to find an optimal subset of features, either from the original space or from the transformed space, using the control set of the target class, and then project the input space onto this optimal feature subspace. This paper is an exploratory analysis carried out to study the class properties and the structural properties that are affected by the target-class-guided feature subsetting in particular. K-nearest neighbors and the minimum spanning tree are employed to study the structural properties, and cluster analysis is applied to understand the target class and other class properties. The experimentation is conducted on the target-class-derived features on selected benchmark data sets, namely the IRIS, AVIRIS Indiana Pine and ROSIS Pavia University data sets. Experimentation is also extended to data represented in the optimal principal components obtained by transforming the subset of features, and results are also compared.
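The two structural measures the abstract names, k-nearest neighbors and the minimum spanning tree, can be sketched in a few lines over a toy 2-D sample standing in for the reduced feature space (the data below are hypothetical; real studies would run this on the benchmark sets named above):

```python
import math

# Hypothetical 2-D sample standing in for dimensionality-reduced features:
# two tight clusters, far apart.
DATA = [(0.0, 0.0), (0.1, 0.1), (5.0, 5.0), (5.1, 5.0)]

def knn(i, k):
    """Indices of the k nearest neighbors of point i."""
    others = sorted((j for j in range(len(DATA)) if j != i),
                    key=lambda j: math.dist(DATA[i], DATA[j]))
    return others[:k]

def mst_weight():
    """Total weight of the minimum spanning tree (Prim's algorithm)."""
    in_tree, total = {0}, 0.0
    while len(in_tree) < len(DATA):
        d, j = min((math.dist(DATA[a], DATA[b]), b)
                   for a in in_tree for b in range(len(DATA))
                   if b not in in_tree)
        in_tree.add(j)
        total += d
    return total
```

A large MST weight relative to the k-NN distances, as here, signals well-separated clusters; changes in these quantities before and after feature subsetting are one way to quantify the structural changes the paper studies.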
PERFORMANCE EVALUATION OF SQL AND NOSQL DATABASE MANAGEMENT SYSTEMS IN A CLUSTERijdms
In this study, we evaluate the performance of SQL and NoSQL database management systems, namely Cassandra, CouchDB, MongoDB, PostgreSQL, and RethinkDB. We use a cluster of four nodes to run the database systems, with external load generators. The evaluation is conducted using data from Telenor Sverige, a telecommunication company that operates in Sweden. The experiments are conducted using three datasets of different sizes. The write throughput and latency as well as the read throughput and latency are evaluated for four queries, namely distance query, k-nearest neighbour query, range query, and region query. For write operations Cassandra has the highest throughput when multiple nodes are used, whereas PostgreSQL has the lowest latency and the highest throughput for a single node. For read operations MongoDB has the lowest latency for all queries. However, Cassandra has the highest throughput for reads. The throughput decreases as the dataset size increases for both write and read, for both sequential as well as random order access. However, this decrease is more significant for random read and write. In this study, we also present the experience we had with these different database management systems, including setup and configuration complexity.
Experimental Investigation of a Household Refrigerator Using Evaporative-Cool...inventy
The objective of this paper was to investigate experimentally the effect of an evaporative-cooled condenser in a household refrigerator. The experiment was done using HFC134a as the refrigerant. The performance of the household refrigerator with air-cooled and evaporative-cooled condensers was compared for different load conditions. The results indicate that the refrigerator performance improved when the evaporative-cooled condenser was used instead of the air-cooled condenser under all load conditions. The evaporative-cooled condenser reduced the energy consumption compared with the air-cooled condenser. There was also an enhancement in the coefficient of performance (COP) when the evaporative-cooled condenser was used instead of the air-cooled condenser. The evaporative-cooled heat exchanger was designed and the system was modified by retrofitting it in place of the conventional air-cooled condenser, producing drop-wise condensation using water and forced circulation over the condenser. From the experimental analysis it is observed that the COP of the evaporative-cooled system increased by 13.44% compared to that of the air-cooled system, so the overall efficiency and refrigerating effect are increased. With minimal construction, maintenance and running costs, the system is very useful for domestic purposes. This study also revealed that combining an evaporative-cooled system with a conventional water-cooled system, under the condition that the defrost water obtained from the freezer is used for drop-wise condensation over the condenser and the remaining defrost water is used for water-cooled condensation at the bottom of the condenser, would reduce the power consumption and work done, and hence further increase the refrigerating effect of the system. The study has shown that such a system is technically feasible and economically viable.
Copper Strip Corrosion Test in Various Aviation Fuelsinventy
This research work takes into account the corrosiveness testing of various aviation fuels in the state of Telangana (India). The purpose of this experiment is to determine the corrosiveness of the fuels, accomplished by the copper strip corrosion test: using the copper strip experiment we can determine the corrosive property of a fuel and hence the efficiency of the fuel. The research covers the importance of knowing the corrosive property of different petroleum fuels, including aviation turbine fuel.
Additional Conservation Laws for Two-Velocity Hydrodynamics Equations with th...inventy
A series of differential identities connecting the velocities, pressure and body force in the two-velocity hydrodynamics equations with equilibrium of pressure phases in the reversible hydrodynamic approximation is obtained.
Comparative Study of the Quality of Life, Quality of Work Life and Organisati...inventy
People’s lives are increasingly centred on work; they spend at least one-third of their time within the organisations that employ them. Investigating the factors that interfere with employees’ well-being and the organisational environment is becoming an increasing concern in organisations. This article identifies the criteria of the quality of life (QoL), quality of working life (QWL) and organisational climate instruments to point out their similarities. For bibliographic construction and data research, articles were sought in national and international journals, books and dissertations/articles in SciELO, Science Direct, Medline and Pub Med databases. The results show direct relationships amongst QoL, QWL and organisational climate instruments. The relationship between QoL and QWL instruments is based on fair compensation, social interaction, organisational communication, working conditions and functional capacity. QWL and organisational climate instruments are related through social interaction and interfaces. QoL and organisational climate instruments are related based on social interaction, organisational communication, and work conditions.
A Study of Automated Decision Making Systemsinventy
The decision-making processes of many operations depend on analysing very large data sets, previous decisions and their results. The information generated from the large data sets is used as an input for making decisions. Since the decisions to be taken in day-to-day operations are expanding, the time taken for manual decision making is also expanding. In order to reduce the time and cost and to increase the efficiency and accuracy, which are the most important things for customer satisfaction, many organisations are adopting automated decision-making systems. This paper is about the technologies used for automated decision-making systems and the areas in which automated decision systems work more efficiently and accurately.
Crystallization of L-Glutamic Acid: Mechanism of Heterogeneous β -Form Nuclea...inventy
The mechanism of heterogeneous nucleation of β-form L-glutamic acid was deeply investigated in cooling crystallization. The present study found that the β-form crystals were epitaxially grown on the α-form crystals, and that they were preferably crystallized on the (011) and (001) surfaces instead of the (111) surfaces of the α-form crystals. This result was explained via molecular simulation, which indicated that the different surfaces of the α-form crystals provided different functional groups, resulting in different sites for the heterogeneous nucleation of β-form crystals. Here, the functional groups were COO-, C=O and O-H on the (011) and (001) surfaces of the α-form crystals, respectively, while it was NH3+ on the (111) surfaces. As such, the degree of lattice matching (E) between the β-form crystals and the various surfaces of the α-form crystal was distinguished: the degrees of lattice matching between the β-form crystals and the (011), (001) and (111) surfaces of the α-form crystal were estimated as 5.30, 5.25 and 2.39, respectively, implying that the (011) and (001) surfaces were more favorable for generating the heterogeneous nucleation of β-form crystals than the (111) surfaces.
Evaluation of Damage by the Reliability of the Traction Test on Polymer Test ...inventy
In recent decades, polymers have undergone a remarkable historical development, and their use has become widespread, gradually dethroning most traditional materials. These polymer materials have always distinguished themselves by their simple shaping and inexpensive price, their versatility, lightness, and chemical stability. Despite their massive use in everyday life as well as in advanced technologies, these materials are still not fully understood, which requires a thorough knowledge of their chemical, physical, rheological and mechanical properties. In this paper, we study the mechanical behavior of an amorphous polymer, acrylonitrile butadiene styrene (ABS), by means of uniaxial tensile testing on pierced test pieces with notch lengths ranging from 1 to 14 mm. The proposed approach consists in analyzing the evolution of the global geometry of the obtained strain curves by taking into account the zones and characteristic points of these curves, as well as the effect of damage on the mechanical behavior of the ABS polymer, in order to visualize the evolution of the damage with a static model.
Application of Kennelly’model of Running Performances to Elite Endurance Runn...inventy
The model of Kennelly relating distance (Dlim) and exhaustion time (tlim) has been applied to the individual performances of 19 elite endurance runners (world-record holders and Olympic winners), from P. Nurmi (1920-1924) to M. Farah (2012), whose individual best performances over several different distances are known. Kennelly's model (Dlim = k tlim^γ) can describe the individual performances of elite runners with high accuracy (errors lower than 2%). There is a linear relationship between the parameters k and the exponents γ of the elite runners, and the extreme values correspond to S. Coe (k = 15.8; γ = 0.851) and E. Zatopek (k = 6.57; γ = 0.984). The exponent γ can be considered a dimensionless index of aerobic endurance which is close to 1 in the best endurance runners. If it is assumed that maximal aerobic speed can be maintained for 7 min by elite endurance runners, the exponent γ is equal to the normalized critical speed (critical speed/maximal aerobic speed) computed from exhaustion times of 3 and 12.5 min in these runners.
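A power law of this form is fitted to an individual runner's performances by linear regression in log-log space, since log Dlim = log k + γ·log tlim. The sketch below (using γ as the exponent symbol) runs on synthetic performances generated from the values reported for S. Coe (k = 15.8, γ = 0.851), not on real race data:

```python
import math

# Synthetic (distance in m, time in s) pairs generated from Coe's
# reported parameters, purely to exercise the fitting code.
PERFS = [(15.8 * t ** 0.851, t) for t in (240.0, 800.0, 1800.0)]

def fit_kennelly(perfs):
    """Fit D = k * t**g by least squares on log D = log k + g * log t."""
    xs = [math.log(t) for _, t in perfs]
    ys = [math.log(d) for d, _ in perfs]
    n = len(perfs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope g and intercept log k of the regression line.
    g = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    k = math.exp(my - g * mx)
    return k, g
```

On data that follow the model exactly, the regression recovers k and γ to floating-point precision; on real performances, the residuals correspond to the sub-2% errors the abstract reports.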
Development and Application of a Failure Monitoring System by Using the Vibra...inventy
In this project, a failure monitoring system is developed using the vibration and location information of balises in railway signalling. A lot of field equipment in railways loosens and breaks over time, and needs maintenance due to the vibrations caused by high-speed train traffic and the impact of railway vehicles. Among the field equipment, balises play a very important communication role in transmitting information to trains. In this scope, the aim is to make maintenance work more efficient, avoid delayed trains, and detect failure locations in advance and intervene in failures in a timely manner, by detecting and controlling balise conditions such as loosening, displacement and the data-consistency errors that arise from the balise's physical state. In this project, communication is provided with the I2C, Modbus RTU (Remote Terminal Unit) and RS485 standards, using Arduino Uno boards and MPU6050 IMU (Inertial Measurement Unit) sensors in the laboratory. Each sensor used is in slave mode, and the computer interface designed in C# is in master mode. Fault situations in the system are checked instantly by the interface (it is assumed that the IMU sensor and the Arduino circuit are mounted on the balise). Test results show that the interface responds to sensor movements instantly and that the system works well.
The Management of Protected Areas in Serengeti Ecosystem: A Case Study of Iko...inventy
The study assessed the management of protected areas in the Serengeti ecosystem using the case of the IGGRs. Specifically, the study aimed at identifying the strategies used for natural resources management; examining the impacts of those strategies; examining the hindrances to the identified strategies; and lastly, examining methods for scaling up the performance of the strategies used for natural resources in the study area. The study involved two villages among the 31 villages bordering the IGGRs, where in each village at least 5% of the households were sampled. Both primary and secondary data were collected and analyzed, both manually and by computer using SPSS software. The study revealed that the study population ranked the IGGRs' performance on protection of natural resources, especially on conserving wildlife for future generations and in reducing poaching, as good (53.3%). In addition, the relationship with the IGGRs was said to be considerably good (46.7%). In the aspect of reducing poaching, the findings show that poaching was reduced by 96.2% from 2009 to 2012. Furthermore, 81.4% of respondents said they use different strategies to control the loss of natural resources, which in turn has considerably improved the relationship between the protected areas and the surrounding communities in some aspects. Despite these successes, the study findings have revealed a number of challenges that hinder the full attainment of conservation objectives. Among the challenges are loss of life and property (86.4%) and shortage of water for livestock (68.9%), since water sources such as the Grumeti and Rubana rivers are within the protected area while the adjacent local communities do not have free access to them. Other challenges, especially for IGGRs management, include an insufficient funding base, working facilities and inadequate staffing.
Based on the above findings, the study concluded that the strategies used for natural resources management of protected areas in the Serengeti ecosystem are fairly sustainable but need functional participatory approaches involving local people and other stakeholders in order to bring about a collaborative natural resources management network in the ecosystem. Furthermore, based on the findings above, equity in sharing the benefits accrued from natural resource management in protected areas, more financial support to the IGGRs and the local community, the use of non-lethal deterrents for crop protection, integration of crop-livestock production systems, adoption of land use plans as a solution to land conflicts, strengthening of community-based conservation (CBC), and adoption of modern information technology such as geographical information systems (GIS) and remote sensing are recommended.
Size distribution and biometric relationships of little tunny Euthynnus allet...inventy
This study is based on data from the commercial fishing of the little tunny, Euthynnus alletteratus (Rafinesque, 1810), caught along the Algerian coast and sampled between November 2011 and April 2016. Data were collected in order to determine the size distributions of the population and the biometric relationships of the species, including the size-weight relationships. A total of 601 fish ranging from 30.9 to 103 cm fork length (FL) were observed. The size distribution of Euthynnus alletteratus shows multiple modal values, with the most important cohort corresponding to age class 2 (42-46 cm). The value of the allometric coefficient (b) of the FL/TW relationship is lower than 3, indicating negative allometric growth.
Removal of Chromium (VI) From Aqueous Solutions Using Discarded Solanum Tuber...inventy
Industrial polluting effluents containing heavy metals are of serious environmental concern in India. Chromium is frequently used in industries such as electroplating, metal finishing, cooling towers, dyes, paints, anodizing and leather tanning, and is found as traces in effluents finding their way to natural water bodies, causing hazardous toxicity to the health of humans, animals and aquatic life directly or indirectly. Many methods for the removal of chromium, such as chemical reduction, precipitation, ion exchange, electrochemical reduction, evaporation, reverse osmosis and adsorption using activated carbon, have been reported, but all are expensive and complicated to operate. Experimental practice reveals that adsorption by agricultural and horticultural wastes is a quite simple, inexpensive and efficient method. Agra is famous for potato farming; a lot of discarded potato waste from cold storages is thrown along roadside drains, generating solid waste which either creates a disposal problem or finds its way into the Yamuna river, resulting in high BOD and posing a serious threat to the aquatic environment. For developing countries like India, adsorption studies using discarded potato (Solanum tuberosum) waste from cold storages (DPWC), a solid waste, as a low-cost adsorbent for chromium removal are doubly beneficial: an ideal solution to the solid waste disposal problem of Agra, and removal of chromium from tannery effluents, thereby saving aquatic life from chromium contamination in the Yamuna river. Keeping this in view, batch experiments were designed to study the feasibility of using discarded potato waste from cold storages to remove chromium (VI) from aqueous solutions.
During the study, the various affecting parameters, such as pH, adsorbent dose, initial concentration, temperature, contact time, adsorbent grain size and start-up agitation speed, were optimized as 5.0, 10-20 g/l, 50 mg/l, 25 °C, 135 minutes, average size and 80 rpm respectively for chromium removal efficiency. Various isotherms such as the Langmuir, Freundlich and Tempkin were also fitted suitably, and the corresponding constants determined from these isotherms favor and support the adsorption. The thermodynamic constants ∆G, ∆H and ∆S were found to be 0.267 kJ/mole, 0.288 kJ/mole and 0.0013 kJ/mole respectively.
Effect of Various External and Internal Factors on the Carrier Mobility in n-...inventy
The effect of various external (temperature, electric field, light) and intracrystalline (doping, initial resistivity) factors on the carrier mobility in layered n-InSe semiconductors has been investigated experimentally. Scientific explanations of the results are proposed.
Transient flow analysis for horizontal axial upper-wind turbineinventy
This study carries out a transient flow-field analysis under the condition that the wind turbine is working to generate power; since the wind turbine's operating conditions change over time, the purpose of this study is to find the rule governing how the wind turbine changes over time. In the transient analysis, the wind velocity on the inlet boundary and the rotation speed in the rotor field change over time, and an analytical process is provided that can be used for future reference. At present, the wind turbine model is designed as an upwind horizontal-axis type. The engineering software GH Bladed is used to obtain the relationship between the rotor velocity and the wind. Then the ANSYS engineering software is used to calculate the stress and strain distribution in the blades over time. From the analytical results, the relationship between the stress distribution in the blades and the rotor velocity is obtained, to be used as a reference for future wind turbine structural optimization.
Choice of Numerical Integration Method for Wind Time History Analysis of Tall...inventy
Wind tunnel tests are performed routinely around the world for designing tall buildings, but the advent of powerful computational tools will make time-history analysis for wind more common in the near future. As the duration of wind storms ranges from tens of minutes to hours while earthquake durations are typically less than three to four minutes, the time step size (Δt) chosen for wind studies needs to be much larger, both to reduce the computational time and to save disk space. As the error in any numerical solution of the equation of motion depends on the step size (Δt), careful investigation of the choice of numerical integration methods for wind analyses is necessary. From the wide variety of integration methods available, it was decided to investigate three methods that seem appropriate for 3D time-history analysis of tall buildings for wind. These are modal time-history analysis, the Hilber-Hughes-Taylor (HHT) method or α-method with α = -0.1, and the Newmark method with β = 0.25 and γ = 0.5 (i.e., the trapezoidal rule). SAP2000, a common structural analysis software tool, and a 64-story structure are used to conduct all the analyses in this paper. A boundary layer wind tunnel (BLWT) pressure time history measured at 120 locations around the building envelope of a similar structure is used for the analyses. Analyses performed with both the HHT and Newmark methods considering P-delta effects show that second-order effects have a considerable impact on both displacement and acceleration response. This result shows that it is necessary to account for the P-delta effect in wind analysis of tall buildings. As the direct integration time-history analysis required very long computation times and very large computer physical memory for a wind duration of hours, a modal analysis with reduced stiffness is considered a good alternative.
For that purpose, a non-linear static analysis of the structure with a load combination of 1.0D + 1.0L is performed in SAP2000, and the reduced stiffness of the structure after the analysis is used to conduct an eigenvalue analysis to extract the mode shapes and frequencies of the structure. Then the first 20 modes are used to perform a modal time-history analysis for wind load. The result shows that the responses from the modal analysis with 20 modes (reduced stiffness) are comparable with those from the P-Δ analyses of the Newmark method.
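The Newmark method with β = 0.25 and γ = 0.5 (average acceleration, the trapezoidal rule named above) can be sketched for a single-degree-of-freedom system m·u'' + c·u' + k·u = p(t). This is the textbook effective-stiffness formulation, not the SAP2000 implementation, and the parameter values in the test are arbitrary:

```python
def newmark_sdof(m, c, k, load, dt, n, beta=0.25, gamma=0.5):
    """Newmark time stepping for m*u'' + c*u' + k*u = p(t), u(0)=u'(0)=0.

    Returns the displacement history [u_0, ..., u_n]."""
    u, v = 0.0, 0.0
    a = (load(0.0) - c * v - k * u) / m          # initial acceleration
    # Effective stiffness is constant for a fixed time step.
    keff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
    hist = [u]
    for i in range(1, n + 1):
        p = load(i * dt)
        # Effective load from the previous step's state.
        rhs = (p
               + m * (u / (beta * dt ** 2) + v / (beta * dt)
                      + (1 / (2 * beta) - 1) * a)
               + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        u_new = rhs / keff
        v_new = (gamma / (beta * dt) * (u_new - u)
                 + (1 - gamma / beta) * v + dt * (1 - gamma / (2 * beta)) * a)
        a_new = ((u_new - u) / (beta * dt ** 2) - v / (beta * dt)
                 - (1 / (2 * beta) - 1) * a)
        u, v, a = u_new, v_new, a_new
        hist.append(u)
    return hist
```

With β = 0.25 and γ = 0.5 the scheme is unconditionally stable and introduces no numerical damping, which is why the step size Δt can be chosen from accuracy considerations alone for long wind records; under a constant load, the damped response settles to the static displacement p/k.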
Impacts of Demand Side Management on System Reliability Evaluationinventy
Electricity demand in Saudi Arabia is steadily increasing, as electrical loads grow at a rate of about 7% per year. This represents a high rate by all standards, largely due to population growth as well as government subsidies, which may lead to prices much lower than the actual production cost. This growth represents a challenge that requires the Saudi Electricity Company (SEC) to invest huge amounts of money every year in the construction of additional generation capacity along with the reinforcement of the transmission network to meet the consumption growth. The demand also varies frequently throughout the day, causing a large part of the energy to be wasted. SEC believes the optimum solution lies in altering the load shape in order to achieve a better balance between customers' consumption and SEC's generation. This paper describes a method for improving power system reliability by shifting a portion of the peak load to off-peak periods. This load management scheme can be achieved by lifting the generation during off-peak periods and utilizing the stored energy during peak periods. A hybrid setup involving solar and wind energy along with batteries can also be used to store energy and utilize it during peak periods.
Reliability Evaluation of Riyadh System Incorporating Renewable Generationinventy
In this paper, the experience of Saudi Electricity Company (SEC) in analyzing the generation adequacy for Year 2013 is presented. This analysis is conducted by calculating several reliability indices for Riyadh system hourly load during all four seasonal periods. The reliability indices are gauged against the international utility practice. SEC also plans to introduce renewable energy into the network in order to secure the environmental standards and reduce fuel costs of conventional generation. Thus, the reliability improvement due to different integration levels of Solar and Wind generating sources has also been investigated. The capacity value provided by these variable renewable energy sources (VERs) to reliably meet the system load has been calculated using effective load carrying capability (ELCC) technique with a loss of load expectancy metric.
The effect of reduced pressure acetylene plasma treatment on physical charact...inventy
Capacitors are increasingly being used as energy storage devices in various power systems, and scientists around the world are trying to maximize the electrical capacity of supercapacitors. To achieve this purpose, numerous methods are used: the surface activation of electrodes, surface etching using an electron beam, electrode etching with various gas plasmas, etc. The purpose of this work is to research how the properties of carbon electrodes depend on the plasma parameters at which they were formed. The largest surface area of a carbon electrode, 47.25 m2/g, is obtained at an Ar/C2H2 gas ratio of 15. Meanwhile, the SEM images show that the disruption of structures with low bond energies and the formation of new ones take place when the carbon electrodes are etched in acetylene plasma. The capacitance measurements show that capacitors with treated electrodes have about 10-15% higher capacity than those not treated with acetylene plasma.
Experimental Investigation of Mini Cooler cum Freezerinventy
In general, a refrigerator can be converted into an air conditioner by attaching a fan; thus a cooler as well as a freezer is obtained in a single setup. The freezer is converted into an air conditioner when the outside air is allowed to flow past the cooling coil and is forced outside by an exhaust fan. In this case a mini-scale cooler cum freezer using R134a as the refrigerant was fabricated and tested. In our mini-project work we designed, fabricated and experimentally analysed a mini cooler cum freezer. From the observations and calculations, the results for the mini cooler cum freezer are obtained and compared.
Growth and Magnetic properties of MnGeP2 thin filmsinventy
We have successfully grown MnGeP2 thin films on GaAs (100) substrate. A ferromagnetic transition near 320 K has been observed by temperature dependent magnetization and resistance measurements. Field dependent magnetization experiments have shown that the coercive fields at 5, 250, and 300 K are 3870, 1380 and 155 Oe, respectively. Magnetoresistance and Hall measurements have displayed that hole conduction is dominant in MnGeP2. PACS: 75.50.Pp, 75.70.-i, 85.70.-w, 73.50.-h
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Research Inventy: International Journal Of Engineering And Science
Vol. 3, Issue 5 (July 2013), pp. 47-53
ISSN(e): 2278-4721, ISSN(p): 2319-6483, www.researchinventy.com

"Web Based Spatial Ranking System"

Mr. Vijayakumar Neela, Prof. Raafiya Gulmeher
(Dept. of Computer Science and Engineering, KBNCE, Gulbarga / VTU Belgaum, India)
ABSTRACT - A spatial preference query ranks objects based on the qualities of features in their spatial
neighborhood. For example, using a real estate agency database of flats for lease, a customer may want to rank
the flats with respect to the appropriateness of their location, defined after aggregating the qualities of other
features (e.g., restaurants, cafes, hospital, market, etc.) within their spatial neighborhood. Such a neighborhood
concept can be specified by the user via different functions. It can be an explicit circular region within a given
distance from the flat. Another intuitive definition is to assign higher weights to the features based on their
proximity to the flat. In this paper, we formally define spatial preference queries and propose appropriate indexing
techniques and search algorithms for them. Extensive evaluation of our methods on both real and synthetic data
reveals that an optimized branch-and-bound solution is efficient and robust with respect to different parameters.
KEYWORDS - Query processing, spatial databases
I. INTRODUCTION
Spatial database systems manage large collections of geographic entities, which apart from spatial
attributes contain non-spatial information (e.g., name, size, type, price, etc.). In this paper, we present
an interesting type of preference queries, which select the best spatial location with respect to the quality of
facilities in its spatial neighborhood. Given a set D of interesting objects (e.g., candidate locations), a top-k spatial
preference query retrieves the k objects in D with the highest scores. The score of an object is defined by the
quality of features (e.g., facilities or services) in its spatial neighborhood.
Traditionally, there are two basic ways for ranking objects:
• Spatial ranking, which orders the objects according to their distance from a reference point.
• Non-spatial ranking, which orders the objects by an aggregate function on their non-spatial values.
The top-k spatial preference query integrates these two types of ranking in an intuitive way. As
indicated by our examples, this new query has a wide range of applications in service recommendation and
decision support systems. To our knowledge, there is no existing efficient solution for processing the top-k spatial
preference query. A brute-force approach for evaluating it is to compute the scores of all objects in D and select
the top-k ones. This method, however, is expected to be very expensive for large input data sets. In this paper,
we propose alternative techniques that aim at minimizing the accesses to the object and feature data sets, while
being also computationally efficient. These techniques apply on spatial-partitioning access methods and compute
upper score bounds for the objects indexed by them, which are used to effectively prune the search space.
Specifically, we contribute the branch-and-bound (BB) algorithm and the feature join (FJ) algorithm for
efficiently processing the top-k spatial preference query. Furthermore, this paper studies three relevant
extensions that have not been investigated in our preliminary work [1]. The first extension is an optimized
version of BB that exploits a more efficient technique for computing the scores of the objects. The second
extension studies adaptations of the proposed algorithms for aggregate functions other than SUM, e.g., the
functions MIN and MAX. The third extension develops solutions for the top-k spatial preference query based on
the influence score. For example, consider a real estate agency office that holds a database with available flats
for lease. Here "feature" refers to a class of objects in a spatial map such as specific facilities or services. A
customer may want to rank the contents of this database with respect to the quality of their locations, quantified
by aggregating non-spatial characteristics of other features (e.g., restaurants, cafes, hospital, market, etc.) in the
spatial neighborhood of the flat (defined by a spatial range around it). Quality may be subjective and query-
parametric. As another example, the user (e.g., a tourist) wishes to find a hotel p that is close to a high-quality
restaurant and a high-quality cafe.
Fig. 1a shows the locations of an object data set D (hotels) in white, and two feature data sets: the set F1
(restaurants) in gray, and the set F2 (cafes) in black. Feature points are labeled by quality values that can be
obtained from rating providers (e.g., http://www.zagat.com). For the ease of discussion, the qualities are
normalized to values in [0, 1]. The score τ(p) of a hotel p is defined in terms of:
• The maximum quality for each feature in the neighborhood region of p.
• The aggregation of those qualities.
A simple score instance, called the range score, binds the neighborhood region to a circular region at p
with radius ε (shown as a circle), and the aggregate function to SUM. For instance, the maximum qualities of
gray and black points within the circle of p1 are 0.9 and 0.6, respectively, so the score of p1 is τ(p1) = 0.9 + 0.6 =
1.5. Similarly, we obtain τ(p2) = 1.0 + 0.1 = 1.1 and τ(p3) = 0.7 + 0.7 = 1.4. Hence, the hotel p1 is returned as the
top result. The semantics of the aggregate function is relevant to the user’s query. The SUM function attempts to
balance the overall qualities of all features. For the MIN function, the top result becomes p3, with the score
τ(p3) = min{0.7, 0.7} = 0.7. It ensures that the top result has reasonably high qualities in all features. For the
MAX function, the top result is p2, with score τ(p2) = max{1.0, 0.1} = 1.0. It is used to optimize the quality
in a particular feature, but not necessarily all of them. The neighborhood region in the above spatial preference
query can also be defined by other score functions, such as the influence score. As opposed to the crisp radius ε
constraint in the range score, the influence score smoothens the effect of ε and assigns higher weights to cafes
that are closer to the hotel.
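The range score above is easy to state in code. The following is a minimal sketch (not the paper's implementation): per feature data set, take the maximum quality within distance ε of the object, then aggregate with SUM, MIN, or MAX. The coordinates and qualities below are hypothetical stand-ins loosely mirroring Fig. 1a.

```python
import math

def range_score(p, feature_sets, eps, agg=sum):
    """Range score of object p: aggregate, over each feature data set,
    the maximum quality among feature points within distance eps of p.
    Each feature set is a list of ((x, y), quality) pairs."""
    components = []
    for features in feature_sets:
        qualities = [w for (loc, w) in features
                     if math.dist(p, loc) <= eps]
        # A feature set with no point in range contributes quality 0.
        components.append(max(qualities, default=0.0))
    return agg(components)

# Hypothetical data: restaurants (F1) and cafes (F2).
restaurants = [((1.0, 1.0), 0.9), ((4.0, 4.0), 1.0), ((7.0, 1.0), 0.7)]
cafes       = [((1.2, 0.8), 0.6), ((4.2, 3.8), 0.1), ((6.8, 1.2), 0.7)]

p1 = (1.0, 1.0)
print(range_score(p1, [restaurants, cafes], eps=1.0))           # SUM: 0.9 + 0.6
print(range_score(p1, [restaurants, cafes], eps=1.0, agg=min))  # MIN: 0.6
print(range_score(p1, [restaurants, cafes], eps=1.0, agg=max))  # MAX: 0.9
```

Passing the aggregate as a function mirrors the paper's treatment of SUM, MIN, and MAX as interchangeable monotone aggregates.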
Fig. 1b shows a hotel p5 and three cafes s1, s2, s3. The influence score discounts the quality of each cafe
by the weight 2^(-j), where j is the order of the smallest circle (centered at p5) containing it. For example, the
discounted qualities of s1, s2, and s3 are 0.3/2^1 = 0.15, 0.9/2^2 = 0.225, and 1.0/2^3 = 0.125, respectively. The
influence score of p5 is taken as the highest of these values (0.225).
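Using the circle-order discounting described above, the influence score can be sketched as follows. The function name and the toy coordinates are assumptions for illustration, not the paper's code; the qualities reproduce the 0.15 / 0.225 / 0.125 example.

```python
import math

def influence_score(p, features, eps):
    """Influence score of p: each feature point with quality w is
    discounted by 2**-j, where j is the order of the smallest circle
    (radius j * eps) around p that contains it; the score is the best
    discounted quality."""
    best = 0.0
    for (loc, w) in features:
        d = math.dist(p, loc)
        j = max(1, math.ceil(d / eps))  # order of the smallest enclosing circle
        best = max(best, w / 2 ** j)
    return best

# Cafes s1, s2, s3 placed in circles 1, 2, 3 of p5 (hypothetical layout).
p5 = (0.0, 0.0)
cafes = [((0.5, 0.0), 0.3),   # circle 1: 0.3 / 2 = 0.15
         ((1.5, 0.0), 0.9),   # circle 2: 0.9 / 4 = 0.225
         ((2.5, 0.0), 1.0)]   # circle 3: 1.0 / 8 = 0.125
print(influence_score(p5, cafes, eps=1.0))  # 0.225
```

Note how the farther, higher-quality cafe s3 loses to s2 after discounting, which is exactly the smoothing effect described in the text.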
II. RELATED WORK
Object ranking is a popular retrieval task in various applications. In relational databases, tuples are ranked
using an aggregate score function on their attribute values. For example, a real estate agency maintains a
database that contains information of flats available for rent. A potential customer wishes to view the top 10
flats with the largest sizes and lowest prices. In this case, the score of each flat is expressed by the sum of two
qualities: size and price, after normalization to the domain [0, 1] (e.g., 1 means the largest size and the lowest
price). In spatial databases, ranking is often associated with nearest neighbor (NN) retrieval. Given a query
location, we are interested in retrieving the set of nearest objects to it that satisfy a condition (e.g., restaurants).
Assuming that the set of interesting objects is indexed by an R-tree, we can apply distance bounds and traverse
the index in a branch-and-bound fashion to obtain the answer.
Nevertheless, it is not always possible to use multidimensional indexes for top-k retrieval. First, such
indexes break down in high-dimensional spaces. Second, top-k queries may involve an arbitrary set of user-
specified attributes (e.g., size and price) from possible ones (e.g., size, price, distance to the beach, number of
bedrooms, floor, etc.) and indexes may not be available for all possible attribute combinations (i.e., they are too
expensive to create and maintain). Third, information for different rankings to be combined (i.e., for different
attributes) could appear in different databases (in a distributed database scenario) and unified indexes may not
exist for them. Solutions for top-k queries [2] focus on the efficient merging of object rankings that may arrive
from different (distributed) sources. Their motivation is to minimize the number of accesses to the input
rankings until the objects with the top k aggregate scores have been identified. To achieve this, upper and lower
bounds for the objects seen so far are maintained while scanning the sorted lists. We first review the R-tree,
which is the most popular spatial access method, and the NN search algorithm of [4]. Then, we survey recent
research on feature-based spatial queries.
A. Spatial Query Evaluation on R-Trees
The most popular spatial access method is the R-tree, which indexes minimum bounding rectangles
(MBRs) of objects.
Fig. 2 shows a set D = {p1, ..., p8} of spatial objects (e.g., points) and an R-tree that indexes them. R-trees can
efficiently process the main spatial query types, including spatial range queries, nearest neighbor queries, and
spatial joins. Given a spatial region W, a spatial range query retrieves from D the objects that intersect W. For
instance, consider a range query that asks for all objects within the shaded area in Fig. 2. Starting from the root
of the tree, the query is processed by recursively following entries having MBRs that intersect the query region.
For instance, e1 does not intersect the query region, thus the subtree pointed by e1 cannot contain any query
result. In contrast, e2 is followed by the algorithm and the points in the corresponding node are examined
recursively to find the query result p7.
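The recursive pruning described above can be sketched with a toy nested-list tree standing in for a real R-tree; the coordinates are hypothetical, not those of Fig. 2.

```python
def intersects(a, b):
    """Do two axis-aligned rectangles (xmin, ymin, xmax, ymax) intersect?"""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def range_query(node, window, out):
    """Recursive R-tree range search: follow only entries whose MBR
    intersects the query window. A node is a list of entries; a non-leaf
    entry is (mbr, child_node), a leaf entry is (mbr, point_id)."""
    for mbr, child in node:
        if not intersects(mbr, window):
            continue  # the whole subtree is pruned, like e1 in Fig. 2
        if isinstance(child, list):
            range_query(child, window, out)   # descend into the subtree
        else:
            out.append(child)                 # report a qualifying point

# A toy two-level tree (hypothetical coordinates).
leaf1 = [((0, 0, 1, 1), "p1"), ((2, 2, 3, 3), "p2")]
leaf2 = [((8, 8, 9, 9), "p7"), ((6, 6, 7, 7), "p8")]
root = [((0, 0, 3, 3), leaf1), ((6, 6, 9, 9), leaf2)]

result = []
range_query(root, (7.5, 7.5, 9.5, 9.5), result)
print(result)  # ['p7']
```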
A nearest neighbor query takes as input a query object q and returns the closest object in D to q. For
instance, the nearest neighbor of q in Fig. 2 is p7. Its generalization is the k-NN query, which returns the k
closest objects to q, given a positive integer k. NN (and k-NN) queries can be efficiently processed using the
best-first (BF) algorithm of [4], provided that D is indexed by an R-tree. A min-heap H, which organizes R-tree
entries based on the (minimum) distance of their MBRs to q, is initialized with the root entries. In order to find
the NN of q in Fig. 2, BF first inserts to H the entries e1, e2, e3 and their distances to q. Then, the nearest entry
e2 is retrieved from H and the objects p1, p2, p3 are inserted to H. The next nearest entry in H is p7, which is the
nearest neighbor of q. In terms of I/O, the BF algorithm is shown to be no worse than any NN algorithm on the
same R-tree [4].
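A minimal sketch of the best-first traversal with Python's `heapq` follows; the tree layout is hypothetical, and a real implementation would also manage node I/O.

```python
import heapq, itertools, math

def mindist(mbr, q):
    """Minimum distance from point q to rectangle (xmin, ymin, xmax, ymax)."""
    dx = max(mbr[0] - q[0], 0, q[0] - mbr[2])
    dy = max(mbr[1] - q[1], 0, q[1] - mbr[3])
    return math.hypot(dx, dy)

def best_first_nn(root, q):
    """Best-first NN search: a min-heap orders entries by the mindist of
    their MBRs to q; the first point popped is the nearest neighbor."""
    counter = itertools.count()           # tie-breaker for equal distances
    heap = [(mindist(mbr, q), next(counter), child) for mbr, child in root]
    heapq.heapify(heap)
    while heap:
        d, _, child = heapq.heappop(heap)
        if isinstance(child, list):       # non-leaf: expand its entries
            for mbr, grandchild in child:
                heapq.heappush(heap, (mindist(mbr, q), next(counter), grandchild))
        else:
            return child, d               # first point popped is the NN

# Toy tree (hypothetical layout): points have degenerate MBRs.
leaf_a = [((1, 1, 1, 1), "p1"), ((2, 3, 2, 3), "p2")]
leaf_b = [((8, 8, 8, 8), "p7"), ((9, 6, 9, 6), "p8")]
root = [((1, 1, 2, 3), leaf_a), ((8, 6, 9, 8), leaf_b)]

print(best_first_nn(root, (7.5, 7.5))[0])  # p7
```

Because mindist of an MBR lower-bounds the distance to every point inside it, the first point popped off the heap cannot be beaten by anything still enqueued, which is why BF is I/O-optimal.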
The aggregate R-tree (aR-tree) [6] is a variant of the R-tree, where each non-leaf entry augments an
aggregate measure for some attribute value (measure) of all points in its subtree. As an example, the tree shown
in Fig. 2 can be upgraded to a MAX aR-tree over the point set, if the entries e1, e2, e3 contain the maximum
measure values of the sets {p2, p3}, {p1, p8, p7}, and {p4, p5, p6}, respectively. Assume that the measure values of
p4, p5, p6 are 0.2, 0.1, 0.4, respectively. In this case, the aggregate measure augmented in e3 would be
MAX{0.2, 0.1, 0.4} = 0.4. In this paper, we employ MAX aR-trees for indexing the feature data sets (e.g.,
restaurants), in order to accelerate the processing of top-k spatial preference queries.
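The MAX aggregate stored in a non-leaf entry is simply the maximum measure over its subtree, as a small sketch shows (nested lists stand in for tree nodes; the measures are the ones assumed in the text's example).

```python
def max_aggregate(node):
    """Bottom-up MAX aggregation for an aR-tree sketch: a leaf entry is
    (point_id, measure); a non-leaf entry is a list of child entries.
    Returns the MAX measure of all points under the node, i.e., the value
    a parent entry would store alongside its MBR."""
    best = 0.0
    for entry in node:
        if isinstance(entry, list):
            best = max(best, max_aggregate(entry))  # aggregate of a subtree
        else:
            best = max(best, entry[1])              # a point's own measure
    return best

# Entry e3 of Fig. 2 with the measures assumed in the text.
e3 = [("p4", 0.2), ("p5", 0.1), ("p6", 0.4)]
print(max_aggregate(e3))  # 0.4
```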
Given a feature data set F and a multidimensional region R, the range top-k query of [7] selects the tuples
(from F) within the region R and returns only those with the k highest qualities. The authors index the data set
by a MAX aR-tree and develop an efficient tree traversal algorithm to answer the query. Instead of finding the
best k qualities from F in a specified region, our (range score) query considers multiple spatial regions based on
the points from the object data set D, and attempts to find out the best k regions (based on scores derived from
multiple feature data sets Fc).
B. Feature-Based Spatial Queries
Prior work solved the problem of finding top-k sites based on their influence on feature points.
As an example, Fig. 3a shows a set of sites (white points) and a set of features (black points with weights), such
that each line links a feature point to its nearest site. The influence of a site p1 is defined by the sum of weights
of feature points having p1 as their closest site. For instance, the score of p1 is 0.9 + 0.5 = 1.4. Similarly, the scores
of p2 and p3 are 1.5 and 1.2, respectively. Hence, p2 is returned as the top-1 influential site.
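The influence of a site, as defined above, can be sketched by assigning each feature point to its nearest site and summing weights. The layout below is hypothetical but reproduces the 0.9 + 0.5 = 1.4 example for p1.

```python
import math

def influence_of_sites(sites, features):
    """Influence of each site: the sum of weights of the feature points
    whose nearest site it is. sites: {name: (x, y)};
    features: list of ((x, y), weight)."""
    influence = {name: 0.0 for name in sites}
    for loc, w in features:
        nearest = min(sites, key=lambda s: math.dist(sites[s], loc))
        influence[nearest] += w          # each feature counts toward one site
    return influence

# Hypothetical layout: p1 collects weights 0.9 and 0.5; p2 collects 1.5.
sites = {"p1": (0.0, 0.0), "p2": (5.0, 0.0)}
features = [((0.5, 0.0), 0.9), ((-0.5, 0.0), 0.5), ((5.2, 0.0), 1.5)]
scores = influence_of_sites(sites, features)
print(max(scores, key=scores.get))  # p2, the top-1 influential site
```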
Related to the top-k influential sites query are the optimal location queries studied in [8] and [9]. The goal
is to find the location in space (not chosen from a specific set of sites) that optimizes an objective
function.
In Figs. 3b and 3c, feature points and existing sites are shown as black and gray points, respectively.
Assume that all feature points have the same quality. The maximum influence optimal location query [8] finds
the location (to insert to the existing set of sites) with the maximum influence, whereas the minimum distance
optimal location query [9] searches for the location that minimizes the average distance from each feature point
to its nearest site. The optimal locations for both queries are marked as white points in Figs. 3b and 3c,
respectively. The techniques proposed in [8] and [9] are specific to the particular query types described above
and cannot be extended for our top-k spatial preference queries. Also, they deal with a single feature data set,
whereas our queries consider multiple feature data sets. Recently, novel spatial queries and joins [10], [11]
have been proposed for various spatial decision support problems. However, they do not utilize non-spatial
qualities of facilities to define the score of a location. Finally, [12] and [13] studied the evaluation of textual
location-based queries on spatial objects.
III. SPATIAL PREFERENCE QUERIES
A. Definitions and Index Structures
Let Fc be a feature data set, in which each feature object s ∈ Fc is associated with a quality w(s) and a
spatial point. We assume that the domain of w(s) is the interval [0, 1]. As an example, the quality w(s) of a
restaurant s can be obtained from a ratings provider. We proceed to elaborate on the aggregate function and the
component score function. Typical examples of the aggregate function AGG are SUM, MIN, and MAX.
We first focus on the case where AGG is SUM, and later discuss the generic scenario where AGG is
an arbitrary monotone aggregate function.
In this paper, we assume that the object data set D is indexed by an R-tree and each feature data set Fc
is indexed by a MAX aR-tree, where each non-leaf entry augments the maximum quality (of features) in its
subtree. Nevertheless, our solutions are directly applicable to data sets that are indexed by other hierarchical
spatial indexes (e.g., point quad-trees).
The rationale of indexing different feature data sets by separate aR-trees is that:
• A user queries for only a few features (e.g., restaurants and cafes) out of all possible features (e.g.,
restaurants, cafes, hospital, market, etc.).
• Different users may consider different subsets of features.
The branch-and-bound (BB) algorithm can significantly reduce the number of objects to be examined. The key idea
is to compute, for each non-leaf entry e in the object tree D, an upper bound T(e) of the score τ(p) for any point p
in the subtree of e. If T(e) does not exceed the best scores found so far, then we need not access the subtree of e,
thus we can save numerous score computations.
Algorithm 1 shows the pseudocode of BB, based on this idea. BB is called with N being the root node of D.
If N is a non-leaf node, Lines 3-5 compute the scores T(e) for the non-leaf entries e concurrently. Recall that T(e) is
an upper bound score for any point in the subtree of e. The techniques for computing T(e) will be discussed
shortly. With the component scores Tc(e) known so far, we can derive T(e), an upper bound of τ(e). If T(e) does
not exceed the k-th best score found so far, then the subtree of e cannot contain better results than those in Wk
and it is removed from V. In order to obtain points with high scores early, we sort the entries in descending
order of T(e) before invoking the above procedure recursively on the child nodes pointed by the entries in V.
If N is a leaf node, we compute the scores for all points of N concurrently and then update the set Wk of the
top-k results. Since both Wk and γ are global variables, their values are updated during recursive calls of BB.
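The pruning idea behind BB can be sketched as follows. This is an illustration under assumed data structures, not the paper's Algorithm 1: non-leaf entries carry precomputed upper bounds T(e), leaf entries carry exact scores τ(p), and Wk is a min-heap of the best k (score, id) pairs whose minimum plays the role of the pruning threshold.

```python
import heapq

def bb_topk(node, k, Wk):
    """Branch-and-bound top-k sketch. A non-leaf entry is (T_e, child)
    where T_e upper-bounds the score of any point below it; a leaf entry
    is (tau_p, point_id). Wk is a min-heap keeping the best k results."""
    if node and isinstance(node[0][1], list):            # non-leaf node
        # Visit the most promising subtrees first, so the threshold
        # tightens early and more of the remaining entries get pruned.
        for T_e, child in sorted(node, key=lambda e: e[0], reverse=True):
            gamma = Wk[0][0] if len(Wk) == k else float("-inf")
            if T_e > gamma:              # otherwise the subtree is pruned
                bb_topk(child, k, Wk)
    else:                                # leaf node: score points directly
        for tau_p, p in node:
            if len(Wk) < k:
                heapq.heappush(Wk, (tau_p, p))
            elif tau_p > Wk[0][0]:
                heapq.heapreplace(Wk, (tau_p, p))

# Toy tree: after the first subtree fills Wk, the second (bound 0.8)
# cannot beat the current k-th best score (1.1) and is never visited.
leaf_hi = [(1.5, "p1"), (1.1, "p2")]
leaf_lo = [(0.8, "p3"), (0.4, "p4")]
root = [(1.5, leaf_hi), (0.8, leaf_lo)]

Wk = []
bb_topk(root, 2, Wk)
print(sorted(Wk, reverse=True))  # [(1.5, 'p1'), (1.1, 'p2')]
```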
B.1 Upper Bound Score Computation
Upper Bound Score remains to clarify how the (upper bound) scores Tc( e) of non-leaf entries (within
the same node N) can be computed concurrently . The goal is to compute these upper bound scores such that
The bounds are computed with low I/O cost
The bounds are reasonably tight, in order to facilitate effective pruning.
We utilize only level-1 entries (i.e., the lowest-level non-leaf entries) in Fc for deriving upper bound scores, because:
1. There are far fewer level-1 entries than leaf entries (i.e., points).
2. High-level entries in Fc cannot provide tight bounds.
We also verify the effectiveness and the cost of using level-1 entries for upper bound score
computation. Algorithm 2 can be modified for the above upper bound computation task (where the input V
corresponds to a set of non-leaf entries), after changing Line 2 to check whether the child nodes of N are above the
leaf level. The following example illustrates how upper bound range scores are derived. In Fig. 4a, v1 and v2
are non-leaf entries in the object tree D and the others are level-1 entries in the feature tree Fc. For the entry v1,
we first define its Minkowski region (i.e., the gray region around v1), the area whose mindist from v1 is within ε.
Observe that only entries e intersecting the Minkowski region of v1 can contribute to the score of some point in
v1. Thus, the upper bound score Tc(v1) is simply the maximum quality of the entries e1, e8, e6, e7, i.e., 0.9.
Similarly, Tc(v2) is computed as the maximum quality of the entries e2, e3, e4, e8, i.e., 0.7. Assuming that v1 and
v2 are entries in the same tree node of D, their upper bounds are computed concurrently to reduce I/O cost.
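The Minkowski-region test above can be sketched as follows. The rectangles, qualities, and ε in this example are made-up values, not those of Fig. 4a; an entry contributes to the bound of v only if its minimum distance from v is within ε.

```java
public class UpperBound {
    // Axis-aligned rectangle (MBR); feature entries also carry a quality.
    static class Rect {
        double x1, y1, x2, y2, quality;
        Rect(double x1, double y1, double x2, double y2, double quality) {
            this.x1 = x1; this.y1 = y1; this.x2 = x2; this.y2 = y2; this.quality = quality;
        }
    }

    // Minimum distance between two rectangles (0 if they overlap).
    static double minDist(Rect a, Rect b) {
        double dx = Math.max(0, Math.max(a.x1 - b.x2, b.x1 - a.x2));
        double dy = Math.max(0, Math.max(a.y1 - b.y2, b.y1 - a.y2));
        return Math.hypot(dx, dy);
    }

    // Upper bound range score of object entry v: an entry e intersects the
    // Minkowski region of v exactly when minDist(v, e) <= eps; the bound is
    // the maximum quality among such entries.
    static double upperBoundScore(Rect v, Rect[] entries, double eps) {
        double bound = 0.0;
        for (Rect e : entries)
            if (minDist(v, e) <= eps) bound = Math.max(bound, e.quality);
        return bound;
    }

    public static void main(String[] args) {
        Rect v1 = new Rect(0, 0, 2, 2, 0);
        Rect[] level1 = {
            new Rect(3, 0, 4, 1, 0.9),   // mindist 1, inside the eps = 2 region
            new Rect(8, 8, 9, 9, 0.95),  // too far: excluded from the bound
        };
        System.out.println(upperBoundScore(v1, level1, 2.0)); // → 0.9
    }
}
```

Note that the far entry with quality 0.95 is ignored, which is exactly what keeps the bound tight.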
B. Optimized Branch-and-Bound Algorithm (BB*)
This algorithm computes the scores of objects efficiently from the feature trees F1, F2, . . . , Fm. The set V
contains the objects whose scores need to be computed. Here, ε refers to the distance threshold of the range score,
and γ represents the best score found so far. For each feature tree Fc, we employ a max-heap Hc to traverse the
entries of Fc in descending order of their quality values. The root of Fc is first inserted into Hc. The variable μc
maintains the upper bound quality of the entries of the tree that remain to be visited. We then initialize each
component score Tc(p) of every object p ∈ V to 0. The variable α keeps track of the ID of the current feature
tree being processed. The loop is used to compute the scores of the points in the set.
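A small sketch of this descending-quality traversal follows, using plain quality values in place of real feature-tree entries (an assumption made to keep the example self-contained). The peek of the max-heap plays the role of μ, and the traversal can stop as soon as μ ≤ γ, since nothing still in the heap can improve the result.

```java
import java.util.*;

public class QualityHeap {
    // Drain entries in descending quality order, stopping once the bound mu
    // (the best quality still in the heap) cannot beat gamma.
    static List<Double> visit(Collection<Double> qualities, double gamma) {
        PriorityQueue<Double> hc = new PriorityQueue<>(Comparator.reverseOrder()); // max-heap Hc
        hc.addAll(qualities);
        List<Double> visited = new ArrayList<>();
        while (!hc.isEmpty()) {
            double mu = hc.peek();   // upper bound quality of unvisited entries
            if (mu <= gamma) break;  // nothing left can improve on gamma
            visited.add(hc.poll());
        }
        return visited;
    }

    public static void main(String[] args) {
        // Entries arrive in arbitrary order; the heap yields them by quality.
        System.out.println(visit(List.of(0.4, 0.9, 0.7), 0.6)); // → [0.9, 0.7]
    }
}
```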
We then deheap an entry e from the current heap Hα. The property of the max-heap guarantees that the quality
value of any future entry deheaped from Hα is at most w(e). Thus, the bound μα is updated to w(e). We prune the
entry e if its mindist from each object point p ∈ V is larger than ε. In case e is not pruned, we compute the tight
upper bound score T*(p) for each p ∈ V; the object p is removed from V if T*(p) ≤ γ.
Next, we access the child node pointed to by e, and examine each entry e' in the node. A non-leaf entry e'
is inserted into the heap Hα if its minimum distance from some p ∈ V is within ε, whereas a leaf entry e' is used
to update the component score Tα(p) of any p ∈ V within distance ε. Finally, we apply the round-robin strategy to
find the next value of α such that the heap Hα is not empty.
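The round-robin step can be sketched as below; `nextAlpha` is a hypothetical helper name, and the heaps stand in for the feature heaps H1, ..., Hm.

```java
import java.util.*;

public class RoundRobin {
    // Return the index of the next non-empty heap after alpha (round-robin),
    // or -1 when every feature heap is exhausted.
    static int nextAlpha(List<? extends Queue<?>> heaps, int alpha) {
        int m = heaps.size();
        for (int i = 1; i <= m; i++) {
            int c = (alpha + i) % m;
            if (!heaps.get(c).isEmpty()) return c;
        }
        return -1;
    }

    public static void main(String[] args) {
        List<PriorityQueue<Double>> heaps = new ArrayList<>();
        for (int c = 0; c < 3; c++) heaps.add(new PriorityQueue<>());
        heaps.get(0).add(1.0);
        heaps.get(2).add(0.5);   // heap 1 is empty and gets skipped
        System.out.println(nextAlpha(heaps, 0)); // → 2
    }
}
```

Cycling through the feature trees this way lets every bound μc drop together, so the global termination test fires as early as possible.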
D. Influence Score
This section introduces the influence score function, which combines both the qualities and the relative
locations of feature points, then presents the adaptations of our solutions for the influence score function.
Finally, we discuss how our solutions can be used for other types of influence score functions. The range score
has the drawback that the parameter ε is not easy to set. Consider, for instance, the example of the range score
τrng. We need a score function that is not too sensitive to the range parameter ε. Moreover, a user usually
prefers a single high-quality restaurant to a large number of low-quality restaurants.
IV. EXPERIMENTS
We compare the efficiency of the proposed algorithms using real and synthetic data sets. Each data set
is indexed by an R-tree with a 4 Kbyte page size. We used an LRU memory buffer whose default size is set to 0.5
percent of the sum of the tree sizes (for the object and feature trees used). The algorithms were implemented in
Java, and the experiments were run on a Pentium D 2.8 GHz PC with 1 GB of RAM. In all experiments, we measure
both the I/O cost (in number of page faults) and the total execution time (in seconds) of the algorithms.
A. Experimental Settings
We used both real and synthetic data for the experiments. For each synthetic data set, the coordinates of
points are random values uniformly and independently generated for the different dimensions. By default, an object
data set contains 200K points and a feature data set contains 100K points. The point coordinates of all data sets
are normalized to the 2D space [0, 10000]^2. For a feature data set Fc, we generated qualities for its points such
that they simulate a real-world scenario: facilities close to (far from) a town centre often have high (low) quality.
For this, a single anchor point s* is selected such that its neighborhood region contains a high number of points.
Let distmin (distmax) be the minimum (maximum) distance of a point in Fc from the anchor s*.
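One plausible way to realize this quality assignment, sketched here since the exact formula is not given in this section, is a linear decay with distance from the anchor: points at distmin receive quality 1, points at distmax receive quality 0.

```java
import java.util.*;

public class SyntheticQuality {
    // Quality decreases linearly with distance from the anchor s*:
    // dist = distMin maps to 1.0, dist = distMax maps to 0.0.
    static double quality(double dist, double distMin, double distMax) {
        return (distMax - dist) / (distMax - distMin);
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        double[] anchor = {5000, 5000};      // assumed town-centre anchor s*
        int n = 5;
        double[] d = new double[n];
        double dMin = Double.MAX_VALUE, dMax = 0;
        for (int i = 0; i < n; i++) {
            // Uniform, independent coordinates in [0, 10000]^2.
            double x = rnd.nextDouble() * 10000;
            double y = rnd.nextDouble() * 10000;
            d[i] = Math.hypot(x - anchor[0], y - anchor[1]);
            dMin = Math.min(dMin, d[i]);
            dMax = Math.max(dMax, d[i]);
        }
        for (int i = 0; i < n; i++)
            System.out.printf("point %d: quality %.3f%n", i, quality(d[i], dMin, dMax));
    }
}
```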
B. Performance on Queries with Range Scores
We empirically justify the choice of using level-1 entries of the feature trees Fc for the upper bound score
computation routine in the BB algorithm. Table 1 shows the decomposition of node accesses over the tree D and
the trees Fc, and the statistics of the upper bound score computation. Each accessed non-leaf node of D invokes a
call of the upper bound score computation routine.
When level-0 entries of Fc are used, each upper bound computation call incurs a high number (617.5) of node
accesses (of Fc). On the other hand, using level-2 entries for upper bound computation leads to very loose
bounds, making it difficult to prune the leaf nodes of D. Observe that the total cost is minimized when level-1
entries (of Fc) are used. In that case, the number of node accesses per upper bound computation call is low (15), and yet
the obtained bounds are tight enough to prune most leaf nodes of D. The incremental computation technique
derives a tight upper bound score (of each point) for the MIN function, a partially tight bound for SUM, and a
loose bound for MAX. This explains the performance across the different aggregate functions. The cost of
the other methods, however, is mainly influenced by the effectiveness of pruning. BB employs an effective technique to
prune unqualified non-leaf entries in the object tree, so it outperforms group probing.
C. Results on Real Data
This experiment uses real object and feature data sets in order to demonstrate the application of top-k
spatial preference queries. We obtained three real spatial data sets from a travel portal website,
http://www.allstays.com. Locations in these data sets correspond to (longitude, latitude) coordinates in the US.
We cleaned the data sets by discarding records without longitude and latitude. Each remaining location is
normalized to a point in the 2D space [0, 10000]^2. One data set is used as the object data set and the other two are
used as feature data sets. The object data set D contains 11,399 camping locations. The feature data set F1
contains 30,921 hotel records, each with a room price (quality) and a location. The feature data set F2 has
3,848 records of Wal-Mart stores, each with a gasoline availability (quality) and a location. The domain of each
quality attribute (e.g., room price and gasoline availability) is normalized to the unit interval [0, 1]. Intuitively, a
camping location is considered good if it is close to a Wal-Mart store with high gasoline availability (i.e.,
convenient supply) and to a hotel with a high room price (which indirectly reflects the quality of the nearby outdoor
environment). In this experiment, we use the default parameter setting and study how the number of node
accesses of BB is affected by the level of Fc used.
V. CONCLUSIONS
Top-k spatial preference queries provide a novel type of ranking for spatial objects, based on the
qualities of features in their neighborhood. The neighborhood of an object p is captured by the scoring function:
• The range score restricts the neighborhood to a crisp region centered at p.
• The influence score relaxes the neighborhood to the whole space and assigns higher weights to locations closer
to p.
We presented algorithms for processing top-k spatial preference queries. The algorithm BB derives
upper bound scores for non-leaf entries in the object tree, and prunes those that cannot lead to better results. The
algorithm BB* is a variant of BB that utilizes an optimized method for computing the scores of objects (and the
upper bound scores of non-leaf entries).
The third algorithm performs a multiway join on the feature trees to obtain qualified combinations of feature points and
then searches for their relevant objects in the object tree. Based on our experimental findings, BB* is scalable to large
data sets and is the most robust algorithm with respect to the various parameters. However, this join-based
algorithm is the best in cases where the number m of feature data sets is low and each feature data set is small.