Fuzzy Type Image Fusion Using SPIHT Image Compression Technique (IJERA Editor)
This paper presents a fuzzy type image fusion technique using Set Partitioning in Hierarchical Trees (SPIHT).
It is concluded that fusion at higher decomposition levels provides better fusion quality. The technique can be
used for fusion of fuzzy images as well as for multimodal image fusion. The proposed algorithm is simple, easy to
implement and could be used for real-time applications. The paper also provides a comparative study between
the proposed and existing techniques and validates the proposed algorithm using Peak Signal to Noise Ratio
(PSNR) and Root Mean Square Error (RMSE).
This document proposes a new image encryption scheme based on chaotic encryption. It provides a fast encryption algorithm using a pseudorandom key stream generator based on coupled chaotic maps. Only the most important image components identified using discrete wavelet transform are encrypted. Statistical analysis shows the encrypted images have uniform histograms and negligible pixel correlations, resisting cryptanalysis attacks. The partial encryption also reduces computation time for applications with bandwidth and power constraints like mobile devices.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Modified weighted embedding method for image steganography (IAEME Publication)
This document proposes a modified weighted embedding method for image steganography. It begins by discussing traditional LSB substitution methods and their weaknesses. It then describes the proposed method, which embeds data by complementing LSBs in image pixels based on the decimal value of the data, rather than direct bit replacement. This is intended to provide better security while maintaining high image quality. The embedding algorithm works by converting the data to decimal, dividing the cover image into blocks, and complementing LSBs in the block pixels based on the decimal digits and an embedding table. Extraction works similarly but in reverse. Experiments on grayscale images are said to support the method.
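The complement-based embedding described above can be sketched in a few lines. This is a simplified illustration (one block, one decimal digit, LSB flips driven directly by the digit value), not the paper's exact scheme, which routes the flips through an embedding table:

```python
def embed_digit(block, digit):
    """Hide one decimal digit d in a block by complementing the LSBs
    of its first d pixels. Illustrative only: the paper drives the
    flips through an embedding table, which is not modelled here."""
    out = list(block)
    for i in range(digit):
        out[i] ^= 1                      # complement the LSB
    return out

def extract_digit(original, stego):
    """Count the flipped LSBs to recover the digit (the paper's
    extraction works from the stego image and table alone)."""
    return sum((a ^ b) & 1 for a, b in zip(original, stego))

block = [200, 201, 202, 203, 204, 205, 206, 207, 208, 209]
stego = embed_digit(block, 7)   # flips the LSBs of the first 7 pixels
```

Because only least significant bits change, each pixel moves by at most one grey level, which is why such schemes preserve high image quality.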
Today, among the various media used for data transmission and storage, our sensitive data
are not secure with the third parties we rely on. Cryptography plays an important role in
securing our data from malicious attack. This paper presents a partial image encryption
scheme based on bit-plane permutation using the Peter de Jong chaotic map for secure image
transmission and storage. The proposed partial image encryption is a raw-data encryption
method in which bits of some bit-planes are shuffled among other bit-planes based on the
chaotic map proposed by Peter de Jong. Using the chaotic behavior of the Peter de Jong map,
the positions of all the bit-planes are permuted. The results of several experiments,
correlation analysis, and sensitivity tests show that the proposed image encryption scheme
provides an efficient and secure way to encrypt and decrypt images in real time.
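A minimal sketch of deriving a key-dependent bit-plane permutation from the Peter de Jong map (x' = sin(a·y) − cos(b·x), y' = sin(c·x) − cos(d·y)) follows. The seeds and parameters here are illustrative key values, not the paper's, and a full implementation would shuffle bits across planes rather than only reorder the planes of each pixel:

```python
import math

def de_jong_orbit(n, x=0.1, y=0.1, a=1.4, b=-2.3, c=2.4, d=-2.1):
    """Iterate the Peter de Jong map
       x' = sin(a*y) - cos(b*x),  y' = sin(c*x) - cos(d*y)
    and collect n successive x-values."""
    vals = []
    for _ in range(n):
        x, y = (math.sin(a * y) - math.cos(b * x),
                math.sin(c * x) - math.cos(d * y))
        vals.append(x)
    return vals

def bitplane_permutation(key_x=0.1, key_y=0.1):
    """Rank the chaotic orbit to obtain a key-dependent permutation
    of the 8 bit-planes."""
    vals = de_jong_orbit(8, key_x, key_y)
    return sorted(range(8), key=vals.__getitem__)

def permute_pixel(p, perm):
    """Rearrange the bits of one 8-bit pixel: output bit i takes the
    value of input bit perm[i]."""
    return sum(((p >> perm[i]) & 1) << i for i in range(8))

perm = bitplane_permutation()
inverse = sorted(range(8), key=perm.__getitem__)  # undoes perm
```

Because the map is sensitive to initial conditions, a receiver with even a slightly wrong key seed derives a completely different permutation, which is the basis of the scheme's security.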
Efficient Reversible Data Hiding Algorithms Based on Dual Predictions (ipij)
In this paper, a new reversible data hiding (RDH) algorithm that is based on the concept of shifting of
prediction error histograms is proposed. The algorithm extends the efficient modification of prediction
errors (MPE) algorithm by incorporating two predictors and using one prediction error value for data
embedding. The motivation behind using two predictors is driven by the fact that predictors have different
prediction accuracy which is directly related to the embedding capacity and quality of the stego image. The
key feature of the proposed algorithm lies in using two predictors without the need to communicate
additional overhead with the stego image. Basically, the identification of the predictor that is used during
embedding is done through a set of rules. The proposed algorithm is further extended to use two and three
bins in the prediction errors histogram in order to increase the embedding capacity. Performance
evaluation of the proposed algorithm and its extensions showed the advantage of using two predictors in
boosting the embedding capacity while providing competitive quality for the stego image.
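The underlying shift-and-embed mechanism on prediction errors can be sketched as below. This is a deliberately simplified single-predictor version with one embedding bin at error 0 (the previous stego pixel serves as predictor); the paper's rule-based dual-predictor selection and multi-bin extensions are not modelled:

```python
def embed(pixels, bits):
    """Histogram-shifting embedding on prediction errors. Simplified
    sketch: one predictor (the previous stego pixel) and one
    embedding bin at error 0."""
    stego, it = [pixels[0]], iter(bits)
    for p in pixels[1:]:
        e = p - stego[-1]          # prediction error
        if e > 0:
            e += 1                 # shift positive errors, freeing bin 1
        elif e == 0:
            e = next(it, 0)        # embed one payload bit in the peak bin
        stego.append(stego[-1] + e)
    return stego

def extract(stego):
    """Recover the payload and restore the original pixels exactly."""
    pixels, bits = [stego[0]], []
    for i in range(1, len(stego)):
        e = stego[i] - stego[i - 1]
        if e in (0, 1):
            bits.append(e)         # this position carried a bit
            e = 0                  # original error was the peak value
        elif e > 1:
            e -= 1                 # undo the shift
        pixels.append(stego[i - 1] + e)
    return pixels, bits

pixels = [100, 100, 101, 99, 100, 100]
stego = embed(pixels, [1, 0])
```

The capacity equals the number of zero prediction errors, which is why a more accurate predictor (a taller histogram peak) directly raises the embedding capacity, as the abstract notes.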
Kernel based similarity estimation and real time tracking of moving (IAEME Publication)
This document discusses kernel-based mean shift algorithm for real-time object tracking. It presents the following:
1) The algorithm uses kernel density estimation to calculate the similarity between a target model and candidate windows, using the Bhattacharyya coefficient. 2) It can successfully track objects moving uniformly at slow speeds but struggles with fast or non-uniform motion, or changes in scale. 3) The algorithm was tested on video streams and could track objects moving slowly but failed for fast or irregular motion. Adaptive target windows are needed to handle changes in scale.
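The similarity measure named in point 1 is straightforward to compute. A minimal sketch of the Bhattacharyya coefficient between a target-model histogram and a candidate-window histogram (both normalised to sum to 1):

```python
import math

def bhattacharyya(p, q):
    """Bhattacharyya coefficient rho = sum_u sqrt(p_u * q_u) between
    two normalised histograms; 1.0 means identical distributions,
    0.0 means no overlap."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q))

target    = [0.25, 0.25, 0.25, 0.25]   # target model histogram
candidate = [0.25, 0.25, 0.25, 0.25]   # candidate window histogram
disjoint  = [0.5, 0.5, 0.0, 0.0]
```

Mean shift moves the candidate window toward the location that maximises this coefficient, which is why it degrades when the object moves faster than the window can follow between frames.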
MULTIPLE HUMAN TRACKING USING RETINANET FEATURES, SIAMESE NEURAL NETWORK, AND... (IAEME Publication)
Multiple human tracking based on object detection has been a challenge due to its
complexity: errors in object detection propagate into tracking errors. In this
paper, we propose a tracking method that minimizes the error produced by the object
detector. We use RetinaNet as the object detector and the Hungarian algorithm for tracking.
The cost matrix for the Hungarian algorithm is calculated using the RetinaNet features,
bounding-box center distances, and intersection over union of bounding boxes. We
interpolate the missing detections in the last step. The proposed method yields 43.2
MOTA on the MOT16 benchmark.
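The IoU term of such a cost matrix, and the resulting one-to-one assignment, can be sketched as follows. Brute force over permutations stands in for the Hungarian algorithm here for clarity; real trackers use an O(n³) solver such as scipy.optimize.linear_sum_assignment, and the full cost would also mix in feature and center-distance terms:

```python
from itertools import permutations

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def assign(tracks, detections):
    """Minimum-cost one-to-one assignment with cost = 1 - IoU,
    found by exhaustive search over permutations."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(detections))):
        cost = sum(1 - iou(t, detections[j]) for t, j in zip(tracks, perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return list(best)

tracks = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets   = [(21, 21, 31, 31), (1, 0, 11, 10)]
```

Here `assign` pairs each existing track with the detection that overlaps it most, which is exactly the association step the Hungarian algorithm performs per frame.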
Vehicle detection using background subtraction and clustering algorithms (TELKOMNIKA JOURNAL)
Traffic congestion has risen worldwide as a result of growing motorization, urbanization, and population. Congestion reduces the efficiency of transportation infrastructure usage and increases travel time, air pollution, and fuel consumption. Intelligent Transportation Systems (ITS) address this problem by applying information technology and communications networks; one classical option is video camera technology. In particular, video systems have been applied to collect traffic data, including vehicle detection and analysis. However, this application still has limitations when it has to deal with complex traffic and environmental conditions. This research therefore applies the Otsu, FCM (Fuzzy C-Means), and K-means methods to video image processing and compares them. Otsu is a classical image segmentation algorithm that clusters pixels into foreground and background, while FCM and K-means cluster pixels without supervision. These methods are therefore promising for generating the MSE values that define a clearer threshold for background subtraction on moving objects under varying environmental conditions. The methods are compared using MSE and PSNR values: K-means achieves the best MSE, and FCM yields a good PSNR. The application of clustering algorithms to the detection of moving objects under various conditions is thus promising.
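Otsu's method, the segmentation baseline named above, can be sketched from a grey-level histogram alone. The toy histogram below is illustrative, not from the paper's data:

```python
def otsu_threshold(hist):
    """Otsu's method: choose the grey level t that maximises the
    between-class variance w_b * w_f * (m_b - m_f)^2 of the pixels
    at or below t (background) versus those above it (foreground)."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = best_var = 0.0
    best_t = 0
    for t, h in enumerate(hist):
        w_b += h                        # background pixel count
        if w_b == 0:
            continue
        w_f = total - w_b               # foreground pixel count
        if w_f == 0:
            break
        sum_b += t * h
        m_b = sum_b / w_b               # background mean
        m_f = (sum_all - sum_b) / w_f   # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Toy bimodal histogram over 8 grey levels: dark mode near level 2,
# bright mode near level 6.
hist = [0, 5, 10, 5, 0, 5, 10, 5]
```

The single pass over the histogram makes Otsu cheap per frame, which is one reason it remains the classical baseline against which clustering methods like FCM and K-means are compared.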
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
This document provides a survey of single scalar point multiplication algorithms for elliptic curves over prime fields. It discusses the background of elliptic curve cryptography and point multiplication. Point multiplication is the dominant operation in ECC and can be computed using on-the-fly techniques or precomputation if the point is fixed. The efficiency of point multiplication depends on the recoding method used to represent the scalar and the composite elliptic curve operations employed. Various recoding methods and point multiplication algorithms are analyzed, including binary, signed binary using NAF representation, and window methods.
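The NAF (non-adjacent form) recoding mentioned above can be sketched without any curve arithmetic. Recoding the scalar into signed digits {−1, 0, +1} with no two adjacent non-zeros reduces the number of non-zero digits, and hence the number of point additions in double-and-add multiplication; the evaluation helper below is for checking the recoding, not an ECC operation:

```python
def naf(k):
    """Non-adjacent form: recode a positive integer into digits in
    {-1, 0, +1}, least significant first, with no two adjacent
    non-zero digits."""
    digits = []
    while k > 0:
        if k % 2:
            d = 2 - (k % 4)    # +1 if k = 1 (mod 4), -1 if k = 3 (mod 4)
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits

def digit_value(digits):
    """Evaluate a signed-digit expansion back to the integer."""
    return sum(d << i for i, d in enumerate(digits))
```

On average only one third of NAF digits are non-zero, versus one half for plain binary, which is the efficiency gain the survey attributes to signed-binary recoding.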
A STOCHASTIC STATISTICAL APPROACH FOR TRACKING HUMAN ACTIVITY (Zac Darcy)
This document summarizes a research paper on tracking human activity using a stochastic statistical approach. The paper proposes using covariance matrices to represent image regions for feature extraction, rather than traditional histogram-based methods. An improved mathematical model is developed using covariance matrices that is capable of more accurate and faster human/object tracking compared to existing histogram and other approaches. The accuracy of the new mathematical detection model is approximately 94.3% compared to 89.1% for conventional models, based on evaluation using publicly available datasets. The approach uses integral images to quickly compute covariances over regions of interest for efficient tracking.
Motion planning and controlling algorithm for grasping and manipulating movin... (ijscai)
Much of the research on robotic grasping has focused on stationary objects, and for dynamically moving
objects researchers have used real-time captured images to locate the objects. However, this approach to
controlling the grasping process is quite costly, requiring substantial resources and image processing,
so it is worth seeking simpler handling methods. In this paper, we detail the requirements for
manipulating a humanoid robot arm with 7 degrees of freedom to grasp and handle moving objects in a 3-D
environment, with or without obstacles, and without using cameras. We use the OpenRAVE simulation
environment and a robot arm equipped with the Barrett hand. We also describe a randomized planning
algorithm, an extension of RRT-JT that combines exploration, using a Rapidly-exploring Random Tree, with
exploitation, using Jacobian-based gradient descent, to instruct a 7-DoF WAM robotic arm to grasp a
moving target while avoiding obstacles. We present a simulation of a scenario that starts with tracking
a moving mug, then grasping it, and finally placing the mug in a determined position, ensuring a maximum
rate of success in a reasonable time.
With the development of information security, traditional image encryption methods have become outdated.
Because images are widely used in transmission, it is important to protect confidential image data from
unauthorized access. This paper presents a new chaos-based image encryption algorithm that improves
security during transmission by effectively exploiting properties of chaotic systems such as their
pseudo-random appearance and sensitivity to initial conditions. Based on chaotic theory and the
decomposition and recombination of pixel values, the new image scrambling algorithm changes pixel
positions, scrambling both positions and pixel values simultaneously. Experimental results show that the
new algorithm effectively improves image security against unscrambling, and that the original image can
be restored exactly, achieving safe and reliable image transmission.
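The position-scrambling half of such a scheme can be sketched with a logistic map in place of the paper's unspecified chaotic system; the seed and parameter values below are illustrative keys, and the value-scrambling (decomposition and recombination of pixel values) is omitted:

```python
def logistic_orbit(n, x=0.3567, r=3.99):
    """Iterate the logistic map x <- r*x*(1-x); the seed x and the
    parameter r act as the secret key."""
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(x)
    return out

def scramble(pixels, key=0.3567):
    """Permute pixel positions by ranking the chaotic orbit."""
    orbit = logistic_orbit(len(pixels), key)
    order = sorted(range(len(pixels)), key=orbit.__getitem__)
    return [pixels[i] for i in order], order

def unscramble(scrambled, order):
    """Invert the permutation; in practice the receiver regenerates
    `order` from the shared key instead of receiving it."""
    out = [0] * len(scrambled)
    for j, i in enumerate(order):
        out[i] = scrambled[j]
    return out

pixels = list(range(16))       # stand-in for one row of pixel values
scrambled, order = scramble(pixels)
```

Since the permutation is a bijection, decryption restores the image exactly, matching the abstract's claim of lossless recovery.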
Steganography based on random pixel selection for efficient data hiding 2 (IAEME Publication)
This document discusses steganography techniques for hiding data in digital images. It begins with definitions of steganography and the RGB color model used in digital images. It then describes the least significant bit (LSB) insertion method for embedding secret messages into the LSB of pixels in an image. Specifically, it proposes randomly selecting pixels using a pseudorandom number generator to increase security. The document reviews related work on LSB-based steganography and discusses adaptive techniques and those using edge detection or patchwork algorithms. The goal is to combine security against visual and statistical attacks while hiding a large amount of data in the cover image.
Flow Trajectory Approach for Human Action Recognition (IRJET Journal)
This document proposes a method for human action recognition in videos using scale-invariant feature transform (SIFT) and flow trajectory analysis. The key steps are:
1. Extract SIFT features from each video frame to detect keypoints.
2. Track the keypoints across frames and calculate the magnitude and direction of motion for each keypoint.
3. Analyze the tracked keypoints and their motion parameters to recognize the human action, such as walking, running, etc. occurring in the video.
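The motion parameters of step 2 can be sketched from tracked keypoint positions alone; the trajectory below is an illustrative stand-in for SIFT keypoints tracked across frames:

```python
import math

def motion_params(track):
    """Per-step motion of one tracked keypoint: (magnitude,
    direction in degrees) between consecutive frame positions."""
    params = []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        dx, dy = x1 - x0, y1 - y0
        params.append((math.hypot(dx, dy),
                       math.degrees(math.atan2(dy, dx))))
    return params

# A keypoint moving 5 pixels per frame at roughly 53 degrees.
track = [(0, 0), (3, 4), (6, 8)]
```

Aggregating such magnitude/direction pairs over all keypoints gives the flow-trajectory descriptor from which actions like walking versus running are distinguished.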
This document reviews different techniques for thinning images, including the Zhang and Suen algorithm and neural networks. It provides an overview of existing thinning approaches, such as iterative algorithms, and proposes a new approach using neural networks. The proposed approach aims to perform thinning invariant to rotations while being less sensitive to noise than existing methods. It evaluates techniques based on execution time, thinning rate, and other performance measures. The document concludes that neural networks may provide better results than existing techniques in terms of metrics like PSNR and MSE, while also reducing execution time for skeletonization.
Balancing Compression and Encryption of Satellite Imagery (IJECEIAES)
With the rapid developments in remote sensing technologies and services, there is a need for combined compression and encryption of satellite imagery. Onboard compression minimizes the storage and communication bandwidth requirements of high-data-rate satellite applications, while encryption secures these resources and prevents illegal use of sensitive image information. In this paper, we propose an approach to address the challenges that arise in highly dynamic satellite-based networked environments. The approach combines compression algorithms (Huffman and SPIHT) and encryption algorithms (RC4, Blowfish and AES) into three complementary modes: (1) secure lossless compression, (2) secure lossy compression and (3) secure hybrid compression. Extensive experiments on a dataset of 126 satellite images showed that our approach outperforms traditional and state-of-the-art approaches, saving approximately 53% of computational resources. An interesting feature of the approach is that its three modes mimic reality by imposing a different strategy each time to deal with limited computing and communication resources.
This document discusses parallelizing graph algorithms on GPUs for optimization. It summarizes previous work on parallel Breadth-First Search (BFS), All Pair Shortest Path (APSP), and Traveling Salesman Problem (TSP) algorithms. It then proposes implementing BFS, APSP, and TSP on GPUs using optimization techniques like reducing data transfers between CPU and GPU and modifying the algorithms to maximize GPU computing power and memory usage. The paper claims this will improve performance and speedup over CPU implementations. It focuses on optimizing graph algorithms for parallel GPU processing to accelerate applications involving large graph analysis and optimization problems.
This document discusses data hiding techniques for images. It begins by introducing steganography and some common image steganography methods like LSB substitution, blocking, and palette modification. It then reviews related work on minimizing distortion in steganography, modifying matrix encoding for minimal distortion, and designing adaptive steganographic schemes. The document proposes using a universal distortion measure to evaluate embedding changes independently of the domain. It presents a system for reversible data hiding in encrypted images that partitions the image, encrypts it, hides data in the encrypted image, and allows extraction from the decrypted or encrypted image. Least significant bit substitution is discussed as an approach for hiding data in the encrypted image.
An enhanced fireworks algorithm to generate prime key for multiple users in f... (journalBEEI)
This work presents a new method to enhance the performance of the fireworks algorithm for generating a prime key for multiple users. Threshold-based image segmentation is used as one of the major steps in processing the digital images; several algorithms and methods for dividing and segmenting an image are in common use. In this research, we propose a hybrid technique of fireworks and camel herd algorithms (HFCA), where the fireworks are based on 3-dimensional (3D) logistic chaotic maps. Both the Otsu method and the convolution technique are used in image pre-processing for further analysis: Otsu segments each image and finds its threshold, and convolution extracts the features of the images. The sample consists of two fingerprint images taken from the Biometric System Lab (University of Bologna), and the performance of the proposed method is evaluated using the FVC2004 dataset. Building on the enhanced algorithm, a quick response code (QR code) is used to generate a stream key from random text or numbers, a class of symmetric-key algorithm that operates on individual bits or bytes.
This document provides a report on predicting stock prices for ITC through developing a neural network model. It describes extracting the last 7 closing values as inputs to a 1D CNN with max pooling and fully connected layers to automatically learn features and predict the next closing value. The strategy was chosen because CNN is well-suited for automatic feature extraction to feed into an artificial neural network, and the last 7 days appeared to capture enough information to predict the following day based on analyzing the dataset's attribute correlations. Screenshots show the model was able to closely predict price values in the test dataset.
Face recognition using gaussian mixture model & artificial neural network (eSAT Journals)
Abstract
Face recognition is a non-contact and user-friendly biometric identification technology with broad application
prospects in the military, public security and economic security. In this work, we also consider a database with
illumination variation. The images were taken from a far distance and do not contain close views of the
individuals' faces, unlike most face databases, where a clear face view is assumed. First, we locate the face as
the region of interest, and then LBP and LPQ descriptors, which are illumination invariant in nature, are applied.
GMM is then used to reduce the feature set by taking the negative log-likelihood from each LBP- and LPQ-described
image histogram. Finally, an ANN is used for classification. The experimental results show excellent accuracy
rates in overall testing of the input data.
Keywords: Illumination invariant, face recognition, LBP, LPQ, GMM, ANN
Development of 3D convolutional neural network to recognize human activities ... (journalBEEI)
This document describes the development of a 3D convolutional neural network (CNN) model to recognize human activities using moderate computation capabilities. The model is trained on the KTH dataset, which contains activities like walking, running, jogging, handwaving, handclapping, and boxing. The proposed model uses 3D CNN layers and max pooling layers to extract both spatial and temporal features from video frames. Testing achieved an accuracy of 93.33% for activity recognition. The number of model parameters and operations are also calculated to show the model can perform human activity recognition with reasonable computational requirements suitable for devices with moderate capabilities.
Artificial Neural Network Based Graphical User Interface for Estimation of Fa... (ijsrd.com)
This document describes the development of an artificial neural network (ANN) and graphical user interface (GUI) to estimate fabrication time in rig construction projects. The ANN was trained on data from 960 completed fabrication jobs. It uses height, plate thickness, and inspection criteria as inputs to predict fabrication time in days as the output. Eleven different ANN architectures were tested, and the model with 3 input nodes, 50 hidden nodes, and 1 output node performed best with a mean squared error of 1.35337e-2. A GUI was created allowing users to input job parameters and receive a fabrication time prediction without ANN expertise. The developed ANN and GUI provide a data-driven method for fabrication time estimation in rig construction projects.
Design and development of DrawBot using image processing (IJECEIAES)
Extracting text from an image and reproducing them can often be a laborious task. We took it upon ourselves to solve the problem. Our work is aimed at designing a robot which can perceive an image shown to it and reproduce it on any given area as directed. It does so by first taking an input image and performing image processing operations on the image to improve its readability. Then the text in the image is recognized by the program. Points for each letter are taken, then inverse kinematics is done for each point with MATLAB/Simulink and the angles in which the servo motors should be moved are found out and stored in the Arduino. Using these angles, the control algorithm is generated in the Arduino and the letters are drawn.
Robust foreground modelling to segment and detect multiple moving objects in ... (IJECEIAES)
This document summarizes a research paper that proposes a robust foreground modeling method to segment and detect multiple moving objects in videos. The proposed method uses a running average technique to model the background and subtract it from video frames to detect foreground objects. Morphological operations like dilation and erosion are applied to reduce noise and merge connected regions. Convex hull processing is also used to define object boundaries more clearly. The method was tested on standard video datasets and achieved better performance than other techniques in segmenting objects under various challenging conditions like illumination changes and occlusion. Experimental results demonstrated high precision, recall and specificity based on comparisons with ground truth data.
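The running-average step of such a method can be sketched on a 1-D row of pixels; the learning rate, threshold, and toy scene below are illustrative choices, and the morphological and convex-hull post-processing is omitted:

```python
def update_background(bg, frame, alpha=0.05):
    """Running average: bg <- (1 - alpha) * bg + alpha * frame."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30):
    """Mark pixels that deviate strongly from the background model."""
    return [1 if abs(f - b) > thresh else 0 for f, b in zip(frame, bg)]

# A static grey scene (value 100); the model settles, then a bright
# object (value 255) enters one pixel.
bg = [100.0] * 5
for _ in range(20):
    bg = update_background(bg, [100] * 5)
frame = [100, 100, 255, 100, 100]
```

The small `alpha` lets the model absorb gradual illumination changes while still flagging sudden arrivals as foreground, which is the property the paper's robustness claims rest on.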
A tutorial on applying artificial neural networks and geometric brownian moti... (eSAT Journals)
This document discusses using artificial neural networks (ANN) and geometric Brownian motion (GBM) to predict stock prices. It first provides background on ANN and GBM models. It then applies each to stock price, profit/earnings, and S&P 500 data to predict future prices. For ANN, the model achieved 48% accuracy within $5 of actual prices. For GBM, the model did not accurately capture price dynamics, likely due to insufficient data used to calculate drift and volatility. While both methods show promise, ANN performed slightly better with this dataset and hyperparameters.
A unique common fixed point theorems in generalized d (Alexander Decker)
This document presents definitions and properties related to generalized D*-metric spaces and establishes some common fixed point theorems for contractive type mappings in these spaces. It begins by introducing D*-metric spaces and generalized D*-metric spaces, defines concepts like convergence and Cauchy sequences. It presents lemmas showing the uniqueness of limits in these spaces and the equivalence of different definitions of convergence. The goal of the paper is then stated as obtaining a unique common fixed point theorem for generalized D*-metric spaces.
A universal model for managing the marketing executives in nigerian banks (Alexander Decker)
This document discusses a study that aimed to synthesize motivation theories into a universal model for managing marketing executives in Nigerian banks. The study was guided by Maslow and McGregor's theories. A sample of 303 marketing executives was used. The results showed that managers will be most effective at motivating marketing executives if they consider individual needs and create challenging but attainable goals. The emerged model suggests managers should provide job satisfaction by tailoring assignments to abilities and monitoring performance with feedback. This addresses confusion faced by Nigerian bank managers in determining effective motivation strategies.
Vehicle detection using background subtraction and clustering algorithmsTELKOMNIKA JOURNAL
Traffic congestion has raised worldwide as a result of growing motorization, urbanization, and population. In fact, congestion reduces the efficiency of transportation infrastructure usage and increases travel time, air pollutions as well as fuel consumption. Then, Intelligent Transportation System (ITS) comes as a solution of this problem by implementing information technology and communications networks. One classical option of Intelligent Transportation Systems is video camera technology. Particularly, the video system has been applied to collect traffic data including vehicle detection and analysis. However, this application still has limitation when it has to deal with a complex traffic and environmental condition. Thus, the research proposes OTSU, FCM and K-means methods and their comparison in video image processing. OTSU is a classical algorithm used in image segmentation, which is able to cluster pixels into foreground and background. However, only FCM (Fuzzy C-Means) and K-means algorithms have been successfully applied to cluster pixels without supervision. Therefore, these methods seem to be more potential to generate the MSE values for defining a clearer threshold for background subtraction on a moving object with varying environmental conditions. Comparison of these methods is assessed from MSE and PSNR values. The best MSE result is demonstrated from K-means and a good PSNR is obtained from FCM. Thus, the application of the clustering algorithms in detection of moving objects in various condition is more promising.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
This document provides a survey of single scalar point multiplication algorithms for elliptic curves over prime fields. It discusses the background of elliptic curve cryptography and point multiplication. Point multiplication is the dominant operation in ECC and can be computed using on-the-fly techniques or precomputation if the point is fixed. The efficiency of point multiplication depends on the recoding method used to represent the scalar and the composite elliptic curve operations employed. Various recoding methods and point multiplication algorithms are analyzed, including binary, signed binary using NAF representation, and window methods.
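The scalar recoding and double-and-add structure surveyed above can be sketched as follows; the NAF routine is the standard signed-binary recoding, while the group used here is a toy additive group standing in for real elliptic curve points (an assumption for brevity):

```python
# Sketch: NAF recoding of the scalar, then left-to-right double-and-add.
def naf(k):
    """Non-adjacent form: digits in {-1, 0, 1}, least-significant first."""
    digits = []
    while k > 0:
        if k & 1:
            d = 2 - (k % 4)   # k % 4 == 1 -> digit 1; k % 4 == 3 -> digit -1
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits

def scalar_mult(k, P, add, neg, zero):
    """Double-and-add over NAF digits; add/neg/zero define the group used."""
    result = zero
    for d in reversed(naf(k)):
        result = add(result, result)            # "point doubling"
        if d == 1:
            result = add(result, P)
        elif d == -1:
            result = add(result, neg(P))        # NAF exploits cheap negation
    return result

# Toy additive group (integers mod 97) standing in for elliptic curve points.
add = lambda a, b: (a + b) % 97
neg = lambda a: (-a) % 97
```

NAF halves the expected number of nonzero digits relative to plain binary, which is why it reduces the count of point additions in ECC.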
A STOCHASTIC STATISTICAL APPROACH FOR TRACKING HUMAN ACTIVITYZac Darcy
This document summarizes a research paper on tracking human activity using a stochastic statistical approach. The paper proposes using covariance matrices to represent image regions for feature extraction, rather than traditional histogram-based methods. An improved mathematical model is developed using covariance matrices that is capable of more accurate and faster human/object tracking compared to existing histogram and other approaches. The accuracy of the new mathematical detection model is approximately 94.3% compared to 89.1% for conventional models, based on evaluation using publicly available datasets. The approach uses integral images to quickly compute covariances over regions of interest for efficient tracking.
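A minimal sketch of a region covariance descriptor of the kind described above, computed directly rather than via the integral-image speedup the paper uses; the (x, y, intensity) feature choice is an assumption for illustration:

```python
# Sketch: covariance descriptor of an image region using (x, y, intensity)
# per-pixel feature vectors (direct computation, no integral-image speedup).
def region_covariance(region):
    """3x3 sample covariance of per-pixel features over a 2-D intensity region."""
    feats = [(x, y, v) for y, row in enumerate(region) for x, v in enumerate(row)]
    n = len(feats)
    mean = [sum(f[i] for f in feats) / n for i in range(3)]
    return [[sum((f[i] - mean[i]) * (f[j] - mean[j]) for f in feats) / (n - 1)
             for j in range(3)] for i in range(3)]

cov = region_covariance([[5, 5], [5, 5]])   # flat 2x2 patch
```

Real trackers add more feature channels (gradients, color) per pixel; the covariance matrix then compactly fuses them regardless of region size.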
Motion planning and controlling algorithm for grasping and manipulating movin...ijscai
Much robotic grasping research has focused on stationary objects, while for dynamically moving
objects researchers have used real-time captured images to locate objects. However,
this approach to controlling the grasping process is quite costly, requiring substantial resources and image
processing. It is therefore worthwhile to seek a simpler method of handling. In this paper, we
detail the requirements to manipulate a humanoid robot arm with 7 degrees of freedom to grasp
and handle any moving object in a 3-D environment, with or without obstacles and without using
cameras. We use the OpenRAVE simulation environment and a robot arm instrumented with the
Barrett hand. We also describe a randomized planning algorithm, an extension of
RRT-JT, that combines exploration, using a Rapidly-exploring Random Tree, with exploitation,
using Jacobian-based gradient descent, to instruct a 7-DoF WAM robotic arm to grasp a moving
target while avoiding possible obstacles. We present a simulation of a scenario that starts
with tracking a moving mug, then grasping it, and finally placing the mug in a determined position, assuring
a maximum rate of success in a reasonable time.
With the development of information security, traditional image encryption methods have become
outdated. Because images are widely used in transmission, it is important to protect confidential image
data from unauthorized access. This paper presents a new chaos-based image encryption algorithm that improves
security during transmission by exploiting properties of chaotic systems such as pseudo-random
appearance and sensitivity to initial conditions. Based on chaotic theory and the decomposition and recombination of pixel
values, the new image scrambling algorithm changes pixel positions while simultaneously scrambling both
positions and pixel values. Experimental results show that the new algorithm effectively improves image security
against unscrambling, and that the original image can be restored exactly, achieving
safe and reliable image transmission.
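A hedged sketch of chaos-based scrambling of both pixel positions and values, in the spirit described above (not the paper's exact algorithm); the logistic-map parameters and byte quantization are assumptions:

```python
# Sketch: logistic-map-driven scrambling of both pixel positions and values.
def scramble(pixels, x0=0.3141, r=3.99):
    """Permute positions by sorting chaotic states; XOR values with a keystream."""
    n = len(pixels)
    x, states = x0, []
    for _ in range(n):
        x = r * x * (1 - x)                             # logistic map iteration
        states.append(x)
    perm = sorted(range(n), key=lambda i: states[i])    # chaotic permutation
    ks = [int(s * 256) % 256 for s in states]           # chaotic byte keystream
    cipher = [pixels[perm[i]] ^ ks[i] for i in range(n)]
    return cipher, perm, ks

def unscramble(cipher, perm, ks):
    """Invert the XOR and the position permutation to restore the image."""
    plain = [0] * len(cipher)
    for i, c in enumerate(cipher):
        plain[perm[i]] = c ^ ks[i]
    return plain

px = [10, 20, 30, 40, 50]
cipher, perm, ks = scramble(px)
```

In practice `perm` and `ks` would be regenerated from the shared key (x0, r) at the receiver rather than transmitted, which is the sensitivity-to-initial-conditions property the abstract relies on.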
Steganography based on random pixel selection for efficient data hiding 2IAEME Publication
This document discusses steganography techniques for hiding data in digital images. It begins with definitions of steganography and the RGB color model used in digital images. It then describes the least significant bit (LSB) insertion method for embedding secret messages into the LSB of pixels in an image. Specifically, it proposes randomly selecting pixels using a pseudorandom number generator to increase security. The document reviews related work on LSB-based steganography and discusses adaptive techniques and those using edge detection or patchwork algorithms. The goal is to combine security against visual and statistical attacks while hiding a large amount of data in the cover image.
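The random-pixel-selection LSB idea can be sketched as below; using Python's seeded `random.Random` as the shared pseudorandom generator is an assumption for illustration:

```python
import random

def embed(pixels, bits, seed):
    """Hide one bit in the LSB of each pseudo-randomly selected pixel."""
    out = list(pixels)
    idx = random.Random(seed).sample(range(len(pixels)), len(bits))
    for i, b in zip(idx, bits):
        out[i] = (out[i] & ~1) | b           # overwrite least significant bit
    return out

def extract(pixels, n_bits, seed):
    """Re-derive the same pixel order from the shared seed and read the LSBs."""
    idx = random.Random(seed).sample(range(len(pixels)), n_bits)
    return [pixels[i] & 1 for i in idx]

cover = list(range(100, 120))
secret = [1, 0, 1, 1, 0]
stego = embed(cover, secret, seed=42)
```

The seed plays the role of a stego key: without it, an attacker cannot know which pixels carry payload, which is the security gain over sequential LSB substitution.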
Flow Trajectory Approach for Human Action RecognitionIRJET Journal
This document proposes a method for human action recognition in videos using scale-invariant feature transform (SIFT) and flow trajectory analysis. The key steps are:
1. Extract SIFT features from each video frame to detect keypoints.
2. Track the keypoints across frames and calculate the magnitude and direction of motion for each keypoint.
3. Analyze the tracked keypoints and their motion parameters to recognize the human action, such as walking, running, etc. occurring in the video.
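The per-keypoint motion computation of step 2 and a toy classification for step 3 might look like this sketch; the speed rule and threshold are illustrative assumptions, not the paper's recognition method:

```python
import math

def motion_params(track):
    """Per-step (magnitude, direction in degrees) along one keypoint trajectory."""
    steps = []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        dx, dy = x1 - x0, y1 - y0
        steps.append((math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))))
    return steps

def label_speed(steps, run_thresh=8.0):
    """Toy rule: mean step magnitude above the threshold suggests 'running'."""
    mean_mag = sum(m for m, _ in steps) / len(steps)
    return "running" if mean_mag > run_thresh else "walking"

slow = motion_params([(0, 0), (3, 4), (6, 8)])     # 5 px per frame
fast = motion_params([(0, 0), (10, 0), (20, 0)])   # 10 px per frame
```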
This document reviews different techniques for thinning images, including the Zhang and Suen algorithm and neural networks. It provides an overview of existing thinning approaches, such as iterative algorithms, and proposes a new approach using neural networks. The proposed approach aims to perform thinning invariant to rotations while being less sensitive to noise than existing methods. It evaluates techniques based on execution time, thinning rate, and other performance measures. The document concludes that neural networks may provide better results than existing techniques in terms of metrics like PSNR and MSE, while also reducing execution time for skeletonization.
Balancing Compression and Encryption of Satellite Imagery IJECEIAES
With the rapid developments in remote sensing technologies and services, there is a need for combined compression and encryption of satellite imagery. Onboard compression minimizes the storage and communication bandwidth requirements of high-data-rate satellite applications, while encryption secures these resources and prevents illegal use of sensitive image information. In this paper, we propose an approach to address the challenges that arise in the highly dynamic satellite-based networked environment. The approach combines compression algorithms (Huffman and SPIHT) and encryption algorithms (RC4, Blowfish, and AES) into three complementary modes: (1) secure lossless compression, (2) secure lossy compression, and (3) secure hybrid compression. Extensive experiments on a dataset of 126 satellite images showed that our approach outperforms traditional and state-of-the-art approaches, saving approximately 53% of computational resources. An interesting feature of the approach is that its three modes mimic reality by allowing a different treatment each time of the problem of limited computing and communication resources.
This document discusses parallelizing graph algorithms on GPUs for optimization. It summarizes previous work on parallel Breadth-First Search (BFS), All Pair Shortest Path (APSP), and Traveling Salesman Problem (TSP) algorithms. It then proposes implementing BFS, APSP, and TSP on GPUs using optimization techniques like reducing data transfers between CPU and GPU and modifying the algorithms to maximize GPU computing power and memory usage. The paper claims this will improve performance and speedup over CPU implementations. It focuses on optimizing graph algorithms for parallel GPU processing to accelerate applications involving large graph analysis and optimization problems.
This document discusses data hiding techniques for images. It begins by introducing steganography and some common image steganography methods like LSB substitution, blocking, and palette modification. It then reviews related work on minimizing distortion in steganography, modifying matrix encoding for minimal distortion, and designing adaptive steganographic schemes. The document proposes using a universal distortion measure to evaluate embedding changes independently of the domain. It presents a system for reversible data hiding in encrypted images that partitions the image, encrypts it, hides data in the encrypted image, and allows extraction from the decrypted or encrypted image. Least significant bit substitution is discussed as an approach for hiding data in the encrypted image.
An enhanced fireworks algorithm to generate prime key for multiple users in f...journalBEEI
This work presents a new method to enhance the performance of the fireworks algorithm to generate a prime key for multiple users. Threshold-based image segmentation is used as one of the major steps in processing the digital images. We propose a hybrid technique of fireworks and camel herd algorithms (HFCA), where the fireworks are based on 3-dimensional (3D) logistic chaotic maps. Both the Otsu method and convolution are used to pre-process the images for further analysis: Otsu segments each image and finds its threshold, and convolution extracts the features of the used images. The image sample consists of two fingerprint images taken from the Biometric System Lab (University of Bologna), and the performance of the proposed method is evaluated using the FVC2004 dataset. Building on the enhanced algorithm, a quick response code (QR code) is used to generate a stream key from random text or numbers, a class of symmetric-key algorithm that operates on individual bits or bytes.
This document provides a report on predicting stock prices for ITC through developing a neural network model. It describes extracting the last 7 closing values as inputs to a 1D CNN with max pooling and fully connected layers to automatically learn features and predict the next closing value. The strategy was chosen because CNN is well-suited for automatic feature extraction to feed into an artificial neural network, and the last 7 days appeared to capture enough information to predict the following day based on analyzing the dataset's attribute correlations. Screenshots show the model was able to closely predict price values in the test dataset.
Face recognition using gaussian mixture model & artificial neural networkeSAT Journals
Abstract
Face recognition is a non-contact and user-friendly biometric identification technology with broad application prospects in the
military, public security, and economic security. In this work we also consider a database with illumination variation. The images
were taken from a far distance and do not contain close views of each individual's face, unlike most face databases, in which a clear
face view is assumed. We first locate the face as a region of interest, and then apply LBP and LPQ descriptors, which are
illumination invariant in nature. GMM is then used to reduce the feature set by taking the negative log-likelihood from each
LBP- and LPQ-described image histogram. Finally, an ANN is used for classification. The experimental
results show excellent accuracy rates in overall testing of the input data.
Keywords: Illumination invariant, face recognition, LBP, LPQ, GMM, ANN
Development of 3D convolutional neural network to recognize human activities ...journalBEEI
This document describes the development of a 3D convolutional neural network (CNN) model to recognize human activities using moderate computation capabilities. The model is trained on the KTH dataset, which contains activities like walking, running, jogging, handwaving, handclapping, and boxing. The proposed model uses 3D CNN layers and max pooling layers to extract both spatial and temporal features from video frames. Testing achieved an accuracy of 93.33% for activity recognition. The number of model parameters and operations are also calculated to show the model can perform human activity recognition with reasonable computational requirements suitable for devices with moderate capabilities.
Artificial Neural Network Based Graphical User Interface for Estimation of Fa...ijsrd.com
This document describes the development of an artificial neural network (ANN) and graphical user interface (GUI) to estimate fabrication time in rig construction projects. The ANN was trained on data from 960 completed fabrication jobs. It uses height, plate thickness, and inspection criteria as inputs to predict fabrication time in days as the output. Eleven different ANN architectures were tested and the model with 3 input nodes, 50 hidden nodes, and 1 output node performed best with a mean squared error of 1.35337e-2. A GUI was created allowing users to input job parameters and receive a fabrication time prediction without ANN expertise. The developed ANN and GUI provide a data-driven method for fabrication time estimation in rig construction project
Design and development of DrawBot using image processing IJECEIAES
Extracting text from an image and reproducing it can often be a laborious task, and our work aims to solve this problem. We designed a robot which can perceive an image shown to it and reproduce it on any given area as directed. It does so by first taking an input image and performing image processing operations on it to improve its readability. The text in the image is then recognized by the program. Points for each letter are taken, inverse kinematics is computed for each point with MATLAB/Simulink, and the angles through which the servo motors should move are found and stored in the Arduino. Using these angles, the control algorithm is generated in the Arduino and the letters are drawn.
Robust foreground modelling to segment and detect multiple moving objects in ...IJECEIAES
This document summarizes a research paper that proposes a robust foreground modeling method to segment and detect multiple moving objects in videos. The proposed method uses a running average technique to model the background and subtract it from video frames to detect foreground objects. Morphological operations like dilation and erosion are applied to reduce noise and merge connected regions. Convex hull processing is also used to define object boundaries more clearly. The method was tested on standard video datasets and achieved better performance than other techniques in segmenting objects under various challenging conditions like illumination changes and occlusion. Experimental results demonstrated high precision, recall and specificity based on comparisons with ground truth data.
A tutorial on applying artificial neural networks and geometric brownian moti...eSAT Journals
This document discusses using artificial neural networks (ANN) and geometric Brownian motion (GBM) to predict stock prices. It first provides background on ANN and GBM models. It then applies each to stock price, profit/earnings, and S&P 500 data to predict future prices. For ANN, the model achieved 48% accuracy within $5 of actual prices. For GBM, the model did not accurately capture price dynamics, likely due to insufficient data used to calculate drift and volatility. While both methods show promise, ANN performed slightly better with this dataset and hyperparameters.
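The GBM price update the tutorial relies on, S_{t+1} = S_t exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z), can be simulated with a short sketch; the drift, volatility, and step values below are illustrative assumptions, not fitted to the paper's data:

```python
import math, random

def gbm_path(s0, mu, sigma, dt, n_steps, rng):
    """S_{t+1} = S_t * exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*Z), Z ~ N(0,1)."""
    path = [s0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        path.append(path[-1] * math.exp((mu - 0.5 * sigma ** 2) * dt
                                        + sigma * math.sqrt(dt) * z))
    return path

# One year of daily steps with illustrative drift and volatility.
prices = gbm_path(s0=100.0, mu=0.05, sigma=0.2, dt=1 / 252, n_steps=252,
                  rng=random.Random(1))
```

In practice mu and sigma are estimated from historical log-returns, which is exactly the step the paper reports as data-starved when GBM underperformed.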
A unique common fixed point theorems in generalized dAlexander Decker
This document presents definitions and properties related to generalized D*-metric spaces and establishes some common fixed point theorems for contractive type mappings in these spaces. It begins by introducing D*-metric spaces and generalized D*-metric spaces, defines concepts like convergence and Cauchy sequences. It presents lemmas showing the uniqueness of limits in these spaces and the equivalence of different definitions of convergence. The goal of the paper is then stated as obtaining a unique common fixed point theorem for generalized D*-metric spaces.
A universal model for managing the marketing executives in nigerian banksAlexander Decker
This document discusses a study that aimed to synthesize motivation theories into a universal model for managing marketing executives in Nigerian banks. The study was guided by Maslow and McGregor's theories. A sample of 303 marketing executives was used. The results showed that managers will be most effective at motivating marketing executives if they consider individual needs and create challenging but attainable goals. The emerged model suggests managers should provide job satisfaction by tailoring assignments to abilities and monitoring performance with feedback. This addresses confusion faced by Nigerian bank managers in determining effective motivation strategies.
A usability evaluation framework for b2 c e commerce websitesAlexander Decker
This document presents a framework for evaluating the usability of B2C e-commerce websites. It involves user testing methods like usability testing and interviews to identify usability problems in areas like navigation, design, purchasing processes, and customer service. The framework specifies goals for the evaluation, determines which website aspects to evaluate, and identifies target users. It then describes collecting data through user testing and analyzing the results to identify usability problems and suggest improvements.
Number plate recognition system using matlab.Namra Afzal
The document describes a student project to develop a car recognition system using MATLAB. The system aims to detect and recognize car number plates using image processing and optical character recognition algorithms. A group of three students divided the work, with one student writing the MATLAB code, another interfacing the system with a microcontroller, and the third building the hardware. The document outlines the workflow and basic modules of the system, including license plate localization, character segmentation, and character recognition using template matching in MATLAB. It also discusses some problems faced with the MATLAB-based system.
The document discusses Automatic Number Plate Recognition (ANPR) systems. It provides the following key points:
1. ANPR uses optical character recognition on images captured by specialized cameras to read license plates on vehicles.
2. The cameras capture images that are then processed by ANPR software to detect, segment, and identify the license plate numbers.
3. ANPR systems are commonly used for electronic toll collection, traffic management, parking enforcement, and border control by storing images and license plate data.
Abnormalities of hormones and inflammatory cytokines in women affected with p...Alexander Decker
Women with polycystic ovary syndrome (PCOS) have elevated levels of hormones like luteinizing hormone and testosterone, as well as higher levels of insulin and insulin resistance compared to healthy women. They also have increased levels of inflammatory markers like C-reactive protein, interleukin-6, and leptin. This study found these abnormalities in the hormones and inflammatory cytokines of women with PCOS ages 23-40, indicating that hormone imbalances associated with insulin resistance and elevated inflammatory markers may worsen infertility in women with PCOS.
The document proposes a framework that uses intelligent mobile devices to enable indoor wireless location tracking, navigation, and mobile augmented reality (AR). It discusses using mobile devices equipped with inertial measurement units (IMU) and multi-touch screens to provide user feedback to correct positioning errors. The framework also uses mobile AR through device cameras to help navigate users in complex 3D indoor environments and provide interactive location-based services. A prototype system was developed to demonstrate the feasibility of the proposed application framework.
Nuzzer algorithm based Human Tracking and Security System for Device-Free Pas...Eswar Publications
In recent years, the majority of research has focused on localization systems for wireless environments, relying on devices carried by the entities being tracked. In this paper, we use a recently proposed Device-free Passive (DfP) approach that applies probabilistic techniques to track locations in a large-scale real environment without the need to carry devices. The proposed system uses Access Points (APs) and Monitoring Points (MPs), monitoring and processing the changes in the received physical signals at one or more monitoring points to detect changes in the environment. The system uses a continuous-space estimator to return multiple locations while the person is in motion. Our results show that the system can achieve a very high probability of detection and tracking with very few false positives.
Human activity recognition with self-attentionIJECEIAES
The document describes a study that used a self-attention neural network architecture for human activity recognition using smartphone sensor data. The study compared the proposed self-attention model to convolutional neural network (CNN) and long short-term memory (LSTM) baselines. The self-attention model achieved a test accuracy of 91.75% for classifying six human activities, which was comparable to the baseline models. The study investigated components of the self-attention model like dropout rate, positional encoding, and scaling factors to determine the best performing model.
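The core of the self-attention layer discussed here is scaled dot-product attention, softmax(Q K^T / sqrt(d)) V; a minimal pure-Python sketch (not the paper's model, and the toy Q/K/V values are assumptions) is:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V for row-major lists of vectors."""
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out

# Two timesteps; each query attends almost entirely to its matching key.
out = self_attention(Q=[[10, 0], [0, 10]], K=[[10, 0], [0, 10]],
                     V=[[1, 0], [0, 1]])
```

For sensor windows, Q, K, and V are learned projections of the per-timestep features, so each timestep can weight every other timestep regardless of distance, unlike a CNN's fixed receptive field.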
Synchronization of the GPS Coordinates Between Mobile Device and Oracle Datab...idescitation
The article describes the architecture and implementation of a module for
synchronizing GPS data between a mobile device and a central database
system. The data exchange process is inspired by the SAMD algorithm. The article
sequentially presents solutions for the individual system components, with special attention paid to
the data exchange format. The processing of the exchanged data is also described in detail.
The resulting solution was deployed and tested in a real production environment.
Hand LightWeightNet: an optimized hand pose estimation for interactive mobile...IJECEIAES
In this paper, a hand pose estimation method is introduced that combines MobileNetV3 and CrossInfoNet into a single pipeline. The proposed approach is tailored for mobile phone processors through optimizations, modifications, and enhancements made to both architectures, resulting in a lightweight solution. MobileNetV3 provides the bottleneck for feature extraction and refinement, while CrossInfoNet benefits the proposed system through a multitask information-sharing mechanism. In the feature extraction stage, we utilized an inverted residual block that achieves a balance between accuracy and efficiency with limited parameters. Additionally, in the feature refinement stage, we incorporated a new best-performing activation function called "activate or not" (ACON), which demonstrated stability and superior performance in learning the linear and non-linear gates across the whole activation area of the network by setting hyperparameters that switch between active and inactive states. As a result, our network operates with 65% fewer parameters but improves speed by 39%, which is suitable for running on a mobile device processor. In our experiments, we conducted test evaluations on three hand pose datasets to assess the generalization capacity of our system. On all the tested datasets, the proposed approach demonstrates consistently higher performance while using significantly fewer parameters than existing methods. This indicates that the proposed system has the potential to enable new hand pose estimation applications such as virtual reality, augmented reality, and sign language recognition on mobile devices.
Abstract - Positioning is a fundamental component of human life, enabling meaningful interpretations of the environment. Without knowledge of position, human beings are like machines with very limited capabilities to interact with the environment; even machines in today's world can be made smarter if positioning information is made available to them. Indoor positioning of pedestrians is the broad area considered in this thesis, for which a foot-mounted pedestrian tracking device has been studied. Systems that utilize a foot-mounted inertial navigation system have appeared in the literature for more than two decades, yet very few real-time implementations have been possible. The purpose of this thesis is to benchmark and improve the performance of one such implementation.
IRJET- Behavior Analysis from Videos using Motion based Feature ExtractionIRJET Journal
This document proposes a technique for analyzing human behavior in videos using motion-based feature extraction. It discusses how previous approaches have used spatial and temporal features to detect abnormal behaviors. The proposed approach extracts motion features from videos to represent each video with a single feature vector, rather than extracting features from each individual frame. This reduces the feature space and unnecessary information. The technique involves preprocessing videos into frames, extracting motion features, using KNN classification on the features to classify behaviors as normal or abnormal, and evaluating the method's performance on various metrics like accuracy, recall, and precision. Testing on fight and riot datasets showed the motion-based approach achieved higher accuracy, recall, precision and F-measure than a non-motion based approach.
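The KNN classification step over per-video feature vectors can be sketched as follows; the Euclidean metric and the toy normal/abnormal training data are assumptions for illustration:

```python
import math

def knn_classify(train, query, k=3):
    """Majority vote among the k nearest labeled feature vectors (Euclidean)."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = {}
    for _, label in nearest:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Toy per-video motion feature vectors with behavior labels.
train = [([0, 0], "normal"), ([1, 0], "normal"), ([0, 1], "normal"),
         ([10, 10], "abnormal"), ([11, 10], "abnormal"), ([10, 11], "abnormal")]
```

Because the paper collapses each video into a single feature vector, one such query per video suffices, which is what shrinks the feature space relative to per-frame classification.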
This document describes a plugin developed for the AWARE framework that estimates the indoor or outdoor location type of a mobile device using its on-board sensors. The plugin analyzes data from the accelerometer, magnetometer, light sensor, location sensor, and battery to infer the location type. It is moderately battery efficient, consuming between 3-10% of total device battery. The plugin stores location data and inferences in a database and displays updates every 3 minutes. It uses sensor data and established boundaries to assign weightings to indoor or outdoor predictions and sums the weights to determine the overall location type. The plugin was evaluated using real sensor data and boundaries were adjusted until it accurately inferred location in over 60% of tests.
Engfi Gate: An Indoor Guidance System using Marker-based Cyber-Physical Augme...IJECEIAES
The document describes an indoor guidance system called Engfi Gate that uses augmented reality and markers. It consists of three subsystems: 1) a marker-based cyber-physical interaction system that connects the physical and digital environments using visible and invisible markers, 2) an indoor positioning system that tracks a user's location using visible markers or beacons, and 3) an augmented reality system that provides guidance information to users through their mobile device or head-mounted display. The system was implemented and tested on a university campus as a way to help new students navigate buildings.
The document discusses enhancing indoor localization using IoT techniques. It proposes a framework that uses a quaternion-based extended Kalman filter for heading estimation in pedestrian dead reckoning (PDR), along with low pass filtering and adaptive step length methodology. This approach achieved an average error of 0.16 meters, representing 0.07% of the total 210 meters traveled in experiments. The document also discusses using IoT devices to further improve indoor localization accuracy.
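A minimal sketch of a PDR position update with an adaptive step length; the Weinberg-style step-length model, the constant k, and the sample accelerations are assumptions standing in for the paper's adaptive step-length methodology and quaternion EKF heading:

```python
import math

def weinberg_step_length(a_max, a_min, k=0.5):
    """Weinberg-style adaptive step length from the per-step acceleration range."""
    return k * (a_max - a_min) ** 0.25

def pdr_update(x, y, heading_deg, step_length):
    """Advance the position estimate by one detected step along the heading."""
    h = math.radians(heading_deg)
    return x + step_length * math.cos(h), y + step_length * math.sin(h)

step = weinberg_step_length(a_max=12.0, a_min=9.0)   # m/s^2 extremes of one step
x, y = pdr_update(0.0, 0.0, heading_deg=90.0, step_length=step)
```

In the full system the heading would come from the filtered gyroscope/magnetometer estimate rather than a fixed angle, and the update repeats once per detected step.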
IRJET - Creating a Security Alert for the Care Takers Implementing a Vast Dee...IRJET Journal
This document presents a proposed system for creating a security alert for caregivers by implementing a vast deep learning model to recognize human activities and gestures. The system would collect a dataset of skeleton images of human actions and gestures. It would then train models using deep learning algorithms like AlexNet, VGG16, GoogleNet, and ResNet to accurately recognize activities and gestures. This would help monitor senior citizens and detect any health issues or untrustworthy individuals. The proposed system aims to optimize techniques such as stochastic gradient descent and regularizers like ReLU and ELU to increase prediction accuracy and provide low-cost, high-accuracy monitoring to improve senior citizen safety.
Context aware system for recognizing daily activitiesSakher BELOUADAH
In recent decades, human activity recognition has been the subject of a significant amount of research, enabling many applications in different areas such as time management, healthcare, and anomaly detection. Most of those works were based on using multiple special sensors, and few address complex activities. To solve those issues, we propose a context-aware system based on the combination of ontological reasoning, GPS mining using k-nearest neighbors, and a statistical recognition model using cascade neural networks. We first present some complex activity recognition models and discuss their limitations. A general architecture of our approach is then presented, along with a detailed description of each part of the system. Finally, we present the results obtained and discuss the system's limitations and ideas to be addressed in future work.
GPS (Global Positioning System) technology is widely known for its ability to track devices in real time. Combined with mobile phones, it has become a very powerful tool with great potential for the future development of mobile GPS applications. The Interactive ZooOz Guide was a final-year industry project
carried out by seven students from three separate courses to upgrade the Melbourne Zoo's mapping system through the use of GPS technology. The aim of the project was to explore the potential of using GPS in the zoo environment. The proposed system uses a PDA device
with a GPS receiver that tracks users' locations in real time as they tour the zoo. In this paper, GPS technology is briefly reviewed, the design and implementation of the Interactive ZooOz Guide is described, and the GUI (Graphical User Interface) is presented in detail.
Finally, conclusions are drawn from the proof-of-concept prototype.
A data mining approach for location prediction in mobile environments marwaeng
The document proposes a three-phase algorithm for predicting the next location of mobile users. In the first phase, mobility patterns are mined from historical user trajectory data. In the second phase, mobility rules are extracted from these patterns. In the third phase, predictions are made by matching mobility rules to a user's current trajectory. The algorithm aims to overcome limitations of prior work by discovering regular patterns in user movements and distinguishing between random and regular movements. A simulation evaluation found the proposed method achieved more accurate predictions than other methods.
This smart mirror project uses a Raspberry Pi, webcam, Walabot sensor, and reflective computer screen to build an interactive mirror. The Walabot detects breathing rate and a swiping gesture to control the mirror's display. A webcam takes pictures that are analyzed by a Microsoft API to extract facial features. All data is sent to a database via a custom API. The mirror's screen displays readings, news, and weather accessed from online APIs. Code modules include Walabot detection and image processing on the Raspberry Pi, APIs for facial recognition and online data, and a website to display information to the user.
This document summarizes research on using smartphones as inertial navigation systems (INS). It discusses how accelerometers and gyroscopes in smartphones can measure position, orientation, and velocity without external devices like GPS. However, smartphone INS have limitations like drift, bias, noise, and interference that filters can help address. The document reviews literature on INS applications in vehicles and pedestrians, as well as methods to integrate GPS and correct errors, such as Kalman and Markov filters. The goal of this research is to experiment with smartphone INS accuracy in various motions and implement filtering to optimize data collection for potential classroom or app applications.
Schematic model for analyzing mobility and detection of multipleIAEME Publication
The document discusses a schematic model for analyzing mobility and detecting multiple objects in traffic scenes. It aims to not only detect and count moving objects, but also understand crowd behavior and reduce issues with objects occluding each other. Previous work on object detection is reviewed, noting that most approaches do not integrate detecting multiple objects simultaneously or address problems of object occlusion. The proposed model uses background subtraction and unscented Kalman filtering to increase detection accuracy and reduce false positives when analyzing image sequences of traffic scenes to detect multiple moving objects. It was tested in MATLAB and results showed highly accurate detection rates.
IRJET - An Intelligent Pothole Detection System using Deep LearningIRJET Journal
This document describes a proposed intelligent pothole detection system using deep learning. The system would use a convolutional neural network trained on pothole image data to detect potholes in photos taken from a vehicle mounted camera. When potholes are detected, their locations would be stored in a cloud database. A mobile app would allow users to view the locations of detected potholes on a map. This would help automate pothole detection to assist road maintenance authorities and provide drivers with pothole location information. The proposed system aims to address the inefficiencies of manual pothole detection by automating the process using deep learning and cloud/mobile technologies.
This document describes a context-aware automatic traffic notification system for cell phones that can learn a user's common destinations and routes over time using location and context data. It collects GPS and other data from users, identifies important locations through clustering, learns frequent routes between locations, and can predict a user's destination and route to then notify them of any traffic conditions. The system is implemented on a mobile phone to provide automated traffic alerts to users during their daily commutes without needing to manually enter a destination.
Design and Implementation of a GPS based Personal Tracking SystemSudhanshu Janwadkar
Design and Implementation of a GPS based Personal Tracking System
Tracking based applications have been quite popular in recent times. Most of them have been limited to commercial applications such as vehicular tracking (e.g tracking of a train etc). However, not much work has been done towards design of a personal tracking system. Our Research work is an attempt to design such personal tracking system. In this paper, we have shared glimpses of our research work.
The objective of our research project is to design & develop a system which is capable of tracking and monitoring a person, object or any other asset of importance (called as target). The system uses GPS to determine the exact position of the target. The target is aided with a compact handheld device which consists of a GPS receiver and GSM modem. GPS receiver obtains location coordinates (viz. Latitude & Longitude) from GPS satellites. The location information in NMEA format is decoded, formatted and sent to control station, through a GSM modem. Due to use of Open CPU development platform, no external Microcontroller is required, with additional advantage of compact size product, reduced design & development time and reduced cost.
Thus, the proposed system is able to track the accurate location of target. This system finds applications in tracking old-age people, tracking animals in forest, tracking delivery of goods etc. Our final designed system is a small-size compact l.S"X3.7S" Tracker system with position accuracy error <30m (100 feet).
Similar to A real time filtering method of positioning data with moving window mechanism (20)
A trends of salmonella and antibiotic resistanceAlexander Decker
This document provides a review of trends in Salmonella and antibiotic resistance. It begins with an introduction to Salmonella as a facultative anaerobe that causes nontyphoidal salmonellosis. The emergence of antimicrobial-resistant Salmonella is then discussed. The document proceeds to cover the historical perspective and classification of Salmonella, definitions of antimicrobials and antibiotic resistance, and mechanisms of antibiotic resistance in Salmonella including modification or destruction of antimicrobial agents, efflux pumps, modification of antibiotic targets, and decreased membrane permeability. Specific resistance mechanisms are discussed for several classes of antimicrobials.
A transformational generative approach towards understanding al-istifhamAlexander Decker
This document discusses a transformational-generative approach to understanding Al-Istifham, which refers to interrogative sentences in Arabic. It begins with an introduction to the origin and development of Arabic grammar. The paper then explains the theoretical framework of transformational-generative grammar that is used. Basic linguistic concepts and terms related to Arabic grammar are defined. The document analyzes how interrogative sentences in Arabic can be derived and transformed via tools from transformational-generative grammar, categorizing Al-Istifham into linguistic and literary questions.
A time series analysis of the determinants of savings in namibiaAlexander Decker
This document summarizes a study on the determinants of savings in Namibia from 1991 to 2012. It reviews previous literature on savings determinants in developing countries. The study uses time series analysis including unit root tests, cointegration, and error correction models to analyze the relationship between savings and variables like income, inflation, population growth, deposit rates, and financial deepening in Namibia. The results found inflation and income have a positive impact on savings, while population growth negatively impacts savings. Deposit rates and financial deepening were found to have no significant impact. The study reinforces previous work and emphasizes the importance of improving income levels to achieve higher savings rates in Namibia.
A therapy for physical and mental fitness of school childrenAlexander Decker
This document summarizes a study on the importance of exercise in maintaining physical and mental fitness for school children. It discusses how physical and mental fitness are developed through participation in regular physical exercises and cannot be achieved solely through classroom learning. The document outlines different types and components of fitness and argues that developing fitness should be a key objective of education systems. It recommends that schools ensure pupils engage in graded physical activities and exercises to support their overall development.
A theory of efficiency for managing the marketing executives in nigerian banksAlexander Decker
This document summarizes a study examining efficiency in managing marketing executives in Nigerian banks. The study was examined through the lenses of Kaizen theory (continuous improvement) and efficiency theory. A survey of 303 marketing executives from Nigerian banks found that management plays a key role in identifying and implementing efficiency improvements. The document recommends adopting a "3H grand strategy" to improve the heads, hearts, and hands of management and marketing executives by enhancing their knowledge, attitudes, and tools.
This document discusses evaluating the link budget for effective 900MHz GSM communication. It describes the basic parameters needed for a high-level link budget calculation, including transmitter power, antenna gains, path loss, and propagation models. Common propagation models for 900MHz that are described include Okumura model for urban areas and Hata model for urban, suburban, and open areas. Rain attenuation is also incorporated using the updated ITU model to improve communication during rainfall.
A synthetic review of contraceptive supplies in punjabAlexander Decker
This document discusses contraceptive use in Punjab, Pakistan. It begins by providing background on the benefits of family planning and contraceptive use for maternal and child health. It then analyzes contraceptive commodity data from Punjab, finding that use is still low despite efforts to improve access. The document concludes by emphasizing the need for strategies to bridge gaps and meet the unmet need for effective and affordable contraceptive methods and supplies in Punjab in order to improve health outcomes.
A synthesis of taylor’s and fayol’s management approaches for managing market...Alexander Decker
1) The document discusses synthesizing Taylor's scientific management approach and Fayol's process management approach to identify an effective way to manage marketing executives in Nigerian banks.
2) It reviews Taylor's emphasis on efficiency and breaking tasks into small parts, and Fayol's focus on developing general management principles.
3) The study administered a survey to 303 marketing executives in Nigerian banks to test if combining elements of Taylor and Fayol's approaches would help manage their performance through clear roles, accountability, and motivation. Statistical analysis supported combining the two approaches.
A survey paper on sequence pattern mining with incrementalAlexander Decker
This document summarizes four algorithms for sequential pattern mining: GSP, ISM, FreeSpan, and PrefixSpan. GSP is an Apriori-based algorithm that incorporates time constraints. ISM extends SPADE to incrementally update patterns after database changes. FreeSpan uses frequent items to recursively project databases and grow subsequences. PrefixSpan also uses projection but claims to not require candidate generation. It recursively projects databases based on short prefix patterns. The document concludes by stating the goal was to find an efficient scheme for extracting sequential patterns from transactional datasets.
A survey on live virtual machine migrations and its techniquesAlexander Decker
This document summarizes several techniques for live virtual machine migration in cloud computing. It discusses works that have proposed affinity-aware migration models to improve resource utilization, energy efficient migration approaches using storage migration and live VM migration, and a dynamic consolidation technique using migration control to avoid unnecessary migrations. The document also summarizes works that have designed methods to minimize migration downtime and network traffic, proposed a resource reservation framework for efficient migration of multiple VMs, and addressed real-time issues in live migration. Finally, it provides a table summarizing the techniques, tools used, and potential future work or gaps identified for each discussed work.
A survey on data mining and analysis in hadoop and mongo dbAlexander Decker
This document discusses data mining of big data using Hadoop and MongoDB. It provides an overview of Hadoop and MongoDB and their uses in big data analysis. Specifically, it proposes using Hadoop for distributed processing and MongoDB for data storage and input. The document reviews several related works that discuss big data analysis using these tools, as well as their capabilities for scalable data storage and mining. It aims to improve computational time and fault tolerance for big data analysis by mining data stored in Hadoop using MongoDB and MapReduce.
1. The document discusses several challenges for integrating media with cloud computing including media content convergence, scalability and expandability, finding appropriate applications, and reliability.
2. Media content convergence challenges include dealing with the heterogeneity of media types, services, networks, devices, and quality of service requirements as well as integrating technologies used by media providers and consumers.
3. Scalability and expandability challenges involve adapting to the increasing volume of media content and being able to support new media formats and outlets over time.
This document surveys trust architectures that leverage provenance in wireless sensor networks. It begins with background on provenance, which refers to the documented history or derivation of data. Provenance can be used to assess trust by providing metadata about how data was processed. The document then discusses challenges for using provenance to establish trust in wireless sensor networks, which have constraints on energy and computation. Finally, it provides background on trust, which is the subjective probability that a node will behave dependably. Trust architectures need to be lightweight to account for the constraints of wireless sensor networks.
This document discusses private equity investments in Kenya. It provides background on private equity and discusses trends in various regions. The objectives of the study discussed are to establish the extent of private equity adoption in Kenya, identify common forms of private equity utilized, and determine typical exit strategies. Private equity can involve venture capital, leveraged buyouts, or mezzanine financing. Exits allow recycling of capital into new opportunities. The document provides context on private equity globally and in developing markets like Africa to frame the goals of the study.
This document discusses a study that analyzes the financial health of the Indian logistics industry from 2005-2012 using Altman's Z-score model. The study finds that the average Z-score for selected logistics firms was in the healthy to very healthy range during the study period. The average Z-score increased from 2006 to 2010 when the Indian economy was hit by the global recession, indicating the overall performance of the Indian logistics industry was good. The document reviews previous literature on measuring financial performance and distress using ratios and Z-scores, and outlines the objectives and methodology used in the current study.
A study to evaluate the attitude of faculty members of public universities of...Alexander Decker
This study evaluated faculty members' attitudes toward shared governance in public universities in Pakistan. It used a questionnaire to assess attitudes on 4 indicators of shared governance: the role of the dean, role of faculty, role of the board, and role of joint decision-making. The study analyzed responses from 90 faculty across various universities. Statistical analysis found significant differences in perceptions of shared governance based on faculty rank and gender. Faculty rank influenced perceptions of the dean's role and role of joint decision-making. Gender influenced overall perceptions of shared governance. The results indicate a need to improve shared governance practices in Pakistani universities.
A study to assess the knowledge regarding prevention of pneumonia among middl...Alexander Decker
1) The study assessed knowledge of pneumonia prevention among 60 middle-aged adults in rural Moodbidri, India. Most subjects (55%) had poor knowledge and 41.67% had average knowledge. The mean knowledge score was 40.66%.
2) Knowledge was lowest in areas of diagnosis, prevention and management (35.61%) and highest in introduction to pneumonia (45.42%).
3) There was a significant association between knowledge and gender but not other demographic factors like age, education level or occupation. The study concluded knowledge of prevention was low and health education is needed.
A study regarding analyzing recessionary impact on fundamental determinants o...Alexander Decker
This document analyzes the impact of fundamental factors on stock prices in India during normal and recessionary periods. It finds that during normal periods from 2000-2007, earnings per share had a positive and significant impact on stock prices, while coverage ratio had a negative impact. During the recession from 2007-2009, price-earnings ratio positively and significantly impacted stock prices, while growth had a negative effect. Overall, the study aims to compare the influence of fundamental factors like book value, dividends, earnings, etc. on stock prices during different economic conditions in India.
A study on would be urban-migrants’ needs and necessities in rural bangladesh...Alexander Decker
This document summarizes a study on the needs and necessities of potential rural migrants in Bangladesh and how providing certain facilities could encourage them to remain in rural areas. The study involved surveys of 350 local and non-local people across 7 upazilas to understand their satisfaction with existing services and priority of needs. The findings revealed variations in requirements between local and non-local respondents. Based on the analysis, the study recommends certain priority facilities, such as employment opportunities and community services, that should be provided in rural areas to improve quality of life and reduce migration to cities. Limitations include the small sample size not representing all of Bangladesh and difficulties collecting full information from all respondents.
A study on the evaluation of scientific creativity among scienceAlexander Decker
This study evaluated scientific creativity among 31 science teacher candidates in Turkey. The candidates were asked open-ended questions about scientific creativity and how they would advance science. Their responses showed adequate fluency and scientific knowledge, but low flexibility and originality. When asked to self-evaluate, most said their scientific creativity was partially adequate. The study aims to help improve the development of scientific creativity among future teachers.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
leewayhertz.com-AI in predictive maintenance Use cases technologies benefits ...alexjohnson7307
Predictive maintenance is a proactive approach that anticipates equipment failures before they happen. At the forefront of this innovative strategy is Artificial Intelligence (AI), which brings unprecedented precision and efficiency. AI in predictive maintenance is transforming industries by reducing downtime, minimizing costs, and enhancing productivity.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Letter and Document Automation for Bonterra Impact Management (fka Social Sol...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on automated letter generation for Bonterra Impact Management using Google Workspace or Microsoft 365.
Interested in deploying letter generation automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Generating privacy-protected synthetic data using Secludy and Milvus
Computer Engineering and Intelligent Systems www.iiste.org
ISSN 2222-1719 (Paper) ISSN 2222-2863 (Online)
Vol 3, No.7, 2012
A Real-time Filtering Method of Positioning Data with Moving Window Mechanism
Ha Yoon Song*, Han-gyoo Kim
Department of Computer Engineering, Hongik University, Sangsu-dong, Seoul 121-791, Korea
* E-mail of the corresponding author: hgkim@hongik.ac.kr
Abstract
Nowadays, advanced mobile devices can obtain their current position with the help of positioning systems such as GPS, GLONASS, and Galileo. However, positioning data sets usually contain erroneous data for various reasons, mainly environmental issues as well as inherent systematic issues. While doing research on positioning data sets, the authors encountered quite a large number of erroneous positioning data using Apple iPhone and Samsung Galaxy devices, and thus needed to filter out evident errors. In this paper, we suggest a relatively simple but efficient filtering method based on a statistical approach. From a user's mobile positioning data in the form <latitude; longitude; time> obtained by mobile devices, we can calculate the user's speed and acceleration. Using the idea of a sliding window (moving window), we can calculate statistical parameters from the speed and acceleration of the user's position data, so that filtering can be performed with controllable parameters. We expect that the simplicity of our algorithm allows it to run on portable mobile devices with low computation power. As a possible enhancement of our method, we focus on constructing a more precise window for better filtering. A backtracking interpolation step was added to replace erroneous data with proper estimations, in order to obtain a more precise estimate of the moving window. We also propose this filtering algorithm with interpolation as a basis for future investigation in the conclusion and future research section.
Keywords: Human Mobility, Error Filtering, Positioning Data, Moving Window, Sliding Window
1. Introduction
Recent advances in mobile devices enable various location-based services over human mobility, especially with the introduction of smart phones equipped with GPS or other positioning equipment. Applications of positioning systems can easily be found in various mobile devices, as shown in (Enescu 2008), including the educational field, as shown in (Tsing 2009). However, these positioning data sometimes contain position errors depending on the operational environment. In such cases, many applications require filtering of the erroneous positioning data. In our experiments, more than 12% of positioning data obtained from smart phones were erroneous. This basic experiment was done with a smart phone app on a Samsung Galaxy Tab, which internally uses the position of the cellular base station, a portable GPS device (Garmin), and an Apple iPhone 3GS with iOS 5, which uses a combination of crowd-sourced WiFi positioning, cellular networks, and GPS (iOS 5). More precise results can be found in Kim and Song (Kim, H 2011), along with research on human mobility models. Another line of research in complex-system physics showed that up to 93% of human mobility can be predicted, since people avoid randomly selecting their next destination and instead select frequented places and routes (Gonzales 2008). Sets of positioning data form a basis for human mobility model construction, as shown in (Kim, W 2011). In this paper, we propose a filtering technique which filters erroneous positioning data using a moving window approach. Section 2 presents our idea of the moving window together with pre-experiments for algorithm set-up. Section 3 presents the filtering algorithm and its detailed description. Section 4 discusses the user-controllable parameters for the experiment design and shows our experimental results. We conclude and discuss future research in Section 5.
2. Background
2.1 Idea on Moving Window
Collected user positions in the form <latitude; longitude; time> compose a user's mobile trace, and adding an identification parameter to each tuple represents the user's mobility data set. We call the tuple at time t P_t, the latitude of P_t lat_t, and the longitude of P_t lon_t. From two consecutive position tuples <lat_i; lon_i> and <lat_i-1; lon_i-1>, we can calculate the distance D_i moved at time P_i according to Vincenty's formula (Vincenty 1975). From two consecutive distances, we can in turn calculate the speed V_i and the acceleration a_i at time P_i. Therefore a tuple P_t has the core form <t; lat_t; lon_t; D_t; V_t; a_t>, with possible auxiliary attributes.
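The derivation of distance, speed, and acceleration from consecutive position tuples can be sketched as follows. This is a minimal illustrative sketch in Python, not the authors' implementation: for brevity it substitutes the haversine great-circle formula for Vincenty's formula used in the paper, and the function names are our own.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    R = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def derive_motion(trace):
    """Turn a list of (t, lat, lon) tuples into (t, lat, lon, D, V, a) tuples.

    D_i is the distance from the previous fix, V_i the speed over that leg,
    and a_i the change in speed per second; all are 0 for the first fix.
    """
    out = []
    prev_v = 0.0
    for i, (t, lat, lon) in enumerate(trace):
        if i == 0:
            out.append((t, lat, lon, 0.0, 0.0, 0.0))
            continue
        t0, lat0, lon0 = trace[i - 1][:3]
        dt = t - t0
        d = haversine_m(lat0, lon0, lat, lon)
        v = d / dt if dt > 0 else 0.0
        a = (v - prev_v) / dt if dt > 0 else 0.0
        out.append((t, lat, lon, d, v, a))
        prev_v = v
    return out
```

Each output tuple thus carries the core form <t; lat_t; lon_t; D_t; V_t; a_t> described above.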
Based on the speed values in an actual position data set, we found glitches in the speed series, such as 600 m/sec, which are meaningless in a usual everyday environment. We therefore investigated the maximum possible speed values of usual human mobility, as shown in Table 1, and define the maximum speed MAX_speed as 250 m/sec. In addition, these maximum values cannot be reached instantly, i.e. acceleration cannot change abruptly. Alongside MAX_speed, a MAX_acceleration can be defined analogously.
As the next step we introduce the moving average and moving standard deviation of speed. We define the moving average of speed at current time t, MAspeed(n), where n stands for the number of past data points {Px : t - n + 1 ≤ x ≤ t}. Similarly, we define the moving standard deviation at time t, MSDspeed(n), over the same n past data points. Here, n is commonly referred to as the window size. Once we obtain a new tuple Pt, we can determine whether Vt lies in the usual range of human mobility, and the same applies to at. If Vt is out of range for the normal distribution with mean MAspeed(n) and standard deviation MSDspeed(n), we discard Pt and filter this tuple out of the series of the human trace. The condition for filtering out Pt is:
Vt > MAspeed(n) + s × MSDspeed(n) (1)
where s stands for the sensitivity level of filtering and is a user-controllable parameter. Otherwise, we include the tuple Pt in the series as valid positioning data and recalculate MAspeed(n) and MSDspeed(n). Note that this calculation can be made in real time; we intentionally introduced this approach because it requires relatively simple computation. In other words, the algorithm can be executed on devices with low computing power such as smart phones or similar mobile devices. Our previous research includes similar work with more sophisticated statistical theory (Kim 2011); however, that approach is not well suited to real-time environments. Other examples of moving-window-based applications can be found in (Ucenic 2006) and (Wettayaprasit 2007).
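A minimal sketch of MAspeed(n), MSDspeed(n) and the test of condition (1), assuming speeds arrive one at a time (the class and method names are our own):

```python
from collections import deque
import statistics

class SpeedWindow:
    """Moving average / moving standard deviation of the last n speeds,
    with the outlier test of condition (1)."""
    def __init__(self, n, s):
        self.speeds = deque(maxlen=n)  # window of the n most recent speeds
        self.s = s                     # user sensitivity level

    def is_outlier(self, v):
        # Condition (1): V_t > MA_speed(n) + s * MSD_speed(n)
        if len(self.speeds) < 2:
            return False               # incomplete window: accept everything
        ma = statistics.fmean(self.speeds)
        msd = statistics.pstdev(self.speeds)
        return v > ma + self.s * msd

    def push(self, v):
        self.speeds.append(v)          # window statistics update implicitly
```

A tuple whose speed fails the test would be discarded; otherwise its speed is pushed into the window and the statistics reflect it on the next arrival.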
2.2 Pre-experiments on Window Size
We conducted experiments to examine the effect of window size. For these experiments, we used a set of positioning data over the area of Seoul, Korea, collected over more than 20 days. Note that these data were voluntarily collected by the authors of this paper using an iPhone 3GS with a positioning data collection app; we call it the iPhone data set. The app records position data whenever it senses a location change of the iPhone, or at every user-defined interval (3 to 60 seconds) if the iPhone is immobile. The positioning data set can be drawn on a geographical map with various techniques; among them, we chose Google Maps (Google) for visualization. The visualization of the raw data set is shown in Figure 1 and contains erroneous data. One notable phenomenon is that iOS 5 sometimes reports position data simultaneously from three different schemes; our guess is that cellular base station localization, crowd-sourced Wi-Fi positioning, and GPS sometimes report different positions at the same time. When multiple position values share the same timestamp, the position with the smallest speed value was empirically the correct one. Therefore the first, trivial stage of filtering is to choose, among multiple positions of the same time, the one with the smallest distance to the previous position. In addition, the data sets are composed of several discontinuous segments; for example, position data collection was impossible on the subway train (only at the stations could positioning data be collected), and there was no need to collect data while in bed at home.
In order to determine n, we must be more careful. We ran the algorithm over the data set with varying n in several sets of experiments. We can expect that a large window size cannot react to rapid speed changes, while it is useful in stable, immobile states, successfully coping with continuous errors. On the contrary, a small n reacts rapidly to abrupt speed changes in mobile states, but it may mistake such changes for continuous error tuples. For example, once we meet m continuous errors, we cannot filter them out if m > n. To verify this conjecture, we experimented with window sizes of 5, 10, 25, 50, and 100 over the iPhone data set, calculating the moving average and moving standard deviation for each. Figure 2 shows the
result of this simple experiment. It is clear that a larger window size is sluggish: once a very large speed value is met, a large window prolongs the effect of the erroneous speed, which leads to under-filtering of erroneous data. For window sizes of 100 or 50, this tailing effect is clearly visible in figure 2. On the contrary, a small window size is sensitive and reacts rapidly to speed changes, but it over-filters correct data, especially at the starting phase of a speed change. We observed that two or more plausible tuples were discarded with window sizes of 5 or 10 even though they looked like correct speed data. This phenomenon implies that we must introduce a throttling mechanism into the moving window in order to avoid the tailing effect of the window size.
2.3 Pre-experiments on Positioning Data Error
We also conducted a basic test, as our base experiment, to check positioning data accuracy as mentioned in section 1. We fixed positioning devices both in an outside area and inside a building, and collected positioning data for several hours without moving any device. The first positioning device was a Garmin GPSMAP 62s (Garmin) for pure GPS data collection. The second was a Samsung Galaxy Tab, which obtains positioning data from its connected 3G base stations (3GBS). We expected the Galaxy Tab to show larger errors in both situations, and both the GPS and 3GBS data sets to show positioning errors, especially inside the building. The results of this basic experiment are listed in table 2.
The variance in the position data is regarded as error, and the error distance can be calculated from the position data. As expected, 3GBS shows a larger error rate, larger mean error distance, larger maximum error distance, and larger standard deviation of error distance. Due to the vendor's policy for the Garmin GPSMAP 62s, which estimates the user's location from past velocity while the GPS signal is lost, it shows drastic error values inside the building. Thus we think GPS data collected inside a building are not meaningful. GPS data from the outside area are accurate enough for precise localization, and even the maximum error distance is within a reasonable range of 52 meters.
3. Filtering Algorithm
Following the considerations in section 2, we built an algorithm for filtering erroneous positioning data, shown in algorithm 1. Upon acquisition of a new position tuple Pi+1, the algorithm determines whether Pi+1 should be filtered out. In practice, we must consider several situations:
- Initial construction of the moving window: when fewer than n tuples exist, we cannot construct a complete moving average and moving standard deviation. Instead, we must work with an incomplete window containing fewer data points: lines 2 to 5 in algorithm 1.
- Both acceleration and speed are considered as throttling parameters, although they differ in filtering details. A positioning tuple with an out-of-range speed or an unreasonable acceleration will be filtered out: lines 6 to 8 in algorithm 1.
- When the speed of a tuple is too large, it can distort the MA and MSD values, expanding the confidence interval and erroneously admitting tuples that should be filtered. We therefore calibrate the speed value to MAspeed(n) + 2.57 × MSDspeed(n), which accommodates possibly rapid speed changes while avoiding erroneous expansion of the confidence interval: lines 9 to 11 in algorithm 1. The value s = 2.57 corresponds to a 99.5% confidence interval of the normal distribution. This throttling reduces the effect of an erroneous speed on the moving window: s99.5 in lines 9 and 10 of algorithm 1.
- Window construction: even if a tuple is filtered out, we include its speed in the window unless it lies outside the 99.5% confidence interval of the normal distribution. This keeps the moving window up to date in order to cope with rapid speed changes, i.e. a genuine tuple may be filtered out during a rapid speed change; in such cases, even though the tuple was filtered, the window still reflects the change of speed for subsequent incoming tuples: line 10 in algorithm 1.
- Speeds below 2.77 m/s (10 km/h) are never filtered, since human ambulation at such speeds is always possible and the values lie within the GPS error range: MINvelocity in line 9 of algorithm 1.
- Tuples with unrealistic acceleration must be filtered. As reported in (List), 10.8 m/s² is currently the largest value, achieved by sports cars: MAXacceleration in line 12 of algorithm 1.
- Once a tuple is filtered out due to excessive acceleration, its acceleration value is forced to MAXacceleration and its speed value is forced to MAspeed(n) in order to nullify the effect of the unrealistic speed value: lines 12 to 16 in algorithm 1.
- Tuples with positive acceleration values above MAXacceleration are regarded as errors, while tuples with negative acceleration values are not, since it is always possible for a vehicle to stop abruptly with a large negative acceleration.
Algorithm 1. Moving Window Filtering at real-time t
Require: P0 ▷ At least one initial tuple is required
Require: window size n
Require: user sensitivity level s
Ensure: Check validity of new position tuple
Ensure: Calibrated series of tuples {Pi : t ≥ i > 0} for t inputs
Require: i = 0
1: repeat Get Pi+1 ▷ Acquisition of new tuple, if it exists
2: Construct MAspeed(n) with {Px : max(i-n+1, 0) ≤ x ≤ i}
3: Construct MSDspeed(n) with {Px : max(i-n+1, 0) ≤ x ≤ i}
4: Set MAspeed = MAspeed(n)
5: Set MSDspeed = MSDspeed(n) ▷ Moving Window Construction
6: if (Vi+1 > MAspeed + s × MSDspeed) OR (ai+1 ≥ MAXacceleration) then ▷ Filtering
7: Mark Pi+1 as filtered
8: end if
9: if (Vi+1 ≥ MAspeed + s99.5 × MSDspeed) AND (Vi+1 > MINvelocity) then ▷ Calibration of Speed
10: Set Vi+1 = MAspeed + s99.5 × MSDspeed
11: end if
12: if ai+1 ≥ MAXacceleration then ▷ Restriction by Maximum Acceleration
13: Mark Pi+1 as filtered
14: Set Vi+1 = MAspeed
15: Set ai+1 = MAXacceleration
16: end if
17: Set i = i + 1
18: until no more positioning tuples arrive
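Algorithm 1 can be transcribed into Python roughly as follows. This is a sketch under our own representation of tuples (dicts carrying speed 'V' in m/s and acceleration 'a' in m/s²); the constants are those given in the text:

```python
import statistics
from collections import deque

# Constants from the paper (user-adjustable)
MAX_ACCELERATION = 10.8  # m/s^2, fastest production cars
MIN_VELOCITY = 2.77      # m/s (10 km/h): speeds below this are never filtered
S_995 = 2.57             # one-sided 99.5% level of the normal distribution

def moving_window_filter(tuples, n=10, s=1.16):
    """Sketch of Algorithm 1: mark outliers and calibrate speeds."""
    window = deque(maxlen=n)  # window of calibrated speeds
    out = []
    for p in tuples:
        p = dict(p)
        p["filtered"] = False
        if len(window) >= 2:  # lines 2-5: need a (possibly incomplete) window
            ma = statistics.fmean(window)
            msd = statistics.pstdev(window)
            # lines 6-8: filter on out-of-range speed or excessive acceleration
            if p["V"] > ma + s * msd or p["a"] >= MAX_ACCELERATION:
                p["filtered"] = True
            # lines 9-11: throttle gigantic speeds before they enter the window
            if p["V"] >= ma + S_995 * msd and p["V"] > MIN_VELOCITY:
                p["V"] = ma + S_995 * msd
            # lines 12-16: excessive acceleration nullifies the speed
            if p["a"] >= MAX_ACCELERATION:
                p["filtered"] = True
                p["V"] = ma
                p["a"] = MAX_ACCELERATION
        window.append(p["V"])  # line 10: window is built from calibrated speeds
        out.append(p)
    return out
```

For example, a 600 m/s glitch following a run of 5 m/s readings would be marked as filtered and its speed capped at the window's 99.5% bound, so the glitch never contaminates the statistics seen by later tuples.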
4. Experiments
4.1 Experiment Design
The algorithm of section 3 leaves two parameters unspecified: n, the number of tuples in the window, and s, the sensitivity level of filtering. Both parameters can be specified by the user of the algorithm. The sensitivity level s is relatively simple to determine: from the properties of the normal distribution, we can obtain s for a desired confidence level. Since we use only the positive tail of the normal distribution for filtering, we can set s = 1.64 for a confidence level of 95% and s = 2.33 for a confidence level of 99%. Users can determine s for their own purposes. For example, we chose the sensitivity level as follows. Table 2 shows typical error rates for position data collection while the device is immobile: for GPS, 12.3% of the data were erroneous, and for the 3GBS cellular positioning system, 36.75% of the data had errors. We can therefore choose s = 1.16 for GPS data or s = 0.34 for cellular positioning data in our experiments.
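The mapping from an observed error rate to s is simply the one-sided quantile of the standard normal distribution; Python's statistics.NormalDist reproduces the values used above (the function name is our own):

```python
from statistics import NormalDist

def sensitivity_from_error_rate(error_rate):
    """One-sided normal quantile: a fraction (1 - error_rate) of the data
    falls below MA_speed(n) + s * MSD_speed(n)."""
    return NormalDist().inv_cdf(1.0 - error_rate)

# GPS: 12.3% erroneous -> s ~ 1.16; 3GBS: 36.75% erroneous -> s ~ 0.34
s_gps = sensitivity_from_error_rate(0.123)
s_3gbs = sensitivity_from_error_rate(0.3675)
```

The same function yields s = 1.64 for the 95% level and s = 2.33 for the 99% level quoted earlier.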
4.2 Reconsideration of Window Size
We already discussed the effect of window size in section 2. Due to the tailing effect of large windows, we may prefer a smaller window. However, algorithm 1 calibrates incorrect speed values; abnormal acceleration values are also restricted, with the speed value replaced by the moving-window average speed. With this calibration mechanism in place, we need to examine the effect of window size again. Figure 3 shows the effect of window size under our
calibration mechanism. The x-axis stands for wall-clock time on the 11th of November, 2011. We chose window sizes n = 5, 10, 25, 50, 100 and sensitivity level s = 1.16, which corresponds to regarding 88% of positioning data as correct. Even though the tailing effect is restricted, smaller windows react more flexibly to speed changes. Thus a window size of 5 or 10 is the better choice.
One more consideration is the effect of consecutive errors. Even with the speed-throttling mechanism, consecutive errors still affect the moving average and moving standard deviation, and thus confuse our filtering algorithm. We encountered up to four consecutive errors in the real positioning data set and thus concluded that n = 5 is insufficient for our experimental environment, while n = 10 covers the consecutive errors and still reduces the tailing effect of larger window sizes. Thus our final choice of window size for the main experiment is 10. Of course, if more consecutive errors are encountered, a larger window size can be chosen, or the window size can be increased dynamically, as we will see in section 5.
Figure 4 shows the progression of the filtering algorithm. The same positioning data set as in figure 3 was chosen for fair comparison; the x-axis again stands for wall-clock time on the 11th of November, 2011. For figure 4, the window size is n = 10 and the sensitivity level is s = 1.16, which corresponds to regarding 88% of positioning data as correct. The thin black solid line shows the change of real speed in m/s, and the thin gray dashed line shows the speed calibrated by our filtering algorithm. The calibrated speed usually overlaps the real speed, deviating only when a tuple is filtered by the algorithm. The dotted line shows the acceleration values.
The thick black solid line denotes the coverage (acceptance range of speed) of the moving window with n = 10. Note that the moving window shown in the figure is based on calibrated speed values; it reacts rapidly to speed changes while successfully filtering erroneous tuples. The double-dotted line shows the coverage of the moving window without speed calibration (raw coverage). Comparing the calibrated and raw coverage, the effect of speed calibration is clear: speed calibration successfully suppresses the trailing of the moving window caused by gigantic speed errors. Thus the moving window composed of calibrated speeds eliminates the effect of speed errors and keeps a proper estimate of the positioning tuple values.
Figure 5 shows the progression of the filtering algorithm similarly to figure 4, with sensitivity level s = 0.34 and all other conditions the same.
4.3 Filtering Results
For the final presentation of our filtering experiment, we use two kinds of representation.
First, we ran the filter over our whole positioning data set and represent the result on a real map, again visualized with Google Maps (Google). Figure 6 shows the filtering result for n = 10 and s = 0.34. Figure 7 shows the filtering result for the 88% confidence level with n = 10. The collector of the positioning data set could not find errors in this figure, although it retains more positioning data than the result shown in figure 6. Therefore, we conclude that proper selection of window size and sensitivity level leads to adequate filtering results.
Second, we ran the filter with various combinations of window size and sensitivity level over the whole positioning data set. Table 3 shows the percentage of filtered-out tuples for each parameter combination. Users of our algorithm may choose window size and sensitivity level according to table 3 for their own environment.
5. Conclusion
In this research we built an algorithm for filtering erroneous position data and examined the combined effect of window size and sensitivity level. A real set of positioning data collected by the authors was used for algorithm verification, and we found generally successful filtering results. Several parameters of the algorithm must be defined by the user, such as window size, sensitivity level, maximum speed, and maximum acceleration. A user may even change the constant parameters of the algorithm, such as MAXacceleration, s99.5 (maximum sensitivity level), and MINvelocity (minimum speed threshold for filtering).
While investigating the filtering process step by step, we found several minor weaknesses of our algorithm. The first is that our algorithm cannot work at the starting phase of data collection, because at the stage of initial window
construction there are not enough tuples to fill the whole window. We think this is an unavoidable drawback of every approach based on a moving window.
The second problem is a tendency toward over-filtering and under-filtering. A new tuple arriving with a rapid increase of speed will be filtered out regardless of its correctness. This tendency is pronounced for large window sizes, since a large window cannot react fast enough to catch the rapid change of speed. To compensate, we include the velocity of the filtered tuple to preserve the coverage of the window, unless the change of speed lies outside the 99% confidence interval of the normal distribution; in that case, we calibrate the velocity before it enters the future moving window. As noticed in Figure 3, the tendency toward under-filtering is clear for larger window sizes.
With smaller windows we cannot filter when the number of consecutive errors exceeds the window size. As we experienced four consecutive errors in our data, we think n = 10 is a better choice than n = 5. Other benefits of a small window size are the small computation time and small memory footprint for moving window construction, so that the algorithm can work in real time on mobile devices with low computational power.
Several further considerations remain. The first concerns window size. We could express the window size as a time duration instead of a number of tuples; this would be effective when position data are collected regularly, and somewhat more accurate, since speed is a function of time. Another thought on window size is dynamic calibration: we could increase or decrease the window size dynamically according to the number of consecutive errors. When a larger number of consecutive errors is found, we can increase the window size to minimize their effect on the moving average and moving standard deviation; with fewer consecutive errors, we can decrease the window size to react more promptly to rapid speed changes and spend less computation on filtering.
The second consideration is a pseudo-real-time algorithm rather than the real-time one of algorithm 1. For a window size n, we can decide the filtering of the n-th tuple in a window instead of the (n+1)-th tuple. Even though it cannot be used in real time, this approach can reduce the tendency toward under-filtering and over-filtering. Another idea for enhancement is the introduction of interpolation. Algorithm 2 adds interpolation operations to algorithm 1: extra stages for a better approximation of the moving window statistics. Upon the arrival of a new tuple, we replace the velocity of the last tuple in the existing window with a linearly interpolated value whenever that tuple is found marked as filtered. This interpolation gives a more precise approximation of the moving window for filtering purposes: lines 17 to 20 in algorithm 2. In other words, the marked last tuple in a window is interpolated whenever a new tuple is obtained, in the final part of algorithm 2. A further variant with more precise approximation is to interpolate the n-th tuple using all n tuples in the window; for better estimation, interpolating the middle tuple of a window using an asymptotic curve estimated from the n tuples would enable even more precise interpolation. However, this could introduce computational overhead that makes application of the algorithm on mobile devices difficult. We consider these enhancements of the filtering algorithm as our next research, where we will examine the effect of speed interpolation on filtering accuracy.
Algorithm 2. Moving Window Construction with Interpolation
Require: P0 ▷ At least one initial tuple is required
Require: window size n
Require: user sensitivity level s
Ensure: Check validity of new position tuple
Ensure: Calibrated series of tuples {Pi : t ≥ i > 0} for t inputs
Require: i = 0
1: repeat Get Pi+1 ▷ Acquisition of new tuple, if it exists
2: Construct MAspeed(n) with {Px : max(i-n+1, 0) ≤ x ≤ i}
3: Construct MSDspeed(n) with {Px : max(i-n+1, 0) ≤ x ≤ i}
4: Set MAspeed = MAspeed(n)
5: Set MSDspeed = MSDspeed(n) ▷ Moving Window Construction
6: if (Vi+1 > MAspeed + s × MSDspeed) OR (ai+1 ≥ MAXacceleration) then ▷ Filtering
7: Mark Pi+1 as filtered
8: end if
9: if (Vi+1 ≥ MAspeed + s99.5 × MSDspeed) AND (Vi+1 > MINvelocity) then ▷ Calibration of Speed
10: Set Vi+1 = MAspeed + s99.5 × MSDspeed
11: end if
12: if ai+1 ≥ MAXacceleration then ▷ Restriction by Maximum Acceleration
13: Mark Pi+1 as filtered
14: Set Vi+1 = MAspeed
15: Set ai+1 = MAXacceleration
16: end if
17: if Pi marked as filtered then ▷ Linear Interpolation
18: Set Vi = (Vi+1 - Vi-1) × (ti - ti-1) / (ti+1 - ti-1) + Vi-1
19: Mark Pi as interpolated
20: end if
21: Set i = i + 1
22: until no more positioning tuples arrive
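The interpolation step in line 18 of algorithm 2 is ordinary linear interpolation of the filtered middle speed between its two neighbours; as a standalone sketch (the function name is our own):

```python
def interpolate_speed(t_prev, v_prev, t_cur, t_next, v_next):
    """Line 18 of algorithm 2:
    V_i = (V_{i+1} - V_{i-1}) * (t_i - t_{i-1}) / (t_{i+1} - t_{i-1}) + V_{i-1}
    i.e. the speed at t_cur read off the straight line between the
    neighbouring tuples at t_prev and t_next."""
    return (v_next - v_prev) * (t_cur - t_prev) / (t_next - t_prev) + v_prev
```

For instance, with neighbours at t = 0 s (0 m/s) and t = 10 s (10 m/s), a filtered tuple at t = 5 s would be assigned 5 m/s.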
Finally, we need to investigate the effect of the probability distribution used for filtering. In general, the normal distribution is the usual candidate for modeling various sources of error. However, research has shown that human mobility patterns follow a heavy-tailed distribution such as a Levy walk (Gonzalez 2008). We will therefore examine the effect of a Levy walk model on filtering, since the distribution of positioning data is likely to be of Levy-walk form.
References
Enescu, N., Mancas, D., and Manole, E. (2008), "Locating GPS Coordinates on PDA," 8th WSEAS International Conference on Applied Informatics and Communications (AIC08), Rhodes, Greece, August 20-22, pp. 470-474.
Tsing, T., and Sandoval, F. (2009), "Initiation to GPS localization and navigation using a small-scale model electric car: An illustration of learning by project for graduate students," Proceedings of the 8th WSEAS International Conference on Education and Educational Technology, pp. 21-26.
Garmin GPSMAP 62s, Available: https://buy.garmin.com/shop/shop.do?pID=63801
iOS 5: "Understanding Location Services," Available: http://support.apple.com/kb/ht4995
Kim, H., and Song, H. (2011), "Life Mobility of A Student: From Position Data to Human Mobility Model through Expectation Maximization Clustering," Communications in Computer and Information Science, Vol. 263, pp. 88-97.
Gonzalez, M.C., Hidalgo, A., and Barabasi, A. (2008), "Understanding individual human mobility patterns," Nature, Vol. 453, pp. 779-782.
Vincenty, T. (1975), "Direct and Inverse Solutions of Geodesics on the Ellipsoid with Application of Nested Equations," Survey Review, Vol. 23, No. 176, pp. 88-93.
Kim, W., and Song, H. (2011), "Optimization Conditions of OCSVM for Erroneous GPS Data Filtering," Communications in Computer and Information Science, Vol. 263, pp. 62-70.
Ucenic, C., and George, A. (2006), "A Neuro-fuzzy Approach to Forecast the Electricity Demand," Proceedings of the 2006 IASME/WSEAS International Conference on Energy & Environmental Systems, Chalkida, Greece, pp. 299-304.
Wettayaprasit, W., Laosen, N., and Chevakidagarn, S. (2007), "Data Filtering Technique for Neural Networks Forecasting," Proceedings of the 7th WSEAS International Conference on Simulation, Modelling and Optimization, Beijing, China, pp. 225-230.
Google Maps API, Available: https://developers.google.com/maps/
List of fastest production cars by acceleration, Available: http://en.wikipedia.org/wiki/List_of_fastest_production_cars_by_acceleration
Ha Yoon Song received his B.S. degree in Computer Science and Statistics in 1991 and his M.S. degree in Computer Science in 1993,
both from Seoul National University, Seoul, Korea. He received his Ph.D. degree in Computer Science from the University of California at Los Angeles, USA, in 2001. Since 2001 he has worked in the Department of Computer Engineering, Hongik University, Seoul, Korea, where he is now an associate professor. During his sabbatical year in 2009, he was a visiting scholar at the Institute of Computer Technology, Vienna University of Technology, Austria. Prof. Song's research interests are in the areas of mobile computing, performance analysis, complex system simulation, and human mobility modeling.
Han-gyoo Kim received his B.S. degree in 1981 from Seoul National University. He received his Ph.D. degree in Computer Science from the University of California at Berkeley in 1994. He is now an associate professor in the Department of Computer Engineering, Hongik University, Seoul, Korea. He has led a number of national research projects in Korea since 1994. His research areas include networked systems, network storage, and cloud computing.
Figure 1. A Trail of Positioning Data Set
Figure 2. Effects of Different Window Size
Figure 3. Effects of Different Window Size with s = 1.16 and Calibrated Speeds
Figure 4. Filtering Investigation with s=1.16 and n=10
Figure 5. Filtering Investigation with s=0.34 and n=10
Figure 6. Trail of Filtered Positioning Data with n=10 and s=0.34 (63% confidence interval)
Figure 7. Trail of Filtered Positioning Data with n=10 and s=1.16 (88% confidence interval)
Table 1. Maximum speed of transportation methods
Transportation Method     Maximum Speed (m/s)
Ambulation                3.00
Bicycle                   33.33
Automobile                92.78
Sports Car                244.44
High-speed Train          159.67
Airplane                  528.00
Table 2. Typical Errors in Positioning Data Acquisition (unit: meters)
                          3G Base Station    GPS
Inside a Building
  n(Data Point)           893                2186
  n(Error Point)          434                939
  Error Rate              48.6%              43.0%
  E[Error Dist]           52.5530 m          43.5506 m
  Max(Error Dist)         156.7578 m         10769.72 m
  σ(Error Dist)           32.6859 m          370.6034 m
Outside a Building
  n(Data Point)           331                1690
  n(Error Point)          122                208
  Error Rate              36.9%              12.3%
  E[Error Dist]           52.6618 m          4.4498 m
  Max(Error Dist)         206.3526 m         51.7789 m
  σ(Error Dist)           23.5953 m          7.1696 m
Table 3. Ratio (%) of Filtered-out Tuples with respect to Sensitivity Level
Size of Sliding Window s = 0.34 s = 1.16 s = 1.64 s = 2.33
5 42.20 29.07 24.06 19.31
10 38.67 24.10 18.81 14.31
25 37.37 21.43 16.07 11.85
50 37.71 20.54 15.23 11.07
100 37.05 20.03 14.90 10.65