This document presents a novel algorithm for object tracking in video frames based on position vectors. The algorithm first extracts position vectors for the object in the first frame. It then tracks the object across subsequent frames by cropping each new frame into blocks based on shifted position vectors and performing block matching between frames using feature vectors extracted by discrete wavelet transform (DWT) or dual tree complex wavelet transform (DTCWT). Experimental results on video sequences show the algorithm using DTCWT achieves higher tracking precision (95%) compared to DWT (92%). The algorithm is computationally efficient and can track multiple moving and still objects.
This document summarizes an object tracking algorithm that uses Dual Tree Complex Wavelet Transform (DTCWT) feature vectors. It begins by extracting a position vector for the object in the first frame. Nine position vectors are then calculated by shifting the original position in different directions. The second frame is cropped into nine blocks based on these position vectors. DTCWT is used to extract feature vectors for the block in the first frame and nine blocks in the second frame. Block matching is performed by calculating the Manhattan distance between the first frame's feature vector and each of the second frame's nine vectors. The block with the minimum distance corresponds to the tracked object. This process is repeated over successive frames to track the moving object.
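The block-matching core of this procedure can be sketched in a few lines of Python. The DTCWT feature extraction itself is omitted; the short lists below are toy stand-ins for real wavelet feature vectors, and only three of the nine candidate blocks are shown:

```python
def manhattan(u, v):
    """Manhattan (L1) distance between two feature vectors."""
    return sum(abs(a - b) for a, b in zip(u, v))

def best_match(ref_features, candidate_features):
    """Index of the candidate block whose feature vector is closest
    (minimum Manhattan distance) to the reference block's vector."""
    distances = [manhattan(ref_features, c) for c in candidate_features]
    return distances.index(min(distances))

# Reference block from frame 1 and candidate blocks cropped from
# frame 2 at the shifted positions.
ref = [0.9, 0.1, 0.4]
candidates = [[0.2, 0.8, 0.3],
              [0.9, 0.2, 0.4],   # nearly identical to ref
              [0.5, 0.5, 0.5]]
print(best_match(ref, candidates))  # -> 1
```

The winning index identifies the shift direction; its position vector becomes the object's position for the next frame, and the process repeats.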
Unsupervised Multispectral Image Classification by Fuzzy Hidden Markov Chains... (CSCJournals)
This paper deals with unsupervised classification of multi-spectral images; we propose a new vectorial fuzzy version of Hidden Markov Chains (HMC). The main characteristic of the proposed model is to allow the coexistence of crisp pixels (obtained with the uncertainty measure of the model) and fuzzy pixels (obtained with the fuzzy measure of the model) in the same image. Crisp and fuzzy multi-dimensional densities can then be estimated in the classification process, according to the assumption used to model the statistical links between the layers of the multi-band image. The efficiency of the proposed method is illustrated on synthetic and real SPOT-HRV images of the region of Rabat. A comparison of the two methods, fuzzy HMC and classical HMC, is also provided; the classification results show the interest of the fuzzy HMC method.
This document provides a 3-paragraph summary of support vector machines (SVMs). It begins with a brief history of SVMs from the 1990s work of Russian mathematicians Vapnik and Chervonenkis. It then explains how SVMs find the optimal separating hyperplane for classification problems, including methods for non-linear and non-separable data using kernel functions and cost functions. It concludes by noting applications of SVMs include text recognition, face detection, gene expression analysis, and music information retrieval tasks.
This document describes a new deep learning model called Convolutional eXtreme Gradient Boosting (ConvXGB) that combines a Convolutional Neural Network (CNN) and XGBoost for classification problems. ConvXGB consists of stacked convolutional layers for feature learning, followed by XGBoost in the last layer for class prediction. Experiments on image and general datasets showed ConvXGB achieved slightly better accuracy than CNN and XGBoost alone, and was sometimes significantly better.
The document discusses the relationship between pixels in an image, including pixel neighborhoods and connectivity. It defines different types of pixel neighborhoods - the 4 nearest neighbors, 8 nearest neighbors including diagonals, and boundary pixels that have fewer than 8 neighbors. Connectivity refers to whether two pixels are adjacent or connected based on their intensity values and neighborhood relationships. Specifically, it describes 4-connectivity, 8-connectivity, and m-connectivity. Regions in an image are sets of connected pixels, while boundaries separate adjacent regions.
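The neighbourhood definitions above are easy to make concrete. A minimal sketch (coordinates are (row, col); function names are illustrative):

```python
def n4(p):
    """4-neighbours N4(p): the pixels above, below, left and right."""
    r, c = p
    return {(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)}

def n8(p):
    """8-neighbours N8(p): N4(p) plus the four diagonal neighbours."""
    r, c = p
    diag = {(r - 1, c - 1), (r - 1, c + 1), (r + 1, c - 1), (r + 1, c + 1)}
    return n4(p) | diag

def inside(p, h, w):
    """True if pixel p lies within an h x w image."""
    return 0 <= p[0] < h and 0 <= p[1] < w

# A corner pixel of a 3x3 image keeps only 3 of its 8 neighbours,
# illustrating why boundary pixels have fewer than 8 neighbours.
print(len([q for q in n8((0, 0)) if inside(q, 3, 3)]))  # -> 3
```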
Lecture 1: Introduction to Image Processing (VARUN KUMAR)
This document introduces image processing and provides an overview of the topic. It explains that image processing operates on digital images, whose fundamental building block is the pixel. It then lists some key applications of image processing in fields like medical science, agriculture, and entertainment. Finally, it outlines some challenges of image processing, like segmentation, noise filtering, and image enhancement.
Background Estimation Using Principal Component Analysis Based on Limited Mem... (IJECEIAES)
Given a video of M frames of size h × w, the background components are the matrix elements that remain relatively constant over the M frames. In principal component analysis (PCA), these elements correspond to the "principal components". In video processing, background subtraction means removing the background component from the video; here the PCA method is used to obtain that component. The method reshapes the 3-dimensional video (h × w × M) into a 2-dimensional matrix (N × M), where N = h × w is the number of pixels per frame. The principal components are the dominant eigenvectors, which form the basis of an eigenspace. A limited-memory block Krylov subspace optimization is then proposed to improve the performance of the computation. The background estimate is obtained by projecting each input image (the first frame of each image sequence) onto the space spanned by the principal components. The procedure was run on the standard SBI (Scene Background Initialization) dataset, consisting of 8 videos with resolutions between 146 × 150 and 352 × 240 and frame counts between 258 and 500. Performance is reported with 8 metrics; on average over the 8 videos, the percentage of error pixels is 0.24%, the percentage of clustered error pixels 0.21%, the multiscale structural similarity index 0.88 (of a maximum of 1), and the running time 61.68 seconds.
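The link between dominant eigenvectors and principal components can be illustrated with plain power iteration. Note this is only the simplest way to obtain such an eigenvector, not the limited-memory block Krylov method the paper proposes; the toy covariance matrix below is invented for illustration:

```python
def dominant_eigenvector(A, iters=100):
    """Power iteration: dominant eigenvector of a square matrix A.
    For a covariance matrix this is the first principal component."""
    v = [1.0] * len(A)
    for _ in range(iters):
        # Multiply A @ v, then renormalise to unit length.
        w = [sum(A[i][j] * v[j] for j in range(len(v)))
             for i in range(len(A))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy 2-pixel "video" covariance: the first pixel varies 4x more,
# so the principal component points along the first axis.
cov = [[4.0, 0.0],
       [0.0, 1.0]]
print([round(abs(x), 3) for x in dominant_eigenvector(cov)])  # -> [1.0, 0.0]
```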
Lecture 2: Introduction to Digital Image (VARUN KUMAR)
This document provides an introduction to digital images and image processing. It discusses the need for digitization of images, including that digital signals are less affected by noise and interference and can be stored at low cost. It describes the process of digitization, which involves sampling an analog signal to obtain discrete values, and quantizing the signal levels into a finite set of values. Finally, it discusses key aspects of digital images like sampling rate, bandwidth, and reconstructing an image from its sampled and quantized values.
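The two digitization steps, sampling and quantization, can be sketched directly. The signal, sample rate, and level count below are arbitrary illustration values:

```python
import math

def sample(signal, duration, rate):
    """Sample a continuous-time signal at `rate` samples per second."""
    return [signal(k / rate) for k in range(int(duration * rate))]

def quantize(x, levels, lo=-1.0, hi=1.0):
    """Snap each sample to the nearest of `levels` uniform levels in [lo, hi]."""
    step = (hi - lo) / (levels - 1)
    return [lo + round((v - lo) / step) * step for v in x]

# Digitize one period of a 1 Hz sine wave: 8 samples, 4 quantization levels.
samples = sample(lambda t: math.sin(2 * math.pi * t), 1.0, 8)
digital = quantize(samples, 4)
print(digital)
```

Increasing the sampling rate improves temporal resolution, while increasing the number of levels reduces quantization error; both increase storage cost.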
11. Digital image processing for camera application in mobile devices using ar... (Alexander Decker)
This document discusses using artificial neural networks for digital image processing on mobile devices. It proposes training a neural network using sample input and output images to generate a "function matrix" that can then process other images in real-time on mobile devices. The neural network has 9 input nodes, 9 hidden nodes, and 9 output nodes arranged to process 3x3 pixel sections of images. It is trained using backpropagation to modify weights to match sample output images. This allows mobile devices to perform effects like edge detection without large pre-defined processing matrices.
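The 9-input/9-output arrangement over 3x3 patches can be sketched without the training step. The identity weight matrix below is a hypothetical stand-in for a learned "function matrix", and the layer is shown linear for brevity:

```python
def patches_3x3(img):
    """Yield the flattened 9-pixel neighbourhood of every interior pixel."""
    h, w = len(img), len(img[0])
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            yield [img[r + dr][c + dc] for dr in (-1, 0, 1) for dc in (-1, 0, 1)]

def forward(patch, weights):
    """One layer of the network: 9 inputs -> 9 outputs (linear, no bias)."""
    return [sum(w * x for w, x in zip(row, patch)) for row in weights]

# Identity "function matrix" as a stand-in for backprop-trained weights:
# each output node simply copies the matching input pixel.
identity = [[1 if i == j else 0 for j in range(9)] for i in range(9)]
img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
patch = next(patches_3x3(img))
print(forward(patch, identity))  # -> [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

A trained weight matrix would instead encode an effect such as edge detection, applied patch by patch across the image.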
Review of Diverse Techniques Used for Effective Fractal Image Compression (IRJET Journal)
This document reviews different techniques for fractal image compression to enhance compression ratio while maintaining image quality. It discusses algorithms like quadtree partitioning with Huffman coding (QPHC), discrete cosine transform based fractal image compression (DCT-FIC), discrete wavelet transform based fractal image compression (DWTFIC), and Grover's quantum search algorithm based fractal image compression (QAFIC). The document also analyzes works applying these techniques and concludes that combining QAFIC with the tiny block size processing algorithm may further improve compression ratio with minimal quality loss.
Formulating Semantic Image Annotation as a Supervised Learning Problem (mhmt82)
This document discusses different approaches to automatically annotating images with semantic keywords:
1) Supervised one-vs-all (OVA) labeling trains independent binary classifiers for each keyword. It requires estimating densities for images both with and without the keyword, which is computationally expensive.
2) Unsupervised labeling models images as mixtures of hidden states, each defining a joint distribution over keywords and image features. It provides a natural ranking of keywords but the annotations are not optimal for recognition.
3) The proposed supervised multi-class labeling approach directly models keywords as image classes, without requiring prior image segmentation. It combines advantages of the unsupervised and supervised OVA approaches, having lower computational complexity while producing recognition-optimal annotations.
Image Segmentation Using Two Weighted Variable Fuzzy K Means (Editor IJCATR)
Image segmentation is the first step in image analysis and pattern recognition. It is the process of dividing an image into different regions such that each region is homogeneous. Accurate and effective segmentation algorithms are very useful in many fields, especially medical imaging. This paper presents a new approach to image segmentation that applies the k-means algorithm with two-level variable weighting. In image segmentation, clustering algorithms are very popular because they are intuitive and easy to implement. The k-means and fuzzy k-means clustering algorithms are among the most widely used in the literature, and many authors compare their new proposals with the results achieved by k-means and fuzzy k-means. This paper proposes a new clustering algorithm called TW-fuzzy k-means, an automated two-level variable weighting clustering algorithm for segmenting objects. In this algorithm, a variable weight is assigned to each variable on the current partition of the data. The method can be applied to general images and to specific images (e.g., medical and microscopic images). Based on the results obtained, the proposed TW-fuzzy k-means algorithm gives better segmentation performance and visual quality for various types of images compared to several other clustering methods.
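For orientation, the plain k-means baseline that TW-fuzzy k-means builds on can be sketched on 1-D pixel intensities. This is only Lloyd's algorithm, not the proposed variant; the two-level variable weighting and fuzzy memberships are omitted, and the pixel values are invented:

```python
def kmeans_1d(values, centers, iters=10):
    """Plain 1-D k-means (Lloyd's algorithm) on pixel intensities."""
    for _ in range(iters):
        # Assignment step: each value joins its nearest centre.
        clusters = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Update step: each centre moves to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Dark-object pixels and bright-background pixels separate cleanly.
pixels = [10, 12, 11, 200, 198, 205]
print(sorted(kmeans_1d(pixels, [0.0, 255.0])))  # -> [11.0, 201.0]
```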
The document discusses an algorithm called Adaptive Multichannel Component Analysis (AMMCA) for separating image sources from mixtures using adaptively learned dictionaries. It begins by reviewing image denoising using learned dictionaries, then extends this to image separation from single mixtures. The key contribution is applying this approach to separating sources from multichannel mixtures by learning local dictionaries for each source during the separation process. The algorithm is described and simulated results are shown separating two images from a noisy mixture using the learned dictionaries. In conclusion, AMMCA is able to separate sources without prior knowledge of their sparsity domains by fusing dictionary learning into the separation process.
Improving Performance of Back Propagation Learning Algorithm (ijsrd.com)
The standard back-propagation algorithm is one of the most widely used algorithms for training feed-forward neural networks. One major drawback of this algorithm is that it may fall into local minima and suffer from a slow convergence rate. Natural gradient descent, a principled method for nonlinear optimization, is presented and combined with the modified back-propagation algorithm, yielding a new fast multilayer training algorithm. This paper describes a new approach to natural gradient learning in which the number of parameters needed is much smaller than in the standard natural gradient algorithm. The new method exploits the algebraic structure of the parameter space to reduce the space and time complexity of the algorithm and improve its performance.
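The core update that standard back-propagation repeats layer by layer can be shown for a single sigmoid neuron minimising squared error. This is only the plain gradient step; the natural-gradient preconditioning discussed above is not shown, and all values are toy inputs:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(w, b, x, target, lr=0.5):
    """One gradient-descent step for a single sigmoid neuron,
    minimising E = 0.5 * (y - target)^2."""
    y = sigmoid(w * x + b)
    # Error signal: dE/d(net) = (y - target) * y * (1 - y)
    delta = (y - target) * y * (1 - y)
    return w - lr * delta * x, b - lr * delta

w, b = 0.0, 0.0
for _ in range(200):
    w, b = train_step(w, b, 1.0, 1.0)
print(sigmoid(w + b))  # output has moved well toward the target 1.0
```

The slow convergence noted above is visible here: as the output saturates, the factor y(1 - y) shrinks and progress slows, which is exactly what natural-gradient methods try to counteract.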
Colour Image Segmentation Using Soft Rough Fuzzy-C-Means and Multi Class SVM (ijcisjournal)
Color image segmentation algorithms in the literature segment an image on the basis of color, texture, or a fusion of both. In this paper, a color image segmentation algorithm is proposed that extracts both texture and color features and applies them to a One-Against-All Multi Class Support Vector Machine classifier for segmentation. A novel Power Law Descriptor (PLD) is used for extracting the textural features, and a homogeneity model is used for obtaining the color features. The Multi Class SVM is trained using samples obtained from Soft Rough Fuzzy-C-Means (SRFCM) clustering. Fuzzy-set-based membership functions capably handle the problem of overlapping clusters, while the lower and upper approximation concepts of rough sets deal well with uncertainty, vagueness, and incompleteness in data; soft set theory, in turn, requires no parameterization tools. The strengths of soft sets, rough sets, and fuzzy sets are combined in the proposed algorithm to achieve improved segmentation performance. The Power Law Descriptor used for texture feature extraction has the advantage of operating in the spatial domain, thereby reducing computational complexity. The proposed algorithm achieved performance comparable to or better than state-of-the-art algorithms found in the literature.
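The one-against-all decision rule at the heart of this classifier is simple to sketch: run every binary scorer and keep the most confident class. The linear scorers and class names below are invented stand-ins for trained binary SVMs:

```python
def ova_predict(x, classifiers):
    """One-against-all decision: evaluate every binary scorer on the
    feature vector x and return the label with the highest score."""
    return max(classifiers, key=lambda label: classifiers[label](x))

# Toy linear scorers (w . x + b) standing in for trained binary SVMs;
# the feature vector might hold texture and color features.
classifiers = {
    "sky":   lambda x: 1.0 * x[0] - 0.5 * x[1] - 0.2,
    "grass": lambda x: -0.5 * x[0] + 1.0 * x[1] - 0.2,
}
print(ova_predict([0.9, 0.1], classifiers))  # -> sky
```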
Semantic Video Segmentation with Using Ensemble of Particular Classifiers and... (ITIIIndustries)
A new approach based on a deep neural network and an ensemble of particular classifiers is proposed. It relies on a novel fuzzy-generalization block that combines object classes into semantic groups, each of which corresponds to one or more particular classifiers. As a result of processing, the sequence of frames is converted into an annotation of the event occurring in the video over a certain time interval.
A Novel Approach for Moving Object Detection from Dynamic Background (IJERA Editor)
In computer vision applications, moving object detection is the key technology for intelligent video monitoring systems. The performance of an automated visual surveillance system depends considerably on its ability to detect moving objects in a dynamic environment. Subsequent actions, such as tracking, motion analysis, or object identification, require an accurate extraction of the foreground objects, making moving object detection a crucial part of the system. The aim of this paper is to detect real moving objects against non-stationary background regions (such as the branches and leaves of a tree, or a flag waving in the wind) while limiting false negatives (object pixels that are not detected) as much as possible. In addition, the models of the target objects and their motion are assumed to be unknown, so as to achieve maximum application independence (i.e., the algorithm works without prior training).
1. The document proposes a Modified Fuzzy C-Means (MFCM) algorithm to segment brain tumors in noisy MRI images.
2. The conventional Fuzzy C-Means algorithm is sensitive to noise, so the MFCM adds an adaptive filtering step during segmentation.
3. The MFCM incorporates neighboring pixel membership values to reduce each pixel's resistance to being clustered, improving segmentation in noisy images.
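The baseline that MFCM modifies is the standard fuzzy C-means membership update, which weights each pixel's degree of belonging to every cluster by relative distance to the cluster centres. Only this baseline is sketched below; the MFCM's adaptive filtering and neighbour-membership terms are not shown, and the values are toy intensities:

```python
def fcm_memberships(x, centers, m=2.0):
    """Standard fuzzy C-means membership of one pixel value x in each
    cluster: u_i = 1 / sum_j (d_i / d_j)^(2 / (m - 1))."""
    d = [abs(x - c) for c in centers]
    if 0.0 in d:                      # pixel sits exactly on a centre
        return [1.0 if di == 0 else 0.0 for di in d]
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((d[i] / d[j]) ** p for j in range(len(centers)))
            for i in range(len(centers))]

# A pixel of intensity 60 between centres 50 (tumor) and 200 (background)
# belongs mostly, but not entirely, to the nearer cluster.
u = fcm_memberships(60, [50, 200])
print([round(v, 3) for v in u])
```

Memberships always sum to 1, which is what lets MFCM blend in the memberships of neighbouring pixels to stabilise the clustering under noise.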
Shot Boundary Detection in Video Sequences Using Motion Activities (CSCJournals)
Video segmentation is fundamental to a number of applications related to video retrieval and analysis. To realize content-based video retrieval, the video information should be organized to reflect the structure of the video, and segmenting the video into shots is an important first step. This paper presents a new method of shot boundary detection based on motion activity in the video sequence. The proposed algorithm is tested on various video types, and the experimental results show that it is effective and reliably detects shot boundaries.
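A crude version of the idea can be sketched with a frame-difference motion measure: activity stays low within a shot and spikes at a cut. This toy sketch is not the paper's method; the frames and threshold are invented:

```python
def motion_activity(f1, f2):
    """Mean absolute pixel difference between two frames: a simple
    motion-activity measure."""
    n = sum(len(row) for row in f1)
    return sum(abs(a - b) for r1, r2 in zip(f1, f2)
               for a, b in zip(r1, r2)) / n

def shot_boundaries(frames, threshold):
    """Indices where activity between consecutive frames exceeds the
    threshold, suggesting a shot boundary."""
    return [i for i in range(1, len(frames))
            if motion_activity(frames[i - 1], frames[i]) > threshold]

# Three near-identical dark frames, then an abrupt cut to a bright frame.
dark = [[10, 10], [10, 10]]
bright = [[200, 200], [200, 200]]
print(shot_boundaries([dark, dark, dark, bright], threshold=50))  # -> [3]
```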
This document is a project report submitted by Shubham Jain and Vikas Jain for their course CS676A. The project aims to learn relative attributes associated with face images using the PubFig dataset. Convolutional neural network features and the RankNet model were used to predict attribute rankings. RankNet achieved better performance than RankSVM and GIST features. Zero-shot learning for unseen classes was explored by building probabilistic class models, but performance was poor. Future work could improve the modeling of unseen classes.
A Review of Image Classification Techniques (IRJET Journal)
This document provides a review of various image classification techniques. It begins by defining image classification as the process of assigning pixels to finite classes based on their data values. The techniques can be categorized as supervised or unsupervised. Supervised techniques use training data to define decision boundaries, while unsupervised techniques automatically partition data without labels. Common supervised techniques discussed include parallelepiped, minimum distance, and maximum likelihood classification. Unsupervised techniques include hierarchical and partitioning clustering. The document also explores hard and soft classifiers, and how combinations of techniques can improve accuracy over single methods.
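Minimum distance classification, one of the supervised techniques mentioned, is short enough to sketch end to end: compute a mean feature vector per class from training pixels, then assign each new pixel to the nearest mean. The class names and two-band values below are invented:

```python
def class_means(samples):
    """Mean feature vector per class from labelled training pixels."""
    grouped = {}
    for label, vec in samples:
        grouped.setdefault(label, []).append(vec)
    return {label: [sum(col) / len(vecs) for col in zip(*vecs)]
            for label, vecs in grouped.items()}

def classify(pixel, means):
    """Assign the pixel to the class with the nearest mean
    (squared Euclidean distance)."""
    def d2(m):
        return sum((a - b) ** 2 for a, b in zip(pixel, m))
    return min(means, key=lambda label: d2(means[label]))

# Toy two-band training pixels for two land-cover classes.
train = [("water", [10, 40]), ("water", [14, 44]),
         ("urban", [200, 180]), ("urban", [210, 170])]
means = class_means(train)
print(classify([12, 41], means))  # -> water
```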
Texture Unit Based Approach to Discriminate Manmade Scenes from Natural Scenes (idescitation)
This document summarizes a research paper that proposes a method to discriminate between natural and manmade scenes using texture analysis. It analyzes local texture information in images using a "texture unit matrix" approach. Texture units characterize the texture of a pixel and its neighbors. Texture unit matrices are generated from images and used to form feature vectors. A self-organizing map (SOM) classifier is then used to classify images as natural or manmade based on these feature vectors. The researchers tested their method on databases of "near" scenes within 10 meters and "far" scenes about 500 meters away. Their results found that analyzing the minimum texture unit matrix in a base-5 approach provided the most accurate classification between natural and manmade scenes.
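For reference, the classic base-3 texture unit (He and Wang) can be computed as below; the paper's base-5 variant extends this encoding but follows the same pattern of comparing a pixel with its 8 neighbours:

```python
def texture_unit(img, r, c):
    """Classic base-3 texture unit of an interior pixel: compare it with
    its 8 neighbours (0 = smaller, 1 = equal, 2 = larger) and encode the
    eight ternary digits as one number in [0, 6560]."""
    centre = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    digits = []
    for dr, dc in offsets:
        v = img[r + dr][c + dc]
        digits.append(0 if v < centre else 1 if v == centre else 2)
    return sum(d * 3 ** i for i, d in enumerate(digits))

# A perfectly uniform patch gives all digits equal to 1.
flat = [[5, 5, 5],
        [5, 5, 5],
        [5, 5, 5]]
print(texture_unit(flat, 1, 1))  # -> 3280 (sum of 3^0 .. 3^7)
```

Histograms of these unit numbers over an image form the texture unit matrix from which the feature vectors are built.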
Content Based Image Retrieval Using Gray Level Co-Occurrence Matrix with SVD a... (ijcisjournal)
In this paper, the gray level co-occurrence matrix (GLCM), the gray level co-occurrence matrix with singular value decomposition (SVD), and the local binary pattern (LBP) are presented for content based image retrieval. Based on the feature vector parameters of energy, contrast, and entropy, and on distance metrics such as the Euclidean, Canberra, and Manhattan distances, the retrieval efficiency, precision, and recall of the images are calculated. The retrieval results of the proposed method are tested on the Corel-1k database. The results show a significant improvement in average retrieval rate, average retrieval precision, and recall for the different algorithms (GLCM, GLCM with SVD, LBP with radius one, and LBP with radius two) under the different distance metrics.
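The GLCM and the three texture features named above (energy, contrast, entropy) are compact enough to compute directly. The tiny 3x3 image and the single horizontal offset below are illustration choices:

```python
import math
from collections import Counter

def glcm(img, dr=0, dc=1):
    """Normalised grey-level co-occurrence matrix for offset (dr, dc),
    stored sparsely as {(level_i, level_j): probability}."""
    h, w = len(img), len(img[0])
    pairs = Counter()
    for r in range(h):
        for c in range(w):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < h and 0 <= c2 < w:
                pairs[(img[r][c], img[r2][c2])] += 1
    total = sum(pairs.values())
    return {k: v / total for k, v in pairs.items()}

def features(p):
    """Energy, contrast and entropy of a normalised GLCM."""
    energy = sum(v * v for v in p.values())
    contrast = sum(v * (i - j) ** 2 for (i, j), v in p.items())
    entropy = -sum(v * math.log2(v) for v in p.values() if v > 0)
    return energy, contrast, entropy

img = [[0, 0, 1],
       [0, 0, 1],
       [2, 2, 2]]
print(features(glcm(img)))
```

These three numbers (plus others per offset) form the feature vector that the distance metrics then compare between the query image and the database.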
Offline Character Recognition Using Monte Carlo Method and Neural Network (ijaia)
Human-machine interfaces are constantly improving because of the increasing development of computer tools. Handwritten character recognition has various significant applications, such as form scanning, verification, validation, and cheque reading. Because of the importance of these applications, intensive research in the field of off-line handwritten character recognition is ongoing. The challenge in recognising handwriting lies in human nature: each writer has a unique style in terms of font, contours, etc. This paper presents a novel approach to identifying offline characters, which we call the character divider approach, applicable after the pre-processing stage. We also devise an innovative approach for feature extraction known as vector contour, and discuss the pros and cons, including the limitations, of our approach.
X-TREPAN: A Multi Class Regression and Adapted Extraction of Comprehensible ... (csandit)
The document describes an algorithm called X-TREPAN that extracts decision trees from trained neural networks. X-TREPAN is an enhancement of the TREPAN algorithm that allows it to handle both multi-class classification and multi-class regression problems. It can also analyze generalized feed forward networks. The algorithm was tested on several real-world datasets and was found to generate decision trees with good classification accuracy while also maintaining comprehensibility.
TIMMINT MI - Social Protection Weekly Report, Issue 2014-18 (The TIMMINT Group)
The document provides a weekly report on global social protection issues from April 28 to May 4, 2014. It discusses key topics like social insurance programs in the US and Australia potentially raising the retirement age to 70. It also mentions an ambitious Indian pension scheme for low-income workers abroad that is gaining momentum, with 70 new enrollments in March and April 2014, mostly from the UAE. The report aims to foster discussion on pressing social protection issues and help readers stay informed on the latest developments in the industry through data, analyses and opportunities to provide feedback.
The document discusses the procedure for enforcing foreign arbitration awards in Indian courts. It notes that foreign awards must first be recognized by an Indian court as a foreign decree before they can be enforced. The court examines the award to ensure it is valid and not against public policy. Once recognized, the foreign award has the same status as an Indian court decree and can be enforced through the lengthy and complex execution procedures under the Code of Civil Procedure. The document argues that streamlining the execution process could help reduce delays and better achieve arbitration's goal of efficient dispute resolution.
11.digital image processing for camera application in mobile devices using ar...Alexander Decker
This document discusses using artificial neural networks for digital image processing on mobile devices. It proposes training a neural network using sample input and output images to generate a "function matrix" that can then process other images in real-time on mobile devices. The neural network has 9 input nodes, 9 hidden nodes, and 9 output nodes arranged to process 3x3 pixel sections of images. It is trained using backpropagation to modify weights to match sample output images. This allows mobile devices to perform effects like edge detection without large pre-defined processing matrices.
Review of Diverse Techniques Used for Effective Fractal Image CompressionIRJET Journal
This document reviews different techniques for fractal image compression to enhance compression ratio while maintaining image quality. It discusses algorithms like quadtree partitioning with Huffman coding (QPHC), discrete cosine transform based fractal image compression (DCT-FIC), discrete wavelet transform based fractal image compression (DWTFIC), and Grover's quantum search algorithm based fractal image compression (QAFIC). The document also analyzes works applying these techniques and concludes that combining QAFIC with the tiny block size processing algorithm may further improve compression ratio with minimal quality loss.
$$ Formulating semantic image annotation as a supervised learning problemmhmt82
This document discusses different approaches to automatically annotating images with semantic keywords:
1) Supervised one-vs-all (OVA) labeling trains independent binary classifiers for each keyword. It requires estimating densities for images both with and without the keyword, which is computationally expensive.
2) Unsupervised labeling models images as mixtures of hidden states, each defining a joint distribution over keywords and image features. It provides a natural ranking of keywords but the annotations are not optimal for recognition.
3) The proposed supervised multi-class labeling approach directly models keywords as image classes, without requiring prior image segmentation. It combines advantages of the unsupervised and supervised OVA approaches, having lower computational complexity while producing recognition-optimal
Image Segmentation Using Two Weighted Variable Fuzzy K MeansEditor IJCATR
Image segmentation is the first step in image analysis and pattern recognition. Image segmentation is the process of dividing an image into different regions such that each region is homogeneous. The accurate and effective algorithm for segmenting image is very useful in many fields, especially in medical image. This paper presents a new approach for image segmentation by applying k-means algorithm with two level variable weighting. In image segmentation, clustering algorithms are very popular as they are intuitive and are also easy to implement. The K-means and Fuzzy k-means clustering algorithm is one of the most widely used algorithms in the literature, and many authors successfully compare their new proposal with the results achieved by the k-Means and Fuzzy k-Means. This paper proposes a new clustering algorithm called TW-fuzzy k-means, an automated two-level variable weighting clustering algorithm for segmenting object. In this algorithm, a variable weight is also assigned to each variable on the current partition of data. This could be applied on general images and/or specific images (i.e., medical and microscopic images). The proposed TW-Fuzzy k-means algorithm in terms of providing a better segmentation performance for various type of images. Based on the results obtained, the proposed algorithm gives better visual quality as compared to several other clustering methods.
The document discusses an algorithm called Adaptive Multichannel Component Analysis (AMMCA) for separating image sources from mixtures using adaptively learned dictionaries. It begins by reviewing image denoising using learned dictionaries, then extends this to image separation from single mixtures. The key contribution is applying this approach to separating sources from multichannel mixtures by learning local dictionaries for each source during the separation process. The algorithm is described and simulated results are shown separating two images from a noisy mixture using the learned dictionaries. In conclusion, AMMCA is able to separate sources without prior knowledge of their sparsity domains by fusing dictionary learning into the separation process.
Improving Performance of Back Propagation Learning Algorithm (ijsrd.com)
The standard back-propagation algorithm is one of the most widely used algorithms for training feed-forward neural networks. One major drawback of this algorithm is that it may fall into local minima and suffers from a slow convergence rate. Natural gradient descent, a principled method for nonlinear optimization, is presented and combined with the modified back-propagation algorithm, yielding a new fast training algorithm for multilayer networks. This paper describes a new approach to natural gradient learning in which the number of parameters necessary is much smaller than in the natural gradient algorithm. This new method exploits the algebraic structure of the parameter space to reduce the space and time complexity of the algorithm and improve its performance.
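For reference, the plain gradient-descent update that both standard back-propagation and its natural-gradient refinements build on can be sketched for a single sigmoid unit; this is a toy illustration, not the paper's multilayer algorithm.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, lr=0.5, epochs=2000):
    """One sigmoid unit trained by plain gradient descent on squared error:
    the update w <- w - lr * dE/dw that natural-gradient methods refine."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = sigmoid(w * x + b)
            delta = (y - t) * y * (1.0 - y)   # dE/dnet for E = (y - t)^2 / 2
            w -= lr * delta * x
            b -= lr * delta
    return w, b

# Toy task: output 1 for positive inputs, 0 for negative ones.
data = [(-2.0, 0.0), (-1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]
w, b = train(data)
preds = [round(sigmoid(w * x + b)) for x, _ in data]
print(preds)  # [0, 0, 1, 1]
```

The `y * (1 - y)` factor is what makes plain back-propagation slow near saturation, which is exactly the regime the natural-gradient variant is designed to handle better.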
Colour Image Segmentation Using Soft Rough Fuzzy-C-Means and Multi Class SVM (ijcisjournal)
Color image segmentation algorithms in the literature segment an image on the basis of color, texture, and
also as a fusion of both color and texture. In this paper, a color image segmentation algorithm is proposed
by extracting both texture and color features and applying them to the One-Against-All Multi Class Support
Vector Machine classifier for segmentation. A novel Power Law Descriptor (PLD) is used for extracting
the textural features and homogeneity model is used for obtaining the color features. The Multi Class SVM
is trained using the samples obtained from Soft Rough Fuzzy-C-Means (SRFCM) clustering. Fuzzy set
based membership functions capably handle the problem of overlapping clusters. The lower and upper
approximation concepts of rough sets deal well with uncertainty, vagueness, and incompleteness in data.
Parameterization tools are not a prerequisite in defining Soft set theory. The goodness aspects of soft sets,
rough sets and fuzzy sets are incorporated in the proposed algorithm to achieve improved segmentation
performance. The Power Law Descriptor used for texture feature extraction has the advantage of operating in the spatial domain, thereby reducing computational complexity. The proposed algorithm achieved performance comparable to or better than the state-of-the-art algorithms found in the literature.
Semantic Video Segmentation with Using Ensemble of Particular Classifiers and... (ITIIIndustries)
A new approach based on a deep neural network and an ensemble of particular classifiers is proposed. It relies on a novel fuzzy generalization block that combines object classes into semantic groups, each of which corresponds to one or more particular classifiers. As a result of processing, the sequence of frames is converted into an annotation of the event occurring in the video over a certain time interval.
A Novel Approach for Moving Object Detection from Dynamic Background (IJERA Editor)
In computer vision applications, moving object detection is a key technology for intelligent video monitoring systems. The performance of an automated visual surveillance system depends considerably on its ability to detect moving objects in a dynamic environment. A subsequent action, such as tracking, analyzing the motion, or identifying objects, requires an accurate extraction of the foreground objects, making moving object detection a crucial part of the system. The aim of this paper is to detect real moving objects against a non-stationary background (such as the branches and leaves of a tree, or a flag waving in the wind), limiting false negatives (object pixels that are not detected) as much as possible. In addition, the models of the target objects and their motion are assumed to be unknown, so as to achieve maximum application independence (i.e., the algorithm works without prior training).
1. The document proposes a Modified Fuzzy C-Means (MFCM) algorithm to segment brain tumors in noisy MRI images.
2. The conventional Fuzzy C-Means algorithm is sensitive to noise, so the MFCM adds an adaptive filtering step during segmentation.
3. The MFCM incorporates neighboring pixels' membership values to make each pixel's clustering less sensitive to noise, improving segmentation in noisy images.
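The flavor of the neighborhood idea can be sketched in one dimension: compute standard FCM memberships, then blend each pixel's memberships with its neighbors' average. The blending factor `alpha` is a hypothetical stand-in for the paper's adaptive term, not the actual MFCM update.

```python
def fcm_memberships(pixels, centers, m=2.0):
    """Standard FCM membership: u_ik proportional to |x - c_k|^(-2/(m-1))."""
    U = []
    for x in pixels:
        d = [max(abs(x - c), 1e-9) for c in centers]
        inv = [dd ** (-2.0 / (m - 1.0)) for dd in d]
        s = sum(inv)
        U.append([v / s for v in inv])
    return U

def smooth_with_neighbors(U, alpha=0.5):
    """Blend each pixel's memberships with its 1-D neighbors' average:
    the neighborhood influence that makes clustering noise-tolerant."""
    out = []
    for i, u in enumerate(U):
        nbrs = [U[j] for j in (i - 1, i + 1) if 0 <= j < len(U)]
        avg = [sum(col) / len(nbrs) for col in zip(*nbrs)]
        out.append([(1 - alpha) * a + alpha * b for a, b in zip(u, avg)])
    return out

# A noisy pixel (value 9) inside a run of dark pixels; its neighbors pull
# its membership back toward the dark cluster.
pixels = [1.0, 1.2, 9.0, 0.9, 1.1]
centers = [1.0, 10.0]
U = fcm_memberships(pixels, centers)
V = smooth_with_neighbors(U)
print(round(U[2][1], 2), round(V[2][1], 2))  # 0.98 0.49
```

Before smoothing, the outlier pixel belongs almost entirely to the bright cluster; after blending with its dark neighbors, that membership drops by half.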
Shot Boundary Detection in Video Sequences Using Motion Activities (CSCJournals)
Video segmentation is fundamental to a number of applications related to video retrieval and analysis. To realize content-based video retrieval, the video information should be organized to elaborate the structure of the video, and segmenting the video into shots is an important first step. This paper presents a new method of shot boundary detection based on motion activities in the video sequence. The proposed algorithm is tested on various video types, and the experimental results show that it effectively and reliably detects shot boundaries.
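A common baseline for such detectors (not necessarily the paper's motion-activity measure) flags a hard cut wherever the histogram difference between consecutive frames spikes:

```python
def histogram(frame, bins=4, max_val=256):
    """Coarse intensity histogram of a frame given as a flat pixel list."""
    h = [0] * bins
    for v in frame:
        h[v * bins // max_val] += 1
    return h

def shot_boundaries(frames, threshold):
    """Flag a boundary between frames whose histogram L1 distance
    exceeds the threshold -- a standard hard-cut detector baseline."""
    cuts = []
    for i in range(1, len(frames)):
        h1, h2 = histogram(frames[i - 1]), histogram(frames[i])
        diff = sum(abs(a - b) for a, b in zip(h1, h2))
        if diff > threshold:
            cuts.append(i)
    return cuts

# Three dark frames, then an abrupt cut to bright frames.
dark = [10, 20, 30, 15]
bright = [200, 220, 210, 230]
frames = [dark, dark, dark, bright, bright]
print(shot_boundaries(frames, threshold=4))  # [3]
```

Motion-activity methods such as the one summarized above refine this idea by measuring how much movement occurs between frames rather than only the global intensity distribution.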
This document is a project report submitted by Shubham Jain and Vikas Jain for their course CS676A. The project aims to learn relative attributes associated with face images using the PubFig dataset. Convolutional neural network features and the RankNet model were used to predict attribute rankings. RankNet achieved better performance than RankSVM and GIST features. Zero-shot learning for unseen classes was explored by building probabilistic class models, but performance was poor. Future work could improve the modeling of unseen classes.
A Review of Image Classification Techniques (IRJET Journal)
This document provides a review of various image classification techniques. It begins by defining image classification as the process of assigning pixels to a finite set of classes based on their data values. The techniques can be categorized as supervised or unsupervised. Supervised techniques use training data to define decision boundaries, while unsupervised techniques automatically partition data without labels. Common supervised techniques discussed include parallelepiped, minimum distance, and maximum likelihood classification. Unsupervised techniques include hierarchical and partitioning clustering. The document also explores hard and soft classifiers, and how combinations of techniques can improve accuracy over single methods.
Texture Unit based Approach to Discriminate Manmade Scenes from Natural Scenes (idescitation)
This document summarizes a research paper that proposes a method to discriminate between natural and manmade scenes using texture analysis. It analyzes local texture information in images using a "texture unit matrix" approach. Texture units characterize the texture of a pixel and its neighbors; texture unit matrices are generated from images and used to form feature vectors. A self-organizing map (SOM) classifier is then used to classify images as natural or manmade based on these feature vectors. The researchers tested their method on databases of "near" scenes within 10 meters and "far" scenes about 500 meters away. Their results found that analyzing the minimum texture unit matrix with a base-5 approach provided the most accurate classification between natural and manmade scenes.
Content Based Image Retrieval Using Gray Level Co-Occurrence Matrix with SVD a... (ijcisjournal)
In this paper, the gray level co-occurrence matrix, the gray level co-occurrence matrix with singular value decomposition, and the local binary pattern are presented for content based image retrieval. Based upon the feature vector parameters of energy, contrast, and entropy, and distance metrics such as the Euclidean, Canberra, and Manhattan distances, the retrieval efficiency, precision, and recall of the images are calculated. The retrieval results of the proposed method are tested on the Corel-1k database. The investigated results show a significant improvement in terms of average retrieval rate, average retrieval precision, and recall for the different algorithms (GLCM; GLCM & SVD; LBP with radius one; LBP with radius two) based on the different distance metrics.
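The three distance metrics named above are straightforward to state; a small sketch for comparing feature vectors:

```python
def euclidean(a, b):
    """Root of summed squared differences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def manhattan(a, b):
    """Sum of absolute differences (city-block distance)."""
    return sum(abs(x - y) for x, y in zip(a, b))

def canberra(a, b):
    """Each term is normalised by the component magnitudes, so the
    metric is especially sensitive to differences near zero."""
    return sum(abs(x - y) / (abs(x) + abs(y))
               for x, y in zip(a, b) if abs(x) + abs(y) > 0)

# Hypothetical 3-component feature vectors (e.g. energy, contrast, entropy).
query = [1.0, 2.0, 3.0]
candidate = [2.0, 2.0, 1.0]
print(round(euclidean(query, candidate), 4))  # 2.2361
print(manhattan(query, candidate))            # 3.0
print(round(canberra(query, candidate), 4))   # 0.8333
```

In a retrieval system, database images are ranked by one of these distances between their stored feature vector and the query's.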
Offline Character Recognition Using Monte Carlo Method and Neural Network (ijaia)
Human-machine interfaces are constantly improving because of the increasing development of computer tools. Handwritten character recognition has various significant applications, such as form scanning, verification, validation, and check reading. Because of the importance of these applications, intense research in the field of off-line handwritten character recognition is ongoing. The challenge in recognising handwriting lies in human nature: each writer has a unique style in terms of font, contours, etc. This paper presents a novel approach to identifying offline characters, which we call the character divider approach, applied after the pre-processing stage. We also devise an innovative approach for feature extraction known as the vector contour, and discuss the pros, cons, and limitations of our approach.
X-TREPAN: A Multi Class Regression and Adapted Extraction of Comprehensible ... (csandit)
The document describes an algorithm called X-TREPAN that extracts decision trees from trained neural networks. X-TREPAN is an enhancement of the TREPAN algorithm that allows it to handle both multi-class classification and multi-class regression problems. It can also analyze generalized feed forward networks. The algorithm was tested on several real-world datasets and was found to generate decision trees with good classification accuracy while also maintaining comprehensibility.
Based on the Cross Section Contour Surface Model Reconstruction (IJRES Journal)
Reverse engineering modeling technology can effectively restore the original design intent of products, shorten the development period of industrial products, and improve the product innovation design capability of industrial fields, which is of major significance and value. This paper uses STL Viewer software to obtain the section curves for reconstruction and obtains a high-quality reconstruction model: first, the cross-section data are preprocessed and initially segmented; second, constraints are added for section curve reconstruction; finally, a CAD model is generated. Examples prove that this method is feasible and effective.
In this paper, person identification is done based on sets of facial images. Each facial image is treated as a scattered point of a logistic regression, and the vertical distance between the scattered point and the regression line is one parameter used to determine whether an image is of the same person or not. The ratio of the Euclidean distances (in terms of the number of pixels of the gray scale image, measured with the 'imtool' of Matlab 13.0) between nasal and eye points is determined, and the variance of this ratio is a second identification parameter. The concept is combined with the ghost image of Principal Component Analysis, where the mean square error and the signal to noise ratio (SNR) in dB are the detection parameters. The combination of the three methods enhances the accuracy compared to any individual one.
This document discusses single object tracking and velocity determination. It begins with an introduction and objectives of the project which is to develop an algorithm for tracking a single object and determining its velocity in a sequence of video frames. It then provides details on preprocessing techniques like mean filtering, Gaussian smoothing and median filtering to reduce noise. It describes segmentation methods including histogram-based, single Gaussian background and frame difference approaches. Feature extraction methods like edges, bounding boxes and color are explained. Object detection using optical flow and block matching is covered. Finally, it discusses tracking and calculating velocity of the moving object. MATLAB is introduced as a technical computing language for solving these types of problems.
This document proposes a method for change detection in images that combines Change Vector Analysis, K-Means clustering, Otsu thresholding, and mathematical morphology. It involves detecting intensity changes using CVA, segmenting the difference image using K-Means, calculating a threshold with Otsu's method, applying the threshold and morphological operations, and comparing results to other change detection techniques. Experimental results on medical and other images show the proposed method achieves satisfactory change detection with fewer errors compared to other methods.
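The Otsu step mentioned above can be sketched directly: pick the threshold on the difference image that maximizes the between-class variance. This is the standard formulation; the 256-level histogram is an assumption about the pixel range.

```python
def otsu_threshold(pixels, levels=256):
    """Pick the threshold that maximises between-class variance,
    as applied to a change-detection difference image."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(levels):
        w0 += hist[t]            # weight of class 0 (values <= t)
        if w0 == 0:
            continue
        w1 = total - w0          # weight of class 1 (values > t)
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance (scaled)
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal difference image: background around 10, changed pixels around 200.
pixels = [8, 10, 12, 9, 11, 198, 200, 202, 199]
t = otsu_threshold(pixels)
print(t, sum(p > t for p in pixels))  # 12 4
```

The returned threshold cleanly separates the unchanged pixels from the four changed ones, which the morphological operations then clean up.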
This document proposes using machine learning techniques to predict COVID-19 infections based on chest x-ray images. Specifically, it involves using discrete wavelet transform to extract space-frequency features from chest x-rays, reducing the dimensionality of features using Shannon entropy, and then training standard machine learning classifiers like logistic regression, support vector machine, decision tree, and convolutional neural network on the extracted features to classify images as COVID-19 positive or negative. The document provides background on the proposed techniques of discrete wavelet transform, entropy, and various machine learning models.
Data-Driven Motion Estimation With Spatial Adaptation (CSCJournals)
The pel-recursive computation of 2-D optical flow raises a wealth of issues, such as the treatment of outliers, motion discontinuities and occlusion. Our proposed approach deals with these issues within a common framework. It relies on the use of a data-driven technique called Generalised Cross Validation to estimate the best regularisation scheme for a given pixel. In our model, the regularisation parameter is a general matrix whose entries can account for different sources of error. The motion vector estimation takes into consideration local image properties following a spatially adaptive approach where each moving pixel is supposed to have its own regularisation matrix. Preliminary experiments indicate that this approach provides robust estimates of the optical flow.
This document presents a study on medial axis transformation (MAT) based skeletonization of image patterns using image processing techniques. It discusses how the MAT of an image can be extracted by first computing the Euclidean distance transform of the binary image. Local maxima in the distance transform image correspond to the MAT. Several performance evaluation metrics for analyzing skeletonized images are also introduced, such as connectivity number, thinness measurement and sensitivity. The technique is demonstrated on sample images and results show it can effectively extract the skeleton with good computational speed.
This document describes a content-based image retrieval system that uses 2-D discrete wavelet transform with texture features. It proposes using DWT to reduce image dimensions before extracting texture features from images using gray level co-occurrence matrix. Texture features and Euclidean distance are then used to retrieve similar images from a database. The system is tested on a dataset of 1000 images from 10 classes and achieves an average retrieval accuracy of 89.8%.
Content Based Image Retrieval Using 2-D Discrete Wavelet Transform (IOSR Journals)
This document proposes a content-based image retrieval system using 2D discrete wavelet transform and texture features. The system decomposes images using 2D DWT, extracts texture features from low frequency coefficients using GLCM, and retrieves similar images by calculating Euclidean distances between feature vectors. Experimental results on Wang's database show the proposed approach achieves 89.8% average retrieval accuracy.
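A minimal GLCM sketch for a single horizontal offset, with the energy and contrast features often extracted from it; this is illustrative only (the system above applies GLCM to DWT low-frequency coefficients, not raw pixels).

```python
def glcm(image, levels):
    """Gray level co-occurrence matrix for the horizontal neighbour
    offset (0, 1), without symmetry or normalisation."""
    M = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            M[a][b] += 1
    return M

def texture_features(M):
    """Energy and contrast of a co-occurrence matrix."""
    total = sum(sum(r) for r in M) or 1
    P = [[v / total for v in r] for r in M]
    energy = sum(v * v for r in P for v in r)
    contrast = sum(((i - j) ** 2) * P[i][j]
                   for i in range(len(P)) for j in range(len(P)))
    return energy, contrast

# Tiny 2-level images: constant rows vs. a varied pattern.
flat = [[0, 0, 0], [1, 1, 1]]
mixed = [[0, 1, 1], [1, 0, 0]]
print(texture_features(glcm(flat, 2)))   # (0.5, 0.0): uniform, zero contrast
print(texture_features(glcm(mixed, 2)))  # (0.25, 0.5): more varied texture
```

Feature vectors built from such statistics are then compared with a Euclidean distance to rank database images against a query.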
Segmentation and Classification of MRI Brain Tumor (IRJET Journal)
This document presents a study comparing two techniques for detecting brain tumors in MRI images: level set segmentation and K-means segmentation. Features are extracted from the segmented tumors using discrete wavelet transform and gray level co-occurrence matrix. The features are then classified as benign or malignant using a support vector machine. The level set method and K-means method are evaluated based on accuracy, sensitivity, and specificity on a dataset of 41 MRI brain images. The level set method achieved slightly higher accuracy of 94.12% compared to the K-means method.
FACE RECOGNITION USING DIFFERENT LOCAL FEATURES WITH DIFFERENT DISTANCE TECHN... (IJCSEIT Journal)
A face recognition system using different local features with different distance measures is proposed in this paper. The proposed method is fast and gives accurate detection. The feature vector is based on the eigenvalues, eigenvectors, and diagonal vectors of sub-images. Images are partitioned into sub-images to detect local features, and the sub-partitions are rearranged into vertical and horizontal matrices. Eigenvalues, eigenvectors, and diagonal vectors are computed for these matrices, and a global feature vector is generated for face recognition. Experiments are performed on the benchmark YALE face database. Results indicate that the proposed method gives better recognition performance in terms of average recognition rate and retrieval time compared to the existing methods.
WAVELET BASED AUTHENTICATION/SECRET TRANSMISSION THROUGH IMAGE RESIZING (WASTIR) (sipij)
This document presents a wavelet-based steganographic technique called WASTIR for secret message/image transmission or image authentication. The technique embeds a secret image into a cover image through discrete wavelet transformation and resizing. Specifically, it first converts the secret image pixels from decimal to quaternary representation. It then performs inverse DWT on the cover image to generate sub-images, and embeds the quaternary digits in the sub-image coefficients using a hash function. Experimental results on standard test images show the technique achieves better performance than other existing steganographic techniques in terms of mean square error, peak signal-to-noise ratio and image fidelity.
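The decimal-to-quaternary step mentioned above is a simple base conversion; an 8-bit pixel value becomes four base-4 digits, which are then distributed across coefficients.

```python
def to_quaternary(value, digits=4):
    """Convert an 8-bit pixel value to fixed-width base-4 digits,
    the representation embedded into the wavelet coefficients."""
    out = []
    for _ in range(digits):
        out.append(value % 4)
        value //= 4
    return out[::-1]  # most-significant digit first

def from_quaternary(q):
    """Inverse conversion used when the hidden image is extracted."""
    v = 0
    for d in q:
        v = v * 4 + d
    return v

pixel = 201
q = to_quaternary(pixel)
print(q, from_quaternary(q) == pixel)  # [3, 0, 2, 1] True
```

Because each digit only ranges over 0-3, embedding one digit per coefficient perturbs the cover image very little, which is what keeps the PSNR high.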
MULTIFOCUS IMAGE FUSION USING MULTIRESOLUTION APPROACH WITH BILATERAL GRADIEN... (cscpconf)
The fusion of two or more images is required when images are captured using different sensors, different modalities, or different camera settings, to produce an image that is more suitable for computer processing and human visual perception. The optical lenses in cameras have a limited depth of focus, so it is not possible to acquire an image in which all objects are in focus. In this case a multifocus image fusion technique is needed to create a single image where all objects are in focus, by combining the relevant information in two or more images. As sharp images contain more information than blurred images, image sharpness is taken as the relevant information in framing the fusion rule. Many existing algorithms use contrast or high local energy as a measure of local sharpness (relevant information); in practice, particularly in multimodal image fusion, this assumption is not true. In this paper we propose a method that combines a multiresolution transform and a local phase coherence measure to measure the sharpness of the images. The performance of the fusion process was evaluated with mutual information, edge association, and spatial frequency as quality metrics, and compared with the Laplacian pyramid, DWT (Discrete Wavelet Transform), and bilateral gradient based sharpness criterion methods. The results showed that the proposed algorithm performs better than the existing ones.
A Novel Super Resolution Algorithm Using Interpolation and LWT Based Denoisin... (CSCJournals)
Image capturing techniques have some limitations, and because of them we often get low resolution (LR) images. Super Resolution (SR) is a process by which we can generate a High Resolution (HR) image from one or more LR images. Here we propose an SR algorithm that takes three shifted and noisy LR images and generates an HR image using a Lifting Wavelet Transform (LWT) based denoising method and an edge-guided interpolation algorithm based on directional filtering and data fusion.
Object Shape Representation by Kernel Density Feature Points Estimator (cscpconf)
This paper introduces an object shape representation using the Kernel Density Feature Points Estimator (KDFPE). In this method we obtain the density of feature points within defined rings around the centroid of the image; the Kernel Density Feature Points Estimator is then applied to the resulting vector. KDFPE is invariant to translation, scale, and rotation. This method of image representation shows an improved retrieval rate when compared to the Density Histogram of Feature Points (DHFP) method. An analytic analysis is done to justify our method, and we compare our results with the DHFP representation to demonstrate its robustness.
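The ring-density idea can be sketched directly: count the fraction of feature points falling in each ring around the centroid. Using only radial distances makes the descriptor rotation invariant by construction; scale invariance would additionally require normalizing the radii, which this sketch omits.

```python
def ring_density(points, centroid, radii):
    """Fraction of feature points falling in each ring
    [r_{i-1}, r_i) around the centroid."""
    cx, cy = centroid
    counts = [0] * len(radii)
    for x, y in points:
        d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
        prev = 0.0
        for i, r in enumerate(radii):
            if prev <= d < r:
                counts[i] += 1
                break
            prev = r
    n = len(points) or 1
    return [c / n for c in counts]

# Four feature points near the centre, four further out.
pts = [(1, 0), (0, 1), (-1, 0), (0, -1), (5, 0), (0, 5), (-5, 0), (0, -5)]
print(ring_density(pts, centroid=(0, 0), radii=[2.0, 6.0]))  # [0.5, 0.5]
```

In KDFPE, a kernel density estimator is then applied to this vector rather than using the raw histogram as in DHFP.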
Improved Nonlocal Means Based on Pre-classification and Invariant Block Matching (IAEME Publication)
One of the most popular image denoising methods based on self-similarity is called nonlocal means (NLM). Though it can achieve remarkable performance, this method has a few shortcomings, e.g., the computationally expensive calculation of the similarity measure and the lack of reliable candidates for some non-repetitive patches. In this paper, we propose to improve NLM by integrating Gaussian blur, clustering, and row image weighted averaging into the NLM framework. Experimental results show that the proposed technique denoises better than the original NLM, both quantitatively and visually, especially when the noise level is high.
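The core NLM weighted average can be sketched on a 1-D signal; the paper's additions (Gaussian blur pre-processing, clustering, row image averaging) are omitted, and `patch` and `h` are illustrative parameters.

```python
import math

def nlm_1d(signal, patch=1, h=20.0):
    """Nonlocal means on a 1-D signal: every sample is replaced by a
    weighted average of all samples whose surrounding patches look alike."""
    n = len(signal)
    out = []
    for i in range(n):
        wsum, acc = 0.0, 0.0
        for j in range(n):
            # Squared distance between the patches around i and j
            # (edges handled by clamping indices).
            d = 0.0
            for k in range(-patch, patch + 1):
                a = signal[min(max(i + k, 0), n - 1)]
                b = signal[min(max(j + k, 0), n - 1)]
                d += (a - b) ** 2
            w = math.exp(-d / (h * h))
            wsum += w
            acc += w * signal[j]
        out.append(acc / wsum)
    return out

# A step edge with one noisy sample; samples with similar patches pull
# the outlier back toward its flat neighbourhood.
noisy = [10.0, 10.0, 30.0, 10.0, 10.0, 100.0, 100.0, 100.0]
den = nlm_1d(noisy)
print(round(den[2], 1))  # noticeably below the noisy value 30
```

The O(n^2) patch comparison in this sketch is exactly the cost the pre-classification step is designed to cut: clustering restricts the average to patches in the same group.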
Improved Nonlocal Means Based on Pre-classification and Invariant Block Matching (IAEME Publication)
This document summarizes an article that proposes improvements to the nonlocal means (NLM) image denoising algorithm. The proposed method first applies Gaussian blurring to pre-process the noisy image. It then extracts features from image patches using Hu's moment invariants. Next, it performs k-means clustering on the feature vectors to group similar patches. Finally, it applies row image weighted averaging to reconstruct the image. The experimental results showed this method can perform better denoising than the original NLM, especially at higher noise levels, by providing more reliable candidate patches for the weighted averaging.
1) The document presents a novel approach for pose estimation, normalization, and face recognition using multi-class SVM, affine transformation, and DCT.
2) The training section uses affine transformation for pose normalization and DCT for illumination removal. During testing, multi-class SVM estimates pose and affine transformation and DCT are applied to normalize pose and remove illumination before recognition.
3) PCA is used to extract features for matching during recognition. Experimental results on FERET datasets show the approach achieves 100% recognition rate after pose normalization and illumination removal.
This document describes a machine learning project that uses support vector machine (SVM) and k-nearest neighbors (k-NN) classifiers to segment gesture phases. The project aims to classify frames of movement data into five gesture phases (rest, preparation, stroke, hold, retraction) using the two classifiers. The SVM approach, with a radial basis function (RBF) kernel, achieved 53.27% accuracy on test data, while the k-NN approach achieved a significantly higher accuracy of 92.53%. The document provides details on the dataset, feature extraction methods, model selection process, and results of applying each classifier to the test data.
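The better-performing k-NN classifier can be sketched in a few lines, with toy 2-D features standing in for the gesture dataset's real ones:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify by majority vote among the k nearest training samples
    (squared Euclidean distance), as used on the gesture-phase frames."""
    nearest = sorted(train,
                     key=lambda s: sum((a - b) ** 2
                                       for a, b in zip(s[0], query)))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D features for two of the five gesture phases.
train = [((0.0, 0.0), "rest"), ((0.1, 0.2), "rest"), ((0.2, 0.1), "rest"),
         ((1.0, 1.0), "stroke"), ((1.1, 0.9), "stroke"), ((0.9, 1.1), "stroke")]
print(knn_predict(train, (0.15, 0.1)))  # rest
print(knn_predict(train, (1.05, 1.0)))  # stroke
```

k-NN needs no training phase at all, which partly explains why it can outperform an under-tuned RBF SVM on a dataset like this.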
Similar to Performance Evaluation of Object Tracking Technique Based on Position Vectors (20)
Water Pollution: Water Quality standards, Introduction to Treatment & Disposal of Waste Water. Reuse and Saving of Water, Rain Water Harvesting. Solid Waste Management: Classification of Solid Waste, Collection, Transportation and Disposal of Solid. Recycling of Solid Waste: Energy Recovery, Sanitary Landfill, On-Site Sanitation. Air & Noise Pollution: Primary and Secondary air pollutants, Harmful effects of Air Pollution, Control of Air Pollution. . Noise Pollution Harmful Effects of noise pollution, control of noise pollution, Global warming & Climate Change, Ozone depletion, Greenhouse effect
Text Books:
1. Palancharmy, Basic Civil Engineering, McGraw Hill publishers.
2. Satheesh Gopi, Basic Civil Engineering, Pearson Publishers.
3. Ketki Rangwala Dalal, Essentials of Civil Engineering, Charotar Publishing House.
4. BCP, Surveying volume 1
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: Regeneration and Repair
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing like infection, hyperpigmentation of scar, contractures, and keloid formation.
V Purandhar Reddy, K Thirumala Reddy, Y.B.Dawood, K.M.YaagnaTheja, C.Munesh & M.V.L.Sraavani
International Journal of Image Processing (IJIP), Volume (7) : Issue (2) : 2013
Performance Evaluation of Object Tracking Technique Based on
Position Vectors
V Purandhar Reddy vpreddy4@gmail.com
Associate Professor,
Dept of ECE, S V college of Engineering,
Tirupati, 517507, India.
K Thirumala Reddy kanipakamthirumala@gmail.com
Assistant Professor,
Dept of ECE, S V college of Engineering,
Tirupati, 517507, India.
Y.B.Dawood dawoodyb@gmail.com
Dept of ECE, S V college of Engineering,
Tirupati, 517507, India.
K.M.YaagnaTheja yaagnatheja@gmail.com
Dept of ECE, S V college of Engineering,
Tirupati, 517507, India.
C.Munesh muneshneverstepbacks@gmail.com
Dept of ECE, S V college of Engineering,
Tirupati, 517507, India.
M.V.L.Sraavani sraavani.mvl@gmail.com
Dept of ECE, S V college of Engineering,
Tirupati, 517507, India.
Abstract
In this paper, a novel algorithm for moving object tracking based on position vectors is proposed. The position vector of an object in the first frame of a video is extracted from a selected region of interest. Based on this position vector, the object's possible motion is modelled in nine different directions, giving nine shifted position vectors. With these position vectors the next frame is cropped into nine blocks. We perform block matching of the first frame's block against the nine blocks of the next frame in a simple feature space obtained by the discrete wavelet transform and the dual tree complex wavelet transform. The matched block is taken as the tracked object, and its position vector becomes the reference location for the next successive frame. We describe the algorithm and its performance evaluation in detail and present simulation experiments of object tracking with different feature vectors, which verify the efficiency of the tracking algorithm.
Keywords: Object Tracking, DWT, DTCWT, Feature Vector, Block Matching.
1. INTRODUCTION
Tracking of moving objects in video has attracted a great deal of interest in computer
vision. Object tracking is an indispensable first step for object recognition, navigation systems,
and surveillance systems.
The conventional approach to object tracking is based on the difference between the current
image and the background image. However, algorithms based on the difference image cannot
simultaneously detect still objects. Furthermore, they cannot be applied to the case of a moving
camera. Algorithms that incorporate camera motion information have been proposed previously,
but they still have difficulty separating object information from the background.
In this paper, we propose object tracking in video based on position vectors. Our
algorithm rests on position vector calculation and block matching with feature vectors. The
proposed method uses block matching between successive frames. As a
consequence, the algorithm can simultaneously track multiple moving and still objects in
video.
This paper is organized as follows. The proposed method, consisting of position vector
calculation, feature extraction by different transforms, block matching, and a minimum distance
measure, is described in detail.
2. PROPOSED MOVING OBJECT TRACKING TECHNIQUE
2.1 Extraction of Position Vectors
2.1.1 First Frame position vector extraction
In general, image segmentation and object extraction methods are used to calculate position
vectors. In the proposed approach, we first select the portion of the object to be tracked; this
portion of the image is cropped from the first frame and is referred to as a block.
Based on the coordinate parameters of the block, we extract the position of the pixel Pxmax (Pxmin)
that has the maximum (minimum) x-component:

Pxmax = (Xmax,x, Xmax,y)
Pxmin = (Xmin,x, Xmin,y)

where Xmax,x, Xmax,y, Xmin,x, and Xmin,y are the x and y coordinates of the rightmost and
leftmost boundary points of the block, respectively. In addition, we also extract

Pymax = (Ymax,x, Ymax,y)
Pymin = (Ymin,x, Ymin,y)

Then we calculate the width w and the height h of the block as

wi(t) = Xmax,x − Xmin,x
hi(t) = Ymax,y − Ymin,y

We define the position of the block in the frame as

P = (X1, Y1)
X1(t) = (Xmax,x + Xmin,x)/2
Y1(t) = (Ymax,y + Ymin,y)/2
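As an illustrative sketch (the function and variable names are ours, not the paper's), the width, height, and centre position vector of a cropped block can be computed from its boundary coordinates:

```python
def block_position(x_min, x_max, y_min, y_max):
    """Width w, height h, and centre position vector P = (X1, Y1) of a block.

    x_min/x_max are the x-coordinates of the leftmost/rightmost boundary
    pixels; y_min/y_max are the corresponding y-coordinate extremes.
    """
    w = x_max - x_min            # w(t) = Xmax,x - Xmin,x
    h = y_max - y_min            # h(t) = Ymax,y - Ymin,y
    x1 = (x_max + x_min) // 2    # X1(t) = (Xmax,x + Xmin,x) / 2
    y1 = (y_max + y_min) // 2    # Y1(t) = (Ymax,y + Ymin,y) / 2
    return w, h, (x1, y1)
```

For a block spanning x in [8, 16] and y in [12, 20], this gives w = h = 8 and P = (12, 16).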
2.1.2 Position Vectors in nine different directions
The frame-to-frame movement of an object is small. We therefore consider a shift
of m units in nine different directions, as shown in Figure 1.
FIGURE 1: Movement of Object in Nine Different Directions.
For Direction 1, position vector shift P1= (X1-m,Y1)
For Direction 2, position vector shift P2= (X1+m,Y1)
For Direction 3, position vector shift P3= (X1,Y1+m)
For Direction 4, position vector shift P4= (X1,Y1-m)
For Direction 5, position vector shift P5= (X1-m,Y1-m)
For Direction 6, position vector shift P6= (X1+m,Y1+m)
For Direction 7, position vector shift P7= (X1-m,Y1+m)
For Direction 8, position vector shift P8= (X1+m,Y1-m)
For Direction 9, position vector shift P9= (X1,Y1)
Based on the position vectors P1, P2, P3, P4, P5, P6, P7, P8, and P9, the second frame is cropped into nine
blocks.
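The nine shifted position vectors can be generated as follows (a minimal sketch; the list ordering matches directions 1–9 above):

```python
def shifted_positions(x1, y1, m):
    """Nine candidate position vectors P1..P9 for a shift of m units."""
    return [
        (x1 - m, y1),      # P1: direction 1
        (x1 + m, y1),      # P2: direction 2
        (x1, y1 + m),      # P3: direction 3
        (x1, y1 - m),      # P4: direction 4
        (x1 - m, y1 - m),  # P5: direction 5
        (x1 + m, y1 + m),  # P6: direction 6
        (x1 - m, y1 + m),  # P7: direction 7
        (x1 + m, y1 - m),  # P8: direction 8
        (x1, y1),          # P9: direction 9 (no movement)
    ]
```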
2.2 Feature Vectors Using DWT & DTCWT
For testing the developed algorithm, feature vectors are calculated by two methods: the
Discrete Wavelet Transform (DWT) and the Dual Tree Complex Wavelet Transform (DTCWT).
2.2.1 2D-Discrete Wavelet Transform
A weakness shared by most texture analysis schemes is that the image is analyzed at a
single scale. This limitation can be lifted by employing a multi-scale representation of the textures,
such as the one offered by the wavelet transform. Wavelets have been shown to be useful for
texture analysis in the literature, possibly due to their finite duration, which provides both frequency
and spatial locality. The hierarchical wavelet transform uses a family of wavelet functions and
associated scaling functions to decompose the original signal/image into different sub-bands. The
decomposition process is recursively applied to the sub-bands to generate the next level of the
hierarchy. At every iteration of the DWT, the rows of the input image (obtained at the end of the
previous iteration) are low-pass filtered and high-pass filtered. The rows of the two images
obtained at the output of the two filters are then decimated by a factor of 2.
Next, the columns of the two resulting images are low-pass and high-pass filtered, and the columns of
the four resulting images are likewise decimated by a factor of 2. Four new sub-images representing the
result of the current iteration are generated. The first one, obtained after two low-pass filterings, is
called the approximation sub-image (or LL image). The other three are called detail sub-images:
LH, HL, and HH. The LL image is the input for the next iteration.
In this paper we use a level-2 Haar transform. Only the second-level LL image is used for the
analysis, as it contains most of the information important for feature vector calculation. For the first
frame, the feature vector f1 is computed from the image cropped using position vector P1. For the next
frame, the feature vector set is V = {V1, V2, V3, V4, V5, V6, V7, V8, V9}, based on position vectors
P1, P2, P3, P4, P5, P6, P7, P8, and P9.
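The level-2 LL feature vector can be sketched in plain NumPy, using the fact that the Haar low-pass branch followed by decimation amounts to pairwise averaging (up to a normalisation constant, which does not affect block matching; this is our illustration, not the paper's MATLAB code):

```python
import numpy as np

def haar_ll(img):
    """One Haar DWT level: return only the LL (approximation) sub-image."""
    img = np.asarray(img, dtype=float)
    # Low-pass filter and decimate the rows (pairwise average), then the columns.
    rows = (img[0::2, :] + img[1::2, :]) / 2.0
    return (rows[:, 0::2] + rows[:, 1::2]) / 2.0

def dwt_feature_vector(block):
    """Level-2 LL sub-image of a block, flattened into a feature vector."""
    return haar_ll(haar_ll(block)).ravel()
```

Applied to each cropped block, this yields f1 for the first frame and V1..V9 for the next frame.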
2.2.2 Dual Tree Complex Wavelet Transform
The Dual Tree Complex Wavelet Transform is a recent enhancement of the Discrete
Wavelet Transform with some additional properties. It is an effective method for
implementing an analytic wavelet transform.
The DTCWT computes the complex transform of a signal using two separate DWT decompositions, i.e.,
tree 'a' and tree 'b'. It produces complex coefficients by using a dual tree of wavelet filters,
yielding real and imaginary parts.
The DTCWT has the following properties:
• Approximate shift invariance;
• Good directional selectivity in two dimensions (2-D) with Gabor-like filters (also true for
higher dimensionality, m-D);
• Perfect reconstruction (PR) using short linear-phase filters;
• Limited redundancy, independent of the number of scales (2:1 for 1-D, 2^m:1 for m-D);
• Efficient order-N computation;
• Separation of positive and negative frequencies, generating six sub-bands oriented at
±15°, ±45°, and ±75°.
Different levels of the DTCWT, such as levels 5, 6, and 7, are applied to the cropped blocks.
For performance analysis of the algorithm we use DTCWT coefficients for feature vector
calculation. For the first frame, the feature vector f1 is computed from the image cropped using
position vector P1. For the next frame, the feature vector set is V = {V1, V2, V3, V4, V5, V6, V7, V8, V9},
based on position vectors P1, P2, P3, P4, P5, P6, P7, P8, and P9.
2.3 Block Matching and Distance Measure
Using the feature vector of the first frame and the feature vectors of the next frame, we find the
minimum Euclidean distance between feature vectors. The cropped block of the next frame whose
feature vector is at minimum distance is identified as the match. Repeating this matching procedure
over all frames, we identify the cropped area in each frame one by one and thereby keep track
of the cropped area between frames.
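A minimal sketch of the minimum-distance selection (here with the Manhattan distance used in the algorithm of Section 2.3.1; the Euclidean distance can be substituted):

```python
def best_match(f1, candidates):
    """Index of the feature vector in `candidates` closest to f1."""
    def manhattan(v):
        return sum(abs(a - b) for a, b in zip(f1, v))
    return min(range(len(candidates)), key=lambda i: manhattan(candidates[i]))
```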
2.3.1 Object Tracking Algorithm
1. Input the video.
2. Crop the first frame to the region of interest (ROI).
3. Calculate the position vector of the cropped portion: P = (X1, Y1).
4. From this position vector, calculate nine position vectors with the X and Y coordinates shifted by
m units: P1 = (X1−m, Y1), P2 = (X1+m, Y1), P3 = (X1, Y1+m), P4 = (X1, Y1−m), P5 = (X1−m, Y1−m),
P6 = (X1+m, Y1+m), P7 = (X1−m, Y1+m), P8 = (X1+m, Y1−m), P9 = (X1, Y1).
5. For the previous frame (the Kth frame), apply the feature extraction method to the cropped
block to obtain the feature vector f1.
6. Similarly, perform the DWT (or DTCWT) on all cropped blocks of the next frame, K+1, to obtain
the feature vector set V.
7. Block matching by distance measure:
a) Calculate the Manhattan distance between f1 and each vector in V.
b) Match the Kth frame's cropped image with the minimum-distance block of the (K+1)th
frame. If no match is found, continue with the next successive frame.
c) After matching, discard the position vector data of the Kth frame and store the
position vector of the (K+1)th frame.
d) Increment K by 1.
8. Repeat steps 4 to 7 for the remaining frames.
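The steps above can be sketched as a loop (a hedged illustration: crop_block and the feature function below are our stand-ins for the paper's cropping and DWT/DTCWT feature extraction, not its actual implementation):

```python
import numpy as np

def crop_block(frame, p, w, h):
    """Crop a w-by-h block centred at position vector p = (x, y)."""
    x, y = p
    return frame[y - h // 2: y + h // 2, x - w // 2: x + w // 2]

def track(frames, p, w, h, m, feature):
    """Return one position vector per frame for the block starting at p.

    `feature` maps an image block to a feature vector, e.g. the flattened
    level-2 DWT LL sub-image or DTCWT coefficients.
    """
    shifts = [(-m, 0), (m, 0), (0, m), (0, -m),            # directions 1-4
              (-m, -m), (m, m), (-m, m), (m, -m), (0, 0)]  # directions 5-9
    path = [p]
    for k in range(len(frames) - 1):
        f1 = feature(crop_block(frames[k], p, w, h))       # step 5
        candidates = [(p[0] + dx, p[1] + dy) for dx, dy in shifts]  # step 4
        # Steps 6-7: feature vectors of the nine blocks, Manhattan distances.
        dists = [float(np.abs(f1 - feature(crop_block(frames[k + 1], q, w, h))).sum())
                 for q in candidates]
        p = candidates[int(np.argmin(dists))]              # matched block
        path.append(p)                                     # step 7c: new reference
    return path
```

With a synthetic bright square moving 2 pixels to the right between two frames, the loop recovers the shifted position vector.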
3. IMPLEMENTATION
The proposed object tracking technique using DWT and DTCWT feature vectors is
implemented in MATLAB 7.0.
3.1 Video Data
The object tracking techniques based on DWT and DTCWT are tested on video sequences. To compare
the techniques and check their performance, we use the precision of object tracking and
the tracking speed.
3.2 Precision of Object Tracking
Here we address some features of an efficient object tracking system: accuracy,
stability, and speed. To assess object tracking effectiveness, we use precision as the
statistical comparison parameter for DWT and DTCWT. Precision is defined as the percentage of
frames in which the object is correctly tracked.
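As a small worked example, precision as a percentage of correctly tracked frames (our reading of the metric, consistent with the per-frame values in Table 1, not the paper's printed formula) can be computed as:

```python
def precision(correctly_tracked, total_frames):
    """Percentage of frames in which the object is correctly tracked."""
    return 100.0 * correctly_tracked / total_frames
```

For example, 23 correctly tracked frames out of 25 gives a precision of 92%.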
4. RESULTS AND DISCUSSION
The object tracking methods based on DWT and DTCWT were applied to the video frames. The
precision over 100 frames is 95% for DTCWT and 92% for DWT, showing that
DTCWT outperforms DWT. As far as precision is concerned, DTCWT gives the best performance.
Table 1 shows the actual precision values computed for the considered query frames
using the two techniques discussed in the paper.
FIGURE 2: Tracking Results from Successive Frames Using DTCWT Feature Vectors.
FIGURE 3: Tracking Results from Successive Frames Using DWT Feature Vectors.
Method    Precision after N frames                           Execution time between two frames
          N=5     N=10    N=15    N=20    N=25    N=30
DWT       100%    100%    93.3%   90%     92%     93.3%      0.4 ms
DTCWT     100%    100%    100%    95%     96%     96.66%     0.2 ms
TABLE 1: Percentage Precision Values for Object Tracking by DWT and DTCWT Feature Vectors.
5. CONCLUSION
The performance of an object tracking system depends on its precision and speed. Percentage
precision is taken as the criterion for judging the performance of the object tracking technique.
DTCWT gives the best performance when both precision and speed are considered. Simulation
results on frame sequences with moving objects verify the suitability of the algorithm for reliable
moving object tracking. We have also confirmed that the algorithm works well for more
complicated video, including rotating objects and occlusion of objects. The simulation results
suggest that the proposed algorithm is suitable for real-time applications. In future work we would
like to evaluate the algorithm with other feature vectors.