Trademark image retrieval plays a vital role as part of a CBIR system. A trademark is of great significance because it carries the brand value of a company. To retrieve fake or copied trademarks, we design a retrieval system based on hybrid techniques: two different feature vectors are combined to give a suitable retrieval system. In the proposed system we extract a corner feature, which is applied to an edge-pixel image. This feature is used to retrieve relevant images, and to further refine the result we apply a second feature, the invariant moment feature. From the experimental results we conclude that the system is 85 percent efficient.
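The invariant-moment feature mentioned above is typically built from Hu's moment invariants. A minimal sketch, not the paper's implementation (the shape and function name are illustrative), of the first two Hu invariants computed from normalized central moments of a binary shape:

```python
import numpy as np

def hu_first_two(img):
    """First two Hu moment invariants of a binary image: translation-
    invariant shape descriptors built from normalized central moments."""
    ys, xs = np.nonzero(img)
    m00 = len(xs)                      # zeroth moment = pixel count
    xbar, ybar = xs.mean(), ys.mean()  # centroid

    def eta(p, q):
        # normalized central moment eta_pq = mu_pq / m00^((p+q)/2 + 1)
        return ((xs - xbar) ** p * (ys - ybar) ** q).sum() / m00 ** ((p + q) / 2 + 1)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    phi1 = n20 + n02
    phi2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    return phi1, phi2
```

Because the moments are taken about the centroid, the descriptor is unchanged when the shape is translated, which is what makes it useful for matching a trademark against shifted copies.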
Performance Evaluation of Ontology and Fuzzy-Base CBIR (acijjournal)
In this paper, we evaluate the performance of an ontology using low-level-feature (color, texture and shape) based CBIR together with topic-specific CBIR. The resulting ontology can be used to extract the appropriate images from an image database, which is one of the difficult tasks in multimedia technology. Our results show that the values of recall and precision can be enhanced, and that the semantic gap can also be reduced. The proposed algorithm also extracts the texture values from the images automatically, along with their category (smooth, coarse, etc.) and technical interpretation.
This document summarizes and reviews several techniques for image mining, including feature extraction, image clustering, and object recognition algorithms. It discusses color, texture, and edge feature extraction techniques and evaluates their precision and recall. It also describes the block truncation algorithm for image recognition and the cascade feature extraction approach. The key techniques - color moments, block truncation coding, and cascade classifiers - are evaluated based on experimental recall and precision results. Overall, the document provides an overview of different image mining techniques and evaluates their effectiveness.
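The color moments named above (per-channel mean, standard deviation and skewness) are one of the simplest feature vectors used in such systems; a minimal sketch of how they are usually computed:

```python
import numpy as np

def color_moments(img):
    """Per-channel mean, standard deviation and (cube-rooted) third
    central moment -- the three 'color moments' used as a compact
    nine-dimensional CBIR color feature for an RGB image."""
    feats = []
    for c in range(img.shape[2]):
        ch = img[:, :, c].astype(float).ravel()
        mu = ch.mean()
        sigma = ch.std()
        skew = np.cbrt(((ch - mu) ** 3).mean())  # same units as the pixel values
        feats.extend([mu, sigma, skew])
    return np.array(feats)
```

Two images are then compared by a simple distance between their nine-element feature vectors, which is far cheaper than comparing full histograms.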
EXPLOITING REFERENCE IMAGES IN EXPOSING GEOMETRICAL DISTORTIONS (ijma)
Nowadays, image alteration in the mainstream media has become common, and the degree of manipulation is facilitated by image editing software. Over the past two decades the number of manipulated images has grown rapidly. Hence, many circulating images have no provenance information or certainty of authenticity, so constructing a scientific, automatic way of evaluating image authenticity is an important task, and it is the aim of this paper. Despite their strong performance, the image forensics schemes developed so far have not provided verifiable information about the source of tampering. This paper proposes a different kind of scheme, which exploits a group of similar images to verify the source of tampering. First, we give our definition of a tampered image. Distinctive features are obtained using the Scale-Invariant Feature Transform (SIFT). We then propose a clustering technique that identifies the tampered region based on the distinctive keypoints; in contrast to the k-means algorithm, our technique does not require the value of k to be initialized. Experimental results on the dataset indicate the efficacy of the proposed scheme.
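The idea of grouping keypoints without fixing k in advance can be sketched with a simple distance-threshold (single-linkage) grouping; this is an assumed stand-in to illustrate the concept, not the authors' exact algorithm:

```python
import numpy as np

def cluster_keypoints(points, radius):
    """Group 2-D keypoints into clusters: two points end up in the same
    cluster when they are within `radius` of each other, transitively
    (union-find single linkage). Unlike k-means, no cluster count k has
    to be chosen up front."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = np.asarray(points[i]) - np.asarray(points[j])
            if np.hypot(dx, dy) <= radius:
                parent[find(i)] = find(j)  # merge the two clusters

    labels = [find(i) for i in range(n)]
    remap = {root: k for k, root in enumerate(dict.fromkeys(labels))}
    return [remap[l] for l in labels]    # relabel as 0..k-1
```

With SIFT keypoints, matched points that fall into one dense cluster would indicate a candidate copied or tampered region.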
This document presents a novel edge detection algorithm proposed for mammographic images. It begins with an abstract summarizing the paper's focus on edge detection in mammograms and comparison to other common edge detection methods. It then provides background on edge detection and medical image analysis, describing common gradient and derivative-based edge detection methods. The main body introduces a new two-phase edge detection process called Binary Homogeneity Enhancement Algorithm (BHEA) that homogenizes the mammogram and detects edges by traversing the image horizontally and vertically. Results from the new method are then compared to other common edge detection filters.
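As a reference point for the gradient-based detectors the paper compares against, here is a minimal Sobel edge-magnitude sketch (a generic baseline, not the BHEA algorithm itself; the threshold is an illustrative parameter):

```python
import numpy as np

def sobel_edges(img, thresh):
    """Edge map from 3x3 Sobel gradient magnitude, thresholded -- one of
    the classical derivative-based detectors."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal gradient
    ky = kx.T                                                   # vertical gradient
    h, w = img.shape
    f = img.astype(float)
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = f[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = (win * kx).sum()
            gy[y, x] = (win * ky).sum()
    mag = np.hypot(gx, gy)
    return (mag >= thresh).astype(np.uint8)
```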
Image retrieval is one of the major innovations in the handling of images. Image mining is used to mine new information from large collections of images. CBIR is a recent method in which target images are retrieved on the basis of specific features of a query image. Images can be retrieved quickly if they are clustered in an accurate and structured manner. In this paper, we combine the theory of CBIR with an analysis of the features of CBIR systems.
This document discusses content-based image mining techniques for image retrieval. It provides an overview of image mining, describing how image mining goes beyond content-based image retrieval by aiming to discover significant patterns in large image collections according to user queries. The document reviews several existing image mining techniques, including those using color histograms, texture analysis, clustering algorithms like k-means, and association rule mining. It discusses challenges in developing universal image retrieval methods and proposes combining low-level visual features with high-level semantic features. Overall, the document surveys the state of the art in content-based image mining and retrieval.
This document provides a review of various texture classification approaches and texture datasets. It begins with an introduction to texture classification and its general framework. Key steps in texture classification are preprocessing, feature extraction, and classification. The document then discusses several common feature extraction methods used in texture classification, including local binary pattern (LBP), scale invariant feature transform (SIFT), speeded up robust features (SURF), Fourier transformation, texture spectrum, and gray level co-occurrence matrix (GLCM). It also reviews three popular classifiers for texture classification: K-nearest neighbors (K-NN), artificial neural network (ANN), and support vector machine (SVM). Finally, it mentions several popular texture datasets that are commonly used for training and testing texture classification methods.
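One of the listed feature extractors, the gray level co-occurrence matrix (GLCM), can be sketched as follows; the single displacement and the two statistics shown are illustrative choices:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one (dx, dy) displacement,
    plus contrast and energy, two statistics commonly fed to a texture
    classifier. `img` must hold integer gray levels in [0, levels)."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1  # count co-occurring pairs
    P /= P.sum()                                    # normalize to probabilities
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()  # high for rough texture
    energy = (P ** 2).sum()              # high for uniform texture
    return P, contrast, energy
```

In practice several displacements and angles are accumulated, and the statistics from each become entries of the texture feature vector.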
This document presents a method for tracking moving objects in video sequences using affine flow parameters combined with illumination insensitive template matching. The method extracts affine flow parameters from frames to model local object motion using affine transformations. It then applies template matching with illumination compensation to track objects across frames while being robust to illumination changes. The method is evaluated on various indoor and outdoor database videos and is shown to effectively track objects without false detections, handling issues like illumination variations, camera motion and dynamic backgrounds better than other methods.
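Illumination-insensitive template matching is often realized with zero-mean normalized cross-correlation (NCC), which is invariant to a brightness offset and gain. A hedged sketch of this standard formulation (the paper's exact matching cost is not specified here):

```python
import numpy as np

def ncc(template, window):
    """Zero-mean normalized cross-correlation in [-1, 1]: subtracting the
    means and dividing by the norms cancels additive brightness shifts
    and multiplicative gain."""
    t = template.astype(float) - template.mean()
    w = window.astype(float) - window.mean()
    denom = np.sqrt((t ** 2).sum() * (w ** 2).sum())
    if denom == 0:
        return -1.0  # flat patch: correlation undefined, treat as no match
    return (t * w).sum() / denom

def match(template, image):
    """Slide the template over the image; return the top-left corner of
    the best-scoring window and its score."""
    th, tw = template.shape
    h, w = image.shape
    best, pos = -2.0, (0, 0)
    for y in range(h - th + 1):
        for x in range(w - tw + 1):
            s = ncc(template, image[y:y + th, x:x + tw])
            if s > best:
                best, pos = s, (y, x)
    return pos, best
```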
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Content Based Image Retrieval Approach Based on Top-Hat Transform And Modifie... (cscpconf)
In this paper a robust approach is proposed for content-based image retrieval (CBIR) using texture analysis techniques. The proposed approach comprises three main steps. In the first, shape detection based on the Top-Hat transform is used to detect and crop the object part of the image. The second step is a texture feature representation algorithm using color local binary patterns (CLBP) and local variance features. Finally, the log-likelihood ratio is used to retrieve the images most closely matching the query. The performance of the proposed approach is evaluated on the Corel and Simplicity image sets and compared with several other well-known approaches in terms of precision and recall, which shows the superiority of the proposed approach. Low noise sensitivity, rotation invariance, shift invariance, gray-scale invariance and low computational complexity are among its other advantages.
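The Top-Hat transform used in the first step is standard morphology: the image minus its opening (erosion then dilation). A small sketch with a square structuring element (the element size is an illustrative parameter):

```python
import numpy as np

def erode(img, k=3):
    """Gray-scale erosion with a k x k square structuring element."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = p[y:y + k, x:x + k].min()
    return out

def dilate(img, k=3):
    """Gray-scale dilation with a k x k square structuring element."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = p[y:y + k, x:x + k].max()
    return out

def white_top_hat(img, k=3):
    """White top-hat: the image minus its morphological opening, keeping
    only bright details smaller than the structuring element."""
    return img - dilate(erode(img, k), k)
```

Bright object details that do not survive the opening are exactly what the top-hat keeps, which is why it is useful for isolating the object part before texture analysis.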
This document summarizes a research paper on tracking moving objects and determining their distance and velocity using background subtraction algorithms. It first describes background subtraction as a process to extract foreground objects from video by comparing each frame to a background model. It then discusses several algorithms used in the research, including median filtering for noise removal, morphological operations to smooth object regions, and connected component analysis to detect large foreground regions representing objects. The document evaluates these techniques on video to track a single object, determine the distance and velocity of that object between frames, and identify multiple moving objects.
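The background subtraction step described above can be sketched as thresholded differencing against a background model, with a running-average update; the threshold and learning rate are illustrative parameters:

```python
import numpy as np

def foreground_mask(frame, background, thresh=25):
    """Mark as foreground the pixels whose absolute difference from the
    background model exceeds `thresh`."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return (diff > thresh).astype(np.uint8)

def update_background(background, frame, alpha=0.05):
    """Running-average background model: blend a little of each new
    frame into the background so slow scene changes are absorbed."""
    return (1 - alpha) * background + alpha * frame.astype(float)
```

The morphological smoothing and connected-component analysis mentioned in the summary would then be applied to this raw mask before measuring object distance and velocity.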
An efficient method for recognizing the low quality fingerprint verification ... (IJCI JOURNAL)
In this paper, we propose an efficient method of personal identification using fingerprints that achieves better accuracy even in noisy conditions. Fingerprint matching based on the number of corresponding minutia pairings has been in use for a long time, but it is not very efficient for recognizing low-quality fingerprints. To overcome this problem, a correlation technique is used. The correlation-based fingerprint verification system is capable of dealing with low-quality images from which no minutiae can be extracted reliably, with fingerprints that suffer from non-uniform shape distortions, and with damaged or partial images. Orientation Field Methodology (OFM) is used as a preprocessing module; it converts the image into a field pattern based on the direction of the ridges, loops and bifurcations in the fingerprint image. The input image is then cross-correlated (CC) with all the images in the cluster, and the most highly correlated image is taken as the output. The result gives a good recognition rate, as the proposed scheme uses Cross Correlation of Field Orientation (CCFO = OFM + CC) for fingerprint identification.
Digital Image Forgery Detection Using Improved Illumination Detection Model (Editor IJMTER)
Image processing methods are widely used in advertisements, magazines, blogs, websites, television and more. Since digital images took on this role, committing crimes and escaping from them has become easier. To keep matters lawful, no one should be punished for a crime they did not commit, and this application can be used to help them. Identification using the color-edge method gives an exact detection of the crime and of the forgeries that have been carried out in a digital image.
Image composition or splicing methods are used to discover image forgeries. The approach is machine-learning-based and requires minimal user interaction; it is applicable to images containing two or more people and requires no expert interaction for the tampering decision. Classification with an SVM (Support Vector Machine) meta-fusion classifier yields detection rates of 86% on a new benchmark dataset consisting of 200 images, and 83% on 50 images collected from the Internet.
Further improvements can be achieved when more advanced illuminant color estimators become available. Bianco and Schettini have proposed a machine-learning-based illuminant estimator, particularly for faces, which would help achieve more accurate prediction here. Effective skin detection methods have been developed in the computer vision literature, and they also help in detecting pornographic compositions which, according to forensic practitioners, have become increasingly common.
This document describes an image preprocessing scheme for line detection using the Hough transform in a mobile robot vision system. The preprocessing includes resizing images to 128x96 pixels, converting to grayscale, performing edge detection using Sobel filters, and edge thinning. A newly developed edge thinning method is found to produce images better suited for the Hough transform than other thinning methods. The preprocessed images are then used as input for line detection and the robot's self-navigation system.
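The Hough transform that this preprocessing feeds can be sketched as a vote accumulator in (rho, theta) space over the edge pixels; the angular resolution is an illustrative parameter:

```python
import numpy as np

def hough_lines(edge_img, n_theta=180):
    """Standard Hough transform for lines: each edge pixel (x, y) votes
    for every (rho, theta) with rho = x*cos(theta) + y*sin(theta);
    peaks in the accumulator correspond to lines in the image."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))               # max possible |rho|
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1     # offset rho to a valid row
    return acc, thetas, diag
```

The thinning step discussed above matters precisely because thick edges would smear votes across neighboring (rho, theta) cells and blur the peaks.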
Texture based feature extraction and object tracking (Priyanka Goswami)
This document provides a project report on texture-based feature extraction and object tracking. It discusses using various texture analysis techniques like Local Binary Pattern (LBP), Local Derivative Pattern (LDP), and Local Ternary Pattern (LTP) to extract features from images for tasks like cloud tracking. It implements these techniques in MATLAB and evaluates them on standard datasets to extract features and represent images with histograms for tasks like image recognition and analysis while reducing computational requirements compared to using raw images. The techniques are then applied to track cloud motion in weather satellite images by analyzing differences in texture histograms over time.
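The basic 3x3 LBP operator the report builds on can be sketched as follows; the bit ordering is one common convention, and LDP and LTP are analogous extensions not shown here:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 3x3 local binary pattern: each interior pixel is coded by an
    8-bit word recording which of its 8 neighbors are >= the center, and
    the image is summarized by the histogram of the 256 codes."""
    h, w = img.shape
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y, x]
            nb = [img[y-1, x-1], img[y-1, x], img[y-1, x+1], img[y, x+1],
                  img[y+1, x+1], img[y+1, x], img[y+1, x-1], img[y, x-1]]
            codes.append(sum(int(v >= c) << i for i, v in enumerate(nb)))
    return np.bincount(codes, minlength=256)
```

Comparing such histograms between frames (e.g. by chi-square or L1 distance) is the kind of lightweight representation the report uses to follow cloud motion without processing raw pixels.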
This document summarizes various image segmentation techniques including region-based, edge-based, thresholding, feature-based clustering, and model-based segmentation. It provides details on each technique, including advantages and disadvantages. Region-based segmentation groups similar pixels into regions while edge-based segmentation detects boundaries between regions. Thresholding uses threshold values from histograms to segment images. Feature-based clustering groups pixels based on characteristics like intensity. Model-based segmentation uses probabilistic models like Markov random fields. The document concludes that the best technique depends on the application and image type, though thresholding is simplest computationally.
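Of the techniques surveyed, thresholding is the simplest computationally; Otsu's histogram-based threshold selection is a standard concrete instance of choosing the threshold from the histogram:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the gray level that maximizes the between-class
    variance of the two classes it induces -- classic histogram-based
    threshold selection for 8-bit images."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    global_mean = (np.arange(256) * hist).sum() / total
    best_t, best_var = 0, -1.0
    cum, cum_mean = 0.0, 0.0
    for t in range(256):
        cum += hist[t]           # class-0 pixel count up to level t
        cum_mean += t * hist[t]  # class-0 intensity sum
        if cum == 0 or cum == total:
            continue             # one class empty: variance undefined
        w0 = cum / total
        mu0 = cum_mean / cum
        mu1 = (global_mean * total - cum_mean) / (total - cum)
        var = w0 * (1 - w0) * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```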
Feature Extraction and Feature Selection using Textual Analysis (vivatechijri)
After pre-processing the images in a character recognition system, the images are segmented based on certain characteristics known as "features". The feature space identified for character recognition, however, spans a huge number of dimensions. To address this dimensionality problem, feature selection and feature extraction methods are used. In this paper, we discuss the different techniques for feature extraction and feature selection, and how these techniques are used to reduce the dimensionality of the feature space and improve the performance of text categorization.
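A toy illustration of filter-style feature selection, keeping the highest-variance feature columns; this is a generic example for intuition, not one of the paper's specific techniques:

```python
import numpy as np

def select_by_variance(X, k):
    """Keep the k columns of the samples-by-features matrix X with the
    highest variance across samples; near-constant features carry little
    discriminative information and are dropped."""
    keep = np.sort(np.argsort(X.var(axis=0))[::-1][:k])
    return keep, X[:, keep]
```

Feature extraction methods (e.g. PCA) instead build new combined dimensions, whereas selection like this keeps a subset of the original ones, which preserves interpretability.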
Analysis and Detection of Image Forgery Methodologies (ijsrd.com)
"Forgery" is a subjective word. An image can become a forgery based upon the context in which it is used. An image altered for fun, or a bad photo altered to improve its appearance, cannot be considered a forgery even though it has been changed from its original capture. On the other side are those who perpetrate a forgery for gain and prestige: they create an image intended to dupe the recipient into believing it is real, and from this they are able to gain payment and fame. Detecting these types of forgeries has become a serious problem. Determining whether a digital image is original or doctored, and finding the marks of tampering in it, is a big challenge. Such tampering can be produced by various operations, called attacks, such as rotation, scaling, JPEG compression and Gaussian noise. Various methods have been proposed in recent years to detect the above-mentioned attacks. This paper provides a detailed analysis of the different approaches and methodologies used to detect image forgery. It also finds that block-based feature methods are robust to Gaussian noise and JPEG compression, while keypoint-based feature methods are robust to rotation and scaling.
A novel embedded hybrid thinning algorithm for... (prjpublications)
The document proposes a hybrid thinning algorithm that combines the Stentiford and Zhang-Suen thinning algorithms. It compares the hybrid algorithm to the original Stentiford and Zhang-Suen algorithms on an input image. The hybrid algorithm more accurately thins the image to a single pixel width but does not improve time complexity compared to the original algorithms. The hybrid approach uses four templates across two sub-iterations to identify and remove pixels based on connectivity values until no more can be removed. Experimental results show the hybrid algorithm more effectively increases image contrast than the original thinning algorithms.
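For reference, the Zhang-Suen half of the hybrid is a well-known two-sub-iteration thinning procedure; a compact sketch of the standard formulation (not the hybrid variant itself):

```python
import numpy as np

def zhang_suen_thin(img):
    """Zhang-Suen thinning: repeatedly delete boundary pixels that pass
    the neighbor-count, transition-count, and directional tests, in two
    alternating sub-iterations, until nothing changes."""
    im = img.copy().astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_del = []
            for y in range(1, im.shape[0] - 1):
                for x in range(1, im.shape[1] - 1):
                    if im[y, x] == 0:
                        continue
                    # neighbors P2..P9, clockwise from north
                    p = [im[y-1, x], im[y-1, x+1], im[y, x+1], im[y+1, x+1],
                         im[y+1, x], im[y+1, x-1], im[y, x-1], im[y-1, x-1]]
                    b = sum(p)                                   # B(P1)
                    a = sum(p[i] == 0 and p[(i+1) % 8] == 1      # A(P1)
                            for i in range(8))
                    if step == 0:
                        c1 = p[0] * p[2] * p[4] == 0  # P2*P4*P6
                        c2 = p[2] * p[4] * p[6] == 0  # P4*P6*P8
                    else:
                        c1 = p[0] * p[2] * p[6] == 0  # P2*P4*P8
                        c2 = p[0] * p[4] * p[6] == 0  # P2*P6*P8
                    if 2 <= b <= 6 and a == 1 and c1 and c2:
                        to_del.append((y, x))
            for y, x in to_del:
                im[y, x] = 0
            changed = changed or bool(to_del)
    return im
```

The A(P1) == 1 transition test is what preserves connectivity while the strokes shrink toward unit width.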
Land Boundary Detection of an Island using improved Morphological Operation (CSCJournals)
Image analysis is one of the important tasks in obtaining information about the earth's surface. To detect and mark a particular land area, an image from a remote platform is required, and to recognize it, the accurate boundary of that area has to be detected. In this paper, remote sensing imagery is considered as the example. Accurate detection of the boundary is a complex task, and a novel method is proposed here to detect the boundary of such land. Mathematical morphology is a simple and efficient method for this type of task; the morphological analysis is performed using structuring elements (SE). Using mathematical morphology the images can be enhanced, and then the boundary can be detected easily. The noise is simultaneously removed by the proposed model. The results exhibit the performance of the proposed method.
Keywords: remote sensing images; edge detection; gray-scale morphological analysis; structuring element (SE).
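Morphological boundary extraction of a binary land mask is commonly realized as the mask minus its erosion; a minimal sketch with a 3x3 structuring element (a generic construction, not the paper's improved operation):

```python
import numpy as np

def boundary(mask):
    """Morphological boundary: subtracting the erosion of a binary mask
    from the mask itself leaves exactly the border pixels, since only
    pixels whose whole 3x3 neighborhood is inside the region survive
    erosion."""
    h, w = mask.shape
    er = np.zeros_like(mask)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            er[y, x] = mask[y - 1:y + 2, x - 1:x + 2].min()
    return mask - er
```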
Shot Boundary Detection using Radon Projection Method (IDES Editor)
This paper proposes a novel technique for shot boundary detection using radon projection. It first removes illumination effects using DCT and DWT. Then it detects shot boundaries using radon transformation, which projects image intensity along radial lines at different angles. Differences in projections between frames indicate shot boundaries. The technique was tested on news, documentary and movie videos, achieving satisfactory results. Radon projection effectively summarizes 3D video volumes into the projection domain for comparing frames and detecting shot changes. The method handles illumination changes and object/camera motions better than previous techniques.
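The projection idea can be illustrated with the 0- and 90-degree special cases of the Radon transform (row and column sums) and an L1 distance between consecutive frame signatures; the threshold and distance are illustrative, and the paper additionally removes illumination effects first:

```python
import numpy as np

def projections(frame):
    """Row and column intensity projections of a frame -- the 0 and 90
    degree cases of the Radon transform, concatenated as a compact
    signature of the frame."""
    f = frame.astype(float)
    return np.concatenate([f.sum(axis=0), f.sum(axis=1)])

def shot_boundaries(frames, thresh):
    """Flag a cut between consecutive frames whose projection signatures
    differ by more than `thresh` in L1 distance."""
    sigs = [projections(f) for f in frames]
    return [i for i in range(1, len(sigs))
            if np.abs(sigs[i] - sigs[i - 1]).sum() > thresh]
```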
A Methodology for Extracting Standing Human Bodies from Single Images (journal ijrtem)
Abstract: Extraction of the human body from unconstrained still images is challenging due to several factors, including shading, image noise, occlusions, background clutter, the high degree of human body deformability, and unrestricted positions due to in-plane and out-of-plane rotations. We propose a bottom-up approach for human body segmentation in static images. We decompose the problem into three sequential problems: face detection, upper body extraction, and lower body extraction, since there is a direct pairwise correlation among them. Index Terms: skin segmentation, torso, face recognition, thresholding, ethnicity, morphology.
A Critical Survey on Detection of Object and Tracking of Object With differen... (Editor IJMTER)
Object detection and object tracking are two important and challenging aspects of many computer vision applications such as surveillance systems, vehicle navigation, autonomous robot navigation and video compression. Object detection is the first low-level task for any video surveillance application, and detecting a moving object is itself challenging. Tracking is required in higher-level applications that need the location and shape of the object. There are three key steps in video analysis: detection of interesting moving objects, tracking of such objects from frame to frame, and analysis of the object tracks to recognize their behavior. Object detection and tracking, especially of humans and vehicles, is currently a most active research topic, with ongoing work ranging from applications to novel algorithms. The main objective of this paper is to survey various moving object detection and object tracking methodologies.
This document provides an overview of pattern recognition and supervised learning for machine vision. It discusses what pattern recognition is, examples of pattern recognition applications, the basic steps in a pattern recognition system including data acquisition, preprocessing, feature extraction, supervised/unsupervised learning, and post-processing. For supervised learning, it describes the process of inferring functions from labeled training data. It also provides an example of using multiple features and decision boundaries for texture classification of images.
EMOTION INTERACTION WITH VIRTUAL REALITY USING HYBRID EMOTION C... (ijcsit)
The document describes a new hybrid method for classifying human emotions using electroencephalogram (EEG) brain signals for interaction with virtual reality. The method combines self-assessment, arousal valence dimension modeling, and analysis of variance in brain hemisphere activity. Two basic emotions, happy and sad, are highlighted. EEG signals are used to interpret the user's emotional state. Emotion interaction is expressed through a 3D model changing its walking style based on the classified user emotion. The results show the hybrid method can classify emotions in different circumstances and synchronize a 3D virtual model accordingly. The goal is to develop a new technique for classifying emotions to provide feedback through a 3D virtual character's walking expression.
Entropy Nucleus and Use in Waste Disposal Policies (ijitjournal)
The central theme of this article is that the usual Shannon entropy [1] is not sufficient to address the unknown Gaussian population average. A remedy is necessary. By peeling away entropy junkies, a refined version is introduced, named nucleus entropy in this article. Statistical properties and advantages of the Gaussian nucleus entropy are derived and utilized to interpret 2005 and 2007 waste disposals (in 1,000 tons) by fifty-one states (including the District of Columbia) in the USA. Each state generates its own waste, imports waste from another state for revenue, and exports waste to another state with a payment [2]. Nucleus entropy is large when the population average is large and/or when the population variance is smaller. Nucleus entropy assesses the significance of the waste policies under four scenarios: (1) keep only generated, (2) keep generated with receiving in and shipping out, (3) without receiving in, and (4) without shipping out. In the end, a few recommendations are suggested for waste management policy makers.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Content Based Image Retrieval Approach Based on Top-Hat Transform And Modifie...cscpconf
In this paper a robust approach is proposed for content-based image retrieval (CBIR) using texture analysis techniques. The proposed approach includes three main steps. In the first, shape detection is performed using the Top-Hat transform to detect and crop the object part of the image. The second step is a texture feature representation algorithm using color local binary patterns (CLBP) and local variance features. Finally, to retrieve the images most closely matching the query, the log-likelihood ratio is used. The performance of the proposed approach is evaluated on the Corel and Simplicity image sets and compared with several well-known approaches in terms of precision and recall, showing the superiority of the proposed approach. Low noise sensitivity, rotation invariance, shift invariance, gray-scale invariance and low computational complexity are among its other advantages.
This document summarizes a research paper on tracking moving objects and determining their distance and velocity using background subtraction algorithms. It first describes background subtraction as a process to extract foreground objects from video by comparing each frame to a background model. It then discusses several algorithms used in the research, including median filtering for noise removal, morphological operations to smooth object regions, and connected component analysis to detect large foreground regions representing objects. The document evaluates these techniques on video to track a single object, determine the distance and velocity of that object between frames, and identify multiple moving objects.
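The core of the background subtraction step described above can be sketched as thresholded frame differencing (a minimal NumPy illustration; the paper's full pipeline adds median filtering, morphological smoothing and connected-component analysis, which are not reproduced here):

```python
import numpy as np

def subtract_background(frame, background, threshold=30):
    """Label pixels whose absolute difference from the background
    model exceeds a threshold as foreground (1) vs background (0)."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return (diff > threshold).astype(np.uint8)

# Toy example: a static 5x5 background and one bright "object" pixel.
background = np.zeros((5, 5), dtype=np.uint8)
frame = background.copy()
frame[2, 2] = 200  # the moving object

mask = subtract_background(frame, background)
print(mask.sum())  # one foreground pixel detected
```

In a real tracker this mask would then be cleaned with a median filter and morphology before extracting object regions.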
An efficient method for recognizing the low quality fingerprint verification ...IJCI JOURNAL
In this paper, we propose an efficient method to provide personal identification using fingerprint to get better accuracy even in noisy condition. The fingerprint matching based on the number of corresponding minutia pairings, has been in use for a long time, which is not very efficient for recognizing the low quality fingerprints. To overcome this problem, correlation technique is used. The correlation-based fingerprint verification system is capable of dealing with low quality images from which no minutiae can be extracted reliably and with fingerprints that suffer from non-uniform shape distortions, also in case of damaged and partial images. Orientation Field Methodology (OFM) has been used as a preprocessing module, and it converts the images into a field pattern based on the direction of the ridges, loops and bifurcations in the image of a fingerprint. The input image is then Cross Correlated (CC) with all the images in the cluster and the highest correlated image is taken as the output. The result gives a good recognition rate, as the proposed scheme uses Cross Correlation of Field Orientation (CCFO = OFM + CC) for fingerprint identification.
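The correlation step in such a scheme can be illustrated with a plain normalized cross-correlation score (a hedged sketch only; the paper's CCFO method first converts fingerprints into orientation-field patterns before correlating, which is not reproduced here):

```python
import numpy as np

def norm_xcorr(a, b):
    """Normalized cross-correlation between two equal-size images;
    1.0 means a perfect match, negative values mean anti-correlation."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom)

template = np.array([[0., 1., 0.],
                     [1., 1., 1.],
                     [0., 1., 0.]])
print(norm_xcorr(template, template))      # 1.0 for an identical image
print(norm_xcorr(template, 1 - template))  # negative for the inverted image
```

The query image would be scored this way against every image in the cluster, and the highest-scoring one returned.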
Digital Image Forgery Detection Using Improved Illumination Detection ModelEditor IJMTER
Image processing methods are widely used in advertisements, magazines, blogs, websites, television and more. As digital images took on this role, committing crimes, and escaping from them, became easier. No one should be punished for a crime they did not commit, and this application can help ensure that. Identification using the color-edge method gives an exact detection of the forgeries that have been introduced into a digital image.
Image composition or splicing methods are used to discover image forgeries. The approach is machine-learning based and requires minimal user interaction; it is applicable to images containing two or more people and requires no expert interaction for the tampering decision. Classification with an SVM (Support Vector Machine) meta-fusion classifier yields detection rates of 86% on a new benchmark dataset consisting of 200 images, and 83% on 50 images collected from the Internet.
Further improvements can be achieved as more advanced illuminant color estimators become available. Bianco and Schettini have proposed a machine-learning based illuminant estimator specifically for faces, which would help make predictions more accurate. Effective skin detection methods developed in the computer vision literature also help in detecting pornographic compositions, which, according to forensic practitioners, have become increasingly common.
This document describes an image preprocessing scheme for line detection using the Hough transform in a mobile robot vision system. The preprocessing includes resizing images to 128x96 pixels, converting to grayscale, performing edge detection using Sobel filters, and edge thinning. A newly developed edge thinning method is found to produce images better suited for the Hough transform than other thinning methods. The preprocessed images are then used as input for line detection and the robot's self-navigation system.
Texture based feature extraction and object trackingPriyanka Goswami
This document provides a project report on texture-based feature extraction and object tracking. It discusses using various texture analysis techniques like Local Binary Pattern (LBP), Local Derivative Pattern (LDP), and Local Ternary Pattern (LTP) to extract features from images for tasks like cloud tracking. It implements these techniques in MATLAB and evaluates them on standard datasets to extract features and represent images with histograms for tasks like image recognition and analysis while reducing computational requirements compared to using raw images. The techniques are then applied to track cloud motion in weather satellite images by analyzing differences in texture histograms over time.
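The basic LBP operator mentioned above can be sketched in a few lines (a pure NumPy illustration; real systems use the vectorized implementations in libraries such as scikit-image, and LDP/LTP are analogous extensions):

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 Local Binary Pattern: each interior pixel gets an 8-bit
    code from thresholding its neighbours against the centre value."""
    h, w = img.shape
    # neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di, j + dj] >= img[i, j]:
                    code |= 1 << bit
            codes[i - 1, j - 1] = code
    return codes

img = np.array([[9, 9, 9],
                [0, 5, 0],
                [0, 0, 0]], dtype=np.uint8)
print(lbp_codes(img))  # [[7]]: only the three top neighbours exceed 5
```

A histogram of these codes over an image patch is the texture feature that is then compared across frames, e.g. for cloud tracking.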
This document summarizes various image segmentation techniques including region-based, edge-based, thresholding, feature-based clustering, and model-based segmentation. It provides details on each technique, including advantages and disadvantages. Region-based segmentation groups similar pixels into regions while edge-based segmentation detects boundaries between regions. Thresholding uses threshold values from histograms to segment images. Feature-based clustering groups pixels based on characteristics like intensity. Model-based segmentation uses probabilistic models like Markov random fields. The document concludes that the best technique depends on the application and image type, though thresholding is simplest computationally.
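Histogram-based thresholding, the computationally simplest technique noted above, can be sketched with Otsu's method, which picks the gray level maximizing between-class variance (an illustrative NumPy version):

```python
import numpy as np

def otsu_threshold(img):
    """Histogram-based (Otsu) threshold for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2                  # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal toy image: dark background, bright object.
img = np.array([[10, 10, 10, 200],
                [10, 200, 200, 200]], dtype=np.uint8)
t = otsu_threshold(img)
segmented = img > t
print(t, segmented.sum())  # threshold lands between the two modes
```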
Feature Extraction and Feature Selection using Textual Analysisvivatechijri
After pre-processing the images in character recognition systems, the images are segmented based on certain characteristics known as "features". The feature space identified for character recognition, however, spans a huge dimensionality. To solve this dimensionality problem, feature selection and feature extraction methods are used. In this paper we discuss the different techniques for feature extraction and feature selection, and how these techniques are used to reduce the dimensionality of the feature space and improve the performance of text categorization.
Analysis and Detection of Image Forgery Methodologiesijsrd.com
"Forgery" is a subjective word. An image can become a forgery based upon the context in which it is used. An image altered for fun or someone who has taken a bad photo, but has been altered to improve its appearance cannot be considered a forgery even though it has been altered from its original capture. The other side of forgery are those who perpetuate a forgery for gain and prestige. They create an image in which to dupe the recipient into believing the image is real and from this they are able to gain payment and fame. Detecting these types of forgeries has become serious problem at present. To determine whether a digital image is original or doctored is a big challenge. To find the marks of tampering in a digital image is a challenging task. Now these marks of tampering can be done by various operations such as rotation, scaling, JPEG compression, Gaussian noise etc. called as attacks. There are various methods proposed in this field in recent years to detect above mentioned attacks. This paper provides a detailed analysis of different approaches and methodologies used to detect image forgery. It is also analysed that block-based features methods are robust to Gaussian noise and JPEG compression and the key point-based feature methods are robust to rotation and scaling.
A novel embedded hybrid thinning algorithm forprjpublications
The document proposes a hybrid thinning algorithm that combines the Stentiford and Zhang-Suen thinning algorithms. It compares the hybrid algorithm to the original Stentiford and Zhang-Suen algorithms on an input image. The hybrid algorithm more accurately thins the image to a single pixel width but does not improve time complexity compared to the original algorithms. The hybrid approach uses four templates across two sub-iterations to identify and remove pixels based on connectivity values until no more can be removed. Experimental results show the hybrid algorithm more effectively increases image contrast than the original thinning algorithms.
Land Boundary Detection of an Island using improved Morphological OperationCSCJournals
Image analysis is one of the important tasks in obtaining information about the earth's surface. To detect and mark a particular land area, an image acquired from a remote platform is required, and to recognize the area its accurate boundary has to be detected. In this paper, remote sensing images are considered as the example. Accurate detection of the boundary is a complex task, and a novel method is proposed here to detect the boundary of such land. Mathematical morphology is a simple and efficient tool for this type of task; the morphological analysis is performed using structuring elements (SE). Using mathematical morphology, the images can be enhanced and the boundary then detected easily, while noise is simultaneously removed by the proposed model. The results exhibit the performance of the proposed method. Keywords: Remote sensing images; Edge detection; Gray-scale morphological analysis; Structuring Element (SE).
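The morphological route to a boundary can be illustrated with the classic morphological gradient, dilation minus erosion (a plain NumPy sketch with a square SE; the paper's improved operation is not reproduced here):

```python
import numpy as np

def dilate(mask, se):
    """Binary dilation of `mask` by structuring element `se` (0/1 arrays)."""
    h, w = mask.shape
    sh, sw = se.shape
    padded = np.pad(mask, ((sh // 2, sh // 2), (sw // 2, sw // 2)))
    out = np.zeros_like(mask)
    for i in range(h):
        for j in range(w):
            out[i, j] = 1 if (padded[i:i + sh, j:j + sw] & se).any() else 0
    return out

def erode(mask, se):
    """Binary erosion: a pixel survives only if the SE fits entirely inside."""
    h, w = mask.shape
    sh, sw = se.shape
    padded = np.pad(mask, ((sh // 2, sh // 2), (sw // 2, sw // 2)))
    out = np.zeros_like(mask)
    for i in range(h):
        for j in range(w):
            region = padded[i:i + sh, j:j + sw]
            out[i, j] = 1 if (region[se == 1] == 1).all() else 0
    return out

# Morphological gradient: dilation minus erosion leaves the boundary ring.
se = np.ones((3, 3), dtype=np.uint8)
land = np.zeros((7, 7), dtype=np.uint8)
land[2:5, 2:5] = 1  # a 3x3 "island"
boundary = dilate(land, se) - erode(land, se)
print(boundary.sum())  # boundary pixels around the island interior
```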
Shot Boundary Detection using Radon Projection MethodIDES Editor
This paper proposes a novel technique for shot boundary detection using radon projection. It first removes illumination effects using DCT and DWT. Then it detects shot boundaries using radon transformation, which projects image intensity along radial lines at different angles. Differences in projections between frames indicate shot boundaries. The technique was tested on news, documentary and movie videos, achieving satisfactory results. Radon projection effectively summarizes 3D video volumes into the projection domain for comparing frames and detecting shot changes. The method handles illumination changes and object/camera motions better than previous techniques.
A Methodology for Extracting Standing Human Bodies from Single Imagesjournal ijrtem
Abstract: Extraction of the human body from unconstrained still images is challenging due to several factors, including shading, image noise, occlusions, background clutter, the high degree of human body deformability, and unrestricted positions due to in- and out-of-image-plane rotations. We propose a bottom-up approach for human body segmentation in static images. We decompose the problem into three sequential problems: face detection, upper body extraction, and lower body extraction, since there is a direct pairwise correlation among them. Index Terms: Skin segmentation, Torso, Face recognition, Thresholding, Ethnicity, Morphology.
A Critical Survey on Detection of Object and Tracking of Object With differen...Editor IJMTER
Object detection and object tracking are two important and challenging aspects of many computer vision applications, such as surveillance systems, vehicle navigation, autonomous robot navigation and video compression. Object detection is the first low-level task for any video surveillance application, and detecting a moving object is itself challenging. Tracking is required in higher-level applications that need the location and shape of the object. There are three key steps in video analysis: detection of interesting moving objects, tracking of such objects from frame to frame, and analysis of object tracks to recognize their behavior. Object detection and tracking, especially of humans and vehicles, is currently one of the most active research topics, with work ranging from applications to novel algorithms. The main objective of this paper is to survey various moving object detection and object tracking methodologies.
This document provides an overview of pattern recognition and supervised learning for machine vision. It discusses what pattern recognition is, examples of pattern recognition applications, the basic steps in a pattern recognition system including data acquisition, preprocessing, feature extraction, supervised/unsupervised learning, and post-processing. For supervised learning, it describes the process of inferring functions from labeled training data. It also provides an example of using multiple features and decision boundaries for texture classification of images.
Emotion Interaction with Virtual Reality Using Hybrid Emotion C...ijcsit
The Balduzzi decree-law requires sports clubs to have a defibrillator. Here is the text of the law, which clarifies competences and responsibilities.
This document outlines a joint effort between IBM's Poughkeepsie Lab and Silicon Valley Lab to benchmark a 50TB data warehouse on System z and establish best practices for managing large data warehouses on the System z platform. It discusses using workload manager to handle mixed transactional and analytic workloads, implementation considerations, and references several other IBM Redbooks publications related to enterprise data warehousing with DB2 on System z.
Affable Compression through Lossless Column-Oriented Huffman Coding TechniqueIOSR Journals
This document discusses a lossless column-oriented compression technique using Huffman coding. It begins by explaining that column-oriented data compression is more efficient than row-oriented compression because values within the same attribute are more correlated. It then proposes compressing and decompressing column-oriented data images using the Huffman coding technique. Finally, it implements a software algorithm to compress and decompress column-oriented databases using Huffman coding in MATLAB.
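Column-oriented compression works because values within one attribute are highly correlated, so ordinary Huffman coding over a single column performs well. A minimal sketch (illustrative only; the paper's MATLAB implementation and image handling are not reproduced):

```python
import heapq
from collections import Counter

def huffman_codes(values):
    """Build a Huffman code table for a column of values:
    shorter codes for more frequent values."""
    freq = Counter(values)
    if len(freq) == 1:  # degenerate single-symbol column
        return {next(iter(freq)): "0"}
    # heap entries: (frequency, tie-breaker, {symbol: partial code})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

# A column with skewed value frequencies compresses well:
column = ["NY"] * 6 + ["CA"] * 3 + ["TX"]
codes = huffman_codes(column)
encoded = "".join(codes[v] for v in column)
# Fixed-width 2-bit codes would take 20 bits; Huffman uses 14 here.
print(len(encoded))
```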
The portfolio is a teaching and assessment method that involves collecting students' work in order to judge their abilities. Students show their progress over time, and teachers can assess the achievement of objectives. The electronic portfolio allows different types of digital content to be included and shows the relationships between concepts. The portfolio guides students and shows their learning, and it also enables continuous assessment and student autonomy.
The document discusses checksums and how even small errors can cause visible changes in checksums. A checksum is a numeric value used to verify data integrity. The document contains two identical numbers that are likely checksum values to demonstrate that a checksum can detect even minor changes to data.
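The point about small changes can be demonstrated directly with a standard CRC-32 checksum (using Python's zlib here; any checksum function behaves similarly):

```python
import zlib

data = b"The quick brown fox jumps over the lazy dog"
altered = b"The quick brown fox jumps over the lazy dot"  # one byte changed

# The CRC-32 values differ even though only the last byte changed.
print(zlib.crc32(data))
print(zlib.crc32(altered))
```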
This document outlines a business plan for a mobile healthcare application called "Your Personal Healthcare Concierge". The app would provide patients information about their condition, treatment, and navigation of hospital facilities. It includes integrated modules like maps, timelines, medical encyclopedias and a survey system. The management team aims to license the app to 500+ hospitals starting in 2012. Funds of $200k are being sought to complete marketing, testing and development before a public release.
The document repeatedly states that for more study material one should log on to the website www.ululu.in. It provides this message and URL 15 separate times throughout. The purpose seems to be promoting the website as a source for additional study resources.
The document discusses special finance in the used car industry. It notes that historically, used car dealers did most special finance lending, but now many third parties have entered the market. Key metrics are provided, such as average gross profit per deal being around $2,966 and average retail price being $14,953. Various marketing channels are discussed, and it is noted that while digital has grown, building a strong brand remains very important. Additional statistics on conversion rates and costs per lead and sale are also presented.
The document discusses the path from randomness and uncertainty to information, thermodynamics, and intelligence of an observer. It proposes that random interactions from an observable process can be integrated into information processes by an observer. The observer applies "impulse controls" that cut off portions of the observable process, converting uncertainty to certainty and extracting information. This establishes an "information path" where the observer sequentially decreases entropy and maximizes information. The information is encoded in an "information network" within the observer that exhibits logical computation and leads to the development of intelligence. In this way, the model aims to unite uncertainty, information dynamics, and the emergence of an intelligent observer.
Compression technique using dct fractal compressionAlexander Decker
This document summarizes and compares different image compression techniques, including DCT, fractal compression, and their applications in steganography. It discusses how DCT works by transforming image data into frequency domains, while fractal compression exploits self-similarity within images. The document reviews several existing studies on combining these techniques with steganography and encryption. Specifically, it examines approaches that use DCT and fractal compression to improve data hiding capacity and security. Overall, the document provides an overview of key compression algorithms and their applications in digital watermarking and steganography.
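DCT-based compression works because the transform concentrates a smooth signal's energy into a few low-frequency coefficients, which are kept while the rest are coarsely quantized or dropped. A small sketch of that energy-compaction property, using the direct DCT-II formula (illustrative only; this is not any paper's specific scheme):

```python
import numpy as np

def dct_1d(x):
    """Type-II DCT (the transform family used in JPEG-style coding),
    computed from the direct formula."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * k * (2 * n + 1) / (2 * N)))
                     for k in range(N)])

# A smooth (linear ramp) signal concentrates energy in low frequencies.
signal = np.linspace(0, 1, 8)
coeffs = dct_1d(signal)
energy = coeffs ** 2
print(energy[:2].sum() / energy.sum())  # close to 1: strong compaction
```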
The MeSH Database + Limits method applied to a search on the prevention of venous thrombosis with anticoagulants. A chapter is devoted to MyNCBI.
The document presents several statistical problems to solve, including calculating the average salary of workers, the average weight of students, and the average grade of a course. It also asks for measures such as the median to be calculated and comparisons to be made. Finally, it presents price data for items in a cafeteria and asks for measures such as the median and quartiles to be calculated and the data to be plotted.
This document outlines the specifications for an AS coursework main task involving creating the titles and opening of an original fiction film up to two minutes in length. It notes that all video and audio material must be original, with the exception of copyright-free music or effects. Students can work individually or in groups of up to four. The marking criteria and breakdown of marks is also provided, as well as guidance on research, planning, construction, evaluation, and use of new technologies.
Internet data almost doubles every year. Multimedia communication needs low storage space and fast transmission, so the large volume of video data has made video compression necessary. The aim of this paper is to achieve temporal compression for three-dimensional (3D) videos using motion estimation-compensation and wavelets. Instead of performing a two-dimensional (2D) motion search, as is common in conventional video codecs, a 3D motion search is proposed, which better exploits the temporal correlations of 3D content. This leads to more accurate motion prediction and a smaller residual. A discrete wavelet transform (DWT) compression scheme is added for a better compression ratio; the DWT's high energy-compaction property has greatly impacted the field of compression. The quality parameters peak signal-to-noise ratio (PSNR) and mean square error (MSE) have been calculated, and the simulation results show that the proposed work improves the PSNR over existing work.
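The quality metrics named above are straightforward to compute; a minimal NumPy sketch (the values here are a toy example, not any paper's results):

```python
import numpy as np

def mse(a, b):
    """Mean square error between two images."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the
    reconstruction is closer to the original."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

original = np.full((4, 4), 100, dtype=np.uint8)
reconstructed = original.copy()
reconstructed[0, 0] = 110  # one pixel off by 10

print(mse(original, reconstructed))   # 100 / 16 = 6.25
print(psnr(original, reconstructed))  # about 40 dB
```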
Analysis of image compression algorithms using wavelet transform with gui in ...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
The document summarizes solar irradiation spectra and spectral reflectance signatures of different materials that are important for optical remote sensing. It discusses how the solar irradiation spectrum above the atmosphere can be modeled as a blackbody radiation spectrum with a peak around 500nm. After passing through the atmosphere, solar irradiation at the ground is modulated by atmospheric transmission windows, remaining mostly between 0.25-3μm. It also explains how different materials like water, soil, and vegetation have unique reflectance spectra that serve as signatures and how these spectra are impacted by factors like composition, turbidity, and plant health.
Cfhb annex 24 rapid public communication - civil information assessment-vers0.1Mamuka Mchedlidze
This document provides a template for assessing rapid public communication and civil information in a municipality or village. The template includes sections to gather information on telephone services, telegraph services, radio broadcast, television broadcast, two-way communication systems, postal services, periodicals, printers and publishing facilities, and public address systems. Recommendations are also requested.
A Comparative Study of Content Based Image Retrieval Trends and ApproachesCSCJournals
Content Based Image Retrieval (CBIR) is an important step in addressing image storage and management problems. Latest image technology improvements along with the Internet growth have led to a huge amount of digital multimedia during the recent decades. Various methods, algorithms and systems have been proposed to solve these problems. Such studies revealed the indexing and retrieval concepts, which have further evolved to Content-Based Image Retrieval. CBIR systems often analyze image content via the so-called low-level features for indexing and retrieval, such as color, texture and shape. In order to achieve significantly higher semantic performance, recent systems seek to combine low-level with high-level features that contain perceptual information for human. Purpose of this review is to identify the set of methods that have been used for CBR and also to discuss some of the key contributions in the current decade related to image retrieval and main challenges involved in the adaptation of existing image retrieval techniques to build useful systems that can handle real-world data. By making use of various CBIR approaches accurate, repeatable, quantitative data must be efficiently extracted in order to improve the retrieval accuracy of content-based image retrieval systems. In this paper, various approaches of CBIR and available algorithms are reviewed. Comparative results of various techniques are presented and their advantages, disadvantages and limitations are discussed.
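The low-level color feature at the heart of many CBIR systems can be sketched as a normalized histogram compared under an L1 distance (an illustrative toy only; real systems add texture and shape features and more refined metrics):

```python
import numpy as np

def color_histogram(img, bins=8):
    """Normalized per-channel color histogram: a simple low-level CBIR feature."""
    feats = []
    for c in range(img.shape[2]):
        h, _ = np.histogram(img[..., c], bins=bins, range=(0, 256))
        feats.append(h / h.sum())
    return np.concatenate(feats)

def l1_distance(f1, f2):
    return float(np.abs(f1 - f2).sum())

# Toy database: one dark image, one bright image; a dark query.
dark = np.full((8, 8, 3), 20, dtype=np.uint8)
bright = np.full((8, 8, 3), 220, dtype=np.uint8)
query = np.full((8, 8, 3), 25, dtype=np.uint8)

q = color_histogram(query)
d_dark = l1_distance(q, color_histogram(dark))
d_bright = l1_distance(q, color_histogram(bright))
print(d_dark < d_bright)  # the dark image is retrieved first
```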
Tag based image retrieval (tbir) using automatic image annotationeSAT Publishing House
Tag based image retrieval (tbir) using automatic image annotationeSAT Journals
Abstract In recent days, several social networking sites are more popular with digitized images. It comprises the major portion of the databases which makes the search engines to face difficulty in searching. We present a proficient image retrieval technique, which achieves eminent retrieval efficiency. Most of the images are annotated manually, thus the visual content and tags may be mismatched. This leads to poor performance in Tag Based Image Retrieval (TBIR). Automatic Image Annotation (AIA) analyzes the missing and noisy tags and over-refines it to increase the performance of TBIR. AIA can be achieved using the Tag Completion algorithm. The images retrieved from the TBIR are ranked based on the relevancy of the tags and visual content of the images. The relevancy can be evaluated using Content Based Image Retrieval (CBIR) technique. Based on the ranks, the images are indexed in the Tag matrix. Thus the images that match the search query can be retrieved in an optimal way. Keywords: Image Retrieval, Automatic Image Annotation, Tag Based Image Retrieval (TBIR), Tag Completion Algorithm, Content Based Image Retrieval (CBIR), Tag Matrix
IRJET- Retrieval of Images & Text using Data Mining TechniquesIRJET Journal
This document discusses using data mining techniques like clustering and association rule mining for image retrieval. It proposes a system that extracts both visual features (e.g. color, texture) and textual features from images. The features are clustered separately, then association rules are mined by fusing the clusters. Strong association rules are selected as training data. A query image's features are mined to find matching rules to retrieve semantically related images from the database. This combines content-based and text-based retrieval to address limitations of each approach individually.
This document discusses various techniques for image retrieval, including text-based, content-based, and hybrid approaches. Content-based image retrieval (CBIR) extracts visual features such as color, texture, and shape from images and retrieves images similar to a query image. CBIR systems segment images, extract features, search databases, and return results. CBIR has advantages over text-based retrieval, but challenges remain around the semantic gap between low-level features and high-level concepts. The document also discusses evaluating retrieval performance and promising future research directions, such as reducing the semantic gap, developing techniques more aligned with human perception, and improving efficiency and interfaces.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
A Survey on Image Retrieval Techniques with Feature Extraction (IRJET Journal)
This document discusses content-based image retrieval techniques. It provides an overview of different image retrieval approaches, including text-based, content-based, and hybrid methods. Content-based image retrieval aims to retrieve images based on automatically extracted visual features like color, texture, and shape, rather than relying on textual metadata or keywords. The document reviews recent research that has improved content-based image retrieval performance, such as incorporating relevance feedback and focusing on local image regions rather than global features. It also proposes a new image retrieval model to further optimize existing techniques.
Applications of Spatial Features in CBIR: A Survey (csandit)
With advances in computer technology and the World Wide Web, there has been an explosion in the amount and complexity of multimedia data that are generated, stored, transmitted, analyzed, and accessed. To extract useful information from this huge amount of data, many content based image retrieval (CBIR) systems have been developed in the last decade. A typical CBIR system captures image features that represent image properties such as the color, texture, or shape of objects in the query image and tries to retrieve images from the database with similar features. Retrieval efficiency and accuracy are the important issues in designing a content based image retrieval system. Shape and spatial features are quite easy and simple to derive, and effective. Researchers are moving towards finding spatial features and exploring the scope of incorporating these features into the image retrieval framework to reduce the semantic gap. This survey paper focuses on a detailed review of the different methods and evaluation techniques used in recent work based on spatial features in CBIR systems. Finally, several recommendations for future research directions are suggested based on recent technologies.
APPLICATIONS OF SPATIAL FEATURES IN CBIR: A SURVEY (cscpconf)
This document summarizes research on using spatial features for content-based image retrieval (CBIR). It first discusses common CBIR techniques like feature extraction, selection, and similarity measurement. It then reviews several related works that extract spatial features like edge histograms and color difference histograms. Experimental results show integrating spatial information through image partitioning can improve semantic concept detection performance. While finer partitions carry more spatial data, coarser partitions like 2x2 are preferred to avoid feature mismatch. Future work may explore combining multiple feature domains and contexts to further enhance retrieval accuracy and effectiveness for large-scale image datasets.
System Analysis and Design for Multimedia Retrieval Systems (ijma)
Due to the extensive use of information technology and recent developments in multimedia systems, the amount of multimedia data available to users has increased exponentially. Video is an example of multimedia data, as it contains several kinds of data such as text, images, metadata, and visual and audio streams. Content based video retrieval is an approach for facilitating the searching and browsing of large multimedia collections over the WWW. To create an effective video retrieval system, visual perception must be taken into account. We conjectured that a technique employing multiple features for indexing and retrieval would be more effective in the discrimination and search tasks for videos. To validate this, content based indexing and retrieval systems were implemented using color histograms, a texture feature (GLCM), edge density, and motion.
International Journal of Engineering Research and Development (IJERD) (IJERD Editor)
CONTENT RECOVERY AND IMAGE RETRIEVAL IN IMAGE DATABASE CONTENT RETRIEVING IN TE... (Editor IJMTER)
Digital images are used in magazines, blogs, websites, television, and more. Digital image processing techniques are used for feature selection, pattern extraction, classification, and retrieval requirements. Color, texture, and shape features are used in image processing, which also supports the computer graphics and computer vision domains. Scene text recognition is performed with two schemes: character recognizer and binary character classifier models. A character recognizer is trained to predict the category of a character in an image patch. A binary character classifier is trained for each character class to predict the existence of that category in an image patch. Scene text recognition is performed on detected text regions. A pixel-based layout analysis method is adopted to extract text regions and segment text characters in images. Text character segmentation is carried out using the color uniformity and horizontal alignment of text characters. A discriminative character descriptor is designed by combining several feature detectors and descriptors. Histogram of Oriented Gradients (HOG) is used to identify the character descriptors. Character structure is modeled for each character class by designing stroke configuration maps. The scene text extraction scheme also supports smart mobile devices. Text recognition methods are used in text understanding and text retrieval applications. The text recognition scheme is enhanced with a content based image retrieval process. The system is integrated with additional representative and discriminative features for the text structure modeling process, and is enhanced to perform text- and word-level recognition using lexicon analysis. The training process includes a word database update task.
IRJET: Image Based Information Retrieval (IRJET Journal)
This document discusses content-based image retrieval (CBIR) for retrieving images based on visual similarity. It focuses on using CBIR to match images of monuments for tourism applications. The paper describes extracting shape features using edge histogram descriptors to divide images into sub-images and compare edge distributions. An experiment matches images of Humayun's Tomb and the Statue of Liberty by comparing their edge magnitude values across sub-images. Similar edge distributions between two images' sub-images indicates similarity in shape and matches the images. The paper concludes CBIR using shape features can effectively match similar images of monuments to provide relevant information to users.
This document discusses content-based image mining techniques for image retrieval. It provides an overview of image mining, describing how image mining goes beyond content-based image retrieval by aiming to discover significant patterns in large image collections according to user queries. The document reviews several existing image mining techniques, including those using color histograms, texture analysis, clustering algorithms like k-means, and association rule mining. It discusses challenges in developing universal image retrieval methods and proposes combining low-level visual features with high-level semantic features. Overall, the document surveys the state of the art in content-based image mining and retrieval.
IRJET- A Survey on Different Image Retrieval TechniquesIRJET Journal
This document discusses different techniques for content-based image retrieval. It begins by describing content-based image retrieval (CBIR) and how it uses visual features like color, texture, and shape to search for images, unlike text-based retrieval which relies on metadata. It then discusses various CBIR techniques in detail, focusing on block truncation coding (BTC) techniques. Specifically, it examines dot diffusion block truncation coding (DDBTC), which extracts color histogram and bit pattern features to retrieve images. Performance is measured using average precision and recall rates.
Content Based Image Retrieval (CBIR) aims at retrieving the images from the database based on the user query which is visual form rather than the traditional text form. The applications of CBIR extend from surveillance to remote sensing, medical imaging to weather forecasting, and security systems to historical research and so on. Though extensive research is made on content based image retrieval in the spatial domain, we have most images in the internet which is JPEG compressed which pushes the need for image retrieval in the compressed domain itself rather than decoding it to raw format before comparison and retrieval. This research addresses the need to retrieve the images from the database based on the features extracted from the compressed domain along with the application of genetic algorithm in improving the retrieval results. The research focuses on various features and their levels of impact on improving the precision and recall parameters of the CBIR system. Our experimentation results also indicate that the CBIR features in compressed domain along with the genetic algorithm usage improves the results considerably when compared with the literature techniques.
International Journal on Information Theory (IJIT), Vol.3, No.4, October 2014
A NOVEL APPROACH TO DEVELOP A NEW HYBRID
TECHNIQUE FOR TRADEMARK IMAGE RETRIEVAL
Saurabh Agarwal1 and Punit Kumar Johari2
1 2 Department of CSE/IT,
Madhav Institute of Technology and Science, Gwalior
ABSTRACT
Trademark image retrieval plays a vital role as a part of CBIR systems. Trademarks are of great significance because they carry the status value of a company. To retrieve fake or copied trademarks, we design a retrieval system based on a hybrid technique: it combines two different feature vectors into a suitable retrieval system. In the proposed system we extract a corner feature, which is applied to an edge-pixel image. This feature is used to extract the relevant images, and to further refine the result we apply another feature, the invariant moment feature. From the experimental results we conclude that the system is 85 percent efficient.
KEYWORDS
CBIR, TIR, Prompt Edge Detection, Corner Count, Invariant Moments.
1. INTRODUCTION
The rapid growth of computer technology and digital systems helps users store multimedia information, digital images, and other digital data in an effective and processed manner. With digital storage the amount of data has increased, and it is difficult to search and obtain the desired outcome from this huge volume of data. Since it is very tricky for a user to search for desired content, there is a demand for a retrieval system that understands the user's needs and searches for the required results. But designing a system that is close enough to human perception is a challenging task.
The demand for such an innovative retrieval system has attracted various researchers to this active research area. There are several factors by which to judge the overall performance of a system: the quality of the output, the time required to perform an individual query, and, most importantly, the difference between human perception and the retrieval system, which must be as low as possible.
Early retrieval systems used textual annotation. Such a system works on the principle of assigning keywords to each image; to search for a desired result, textual queries are applied to the system. This is known as a Text Based Image Retrieval system. It works well on small amounts of data, but as the data grows it becomes very hard to annotate a text or keyword for each individual image, so this approach is not suitable for today's scenario.
DOI: 10.5121/ijit.2014.3403
To overcome the problems of Text Based Image Retrieval, a new approach was introduced that works on the content features of the image. In 1992 Kato [1] introduced a new term in the field of retrieval systems for approaches that use content features; such systems are well known as Content Based Image Retrieval (CBIR) systems. Kato emphasized the use of color and shape as the content features for performing the retrieval process. Later, a further feature, texture, was also added to the field of CBIR systems.
The CBIR approach is based on query by example: a query image is passed through the retrieval system, and the images from the image database whose features are closest to those of the query image are selected. CBIR uses three main content features:
1.1 Shape
Shape [2] as a feature does not refer to the shape of any particular object; it refers to shape-related properties like foreground, background, region, contour, etc. Of these properties, contour detection and region detection are the most popular.
1.2 Color
Color [3] is the easiest feature and the closest to human perception: the machine categorizes the feature and intensity values much as a human does. The machine represents images in standard color formats like RGB, CMY, HSV, etc. In a color format, the features are stored according to the intensity values of the standard colors, which lie between 0 and 255. These intensity values are used to find the relevant images.
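As an illustration, the intensity-value feature described above can be sketched as a per-channel histogram over the 0-255 range. This is a minimal sketch, not code from the paper; the 16-bin quantization is an assumption chosen for illustration.

```python
import numpy as np

def color_histogram(image, bins=16):
    # Quantize each RGB channel's 0-255 intensity range into `bins`
    # buckets and concatenate the normalized per-channel histograms.
    features = []
    for channel in range(3):  # R, G, B planes
        hist, _ = np.histogram(image[..., channel], bins=bins, range=(0, 256))
        features.append(hist / hist.sum())  # normalize per channel
    return np.concatenate(features)

# A height x width x 3 uint8 image yields a 3 * bins feature vector,
# which can then be compared against the stored database features.
img = np.zeros((4, 4, 3), dtype=np.uint8)
vec = color_histogram(img)
```

With the default 16 bins, the resulting vector has 48 components, one normalized histogram per channel.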
1.3 Texture
Texture [4] refers to repeated patterns in an image. Two major tasks are performed: first, finding the regions that contain a texture pattern, and then finding the properties of those visual patterns. The properties that define texture patterns are those of a surface with homogeneous patterns. The main texture features are contrast, roughness, directionality, energy, entropy, etc.; these are also known as the Tamura [5] features.
In CBIR systems, shape features have been found to be more flexible and accurate than the other two. Because shape features closely match human observation, they are very popular among researchers.
The Trademark Image Retrieval (TIR) [6, 7] system is of great importance nowadays. As a trademark holds the prestigious value of a company, it is very important to prevent a similar image from being copied by another company. TIR is a branch of shape-based CBIR, so it is natural to build a TIR system using shape features. Trademarks can be broadly classified into four types [8]. The first category is the word-in-mark, which contains only words and characters. The second is the device mark, which contains specific shapes and graphical designs. The next is the composite mark, a combination of the previous two, i.e. it contains both words and graphical designs. The last is the complex mark, an extension of the composite mark that consists of three-dimensional graphical designs. The classification can be better understood with the help of Figure 1.
Figure 1. Types of trademark (Kim & Kim, 1998)

2. EXISTING RETRIEVAL SYSTEM
In CBIR systems the work is mainly performed on the shape contents. To extract the shape features, different shape descriptor techniques are used. These techniques are broadly classified into two main categories: contour based shape descriptors and region based shape descriptors.

Contour refers to the boundary pixels of any object in an image. Using the boundary pixel feature, many contour descriptors were developed, like the histogram of centroid distance [9], the tangential direction of contour points [10], and many more. To perform all this we need an edge boundary, and it is a hard task to find a smooth and connected edge in a noisy image. Finding an edge holding both properties is very tough, but many algorithms have been developed that achieve nearly satisfactory results, i.e. the Canny, Sobel, Prewitt, Roberts, and Prompt edge detection systems.

Region refers to the area internally covered by the edge pixels, including the edge line. There are many region based shape descriptors; some of those frequently used by researchers are Hu's invariant moments, Zernike moments, the wavelet transform, the Fourier descriptor, SIFT, etc. Of these we mainly emphasize Hu's invariant moments, because they have the property of handling TRS (Translation, Rotation and Scaling) structures.

Much previous work has been performed on trademark retrieval systems. Trademark retrieval is categorized into three different types of system [11] on which active researchers are working. The first of these is the TRADEMARK system, introduced in 1990 by Kato et al. This system works on shape descriptors derived from graphical shape vectors. The second system is named STAR and was introduced by Wu et al. in 1996. It works on the basis of a CBIR system with some extended features of different region based shape descriptors. The last one is the ARTISAN system, introduced by Eakins et al. in 1996. It works on the principle of Gestalt. The Gestalt theory [12] states that human visual perception is conditional on the properties of the image. This theory was introduced in the 19th century by a team of psychologists; according to them, there remains a challenge of finding accurate features.
3. OVERVIEW OF THE PROPOSED WORK
The proposed system works on the principle of a CBIR system. It consists of two phases, an offline phase and an online phase. This combination of offline and online processing is illustrated in Figure 2.
The first phase, the offline phase, contains a dataset of images in different formats, which is passed through a pre-processing unit that applies functions to make the images more suitable inputs. This step includes changing color formats, resizing the image, or any other pre-processing function. After applying these functions we extract the features of each image, which depend on the applied algorithm. These features are then stored in a database for further processing on demand by the user. This whole process is performed offline, i.e. the time complexity of the system does not depend on it.
The other phase, the online phase, is the main part, or better to say the heart, of the system. It is quite similar to the offline phase because it shares some of the same functions. Here the user submits the query image, which goes through the pre-processing and feature extraction stages exactly as in the offline phase. The main part of this unit is the similarity measurement function: the features obtained in both phases are compared to find the closest matching images. These extracted images are the relevant images, the output of the retrieval system.
Figure 2. Image Retrieval system
4. FEATURE EXTRACTION METHODOLOGY
Feature extraction is a very important part of the retrieval system. Features are those points which define the whole or part of an image and can be used to find the relevant images among the database images. To extract the features we use shape descriptors; as discussed earlier, shape descriptors are of two types, and our main focus is on region based shape descriptors. Among the region based descriptors we find that the corner count feature performs well, but by performing
some experiments we conclude that it is not an easy task to find the corner points of a noisy or roughly scanned image. We performed our experiments on trademark images, most of which are scanned images of old company logos. To extract fine and appropriate corners in such an image we must take the help of contour based shape features. After some experimentation we found that prompt based edge detection finds a fine and appropriate edge for any noisy image.
4.1 Edge Detection
We use prompt based edge detection [13]. To find the edge pixels we evaluate every pixel of the image one after another. To decide whether a pixel is an edge pixel or not, the system performs calculations such as the absolute difference between its intensity value and those of its neighbouring pixels, which drives the decision. The elaborated process of prompt based edge detection is shown in Algorithm 1.
Algorithm 1. Prompt Based Edge Detection
1. Select the input image I.
2. Find the image size in row and column form:
   [R, C] = size(I);
3. For each pixel in the image, repeat steps 4 to 6.
4. Calculate the absolute difference with all 8 neighboring pixels.
5. Find the number of differences that exceed the local threshold (T):
   If difference > T,
   then k (difference count) = k + 1.
6. If 3 < k < 6,
   then the above pixel is an edge pixel.
7. Connect all the calculated edge pixels in a single image to obtain the desired result.
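The steps of Algorithm 1 can be sketched in Python as follows. This is a minimal sketch, assuming a grayscale uint8 image; the default threshold T = 20 is an illustrative assumption, since the paper leaves the local threshold unspecified.

```python
import numpy as np

def prompt_edge_detection(I, T=20):
    # Steps 3-6 of Algorithm 1: a pixel is marked as an edge pixel
    # when the absolute intensity difference with k of its 8
    # neighbours exceeds the local threshold T, with 3 < k < 6.
    I = I.astype(np.int32)               # avoid uint8 wrap-around
    R, C = I.shape                       # step 2: image size
    edges = np.zeros((R, C), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for r in range(1, R - 1):            # interior pixels only
        for c in range(1, C - 1):
            k = sum(abs(I[r, c] - I[r + dr, c + dc]) > T
                    for dr, dc in offsets)
            if 3 < k < 6:                # step 6: edge decision
                edges[r, c] = 1
    return edges                         # step 7: the edge map
```

Note that the 3 < k < 6 rule suppresses straight edge interiors (where exactly 3 neighbours differ) and isolated noise pixels (where most neighbours differ), responding mainly to corners and curved boundary points.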
4.2 Corner Point Detection
Corners are those points which show large intensity changes with respect to their neighbouring pixels. To evaluate the corner pixels most researchers use eigenvalue-based measures of the local gradient structure, which are used to build a corner metric matrix. This approach was first introduced by Harris and Stephens [14], who use the sum of squared differences of local intensities to find the corner pixels. For a clearer picture of corner point detection, see Algorithm 2.
Algorithm 2. Corner Count in an Image
1. Select the input image I.
2. Generate the corner metric matrix of the image I:
   CM = cornermetric(I);
3. Find the corner peaks in the CM matrix:
   (x, y) = corner index.
4. Plot all the corner coordinates in the image.
5. Calculate the total number of corners in the image.
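The cornermetric function in Algorithm 2 refers to MATLAB. As a rough stand-in, a Harris-style corner response [14] can be computed and counted as below; the 3x3 smoothing window, the sensitivity k, and the peak-ratio cutoff are illustrative assumptions, not values from the paper.

```python
import numpy as np

def corner_count(I, k=0.04, peak_ratio=0.1):
    # Build the Harris corner response from image gradients, then
    # count responses above a fraction of the maximum response.
    I = I.astype(float)
    Iy, Ix = np.gradient(I)              # gradients along rows, columns
    rows, cols = I.shape

    def box(A):                          # 3x3 box smoothing of the tensor
        P = np.pad(A, 1, mode='edge')
        return sum(P[1 + dr:1 + dr + rows, 1 + dc:1 + dc + cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)) / 9.0

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    R = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2   # corner metric
    return int((R > peak_ratio * R.max()).sum())
```

The det minus k times trace-squared response is large and positive only where the gradient varies strongly in both directions, i.e. at corners, so thresholding it relative to its maximum yields a corner count for the image.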
4.3 Invariant Moments
In 1962, Hu presented seven invariant moments [15], calculated for two-dimensional graphical images and introduced for pattern recognition of visual images. They are popular among researchers because of their flexibility in dealing with translated, rotated, and scaled images.
The seven moments introduced by Hu are shown below:
φ1 = η20 + η02
φ2 = (η20 − η02)² + 4η11²
φ3 = (η30 − 3η12)² + (3η21 − η03)²
φ4 = (η30 + η12)² + (η21 + η03)²
φ5 = (η30 − 3η12)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] + (3η21 − η03)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²]
φ6 = (η20 − η02)[(η30 + η12)² − (η21 + η03)²] + 4η11(η30 + η12)(η21 + η03)
φ7 = (3η21 − η03)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] − (η30 − 3η12)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²]
From the experiments performed we conclude that the central moments are more reliable for handling translation-invariant structures, while the first two or three moments are more flexible with rotated structures. To better understand the working principle of the invariant moments, see Algorithm 3.
Algorithm 3. Invariant Moments
1. Select the input image I.
2. Transform the image into a two-dimensional, real-valued, numeric form.
3. Calculate the raw moments m_pq.
4. Calculate the central moments μ_pq about the centroid.
5. Find the normalized central moments η_pq.
6. Evaluate all seven of Hu's moments using the output from step 5.
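Algorithm 3 can be sketched as follows. This is a hypothetical implementation assuming a 2-D grayscale array; the moment definitions follow the standard raw, central, and normalized moment formulas rather than any code from the paper.

```python
import numpy as np

def hu_moments(img):
    # Raw moments -> centroid -> central moments -> normalized
    # central moments eta_pq -> Hu's seven invariants (steps 3-6).
    img = img.astype(float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()                                   # raw moment m00
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00  # centroid
    eta = lambda p, q: (((x - xc) ** p) * ((y - yc) ** q) * img).sum() \
        / m00 ** (1 + (p + q) / 2)                    # normalized central
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return [
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
        (n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
        (n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
        + 4 * n11 * (n30 + n12) * (n21 + n03),
        (3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
    ]
```

Because the moments are taken about the centroid and normalized by m00, translating the same shape within the image leaves all seven values unchanged, which is the property that makes them useful for matching scanned trademarks.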
5. FEATURE MATCHING
The term feature matching refers to the similarity measurement between the query image and the images stored in the database. It is a very important part of the retrieval process: a good choice of matching strategy helps a system give better and faster results, and vice versa.
Normally, feature matching computes the difference between two feature vectors, and this
difference is passed through a threshold system which filters out unwanted results. The feature
matching method most commonly used by researchers is the Euclidean distance [16]. The equation
for the Euclidean distance is shown in equation (a): the squared differences of all the feature
points are summed and passed through a square-root function, which gives the distance between
the two images.

D(Q, I) = √( Σi (fQ,i − fI,i)² ) ………… (a)
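Equation (a) translates directly into a short sketch; the feature vectors here are plain Python sequences for illustration.

```python
import math

def euclidean_distance(f_query, f_db):
    """Equation (a): square root of the summed squared feature differences."""
    return math.sqrt(sum((q - d) ** 2 for q, d in zip(f_query, f_db)))
```

For example, two feature points differing by 3 and 4 give the familiar distance 5.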
5.1 Threshold function
It is a difficult task to separate the relevant images from the non-relevant ones. For this, a
threshold function is used to filter the final result. In our proposed algorithm the main focus is
on the threshold system. From the experiments performed on the retrieval system, we conclude
that for matching the corner-point feature, the threshold values must be adjusted according to
the query image. The corner count and the threshold value are directly proportional to each
other, as expressed in equation (b).
Threshold ∝ Corner count ………… (b)
For this we design a threshold system suited to the query, in which the minimum and maximum
thresholds are set at run time. To better understand the system, refer to Algorithm 4.
Algorithm 4. Threshold function
1. Find the number of corner in an image.
Count = Cornercount (I);
2. Initialize the value of range difference coefficient R and threshold difference coefficient
T.
3. For Count in range from init_R (initially 0) to final_R, repeat step 4 to 5.
4. Set, Threshold = T;
and, T = T * Multiplying Coefficient;
5. Set, init_R = init_R + R;
and, final_R = final_R +R;
6. Calculate the minimum and maximum threshold.
Min_T = Count – Threshold;
Max_T = Count + Threshold;
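Algorithm 4 can be sketched as follows. The range coefficient R, the initial threshold T and the multiplying coefficient are illustrative assumptions, not values from the paper; the sketch walks the count through successive range bands, scaling the threshold once per band so that a larger corner count gets a wider matching window, as equation (b) requires.

```python
def corner_threshold(count, R=50, T=5.0, mult=1.5):
    """Sketch of Algorithm 4: compute the (Min_T, Max_T) matching window.
    R, T and mult are illustrative values, not taken from the paper."""
    final_r = R          # end of the first range band (init_R starts at 0)
    threshold = T
    # Steps 3-5: advance through range bands, scaling the threshold each time
    while count >= final_r:
        threshold *= mult
        final_r += R
    # Step 6: the minimum and maximum thresholds around the corner count
    return count - threshold, count + threshold
```

With these defaults, a count of 10 stays in the first band and gets a window of width 10, while a count of 120 crosses two bands and gets a proportionally wider window.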
6. PROPOSED ALGORITHM
In the proposed algorithm we first apply the prompt edge detection method on the images to
extract the boundary pixels. Our target is then to find which of these boundary pixels belong to
the set of corner pixels; for this we apply corner point detection to obtain the corner count for
each individual image. Using these corner-count values we find the images similar to the query
image. To further purify the result, we pass the output to the rotational invariant filter. The
working of the whole process is shown in Algorithm 5.
Algorithm 5. Proposed Algorithm
1. Select the input image I.
I = Query image
2. Convert the image in gray scale intensity values.
Input_image = rgb2gray (I);
3. Find the Edge pixel image using Prompt edge detection.
Edge_image=Prompt_edge (Input_image);
4. Find the corner points of the Edge pixel image.
Corner_count = corner_point (Edge_image);
5. Apply the similarity measurement algorithm
Difference_value=|Corner_count–Corner_count_database |
6. Find the rotational moment value of QI images (i.e. query image and the images obtained
from step 5)
Phi = invmoments (QI);
7. Display the images filtered through Step 6.
The flow Chart of the proposed algorithm is shown in Figure 3.
Figure 3. Flow chart of the proposed retrieval system
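The two matching stages of Algorithm 5 (steps 5 and 6) can be sketched as follows. The feature layout (a corner count paired with a Hu-moment vector per image) and the tolerance parameters are assumptions for illustration, not the authors' implementation.

```python
def retrieve(query_feats, db_feats, count_tol, moment_tol):
    """Two-stage matching: corner-count filter, then Hu-moment refinement."""
    q_count, q_hu = query_feats
    # Stage 1 (step 5): keep images whose corner count is close to the query's
    stage1 = [name for name, (count, _) in db_feats.items()
              if abs(count - q_count) <= count_tol]
    # Stage 2 (step 6): refine by Euclidean distance between Hu-moment vectors
    def hu_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [name for name in stage1
            if hu_dist(db_feats[name][1], q_hu) <= moment_tol]
```

An image that passes the corner-count filter but has a distant moment vector is dropped in the second stage, which is exactly the "purifying" role the invariant-moment feature plays in the proposed system.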
7. PERFORMANCE EVALUATION
This section presents the results obtained at different stages of the system's testing phase.
Developing a system that satisfies human needs is the final goal of the retrieval process. To
judge the results against the desired goal we use precision and recall. The precision/recall
graph is the most commonly used decision-making tool for trademark image
retrieval systems. There exists a standard formula for calculating the precision and recall [17]
values of a system. The formulas used in the proposed experiment are shown in equation (c) for
precision and equation (d) for recall.
Precision = Nr / Tr ………… (c)
Recall = Nr / Ts ………… (d)
Where,
Nr = Number of similar images in the retrieved result.
Tr = Total number of images in the retrieved result.
Ts = Total number of similar images in the database.
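Equations (c) and (d) can be sketched directly from these definitions; the image lists below are hypothetical examples, not taken from the paper's dataset.

```python
def precision_recall(retrieved, relevant):
    """Equations (c) and (d): precision = Nr / Tr, recall = Nr / Ts."""
    nr = len(set(retrieved) & set(relevant))   # similar images actually retrieved
    precision = nr / len(retrieved)            # Nr / Tr
    recall = nr / len(relevant)                # Nr / Ts
    return precision, recall
```

For instance, retrieving four images of which two are truly similar, out of three similar images in the database, gives a precision of 0.5 and a recall of 2/3.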
For the testing phase we use a trademark dataset [18] of approximately 108 images, designed to
test the rotational challenges in the system. The trademark database consists of 18 different
classes of images, each of which contains images rotated at six different angles.
Table1. Retrieved images with their precision/recall value
As shown in Table 1, the precision value obtained by the proposed system is 100 percent, and the
recall value is also nearly 100 percent except for some query images. The overall performance of
the system is found to be approximately 85 percent, which is a satisfactory result.
We have designed a hybrid system in which the first feature is used to find the relevant images
and the other feature is used to filter the result. If the two features are used individually, the
result is found to be 60 percent, which is poor in comparison with 85 percent. The progress graph
of the three systems, i.e. using corners only, using invariant moments only, and the hybrid of
the two, is shown in Figure 4.
Figure 4. Precision/Recall Graph of Comparative methods
8. CONCLUSION AND FUTURE WORK
In the proposed work, we have designed an efficient trademark retrieval system that works on the
principle of a CBIR system. The system applies a two-phase feature matching strategy: one phase
for global shape features and another for local shape features. Different forms of
transformational challenges are applied to test the efficiency of the system, with the final
testing performed on a rotationally transformed image dataset. For evaluating the performance of
the system we use precision and recall values. The P-R graph shows that the system performance
is satisfactory.
In future work we aim to generalize the system further so that it can handle all other
transformations. It is also important to design a trademark system that can handle all types of
trademarks. Better clustering and more efficient filtering approaches can also be used, and a
feedback mechanism can be employed to bring the system closer to human perception.
ACKNOWLEDGEMENTS
The authors would like to thank the anonymous reviewers for their constructive comments.
REFERENCES
[1] Kato, T.: Database architecture for content-based image retrieval. Image Storage and Retrieval
Systems, Proc SPIE 1662 (1992) 112-123.
[2] Amanatiadis, A., Kaburlasos, V.G., Gasteratos, A. Papadakis, S.E.: Evaluation of shape descriptors
for shape-based image retrieval. IET Image Process (2011) 493-499.
[3] Cheng, Y.F. Cong, Z.: The Technique of Color and Shape-based multi- Feature Combination of
Trademark image Retrieval. IEEE (2010).
[4] Agrawal, D., Jalal, A.S. Tripathi, R.: Trademark Image Retrieval by Integrating Shape with Texture
Feature. IEEE (2013) 30-33.
[5] Tamura, H.S., Mori and Yamawaki, T.: Texture features corresponding to visual perception. IEEE
Trans. Systems Man Cyber net (1978) 460–473.
[6] Wei, C.H., Li, Y., Chau, W.Y. Li, C.T.: Trademark Image Retrieval Using Synthetic features for
describing global shape and interior structure. Pattern Recognition 42 (2009) 386-394.
[7] Arafat, S.Y., Saleem, M. Hussain, S.A.: Comparative Analysis of Invariant Schemas for Logo
Classification. IEEE (2009) 256-261.
[8] Kim, Y.S. and Kim, W.Y.: Content-based trademark retrieval system using visually salient features.
IEEE computer society conference on computer vision and pattern recognition (1998) 931-939.
[9] Zhang, D. Lu, D.: A Comparative study of Fourier descriptors for shape representation and
retrieval. The 5th Asian Conference of computer vision (2002).
[10] Jain, A.K. Vailaya, A.: Image Retrieval using color and shape. Pattern Recognition 29 (1996)
1233-1244.
[11] Anuar, F.M., Setchi, R. Lai, Y.K.: Trademark image Retrieval using integrated shape descriptor.
Expert systems with applications 40 (2013) 105-121.
[12] Eakins, J.P., Boardman, J.M. Shields, K.: Retrieval of trademark images by shape feature- the
ARTISAN project. IEEE Intelligent Image Databases (1996) 9/1 - 9/6.
[13] Lin, H.J. Kao, Y.T.: A prompt contour detection method. International Conference on the
distributed multimedia systems (2001).
[14] Harris, C. Stephens, M.: A combined corner and edge detector. The Plessey company plc (1988)
147-151.
[15] Hu, M.K.: Visual patterns recognition of moment invariants. IRE Transactions on information theory
(1962) 179-187.
[16] Swets, D.L. Weng, J.: Using discriminant eigenfeatures for image retrieval. IEEE Transactions on
pattern analysis and machine intelligence (1996) 831-836.
[17] Muller, H., Muller, W., Squire, D.M., Maillet, S.M. Pun, T.: Performance evaluation in Content
Based Image Retrieval: overview and proposals. Pattern Recognition Letters 22 (2001) 593-601
[18] DATASET: Logo Database for Research http://lampsrv02.umiacs.umd.edu/projdb/project.php?id=47
[19] BOOK: Gonzalez, R.C., Woods, R.E. Eddins, S.L. Digital Image Processing using Matlab.
[20] BOOK: Szeliski, R. (2010). Computer Vision: Algorithms and Applications.
[21] Agarwal, S., Chaturvedi, N. Johari, P.K.: An Efficient Trademark Image Retrieval using
Combination of Shape Descriptor and Salience Features. International Journal of Image Processing,
Image Processing and Pattern Recognition Vol 7, No. 4 (2014) 295-302.
AUTHORS
Saurabh Agarwal, male, is currently an M.Tech. student at Madhav Institute of
Technology and Science, Gwalior, India. He received his bachelor's degree from Laxmi Narayan
Institute of Technology, Gwalior, India in 2012. His research interests include digital image
processing and pattern recognition.
Punit Kumar Johari, male, is an Assistant Professor at Madhav Institute of Technology
and Science, Gwalior, India. He has ten years of working experience in different colleges.
His research interests include digital image processing, pattern recognition and data
mining.