RADAR images are widely used for the analysis of geospatial information about the Earth's surface and for assessing environmental conditions. Radar images are captured by different remote sensors, and these images are combined to obtain complementary information. Radar images are collected with SAR (Synthetic Aperture Radar) sensors, which are active sensors and can gather information day and night, largely unaffected by weather conditions. We discuss DCT- and DWT-based image fusion methods, which yield a more informative fused image, and we compare performance parameters of the two methods to determine which technique is superior.
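The wavelet-based fusion idea can be sketched with a single-level Haar transform. This is a minimal stand-in for the DWT fusion the abstract mentions; the max-absolute coefficient-selection rule and the helper names are illustrative choices, not the paper's method.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar DWT (even image dimensions assumed).
    Returns approximation (LL) and detail (LH, HL, HH) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row low-pass
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

def dwt_fuse(img1, img2):
    """Fuse two registered images: average the approximations, keep the
    stronger (max-absolute) detail coefficient from either source."""
    b1, b2 = haar_dwt2(img1), haar_dwt2(img2)
    fused = [(b1[0] + b2[0]) / 2.0]
    for c1, c2 in zip(b1[1:], b2[1:]):
        fused.append(np.where(np.abs(c1) >= np.abs(c2), c1, c2))
    return haar_idwt2(*fused)
```

Fusing an image with itself reproduces it exactly, which is a quick sanity check that the transform pair is lossless.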
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Feature based ghost removal in high dynamic range imaging (ijcga)
This paper presents a technique to reduce the ghost artifacts in a high dynamic range (HDR) image. In HDR imaging, we need to detect the motion between multiple exposure images of the same scene in order to prevent the ghost artifacts. First, we establish correspondences between the aligned reference image and the other exposure images using the zero-mean normalized cross correlation (ZNCC). Then, we find object motion regions using adaptive local thresholding of ZNCC feature maps and motion map clustering. In this process, we focus on finding accurate motion regions and on reducing false detection in order to minimize the side effects as well. Through experiments with several sets of low dynamic range images captured with different exposures, we show that the proposed method can remove the ghost artifacts better than existing methods.
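The ZNCC measure used above can be sketched as follows. This is only the similarity score itself, not the paper's thresholding or clustering pipeline; subtracting the mean makes it invariant to the brightness and contrast differences between exposures.

```python
import numpy as np

def zncc(patch_a, patch_b):
    """Zero-mean normalized cross correlation of two equally sized patches.
    Returns a value in [-1, 1]; 1 means identical up to brightness/contrast."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:          # a flat patch has no structure to correlate
        return 0.0
    return float((a * b).sum() / denom)
```

Because the mean is removed and the result is normalized, a patch compared with a brighter or higher-contrast copy of itself still scores 1, which is exactly what makes ZNCC suitable for multi-exposure matching.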
3D Reconstruction from Multiple Uncalibrated 2D Images of an Object (Ankur Tyagi)
3D reconstruction is the process of capturing the shape and appearance of real objects. In this project we use passive methods, which rely only on sensors that measure the radiance reflected or emitted by the object's surface to infer its 3D structure.
Enhanced Optimization of Edge Detection for High Resolution Images Using Veri... (ijcisjournal)
Edge detection plays a crucial role in image processing and segmentation, where a set of algorithms aims to identify the portions of a digital image at which the intensity changes sharply or, more formally, has discontinuities. The contours found by edge detection also help in object detection and recognition. Image edges can be detected using two attributes, the gradient and the Laplacian. In our paper, we propose a system that uses the Canny and Sobel operators for edge detection, both gradient (first-order derivative) methods, implemented in the Verilog Hardware Description Language, and compare the results with those of a previous paper's Matlab implementation. Performing edge detection in Verilog significantly reduces processing time and filters out unneeded information while preserving the important structural properties of an image. Implemented with the Xilinx ISE Design Suite 14.2, this edge detection can be used to detect vehicles in traffic jams and in medical imaging systems for analysing MRI and X-ray images.
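The gradient (first-order derivative) edge detection described above can be illustrated in software with the Sobel operator. This is a simplified sketch, not the paper's Verilog pipeline, and the fractional `threshold` parameter is an assumption for illustration.

```python
import numpy as np

def sobel_edges(img, threshold=0.5):
    """Binary edge map from the gradient magnitude of the 3x3 Sobel operators."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                                         # vertical gradient
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):          # skip the 1-pixel border
        for j in range(1, w - 1):
            window = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (window * kx).sum()
            gy[i, j] = (window * ky).sum()
    mag = np.hypot(gx, gy)
    if mag.max() == 0:
        return np.zeros_like(mag, dtype=bool)
    return mag > threshold * mag.max()  # keep the strongest gradients
```

On a synthetic vertical step image, the detector fires only along the step columns, which is the discontinuity behaviour the abstract describes.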
A new approach of edge detection in SAR images using region based active cont... (eSAT Journals)
Abstract: This paper presents a new methodology for edge detection in complex radar images. The approach consists of an edge improvisation algorithm followed by edge detection. Because complex radar data is highly heterogeneous, an edge enhancement step is needed before edge detection, which justifies the use of the discrete wavelet transform in the edge improvisation algorithm. A region based active contour model is then used as the edge detection algorithm. The paper proposes a distribution fitting energy with a level set function, with neighborhood means and variances as variables. The performance is tested by applying the method to different images, and the results are analyzed. Keywords: Edge detection, Edge improvisation, Synthetic Aperture Radar (SAR), wavelet transforms.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
AUTOMATED IMAGE MOSAICING SYSTEM WITH ANALYSIS OVER VARIOUS IMAGE NOISE (ijcsa)
Mosaicing is the blending together of several arbitrarily shaped images to form one large balanced image such that the boundaries between the original images are not seen. Image mosaicing creates a large field of view of a scene, and the resulting image can also be used for texture mapping of a 3D environment. Blended images have become a wide necessity for images captured from real-time sensor devices, bio-medical equipment, satellite imagery, aerospace, security systems, brain mapping, genetics, etc. The idea behind this work is to automate the image mosaicing system so that blending is fast, easy and efficient even when a large number of images is considered. This work also provides an analysis of blending over images containing different kinds of distortion and noise, which further enhances the quality of the system and makes it more reliable and robust.
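The seam-hiding step at the heart of mosaicing can be sketched as a linear cross-fade over the overlap between two registered strips. The horizontal-strip geometry and the linear weighting are illustrative assumptions, not the paper's blending scheme.

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two horizontally adjacent strips that share `overlap` columns,
    with a linear cross-fade so the seam between them is invisible."""
    h = left.shape[0]
    w = left.shape[1] + right.shape[1] - overlap
    out = np.zeros((h, w))
    out[:, :left.shape[1]] = left
    out[:, w - right.shape[1]:] = right
    alpha = np.linspace(1.0, 0.0, overlap)   # left weight ramps down to 0
    lo = left.shape[1] - overlap
    out[:, lo:left.shape[1]] = (alpha * left[:, lo:]
                                + (1 - alpha) * right[:, :overlap])
    return out
```

Blending a black strip into a white one produces a smooth ramp across the overlap instead of a hard seam.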
DEEP LEARNING BASED TARGET TRACKING AND CLASSIFICATION DIRECTLY IN COMPRESSIV... (sipij)
Past research has found that compressive measurements save data storage and bandwidth. However, it has also been observed that compressive measurements are difficult to use directly for target tracking and classification without pixel reconstruction, because the Gaussian random matrix destroys the target location information in the original video frames. This paper summarizes our research on target tracking and classification directly in the compressive measurement domain. We focus on one type of compressive measurement based on pixel subsampling: the compressive measurements are obtained by randomly subsampling the original pixels in the video frames. Even in this special setting, conventional trackers do not work well. We propose a deep learning approach that integrates YOLO (You Only Look Once) and ResNet (residual network) for target tracking and classification in low-quality videos: YOLO performs multiple-target detection and ResNet performs target classification. Extensive experiments using optical and mid-wave infrared (MWIR) videos from the SENSIAC database demonstrate the efficacy of the proposed approach.
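The pixel-subsampling measurement described above can be sketched as keeping a fixed random subset of pixel locations per frame. The `keep_ratio` value and fixed-seed sensing pattern are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def subsample_measure(frame, keep_ratio=0.25, seed=0):
    """Pixel-subsampling compressive measurement: retain a random subset of
    pixel locations (the sensing pattern) and zero out the rest."""
    rng = np.random.default_rng(seed)            # fixed seed = fixed pattern
    mask = rng.random(frame.shape) < keep_ratio  # True where a pixel is kept
    measured = np.where(mask, frame, 0.0)
    return measured, mask
```

Unlike a dense Gaussian measurement matrix, this scheme keeps each retained value at its original pixel location, which is what makes detection-based tracking in the measurement domain possible at all.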
Remotely sensed image segmentation using multiphase level set acm (Kriti Bajpai)
Satellite imagery is used in various research domains. These images often suffer from major quality issues. However, they can be improved by image enhancement algorithms in terms of contrast, brightness, noise reduction, etc. These algorithms are employed to focus, sharpen or smooth an image so that its attributes can be displayed and examined. Hence, the objective of image enhancement depends on the specific application. The objective of this paper is to provide brief information about image enhancement techniques that yield progressive and optimal results for remote-sensing satellite imagery.
Unsupervised Building Extraction from High Resolution Satellite Images Irresp... (CSCJournals)
Extraction of geospatial data from photogrammetric sensing images is becoming increasingly important as technology advances. Today, Geographic Information Systems are used in a large variety of applications in engineering, city planning and the social sciences. Geospatial data such as roads, buildings and rivers are the most critical feeds of a GIS database. However, extracting buildings is one of the most complex and challenging tasks, as there is a lot of inhomogeneity due to varying hierarchy: the types of buildings and the shapes of rooftops vary widely, and in some areas buildings are placed irregularly or too close to each other. For these reasons, even with high-resolution IKONOS and QuickBird satellite imagery, the success rate of building extraction remains low. This paper proposes a solution to the problem of automatic, unsupervised extraction of building features irrespective of rooftop structure in multispectral satellite images. Instead of detecting the region of interest, the algorithm eliminates areas other than the region of interest, which extracts the rooftops completely irrespective of their shape. Extensive tests indicate that the methodology performs well for extracting buildings in complex environments.
Object tracking is one of the most important problems in modern visual systems, and researchers are continuing their studies in this field. A suitable tracking method should not only be able to recognize and track the relevant object across continuous frames, but should also react reliably and efficiently to phenomena that disturb the tracking process, while remaining efficient enough for real-time applications. In this article, an effective mesh-based method is introduced as a suitable tracking method for continuous frames, and its advantages and limitations are discussed.
Imaging and image sensors is a field that is continuously evolving, with new products coming onto the market every day. Some of these have very severe size, weight and power constraints, whereas other devices have to handle very high computational loads; some must meet both conditions simultaneously. Current imaging architectures and digital image processing solutions will not be able to meet these ever-increasing demands, so there is a need to develop novel imaging architectures and image processing solutions to address these requirements. In this work we propose analog signal processing as a solution to this problem. The analog processor is not suggested as a replacement for a digital processor; rather, it is used as an augmentation device that works in parallel with the digital processor, making the system faster and more efficient. To show the merits of analog processing, the computationally heavy normalized cross correlation algorithm is implemented. We propose two novel modifications to the algorithm and a new imaging architecture, which significantly reduce the computation time.
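Normalized cross correlation is computationally heavy because the window statistics must be recomputed at every candidate shift, which an exhaustive software search makes explicit. This is an illustrative sketch of the baseline cost, not the proposed analog implementation or its modifications.

```python
import numpy as np

def ncc_match(image, template):
    """Exhaustive normalized cross correlation template search.
    Returns the (row, col) of the best-matching window; note that every
    shift recomputes the window mean and norm, hence the heavy cost."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best, best_score = (0, 0), -np.inf
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            win = image[i:i + th, j:j + tw]
            wz = win - win.mean()
            denom = t_norm * np.sqrt((wz * wz).sum())
            if denom == 0:        # flat window: nothing to correlate
                continue
            score = (wz * t).sum() / denom
            if score > best_score:
                best_score, best = score, (i, j)
    return best
```

A template cut directly from the image is recovered at its original position, since the exact match scores a perfect correlation of 1.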
Haze removal for a single remote sensing image based on deformed haze imaging... (LogicMindtech Nologies)
AUTOMATIC IDENTIFICATION OF CLOUD COVER REGIONS USING SURF (ijcseit)
Weather forecasting has become an indispensable application that predicts the state of the atmosphere at a future time based on cloud cover identification, but it generally requires the experience of a well-trained meteorologist. In this paper, a novel method is proposed for automatic cloud cover estimation tailored to Indian territory. The Speeded Up Robust Feature Transform (SURF) is applied to the satellite images to obtain affine-corrected images. The cloud regions extracted from the affine-corrected images using Otsu thresholding are superimposed on artistic grids representing latitude and longitude over India. The segmented cloud and grid composition drives a look-up table mechanism to identify the cloud cover regions. Owing to its simplicity, the proposed method processes test images quickly and provides accurate segmentation of cloud cover regions.
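Otsu thresholding, used above to extract the cloud regions, picks the gray level that maximizes the between-class variance of the resulting two-class split. A minimal histogram-based sketch:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method: return the intensity threshold that maximizes the
    between-class variance of the foreground/background split."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    hist = hist.astype(float)
    total = hist.sum()
    cum = np.cumsum(hist)                          # class-0 pixel counts
    cum_mean = np.cumsum(hist * np.arange(bins))   # class-0 intensity sums
    best_t, best_var = 0, -1.0
    for t in range(1, bins):
        w0 = cum[t - 1] / total        # class probabilities
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t - 1] / cum[t - 1]
        mu1 = (cum_mean[-1] - cum_mean[t - 1]) / (total - cum[t - 1])
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return edges[best_t]    # threshold in image intensity units
```

On a bimodal image (dark ground, bright cloud), the returned threshold falls between the two modes, so `img > threshold` isolates the bright class.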
Hand gesture recognition using ultrasonic sensor and ATmega128 microcontroller (eSAT Publishing House)
It is designed to measure the distance of any object using an ultrasonic transducer. Ultrasonic distance measurement is a convenient method compared to traditional measurement scales. This kind of measurement is particularly applicable to inaccessible areas where traditional means cannot be used, such as high-temperature or high-pressure zones.
B.Tech. Final Year ECE Project Report on Ultrasonic distance measure robot (Sushant Shankar)
ULTRA-4, or the ultrasonic distance measure robot, is a robot that performs many actions: it gives the actual position of a wall or obstacle in front of it, measures the distance, which is shown on a 7-segment display, and also shows moving images of objects via a camera.
The application area of ULTRA-4 is very wide, including rescue operations, spy robots, versatile uses in autonomous technology, mining, light industry (e.g. the toy industry), agriculture, power engineering, and car parking systems.
Efficient 3D stereo vision stabilization for multi-camera viewpoints (journalBEEI)
In this paper, an algorithm is developed in 3D stereo vision to improve the image stabilization process for multi-camera viewpoints. Accurate, unique matching key-points are found using the Harris-Laplace corner detection method under different photometric changes and geometric transformations of the images. The connectivity of correct matching pairs is then improved by minimizing the global error using a spanning tree algorithm, which helps stabilize randomly positioned camera viewpoints in linear order. With our method, the unique matching key-points are calculated only once. The calculated planar transformation is then applied for real-time video rendering. The proposed algorithm can process more than 200 camera viewpoints within two seconds.
Detection of Bridges using Different Types of High Resolution Satellite Images (idescitation)
Automatic detection of geographical objects such as roads, buildings and bridges from remote sensing imagery is very meaningful but difficult work. A bridge over water is a typical geographical object, and its automatic detection is of great significance for many applications. Finding a Region Of Interest (ROI) containing only water areas is the most crucial task in bridge detection. This can be done with image processing / soft computing methods using images in the spatial domain, or with the Normalized Differential Water Index (NDWI) using images in the spectral domain. We have developed an efficient algorithm for bridge detection in which the ROI segmentation is done using both methods. Exact locations of bridges are obtained using knowledge models and the spatial resolution of the image. These knowledge models are applied in the algorithm in such a way that the thresholds are fixed automatically depending on the quality of the image. Using the algorithm, any type of bridge can be extracted irrespective of its inclination and shape.
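The spectral-domain route mentioned above, NDWI, is a simple band ratio: water reflects strongly in green and weakly in near-infrared. A minimal sketch; the zero water threshold is a common default, not necessarily this paper's choice.

```python
import numpy as np

def ndwi(green, nir, eps=1e-9):
    """Normalized Difference Water Index: (Green - NIR) / (Green + NIR).
    Water pixels tend to give positive values, land/vegetation negative."""
    green = green.astype(float)
    nir = nir.astype(float)
    return (green - nir) / (green + nir + eps)   # eps avoids divide-by-zero

def water_mask(green, nir, threshold=0.0):
    """Binary water map: NDWI above the threshold is treated as water."""
    return ndwi(green, nir) > threshold
```

A water-like pixel (high green, low NIR) maps to a positive index while a vegetation-like pixel (low green, high NIR) maps negative, so thresholding at zero already separates the two.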
SHARP OR BLUR: A FAST NO-REFERENCE QUALITY METRIC FOR REALISTIC PHOTOS (csandit)
There is an increasing demand for identifying the sharp and the blurred photos in a burst series or a large collection. Subjective assessment of image blurriness takes into account not only pixel variation but also the region of interest and the scene type, which makes measuring image sharpness in line with visual perception very challenging. In this paper, we devise a no-reference image sharpness metric that combines a set of gradient-based features adept at estimating Gaussian blur, out-of-focus blur and motion blur respectively. We propose a dataset-adaptive logistic regression to build the metric on multiple datasets, where over half of the samples are realistic blurry photos. Cross validation confirms that our metric outperforms the state-of-the-art methods on datasets totalling 1577 images. Moreover, our metric is very fast, suitable for parallelization, and has the potential to run on mobile or embedded devices.
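The gradient-based-feature idea can be illustrated with a much simpler no-reference sharpness cue, the variance of the Laplacian. This is a classic baseline for blur detection, not the paper's combined metric.

```python
import numpy as np

def laplacian_variance(img):
    """Variance of the discrete Laplacian response over the image interior:
    a classic no-reference sharpness score (higher = sharper)."""
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2]
           + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])   # 4-neighbour Laplacian
    return float(lap.var())
```

Blurring suppresses high-frequency content, so a blurred copy of any textured image scores lower than the original, which is exactly the sharp-vs-blur decision the abstract targets.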
The aim of this paper is to present the essential elements of an electro-optical imaging system (EOIS) for space applications and how these elements can affect its function. After designing a spacecraft for low-orbit daytime missions, the design of an electro-optical imaging system becomes an important part of the satellite, since the satellite must be able to take images of the regions of interest. An example of an electro-optical satellite imaging system is presented in this paper, where some restrictions have to be considered during the design process. Based on optics principles and ray-tracing techniques, the dimensions of the lenses and the CCD (Charge Coupled Device) detector are changed to match the physical satellite requirements. Experiments carried out in the physics lab prove that resizing the electro-optical elements of the imaging system does not affect the imaging mission configuration. The procedures used to measure the field of view and ground resolution are discussed, and example satellite images are shown to illustrate the ground resolution effects.
Design and implementation of a video tracking system based on camera field of view
The basic idea of this paper is to design and implement a video tracking system based on the Camera Field of View (CFOV). Otsu's method was used to detect targets such as vehicles and people. Whereas most algorithms spend a lot of time executing this process, an algorithm was developed to achieve it in little time. Histogram projection was used in both directions to detect the target within the search region; it is robust to various lighting conditions in Charge-Coupled Device (CCD) camera images and saves computation time.
Our algorithm is based on background subtraction, and a normalized cross-correlation operation on a series of sequential sub-images estimates the motion vector. The camera field of view was determined and calibrated to find the relation between real distance and image distance. The system was tested by measuring the real position of an object in the laboratory and comparing it with the computed result. These results are promising for developing the system further.
Stereo Correspondence Algorithms for Robotic Applications Under Ideal And Non...
The use of visual information in real-time applications such as robotic picking, navigation, and obstacle avoidance has been widely adopted in many sectors, enabling robots to interact with their environment. Robotics requires computationally simple and easy-to-implement stereo vision algorithms that provide reliable and accurate results under real-time constraints. Stereo vision is an inexpensive, passive sensing technique for inferring the three-dimensional position of objects from two or more simultaneous views of a scene, and it does not interfere with other sensing devices if multiple robots are present in the same environment. Stereo correspondence aims at finding matching points in the stereo image pair, based on the Lambertian criterion, to obtain disparity. The correspondence algorithm provides high-resolution disparity maps of the scene by comparing two views of the scene under study. Using the principle of triangulation and the camera parameters, depth information can be extracted from this disparity. Since the focus is on real-time applications, only local stereo correspondence algorithms are considered. A comparative study based on error and computational cost is made between two area-based algorithms: the Sum of Absolute Differences algorithm, which is computationally inexpensive and suitable for ideal lighting conditions, and a more accurate adaptive binary support window algorithm that can handle non-ideal lighting conditions. To simplify the correspondence search, rectified stereo image pairs are used as inputs.
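The Sum of Absolute Differences matcher evaluated in the study can be sketched in a few lines: for each candidate disparity, per-pixel absolute differences are aggregated over a square window and the winner-take-all minimum is kept (a minimal sketch under simplified assumptions, not the study's code):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def sad_disparity(left, right, max_disp=8, win=5):
    """Winner-take-all block matching with Sum of Absolute Differences."""
    h, w = left.shape
    pad = win // 2
    cost = np.empty((max_disp + 1, h, w))
    for d in range(max_disp + 1):
        ad = np.full((h, w), 1e9)           # large cost where shift is invalid
        ad[:, d:] = np.abs(left[:, d:].astype(float) - right[:, :w - d].astype(float))
        padded = np.pad(ad, pad, mode='edge')
        # aggregate absolute differences over a win x win support window
        cost[d] = sliding_window_view(padded, (win, win)).sum(axis=(2, 3))
    return np.argmin(cost, axis=0)          # per-pixel winning disparity

rng = np.random.default_rng(1)
right = rng.integers(0, 255, (40, 60)).astype(float)
left = np.roll(right, 3, axis=1)            # true disparity is 3 pixels
disp = sad_disparity(left, right)
assert float(np.median(disp[10:-10, 10:-10])) == 3.0
```

The adaptive binary support window algorithm mentioned in the abstract replaces the fixed square window with a per-pixel support region, which is what buys robustness under non-ideal lighting.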
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION
The availability of imaging sensors operating in multiple spectral bands has led to the requirement of image fusion algorithms that combine the images from these sensors efficiently to give an image that is more informative as well as perceptible to the human eye. Multispectral image fusion is the process of combining optically acquired images from different spectral bands. In this paper, we use pixel-level image fusion based on principal component analysis (PCA) that combines satellite images of the same scene from seven different spectral bands. PCA is chosen because it performs well for grayscale image fusion; its aim is to reduce a large set of variables into a small set that still contains most of the information present in the large set. The paper compares different parameters, namely entropy, standard deviation, and correlation coefficient, for different numbers of fused images from two to seven. Finally, the paper shows that the information content in an image becomes saturated after fusing four images.
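A pixel-level PCA fusion of co-registered grayscale bands can be sketched as follows: each band is weighted by the corresponding component of the leading eigenvector of the inter-band covariance matrix (a simplified sketch of the general technique, not the paper's exact pipeline):

```python
import numpy as np

def pca_fuse(bands):
    """Fuse co-registered grayscale images with weights taken from the first
    principal component of the inter-band covariance matrix."""
    X = np.stack([b.ravel().astype(float) for b in bands])  # (n_bands, n_pixels)
    cov = np.cov(X)                         # n_bands x n_bands covariance
    _, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    w = np.abs(vecs[:, -1])                 # leading eigenvector
    w /= w.sum()                            # normalize weights to sum to 1
    return np.tensordot(w, np.stack(bands).astype(float), axes=1)

rng = np.random.default_rng(2)
bands = [rng.random((32, 32)) for _ in range(7)]   # e.g. seven spectral bands
fused = pca_fuse(bands)
assert fused.shape == (32, 32)
assert 0.0 <= fused.min() and fused.max() <= 1.0   # convex combination of bands
```

The saturation effect the paper reports can then be probed by computing entropy of `fused` as bands are added one at a time.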
An Unsupervised Change Detection in Satellite Images Using MRFFCM Clustering
This paper presents a new approach for change detection in synthetic aperture radar (SAR) images by incorporating a Markov random field (MRF) within the framework of fuzzy c-means (FCM) clustering. The objective is to partition the difference image, generated from multitemporal satellite images, into changed and unchanged regions. The difference image is generated from the log-ratio and mean-ratio images by an image fusion technique, and its quality depends on that technique. In the present work, we propose an image fusion method based on the stationary wavelet transform. The difference image is then processed to discriminate changed regions from unchanged regions using fuzzy clustering algorithms. The analysis of the difference image uses an MRF approach that exploits inter-pixel class dependency in the spatial domain to improve the accuracy of the final change-detection map. Experimental results on real SAR images demonstrate that the change detection results obtained by MRFFCM exhibit less error than previous approaches. The goodness of the proposed fusion algorithm is calculated and verified using well-known image fusion measures and the percentage of correct classifications.
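The log-ratio operator that generates the difference image, followed by an unsupervised two-class split, can be sketched as below. Plain 2-means stands in for the paper's MRF-regularized FCM, and the synthetic scene is illustrative:

```python
import numpy as np

def log_ratio_di(img1, img2, eps=1.0):
    """Log-ratio difference image, a standard operator for SAR change detection;
    eps guards against log of zero."""
    return np.abs(np.log((img2.astype(float) + eps) / (img1.astype(float) + eps)))

def two_means_split(di, iters=20):
    """Plain 2-means on DI values (a stand-in for MRF-regularized FCM)."""
    c = np.array([di.min(), di.max()], dtype=float)   # init: extreme values
    for _ in range(iters):
        changed = np.abs(di - c[1]) < np.abs(di - c[0])
        c = np.array([di[~changed].mean(), di[changed].mean()])
    return changed                                    # boolean change mask

rng = np.random.default_rng(3)
before = rng.gamma(4.0, 25.0, (64, 64))   # speckle-like SAR-ish background
after = before.copy()
after[20:40, 20:40] *= 6.0                # simulated change region
mask = two_means_split(log_ratio_di(before, after))
assert mask[20:40, 20:40].mean() > 0.9    # change region detected
assert mask[:10, :10].mean() < 0.1        # background left unchanged
```

The MRF term the paper adds would smooth this mask by penalizing label disagreement between neighboring pixels, which suppresses isolated speckle-induced false alarms.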
Object Distance Detection using a Joint Transform Correlator
Alexander Layton, Dept. of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL
Dr. Ronald Marsh, Dept. of Computer Science, University of North Dakota, Grand Forks, ND
Abstract—Computer stereo vision makes heavy use of object distance detection. The primary method to detect object distance is to compare two images of the same scene taken from different vantage points. The necessity of comparing two images naturally leads us to investigate optical correlators.
Since the Fourier transform, on which optical correlators are based, is lossless, we suppose that distance information encoded in a stereo image pair is preserved through the correlation process. We then try to recover that distance by investigating the location of the correlation peaks.
Initial data indicates that we may plausibly extract distance information from a correlation result. However, this data was gathered under very specific and controlled conditions, and further research is necessary to derive a more general result.
Keywords—computer vision; stereo vision; optics; optical correlation; joint transform correlator; distance detection
I. INTRODUCTION
A. Optical Correlators
An optical correlator is a device for comparing two images using Fourier transforms. More specifically, an optical correlator takes in two input images and outputs their cross-correlation. We may think of the correlation result as locating one image inside the other. If the two input images are sufficiently similar, we will see a bright spot in the correlation result, hereafter referred to as a peak. The location of the peak indicates the location of one image within the other.
A. VanderLugt developed the first successful optical correlator, the matched filter correlator, in 1963 [1]. The matched filter correlator (MFC) pioneered development of optical correlators and is still in use today. However, the MFC design requires specialized hardware and is highly sensitive to the alignment of its instruments.
The MFC was originally designed to locate a particular image, known as the “reference” or “filter” image, inside many other images, the “target” or “input” images. As such, the reference and target image are treated differently within the correlator, and the MFC is best suited to such asymmetrical applications.
To overcome the limitations of the matched filter correlator, Weaver and Goodman invented the joint transform correlator (JTC) in 1966 [2]. The JTC is much less sensitive to instrument alignment but is less space-efficient. Additionally, both input images in a JTC undergo the same transformations, without regard for a “target” or “reference” image. Thus, the JTC is better suited to applications without preferential treatment of either input image. (It should be noted, however, that while the JTC does not discriminate between input images, the correlation result depends on each image’s position in the focal plane. Thus we will not obtain the same result if we swap our two input images.)
Optical correlation need not be done optically anymore; the same process can be performed programmatically [3]. Though the correlation is no longer real-time, one has the opportunity to post-analyze the correlation result.
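The programmatic route can be sketched with FFTs: the digital stand-in below computes the circular cross-correlation in the frequency domain and reads off the peak location. (A physical JTC instead Fourier-transforms the joint power spectrum of the side-by-side inputs, but the recovered shift is the same idea.)

```python
import numpy as np

def correlation_peak(a, b):
    """Circular cross-correlation of a against b via FFT; returns the location
    of the brightest correlation peak, i.e. the shift of a relative to b."""
    c = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    return np.unravel_index(np.argmax(c), c.shape)

rng = np.random.default_rng(4)
scene = rng.random((64, 64))
shifted = np.roll(scene, (5, 9), axis=(0, 1))   # shift by 5 rows, 9 columns
assert correlation_peak(shifted, scene) == (5, 9)
```

This is exactly the post-analysis opportunity noted above: with the correlation result in memory, the peak coordinates can be extracted numerically rather than read off an output plane.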
B. Application to Object Distance Detection
The root of computer stereo vision is using two or more 2D images to reproduce a 3D scene. In particular, a computer detects the distance to an object by measuring the shift in the object’s 2D location from one vantage point to another [4].
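The shift-to-distance relation referenced here is ordinary stereo triangulation. With a focal length in pixels and a baseline in meters (the numbers below are illustrative, not this paper's calibration):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo triangulation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Illustrative: 700 px focal length, 9.5 in = 0.2413 m baseline, 40 px disparity
print(round(depth_from_disparity(700, 0.2413, 40), 3))  # → 4.223 (meters)
```

The correlator-based approach explored in this paper effectively measures the disparity `d` as the offset of the correlation peak.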
Comparing two images of the same object leads us naturally to explore optical correlators. Since the Fourier transform is lossless, we posit that any distance information encoded in the original images will be preserved in their correlation. In effect, the correlator automates the process of finding a common pixel in the algorithm outlined in [5]. Since we use a new pair of images for each distance measurement, the joint transform correlator is the appropriate design.
This experiment is entirely exploratory in nature; we are only concerned with the plausibility of applying optical correlation to object distance detection, not the practicality. We hope, however, that this research may find applications in space, where the background is more or less static. Nevertheless, to the best of our knowledge, the use of a JTC to determine distance to an object is a novel approach.
II. EXPERIMENT
A. Setup
Images were generated by two Microsoft LifeCam Cinema webcams aligned horizontally with 9.5” baseline separation between cameras (Fig. 1). LifeCam Cinema webcams have a 73° field of view and an autofocusing lens.
This research was made possible by the National Science Foundation, Award #1359224, with support from the Department of Defense.
Fig. 1. Microsoft LifeCam Cinema webcams used for data collection.
Fig. 2. Post-It note on wall from 5ft away.
The subject was a 2”x1.5” Post-It note on a wall (Fig. 2). We photograph a wall to minimize the confounding effect of the background on the correlation. To this end, we make sure the wall is as evenly lit as possible. The Post-It note was chosen to contrast strongly with the wall, making it easier for the JTC to identify the object.
B. Procedure
We measured the distance to the wall with a tape measure, then took a picture with each camera. We used ImageMagick to crop each image and then extract the value channel, producing a black-and-white square image for the correlator. Each input image served once as the “target” and once as the “filter.” The correlation program also attempted to detect correlation peaks using an algorithm developed by Dr. Marsh. The actual distance and peak data were exported into an Excel spreadsheet.
The cameras would frequently fail to focus on the wall; this led to blurry input images that would not produce a strong correlation peak. These correlation results were thrown out. Frequently, a peak would be plainly visible but not strong enough for the algorithm to detect it. For example, a peak that occupies two pixels in the output image would slip through the detector since neither pixel has sufficient contrast with its neighbor, despite being clearly enough resolved to be a useful data point. In these cases, the peaks’ locations were entered manually.
III. DATA
Each stereo pair of input images (Fig. 3) produced a pair of correlation results (Fig. 4).
Fig. 3. A pair of input images.
Fig. 4. A pair of correlation results. The peak is the cross-shaped mark.
Fig. 5. Coordinates of the peak vs. distance from wall.
We plot the location of the peak versus the distance to the wall to find a clear relation (Fig. 5). Coordinates of the peak are measured from center, since distance to the wall should not depend on the size of the image. Note that we take the absolute value of the coordinates since swapping the input images negates the peak’s coordinates.
A. Wrapping and Flipping Effect
As distance to the object decreases, the X coordinate of the peak increases, up to a maximum of half the image size. With our testing environment, this occurs at around 1.5 feet. If actual distance to the object decreases beyond that, the peak will wrap around the image and then get both coordinates flipped (Fig. 6).
Fig. 6. Effects of an off-image peak.
B. Regression
If we restrict our analysis to distances that do not produce the above “flipping” effect, we may invert the data and a regression becomes quite obvious (Fig. 7). Excel gives the following logarithmic model:
distance = -2.059 ln(|Peak X|) + 11.529 (R² = 0.9812)
It is important to note that this formula was generated under specific conditions, including a 73° field of view and a 9.5 in baseline, but it is expected to lay the groundwork for a more general formula. With this formula we were able to determine distances from 2 ft to 5 ft, with an accuracy of ±3 in.
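The fitted model (taken from the Fig. 7 trendline, y = -2.059 ln(x) + 11.529 with R² = 0.9812) can be applied directly; the sample peak coordinate below is illustrative, not a measurement from the paper:

```python
import math

def distance_ft(peak_x):
    """Distance in feet from |Peak X|, per the fitted logarithmic model.
    Only validated for 2-5 ft with a 73 deg FOV and a 9.5 in baseline."""
    return -2.059 * math.log(abs(peak_x)) + 11.529

print(round(distance_ft(40), 2))  # → 3.93 (feet)
```

A more general formula would presumably replace the two fitted constants with functions of field of view and baseline, which is the direction the conclusions propose.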
Fig. 7. Absolute value of peak x vs distance.
IV. CONCLUSIONS
The work to date achieved its goal of proving that distance can be recovered from a joint transform correlation peak. While this work shows promise, further work is needed to elucidate and generalize the relationship between peak location and distance. This will move us closer to the ultimate goal of finding a formula relating Peak X, Peak Y, field of view, baseline, and distance.
The wrapping and flipping effect presents a formidable challenge to this goal, and will likely be the next target of investigation. Another obvious direction future research will take is to test the effect of varying baseline on the location of the correlation peak.
Finally, at this time the method has not been tested for practical applications like the CubeSat platform (for which this research was originally conceived). The work to date and near-future work is entirely proof of concept, and we may expect that applications follow once a strong foundation is laid.
ACKNOWLEDGMENT
This material is based upon work supported by the National Science Foundation Research Experience for Undergraduates under Grant No. (NSF 1359244).
REFERENCES
[1] A. Vander Lugt, "Signal detection by complex spatial filtering," IEEE Transactions on Information Theory, vol. 10, pp. 139-145, 1964.
[2] C. S. Weaver and J. W. Goodman, "A Technique for Optically Convolving Two Functions," Applied Optics, vol. 5, pp. 1248-1249, 1966.
[3] A. J. Barry and R. Tedrake, "Pushbroom stereo for high-speed navigation in cluttered environments," Robotics and Automation (ICRA), 2015 IEEE International Conference on, Seattle, WA, 2015, pp. 3046-3052.
[4] J. Mrovlje and D. Vrančić, "Distance measuring based on stereoscopic pictures," 9th International PhD Workshop on Systems and Control: Young Generation Viewpoint, 2008.
[5] E. Tjandranegara, "Distance Estimation Algorithm for Stereo Pair Images," Purdue ECE Tech. Rep., West Lafayette, IN, Rep. 64, 2005.
[6] R. Mandelbaum, L. McDowell, L. Bogoni, B. Reich and M. Hansen, "Real-time stereo processing, obstacle detection, and terrain estimation from vehicle-mounted stereo cameras," Applications of Computer Vision, 1998. WACV '98. Proceedings., Fourth IEEE Workshop on, Princeton, NJ, 1998, pp. 288-289.
[Fig. 7 chart: |Peak X| vs. Distance, with logarithmic trendline y = -2.059 ln(x) + 11.529, R² = 0.9812.]