Shape from focus is a method of estimating the 3D shape and depth of an object from a sequence of pictures taken with changing focus settings. In this paper we propose a novel method of shape recovery, originally created for shape and position identification of a glass pipette in a medical hybrid robot. In the proposed algorithm, the Sum of Modified Laplacian is used as the focus operator. Each step of the algorithm is tested in order to pick the operators with the best results. The reconstruction allows not only the shape to be determined but also the position of the object to be defined precisely. The results of the proposed method, obtained on real objects, demonstrate the efficiency of this scheme.
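The abstract does not spell out the focus-measure computation, so the following is a minimal sketch of a Sum of Modified Laplacian (SML) focus measure and the resulting depth-index map; the window size and threshold are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sum_modified_laplacian(img, window=9, threshold=7.0):
    """SML focus measure. `window` and `threshold` are illustrative
    defaults; the paper's actual parameters are not given in the abstract."""
    img = img.astype(np.float64)
    # Modified Laplacian: |2I - I_left - I_right| + |2I - I_up - I_down|
    # (borders wrap via np.roll, which is acceptable for a sketch)
    ml = (np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
          + np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0)))
    ml[ml < threshold] = 0.0
    # Sum over a local window (uniform_filter gives the mean; scale back)
    return uniform_filter(ml, size=window) * window * window

def depth_from_focus(stack):
    """Per-pixel index of the sharpest frame in a focus stack."""
    focus = np.stack([sum_modified_laplacian(f) for f in stack])
    return np.argmax(focus, axis=0)  # depth-index map
```

Each pixel's depth estimate is simply the index of the focus-stack frame that maximizes the SML response at that pixel, which is what makes the method usable for position identification as well as shape recovery.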
There exists a plethora of algorithms to perform image segmentation, and several issues relate to their execution time. Image segmentation can be cast as a label relabeling problem under a probability framework. To estimate the label configuration, an iterative optimization scheme is implemented that alternately carries out the maximum a posteriori (MAP) estimation and the maximum likelihood (ML) estimation. In this paper the technique is modified so that it performs segmentation within a stipulated time period. Extensive experiments show that the results obtained are comparable with existing algorithms. The modified algorithm executes faster than the existing algorithm and gives automatic segmentation without any human intervention; its results match image edges closely, in line with human perception.
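As a concrete illustration of the alternating scheme, here is a minimal sketch that pairs an ICM-style MAP step (Gaussian data term plus a Potts smoothness prior) with an ML step that re-estimates per-class means and variances; the paper's exact energy terms and timing modifications are not reproduced.

```python
import numpy as np

def map_ml_segment(img, k=3, iters=10, beta=1.0):
    """Alternate MAP label estimation (ICM with a Potts prior) and ML
    parameter estimation (Gaussian class means/variances). A generic
    sketch; the paper's exact energy is not specified in the abstract."""
    labels = np.digitize(img, np.quantile(img, np.linspace(0, 1, k + 1)[1:-1]))
    for _ in range(iters):
        # ML step: re-estimate Gaussian parameters per class
        mu = np.array([img[labels == c].mean() if (labels == c).any()
                       else img.mean() for c in range(k)])
        var = np.array([img[labels == c].var() + 1e-6 if (labels == c).any()
                        else img.var() + 1e-6 for c in range(k)])
        # MAP step: negative log-likelihood plus Potts disagreement count
        data = ((img[..., None] - mu) ** 2 / (2 * var)
                + 0.5 * np.log(2 * np.pi * var))
        pad = np.pad(labels, 1, mode="edge")
        neighbors = np.stack([pad[:-2, 1:-1], pad[2:, 1:-1],
                              pad[1:-1, :-2], pad[1:-1, 2:]])
        disagree = (neighbors[..., None] != np.arange(k)).sum(axis=0)
        labels = np.argmin(data + beta * disagree, axis=-1)
    return labels
```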
This document presents a new color image segmentation approach based on overlap wavelet transform (OWT). OWT extracts wavelet features to better separate different patterns in an image. The proposed method also uses morphological operators and 2D histogram clustering for effective segmentation. It is concluded that the proposed OWT method improves segmentation quality, is reliable, fast and computationally less complex than direct histogram clustering. When tested on various color spaces, the proposed segmentation scheme produced better results in RGB color space compared to others. The main advantages are its use of a single parameter and faster speed.
GRAY SCALE IMAGE SEGMENTATION USING OTSU THRESHOLDING OPTIMAL APPROACH (Journal For Research)
Image segmentation is often used to distinguish the foreground from the background, and it remains one of the difficult research problems in machine vision and pattern recognition. Thresholding is a simple but effective method to separate objects from the background. A commonly used method, the Otsu method, noticeably improves the segmentation result. It can be implemented by two different approaches: an iteration approach and a custom approach. In this paper both approaches have been implemented in MATLAB and compared; both give almost the same threshold value for segmenting an image, but the custom approach requires fewer computations. Therefore, if this method is to be implemented on hardware in an optimized way, the custom approach is the better option.
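For reference, a sketch of the exhaustive ("iteration") formulation of Otsu's method, here in Python rather than the paper's MATLAB; the custom approach would compute the same class statistics incrementally rather than from scratch per candidate threshold.

```python
import numpy as np

def otsu_threshold(img):
    """Exhaustive Otsu: pick the gray level maximizing between-class
    variance over all 256 candidate thresholds."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()     # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0  # class means
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t  # pixels >= best_t are treated as foreground
```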
Medical Image Segmentation Based on Level Set Method (IOSR Journals)
This document presents a new medical image segmentation technique based on the level set method. The technique uses a combination of thresholding, morphological erosion, and a variational level set method. Thresholding is applied to determine object pixels, followed by optional erosion to remove small fragments. Then a variational level set method is applied on the original image to evaluate the contour and segment objects. The technique is tested on various medical images and provides good segmentation results, though it struggles with complex images containing multiple distinct objects.
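A rough sketch of this pipeline using off-the-shelf pieces: scikit-image's morphological Chan-Vese function stands in for the paper's variational level set, and the threshold and iteration count are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from skimage.segmentation import morphological_chan_vese

def segment(img, thresh=None, erosions=1):
    """Threshold -> optional erosion -> level-set refinement.
    morphological_chan_vese is a stand-in for the paper's variational
    level set; all parameters here are illustrative."""
    img = img.astype(float)
    if thresh is None:
        thresh = img.mean()                   # crude default threshold
    mask = img > thresh                       # step 1: candidate object pixels
    if erosions:
        mask = binary_erosion(mask, iterations=erosions)  # step 2: drop fragments
    # step 3: evolve the contour on the original image, seeded by the mask
    return morphological_chan_vese(img, 50, init_level_set=mask)
```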
Particle Swarm Optimization for Nano-Particles Extraction from Supporting Mat... (CSCJournals)
This document summarizes a study that uses Particle Swarm Optimization (PSO) for automatic segmentation of nano-particles in Transmission Electron Microscopy (TEM) images. PSO is applied to specify local and global thresholds for segmentation by treating image entropy as a minimization problem. Results show the PSO method improves over previous techniques by reducing incorrect characterization of nano-particles in images affected by liquid concentrations or supporting materials, with up to a 27% reduction in errors. Compared to manual characterization, PSO provides comparable particle counting with higher computational efficiency suitable for real-time analysis.
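The following is a minimal one-dimensional PSO sketch for threshold selection; minimizing the total within-class entropy is an assumed objective standing in for the paper's local/global entropy criterion.

```python
import numpy as np

def pso_threshold(img, n_particles=20, iters=40):
    """1-D particle swarm search for a gray-level threshold minimizing
    an entropy-based objective (an assumption; the paper's exact
    local/global formulation is not reproduced)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum() + 1e-12

    def cost(t):
        t = int(np.clip(t, 1, 255))
        w0, w1 = p[:t].sum(), p[t:].sum()
        h0 = -(p[:t] / w0 * np.log(p[:t] / w0)).sum()  # class entropies
        h1 = -(p[t:] / w1 * np.log(p[t:] / w1)).sum()
        return h0 + h1

    x = np.random.uniform(1, 255, n_particles)   # particle positions
    v = np.zeros(n_particles)                    # particle velocities
    pbest = x.copy()
    pcost = np.array([cost(t) for t in x])
    g = pbest[pcost.argmin()]                    # global best
    for _ in range(iters):
        r1, r2 = np.random.rand(n_particles), np.random.rand(n_particles)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, 1, 255)
        c = np.array([cost(t) for t in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()]
    return int(g)
```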
The document describes a method for image fusion and optimization using stationary wavelet transform and particle swarm optimization. It summarizes that image fusion combines information from multiple images to extract relevant information. The proposed method uses stationary wavelet transform for image decomposition and particle swarm optimization to optimize the fused results. It applies stationary wavelet transform to source images to decompose them into wavelet coefficients. Particle swarm optimization is then used to optimize the transformed images. The inverse stationary wavelet transform is applied to the optimized coefficients to generate the fused image. The method is tested on various images and performance is evaluated using metrics like peak signal-to-noise ratio, entropy, mean square error and standard deviation.
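A minimal stationary-wavelet fusion sketch using PyWavelets; a simple maximum-absolute-coefficient rule stands in for the PSO optimization step described above, and the inputs are assumed to have even dimensions as the SWT requires.

```python
import numpy as np
import pywt

def swt_fuse(a, b, wavelet="db2"):
    """Single-level SWT fusion of two same-sized grayscale images.
    Detail coefficients are fused by the max-abs rule (a stand-in for
    the PSO step); approximations are averaged."""
    (ca1, (ch1, cv1, cd1)), = pywt.swt2(a, wavelet, level=1)
    (ca2, (ch2, cv2, cd2)), = pywt.swt2(b, wavelet, level=1)
    fuse = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    fused = [((ca1 + ca2) / 2.0,
              (fuse(ch1, ch2), fuse(cv1, cv2), fuse(cd1, cd2)))]
    return pywt.iswt2(fused, wavelet)

# Example on synthetic even-sized inputs
a = np.random.rand(256, 256)
b = np.random.rand(256, 256)
fused = swt_fuse(a, b)
```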
The document presents a new method for segmenting MR brain images that combines a hidden Markov random field (HMRF) model with a hybrid metaheuristic optimization algorithm. The HMRF model uses adaptive parameters to balance contributions from different tissue classes during segmentation. The hybrid metaheuristic algorithm improves the quality of solutions during HMRF optimization by combining the cuckoo search and particle swarm optimization algorithms. Experimental results on simulated and real MR brain images show the proposed method achieves satisfactory segmentation performance for images with noise and intensity inhomogeneity.
A comparison of SIFT, PCA-SIFT and SURF (CSCJournals)
This document summarizes and compares three feature detection methods: SIFT, PCA-SIFT, and SURF. It finds that SIFT is the slowest but most stable, detecting features accurately across various transformations. SURF is the fastest but less stable than SIFT. PCA-SIFT shows advantages in rotation and illumination changes, detecting features more accurately than SURF in some cases when viewpoint changes are large. The document evaluates the methods using repeatability, matching numbers, and processing time under different image scales, rotations, blurs, illuminations, and affine transformations.
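A small OpenCV sketch of the kind of timing/matching comparison described; SIFT is available as cv2.SIFT_create in recent opencv-python builds, while SURF requires a contrib build with non-free modules, so only SIFT is shown. The image filenames are hypothetical.

```python
import time
import cv2

def detect_and_time(img, detector):
    """Keypoints, descriptors, and wall-clock detection time."""
    t0 = time.perf_counter()
    kp, des = detector.detectAndCompute(img, None)
    return kp, des, time.perf_counter() - t0

img1 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)          # hypothetical files
img2 = cv2.imread("scene_rotated.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1, t1 = detect_and_time(img1, sift)
kp2, des2, t2 = detect_and_time(img2, sift)

# Lowe's ratio test on 2-NN matches, as in the original SIFT paper
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = []
for pair in matcher.knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])
print(f"{len(kp1)} + {len(kp2)} keypoints, {len(good)} matches, "
      f"{t1 + t2:.3f} s detection time")
```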
Image fusion using NSCT denoising and target extraction for visual surveillance (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Reconstructing Vehicle License Plate Image from Low Resolution Images using N... (CSCJournals)
This document discusses methods for reconstructing low resolution vehicle license plate images using non-uniform interpolation. It evaluates several image registration methods for estimating the position and orientation differences between low resolution images, finding that Fourier registration is superior. It then uses non-uniform interpolation to reconstruct license plate images from as few as 3x6 pixel characters, improving readability over the original low resolution images. The best results are obtained with 7 frames, while additional frames provide diminishing returns due to registration errors.
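A minimal phase-correlation sketch of the Fourier registration step (translation only; the rotation estimation the paper also performs is omitted):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation between two frames via the
    Fourier shift theorem: the normalized cross-power spectrum has a
    sharp peak at the relative shift."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fa * np.conj(fb)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))  # phase-only correlation
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    shifts = np.array(peak, dtype=float)
    dims = np.array(a.shape)
    # Peaks past the half-size correspond to negative shifts
    shifts = np.where(shifts > dims / 2, shifts - dims, shifts)
    return shifts  # estimated (dy, dx); sign depends on reference frame
```

With the shifts known, each low-resolution frame can be placed on a common high-resolution grid and the missing samples filled by non-uniform interpolation.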
Supervised Blood Vessel Segmentation in Retinal Images Using Gray level and M... (IJTET Journal)
The segmentation of blood vessels within the retina is an essential step in the diagnosis of diabetic retinopathy. In this paper, a new method for automatically segmenting blood vessels in retinal images is presented. Two techniques for segmenting retinal blood vessels, based on different image processing techniques, are described and their strengths and weaknesses are compared. The method uses a neural network (NN) scheme for pixel classification, with gray-level and moment-invariant-based features for pixel representation. The performance of each algorithm was tested on the STARE and DRIVE datasets, which are widely used for this purpose since they contain retinal images together with their vascular structures. Performance on both sets of test images is better than that of other existing methods, and the method proves especially accurate for vessel detection in STARE images. Its effectiveness and robustness under different image conditions, together with its simplicity and fast implementation, make it suitable for early detection of diabetic retinopathy (DR).
A combined method of fractal and GLCM features for MRI and CT scan images cla... (sipij)
Fractal analysis has been shown to be useful in image processing for characterizing shape and gray-scale complexity. The fractal feature is a compact descriptor that gives a numerical measure of the degree of irregularity of medical images. This descriptor, however, does not capture the local image structure. In this paper, we present a combination of this parameter, computed by box counting, with GLCM features. This combination has produced good results, especially in the classification of medical textures from MRI and CT scan images of trabecular bone. The method has the potential to improve clinical diagnostic tests for osteoporosis pathologies.
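A sketch of how the two feature families might be combined into one vector; the GLCM distances/angles and the binarization used for box counting are illustrative assumptions, and the input is assumed to be a uint8 grayscale image.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' pre-0.19

def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
    """Fractal dimension by box counting: slope of log(count) vs log(1/size)."""
    counts = []
    for s in sizes:
        h, w = (binary.shape[0] // s) * s, (binary.shape[1] // s) * s
        boxes = binary[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(max(1, boxes.any(axis=(1, 3)).sum()))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

def combined_features(gray):
    """Fractal dimension plus four GLCM properties as one feature vector."""
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = [graycoprops(glcm, p).mean()
             for p in ("contrast", "homogeneity", "energy", "correlation")]
    fd = box_counting_dimension(gray > gray.mean())  # assumed binarization
    return np.array([fd] + props)
```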
Two-dimensional Block of Spatial Convolution Algorithm and Simulation (CSCJournals)
This paper proposes an algorithm based on sub image-segmentation strategy. The proposed scheme divides a grayscale image into overlapped 6×6 blocks each of which is segmented into four small 3x3 non-overlapped sub-images. A new spatial approach for efficiently computing 2-dimensional linear convolution or cross-correlation between suitable flipped and fixed filter coefficients (sub image for cross-correlation) and corresponding input sub image is presented. Computation of convolution is iterated vertically and horizontally for each of the four input sub-images. The convolution outputs of these four sub-images are processed to be converted from 6×6 arrays to 4×4 arrays so that the core of the original image is reproduced. The present algorithm proposes a simplified processing technique based on a particular arrangement of the input samples, spatial filtering and small sub-images. This results in reducing the computational complexity as compared with other well known FFT-based techniques. This algorithm lends itself for partitioned small sub-images, local image spatial filtering and noise reduction. The effectiveness of the algorithm is demonstrated through some simulation examples.
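The exact 6×6/4×4 arrangement is specific to the paper, but the underlying idea of computing a convolution from overlapped blocks can be sketched generically and checked against a reference implementation:

```python
import numpy as np
from scipy.signal import convolve2d

def blockwise_convolve(img, kernel, block=4):
    """'Same'-size 2-D convolution computed block by block: each output
    block comes from an overlapped input tile, mirroring the paper's
    overlapped-block strategy (tile sizes here are illustrative)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            bh = min(block, img.shape[0] - i)      # handle ragged edges
            bw = min(block, img.shape[1] - j)
            tile = padded[i:i + bh + kh - 1, j:j + bw + kw - 1]
            out[i:i + bh, j:j + bw] = convolve2d(tile, kernel, mode="valid")
    return out

img = np.random.rand(16, 16)
k = np.ones((3, 3)) / 9.0
assert np.allclose(blockwise_convolve(img, k), convolve2d(img, k, mode="same"))
```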
Filtering Based Illumination Normalization Techniques for Face Recognition (Radita Apriana)
The main challenge faced by present face recognition techniques and smoothing filters is the difficulty of managing illumination. The differences in face images caused by illumination are normally larger than the inter-person differences used to distinguish identities. Yet face recognition under varying illumination matters in many applications dealing with non-cooperative subjects, where the full potential of face recognition as a non-intrusive biometric feature can be realized. A great deal of research and development on illumination in face recognition has been carried out in recent years, and many important methods have been introduced. Nevertheless, some concerns with face recognition under illumination require further consideration, including deficiencies in understanding the sub-spaces of illuminated images, intractability problems in face modelling, and the complicated mechanisms of face surface reflection.
Geometric Correction for Braille Document Images (csandit)
Image processing is an important research area in computer vision. Clustering is an unsupervised learning method that can also be used for image segmentation, and many segmentation methods exist. Image segmentation plays an important role in image analysis; it is one of the first and most important tasks in image analysis and computer vision. The proposed system presents a variation of the fuzzy c-means algorithm for image clustering. The kernel fuzzy c-means clustering algorithm (KFCM) is derived from the fuzzy c-means clustering algorithm (FCM); KFCM provides image clustering and improves accuracy significantly compared with the classical fuzzy c-means algorithm. The new algorithm, called the Gaussian kernel based fuzzy c-means clustering algorithm (GKFCM), has as its major characteristic a fuzzy clustering approach aiming to guarantee noise insensitivity and preservation of image detail. The objective of the work is to cluster the low-intensity inhomogeneity areas of noisy images using this clustering method, then segment those portions separately using a content level set approach. The purpose of designing this system is to produce better segmentation results for images corrupted by noise, so that it can be useful in fields such as medical image analysis, including tumor detection, study of anatomical structure, and treatment planning.
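A minimal sketch of the kernel fuzzy c-means updates with a Gaussian kernel, applied to raw intensities; the spatial regularization and the content level set stage described above are not included.

```python
import numpy as np

def gkfcm(pixels, k=3, m=2.0, sigma=None, iters=50, tol=1e-5):
    """Gaussian-kernel fuzzy c-means on a 1-D intensity vector.
    A plain KFCM sketch; sigma defaults to the data's std deviation."""
    x = pixels.astype(float).ravel()
    if sigma is None:
        sigma = x.std() + 1e-9
    v = np.quantile(x, np.linspace(0.1, 0.9, k))        # initial centers
    u = None
    for _ in range(iters):
        kern = np.exp(-((x[:, None] - v[None, :]) ** 2) / sigma ** 2)
        d = np.clip(1.0 - kern, 1e-12, None)            # kernel-induced distance
        u = d ** (-1.0 / (m - 1))
        u /= u.sum(axis=1, keepdims=True)               # fuzzy memberships
        w = (u ** m) * kern
        v_new = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        if np.abs(v_new - v).max() < tol:
            v = v_new
            break
        v = v_new
    return u.argmax(axis=1), v   # hard labels and final cluster centers
```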
Local Phase Oriented Structure Tensor To Segment Texture Images With Intensit... (CSCJournals)
This paper proposes an active contour based texture image segmentation scheme using the linear structure tensor and a tensor-oriented steerable quadrature filter. The linear structure tensor (LST) is a popular tool for unsupervised texture image segmentation, but it contains only horizontal and vertical orientation information and lacks both other orientation information and the image intensity information on which active contours depend. Therefore, in this paper the LST is modified by adding intensity information from the tensor-oriented structure tensor to enhance the orientation information. In the proposed model, these phase-oriented features are used as an external force in a region-based active contour model (ACM) to segment texture images exhibiting intensity inhomogeneity as well as noisy images. To validate the proposed model, a quantitative analysis of accuracy on the Berkeley image database is also presented.
Unimodal Multi-Feature Fusion and one-dimensional Hidden Markov Models for Lo... (IJECEIAES)
The objective of low-resolution face recognition is to identify faces from small or poor-quality images with varying pose, illumination, expression, etc. In this work, we propose a robust low-resolution face recognition technique based on one-dimensional Hidden Markov Models. Features of each facial image are extracted in three steps: first, both Gabor filters and the Histogram of Oriented Gradients (HOG) descriptor are computed. Second, the size of these features is reduced using the Linear Discriminant Analysis (LDA) method in order to remove redundant information. Finally, the reduced features are combined using the Canonical Correlation Analysis (CCA) method. Unlike existing HMM-based techniques, in which each state represents one facial region (eyes, nose, mouth, etc.), the proposed system employs 1D-HMMs without any prior knowledge about the localization of regions of interest in the facial image. Performance of the proposed method is measured on the AR database.
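A rough sketch of the three-step feature pipeline on synthetic data; the Gabor/HOG parameters and component counts are illustrative assumptions, and the 1D-HMM classification stage is not shown.

```python
import numpy as np
from skimage.feature import hog
from skimage.filters import gabor
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cross_decomposition import CCA

def extract(img):
    """HOG descriptor and Gabor magnitude features for one face image."""
    h = hog(img, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    real, imag = gabor(img, frequency=0.3)   # single filter for brevity
    return h, np.hypot(real, imag).ravel()

# Hypothetical training data: 40 grayscale faces, 8 identities x 5 images
faces = np.random.rand(40, 64, 64)
labels = np.repeat(np.arange(8), 5)

H = np.array([extract(f)[0] for f in faces])
G = np.array([extract(f)[1] for f in faces])

# Step 2: LDA reduces each feature set separately (at most n_classes - 1 dims)
n_comp = 7
Hr = LinearDiscriminantAnalysis(n_components=n_comp).fit_transform(H, labels)
Gr = LinearDiscriminantAnalysis(n_components=n_comp).fit_transform(G, labels)

# Step 3: CCA fuses the two reduced views into correlated components
cca = CCA(n_components=4).fit(Hr, Gr)
Hc, Gc = cca.transform(Hr, Gr)
fused = np.hstack([Hc, Gc])   # per-image feature that would feed the 1D-HMM
```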
Detection of hard exudates using simulated annealing based thresholding mecha... (csandit)
This document presents a method for detecting hard exudates in retinal fundus images using simulated annealing based thresholding. The proposed method involves 5 steps: 1) median filtering to reduce noise and blur exudates, 2) image subtraction between the input and filtered images to extract bright regions, 3) application of simulated annealing to determine an optimal threshold value, 4) thresholding with the determined value to segment exudates, and 5) image addition to enhance detection. The method was tested on 10 images and achieved an overall sensitivity of 98.66% and predictivity of 98.12% according to expert evaluation, demonstrating accurate exudate detection.
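A minimal sketch of step 3, simulated annealing over a single gray-level threshold; the between-class-variance objective is an assumption, since the summary does not state the exact criterion being optimized.

```python
import math
import random
import numpy as np

def sa_threshold(img, t0=100.0, cooling=0.95, steps=500):
    """Simulated annealing search for a gray-level threshold."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    lv = np.arange(256)

    def score(t):  # between-class variance (assumed objective; higher is better)
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            return 0.0
        mu0 = (lv[:t] * p[:t]).sum() / w0
        mu1 = (lv[t:] * p[t:]).sum() / w1
        return w0 * w1 * (mu0 - mu1) ** 2

    t = random.randint(1, 255)
    best, temp = t, t0
    for _ in range(steps):
        cand = min(255, max(1, t + random.randint(-10, 10)))  # neighbor move
        delta = score(cand) - score(t)
        if delta > 0 or random.random() < math.exp(delta / temp):
            t = cand                    # accept uphill, sometimes downhill
        if score(t) > score(best):
            best = t
        temp *= cooling                 # geometric cooling schedule
    return best
```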
A CONCERT EVALUATION OF EXEMPLAR BASED IMAGE INPAINTING ALGORITHMS FOR NATURA... (cscpconf)
Image inpainting derives from the restoration of artworks and has been applied to repairing ancient works of art. Inpainting is a technique of restoring a partially damaged or occluded image in an undetectable way: it fills the damaged part of an image with information from the undamaged part, according to rules that make the result look “reasonable” to human eyes. Digital image inpainting is a relatively new area of research, but numerous and diverse approaches to the inpainting problem have been proposed since the concept was first introduced. This paper analyzes and compares the recent exemplar based inpainting algorithms by Minqin Wang and by Hao Guo et al. A number of examples on real images are presented to evaluate the results of the algorithms using the Peak Signal to Noise Ratio (PSNR).
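PSNR, the evaluation metric used here, is straightforward to compute between the undamaged original and the inpainted result:

```python
import numpy as np

def psnr(original, restored, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB (higher means a closer match)."""
    mse = np.mean((original.astype(float) - restored.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```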
Adaptive Thresholding Using Quadratic Cost Functions (CSCJournals)
This algorithm seeks to create a thresholding surface (also referred to as an active surface) derived from the initial image topology by means which minimize the image gradient, resulting in a smoothed image which can be used for adaptive thresholding. Unlike approaches which interpolate through points usually associated with high gradient values, each image point is uniquely characterized by a quadratic cost function determined by the gradient at that point along with a constraining potential determined by the image intensity. Minimization is achieved by allowing each point to deviate from its initial value so as to minimize the gradient, as balanced by a constraining potential which seeks to minimize the amount of deviation. The cost function also contains terms which cause the gradient of the thresholding surface to closely parallel those of the image in regions of near uniform intensity (where the absolute values of the gradients are small). This is done to reduce the effects of ghosting (or false segmenting) when thresholding, an important feature of this approach. Image binarization is achieved by comparing the values of the original image points with those of the thresholding surface; values above a given threshold are considered part of the foreground, while those below are considered part of the background.
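The quadratic cost described above has a simple per-pixel minimizer, so the surface can be computed by fixed-point (Jacobi) relaxation; this sketch keeps only the smoothness and fidelity terms and omits the paper's extra gradient-parallel terms.

```python
import numpy as np

def threshold_surface(img, lam=0.05, iters=200):
    """Jacobi relaxation of E(s) = sum |grad s|^2 + lam * (s - I)^2.
    lam balances smoothness against deviation from the image."""
    I = img.astype(float)
    s = I.copy()
    for _ in range(iters):
        pad = np.pad(s, 1, mode="edge")
        nb = pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]
        s = (nb + lam * I) / (4.0 + lam)   # closed-form per-pixel minimizer
    return s

def binarize(img, lam=0.05):
    """Foreground = pixels above the smoothed thresholding surface."""
    return img.astype(float) > threshold_surface(img, lam)
```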
Segmentation of medical images using metric topology – a region growing approach (Ijrdt Journal)
A metric topological approach to region-growing based segmentation is presented in this article. Region-growing techniques have gained significant importance in medical image processing for the fine-grained segregation of detected tumor regions in an image. Conventional algorithms concentrated on segmentation at a coarser level and failed to produce enough evidence for the validity of the algorithm. In this article a novel technique is proposed based on a metric topological neighbourhood, together with the introduction of entropy as a new objective measure alongside the traditional validity measures of accuracy, PSNR and MSE. This measure is introduced to show that the amount of information lost after segmentation is greatly reduced, which demonstrates the effectiveness of the algorithm. The algorithm is tested against well-known benchmark ground-truth images alongside the proposed region-growing segmented images, and the results validate its effectiveness.
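For context, a generic region-growing step looks like the following; the metric-topology neighbourhood and entropy measure that distinguish the paper are not modeled here.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Grow a region from `seed` over the 4-neighbourhood, admitting
    pixels within `tol` of the running region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(img[ny, nx]) - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(img[ny, nx])
                    count += 1
                    queue.append((ny, nx))
    return mask
```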
The document describes an image fusion approach that uses adaptive fuzzy logic modeling for global processing followed by Markov random field modeling for local processing.
It begins by introducing image fusion and its applications. It then discusses existing fusion approaches and their limitations. The proposed approach first uses an adaptive fuzzy logic model to minimize redundant information globally. It then applies Markov random field modeling locally for fusion. Experimental results showed the proposed approach improved the universal image quality index by 30-35% compared to fusion with Markov random field modeling alone.
CONTENT BASED MEDICAL IMAGE INDEXING AND RETRIEVAL USING A FUZZY COMPACT COMP... (guesta2cfc)
1. The document proposes a new fuzzy compact composite descriptor (BTDH) for content-based medical image indexing and retrieval.
2. BTDH uses both brightness and texture features in a compact 128-bin vector to describe images, with size under 48 bytes per image.
3. In experiments on 5000 images with 120 queries, BTDH achieved better retrieval accuracy than other descriptors, with an Average Normalized Modified Retrieval Rank of 0.272.
Optimal Coefficient Selection For Medical Image Fusion (IJERA Editor)
Medical image fusion is one of the major research fields in image processing. Medical imaging has become a vital component of major clinical applications such as detection, diagnosis and treatment. Joint analysis of medical data collected from the same patient using different modalities is required in many clinical applications. This paper introduces an optimal technique for multiscale-decomposition-based fusion of medical images and measures its performance against existing fusion techniques. The approach incorporates a genetic algorithm for optimal coefficient selection and employs various multiscale filters for noise removal. Experiments demonstrate that the proposed fusion technique generates better results than existing rules, and the performance of the proposed system is found to be superior to the existing schemes cited in this literature.
Object extraction using edge, motion and saliency information from videos (eSAT Journals)
Abstract: Object detection is the process of finding instances of objects of a certain class, which is useful in the analysis of video or images. A number of algorithms have been developed for object detection, and it plays a significant role in many areas of computer vision such as video surveillance and image retrieval. This paper presents an efficient algorithm for moving object extraction from videos using edge, motion and saliency information. Our methodology includes four stages: frame generation, pre-processing, foreground generation and integration of cues. Foreground generation includes edge detection using the Sobel edge detection algorithm, motion detection using a pixel-based absolute difference algorithm, and motion saliency detection. A Conditional Random Field (CRF) is applied to integrate the cues, yielding better spatial information about the segmented object. Keywords: Object detection, Saliency information, Sobel edge detection, CRF.
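A minimal sketch of the edge and motion cues for one frame pair; the saliency cue and the CRF integration stage are not reproduced.

```python
import cv2
import numpy as np

def foreground_cues(prev_frame, frame, motion_thresh=25):
    """Edge cue: Sobel gradient magnitude. Motion cue: pixel-wise
    absolute frame difference above a threshold."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    edges = np.uint8(np.clip(np.hypot(gx, gy), 0, 255))
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev_gray) > motion_thresh
    return edges, motion
```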
OBJECT DETECTION, EXTRACTION AND CLASSIFICATION USING IMAGE PROCESSING TECHNIQUE (Journal For Research)
Domestic refrigerators are widely used household appliances, and a large amount of energy is consumed by these systems. A phase change material (PCM) is a substance that can store or release a significant amount of heat energy by changing phase from liquid to vapour or vice versa. Reducing temperature fluctuation and improving system performance is the main reason for using a PCM: it enhances the heat transfer rate and thus improves the COP of the refrigeration cycle as well as the quality of frozen food. The heat release and storage rates of a refrigerator depend on its characteristics and on the PCM's properties; for a given thermal load, the COP of a conventional refrigerator is found to increase. The PCM is held in a manually built chamber surrounding the evaporator chamber of a conventional refrigerator, so that the heat load from the refrigerator cabin passes to the evaporator, and from the evaporator to the phase change material, by conduction. This system hence improves the performance of a household refrigerator by increasing its compressor cut-off time and thereby minimizing electrical energy usage. The main objectives are to improve the performance, cooling time, and storage capacity, and to maintain a constant cooling effect for longer during power cuts using the phase change material.
Feature Analysis for Affect Recognition Supporting Task Sequencing in Adaptiv... (janningr)
Originally, task sequencing in adaptive intelligent tutoring systems needs information gained from expert and domain knowledge as well as information about former performances. In earlier work, a new efficient task sequencer based on a performance prediction system was presented, which needs only former performance information and not the expensive expert and domain knowledge. This task sequencer uses the output of the performance prediction to sequence tasks according to the theory of Vygotsky’s Zone of Proximal Development. In this presentation we aim to support this sequencer with a further automatically obtainable information source, namely speech input from students interacting with the tutoring system. The proposed approach extracts features from students’ speech data and applies an automatic affect recognition method to those features. The output of the affect recognition method indicates whether the last task was too easy, too hard, or appropriate for the student. In this presentation we (1) propose a new approach for supporting task sequencing by affect recognition, (2) present an analysis of appropriate features for affect recognition extracted from students’ speech input, and (3) show the suitability of the proposed features for supporting task sequencing in adaptive intelligent tutoring systems.
The document outlines the main stages of image processing which include image acquisition, restoration, enhancement, representation and description, segmentation, object recognition, color processing, compression, and morphological operations. It describes each stage in detail, explaining their purposes and some common techniques used. The overall stages take a raw image and perform various operations to extract useful information and simplify analysis for applications like object identification and extraction.
Scalability in Model Checking through Relational Databases (CSCJournals)
This document discusses an ATL model checking tool that uses relational databases to improve scalability. The tool allows designers to interactively build models as directed graphs and verify properties expressed as ATL formulas. The key contributions are implementing the Pre function using relational algebra expressions translated to SQL, and generating an ATL model checker from a grammar specification using ANTLR. Relational databases improve performance by efficiently representing large models for verification.
Image Enhancement by Image Fusion for Crime Investigation (CSCJournals)
This document proposes a method for image enhancement through image fusion for crime investigation applications. It summarizes existing image enhancement techniques like histogram equalization and presents their limitations. It then describes the proposed method which involves constructing an image pyramid and performing a wavelet transformation on input images. The pyramid and wavelet transformed images are then fused to generate an enhanced output image with improved contrast and information content. Experimental results on a surveillance camera image show that the proposed fusion scheme provides better perception for human visual analysis compared to traditional enhancement techniques.
Fourth Dimension Level 1 By Dr. Moiz Hussain (Ehtesham Mirxa)
The Fourth Dimension® Educational series
Conceived, Created and Conducted by Prof. Dr Moiz Hussain
Motive ......
“To create a world of those who can alter the reality of the existing world to make it a better place to live and for those who follow”
Level-1
THE AWAKENING
From the Impossible to the POSSIBLE
(Conducted in Pakistan, India, UAE, USA, UK, Germany, Austria, Switzerland and KSA)
The Fourth Dimension® is one of the most powerful and result-oriented workshops, able to change your life from ordinary to outstanding. It is based on programming the subconscious mind to attract health, wealth and happiness into your life through the learning and use of directed daydreaming and creative visualization in areas such as:
• Education-learning, higher grades
• Improved memory and concentration
• Decision making
• Out of Box thinking
• Intuition and sixth sense development
• Creativity, imagination and visualization of goals
• Goal setting and goal achievement
• Self confidence and personal charisma
• Prevention and healing of diseases, disorders and many health conditions
• Improved relationship and quality in relationships-happiness
• Success in Business, job and career
• Those suffering from Panic attack, anxiety, Depression, Stress,
• Low self esteem, Lack of Confidence
• Weak memory and concentration
• Restful sleep
• Anti aging
• Finding love and the right partner
• Attracting Wealth in your business
• Creating massive unprecedented success in your professional and personal life
• Any much much more
Who should attend?
Business executives, employees and professionals
Entrepreneurs & Leaders
Students
Housewives
Doctors, Psychologist, Psychiatrist, Occupational Therapist
Wealth management Managers
Police, Military and law and order enforcement agencies
Just anyone who wants success and wants to achieve the impossible
Requirements
Take a High Protein Breakfast when you come for the workshop (eggs, cheese, meat, poultry)
Wear loose clothing
Bring 4 passport size colored photographs
Sign the code of ethic agreement
Recording and note taking is not allowed, cell phones must be switched off during workshop
Those who have attended the Fourth Dimension workshop includes:
Senior Doctors, Psychiatrist and Psychologists
Business tycoons, Stock management managers, brokers
Senior and middle management
Decision makers including presidents of multinational companies, Banks
Govt, Police and senior Military officials
Students of O and A Levels, College and University students.
Scientists, Research scholars, Artists , Musicians and singers
TV and Film actors and actress
Leaders and Entrepreneurs
Off-line English Character Recognition: A Comparative Survey – idescitation
It has been decades since the idea evolved that the human brain can be mimicked by artificial, neuron-like mathematical structures. To date, the development of this endeavor has not reached the threshold of excellence. Neural networks are commonly used to solve pattern-recognition problems, one of which is character recognition; solving it is one of the easier applications of neural networks. This paper presents a detailed comparative literature survey of the research accomplished over the last few decades. The comparative literature review helps us understand the platform on which we stand today in the pursuit of the highest efficiency in terms of character recognition accuracy as well as computational resources and cost.
Improving the Accuracy of Object Based Supervised Image Classification using ... – CSCJournals
A lot of research has been undertaken and is being carried out to develop an accurate classifier for the extraction of objects, with varying success rates. Most of the commonly used advanced classifiers are based on neural networks or support vector machines, which use radial basis functions for defining the boundaries of the classes. The drawback of such classifiers is that the class boundaries derived from radial basis functions are spherical, while the same is not true for the majority of real data: the boundaries of the classes vary in shape, leading to poor accuracy. This paper deals with the use of new basis functions, called cloud basis functions (CBFs), in a neural network that uses a different feature weighting, derived to emphasize features relevant to class discrimination, for improving classification accuracy. Multi-layer feed-forward and radial basis function (RBF) neural networks are also implemented for the sake of accuracy comparison. It is found that the CBF NN demonstrates superior performance compared to the other activation functions and gives approximately 3% higher accuracy.
This paper presents building identification from satellite images, motivated by the monitoring of illegal land usage. Nowadays rapid urbanization increases land usage, which makes monitoring illegal land usage very important. The project identifies buildings in satellite images provided by Bing maps, using an Adaptive Neuro-Fuzzy Inference System to check the database information. The proposed system identifies only building regions in the satellite images and improves the image details effectively.
Arabic Handwritten Script Recognition Towards Generalization: A Survey – Randa Elanwar
Our concern in this paper is to: (i) provide a comprehensive review of recent off-line and on-line trends in Arabic cursive handwriting recognition (publications from the last 10 years), and (ii) clarify the challenges standing in the way of a reliable, accurate, simple, general-purpose recognizer based on these trends.
Video Key-Frame Extraction using Unsupervised Clustering and Mutual Comparison – CSCJournals
The document presents a novel method for extracting key frames from videos using unsupervised clustering and mutual comparison. It assigns weights of 70% to color (HSV histogram) and 30% to texture (GLCM) when computing frame similarity for clustering. It then performs mutual comparison of extracted key frames to remove near duplicates, improving accuracy. The algorithm is computationally simple and able to detect unique key frames, improving concept detection performance as validated on open databases.
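For illustration, the weighted similarity described above might be computed along these lines (a hedged sketch: the histogram-intersection and cosine measures, and all names, are assumptions rather than the document's exact formulation):

```python
import numpy as np

def frame_similarity(color_a, color_b, texture_a, texture_b,
                     w_color=0.7, w_texture=0.3):
    """Weighted frame similarity: 70% color, 30% texture.

    color_a/color_b are assumed to be normalized HSV histograms and
    texture_a/texture_b GLCM-derived feature vectors (illustrative names).
    """
    # Histogram intersection as the color similarity (in [0, 1]).
    color_sim = np.minimum(color_a, color_b).sum()
    # Cosine similarity as the texture similarity.
    texture_sim = float(np.dot(texture_a, texture_b)) / (
        np.linalg.norm(texture_a) * np.linalg.norm(texture_b) + 1e-12)
    return w_color * color_sim + w_texture * texture_sim
```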
Vehicle accidents are one of the leading causes of fatality, and many accidents are caused by the improper driving of vehicles by non-licensed persons. One approach to prevent the operation of vehicles by non-licensed persons is a smart vehicle authentication technique. Authentication is done by a fingerprint scanner, which is the most secure system for the vehicle, together with a smart license card that holds the biometric details: the vehicle starts only if the details on the license and the fingerprint match. This stops the operation of vehicles by non-licensed persons. Another problem is that many people are not aware of their vehicle insurance due dates. The most popular existing system to monitor due dates is based on GSM, but it has many disadvantages. To eliminate those disadvantages, a Wi-Fi system is used to send intimations, so that the person can pay the due amount on time without any penalty.
Optical character recognition of handwritten Arabic using hidden Markov models – Muhannad Aulama
The document describes an approach to optical character recognition of handwritten Arabic text using hidden Markov models. It discusses extracting optical features from Arabic characters like width-to-length ratios and number of dots. It also describes encoding the structure of the Arabic language by analyzing character transition probabilities and frequencies. Hidden Markov models are used to model both the optical properties and language structure. Recognition is performed using the Viterbi algorithm to identify the most probable characters. The approach achieved accurate recognition of handwritten Arabic text.
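As a concrete illustration of the recognition step, the Viterbi algorithm can be sketched in log space as follows (array names and shapes are illustrative, not taken from the document):

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most probable state (character) sequence for an observation sequence.

    obs: indices of observed features; start_p (S,), trans_p (S, S) and
    emit_p (S, O) are log-probability arrays (illustrative shapes).
    """
    n_states = start_p.shape[0]
    T = len(obs)
    score = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    score[0] = start_p + emit_p[:, obs[0]]
    for t in range(1, T):
        # cand[i, j]: best score ending in state i, then transitioning to j.
        cand = score[t - 1][:, None] + trans_p
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0) + emit_p[:, obs[t]]
    # Backtrack from the best final state.
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```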
1) Touchless fingerprint systems use remote sensing to capture fingerprints without physical contact, avoiding issues like skin distortion, slippage, and residue seen in touch-based systems. They allow for larger capture areas and higher user comfort.
2) 3D touchless fingerprint systems acquire full fingerprint images from multiple views and extract a 3D shape of the finger. The 3D print can then be unwrapped onto a 2D plane either through parametric modeling like cylindrical projection or non-parametric direct unfolding.
3) Key steps in fingerprint recognition systems using touchless scans include image acquisition, preprocessing, feature extraction like finding minutiae, and matching against an existing database.
Object recognition is a challenging problem in real-world applications. It can be achieved through shape matching, which proceeds by i) detecting the edges of the objects in the images, ii) finding the correspondence between the shapes, iii) measuring the dissimilarity between the shapes using the correspondence, and iv) classifying the object into classes using this dissimilarity measure. In order to solve the correspondence problem, we attach a descriptor, the shape context, to each point. The shape context at a reference point captures the distribution of the remaining points relative to it, thus offering a globally discriminative characterization. Corresponding points on two similar shapes will have similar shape contexts. The one-to-one correspondence is achieved through cost-based bipartite graph matching, and the cost matrix is reduced through the Hungarian algorithm. The dissimilarity between the two shapes is computed using the Canberra distance. A nearest neighbor classifier is used to classify the objects with the matching error. The results are obtained using MATLAB on the MNIST handwritten digits.
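A minimal sketch of the matching and dissimilarity steps, assuming per-point shape context descriptors are already computed (SciPy's linear_sum_assignment provides the Hungarian algorithm; all names are illustrative):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_shapes(desc_a, desc_b):
    """One-to-one correspondence between two shapes via bipartite matching.

    desc_a, desc_b: (n, d) arrays of per-point descriptors (e.g. shape
    contexts). The cost matrix uses the Canberra distance, as described
    above; linear_sum_assignment implements the Hungarian algorithm.
    """
    cost = cdist(desc_a, desc_b, metric='canberra')
    rows, cols = linear_sum_assignment(cost)
    dissimilarity = float(cost[rows, cols].sum())   # total matching error
    return list(zip(rows, cols)), dissimilarity
```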
The document describes a project to develop optical character recognition (OCR) software for recognizing online and offline handwritten text in multiple languages. It aims to recognize characters from scanned documents or real-time handwriting input and create a user profile. The system scope includes recognizing handwriting from multiple users and cursive script. It will store recognized characters in a text file and optionally convert words to audio for reading documents aloud. The document provides details on OCR technology, applications, literature review, user and system requirements, and the project's goal of using OCR for applications like forms processing.
Shape matching and object recognition using shape context (belongie pami02) – irisshicat
1) The document presents a novel approach for measuring shape similarity and using it for object recognition. It involves finding point correspondences between shapes, estimating an aligning transformation, and computing distance as a sum of matching errors and transformation magnitude.
2) At the core is using a "shape context" descriptor at sample points to solve the correspondence problem as a graph matching problem. This provides correspondences to estimate an aligning transformation.
3) Shape similarity is then a measure of matching errors between corresponding points after alignment, allowing nearest neighbor classification for recognition. Results are shown for various datasets.
This document discusses face recognition technology. It defines biometrics as measurable human characteristics used for identification. Face recognition is a biometric that analyzes facial features from images. It has advantages over other biometrics like fingerprints in not requiring physical contact. The document outlines the process of face recognition including image capture, feature extraction, comparison, and matching. It also discusses factors like accuracy rates and response time.
SINGLE IMAGE SUPER RESOLUTION IN SPATIAL AND WAVELET DOMAIN – ijma
This document presents a single image super resolution algorithm that uses both spatial and wavelet domains. The algorithm takes advantage of both domains by upsampling the image in the spatial domain using bicubic interpolation, then refining the high frequency subbands in the wavelet domain. It is iterative, using back projection to minimize reconstruction error between the original low resolution image and the downsampled output. Wavelet-based denoising is applied to the high frequency subband to remove noise before reconstruction. Experimental results on various test images show the proposed algorithm achieves similar or better PSNR than other compared methods.
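The spatial-domain back-projection loop could look roughly like this (a sketch under stated assumptions; the paper's wavelet subband refinement and denoising steps are omitted):

```python
import numpy as np
from scipy.ndimage import zoom

def back_projection_sr(lr, scale=2, iterations=10):
    """Iterative back projection, spatial-domain part only.

    lr: low resolution image as a 2D array. The plain bicubic pipeline
    here is illustrative; the paper additionally refines high frequency
    wavelet subbands, which this sketch leaves out.
    """
    lr = np.asarray(lr, dtype=float)
    hr = zoom(lr, scale, order=3)                    # bicubic upsampling
    for _ in range(iterations):
        simulated = zoom(hr, 1.0 / scale, order=3)   # downsample the estimate
        error = lr - simulated                       # reconstruction error
        hr += zoom(error, scale, order=3)            # back-project the error
    return hr
```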
This document summarizes a research paper that proposes a novel approach for enhancing digital images using morphological operators. The approach aims to improve contrast in images with poor lighting conditions. It uses two morphological operators based on Weber's law - the first employs blocked analysis while the second uses opening by reconstruction to define a multi-background. The performance of the proposed operators is evaluated on images with various backgrounds and lighting conditions. Key steps include dividing images into blocks, estimating minimum/maximum intensities in each block to determine background criteria, and applying contrast enhancement transformations based on the criteria. Opening by reconstruction is also used to approximate image background without modifying structures. Experimental results demonstrate the approach enhances images with poor lighting.
Image Filtering Using all Neighbor Directional Weighted Pixels: Optimization ... – sipij
The document describes an image filtering technique that uses all neighboring directional weighted pixels in a 5x5 window to detect and filter random valued impulse noise. It uses particle swarm optimization to optimize the parameters for the detection and filtering operators. The technique detects noisy pixels using differences between pixel values aligned in four directions in the window. Filtering replaces the pixel with the value that minimizes the variance calculated from pixels in the direction with lowest variance. PSO searches a three-dimensional space of iteration number, threshold, and threshold decrease rate parameters to optimize performance for images with different noise levels. Results show it performs better than other techniques at preserving details while removing noise from highly corrupted images.
MR Image Compression Based on Selection of Mother Wavelet and Lifting Based W... – ijma
Magnetic Resonance (MR) imaging is a medical imaging technique requiring enormous amounts of data to be stored and transmitted for high quality diagnostic applications. Various algorithms have been proposed to improve the performance of the compression scheme. In this paper we extend commonly used algorithms to image compression and compare their performance. For the compression technique, we link different wavelet techniques, using traditional mother wavelets and lifting-based Cohen-Daubechies-Feauveau wavelets with low-pass filters of lengths 9 and 7 (CDF 9/7), with the Set Partitioning in Hierarchical Trees (SPIHT) algorithm. A novel image quality index highlighting the shape of the histogram of the target image is introduced to assess image compression quality. The index is used in place of the existing traditional Universal Image Quality Index (UIQI) "in one go"; it offers extra information about the distortion between an original image and a compressed image in comparison with UIQI. The proposed index is designed by modelling image compression as a combination of four major factors: loss of correlation, luminance distortion, contrast distortion and shape distortion. The index is easy to calculate and applicable in various image processing applications. One of our contributions is to demonstrate that the choice of mother wavelet is very important for achieving superior wavelet compression performance based on the proposed image quality indexes. Experimental results show that the proposed image quality index plays a significant role in the quality evaluation of image compression on the open source "BrainWeb: Simulated Brain Database (SBD)".
This document proposes an algorithm for efficiently computing 2D spatial convolution through image partitioning and short convolution. The algorithm partitions an input image into overlapping 6x6 blocks, which are then further partitioned into non-overlapping 3x3 sub-images. Convolution is computed for each sub-image independently using a variable-length filter, reducing computational complexity compared to FFT-based techniques. The outputs from each sub-image convolution are combined to reconstruct the original block. Simulation results demonstrate the effectiveness of the algorithm for tasks like edge detection and noise reduction through local image filtering.
Nowadays crowd analysis, an essential factor in decision management for brand strategy, is not a field controllable by individuals; therefore technology and software products are needed. In this paper we focus on what we have done in crowd analysis and examine the problem of human detection with fish-eye lens cameras. In order to identify human density, the Haar classification algorithm, a machine learning algorithm, is used to detect the human body under different conditions. First, motion analysis is used to search for meaningful data, and then the desired object is detected by the trained classifier. The significant data is sent to the end user via socket programming and a human density analysis is presented.
Machine learning for Tomographic Imaging.pdf – Munir Ahmad
This document discusses using machine learning techniques for tomographic imaging reconstruction and denoising. It begins with an overview of tomographic imaging and PET/CT as an example. It then discusses tomographic data acquisition through PET imaging and sinogram generation. Various analytical and iterative reconstruction methods are described along with their limitations related to noise and ill-posed problems. Neural network approaches for image reconstruction from sinograms, CT image denoising, and mapping iterative reconstruction algorithms to neural networks are proposed to overcome these limitations. Specific network architectures discussed include a simple FBP mapping network, residual learning networks, and networks that unroll iterative algorithms. Applications to PET, SPECT, and developing new techniques like positronium imaging are envisioned.
Machine learning for Tomographic Imaging.pptx – Munir Ahmad
This document discusses using machine learning techniques for tomographic imaging reconstruction and denoising. It begins with an overview of tomographic imaging and PET/CT as an example. It then discusses tomographic data acquisition through PET imaging and sinogram generation. Various analytical and iterative reconstruction methods are described along with their limitations related to noise and ill-posed problems. Several studies applying neural networks to different aspects of tomographic reconstruction are summarized, including mapping filtered backprojection to a neural network, using CNNs for low-dose CT denoising, incorporating denoising models within iterative reconstruction networks, and mapping iterative PET reconstruction algorithms to neural networks. Potential applications of deep learning to new modalities like positronium imaging are also mentioned.
Image restoration model with wavelet based fusion – Alexander Decker
1. The document discusses various techniques for image restoration, which aims to recover a sharp original image from a degraded one using mathematical models of degradation and restoration.
2. It analyzes techniques like deconvolution using Lucy Richardson algorithm, Wiener filter, regularized filter, and blind image deconvolution on different image formats based on metrics like PSNR, MSE, and RMSE.
3. Previous studies have applied techniques like Wiener filtering, wavelet-based fusion, and iterative blind deconvolution for motion blur restoration and compared their performance.
Abstract: Many applications such as robot navigation, defense, medical and remote sensing perform various processing tasks, which can be performed more easily when all objects in different images of the same scene are combined into a single fused image. In this paper, we propose a fast and effective method for image fusion. The proposed method derives the intensity-based variations, that is, large and small scale, from the source images; guided filtering is employed for this extraction. A Gaussian and Laplacian pyramidal approach is then used to fuse the different layers obtained. Experimental results demonstrate that the proposed method can obtain better performance for the fusion of all sets of images. The results clearly indicate the feasibility of the proposed approach.
This document proposes a computer-aided lung cancer classification system using curvelet features and an ensemble classifier. It first pre-processes CT images using adaptive histogram equalization to improve contrast. Then it segments the images using kernelized fuzzy c-means clustering. Curvelet features are extracted from the segmented regions and an ensemble classifier is applied to classify regions as benign or malignant. The proposed approach achieves reliable and accurate classification results compared to existing methods, with better performance metrics like accuracy, sensitivity and specificity.
This paper presents a practical frequency-domain method for degraded image restoration, which we call the practical Wiener filter. With the standard Wiener filter, the value of the K parameter is determined experimentally, which is difficult and time consuming; furthermore, there is no absolute criterion to claim that the images obtained by the restoration process are the best possible. In order to solve this problem, we use a genetic algorithm to obtain the best value for K. This paper therefore presents an image restoration method that employs Computer Aided Design (CAD) for image restoration, with no need for the original undegraded image: the degraded image is the input and the restored one is the output of the CAD. Simulation results confirm that this method is successful and applicable in most applications.
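As an illustration of the role of the K parameter that the genetic algorithm tunes, a minimal frequency-domain Wiener filter might be sketched as follows (an assumption-laden sketch, not the paper's implementation):

```python
import numpy as np

def wiener_restore(degraded, psf, K):
    """Frequency-domain Wiener filter with scalar noise-to-signal ratio K.

    Applies F_hat = H* G / (|H|^2 + K), where G is the spectrum of the
    degraded image and H the spectrum of the point spread function.
    """
    H = np.fft.fft2(psf, s=degraded.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + K)
    return np.real(np.fft.ifft2(W * np.fft.fft2(degraded)))
```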
Improved nonlocal means based on pre classification and invariant block matching – IAEME Publication
One of the most popular image denoising methods based on self-similarity is called nonlocal means (NLM). Though it can achieve remarkable performance, this method has a few shortcomings, e.g., the computationally expensive calculation of the similarity measure, and the lack of reliable candidates for some non-repetitive patches. In this paper, we propose to improve NLM by integrating Gaussian blur, clustering, and row image weighted averaging into the NLM framework. Experimental results show that the proposed technique can perform denoising better than the original NLM both quantitatively and visually, especially when the noise level is high.
Improved nonlocal means based on pre classification and invariant block matching – IAEME Publication
This document summarizes an article that proposes improvements to the nonlocal means (NLM) image denoising algorithm. The proposed method first applies Gaussian blurring to pre-process the noisy image. It then extracts features from image patches using Hu's moment invariants. Next, it performs k-means clustering on the feature vectors to group similar patches. Finally, it applies row image weighted averaging to reconstruct the image. The experimental results showed this method can perform better denoising than the original NLM, especially at higher noise levels, by providing more reliable candidate patches for the weighted averaging.
IRJET - Image De-Blurring using Blind De-Convolution Algorithm – IRJET Journal
The document describes a study on blind image deblurring using a blind deconvolution algorithm. It discusses how blurring occurs in images and various techniques used for image restoration. The proposed method uses a blind deconvolution technique to restore an original sharp image from a blurred input image without prior knowledge of the point spread function. It adds blur to test images using Gaussian, motion and average blur models. The algorithm then applies maximum likelihood estimation and blind deconvolution to restore the blurred image. Experimental results show that the blind deconvolution method can deblur images faster than conventional approaches.
This document discusses a method for segmenting abdominal organs using parallelization and oriented active appearance models (OAAM). The method uses landmarks to represent organ shape and appearance, and live wire segmentation. It then parallelizes the OAAM approach by uniformly partitioning the graph and processing blocks in parallel threads. This reduces segmentation time compared to the sequential OAAM approach. Experimental results on 20 patient CT scans showed the parallel OAAM method segmented the liver 53.3% faster than sequential OAAM. The parallel approach has potential to segment multiple organs faster than conventional segmentation methods.
Super-resolution (SR) is the process of obtaining a high resolution (HR) image or a sequence of HR images from a set of low resolution (LR) observations. Block matching algorithms are used for motion estimation to obtain motion vectors between the frames in super-resolution. The implementation and comparison of two different types of block matching algorithms, Exhaustive Search (ES) and Spiral Search (SS), are discussed. The advantages of each algorithm are given in terms of motion estimation computational complexity and Peak Signal to Noise Ratio (PSNR). The Spiral Search algorithm achieves a PSNR close to that of Exhaustive Search at less computation time. The algorithms evaluated in this paper are widely used in video super-resolution and have also been used in implementing various video standards like H.263, MPEG4 and H.264.
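For reference, a bare-bones Exhaustive Search, the baseline against which Spiral Search is compared, might look as follows (the SAD cost and all names are illustrative assumptions):

```python
import numpy as np

def exhaustive_search(ref_block, frame, top, left, search=7):
    """Exhaustive-search block matching: scan every candidate position in a
    +/-search window and return the motion vector minimizing the SAD cost."""
    h, w = ref_block.shape
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(frame[y:y + h, x:x + w] - ref_block).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best
```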
Under circumstances of low and unacceptable accuracy in image recognition, a feature extraction method for optical images based on the wavelet space feature spectrum entropy has recently been studied. With this method, the principle that the energy is constant before and after the wavelet transformation is employed to construct wavelet energy pattern matrices, and the feature spectrum entropy of the singular values, obtained by singular value decomposition of the matrix, is extracted as the image feature. At the same time, a BP neural network is applied to image recognition. The experimental results show that high image recognition accuracy can be acquired using the feature extraction method for optical images proposed in this paper, which proves the validity of the method.
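As a loose sketch of the described pipeline (PyWavelets-based; the exact construction of the wavelet energy pattern matrix and all names here are assumptions):

```python
import numpy as np
import pywt

def wavelet_feature_spectrum_entropy(img, wavelet='db4', level=3):
    """Feature spectrum entropy of the singular values of a wavelet
    energy matrix (a loose sketch of the described idea)."""
    coeffs = pywt.wavedec2(np.asarray(img, dtype=float), wavelet, level=level)
    # One row of directional subband energies per decomposition level.
    energy = np.array([[np.sum(c ** 2) for c in detail] for detail in coeffs[1:]])
    s = np.linalg.svd(energy, compute_uv=False)
    p = s / s.sum()                       # normalized singular value spectrum
    return float(-(p * np.log(p + 1e-12)).sum())
```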
Usage of Shape From Focus Method For 3D Shape Recovery And Identification of 3D Object Position
Joanna Florczak florczak@agh.edu.pl
Faculty of Mechanical Engineering and Robotics
Department of Robotics and Mechatronics
AGH University of Science and Technology
Cracow, 30-059, Poland
Maciej Petko petko@agh.edu.pl
Faculty of Mechanical Engineering and Robotics
Department of Robotics and Mechatronics
AGH University of Science and Technology
Cracow, 30-059, Poland
Abstract
Shape from focus is a method of 3D shape and depth estimation of an object from a sequence of pictures with changing focus settings. In this paper we propose a novel method of shape recovery, which was originally created for shape and position identification of a glass pipette in a medical hybrid robot. In the proposed algorithm, Sum of Modified Laplacian is used as the focus operator. Each step of the algorithm is tested in order to select the operators with the best results. The reconstruction allows not only to determine the shape but also to precisely define the position of the object. The results of the proposed method, obtained on real objects, have shown the efficiency of this scheme.
Keywords: Shape From Focus, Image Processing, 3D Shape Recovery.
1. INTRODUCTION
Recovery of a three-dimensional shape from two-dimensional images has been actively studied for a long time. In the literature, there are many proposed techniques like Shape From Shading[1] or Shape From Motion[2]. One of them is also Shape From Focus (SFF)[3]. In this method, the shape of the objects is estimated from a series of images of the same scene with changing focus level. In SFF, after image acquisition, a focus measure operator is applied to find the points with the best focus in every image. In the next step, every pixel is assigned to the frame where the best focus has been obtained. Thanks to that, it is possible to reconstruct the shape of the object[3]. In this paper, a novel Shape From Focus algorithm, which was originally developed for the shape and position identification of a glass pipette in a medical hybrid robot, is presented. This element is hard to examine because of its partial transparency, so the algorithm needs to be adapted to the type of the reconstructed object. The proposed algorithm reconstructs the shape even when the object is difficult to determine or very complex. Different focus measure operators have been studied and the one with the best results has been chosen. After the focus measurement, the obtained pictures have been filtered with a first-order Butterworth filter in order to get rid of high frequencies, which occur on the reconstruction as peaks. A method was developed for eliminating noise and determining not only the shape, but also the position of the object. Every pixel was assigned to the height where it obtained the maximum focus measure. The beginning of the pipette was the critical area, crucial to define in this experiment, which is why it was used as a threshold reference in the reconstruction process. According to this threshold, the decision was made whether the focus value of a pixel was sufficient to be considered during the reconstruction. In the end, the shape was filtered with an average filter, in order to make the edges smoother and to eliminate the remaining noise.
2. PREVIOUS WORKS
There are numerous variants of the SFF method examined in the literature, each dealing with different aspects of the algorithm. Many works consider the focus measure operator used in the algorithm. The most common are: Modified Laplacian (ML), Sum of Modified Laplacian (SML), Tenenbaum gradient (TEN) and gray level variance (GLV)[3].
SML is based on the Laplace filter, in which the second derivatives in the x and y directions can cancel each other. This is why the Modified Laplacian has been proposed [3]. It is defined as:

$$ML(x,y) = \left|\frac{\partial^2 g(x,y)}{\partial x^2}\right| + \left|\frac{\partial^2 g(x,y)}{\partial y^2}\right|,$$

where g(x,y) is the gray value at the coordinates (x,y). To improve the results of the reconstruction, the focus measure at coordinates (x,y) is computed as a sum of ML values in a local window:

$$SML(x,y) = \sum_{(i,j)\in U(x,y)} ML(i,j).$$
The Tenenbaum method is based on the Sobel gradient operator. This focus measure uses the sum of squares of the horizontal and vertical Sobel responses, also summed in a local window[4]:

$$TEN(x,y) = \sum_{(i,j)\in U(x,y)} \left[ G_x(i,j)^2 + G_y(i,j)^2 \right],$$

where $G_x$ and $G_y$ denote the horizontal and vertical Sobel responses.
The last one, the GLV focus measure, is based on variations in gray level values. On a sharp image this variation is bigger than on a blurred image. Information about the focus of the object is obtained by computing the gray level variance in a local window[3]:

$$GLV(x,y) = \frac{1}{N-1} \sum_{(i,j)\in U(x,y)} \left[ g(i,j) - \bar{g}_{U(x,y)} \right]^2,$$

where $\bar{g}_{U(x,y)}$ is the average gray level of the neighborhood pixels $U(x,y)$ and $N$ is the number of pixels in the window.
There are also some innovative focus measure methods proposed in the literature. One of them measures focus with the fast discrete curvelet transform (FDCT) [5]. Another one uses Anisotropic Nonlinear Diffusion Filtering (ANDF) [6].
The effectiveness of the focus measure depends not only on the algorithm used; the type of window used in the calculations is also very important. In one of the proposed algorithms a 3D window was used instead of a 2D one[7].
Some works focus on the initial stage of the algorithm, which is the acquisition of the pictures. An example is the algorithm based on combinatorial optimization[6], which tries to find the combination of frames for which the maximum focus level is achieved. A similar problem is considered in the paper on the minimum number of images required to obtain a sufficient shape reconstruction [8].
Another issue often analyzed in order to improve SFF is the method of approximation of the edges after the focus measure process. Examples of such improvements are the usage of the Lorentzian-Cauchy function[9] or kernel regression [10].
3. PROPOSED ALGORITHM
In this chapter, a new approach for the improvement of the SFF method is shown. The proposed algorithm is stated below. The basic process contains the following steps: image acquisition, normalization, focus measurement, filtering, and creation of the initial and final depth maps, as shown in FIGURE 1.
FIGURE 1: Proposed Algorithm.
After image acquisition, a normalization process was performed: every frame was divided by its maximum value, so that every frame lay in the range 0–1.
FIGURE 2: Normalization In Time.
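A minimal sketch of this normalization step, assuming the stack is laid out as frames x height x width:

```python
import numpy as np

def normalize_in_time(stack):
    """Divide every frame of the focus stack by its own maximum value,
    so that each frame lies in the range 0-1 (stack: frames x H x W)."""
    stack = stack.astype(float)
    maxima = stack.max(axis=(1, 2), keepdims=True)
    return stack / np.where(maxima == 0, 1.0, maxima)  # guard all-zero frames
```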
In the second step, it was decided which focus operator should be used in the algorithm. Many operators have been proposed to measure the degree of focus of a picture. The most commonly used are: Laplacian (Lap), Modified Laplacian (ML), Sum of Modified Laplacian (SML) and Tenenbaum gradient (TEN)[3]. The focus measure values in this paper have been obtained using the Sum of Modified Laplacian, because it gave the best results in the experiments. The results of one of these experiments are shown below. FIGURE 3 presents one frame from the series taken into analysis, and the focus measure values obtained with different focus operators for the pixel marked in red are shown in FIGURE 4. Knowledge of the series indicates that the maximum focus measure should be obtained on the last (18th) frame.
[FIGURE 1 flowchart: image acquisition → normalization in time → SML (window N = 13, step sp = 6) → low-pass Butterworth filter (first order, cutoff frequency coefficient = 0.015) → shape recovery (threshold = 0.9 of the picked value) → average filter (mask = 25).]
FIGURE 3: Frame from the series taken into analysis.
As can be seen, only SML was able to detect the maximum focus correctly. ML found the maximum focus on the 18th frame too, but it also detected a high focus on the 14th frame, which was incorrect. This is an example of the experiments conducted to find the best focus measure operator. According to the results of those experiments, SML is the operator used in this algorithm.
FIGURE 4: Focus measure obtained with different focus operators.
Afterwards, the frames were filtered with a low-pass Butterworth filter. This allows getting rid of undesirable high frequencies, which occur on the reconstructed image as peaks. It also serves as the method of approximating the edges after the focus measure process. The Butterworth filter has a flat frequency response, so it smooths the edges without significant distortion. After experiments, a first-order Butterworth filter with a cutoff coefficient equal to 0.015, shown in FIGURE 5, was used.
FIGURE 5: First-order Butterworth Filter.
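One plausible reading of this filtering step, applying the first-order filter along the frame axis of the focus-measure volume, can be sketched as follows (normalizing the 0.015 cutoff to the Nyquist frequency is an assumption, since the text does not state the convention):

```python
import numpy as np
from scipy.signal import butter, filtfilt

# First-order low-pass Butterworth filter; 0.015 is the cutoff frequency
# coefficient from the text (here taken as normalized to Nyquist).
b, a = butter(N=1, Wn=0.015)

def smooth_focus_volume(fm_stack):
    """Zero-phase filtering of each pixel's focus-measure profile along
    the frame axis (fm_stack: frames x H x W)."""
    return filtfilt(b, a, fm_stack, axis=0)
```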
The prepared images were then ready for the shape recovery process. For every pixel, the frame where the highest focus measure was obtained was found. Knowing the picture number and the height of that picture made it possible to assign every pixel to the height where it obtained the maximum focus measure. It was necessary to set a threshold: the minimal focus measure value a pixel should reach to allow the assignment. To do this, one point was picked; for this reconstruction it was the beginning of the pipette, because a correct estimation of this area was the most crucial. Experimentally, the threshold was set to 90% of the picked value. This process allowed creating the initial depth map. To enhance the results of the reconstruction, average filtering was applied, so that the final reconstructed shape was smoother and did not have high frequency noise.
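The recovery step just described might be sketched as follows (illustrative only; the NaN rejection and mean fill of below-threshold pixels are assumptions of this sketch):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def recover_depth(fm_stack, heights, ref_pixel, threshold_ratio=0.9, mask=25):
    """Initial and final depth map recovery from a filtered focus volume.

    fm_stack:  focus measures after filtering, shape (frames, H, W).
    heights:   physical height of each frame, shape (frames,).
    ref_pixel: (row, col) of the reference point (e.g. the beginning of
               the pipette); its peak focus value defines the threshold.
    """
    best_frame = fm_stack.argmax(axis=0)        # frame of maximum focus per pixel
    best_value = fm_stack.max(axis=0)
    depth = heights[best_frame].astype(float)   # initial depth map

    # Threshold: 90% of the focus value reached at the reference point.
    threshold = threshold_ratio * best_value[ref_pixel]
    depth[best_value < threshold] = np.nan      # reject unreliable pixels

    # Final depth map: average filtering to smooth edges and residual noise
    # (filling rejected pixels with the mean is an assumption of this sketch).
    filled = np.where(np.isnan(depth), np.nanmean(depth), depth)
    return uniform_filter(filled, size=mask)
```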
4. RESULTS
In this section we show the improved performance of the proposed scheme by testing it on various images. Experiments were conducted on four different types of objects, shown in FIGURE 6 (a-d).
FIGURE 6: Objects on which the experiments were performed (a-d).
All series of pictures were made with a specially designed optical system. It contains a fully apochromatic zoom system, in which the zoom, iris diaphragm and focus are motorized. It provides high resolution images without chromatic distortions. The system also has a LEICA PLAN APO 5x/0.5 LWD objective, which keeps 95% of the observed area flat. In addition, the system has an illumination module that provides coaxial illumination of the test object. The first object was a partially transparent pipette (FIGURE 6a), for which this algorithm was originally dedicated. This object is highly difficult to reconstruct because of its transparent structure. The shape was reconstructed from a sequence of 22 pictures.
FIGURE 7: Reconstruction of the Pipette.
As can be seen in FIGURE 7, there are areas for which reconstruction was impossible because of their transparency. However, the general shape was successfully reconstructed. Also the tip of the pipette, which was crucial in this experiment, was exactly detected.
The second object was a DOF 5-5 Depth of Field Target, Edmund Optics (FIGURE 6b). The shape was reconstructed from a sequence of 22 pictures. This experiment was mostly performed to show that it is possible not only to reconstruct the shape but also to keep the proportions and the exact depth map of the object. The Depth of Field Target was placed so that it would be possible to observe the surface at an angle of 45 degrees. As can be seen in FIGURE 8, this angle was preserved during the reconstruction process. The steps that are visible on the reconstructed object are a result of the established scanning step and iris settings. To rectify this effect it is necessary to decrease the scanning step or to change the depth of field by opening the iris. In this experiment it was important to verify the correct angle of the reconstructed surface; the exact shape was not crucial, which is why a big step was used.
FIGURE 8: Reconstruction of DOF 5-5 Depth of Field Target, Edmund Optics.
FIGURE 9 shows how the reconstruction changed when the depth of field was higher. With the same number of pictures, the algorithm was able to reconstruct the shape properly.
FIGURE 9: Reconstruction of DOF 5-5 Depth of Field Target, Edmund Optics, with high depth of field.
In the third experiment a biological object, a fly, was examined (FIGURE 6c). Its shape is very complicated, so it was possible to fully prove the capabilities of the algorithm. Of course, the reconstruction was not exact, because only the view from one perspective was available. However, the object was reconstructed accurately despite its high complexity.
FIGURE 10: Fly’s Head Reconstruction.
The last experiment was also conducted with the fly, but attention was focused on a smaller part (FIGURE 6d): only the eye of the fly was reconstructed. The pictures were made with six times zoom and the iris 100% open, which gives a very small depth of field. A distance of 1275 um was scanned with a step of 25 um, which gives 53 pictures. Those parameters were needed because of the small size of the object and the small variation of the surface. The results of the experiment, shown in FIGURE 11, are very satisfying.
FIGURE 11: Fly’s Eye Reconstruction.
The shape of the eye has been carefully restored; even small changes on the surface were detected by the algorithm.
One more experiment was performed to show the ability of the algorithm to define the position of the object and the distance between two parts of the reconstructed shape. Small areas on the fly's eye were selected, as shown in FIGURE 12 (a,b).
FIGURE 12: Selected Areas (a,b).
FIGURE 13 shows how the value of the focus measure depends on the height along the Z axis. The blue plot is for one selected area and the green plot is for the other. As can be seen, the maximum focus measure of each is strictly defined, so it is possible to find their heights and, consequently, the distance between them. The results confirm that it is possible to obtain accurate information about individual elements, even when they are really small or complicated.
FIGURE 13: Value of the focus measure depending on the height along the Z axis.
5. DISCUSSION
The proposed algorithm is a substantial improvement in the field of shape recovery methods. While creating the processing scheme, every aspect of the reconstruction process was carefully analyzed and the most adequate method was selected. For each step the extensive literature background was checked, the best solutions were found and tested, and in the end the parameters of those methods were chosen to fulfill the requirements as closely as possible[11][12][13]. The novel Shape From Focus method was able to reconstruct objects with high accuracy and preserve proportions even for objects of large complexity. The height of a single point and the distance between two points could also be obtained. The proposed methodology was dedicated to a glass pipette, which is difficult to reconstruct, but the algorithm managed to achieve this goal. There is a concern that an algorithm created for one specific case would not be efficient for different objects. To limit this problem, there are steps like normalization in time, or picking the threshold, which make the proposed method useful for diverse cases. In most of the researched papers the authors focused on one aspect of the algorithm, like the image acquisition[6][8], the focus measure operator[5][6][14], the size of the window[7] or approximation methods [9][10]. In the proposed scheme all aspects of the algorithm were analyzed to create the most efficient solution. A comparative evaluation with other authors was not fully possible, because the objects used in other experiments were not available for tests. The algorithm proposed in this paper, in comparison to other works, was able to reconstruct even really complex and difficult to determine structures.
6. CONCLUSION
In this paper, a novel SFF method was presented for 3D shape recovery and identification of a 3D object position. The algorithm was originally created for the shape and position identification of a glass pipette in a medical hybrid robot. In the proposed algorithm, image acquisition is followed by a normalization process. In the next step the Sum of Modified Laplacian is applied in order to find the areas with maximum focus. After this, the images are filtered with a first-order low-pass Butterworth filter. The initial depth map is obtained by assigning each pixel to the height where it reached the maximum focus value. The threshold in the reconstruction process depends on the value of a pixel for which proper reconstruction is the most important; in this case, it is the beginning of the pipette. Thanks to that, it is certain that the most important area will be reconstructed correctly. In the end, average filtering is applied to smooth the edges of the reconstructed shape. The proposed scheme was tested on four different objects and the reconstruction gave really good results. Moreover, the proposed algorithm is able to define the position of the object and the distance between two selected areas.
7. ACKNOWLEDGEMENTS
The project was funded by the National Science Centre.
8. REFERENCES
[1] M. Breuß, O. Vogel and A. Tankus. "Modern shape from shading and beyond". Image Processing (ICIP), 2011 18th IEEE International Conference on. IEEE, pp. 1-4, 2011.
[2] M. Marques and J. Costeira. "3D face recognition from multiple images: A shape-from-motion approach". Automatic Face & Gesture Recognition, 2008. FG'08. 8th IEEE International Conference on. IEEE, pp. 1-6, 2008.
[3] S.K. Nayar and Y. Nakagawa. "Shape from focus". Pattern Analysis and Machine Intelligence, IEEE Transactions on, 16.8, pp. 824-831, 1994.
[4] J.M. Tenenbaum. "Accommodation in computer vision". Stanford Univ CA Dept of Computer Science, 1970.
[5] R. Minhas, A. Adeel Mohammed and Q.M. Jonathan Wu. "Shape from focus using fast discrete curvelet transform". Pattern Recognition, 44.4, pp. 839-853, 2011.
[6] M.T. Mahmood and T.S. Choi. "Nonlinear Approach for Enhancement of Image Focus Volume in Shape From Focus". Image Processing, IEEE Transactions on, 21.5, pp. 2866-2873, 2012.
[7] S.O. Shim and T.S. Choi. "A novel iterative shape from focus algorithm based on combinatorial optimization". Pattern Recognition, 43.10, pp. 3338-3347, 2010.
[8] M.S. Muhammad and T.S. Choi. "Sampling for Shape from Focus in Optical Microscopy". Pattern Analysis and Machine Intelligence, IEEE Transactions on, 34.3, pp. 564-573, 2012.
[9] M.S. Muhammad and T.S. Choi. "3D shape recovery by image focus using Lorentzian-Cauchy function". Image Processing (ICIP), 2010 17th IEEE International Conference on. IEEE, pp. 4065-4068, 2010.
[10] M. Tariq Mahmood and T.S. Choi. "Shape from focus using kernel regression". Image Processing (ICIP), 2009 16th IEEE International Conference on. IEEE, pp. 4293-4296, 2009.
[11] P. Kamencay, M. Breznan, R. Jarina and M. Zachariasova. "Estimation and Image Segmentation of a Sparse Disparity Map for 3D Reconstruction."
[12] N.N. Kachouie. "Anisotropic Diffusion for Medical Image Enhancement". International Journal of Image Processing (IJIP), 4.4, pp. 436, 2008.
[13] R. Maini and H. Aggarwal. "Study and Comparison of Various Image Edge Detection Techniques". International Journal of Image Processing (IJIP), 3.1, pp. 1-11, 2009.
[14] F.S. Helmli and S. Scherer. "Adaptive Shape from Focus with an Error Estimation in Light Microscopy". Image and Signal Processing and Analysis, 2001. ISPA 2001. Proceedings of the 2nd International Symposium on. IEEE, pp. 188-193, 2001.