Effective segmentation of sclera, iris and pupil in noisy eye images (TELKOMNIKA JOURNAL)
In today's security-sensitive environment, iris recognition has attracted the most attention among the various biometric technologies for personal authentication. One of the key steps in an iris recognition system is accurate segmentation of the iris from the surrounding regions of a captured eye image, including the pupil and sclera. In the proposed method, the input image is first preprocessed with bilateral filtering. After preprocessing, contour-based features such as brightness, color and texture are extracted. Entropy is then measured over these contour-based features to effectively distinguish the structures in the image. Finally, a convolutional neural network (CNN) segments the sclera, iris and pupil based on the entropy measure. The results are analyzed to demonstrate that the proposed segmentation method performs better than existing methods.
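The pipeline's first two stages, edge-preserving smoothing and an entropy measure over local features, can be sketched in numpy. This is a minimal illustration under my own assumptions (a brute-force bilateral filter and Shannon entropy of a patch histogram), not the paper's implementation:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving smoothing: each output pixel is a weighted mean whose
    weights combine a spatial Gaussian and an intensity-range Gaussian."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-((patch - img[i, j])**2) / (2 * sigma_r**2))
            wgt = spatial * rng_w
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

def patch_entropy(patch, bins=16):
    """Shannon entropy (bits) of a grayscale patch's intensity histogram,
    a simple measure for distinguishing textured from flat regions."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

A flat patch yields zero entropy, while iris texture yields a higher value; the abstract's CNN stage would consume such per-region entropy measures.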
NIR three-dimensional imaging of a breast model using f-DOT (Nagendra Babu)
NIR three-dimensional optical imaging of a breast model using f-DOT with a target-specific contrast agent. Three-dimensional mathematical modeling of DOT and f-DOT.
Classification and Segmentation of Glaucomatous Image Using Probabilistic Neu... (ijsrd.com)
Glaucoma progresses with gradual visual field loss and a characteristic pattern of damage to the retinal nerve fiber layer. Texture features within images are actively pursued for accurate and efficient glaucoma classification, and the energy distribution over wavelet subbands is used to find these important texture features. In this paper, we investigate the discriminatory potential of wavelet features obtained from the Daubechies (db3), symlet (sym3), and biorthogonal (bio3.3, bio3.5, and bio3.7) wavelet filters. We propose a novel technique to extract energy signatures obtained using the 2-D discrete wavelet transform, and subject these signatures to different feature-ranking and feature-selection strategies. This project applies a Probabilistic Neural Network (PNN) together with Fuzzy C-means (FCM) and K-means clustering for the detection of glaucoma. Fuzzy C-means yields faster and more reliably good clustering than K-means.
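The wavelet energy signatures described above can be illustrated with a numpy-only, single-level 2-D Haar transform (the paper uses db3/sym3/biorthogonal filters; Haar is substituted here purely to keep the sketch self-contained):

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar DWT: returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-pair average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-pair detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def energy_signature(img):
    """Mean energy of each detail subband, usable as a texture feature vector."""
    _, lh, hl, hh = haar_dwt2(img)
    return np.array([(s ** 2).mean() for s in (lh, hl, hh)])
```

Vertical stripes, for example, concentrate energy in the LH subband while a flat image yields an all-zero signature; these signatures are what a PNN or clustering stage would then rank and classify.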
SEGMENTATION OF LUNG GLANDULAR CELLS USING MULTIPLE COLOR SPACES (IJCSEA Journal)
Early detection of lung cancer is a challenging problem the world faces today. Before glandular cells can be classified as malignant or benign, a reliable segmentation technique is required. In this paper we present a novel lung glandular cell segmentation technique. The technique uses a combination of multiple color spaces and various clustering algorithms to automatically find the best possible segmentation result. The unsupervised clustering methods K-means and Fuzzy C-means were applied in multiple color spaces such as HSV, LAB, LUV and xyY. Experimental results of segmentation in the various color spaces are provided to show the performance of the proposed system.
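The clustering half of this approach can be sketched with a minimal numpy k-means over per-pixel feature vectors; conversion of pixels into HSV/LAB/LUV/xyY is omitted here, so this is an illustrative fragment under that assumption, not the paper's system:

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Minimal k-means on an (N, d) array of per-pixel feature vectors
    (in the paper these would be color coordinates in HSV, LAB, etc.)."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign every pixel to its nearest center
        labels = ((pixels[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        # move each center to the mean of its members
        for c in range(k):
            if (labels == c).any():
                centers[c] = pixels[labels == c].mean(0)
    return labels, centers
```

Running the same loop per color space and scoring each segmentation is how the "best possible result" search in the abstract could be automated.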
Abstract: We present a new algorithm, called the soft-tissue filter, that can make both soft and bone tissue clearly visible in digital cephalic radiographs under a wide range of exposures. It uses a mixture model made up of two Gaussian distributions and one inverted lognormal distribution to analyze the image histogram. Using this model, the image is clustered into three parts: background, soft tissue, and bone. Improvement in the visibility of both structures is achieved through a local transformation based on gamma correction, stretching, and saturation, applied with different parameters for bone and soft-tissue pixels. A processing time of 1 s for 5-megapixel images allows the filter to operate in real time. Although the default filter parameters are adequate for most images, real-time operation allows adjustment to recover under- and overexposed images or to obtain the subjectively best quality. The filter was extensively clinically tested; quantitative and qualitative results are reported here.
Index Terms: Digital radiography, histogram-based clustering, image enhancement, local gamma correction
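The class-dependent gamma correction at the heart of this filter is easy to sketch. The gamma values below are illustrative assumptions, not the paper's tuned parameters:

```python
import numpy as np

def gamma_correct(img, gamma):
    """Pointwise gamma correction on intensities normalized to [0, 1]."""
    return np.clip(img, 0.0, 1.0) ** gamma

def classwise_gamma(img, bone_mask, g_soft=0.6, g_bone=1.2):
    """Apply a different gamma per pixel class, as in the filter's local
    transform: gamma < 1 brightens dark soft tissue, gamma > 1 preserves
    contrast in brighter bone regions (values here are illustrative)."""
    return np.where(bone_mask, gamma_correct(img, g_bone), gamma_correct(img, g_soft))
```

The bone mask would come from the Gaussian/inverted-lognormal histogram clustering the abstract describes; here it is just a boolean array.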
A Novel Multiple-kernel based Fuzzy c-means Algorithm with Spatial Informatio... (CSCJournals)
The fuzzy c-means (FCM) algorithm has proved effective for image segmentation. However, it still lacks robustness to noise and outliers, especially in the absence of prior knowledge of the noise. To overcome this problem, a novel multiple-kernel fuzzy c-means (NMKFCM) methodology with spatial information is introduced as a framework for the image segmentation problem. The algorithm incorporates spatial neighborhood membership values into the standard kernels used in the kernel FCM (KFCM) algorithm and modifies the membership weighting of each cluster. The proposed NMKFCM algorithm provides new flexibility to utilize different pixel information in image segmentation. It is applied to brain MRI degraded by Gaussian and salt-and-pepper noise, and proves more robust to noise than other existing segmentation algorithms from the FCM family.
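For reference, here is the standard (non-kernel, non-spatial) FCM baseline that NMKFCM extends, as a numpy sketch; the multiple-kernel and neighborhood terms of the paper are not included:

```python
import numpy as np

def fcm(x, k, m=2.0, iters=50, seed=0):
    """Standard fuzzy c-means on an (N, d) array: alternate between updating
    cluster centers and fuzzy memberships. m > 1 is the fuzzifier."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), k))
    u /= u.sum(1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(0)[:, None]
        d = np.linalg.norm(x[:, None] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))
        u = inv / inv.sum(1, keepdims=True)
    return u, centers
```

NMKFCM would replace the Euclidean distance `d` with a combination of kernel-induced distances and mix in the memberships of neighboring pixels.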
Automated Diagnosis of Glaucoma using Haralick Texture Features (IOSR Journals)
Abstract: Glaucoma is the second leading cause of blindness worldwide. It is a disease in which fluid pressure in the eye increases continuously, damaging the optic nerve and causing vision loss. Computational decision support systems for the early detection of glaucoma can help prevent this complication. The retinal optic nerve fibre layer can be assessed using optical coherence tomography, scanning laser polarimetry, and Heidelberg retina tomography. In this paper, we present a novel method for glaucoma detection using Haralick texture features extracted from digital fundus images. The system has a database part and a classification part: images are loaded into the database, and the Gray Level Co-occurrence Matrix (GLCM) with thirteen Haralick features is used to extract the image features. K-Nearest Neighbors (KNN) classifiers perform the supervised classification, outperform the other classifiers, and correctly identify glaucoma images with an accuracy of more than 98%. The impact of the training and testing split is also studied to improve results. The proposed features are clinically significant and can be used to detect glaucoma accurately.
Keywords: Glaucoma, Haralick texture features, KNN classifiers, feature extraction
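The GLCM and a few of the thirteen Haralick descriptors can be computed directly; this numpy sketch covers one displacement vector and three descriptors only, as an illustration of the feature-extraction stage:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one displacement (dx, dy),
    normalized to sum to 1. `img` must hold integer levels in [0, levels)."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            g[img[i, j], img[i + dy, j + dx]] += 1
    return g / g.sum()

def haralick_subset(p):
    """Three of the thirteen Haralick descriptors from a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()                      # angular second moment
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    return contrast, energy, homogeneity
```

A flat image gives zero contrast and maximal energy and homogeneity; the full method would average such descriptors over several displacements and feed them to the KNN classifier.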
Optimal Coefficient Selection For Medical Image Fusion (IJERA Editor)
Medical image fusion is one of the major research fields in image processing. Medical imaging has become a vital component of major clinical applications such as detection, diagnosis and treatment. Joint analysis of medical data collected from the same patient using different modalities is required in many clinical applications. This paper introduces an optimal technique for multiscale-decomposition-based fusion of medical images and measures its performance against existing fusion techniques. The approach incorporates a genetic algorithm for optimal coefficient selection and employs various multiscale filters for noise removal. Experiments demonstrate that the proposed fusion technique generates better results than existing rules, and the performance of the proposed system is found to be superior to the existing schemes in the literature.
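A genetic search over fusion coefficients can be sketched in miniature. This toy GA optimizes a single blend weight against a caller-supplied fitness function; the paper's actual chromosome (per-subband coefficients) and fitness metric are not specified in the abstract, so everything here is an assumption:

```python
import numpy as np

def fuse(a, b, w):
    """Weighted pixel-level blend of two source images."""
    return w * a + (1 - w) * b

def ga_select_weight(a, b, fitness, pop=20, gens=30, seed=0):
    """Tiny genetic algorithm over one fusion weight in [0, 1]:
    tournament selection, blend crossover, Gaussian mutation."""
    rng = np.random.default_rng(seed)
    w = rng.random(pop)
    for _ in range(gens):
        f = np.array([fitness(fuse(a, b, wi)) for wi in w])
        # tournament selection: the fitter of two random individuals survives
        idx = rng.integers(0, pop, (pop, 2))
        parents = np.where(f[idx[:, 0]] > f[idx[:, 1]], w[idx[:, 0]], w[idx[:, 1]])
        # blend crossover with a random mate, plus Gaussian mutation
        mates = rng.permutation(parents)
        w = np.clip((parents + mates) / 2 + 0.05 * rng.standard_normal(pop), 0, 1)
    f = np.array([fitness(fuse(a, b, wi)) for wi in w])
    return w[f.argmax()]
```

With variance as the (stand-in) fitness and one informative source, the search drives the weight toward that source, which is the qualitative behavior coefficient selection needs.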
Highly Adaptive Image Restoration In Compressive Sensing Applications Using S... (IJARIDEA Journal)
Abstract: Image restoration is the operation of taking a degraded image and estimating the clean, original image. Initially, patches are extracted from the captured scene, matched against the dictionary, and stacked together. Finally, the images are restored using the sparse dictionary learning (SDL) algorithm. The PSNR values are observed to be higher than those of conventional state-of-the-art compression techniques. The aim of dictionary learning is to find a frame in which some training data admits a sparse representation. In this method, samples are taken below the Nyquist rate. In certain cases, a dictionary trained to fit the data can substantially improve sparsity, which has applications in signal decomposition.
Keywords: FSIM, group-based sparse representation, PSNR, sparse dictionary learning
Robust Clustering of Eye Movement Recordings for Quanti... (Giuseppe Fineschi)
Characterizing the location and extent of a viewer's interest, in terms of eye movement recordings, informs a range of investigations in image and scene viewing. We present an automatic data-driven method for accomplishing this, which clusters visual point-of-regard (POR) measurements into gazes and regions-of-interest using the mean shift procedure. Clusters produced using this method form a structured representation of viewer interest, and at the same time are replicable and not heavily influenced by noise or outliers. Thus, they are useful in answering fine-grained questions about where and how a viewer examined an image.
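The mean shift procedure on POR points can be sketched in a few lines of numpy (flat loop with a Gaussian kernel; the paper's bandwidth selection and convergence criteria are not reproduced):

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=30):
    """Iteratively move each point to the Gaussian-weighted mean of the
    original points around it; points belonging to one fixation cluster
    collapse onto that cluster's density mode."""
    shifted = points.astype(float).copy()
    for _ in range(iters):
        d2 = ((shifted[:, None] - points[None]) ** 2).sum(-1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))
        shifted = (w @ points) / w.sum(1, keepdims=True)
    return shifted
```

Grouping points whose converged modes coincide yields the gaze clusters; because the mode is a weighted average, a stray outlier shifts it far less than it would shift, say, a k-means center.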
Corner Detection Using Mutual Information (CSCJournals)
This work presents a new method of corner detection based on mutual information that is invariant to image rotation. The use of mutual information, a universal similarity measure, has the advantage of avoiding derivatives, which amplify the effect of noise at high frequencies. In this work, we use mutual information normalized by entropy. The tests are performed on grayscale images.
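Entropy-normalized mutual information between two image patches can be computed from their joint histogram; this numpy sketch normalizes by the joint entropy (one common choice; the paper's exact normalization may differ):

```python
import numpy as np

def normalized_mi(a, b, bins=8):
    """Mutual information of two equally-sized patches, normalized by their
    joint entropy, so identical patches score 1 and independent ones near 0."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(1), p.sum(0)
    nz = p > 0
    h_joint = -(p[nz] * np.log2(p[nz])).sum()
    hx = -(px[px > 0] * np.log2(px[px > 0])).sum()
    hy = -(py[py > 0] * np.log2(py[py > 0])).sum()
    return (hx + hy - h_joint) / h_joint if h_joint > 0 else 0.0
```

Unlike a gradient-based detector, nothing here differentiates the image, which is the noise-robustness argument the abstract makes.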
An ensemble classification algorithm for hyperspectral images (sipij)
Hyperspectral image analysis has been used for many purposes in environmental monitoring, remote sensing, vegetation research, and land cover classification. A hyperspectral image consists of many layers, each representing a specific wavelength; the layers stack on top of one another, forming a cube-like image over the entire spectrum. This work aims to classify hyperspectral images and to produce an accurate thematic map. The spatial information of a hyperspectral image is collected by applying the morphological profile and the local binary pattern. The support vector machine is an efficient algorithm for classifying hyperspectral images, and a genetic algorithm is used to select the best feature subset for classification. The selected features are classified to obtain the classes and to produce a thematic map. Experiments are carried out on the AVIRIS Indian Pines and ROSIS Pavia University datasets; the proposed method achieves accuracies of 93% for Indian Pines and 92% for Pavia University.
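Of the spatial features mentioned, the local binary pattern is compact enough to sketch; this is the basic (non-uniform, non-rotation-invariant) 8-neighbour variant on interior pixels, as an illustration only:

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbour local binary pattern for the interior pixels of a
    grayscale image: each neighbour >= center contributes one bit of the code."""
    c = img[1:-1, 1:-1]
    neigh = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:], img[1:-1, 2:],
             img[2:, 2:], img[2:, 1:-1], img[2:, 0:-2], img[1:-1, 0:-2]]
    code = np.zeros_like(c, dtype=int)
    for bit, n in enumerate(neigh):
        code |= (n >= c).astype(int) << bit
    return code
```

Histograms of these codes over a window form the spatial feature vector that, together with morphological profiles and spectral bands, would feed the GA-selected SVM ensemble.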
One approach to computerized histopathology image analysis is to leverage multi-scale texture information, ranging from the appearance of single nuclei to entire cell populations. In this talk, we will introduce a novel framework for learning highly adaptive texture-based local models of biomedical tissue, and discuss our initial experience with the differentiation of brain tumor types in digital histopathology.
Survey on Brain MRI Segmentation Techniques (Editor IJMTER)
Image segmentation aims at cutting a region of interest (ROI) out of an image. For medical images, segmentation is performed to study anatomical structure, to identify an ROI such as a tumor or another abnormality, to measure the increase in tissue volume in a region, and to plan treatment. Many different algorithms are currently available for image segmentation; this paper lists and compares some of them, each with its own advantages and limitations.
Transparent unstained samples do not absorb light and are called phase objects. When light passes through a sample area with no phase object, there is no significant change in the refractive index or optical path length. Non-diffracted light is referred to as direct or zero-order light, as it continues unchanged through the sample. On the other hand, when the light passes through an area of the sample with a phase object, small changes in the refractive index will diffract and scatter some light and cause changes to the optical path length, depending on the thickness and refractive index of each structure. The thicker the structure, the greater the diffraction of the light. The diffracted light represents only a small part of the total light that has passed through the sample, and it arrives at the detector out of phase with the direct light. The small phase shift created in this way is not enough to cause significant interference between the direct and diffracted light, which, together with the low absorption of transparent structures, means there is negligible amplitude difference between areas where such structures are present and where they are not. Phase-contrast microscopy manipulates this property of phase objects to introduce additional interference between the direct and diffracted light, transforming differences in phase into differences in brightness and increasing contrast in images of non-absorbing samples.
COLOUR IMAGE REPRESENTATION OF MULTISPECTRAL IMAGE FUSION (acijjournal)
ABSTRACT
The availability of imaging sensors operating in multiple spectral bands has led to the need for image fusion algorithms that combine the images from these sensors efficiently into one image that is more perceptible to the human eye. Multispectral image fusion is the process of combining images optically acquired in more than one spectral band. In this paper, we present a pixel-level image fusion that combines four images from four different spectral bands, namely near infrared (0.76-0.90 um), mid infrared (1.55-1.75 um), thermal infrared (10.4-12.5 um) and mid infrared (2.08-2.35 um), to give a composite colour image. The work employs a fusion technique that involves a linear transformation based on the Cholesky decomposition of the covariance matrix of the source data, which converts the grayscale multispectral source images into a colour image. The work is composed of different stages, including estimation of the covariance matrix of the images, Cholesky decomposition, and the transformation itself. Finally, the fused colour image is compared with the fused image obtained by a PCA transformation.
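The covariance-plus-Cholesky pipeline can be sketched as follows. The abstract does not spell out the exact transformation, so this version, which whitens the bands with the Cholesky factor and maps the first three decorrelated components to R, G, B, is a plausible reconstruction, not the paper's method:

```python
import numpy as np

def cholesky_color_fusion(bands):
    """Decorrelate n grayscale bands via the Cholesky factor of their
    covariance matrix, then display the first three decorrelated components
    as an RGB image (illustrative sketch)."""
    n, h, w = bands.shape
    x = bands.reshape(n, -1).astype(float)
    x -= x.mean(1, keepdims=True)
    cov = np.cov(x)                               # stage 1: covariance estimate
    L = np.linalg.cholesky(cov + 1e-9 * np.eye(n))  # stage 2: Cholesky factor
    y = np.linalg.solve(L, x)                     # stage 3: linear transform
    rgb = y[:3].reshape(3, h, w)
    rgb -= rgb.min()
    rgb /= rgb.max() + 1e-12                      # rescale to [0, 1] for display
    return rgb
```

Swapping the Cholesky factor for the eigenvector matrix of `cov` would give the PCA variant the paper compares against.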
A New Approach of Medical Image Fusion using Discrete Wavelet Transform (IDES Editor)
MRI-PET medical image fusion has important clinical significance. Fusion is the important step after registration: an integrative display of the two images. The PET image shows brain function at a low spatial resolution, while the MRI image shows brain tissue anatomy and contains no functional information. Hence, a good fused image should contain both the functional information and the spatial detail, with no spatial or color distortion. The DWT coefficients of the MRI and PET intensity values are fused based on the even-degree method and the cross-correlation method. The performance of the proposed image fusion scheme is evaluated with PSNR and RMSE, and is also compared with existing techniques.
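The two evaluation metrics named above have standard definitions, sketched here in numpy:

```python
import numpy as np

def rmse(ref, img):
    """Root-mean-square error between a reference and a fused image."""
    return float(np.sqrt(((ref.astype(float) - img) ** 2).mean()))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the fused image is
    closer to the reference. Identical images give infinite PSNR."""
    e = rmse(ref, img)
    return float("inf") if e == 0 else 20 * np.log10(peak / e)
```

For 8-bit images `peak` is 255; a fused image that differs from the reference by the full dynamic range at every pixel scores 0 dB.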
ADAPTIVE SEGMENTATION OF CELLS AND PARTICLES IN FLUORESCENT MICROSCOPE IMAGE (Journal For Research)
Understanding the mechanisms of cell motility and their regulation is an important challenge in biomedical research. The ability of cells to exert forces on their environment and alter their shape as they move is essential to various biological processes such as the immune response, embryonic development, and tumorigenesis. Recent technological advances in confocal fluorescence microscopy have given researchers the opportunity to investigate these complex processes in vivo. However, they have also led to a tremendous increase in the amount of image data acquired during studies, so the analysis of time-lapse experiments relies increasingly on automated image processing. There is a high demand for fast and robust methods that help biologists quantitatively analyze time-lapse image data; the crucial tasks are segmenting, tracking, and evaluating the movement tracks and morphological changes of cells, sub-cellular components, and other particles. The potential of the proposed tracking scheme and the advantages and disadvantages of both frameworks are demonstrated on 2-D and 3-D time-lapse series of rat adipose-derived mesenchymal stem cells and human lung squamous cell carcinoma cells, respectively.
Image fusion is a technique for combining two or more images of the same scene into a single fused image that presents the essential information from each source. Image fusion is also used to remove noise from images. Noise is an unwanted component that degrades the quality of an image and affects its clarity; it can be of various kinds, for example Gaussian noise, impulse noise, or uniform noise. Images sometimes degrade during acquisition or transmission, or because of faulty memory locations in the hardware. Image fusion can be performed at three levels: pixel-level fusion, feature-level fusion, and decision-level fusion. There are essentially two families of image fusion techniques: spatial-domain methods and transform-domain methods. PCA fusion, simple averaging, and high-pass filtering are spatial-domain methods, while methods built on a transform, such as the Discrete Cosine Transform or the discrete wavelet transform, are transform-domain methods. The various fusion methods have many advantages and disadvantages, and many techniques suffer from color artifacts in the resulting fused image.

One of the most astonishing properties of human stereo vision is the fusion of the left and right views of a scene into a single cyclopean one. Under typical viewing conditions, the world appears as seen from a virtual eye placed halfway between the left and right eye positions. This perceived image of the world is never recorded directly by any sensory array, but is constructed by our neural machinery. The term cyclopean refers to a class of visual stimuli that is defined by binocular disparity alone. It was suspected that stereopsis might reveal hidden objects, which could be useful for detecting camouflaged items. The critical finding of the experiments with random-dot stereograms was that disparity alone is sufficient for stereopsis, where it had previously only been shown that binocular disparity is important for stereopsis.
Sub-windowed laser speckle image velocimetry by Fast Fourier Transform technique
Abstract
In this work, laser speckle velocimetry, a unique optical method for velocity measurement of fluid flow, is described. A laser sheet is developed and illuminated on microscopic seeding particles to produce a speckle pattern at the recording plane. Double-frame, single-exposure speckle images are captured in such a way that the second speckle image is shifted exactly in a known direction. The auto-correlation method suffers from an ambiguity in the direction of flow; to resolve this, the spatial shift of the second image has been premeditated. Cross-correlation of sub-interrogation areas is obtained by the Fast Fourier Transform technique. Four sub-windows were processed to obtain the velocity information precisely, with vector-map analysis.
The fluorescence technique involves the optical detection and spectral analysis of light emitted by a substance undergoing a transition from an excited electronic state to a lower electronic state. The aim of this study is to assess δ-aminolevulinic acid (δ-ALA) uptake. Using image-processing techniques, Matlab was used to analyze the fluorescence images resulting from activation of δ-ALA and to follow its uptake over one week. Analysis of the RGB pixel profiles of the obtained results showed different profiles for malignant tissue, normal tissue, tissue treated just after PDT, and finally tissue at one week post PDT. The fluorescence profile of the treated tissue changed from one close to the malignant-tissue profile to one close to the normal-tissue profile.
A Fuzzy Watermarking Approach Based on Human Visual System (CSCJournals)
The implementation of our watermarking system is based on a hybrid system combining the human visual system (HVS) and a fuzzy inference system (FIS), which always passes through the transcription of human expertise in the form of fuzzy rules expressed in natural language; this keeps our watermarking system understandable for non-experts and makes it more user-friendly. The technique discussed in this paper is an advanced watermarking approach: multi-watermarking, i.e., multiple watermarking of medical images in the frequency domain. In this approach, the emphasis is on safety and invisibility while maintaining robustness against a certain target range of attacks. Furthermore, the approach is based on an entirely blind technique, which we detail later.
Compensation of Inhomogeneous Fluorescence Signal
Distribution in 2D Images Acquired by Confocal Microscopy
JAN MICHÁLEK,1* MARTIN ČAPEK,1,2 AND LUCIE KUBÍNOVÁ1
1 Department of Biomathematics, Institute of Physiology, Academy of Sciences of the Czech Republic, v.v.i.,
Vídeňská 1083, 14220 Prague 4, Czech Republic
2 Czech Technical University in Prague, Faculty of Biomedical Engineering, nám. Sítná 3105, 272 01 Kladno, Czech Republic
KEY WORDS confocal laser scanning microscopy; image enhancement; morphology filters
ABSTRACT In images acquired by confocal laser scanning microscopy (CLSM), regions corre-
sponding to the same concentration of fluorophores in the specimen should be mapped to the same
grayscale levels. However, in practice, due to multiple distortion effects, CLSM images of even homo-
geneous specimen regions suffer from irregular brightness variations, e.g., darkening of image edges
and lightening of the center. The effects are yet more pronounced in images of real biological speci-
mens. A spatially varying grayscale map complicates image postprocessing, e.g., in alignment of
overlapping regions of two images and in 3D reconstructions, since measures of similarity usually
assume a spatially independent grayscale map. We present a fast correction method based on esti-
mating a spatially variable illumination gain, and multiplying acquired CLSM images by the inverse
of the estimated gain. The method does not require any special calibration of reference images since
the gain estimate is extracted from the CLSM image being corrected itself. The proposed approach
exploits two types of morphological filters: the median filter and the upper Lipschitz cover. The pre-
sented correction method, tested on images of both artificial (homogeneous fluorescent layer) and
real biological specimens, namely sections of a rat embryo and a rat brain, proved to be very fast and
yielded a significant visual improvement. Microsc. Res. Tech. 00:000–000, 2010. © 2010 Wiley-Liss, Inc.
INTRODUCTION
A confocal laser scanning microscope (CLSM) makes
it possible to acquire two-dimensional (2D) digital
images of thin optical sections within a thick specimen;
thus it enables us to obtain three-dimensional (3D)
image data composed from 2D images of perfectly reg-
istered serial optical sections (Pawley, 1995). For fur-
ther processing and analysis of images acquired by
CLSM, such as 3D volume reconstruction of selected
features in the specimen, it would be ideal if the image
brightness were proportional to the concentration of
the fluorescent dye in the specimen. However, in prac-
tice this is not the case, neither in the axial direction
where the light attenuation with increasing depth
inside the specimen can be observed, nor in the lateral
direction where irregularities in image brightness
across the field of view are encountered. In our previ-
ous study (Čapek et al., 2006), we proposed methods
for compensation of light attenuation with depth, while
in the present study we focus on compensating inhomo-
geneous signal distribution in 2D images of focal
planes.
In the following, we assume according to Wilson
(2002) that the fluorescent field is proportional to the
intensity of the incident radiation. In an ideal case, a
2D digital image captured from specimen regions hav-
ing the same concentration of the fluorescent dye
should consist of equally bright pixels. This can be
expressed by the formula presented by Heintzmann
(2008, private communication):

Iem(x, y) = Iex(x, y) · Obj(x, y)    (1)

where Iem(x, y) denotes the intensity of the emitted
light, Iex(x, y) is the excitation intensity, and Obj(x, y)
represents the fluorescent dye concentration at the
pixel of the specimen image given by the coordinates
(x, y).
In accordance with formula (1), after applying
constant excitation intensity across the field of view
the emitted light intensity for object regions having
the same fluorescent dye concentration should be the
same. However, in real CLSM images this is not the
case, and different pixels of the acquired images of
the specimen regions with the same fluorescent dye
concentration have different grayscale levels (Figs. 1
and 2). These irregularities in image brightness across
the field of view may be caused by (i) lateral chromatic
aberration of the microscope objective. In fluorescence
microscopy, the excitation wavelength is in general
different from the emission wavelength. Because of lat-
eral chromatic aberration, an off-axis ray at the excita-
tion wavelength will cross the intermediate image
plane at a different point than the longer wavelength
emission ray coming from the same point on the speci-
men. As a result, the emission ray may partially or
totally miss the pinhole, and the image will become
J_ID: Z3U Customer A_ID: 10-140.R1 Cadmus Art: JEMT20965 Date: 24-OCTOBER-10 Stage: I Page: 1
ID: subramaniank Date: 24/10/10 Time: 12:56 Path: N:/3b2/JEMT/Vol00000/100165/APPFile/JW-JEMT100165
*Correspondence to: Jan Michálek, Department of Biomathematics, Institute
of Physiology, Academy of Sciences of the Czech Republic, v.v.i., Vídeňská 1083,
14220 Prague 4, Czech Republic. E-mail: michalek@biomed.cas.cz
Received 6 September 2010; accepted in revised form 4 October 2010
Contract grant sponsor: Grant Agency of the Czech Republic; Contract grant
numbers: 102/08/0691, 304/09/0733; Contract grant sponsor: Academy of
Sciences of the Czech Republic (Institutional Research Concept); Contract grant
number: AV0Z50110509; Contract grant sponsor: Ministry of Education, Youth,
and Sports of the Czech Republic; Contract grant numbers: LC06063, ME09010,
and MSM6840770012
DOI 10.1002/jemt.20965
Published online in Wiley Online Library (wileyonlinelibrary.com).
© 2010 WILEY-LISS, INC.
MICROSCOPY RESEARCH AND TECHNIQUE 00:000–000 (2010)
darker away from the microscope optical axis (Pawley,
2006), (ii) curvature of field especially when using non-
plan objectives, when an image of a planar surface
loses intensity off optical axis (Pawley, 2006), (iii)
spherical aberrations of the point spread function
(PSF), which affect excitation efficiency in the sample
(iv) improper alignment of the optical parts of the con-
focal microscope system, etc.
The undesirable darkening of certain image regions
may shadow significant details of the specimen from
the observer’s eye. The effect may become even more
striking when a mosaic is composed of a number of ad-
jacent fields of view (Fig. 3a). Moreover, an automatic
mosaicking algorithm may fail to recognize overlap-
ping regions in two neighboring fields of view (Figs. 5a–5c).
The reason for the misalignment is that in similarity
measures (Roche et al., 1999) used in image-
stitching software it is assumed that, between the
grayscale values of the two images to be stitched, there
exists either a functional relationship independent of
the pixel position or statistical relationship with sta-
tionary probability. However, in images with irregular-
ities in image brightness (Figs. 5a and 5b), these
assumptions are violated. As a result, the algorithm
may not identify the corresponding regions correctly
(Fig. 5c).
Spatially varying brightness mapping was dealt
with in various fields of biomedical imaging applying
different approaches (Hovhannisyan et al., 2008; Lee
and Bajcsy, 2006; Mangin, 2000). Mangin (2000) used
entropy minimization for automatic correction of in-
tensity nonuniformity in magnetic resonance (MR)
images. His method assumes that there is a narrow
intensity distribution for each tissue class; however,
this assumption is not necessarily satisfied in confocal
microscopy. Moreover, the algorithm is too slow (in the
order of minutes for a single image) to be of practical
interest for processing stacks of CLSM images. Lee
and Bajcsy (2006) proposed an intensity correction
technique in CLSM images, called mean-weight filter-
ing. Their method is based on searching for an opti-
mal, spatially adaptive, intensity transformation that
maximizes intensity contrast with respect to back-
ground, minimizes overall spatial intensity variation
for large area, and minimizes distortion of intensity
gradient for local features. The size and shape of
the filtering kernel has to be found experimentally,
thus the procedure is not fully automatic. Another
approach to image heterogeneity correction is based
on multiplication of the image acquired by CLSM or
multiphoton microscopy by a lateral correction factor
calculated from an image of a uniform fluorescent
sample (Hovhannisyan et al., 2008; Oldmixon and
Carlsson, 1993). However, a correction factor based on
a uniform sample makes it necessary to acquire
images of the calibration sample under conditions
very close to those applied to the acquisition of the
biological specimen, which can be tedious and difficult
to achieve, e.g., in optical sections deep inside a rela-
tively thick biological specimen.
In the present study, our aim was to find a fully auto-
matic, data driven (i.e., not relying on, e.g., a calibra-
tion set), method of compensation of lateral inhomoge-
neity in fluorescence signal distribution in CLSM
images, both for single frames and for mosaics com-
posed of a series of images acquired using, e.g., an
automatic motorized stage. Our direct motivation for
developing such a method was the need for fast
improvement of 3D volume reconstructions of biologi-
cal specimens larger than the field of view based on
composing stacks of CLSM images, both in lateral and
axial directions (Čapek et al., 2009). In the recon-
structed volumes we were able to correct light attenua-
tion in axial direction (Čapek et al., 2006); however, we
missed a convenient and efficient method for compen-
sation of image brightness irregularities in lateral
direction.
Fig. 1. CLSM image of homogeneous solution of DIOC3(3) fluores-
cent dye. HC PL APO 20× water immersion objective (N.A. = 0.70),
the excitation wavelength of 488 nm and emission wavelength range
from 531 to 627 nm were used. The image size is 550 µm × 550 µm.
Fig. 2. CLSM image of rat embryo specimen, one field of view.
HCX PL FLUOTAR 5× dry objective (N.A. = 0.15), the excitation
wavelength of 488 nm and emission wavelength range from 500 to
560 nm were used. The image size is 2.20 mm × 2.20 mm.
MATERIALS AND METHODS
Specimen Preparation
For testing of the proposed method we used images
of both artificial (homogeneous fluorescent layer) and
real biological specimens (rat embryo and brain).
Homogeneous Fluorescent Layer. A fluorescent
marker DIOC3(3) (Sigma-Aldrich Company) was dis-
solved in water to a final concentration of 40 µM. A drop-
let of this solution was placed between a glass slide and
a cover glass and sealed with nail polish.
Rat Embryo Specimen. A 17-day-old embryo was
fixed and stained for 24 h in fixative (10% formalin
with 1% eosin). After dehydration the embryo was em-
bedded in paraffin. Then a series of 30-µm thick physi-
cal slices was cut by a rotary microtome HM 340 E
(MICROM Laborgeräte, Walldorf, Germany) and
mounted on slides by poly-L-lysine (for details see
Čapek et al., 2009; Jirkovská et al., 2005).
Rat Brain Specimen. The preparation of rat brain
specimen is described in detail by Mao et al. (2010).
Briefly, an anesthetized Sprague-Dawley rat received an
intravenous injection of biotin-labeled Lycopersicon
esculentum lectin (Vector Laboratories Burlingame, CA)
before vascular perfusion with 4% paraformaldehyde in
PBS (pH 7.4) via a 21-gauge cannula in the left ventri-
cle. After the perfusion, the brain was excised and
immersed in fixative, rinsed with PBS, infiltrated over-
night with 30% sucrose in PBS, and then embedded in
OCT (Sakura, Torrance, CA) and frozen. A 30-µm thick
brain coronal section was rinsed in PBS and incubated
in a solution containing fluorescently labeled streptavi-
din in 1% normal goat serum for 1 h for microvessel
staining. The sections were mounted with Vectashield™
hard mount (Vector Laboratories, Burlingame, CA).
Acquisition of CLSM Images
Raw images of test specimens were acquired by a
Leica SPE CLSM using a solid state laser (15 mW)
yielding an excitation wavelength of 488 nm.
Homogeneous Fluorescent Layer. The images
(Fig. 1, 512 × 512 pixels, 550 µm × 550 µm) were
acquired by a HC PL APO 20× water immersion objec-
tive (N.A. = 0.70) using the excitation wavelength of 488
nm and emission wavelength range from 531 to 627 nm.
Rat Embryo Specimen. The images (Fig. 2, 512 ×
512 pixels, 2.20 mm × 2.20 mm) were acquired by a HCX
PL FLUOTAR 5× dry objective (N.A. = 0.15), using the
excitation wavelength of 488 nm and emission wave-
length range from 500 to 560 nm. Series of images were
acquired from 64 successive physical sections of the rat
embryo. Each physical section was split into six to eleven
overlapping horizontal fields of view, and the correspond-
ing stacks of optical sections, 9.7 µm apart, were
acquired using a manual microscopic stage. Mosaics of
such overlapping stacks were composed semiautomati-
cally, using the algorithm implemented in the GlueMRC
software (Karen et al., 2003). Images used for our tests
were either single 2D images in one field of view (Figs. 2,
5a and 5b), or a mosaic of one optical section of the rat
embryo, composed of eight fields of view, shown in Figure
3a. For description of image acquisition, vertical linking
of 3D images of embryo physical sections and correspond-
ing volume reconstruction see Čapek et al. (2009).
Rat Brain Specimen. Stacks of images (Fig. 4a, 512
× 512 pixels, 1.10 mm × 1.10 mm) were acquired by a HC
PL APO 20× water immersion objective (N.A. = 0.70),
using the excitation wavelength of 488 nm and emission
wavelength range from 500 to 600 nm.
Method for Compensation of Brightness
Inhomogeneities in CLSM Images
Our approach is based on estimating a spatially
varying gain which models the influence of effects such
as the lateral chromatic aberration, curvature of field,
and uneven excitation in the field of view on the signal
distribution, and correcting acquired images by invert-
ing the estimated gain. For gain estimation we
Fig. 3. Mosaic composed of CLSM images from eight fields of view of rat embryo specimen. (a) Origi-
nal image acquired by CLSM. (b) Lipschitz-cover estimated gain. (c) Image corrected after applying the
upper Lipschitz cover morphological operator without median filtering. (d) Image corrected by using a
median filter and then applying the upper Lipschitz cover. The mosaic size is 4.3 × 7.2 mm².
exploited a morphological operation applying the upper
Lipschitz cover and, moreover, in noisy images we
applied a median filter to reduce noise.
Estimation of the Space-Variable Gain Using the
Upper Lipschitz Cover Morphological Operator
In real CLSM images acquired using constant excita-
tion intensity Iex, the image brightness varies depend-
ing on the pixel location within the frame (Fig. 1), even
if the concentration of fluorescent dye is constant
across the specimen. To model this dependence we sup-
pose that the recorded light at different pixel positions
is a product of the dye concentration, the excitation
light intensity and a single function, gain(x, y), that in
total accounts for any aberrations causing uneven sig-
nal distribution (Fig. 6):

Irec(x, y) = gain(x, y) · Obj(x, y) · Iex    (2)
If an estimate of the gain function, g̃ain(x, y), were
known, one could easily correct the recorded image to
obtain the dye concentration in the specimen:

Obj(x, y) = (1 / (g̃ain(x, y) · Iex)) · Irec(x, y)    (3)
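Formulas (2) and (3) can be illustrated with a small one-dimensional numeric sketch (hypothetical values, not from the paper): a smooth gain darkens off-center pixels, and dividing the recorded signal by a known gain recovers the dye concentration exactly.

```python
import numpy as np

# Hypothetical 1-D illustration of formulas (2) and (3).
x = np.linspace(-1.0, 1.0, 101)
obj = np.where(np.abs(x) < 0.3, 2.0, 1.0)   # dye concentration, sharp edges
gain = 1.0 - 0.5 * x ** 2                   # slowly varying, continuous gain
i_ex = 1.0                                  # constant excitation intensity
i_rec = gain * obj * i_ex                   # formula (2): recorded intensity
obj_hat = i_rec / (gain * i_ex)             # formula (3), gain known exactly
assert np.allclose(obj_hat, obj)
```

With a real CLSM image the gain is of course unknown; the article estimates it from the image itself via the upper Lipschitz cover.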
To separate the gain from the object in the
recorded image, we need some qualitative features
that distinguish the function Obj(x, y) from
gain(x, y). For example, the gain changes slowly, is
continuous and has therefore only a small number of
minima or maxima, while the concentration of the
fluorescent dye in the microscope specimen changes
abruptly on boundaries of biological structures, which
in turn produces discontinuities and consequently
high frequencies. Such distinguishing features are
listed in Table 1.
It is obvious from formula (3) that in a CLSM
image of a specimen with uniform dye concentration
such as in Figure 1 the gray value distribution
assumes a shape identical to that of the gain func-
tion. To obtain a gain estimate for a real, nonuniform,
specimen, we assume that local maxima in the
acquired image correspond to specimen regions with
the highest fluorescent dye concentration, and try to
fill (pad) submaximal regions numerically with this
maximal concentration.
Because the form of the padded image should
reflect the form of the gain, it must satisfy the con-
dition that the rate of change is slow. This is guar-
anteed if the padded function satisfies the Lipschitz
condition:

|gain(x1, y1) − gain(x2, y2)| ≤ K · |(x1, y1) − (x2, y2)|    (4)

for any two pixels (x1, y1) and (x2, y2) of the image. K is a
constant factor called the Lipschitz constant, limiting
the maximum rate of change of gain(x, y). Padding the
acquired image numerically can be done very fast by
subjecting the image to a morphological operator called
"the upper Lipschitz cover." The upper Lipschitz cover of
an image I(x, y) is the infimum of functions L(x, y) satis-
fying the conditions:
Fig. 4. CLSM stack of images 7 µm apart from a rat brain speci-
men: (a) the original images, (b) the reciprocal of the Lipschitz-cover
estimated gain, (c) images corrected after applying the upper Lip-
schitz cover morphological operator. HC PL APO 20× water immer-
sion objective (N.A. = 0.70), the excitation wavelength of 488 nm and
emission wavelength range from 500 to 600 nm were used. The size of
each image is 550 µm × 550 µm.
|L(x1, y1) − L(x2, y2)| ≤ K · |(x1, y1) − (x2, y2)|
L(x, y) ≥ I(x, y)    (5)
The upper Lipschitz cover constructed in the process
is used as the gain estimate g˜ain(x, y). The Lipschitz
constant K which bounds the rate of change of the gain
estimate is a selectable parameter of the algorithm. A
fast algorithm for numerical computation of the Lipschitz
cover was presented by Štencel and Janáček (2006).
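For intuition, the cover defined by conditions (5) equals, for the chessboard metric, a dilation of the image by a cone of slope K, and can be obtained with a two-pass chamfer-style relaxation. The sketch below is a naive Python illustration of this construction, not the fast algorithm of Štencel and Janáček (2006):

```python
import numpy as np

def upper_lipschitz_cover(img, K):
    """Smallest L >= img whose change per chessboard step is at most K.

    Equivalently, L(p) = max over q of (img(q) - K * d(p, q)): cones of
    slope K erected over every pixel. Two raster passes compute this
    exactly for the chessboard metric d.
    """
    L = img.astype(float).copy()
    h, w = L.shape
    for y in range(h):                      # forward pass
        for x in range(w):
            for dy, dx in ((-1, -1), (-1, 0), (-1, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    L[y, x] = max(L[y, x], L[ny, nx] - K)
    for y in range(h - 1, -1, -1):          # backward pass
        for x in range(w - 1, -1, -1):
            for dy, dx in ((1, 1), (1, 0), (1, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    L[y, x] = max(L[y, x], L[ny, nx] - K)
    return L

peak = np.zeros((5, 5))
peak[2, 2] = 10.0
cover = upper_lipschitz_cover(peak, K=3.0)
# a cone around the peak: 10 at the peak, 7 one step away, 4 two steps away
```

This also makes the noise sensitivity visible: a single hot pixel produces exactly such a cone, which is why the article median-filters noisy images first.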
Noise Removal Using the Morphology Operator
Fast Median Filter
If the Lipschitz cover-based gain estimation is done
from noisy images, the upper Lipschitz cover creates
cones with vertices at noise peaks, which produce
undesirable artifacts. Therefore, in noisy images we
reduced the noise by using a median filter before apply-
ing the upper Lipschitz cover. The choice of the median
kernel size depends on the noise content of the particu-
lar CLSM image stack.
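As a sketch, a small 1-D median filter (kernel size 3 here, purely illustrative) suppresses an isolated noise peak that would otherwise force the Lipschitz cover to erect a cone around it:

```python
import numpy as np

signal = np.full(9, 100.0)
signal[4] = 4000.0                    # isolated noise peak
k = 3                                 # kernel size; depends on the noise content
padded = np.pad(signal, k // 2, mode='edge')
filtered = np.array([np.median(padded[i:i + k]) for i in range(signal.size)])
assert filtered[4] == 100.0           # spike removed before gain estimation
```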
Computer Implementation of the Method
We used the algorithm of the upper Lipschitz cover,
described by Štencel and Janáček (2006). We were pro-
vided with the C-code of the algorithm by Dr. Janáček.
Image handling like read/write, thresholding or multi-
plication by the correction factor were, for greater
convenience, programmed in MATLAB. The median fil-
ter implementation supplied with MATLAB’s Image
Processing Toolbox turned out to produce incorrect val-
ues at the image edges. Therefore, we used the elegant
median filter algorithm described by Perreault and
Hébert (2007), the C code of which is available from the
author’s homepage. Perreault’s algorithm runs in con-
stant time (the fastest run time possible), and is—
besides providing correct results—several times faster
on the images of interest than the MATLAB median fil-
ter. Both the C-coded Lipschitz filter and the median
filter were made callable from within MATLAB by
embedding them in respective MATLAB wrappers.
RESULTS
First, we applied our compensation algorithm to the
image in Figure 1 of a uniform sample represented by a
homogeneous fluorescent layer. Figure 7a shows the
upper Lipschitz cover of the uniform sample image in
Figure 1, Figure 7b the inverse gain, and Figure 7c the
corrected image. The gain estimate and its inverse
show circular artifacts which give rise to darker spots
centered at the noise peaks in the raw image. There-
fore, as described above, in noisy images we reduced
the noise by using a median filter before applying the
upper Lipschitz cover. Figure 8 shows the result when
the original image (Fig. 1) is median-filtered prior to
constructing the upper Lipschitz cover.
Further, we tested our approach for correction of a
highly nonhomogeneous image represented by a
mosaic composed of CLSM images from eight fields of
Fig. 5. Alignment of two fields of view of rat embryo. (a–c) Misalignment of two fields of view of rat
embryo specimen caused by space dependent grayscale mapping between the left and the right image:
(a) left field of view, (b) right field of view, (c) stitched image. (d–f) Proper alignment of the same two
fields of view after correction by the upper Lipschitz cover. (d) corrected left field of view, (e) corrected
right field of view, (f) stitched image. The size of each field of view is 2.20 mm × 2.20 mm.
view of a rat embryo. After applying the upper Lip-
schitz cover morphological operator (Fig. 3c) the com-
pensation of inhomogeneous fluorescence signal distri-
bution was satisfactory, however the image appearance
was rather patch-like. This artifact was again compen-
sated using a median filter and then applying the
upper Lipschitz cover (Fig. 3d).
Finally, we checked our method for a fibrous struc-
ture with relatively homogeneous background. We cor-
rected a CLSM stack of images from a rat brain speci-
men (Fig. 4a) by first median filtering each image in
the stack and then applying the upper Lipschitz cover
morphological operator to estimate the inverted gains
(Fig. 4b). Multiplying the original images by the
inverted gains yielded the corrected stack (Fig. 4c).
The median filter is applied only for the gain esti-
mate, not on the corrected image. In Figure 3c, the
gain was estimated by applying the upper Lipschitz
cover on the raw image. Contrary to that, in Figures 3d
and 4c, the gain was estimated by applying the upper
Lipschitz cover on the median-filtered image. But in all
examples, the corrected images are obtained by multi-
plying the raw, noisy images, not median-filtered
images, by the inverse gain estimate. Thus, all fine
structures of the acquired images are preserved, and
no low-pass filtering takes place.
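This correction step can be sketched as follows (a minimal illustration with hypothetical names; normalizing the gain estimate to its maximum, so that the best-illuminated pixels keep their gray values, is our assumption):

```python
import numpy as np

def correct_by_inverse_gain(raw, gain_estimate, eps=1e-6):
    # The gain may be estimated from a median-filtered copy, but the raw,
    # unfiltered image is what gets multiplied by the inverse gain, so the
    # fine structures of the acquisition survive.
    g = np.maximum(gain_estimate, eps) / gain_estimate.max()
    return raw / g

gain = np.array([[0.5, 1.0], [1.0, 0.5]])   # hypothetical smooth gain estimate
obj = np.full((2, 2), 8.0)                  # uniform dye concentration
raw = gain * obj                            # formula (2) with Iex = 1
assert np.allclose(correct_by_inverse_gain(raw, gain), obj)
```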
Figures 5d–5f demonstrate the improvement of
image stitching achieved when the spatial brightness
irregularity is corrected before stitching. In Figures
5a–5c, spatially variable grayscale mapping made a
stitching algorithm fail, because the measure of simi-
larity of the two images assumed spatially independent
grayscale mapping between the images. The two
arrows in Figure 5c indicate the structure which
is erroneously included twice in the stitched image.
Figures 5d–5f show that the misalignment problem is
eliminated when Lipschitz-cover-based correction of
the spatially variable gain is applied before stitching.
The arrow in Figure 5f points to the location where the
previous alignment error has disappeared.
DISCUSSION
The method for lateral brightness correction pre-
sented in this article is based on the assumption that
the distortions are caused by a multiplicative gain
modeled by formula (2). The results shown in Figures
3, 4, and 8 suggest that the multiplicative assumption
captures well the image formation process. Median fil-
tering removing noise peaks in the raw images enables
the upper Lipschitz cover to reconstruct the slowly
varying gain precisely enough to yield corrected images
that—besides significant visual improvement—are
much better suited for further processing. The
improvement of image stitching achieved after the spa-
tial brightness irregularity is corrected is demon-
strated in Figures 5d–5f. In Figures 5a–5c, spatially
variable grayscale mapping made a stitching algorithm
fail, because the measure of similarity of the two
images assumed spatially independent grayscale map-
ping between the images. Figures 5d–5f show that the
misalignment problem is eliminated when Lipschitz-
cover based correction of the spatially variable gain is
applied before stitching.
Our approach proved to be advantageous when com-
pared with previous methods of lateral brightness cor-
rection in images. Unlike Mangin’s (2000) method
based on entropy minimization assuming a narrow in-
tensity distribution for each tissue class, we made
more realistic assumptions, tailored for confocal micro-
scopic images. Our technique is fully automatic, which
is not the case for the correction technique suggested
by Lee and Bajcsy (2006), requiring the experimental
assessment of the size and shape of the filtering kernel.
Moreover, our approach does not need any calibration,
which is necessary in some other correction methods,
usually applying a lateral correction factor calculated
from an image of a uniform fluorescent sample (Hov-
hannisyan et al., 2008; Oldmixon and Carlsson, 1993).
Thus, difficulties that may arise when attempting to
match exactly conditions of the test specimen acquisi-
tion and those applied to the acquisition of the biologi-
cal specimen are excluded.
Our new algorithm for lateral brightness correction
provides the user with high quality laterally corrected
images substantially faster than any of the aforemen-
tioned approaches. Fully automatic processing of a
whole stack of 60 CLSM images (512 × 512 × 8 bit)
takes typically about 11 s on a 3-GHz PC when the fast
C-coded algorithm for the upper Lipschitz cover
described by Štencel and Janáček (2006) and the fast
C-coded algorithm for the median filter running in O(1)
Fig. 6. Formation of the recorded image according to formula (2).
Iex—excitation intensity, Obj(x, y)—concentration of the fluorophore
at the point (x, y), gain(x, y)—gain at (x, y), Irec(x, y)—recorded light
intensity.

TABLE 1. Comparison of the features of the object function and the
gain function in CLSM images

                            Gain(x, y)    Obj(x, y)
  Continuity                Continuous    Discontinuous
  Number of minima/maxima   Small         Large
  Rate of change            Slow          Fast
time published by Perreault and Hébert (2007) are
used. Given its processing speed, our method is well
suited for routine lateral brightness correction of large
CLSM stacks, that can even be acquired using an auto-
matic motorized stage for composing mosaics of 3D
images of large portions of biological tissues. Thus, in
the reconstructed volumes, in addition to the light
attenuation in axial direction (Čapek et al., 2006; Gopi-
nath et al., 2008; Wu and Ji, 2005), lateral brightness
variations can be compensated using the new method
outlined in this article.
ACKNOWLEDGMENTS
The authors thank Dr. Radomíra Vágnerová (Insti-
tute of Histology and Embryology, 1st Faculty of Medi-
cine, Charles University, Prague, Czech Republic) for
preparing the rat embryo specimens used in Figures 2,
3, and 5, Dr. Xiao Wen Mao (Loma Linda University,
CA) for providing the rat brain specimen (Fig. 4),
and Zuzana Burdíková for her help with the specimen
of homogeneous fluorescent layer (Fig. 1). They
thank Dr. Jiří Janáček for providing the C-coded
Lipschitz-cover algorithm, as well as for proofreading
the manuscript.
REFERENCES
Čapek M, Janáček J, Kubínová L. 2006. Methods for compensation of
the light attenuation with depth of images captured by a confocal
microscope. Microsc Res Tech 69:624–635.
Čapek M, Brůža P, Janáček J, Karen P, Kubínová L, Vágnerová R.
2009. Volume reconstruction of large tissue specimens from serial
physical sections using confocal microscopy and correction of cut-
ting deformations by elastic registration. Microsc Res Tech
72:110–119.
Gopinath S, Wen Q, Thakoor N, Luby-Phelps K, Gao JX. 2008. A sta-
tistical approach for intensity loss compensation of confocal micros-
copy images. J Microsc 230:143–159.
Hovhannisyan VA, Su PJ, Chen YF, Dong CY. 2008. Image heteroge-
neity correction in large-area, three-dimensional multiphoton mi-
croscopy. Opt Express 16:5107–5117.
Jirkovská M, Náprstková I, Janáček J, Kučera T, Macásek J, Karen P,
Kubínová L. 2005. Three-dimensional reconstructions from non-
deparaffinized tissue sections. Anat Embryol 210:163–173.
Karen P, Jirkovská M, Tomori Z, Demjénová E, Janáček J, Kubínová
L. 2003. Three-dimensional computer reconstruction of large tissue
volumes based on composing series of high-resolution confocal
images by GlueMRC and LinkMRC software. Microsc Res Tech
62:415–422.
Fig. 7. Correction of the CLSM image of homogeneous solution of DIOC3(3) fluorescent dye shown in
Figure 1. (a) Lipschitz-cover estimated gain, (b) the inverse of the gain, (c) image corrected after apply-
ing the upper Lipschitz cover morphological operator. The image size is 550 μm × 550 μm.
Fig. 8. Correction of the CLSM image of homogeneous solution of DIOC3(3) fluorescent dye shown in
Figure 1 using a median filter and then applying the upper Lipschitz cover. (a) Lipschitz-cover estimated
gain from median-prefiltered image, (b) the inverse of the gain, (c) image corrected using a median filter
and then applying the upper Lipschitz cover morphological operator. The image size is 550 μm × 550 μm.
Lee SC, Bajcsy P. 2006. Spatial intensity correction of fluorescent con-
focal laser scanning microscope images by mean-weight filtering. J
Microsc 221:122–136.
Mangin J. 2000. Entropy minimization for automatic correction of in-
tensity nonuniformity. Math Methods Biomed Image Analysis
2000:162–169.
Mao XW, Favre CJ, Fike JR, Kubínová L, Anderson E, Campbell-
Beachler M, Jones T, Smith A, Rightnar S, Nelson GA. 2010. High-
LET radiation-induced response of microvessels in the hippocam-
pus. Radiat Res 173:486–493.
Oldmixon EH, Carlsson K. 1993. Methods for large data volumes
from confocal scanning laser microscopy of lung. J Microsc
170:221–228.
Pawley JB, editor. 2006. Handbook of biological confocal microscopy,
3rd ed. Berlin: Springer.
Perreault S, Hébert P. 2007. Median filtering in constant time. IEEE
Trans Image Process 16:2389–2394.
Roche A, Malandain G, Ayache N, Prima S. 1999. Towards a better
comprehension of similarity measures used in medical image regis-
tration. MICCAI 1999:555–566.
Štencel M, Janáček J. 2006. On calculation of chamfer distance and
Lipschitz covers in digital images. In: Lechnerová R, Saxl I, Beneš
V, editors. Proceedings S4G. Prague: Union of Czech Mathemati-
cians and Physicists. pp. 517–522.
Wilson T. 2002. Confocal microscopy: Basic principles and architec-
tures. In: Diaspro A, editor. Confocal and two-photon microscopy:
Foundations, applications, and advances. New York: Wiley-Liss.
pp. 27.
Wu HX, Ji L. 2005. Fully automated intensity compensation for confo-
cal microscopic images. J Microsc 220:9–19.