This document presents a new tensor-based morphometric framework for quantifying cortical shape variations using local area elements computed from Riemannian metric tensors. It introduces a novel weighted spherical harmonic (SPHARM) representation that generalizes the traditional SPHARM. The weighted-SPHARM provides a smooth parametrization of cortical meshes from which metric tensors and local area elements can be computed, allowing cortical growth to be quantified along both the normal and tangential directions. The document also describes how the weighted-SPHARM relates to diffusion smoothing and the advantages it offers for cortical surface analysis and registration.
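As a rough, self-contained illustration of the local area element idea (this is not the authors' code; the grid parametrization, function names, and finite-difference scheme are assumptions made only for this sketch), the following Python snippet computes sqrt(det g) from a surface sampled on a regular parameter grid:

# Illustrative sketch only: computes the local area element sqrt(det g) of a
# parametrized surface X(theta, phi) sampled on a regular grid. Not the
# authors' implementation; names and the finite-difference scheme are assumed.
import numpy as np

def local_area_element(X, dtheta, dphi):
    """X: (n_theta, n_phi, 3) array of surface coordinates.
    Returns sqrt(det g) at every grid point, where g is the Riemannian
    metric tensor induced by the (theta, phi) parametrization."""
    # Partial derivatives of the parametrization (central differences in the
    # interior, one-sided differences at the grid edges).
    X_theta = np.gradient(X, dtheta, axis=0)
    X_phi = np.gradient(X, dphi, axis=1)
    # Metric tensor components g_11, g_12, g_22.
    g11 = np.sum(X_theta * X_theta, axis=-1)
    g12 = np.sum(X_theta * X_phi, axis=-1)
    g22 = np.sum(X_phi * X_phi, axis=-1)
    det_g = g11 * g22 - g12 ** 2
    return np.sqrt(np.maximum(det_g, 0.0))

# Sanity check on the unit sphere, where sqrt(det g) = sin(theta) exactly.
theta = np.linspace(0.01, np.pi - 0.01, 100)
phi = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
T, P = np.meshgrid(theta, phi, indexing="ij")
X = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)], axis=-1)
dA = local_area_element(X, theta[1] - theta[0], phi[1] - phi[0])
# Interior points should match sin(theta) up to finite-difference error.
print(np.abs(dA[1:-1, 1:-1] - np.sin(T[1:-1, 1:-1])).max())

Summing such area elements over a region of the parameter domain approximates the cortical surface area of that region, which is the quantity the tensor-based framework tracks along the tangential direction.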
IEEE TRANSACTIONS ON MEDICAL IMAGING
Tensor-based Cortical Surface Morphometry via Weighted Spherical Harmonic Representation

Moo K. Chung∗, Kim M. Dalton, Richard J. Davidson

Manuscript received October 31, 2006; revised July 18, 2007. Asterisk indicates corresponding author. M. K. Chung is with the Department of Biostatistics and Medical Informatics, and the Waisman Laboratory for Brain Imaging and Behavior, University of Wisconsin, Madison, WI 53706 (e-mail: mkchung@wisc.edu). K. M. Dalton and R. J. Davidson are with the Waisman Laboratory for Brain Imaging and Behavior, University of Wisconsin, Madison.
Abstract— We present a new tensor-based morphometric framework that quantifies cortical shape variations using a local area element. The local area element is computed from the Riemannian metric tensors, which are obtained from the smooth functional parametrization of a cortical mesh. For the smooth parametrization, we have developed a novel weighted spherical harmonic (SPHARM) representation, which generalizes the traditional SPHARM as a special case. For a specific choice of weights, the weighted-SPHARM is shown to be the least squares approximation to the solution of an isotropic heat diffusion on a unit sphere. The main aims of this paper are to present the weighted-SPHARM and to show how it can be used in tensor-based morphometry. As an illustration, the methodology has been applied to the problem of detecting abnormal cortical regions in a group of high-functioning autistic subjects.
Index Terms— Spherical harmonics, tensor-based morphometry, SPHARM, cortical surface
I. INTRODUCTION
In many previous cortical morphometric studies, cortical
thickness has been mainly used to quantify cortical shape
variations in a population [13] [21] [27] [33] [34] [35].
The cortical thickness measures the amount of gray matter
along the normal direction on a cortical surface. However,
the gray matter growth can be characterized by both the
normal and the tangential directions along the surface [13].
In this paper, we present a new tensor-based morphometry
(TBM) that quantifies the amount of gray matter along the
tangential direction via the concept of a local area element.
The local area element is obtained from the Riemannian
metric tensors, which are computed from the novel weighted
spherical harmonic (SPHARM) representation [9]. We will
review the literature that is directly related to our methodology and state what our specific contributions are.
Unlike the deformation-based morphometry (DBM) [5] [12]
[49], which uses deformation obtained from the nonlinear
registration of brain images, TBM uses the high order spatial
derivatives of deformation in constructing morphological ten-
sor maps such as Jacobian determinant, torsion and vorticity
[4] [12] [13] [20] [44]. From these tensor maps, 3D statistical
parametric maps (SPM) are constructed to quantify variations
in higher order changes of deformation fields. The main
morphometric measure in the TBM is the Jacobian determinant of the deformation field, since it directly measures tissue growth and atrophy. The advantage of TBM over DBM is that TBM can directly characterize tissue growth while DBM only characterizes the relative positional difference, so the Jacobian determinant is a more relevant metric for quantifying tissue growth and atrophy [12]. In this study, the concept of the Jacobian determinant is generalized to a local area element via the Riemannian metric tensor formulation. Our local area element is the differential geometric generalization of the Jacobian determinant in Riemannian manifolds, so the area element can be used to quantify the tangential cortical tissue growth and atrophy directly.
Manuscript received October 31, 2006; revised July 18, 2007. Asterisk indicates corresponding author.
M.K. Chung is with the Department of Biostatistics and Medical Informatics, and the Waisman Laboratory for Brain Imaging and Behavior, University of Wisconsin, Madison, WI 53706 (e-mail: mkchung@wisc.edu).
K. M. Dalton and R. J. Davidson are with the Waisman Laboratory for Brain Imaging and Behavior, University of Wisconsin, Madison.
As a subset of TBM, cortical surface specific morphometries
have been developed [11] [13] [16] [45] [47]. Unlike 3D whole
brain volume based TBM, the surface specific morphometries
have the advantage of providing a direct quantification of
cortical morphology. Further, better sensitivity and specificity
can be obtained in analyzing cortical surface specific mor-
phometric changes [1] [13] [33]. The cerebral cortex is a
2-dimensional highly convoluted sheet without any holes or
handles topologically equivalent to a sphere [20]. Most of
the features that distinguish these cortical regions can only
be measured relative to the local orientation of the cortical
surface [16]. It is likely that different clinical populations
will exhibit different cortical surface geometry. By analyzing
surface measures such as cortical thickness, curvatures, surface
area, local area element and fractal dimension, brain shape
differences can be quantified locally along the cortical surface
[13] [11] [45] [47]. Cortical surface analyses require the
segmentation of tissue boundaries, which are mainly obtained
as high resolution triangle meshes from deformable surface
algorithms [19] [16] [34]. The interface between gray and
white matter is called the inner surface while the gray matter
and cerebrospinal fluid (CSF) interface is called the outer
surface. In this study, we will only use the outer surface for
the local area element computation.
Once we have a triangular mesh as a realization of a
cortical surface, surface related geometric quantities can be
computed from the mesh. Due to the discrete nature of trian-
gle mesh, surface parameterization is necessary for accurate
and smooth estimation of the geometric quantities. Cortical
surface parameterization has been mainly done by fitting a
quadratic polynomial locally [13] [28] [29]. Then from this
local parameterization, Gaussian and mean curvatures and
the Riemmanian metric tensors are computed to characterize
cortical shape variations. However, the quadratic polynomial fit
requires accurate normal vector estimation, which tends to be
highly unstable in triangle meshes [36] [53]. In contrast to this
local approach, a global parameterization via SPHARM is also
available [23] [24] [31] [40] [41]. This traditional SPHARM
representation has been mainly used as a data reduction
technique rather than obtaining high order spatial derivative
information. 2D anatomical boundaries such as ventricle surfaces [23] and hippocampal surfaces [41] are parameterized by SPHARM and their coefficients are fed into statistical analyses such as principal component analysis, linear discriminant analysis and support vector machines. The main geometric features are encoded in the low degree spherical harmonics, while the noise resides in the high degree spherical harmonics [24].
In this study, we generalize the traditional SPHARM by
weighting its coefficients with exponentially decaying factors,
and develop an iterative analytic differentiation framework
for computing the Riemannian metric tensors. It will be
shown that the weighted-SPHARM is a more generalized
framework than the traditional SPHARM. Since we are not
performing a finite difference based numerical differentiation,
the tensor estimation should be more stable. Compared to
the local polynomial fitting, our approach completely avoids
estimating unstable normal vectors. The weighted-SPHARM
tends to be computationally expensive compared to the local
quadratic polynomial fitting while providing more accuracy
and flexibility for a hierarchical representation.
In previous TBM, computations on a discrete triangle mesh produced significant mesh noise in the cortical measures. In order to increase the signal-to-noise ratio (SNR) and the sensitivity of the statistical analysis of cortical measures, cortical surface based data smoothing such as diffusion smoothing was necessary [1] [8] [13] [33] [46]. The drawback of the diffusion smoothing
is the complexity of setting up a finite element method (FEM)
and making the numerical scheme stable [8] [13]. Since
the weighted-SPHARM is mathematically equivalent to the
diffusion smoothing [9] while it uses the exact analytical basis,
it offers a more accurate numerical approximation over the
diffusion smoothing without additional computation. Further,
because the full width at the half maximum (FWHM) of the
smoothing kernel can be exactly computed in the weighted-
SPHARM while it is only an asymptotic approximation in
the diffusion smoothing [11], the random field theory based
statistical analysis can be used. This provides a more coherent
and unified cortical surface analysis framework.
Once we compute the local area elements from the
weighted-SPHARM, it is necessary to compare them across
subjects via surface registration. Most previous surface reg-
istration methods are formulated as an optimization problem
by minimizing an objective function that measures the global
fit of two surfaces while maximizing the smoothness of the
deformation in such a way that the gyral patterns are matched
smoothly [11] [16] [18] [21] [38] [47]. These types of surface
registration techniques are computationally expensive. In the
weighted-SPHARM representation, the surface registration is
straightforward and does not require any sort of optimizations
explicitly. Corresponding surface positions are established by
matching spherical harmonics [9]. This technique has an
advantage of bypassing the computationally expensive opti-
mization problem since the correspondence across subjects is
built into the weighted-SPHARM representation itself.
Fig. 1. Using the deformable surface algorithm that establishes a mapping from a unit sphere to a cortical surface, we parameterize the cortical surface with the polar angle θ and the azimuthal angle ϕ.
Fig. 2. Spherical harmonic basis of selective degree and orders. The center
of the concentric circles at order m = 0 is the north pole.
II. PRELIMINARY
In this section, we introduce mathematical notations and
basic concepts of SPHARM. Since there are variations on
defining the associated Legendre polynomials and the spher-
ical harmonics, it is necessary to clearly state the exact
mathematical forms to minimize confusion. Much of the SPHARM literature [7] [23] [24] [41] uses the complex-valued spherical harmonics, while we use the real-valued spherical harmonics since they are more intuitive for setting up a statistical model.
A. Surface Parametrization
Let M and S² be a cortical surface and a unit sphere respectively. M and S² are realized as polygonal meshes with more than 80000 triangle elements. It is natural to assume the cortical surface to be a smooth 2-dimensional Riemannian manifold parameterized by two parameters [19]. This parametrization is constructed in the following way. A point u = (u1, u2, u3) ∈ S² is mapped to p = (x, y, z) ∈ M via the mapping U, which is obtained by a deformable surface algorithm that preserves anatomical homology and the topological connectivity of meshes (Figure 1). We will refer to this mapping as the spherical mapping. Then we parameterize u by the spherical coordinates

(u1, u2, u3) = (sin θ cos ϕ, sin θ sin ϕ, cos θ)

with (θ, ϕ) ∈ N = [0, π] ⊗ [0, 2π). The polar angle θ is the angle from the north pole and the azimuthal angle ϕ is the angle along the horizontal cross section of an MRI (Figure 1). The mapping from the parameter space N to the unit sphere S² will be denoted as X, i.e. X : N → S². Then we have a composite mapping Z from the parameter space to the cortical surface: Z = U ◦ X : N → M. Z is a 3D vector of surface coordinates and it will be stochastically modeled as

Z(θ, ϕ) = ν(θ, ϕ) + ǫ(θ, ϕ),   (1)

where ν is an unknown true differentiable parametrization and ǫ is a random vector field on the unit sphere. The computation of the Riemannian metric tensors and the local area element requires estimating the differentiable function ν.
B. Spherical Harmonic Representation
The basis functions on a unit sphere are given as the eigenfunctions satisfying ∆f + λf = 0, where ∆ is the spherical Laplacian

∆ = (1/sin θ) ∂/∂θ (sin θ ∂/∂θ) + (1/sin² θ) ∂²/∂ϕ².
There are 2l + 1 eigenfunctions, denoted as Ylm (|m| ≤ l), corresponding to the same eigenvalue λ = l(l + 1). Ylm is called the spherical harmonic of degree l and order m [15]. The explicit form of the 2l + 1 spherical harmonics of degree l is given by

Ylm = clm P_l^{|m|}(cos θ) sin(|m|ϕ)   for −l ≤ m ≤ −1,
Ylm = (clm/√2) P_l^{|m|}(cos θ)        for m = 0,
Ylm = clm P_l^{|m|}(cos θ) cos(|m|ϕ)   for 1 ≤ m ≤ l,

where clm = [(2l + 1)/(2π) · (l − |m|)!/(l + |m|)!]^{1/2} and P_l^m is the associated Legendre polynomial of order m. Spherical harmonics of particular degrees and orders are illustrated in Figure 2. For fixed l, the P_l^m form orthogonal polynomials over [−1, 1]. Following the convention used in Arfken [3], we have omitted the phase (−1)^m in the definition of the associated Legendre polynomial. Much of the previous SPHARM literature [7] [23] [24] [41] used the complex-valued spherical harmonics, so care is needed in comparing different numerical implementations of the associated Legendre polynomials and the spherical harmonics. The spherical harmonics form an orthonormal basis on S² such that

∫_{S²} Yij(p) Ylm(p) dµ(p) = 1 if i = l and j = m, and 0 otherwise.   (2)
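To make the basis above easy to experiment with, the following Python sketch evaluates the real spherical harmonics exactly as defined here (normalizing constant clm, Condon-Shortley phase omitted). The function name real_sph_harm is ours; SciPy's associated Legendre routine includes the (−1)^m phase, so it is multiplied out to match the convention of this paper.

import numpy as np
from scipy.special import lpmv, factorial

def real_sph_harm(l, m, theta, phi):
    """Real spherical harmonic Y_lm(theta, phi) of degree l and order m,
    with the Condon-Shortley phase omitted as in Arfken's convention."""
    am = abs(m)
    # lpmv includes the (-1)^m phase; remove it to match the definition above
    P = (-1.0) ** am * lpmv(am, l, np.cos(theta))
    c = np.sqrt((2 * l + 1) / (2 * np.pi) * factorial(l - am) / factorial(l + am))
    if m < 0:
        return c * P * np.sin(am * phi)
    if m == 0:
        return c / np.sqrt(2) * P
    return c * P * np.cos(am * phi)

A quick numerical check of the orthonormality relation (2), for instance by Monte Carlo integration over the sphere, is a convenient way to confirm that the chosen Legendre convention matches the one used here.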
For f, h ∈ L²(S²), the space of square integrable functions on S², the inner product is defined as

⟨f, h⟩ = ∫₀^{2π} ∫₀^{π} f(θ, ϕ) h(θ, ϕ) sin θ dθ dϕ,

where the Lebesgue measure is dµ(θ, ϕ) = sin θ dθ dϕ. Consider the subspace

Hk = { Σ_{l=0}^{k} Σ_{m=−l}^{l} βlm Ylm : βlm ∈ R } ⊂ L²(S²),

which is spanned by the spherical harmonics up to degree k. We are interested in estimating f ∈ L²(S²) using a function in Hk. The least squares estimation (LSE) of f in the subspace Hk is then given by the finite Fourier series expansion.

Theorem 1:

Σ_{l=0}^{k} Σ_{m=−l}^{l} flm Ylm = arg min_{h ∈ Hk} ‖f − h‖²,

where flm = ⟨f, Ylm⟩ are the Fourier coefficients and the norm is defined as ‖f‖ = ⟨f, f⟩^{1/2}. This is the basis of the traditional SPHARM representation of closed anatomical boundaries [23], [24], [41].
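As a discrete analogue of Theorem 1, the least squares SPHARM fit can be computed on sampled points (θi, ϕi) of a mesh by solving an ordinary linear least squares problem in the basis above. This is only a sketch under the assumption that the samples cover the sphere reasonably well; real_sph_harm is the helper defined earlier and the name spharm_lse is ours.

def spharm_lse(theta, phi, f, k):
    """Traditional SPHARM: least squares fit of the samples f(theta_i, phi_i)
    by spherical harmonics up to degree k (discrete analogue of Theorem 1)."""
    basis = np.column_stack([real_sph_harm(l, m, theta, phi)
                             for l in range(k + 1)
                             for m in range(-l, l + 1)])
    coeff, *_ = np.linalg.lstsq(basis, f, rcond=None)
    return coeff, basis @ coeff   # estimated Fourier coefficients and fitted values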
III. WEIGHTED-SPHARM
A. Basic Theory
The traditional SPHARM is only one possible representa-
tion of functional data measured on a unit sphere. We present
a more general representation called the weighted-SPHARM,
which weights the coefficients of the traditional SPHARM by
the eigenvalues of a kernel. It can be shown that the traditional
SPHARM is the special case of the weighted-SPHARM.
We start with the spectral representation of a positive definite kernel on S². Consider the positive definite kernel K(p, q) of the form

K(p, q) = Σ_{l=0}^{∞} Σ_{m=−l}^{l} λlm Ylm(p) Ylm(q),   (3)

where the ordered eigenvalues λ00 ≥ λ1m1 ≥ λ2m2 ≥ ··· ≥ 0 satisfy

∫_{S²} K(p, q) Ylm(q) dµ(q) = λlm Ylm(p).   (4)

This is a special case of Mercer's theorem [15]. From (4), it follows that K is a reproducing kernel in L²(S²). Without loss of generality, we assume the kernel is normalized as

∫_{S²} K(p, q) dµ(q) = 1.   (5)

The smooth functional estimate h of a measurement f is searched for in Hk by minimizing the integral of the kernel-weighted squared distance between f and h:

Theorem 2:

Σ_{l=0}^{k} Σ_{m=−l}^{l} λlm flm Ylm = arg min_{h ∈ Hk} ∫_{S²} ∫_{S²} K(p, q) |f(q) − h(p)|² dµ(p) dµ(q).
We will call the finite expansion given in Theorem 2 the weighted-SPHARM representation. The theorem can be proved by substituting h = Σ_{l=0}^{k} Σ_{m=−l}^{l} clm Ylm(p) and optimizing with respect to the clm.
Define kernel smoothing as the integral convolution

K ∗ f(p) = ∫_{S²} f(q) K(p, q) dµ(q)   (6)
         = Σ_{l=0}^{∞} Σ_{m=−l}^{l} λlm ⟨f, Ylm⟩ Ylm(p).   (7)
The last equation is obtained by substituting (3) into (6). Equation (7) shows that the weighted-SPHARM is the finite expansion of kernel smoothing.

Fig. 3. The shape of the heat kernel Kσ(p, q) for various bandwidths σ. The shape is computed from the harmonic addition theorem and equation (16). The point p is fixed to be the north pole and the horizontal axis is the angle cos⁻¹(p · q).

Fig. 4. The first column is a hat shaped 3D step function. The weighted-SPHARM (bottom) of the step function at different bandwidths (σ = 0.01, 0.001, 0.0005, 0.0001) and the corresponding traditional SPHARM (top) of the same degree. The weighted-SPHARM has less ringing artifacts.

Kernel smoothing (6) can further be shown to be the minimizer of the following integral.
Theorem 3: For a fixed point p ∈ S²,

K ∗ f(p) = arg min_{h ∈ L²(S²)} ∫_{S²} K(p, q) |f(q) − h|² dµ(q).

This theorem can be proved by differentiating the integral with respect to h.
For the choice of eigenvalues λlm = e^{−l(l+1)σ}, the corresponding kernel is called the heat kernel or Gauss-Weierstrass kernel [7] [11] [39] and it will be denoted as

Kσ(p, q) = Σ_{l=0}^{∞} Σ_{m=−l}^{l} e^{−l(l+1)σ} Ylm(p) Ylm(q).   (9)
The parameter σ determines the spread of the kernel as shown in
Figure 3. As σ → 0, λlm → 1 and the heat kernel Kσ(p, q) →
δ(p−q), the Dirac-delta function. So the traditional SPHARM
is a special case of the weighted-SPHARM. It is interesting
to note that even though the regularizing cost functions are
different in Theorem 1 and 2, they are related asymptotically.
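Since the weighted-SPHARM only rescales each degree-l coefficient by the heat kernel eigenvalue e^{−l(l+1)σ}, it can be obtained from the traditional SPHARM fit with a one-line weighting. The sketch below reuses spharm_lse and real_sph_harm from the previous blocks; as σ → 0 the weights tend to 1 and the traditional SPHARM is recovered, mirroring the limiting argument above.

def weighted_spharm(theta, phi, f, k, sigma):
    """Weighted-SPHARM: traditional SPHARM coefficients weighted by the
    heat kernel eigenvalues exp(-l(l+1)*sigma)."""
    coeff, _ = spharm_lse(theta, phi, f, k)
    weights = np.concatenate([np.full(2 * l + 1, np.exp(-l * (l + 1) * sigma))
                              for l in range(k + 1)])
    basis = np.column_stack([real_sph_harm(l, m, theta, phi)
                             for l in range(k + 1)
                             for m in range(-l, l + 1)])
    return basis @ (weights * coeff)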
Another interesting property of the weighted-SPHARM is observed by noting that Kσ ∗ f is the unique solution of the isotropic heat diffusion

∂g/∂t = ∆g,  g(p, t = 0) = f(p)   (10)

at time t = σ [9] [11] [39]. Hence the weighted-SPHARM is the finite expansion of the isotropic heat diffusion. Instead of solving the heat equation numerically, which tends to be unstable [1] [13], the weighted-SPHARM representation provides a more stable approach. In a similar spirit, Bulow used spherical harmonics to develop isotropic heat diffusion via the Fourier transform on a unit sphere as a form of hierarchical surface representation [7].
There are two main advantages of using the weighted-
SPHARM over the traditional SPHARM. The weighted-SPHARM substantially reduces the Gibbs phenomenon (ringing artifacts) [9] [22] associated with the convergence of a Fourier series. Discontinuous or rapidly changing measurements have slowly decaying Fourier coefficients, and thus the traditional SPHARM representation converges slowly. The weighted-SPHARM, however, additionally weights the Fourier coefficients with exponentially decaying factors, contributing to more rapid convergence. The Gibbs phenomenon is visually demonstrated in Figure 4 with a hat shaped step function (z = 1 if x² + y² < 1 and z = 0 if 1 ≤ x² + y² ≤ 2). The bottom figures are the weighted-SPHARM at different scales (σ = 0.01, 0.001, 0.0005, 0.0001) and the top figures are the corresponding traditional SPHARM of the same degree. The degree selection process is discussed in the next section. The top figures exhibit significant ringing artifacts while the bottom figures show much less ringing.
The second advantage of using the weighted-SPHARM is related to the heat kernel smoothing formulation used in the random field theory [11] [51] [52]. The random field theory needed to correct for multiple comparisons requires the smoothness of the signal, measured as the full width at half maximum (FWHM) of the heat kernel. In the traditional SPHARM, the heat kernel degenerates to the Dirac-delta function, so we cannot apply the random field theory directly.
B. Numerical Implementation
The eigenvalues λlm are given analytically for a given kernel, so we only need to numerically compute the Fourier coefficients flm in the weighted-SPHARM representation. Previously, the computation of the Fourier coefficients was based on direct numerical integration over high resolution triangle meshes with more than 80000 triangles and an average inter-vertex distance of 0.0189 mm [10]. Unfortunately the direct numerical integration is extremely slow and it is not practical when high degree spherical harmonics are needed. So we have recently developed a new numerical technique called the iterative residual fitting (IRF) algorithm [9] [40]. Compared to the numerical integration, which takes more than a few hours, the IRF algorithm takes only about 5 minutes per subject for computing all the coefficients up to degree 78 on a personal computer.
Fig. 5. The mean (left) and the standard deviation (middle) of the Fourier coefficients for 28 subjects. The vertical axis is the degree and the horizontal axis is the order arranged from the lowest to the highest. Right: the p-value of a Jarque-Bera test for normality. Smaller p-values indicate a tendency toward nonnormality. Only 10 out of the total 1849 coefficients show nonnormality at the α = 0.05 level.

Fig. 6. Left: the cross correlations of the SPHARM coefficients up to degree 42. Right: the enlargement of the small white square in the left figure. The cross correlation map shows very low correlation for most pairs. The average correlation is 0.16.
The IRF algorithm estimates the Fourier coefficients iteratively by breaking a large least squares problem in the subspace Hk into smaller subproblems. Let us decompose the subspace Hk into smaller subspaces as the direct sum Hk = I0 ⊕ I1 ⊕ ··· ⊕ Ik, where the subspace

Il = { Σ_{m=−l}^{l} βlm Ylm(p) : βlm ∈ R }

is spanned by the l-th degree spherical harmonics only. Then the IRF algorithm estimates the Fourier coefficients flm in each subspace Il iteratively from degree 0 to k. This hierarchical estimation from lower to higher degrees is possible due to the orthonormality of the spherical harmonics. The technical details of the IRF algorithm, its numerical implementation and accuracy issues are given in [9] and [40]. The MATLAB implementation of IRF is freely available at http://www.stat.wisc.edu/∼mchung/softwares/weighted-SPHARM.html together with a sample outer cortical surface.
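The released IRF implementation is the MATLAB code at the URL above; the Python sketch below only conveys the idea as described in this section, namely fitting one degree at a time to the residual left by the lower degrees. It is our reading of the algorithm, not the authors' code; real_sph_harm is the helper defined earlier.

def irf_fit(theta, phi, f, k):
    """Iterative residual fitting (IRF) sketch: estimate the degree-l
    coefficients from the residual of the degrees 0..l-1 fit."""
    residual = np.asarray(f, dtype=float).copy()
    coeffs = []
    for l in range(k + 1):
        B = np.column_stack([real_sph_harm(l, m, theta, phi)
                             for m in range(-l, l + 1)])
        c, *_ = np.linalg.lstsq(B, residual, rcond=None)
        residual -= B @ c          # carry the unexplained part to the next degree
        coeffs.append(c)
    return coeffs, residual

Because each degree is fitted against the residual of the previous degrees, the full least squares problem over all (k + 1)² coefficients is never formed explicitly, which is what keeps the per-subject cost modest.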
While increasing the degree of the weighted-SPHARM increases the goodness-of-fit, it also increases the number of coefficients to be estimated quadratically. So it is necessary to find the optimal degree where the goodness-of-fit and the number of parameters balance out. In most previous SPHARM literature [23] [24] [40] [41], the degree is simply selected based on a pre-specified error bound that depends on the size of the anatomical structure. We have adapted a model selection framework [37] that does not depend on the size of the anatomical structure.
The Fourier coefficients flm can be modeled to follow independent normal distributions N(µlm, σl²). Within the same degree, equal variance is assumed. We have checked this model assumption on our data set. Figure 5 shows the sample mean and variance of the Fourier coefficients for the x-coordinates of 28 subjects. The vertical direction is the degree k and the horizontal direction is the order arranged from −k to k. The third figure shows the p-value of testing normality using a Jarque-Bera statistic [26]. Only 10 out of the total (42 + 1)² = 1849 coefficients show nonnormality at the α = 0.05 level, indicating our normality assumption is valid. We have also computed the cross correlations of all 1849² pairs of coefficients to check independence (Figure 6). Note that Gaussian random variables are independent if their cross correlations are zero. Most pairs show extremely low correlation and the average correlation is 0.16, indicating the independence assumption is reasonable.
The above model assumption is equivalent to the following linear model

f(pi) = Σ_{l=0}^{k} Σ_{m=−l}^{l} e^{−l(l+1)σ} µlm Ylm(pi) + ǫ(pi),   (11)

where ǫ is a zero mean isotropic Gaussian random field. Once we have determined all the coefficients up to the k-th degree using the IRF algorithm, we check whether adding the next degree terms µ_{k+1,−(k+1)}, ···, µ_{k+1,k+1} to the k-th degree model (11) is statistically significant in a forward model selection framework [9] [37]. If the corresponding p-value of the test statistic (page 50 in [37]) is bigger than the pre-specified significance level of 0.05, we stop the iteration. If not, we increase the degree and repeat the process. For bandwidths σ = 0.01, 0.001, 0.0005, 0.0001, the optimal degrees are 18, 42, 52, 78 respectively (Figure 7).
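The degree selection just described is a standard forward selection procedure: after fitting degree l, test whether the 2l + 3 terms of degree l + 1 significantly reduce the residual sum of squares. The exact statistic used here is the one on page 50 of [37]; the sketch below is our approximation of that procedure with an ordinary F-test, reusing real_sph_harm from above.

from scipy.stats import f as f_dist

def select_degree(theta, phi, f, max_degree, alpha=0.05):
    """Forward model selection sketch: increase the degree until the newly
    added 2l+1 terms are no longer significant at level alpha."""
    n = len(f)
    residual = np.asarray(f, dtype=float).copy()
    sse_prev, n_params = np.sum(residual ** 2), 0
    for l in range(max_degree + 1):
        B = np.column_stack([real_sph_harm(l, m, theta, phi)
                             for m in range(-l, l + 1)])
        c, *_ = np.linalg.lstsq(B, residual, rcond=None)
        residual -= B @ c
        sse = np.sum(residual ** 2)
        q = 2 * l + 1                               # terms added at this degree
        n_params += q
        F = ((sse_prev - sse) / q) / (sse / (n - n_params))
        pval = f_dist.sf(F, q, n - n_params)
        if l > 0 and pval > alpha:
            return l - 1                            # the previous degree suffices
        sse_prev = sse
    return max_degree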
We have compared the IRF result against the analytical solution of equation (10). For any arbitrary initial condition of the form

f = Σ_{l=0}^{k} Σ_{m=−l}^{l} αlm e^{l(l+1)σ} Ylm ∈ Hk,   (12)

the solution to equation (10) is given by

Kσ ∗ f = Σ_{l=0}^{k} Σ_{m=−l}^{l} αlm Ylm.   (13)

Comparing the analytical solution (13) to the result obtained from the IRF algorithm serves as the basis for validation. It is sufficient to use a single term in (12) for validation. Let f = e^{l(l+1)σ} Ylm be an initial condition of equation (10). Then the solution of equation (10) is given by Kσ ∗ f = Ylm. Table I shows the comparison for various degrees and orders. The fifth column shows the mean absolute error between the theoretical value Ylm and the numerical result obtained from the IRF algorithm. The mean is taken over all mesh vertices. The last column shows the numerical computation of the integral ∫_{S²} Y²_{l'm'}(p) dµ(p) = 1. Table I shows that our numerical implementation provides sufficiently good numerical accuracy.
IV. TENSOR-BASED MORPHOMETRY
Taking weighted-SPHARM as a global parameterization for
cortical surface M, we can compute the Riemannian metric
tensors that are needed in computing the local area element.
TABLE I
FWHM AND ACCURACY FOR THE WFS REPRESENTATION

degree l | order m | bandwidth σ | FWHM   | mean error | flm
18       | 17      | 0.01        | 0.3456 | 0.0575     | 0.9995
42       | 41      | 0.001       | 0.1257 | 0.0126     | 0.9992
52       | 51      | 0.0005      | 0.0968 | 0.0101     | 0.9988
78       | 77      | 0.0001      | 0.0597 | 0.0068     | 0.9984
A. Metric Tensor Estimation
The weighted-SPHARM estimate ν̂ of the unknown true parametrization ν in equation (1) is given by

ν̂(θ, ϕ) = Σ_{l=0}^{k} Σ_{m=−l}^{l} λlm Zlm Ylm(θ, ϕ)

with Zlm = ⟨Z, Ylm⟩. For this study, we used the eigenvalues λlm = e^{−l(l+1)σ} corresponding to the heat kernel. The Riemannian metric tensors gij will be computed by analytically differentiating the weighted-SPHARM. The estimation of the Riemannian metric tensors requires the partial derivatives of ν̂. Denoting the partial differential operators as ∂1 = ∂θ and ∂2 = ∂ϕ, we have

∂iν̂ = Σ_{l=0}^{k} Σ_{m=−l}^{l} λlm Zlm ∂iYlm(θ, ϕ).
The derivatives of the spherical harmonics can be computed analytically. We start with the derivative of the associated Legendre polynomials:

∂θ P_l^{|m|}(x) = l cot θ P_l^{|m|}(x) − (l + |m|) sin⁻¹θ P_{l−1}^{|m|}(x),

where x = cos θ. Note that P_{l−1}^{|m|} = 0 if |m| ≥ l. The recursive formula introduces a numerical singularity at the north and south poles (θ = 0, π), so we have chosen the poles to be regions of non-interest that connect the left and the right hemispheres (Figure 1). Then the derivatives of the spherical harmonics are expressed as functions of spherical harmonics:

∂θ Ylm = l cot θ Ylm − (l + |m|) (clm / c_{l−1,m}) sin⁻¹θ Y_{l−1,m},

with the convention Y_{l−1,m} = 0 if |m| ≥ l. The constant in the second term can be further simplified as

(l + |m|) clm / c_{l−1,m} = [(2l + 1)/(2l − 1) · (l² − m²)]^{1/2}.

The derivative with respect to ϕ is simply given as

∂ϕ Ylm = −m Y_{l,−m}.

These recursive relations reduce the computational time by recycling the spherical harmonics used in estimating the SPHARM coefficients.
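The two recursions above translate directly into code. The sketch below evaluates ∂θYlm and ∂ϕYlm by recycling real_sph_harm, using the simplified constant [(2l + 1)/(2l − 1)(l² − m²)]^{1/2} and the convention Y_{l−1,m} = 0 when |m| ≥ l; as noted above, θ = 0 and θ = π must be avoided.

def dtheta_Y(l, m, theta, phi):
    """d/dtheta of the real spherical harmonic Y_lm via the recursion above
    (singular at theta = 0 and theta = pi)."""
    out = l / np.tan(theta) * real_sph_harm(l, m, theta, phi)
    if abs(m) <= l - 1:                    # Y_{l-1,m} = 0 when |m| >= l
        const = np.sqrt((2 * l + 1) / (2 * l - 1) * (l ** 2 - m ** 2))
        out -= const / np.sin(theta) * real_sph_harm(l - 1, m, theta, phi)
    return out

def dphi_Y(l, m, theta, phi):
    """d/dphi of the real spherical harmonic: -m * Y_{l,-m}."""
    return -m * real_sph_harm(l, -m, theta, phi)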
The 3 × 2 Jacobian matrix J of the mapping from the parameter space N to the cortical surface M is given by J = (∂θν̂, ∂ϕν̂). The Riemannian metric tensor is g = (gij) = JᵗJ, with components gij = ∂iν̂ · ∂jν̂, where · is the vector inner product. The Riemannian metric tensors measure the amount of deviation of the cortical surface from a flat Euclidean plane. If the cortical surface is flat, we obtain gij = δij, the identity matrix. The Riemannian metric tensors enable us to compute the local area element √det g. The area element measures the amount of transformed area in M of a unit area in the parameter space N via the mapping ν̂. Figure 8 shows the estimated metric tensors for one subject. Using the area element, the total surface area of M can be written as

µ(M) = ∫₀^{2π} ∫₀^{π} √(det g(θ, ϕ)) dθ dϕ.
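Combining the analytic derivatives gives the metric tensor and the local area element at any point (θ, ϕ). The sketch below assumes the weighted-SPHARM coefficients of the three coordinate functions are stored as a list Z_lm, where Z_lm[l] is a (2l + 1, 3) array ordered from m = −l to l; this layout is our choice and not that of the released MATLAB code.

def metric_and_area_element(Z_lm, k, sigma, theta, phi):
    """Riemannian metric tensor g = J^T J and local area element sqrt(det g)
    of the weighted-SPHARM surface at a single point (theta, phi)."""
    d_theta, d_phi = np.zeros(3), np.zeros(3)
    for l in range(k + 1):
        w = np.exp(-l * (l + 1) * sigma)            # heat kernel eigenvalue
        for j, m in enumerate(range(-l, l + 1)):
            d_theta += w * Z_lm[l][j] * dtheta_Y(l, m, theta, phi)
            d_phi += w * Z_lm[l][j] * dphi_Y(l, m, theta, phi)
    J = np.column_stack([d_theta, d_phi])           # 3 x 2 Jacobian matrix
    g = J.T @ J                                     # 2 x 2 metric tensor
    return g, np.sqrt(np.linalg.det(g))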
Locally, surface deformation can be decomposed into tangential and normal components with respect to a surface normal vector [13]. At each point p, we define the local gray matter volume as V(p) = √(det g(p)) C(p), where C is the cortical thickness. Then the total gray matter volume is approximately given as ∫_{S²} V(p) dµ(p). The gray matter volume will change if either the area element increases (tangential expansion) or the cortical thickness increases (normal expansion). The change in the gray matter volume is then the sum of the change in the local area and the change in cortical thickness [13]:

dV/V = d(√det g)/√det g + dC/C.

The change in the local area element can thus be viewed as contributing to the tangential component of the gray matter volume change.
The scale invariant area element is defined as √(det g)/µ(M), where the total surface area µ(M) is estimated by summing the areas of the triangles in a mesh. Figure 9 shows the scale invariant area element for 12 randomly selected subjects. Although the scale invariant area element is invariant under affine scaling, it is not invariant under different parameterizations such as conformal mappings [2] [24] [25], quasi-isometric mappings [48] and area preserving mappings [6] [41] [43]. Since these parameterizations introduce area distortion, it is necessary to use a parameterization invariant metric for a stable statistical analysis. This can be obtained by directly measuring the area expansion rate with respect to a template surface M0 rather than the parameter space N. Consider a mapping from the template M0 to the cortical surface M. The Jacobian of this mapping will be denoted as J0. The Jacobian J0 is expected to be invariant under different parameterizations and to depend only on the registration between the two surfaces. Let g0 be the metric tensor of M0. The area element of M is √det g = det J0 √det g0. Then the parameterization invariant measure is obtained by simply computing the percentage change of area expansion with respect to the template:

(√det g − √det g0) / √det g0 = det J0 − 1,   (14)

giving a local area related measure that is invariant under parameterization. This quantity is called the surface area dilatation and it is approximately the trace of the Jacobian matrix of the displacement field [12]. Our methodology does not work for area-preserving mappings since the Jacobian determinant is 1. For this singular case, we compute the Jacobian determinant directly from the surface registration result.
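Once the subject and template area elements are available at corresponding vertices, equation (14) is a pointwise operation; the sketch below is the entire computation (the array names are ours).

def area_dilatation(sqrt_det_g_subj, sqrt_det_g_templ):
    """Surface area dilatation det(J0) - 1 of equation (14): fractional area
    expansion of the subject surface relative to the template."""
    sqrt_det_g_subj = np.asarray(sqrt_det_g_subj, dtype=float)
    sqrt_det_g_templ = np.asarray(sqrt_det_g_templ, dtype=float)
    return (sqrt_det_g_subj - sqrt_det_g_templ) / sqrt_det_g_templ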
Fig. 7. The weighted-SPHARM representation at different bandwidths. The first column is the original cortical surface. The color bar indicates x-coordinate values. The second row shows the result mapped on a unit sphere. The black dot in the center indicates the north pole.

Fig. 8. Metric tensor estimation. The metric tensors gij are estimated by analytically differentiating the weighted-SPHARM representation. The local area element √det g measures the amount of area expansion and shrinkage with respect to the parameter space N.
B. Statistical Inference on Unit Sphere
For the i-th subject (1 ≤ i ≤ m), we denote the cortical surface as Mi and its parameterization invariant surface Jacobian determinant as det Ji(θ, ϕ). Then we have the following general linear model (GLM):

det Ji(θ, ϕ) = β0 + β1(θ, ϕ) · groupi + ǫ(θ, ϕ),

where ǫ is a mean zero Gaussian random field and groupi is a categorical dummy variable (0 for autism and 1 for control). We are interested in localizing any group differences in the local area element map by testing whether β1(θ, ϕ) = 0 for all (θ, ϕ). At each fixed point (θ, ϕ), the test statistic T(θ, ϕ) is a two sample t-statistic with m − 2 degrees of freedom. Since we need to perform the test at every point (θ, ϕ), this becomes a multiple comparison problem. We used random field theory [50] [51] [52] based thresholding to determine the statistical significance. The p-value for the one sided alternative hypothesis, i.e. β1 > 0, is given by

p(θ, ϕ) = P( sup_{p∈S²} T(p) > t(θ, ϕ) ) ≈ Σ_{d=0}^{2} Rd(S²) ρd(h),   (15)

where Rd is the d-dimensional resel count of S², ρd is the d-dimensional Euler characteristic (EC) density of a T-field with m − 2 degrees of freedom, and t(θ, ϕ) is the observed two sample t-statistic at (θ, ϕ). The resels are

R0(S²) = 2,  R1(S²) = 0,  R2(S²) = µ(S²)/FWHM²,
where FWHM is the full width at half maximum of the weighted-SPHARM. The mathematical formulas for the EC-densities are given in [51]. Although there are some variations in defining the resels and EC-densities throughout the literature [50] [51] [52], we have used the convention of [52].

Fig. 9. Scale invariant area elements for 6 randomly selected control subjects (top) and 6 autistic subjects (bottom). The color scale is thresholded at 1.5 (150%) for better visualization. With respect to the parameter space N, there is up to 300% area expansion.

Fig. 10. The p-value map projected on the average weighted-SPHARM surface of 28 subjects. The p-value is thresholded at 0.05 using the random field theory. The focalized red (blue) regions show more (less) surface area in the autistic subjects compared to the controls.

Figure
10 shows the resulting corrected p-value map showing highly
localized regions of abnormal pattern in autistic subjects. The
p-value map is projected on the average cortical surface of
28 subjects used in the study. We have used the threshold
h = ±5.19 corresponding to the corrected p-value of 0.05.
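To make equation (15) concrete, the sketch below evaluates its right-hand side with the resels given above and the standard Euler characteristic densities of a T-field from the unified random field theory [51] [52]. The closed forms for ρ1 and ρ2 are quoted from that literature rather than from this paper, so they should be treated as our assumption.

from scipy.stats import t as t_dist
from scipy.special import gammaln

def rft_corrected_pvalue(tstat, df, fwhm, total_area=4 * np.pi):
    """Corrected p-value on the unit sphere via equation (15): sum of resels
    times the EC densities of a T-field with df degrees of freedom."""
    resels = [2.0, 0.0, total_area / fwhm ** 2]     # R0, R1, R2 of S^2
    u = 1.0 + tstat ** 2 / df
    rho0 = t_dist.sf(tstat, df)                     # upper tail probability of T
    rho1 = np.sqrt(4 * np.log(2)) / (2 * np.pi) * u ** (-(df - 1) / 2)
    rho2 = (4 * np.log(2) / (2 * np.pi) ** 1.5
            * np.exp(gammaln((df + 1) / 2) - gammaln(df / 2)) / np.sqrt(df / 2)
            * tstat * u ** (-(df - 1) / 2))
    return resels[0] * rho0 + resels[1] * rho1 + resels[2] * rho2

For instance, rft_corrected_pvalue(5.19, df=26, fwhm=0.1257) evaluates the corrected p-value at the threshold and bandwidth used in this study, with df = m − 2 = 26 for the 28 subjects.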
In computing the corrected p-value, it is necessary to compute the FWHM, but this is not trivial since there is no known closed form expression for the FWHM as a function of σ. So the FWHM is computed numerically. The heat kernel (9) can be simplified, via the harmonic addition theorem [3], as

Kσ(p, q) = Σ_{l=0}^{∞} (2l + 1)/(4π) e^{−l(l+1)σ} P⁰_l(cos ϑ),   (16)

where ϑ is the angle between p and q, i.e. cos ϑ = p · q. By fixing p to be the north pole, i.e. ϕ = 0 and p = (0, 0, 1), while varying q = (sin ϑ, 0, cos ϑ) for 0 ≤ ϑ = cos⁻¹(p · q) ≤ π, we can obtain the shape of the heat kernel and its corresponding FWHM numerically for each σ (Figure 3). Table I shows the FWHM for various bandwidths σ. In previous diffusion and heat kernel smoothing studies [13] [11], between 20 and 30 mm FWHM was used. The FWHM used in this study is much smaller since the analysis is performed on a unit sphere rather than on a larger cortical surface. However, a comparable number of resels can be obtained by using the bandwidth σ = 0.001 corresponding to a FWHM of 0.1257 mm.
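The numerical FWHM computation described above amounts to evaluating (16) along the angle ϑ with p fixed at the north pole and locating the half-maximum crossing. The truncation degree and grid resolution in the sketch below are our choices and should be increased for very small σ.

from scipy.special import eval_legendre

def heat_kernel_fwhm(sigma, max_degree=200, n_angles=4000):
    """FWHM of the heat kernel K_sigma on the unit sphere, computed from the
    harmonic addition theorem form (16) as a function of the angle between p and q."""
    angle = np.linspace(0.0, np.pi, n_angles)
    K = np.zeros_like(angle)
    for l in range(max_degree + 1):
        K += (2 * l + 1) / (4 * np.pi) * np.exp(-l * (l + 1) * sigma) \
             * eval_legendre(l, np.cos(angle))
    half_width = angle[np.argmax(K <= K[0] / 2)]    # first angle at half maximum
    return 2 * half_width                           # the kernel is symmetric in the angle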
V. APPLICATION TO AUTISM STUDY
Three Tesla T1-weighted MR scans were acquired for 16
high functioning autistic and 12 control right handed males.
The 16 autistic subjects were diagnosed via the Autism Diagnostic Interview-Revised (ADI-R), administered by a trained and certified psychologist at the Waisman Center at the University of Wisconsin-Madison [17]. The average ages are 17.1 ± 2.8 and 16.1 ± 4.5 years for the control and autistic groups respectively.
Image intensity nonuniformity was corrected using a nonpara-
metric nonuniform intensity normalization method [42] and
then the image was spatially normalized into the Montreal
neurological institute (MNI) stereotaxic space using a global
affine transformation [14]. Afterwards, an automatic tissue-
segmentation algorithm based on a supervised artificial neural
network classifier was used to classify each voxel into three
classes: CSF, gray matter and white matter [32].
Triangular meshes for outer cortical surfaces were obtained
by the anatomic segmentation using the proximities (ASP)
method [34], which is a variant of deformable surface al-
gorithms. The algorithm generates 40962 vertices and 81920
triangles with the identical mesh topology for all subjects. The
vertices indexed identically on two cortical meshes will have
a very close anatomic homology [13] [30] [34]. The mesh
starts as a sphere located outside the brain and is shrunk to
match the cortical boundary by minimizing a cost function
that contains the image, stretch, bending, and vertex-to-vertex
proximity terms. The deformation in the MNI stereotaxic
coordinate system combined with stretch constraints that limit
the movement of vertices, effectively enforces a relatively
consistent placement of points on the cortical surface. This
provides the same spherical parameterization at identically
indexed vertices across different cortical surfaces.
If we detect anatomical changes along the inner surface, it is unclear whether the changes are due to changes in the gray matter, the white matter, or possibly both. On the other hand, changes in the
outer cortical surface are the direct consequence of changes
in the gray matter. Choosing the outer surface representation
reduces the ambiguity of interpreting the statistical result.
Therefore, we have chosen the outer surface over the inner
surface for the study.
Once we obtained the outer cortical surfaces of the 28 subjects, the weighted-SPHARM representations were constructed.
We have used the bandwidth σ = 0.001 corresponding to
k = 42 degrees. The corresponding FWHM is 0.1257 mm.
The Fourier coefficients were estimated using the IRF algo-
rithm. The corresponding surface positions across two different
weighted-SPHARM surfaces are obtained by matching the
harmonics of the same degree and order via the SPHARM-
correspondence [9]. This is equivalent to obtaining the optimal
displacement, in the least squares sense, by taking the differ-
ence between the two weighted-SPHARM representations.
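Because corresponding points on two weighted-SPHARM surfaces share the same (θ, ϕ), the displacement between a subject and a template is simply the difference of the two representations, i.e. the difference of their Fourier coefficients pushed through the common basis. A minimal sketch (variable names ours):

def spharm_displacement(coeff_subject, coeff_template, basis):
    """SPHARM-correspondence displacement field: difference of two
    weighted-SPHARM representations evaluated on the shared (theta, phi) grid.
    basis is the (n_points, n_coeff) matrix of weighted spherical harmonics and
    each coefficient array is (n_coeff, 3) for the x, y, z coordinates."""
    return basis @ (coeff_subject - coeff_template)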
Using the weighted-SPHARM representation, area elements
and corresponding surface Jacobian determinants are ana-
lytically computed, and compared across subjects. The two
sample t-statistic map is computed and its corrected p-value
map is projected on the average weighted-SPHARM surface
of 28 subjects (Figure 10). The average surface is constructed
by averaging the Fourier coefficients within the spherical
harmonic of the same degree and order. The average surface
serves as an anatomical landmark for showing where group
differences are located.
We performed the random field theory based multiple com-
parison correction on the computed t-statistic map. Figure 10
shows the regions of statistically significant group difference
thresholded at α = 0.05 level (corresponding to the t-value of
± 5.19). Although there are other regions of group difference,
the left inferior frontal gyrus shows the most significant group difference.
VI. CONCLUSIONS
In this paper, we presented the weighted-SPHARM repre-
sentation and its application in TBM. The weighted-SPHARM
is used as a differentiable parametrization of the cortex. Based
on a new iterative formulation, spatial derivatives of the
weighted-SPHARM are computed and used to derive metric
tensors and an area element. The ratio of area elements is then
used to compute the surface Jacobian determinant invariant un-
der parameterization. The surface Jacobian determinant is used
in determining statistically significant regions of abnormal
cortical tissue expansion and shrinking for autistic subjects.
The weighted-SPHARM is a very flexible function estima-
tion technique for scalar and vector data defined on a unit
sphere. We have shown that the weighted-SPHARM is related
to heat kernel smoothing. Since heat kernel smoothing is
related to isotropic heat diffusion [9] [13] [39], we were able
to connect our new representation to the isotropic heat diffu-
sion. This argument can be further extended. By choosing a
kernel induced from a particular self-adjoint partial differential
equation (PDE), we can construct the least squares estimation
of the PDE without numerically solving it [9]. This should
serve as a springboard for investigating other PDE-based data
smoothing techniques in the weighted-SPHARM framework.
ACKNOWLEDGMENT
The authors wish to thank Paul Thompson of the Lab-
oratory of Neuro Imaging at UCLA for the discussion on
the area element and the surface Jacobian determinant, and
Richard Hartley of the Department of Systems Engineering
at the Australian National University for the discussion on
the weighted-SPHARM representation and positive definite
kernels on spheres.
REFERENCES
[1] A. Andrade, F. Kherif, J. Mangin, K.J. Worsley, A. Paradis, O. Simon,
S. Dehaene, D. Le Bihan, and J-B. Poline. Detection of fmri activation
using cortical surface mapping. Human Brain Mapping, 12:79–93, 2001.
[2] S. Angenent, S. Haker, A. Tannenbaum, and R. Kikinis. On the laplace-
beltrami operator and brain surface flattening. IEEE Transactions on
Medical Imaging, 18:700–711, 1999.
[3] G.B. Arfken. Mathematical Methods for Physicists. Academic Press,
5th. edition, 2000.
[4] J Ashburner, C. Good, and K.J. Friston. Tensor based morphometry.
NeuroImage, 11S:465, 2000.
[5] J. Ashburner, C. Hutton, R. S. J. Frackowiak, I. Johnsrude, C. Price, and
K. J. Friston. Identifying global anatomical differences: Deformation-
based morphometry. Human Brain Mapping, 6:348–357, 1998.
[6] C. Brechbuhler, G. Gerig, and O. Kubler. Parametrization of closed
surfaces for 3d shape description. Computer Vision and Image Under-
standing, 61:154–170, 1995.
[7] T. Bulow. Spherical diffusion for 3d surface smoothing. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 26:1650–
1654, 2004.
[8] A. Cachia, J.-F. Mangin, D. Rivière, D. Papadopoulos-Orfanos,
F. Kherif, I. Bloch, and J. Régis. A generic framework for parcellation
of the cortical surface into gyri using geodesic Voronoï diagrams. Image
Analysis, 7:403–416, 2003.
[9] M.K. Chung, L. Shen, K.M. Dalton, A.C. Evans, and R.J. Davidson.
Weighted Fourier series representation and its application to quantifying the
amount of gray matter. IEEE Transactions on Medical Imaging, 26:566–
581, 2007.
[10] M.K. Chung. Heat kernel smoothing on unit sphere. In Proceedings of
IEEE International Symposium on Biomedical Imaging (ISBI), 2006.
[11] M.K. Chung, S. Robbins, K.M. Dalton, R.J. Davidson, A.L. Alexander,
and A.C. Evans. Cortical thickness analysis in autism with heat kernel
smoothing. NeuroImage, 25:1256–1265, 2005.
[12] M.K. Chung, K.J. Worsley, T. Paus, C. Cherif, D.L. Collins, J.N. Giedd,
J.L. Rapoport, and A.C. Evans. A unified statistical approach to
deformation-based morphometry. NeuroImage, 14:595–606, 2001.
[13] M.K. Chung, K.J. Worsley, S. Robbins, T. Paus, J. Taylor, J.N. Giedd,
J.L. Rapoport, and A.C. Evans. Deformation-based surface morphometry
applied to gray matter deformation. NeuroImage, 18:198–213, 2003.
[14] D.L. Collins, P. Neelin, T.M. Peters, and A.C. Evans. Automatic 3d
intersubject registration of mr volumetric data in standardized talairach
space. J. Comput. Assisted Tomogr., 18:192–205, 1994.
[15] R. Courant and D. Hilbert. Methods of Mathematical Physics: Volume
II. Interscience, New York, english edition, 1953.
[16] A.M. Dale and B. Fischl. Cortical surface-based analysis i. segmentation
and surface reconstruction. NeuroImage, 9:179–194, 1999.
[17] K.M. Dalton, B.M. Nacewicz, T. Johnstone, H.S. Schaefer, M.A. Gerns-
bacher, H.H. Goldsmith, A.L. Alexander, and R.J. Davidson. Gaze
fixation and the neural circuitry of face processing in autism. Nature
Neuroscience, 8:519–526, 2005.
[18] C. Davatzikos. Spatial transformation and registration of brain images
using elastically deformable models. Comput. Vis. Image Underst.,
66:207–222, 1997.
[19] C. Davatzikos and R.N. Bryan. Using a deformable surface model to
obtain a shape representation of the cortex. Proceedings of the IEEE
International Conference on Computer Vision, 9:2122–2127, 1995.
[20] C. Davatzikos, M. Vaillant, S.M. Resnick, J.L. Prince, S. Letovsky, and
N Bryan. A computerized approach for morphological analysis of the
corpus callosum. Journal of Computer Assisted Tomography, 20:88–97,
1996.
[21] B. Fischl, M.I. Sereno, R. Tootell, and A.M. Dale. High-resolution
intersubject averaging and a coordinate system for the cortical surface.
Hum. Brain Mapping, 8:272–284, 1999.
[22] A. Gelb. The resolution of the gibbs phenomenon for spherical
harmonics. Mathematics of Computation, 66:699–717, 1997.
[23] G. Gerig, M. Styner, D. Jones, D. Weinberger, and J. Lieberman. Shape
analysis of brain ventricles using spharm. In MMBIA, pages 171–178,
2001.
[24] X. Gu, Y.L. Wang, T.F. Chan, P.M. Thompson, and S.T. Yau. Genus zero
surface conformal mapping and its application to brain surface mapping.
IEEE Transactions on Medical Imaging, 23:1–10, 2004.
[25] M. K. Hurdal and K. Stephenson. Cortical cartography using the discrete
conformal approach of circle packings. NeuroImage, 23:S119–S128,
2004.
[26] C.M. Jarque and A.K. Bera. A test for normality of observations and
regression residuals. International Statistical Review, 55:1–10, 1987.
[27] S.E. Jones, B.R. Buchbinder, and I. Aharon. Three-dimensional mapping
of cortical thickness using laplace’s equation. Human Brain Mapping,
11:12–32, 2000.
[28] S.C. Joshi, U. Grenander, and M.I. Miller. The geometry and shape of
brain sub-manifolds. International Journal of Pattern Recognition and
Artificial Intelligence: Special Issue on Processing of MR Images of the
Human, 11:1317–1343, 1997.
[29] S.C. Joshi, J. Wang, M.I. Miller, D.C. Van Essen, and U. Grenander.
On the differential geometry of the cortical surface. Vision Geometry
IV, pages 304–311, 1995.
[30] N. Kabani, D. Le Goualher, G. MacDonald, and A.C. Evans. Mea-
surement of cortical thickness using an automated 3-D algorithm: a
validation study. NeuroImage, 13:375–380, 2000.
[31] A. Kelemen, G. Szekely, and G. Gerig. Elastic model-based segmenta-
tion of 3-d neuroradiological data sets. IEEE Transactions on Medical
Imaging, 18:828–839, 1999.
[32] K. Kollakian. Performance analysis of automatic techniques for tissue
classification in magnetic resonance images of the human brain. Tech-
nical Report Master’s thesis, Concordia University, Montreal, Quebec,
Canada, 1996.
[33] J. P. Lerch and A.C. Evans. Cortical thickness analysis examined through
power analysis and a population simulation. NeuroImage, 24:163–173,
2005.
[34] J.D. MacDonald, N. Kabani, D. Avis, and A.C. Evans. Automated 3-
D extraction of inner and outer surfaces of cerebral cortex from mri.
NeuroImage, 12:340–356, 2000.
[35] M.I. Miller, A.B. Massie, J.T. Ratnanather, K.N. Botteron, and J.G.
Csernansky. Bayesian construction of geometrically based cortical
thickness metrics. NeuroImage, 12:676–687, 2000.
[36] D.L. Page, A. Koschan, Y. Sun, J. Paik, and M.A. Abidi. Robust crease
detection and curvature estimation of piecewise smooth surfaces from
triangle mesh approximations using normal voting. In Computer Vision
and Pattern Recognition (CVPR), volume I, pages 162–167, 2001.
[37] C.R. Rao and H. Toutenburg. Linear Models:Least Squares and
Alternatives. 1999.
[38] S.M. Robbins. Anatomical standardization of the human brain in
euclidean 3-space and on the cortical 2-manifold. Technical Report
PhD thesis, School of Computer Science, McGill University, Montreal,
Quebec, Canada, 2003.
[39] S. Rosenberg. The Laplacian on a Riemannian Manifold. Cambridge
University Press, 1997.
[40] L. Shen and M.K. Chung. Large-scale modeling of parametric surfaces
using spherical harmonics. In Third International Symposium on 3D
Data Processing, Visualization and Transmission (3DPVT), 2006.
[41] L. Shen, J. Ford, F. Makedon, and A. Saykin. A surface-based approach
for classification of 3d neuroanatomical structures. Intelligent Data
Analysis, 8:519–542, 2004.
[42] J.G. Sled, A.P. Zijdenbos, and A.C. Evans. A nonparametric method
for automatic correction of intensity nonuniformity in mri data. IEEE
Transactions on Medical Imaging, 17:87–97, 1998.
[43] M. Styner, I. Oguz, S. Xu, C. Brechbuhler, D. Pantazis, J. Levitt,
M. Shenton, and G. Gerig. Framework for the statistical shape analysis
of brain structures using spharm-pdm. In Insight Journal, Special Edition
on the Open Science Workshop at MICCAI, 2006.
[44] P.M. Thompson, J.N. Giedd, R.P. Woods, D. MacDonald, A.C. Evans,
and A.W Toga. Growth patterns in the developing human brain detected
using continuum-mechanical tensor mapping. Nature, 404:190–193,
2000.
[45] P.M. Thompson, Hayashi K.M., de Zubicaray G., Janke A.L., Rose S.E.,
Semple J., Herman D., Hong M.S., Dittmer S.S., Doddrell D.M., and
Toga A.W. Dynamics of gray matter loss in alzheimer’s disease. J.
Neurosci., 23:994–1005, 2003.
[46] P.M. Thompson, Hayashi K.M., de Zubicaray G., Janke A.L., Rose S.E.,
Semple J., Hong M.S., D.H. Herman, D. Gravano, D.M. Doddrell, and
Toga A.W. Mapping hippocampal and ventricular change in alzheimer
disease. NeuroImage, 22:1754–1766, 2004.
[47] P.M. Thompson and A.W. Toga. A surface-based technique for warping
3-dimensional images of the brain. IEEE Transactions on Medical
Imaging, 15, 1996.
[48] B. Timsari and R. Leahy. An optimization method for creating semi-
isometric flat maps of the cerebral cortex. In The Proceedings of SPIE,
Medical Imaging, 2000.
[49] L. Wang, J.S. Swank, I. E. Glick, M. H. Gado, M. I. Miller, J. C. Morris,
and J.G. Csernansky. Changes in hippocampal volume and shape across
time distinguish dementia of the alzheimer type from healthy aging.
NeuroImage, 20:667–682, 2003.
[50] K.J. Worsley. Detecting activation in fmri data. Statistical Methods in
Medical Research., 12:401–418, 2003.
[51] K.J. Worsley, S. Marrett, P. Neelin, A.C. Vandal, K.J. Friston, and A.C.
Evans. A unified statistical approach for determining significant signals
in images of cerebral activation. Human Brain Mapping, 4:58–73, 1996.
[52] K.J. Worsley, J.E. Taylor, F. Tomaiuolo, and J. Lerch. Unified univariate
and multivariate random field theory. NeuroImage, 23:S189–195, 2004.
[53] P. Yuen, N. Khalili, and F. Mokhtarian. Curvature estimation on
smoothed 3-d meshes. In Proceedings of British Machine Vision
Conference, pages 133–142, 1999.