This document describes the UCSF pipeline for processing arterial spin labeling (ASL) MRI data to produce cerebral blood flow (CBF) maps. The pipeline comprises motion correction, computation of perfusion-weighted images (PWI), co-registration of the ASL and structural MRI data, geometric distortion correction, partial volume correction, and quantification of CBF values. The outputs include PWI and CBF images in both the ASL and structural MRI spaces, as well as regional CBF statistics within anatomically defined regions of interest. Known limitations: the distortion correction does not account for intensity variations, the scaling does not account for regional variations in relaxation rates, and the partial volume correction does not account for regional variations in perfusion ratios.
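At its core, the PWI step described above is a pairwise control-minus-label subtraction followed by averaging, and the CBF step scales that difference image by a calibration (M0) image. A minimal sketch of those two steps follows; this is not the actual UCSF implementation, and the interleaving order, function names, and the scalar calibration factor are assumptions for illustration only:

```python
import numpy as np

def perfusion_weighted_image(series, control_first=True):
    """Mean control-minus-label difference over an interleaved ASL
    time series of shape (n_frames, x, y, z); n_frames must be even."""
    if control_first:
        control, label = series[0::2], series[1::2]
    else:
        control, label = series[1::2], series[0::2]
    return (control - label).mean(axis=0)

def quantify_cbf(pwi, m0, scale=1.0):
    """Toy CBF scaling: normalize the PWI by an M0 calibration image.
    A real quantification model would fold labeling efficiency and
    relaxation terms into `scale` (a hypothetical placeholder here)."""
    m0 = np.where(m0 > 0, m0, np.nan)  # avoid divide-by-zero outside the head
    return scale * pwi / m0
```

With a synthetic series where every control frame exceeds its label frame by a constant, the PWI is exactly that constant, which makes the sketch easy to check.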
Analysis of Image Super-Resolution via Reconstruction Filters for Pure Transl... (CSCJournals)
This work investigates a special case of the image super-resolution problem in which the only motion is global translational motion and the blurs are shift-invariant. The necessary conditions for exact reconstruction of the original image using finite impulse response (FIR) reconstruction filters are determined. If the number of available low-resolution images exceeds a threshold and the blur functions satisfy a certain property, a reconstruction filter set for perfect image super-resolution can be generated even in the absence of motion. Given that these conditions are satisfied, a method for exact super-resolution is presented to validate the analysis, and it is shown that perfect reconstruction of the original image is achieved in the fully determined case. Finally, some realistic conditions that make the super-resolution problem ill-posed are treated and their effects on exact super-resolution are discussed.
International Journal of Engineering Research and Development (IJERD) (IJERD Editor)
Contourlet Transform Based Method For Medical Image Denoising (CSCJournals)
Noise strongly affects medical image quality: high noise levels obscure the information needed for diagnosis, which rests on distinguishing normal from abnormal findings. In this paper, we propose a denoising algorithm for medical images based on the contourlet transform. The contourlet transform extends the wavelet transform to two dimensions using multiscale and directional filter banks; it retains the multiscale and time-frequency localization properties of wavelets while adding a high degree of directionality. To evaluate denoising performance, two kinds of noise are added to the test images: Gaussian noise and speckle noise. A soft-thresholding value is computed for the contourlet coefficients of the noisy image. Finally, the results of the proposed algorithm are compared with those of the wavelet transform, and we find that the proposed algorithm achieves acceptable results relative to the wavelet transform.
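The soft thresholding applied to the transform coefficients above is the standard shrinkage operator. A self-contained sketch (the paper's threshold-selection rule is not specified here, so a fixed threshold is assumed):

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero by t; values within [-t, t] become 0.
    Applied to transform coefficients, this suppresses low-magnitude
    (mostly noise) terms while preserving strong edges and contours."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```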
Remote sensing image fusion using contourlet transform with sharp frequency l... (Zac Darcy)
This paper addresses four aspects of remote sensing image fusion: i) the image fusion method, ii) quality analysis of the fusion results, iii) the effect of the image decomposition level, and iv) the importance of image registration. First, a new contourlet-based image fusion method is presented as an improvement over wavelet-based fusion; this method is then used within the main fusion process to analyze the final fusion results, and the fusion framework, scheme, and datasets used in the study are discussed in detail. Second, the quality of the fusion results is analyzed using various quantitative metrics for both spatial and spectral fidelity; our results indicate that the proposed contourlet-based fusion method outperforms conventional wavelet-based fusion methods in both respects. Third, an analysis of the image decomposition level shows that three decomposition levels produce better fusion results than either fewer or more levels. Last, four fusion scenarios were created to examine the importance of image registration; feature-based registration using the edge features of the source images produced better fusion results than intensity-based registration.
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science, and technology, including new teaching methods, assessment, validation, and the impact of new technologies, and it continues to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. Articles published in the journal can be accessed online.
Adaptive lifting based image compression scheme using interactive artificial ... (csandit)
This paper presents an image compression method based on the Interactive Artificial Bee Colony (IABC) optimization algorithm. The method reduces storage requirements and facilitates data transmission by lowering transmission costs. To obtain the finest quality of the compressed image, IABC uses local search to evaluate different update coefficients and select the best one. In the update step, the local search alters the center pixels with the coefficient in eight directions over a sizable window to produce the compressed image, evaluated in terms of both PSNR and compression ratio. IABC introduces the idea of universal gravitation to model the interaction between onlooker bees and employed bees; by assigning different values to the control parameter, the universal gravitation term couples varying numbers of onlooker and employed bees. Compared to existing methods, the proposed approach yields better PSNR.
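The two figures of merit quoted above, PSNR and compression ratio, can be computed as follows. This is a generic sketch, not the paper's evaluation code; 8-bit images with a peak value of 255 are assumed:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    mse = np.mean((np.asarray(original, float) - np.asarray(reconstructed, float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(original_bytes, compressed_bytes):
    """E.g. 8.0 means the compressed file is one eighth the original size."""
    return original_bytes / compressed_bytes
```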
Image Registration using NSCT and Invariant Moment (CSCJournals)
Image registration is the process of matching images taken at different times, from different sensors, or from different viewpoints. It is an important step in a wide variety of applications, including computer vision, stereo navigation, medical image analysis, pattern recognition, and watermarking. This paper proposes an improved feature-point selection and matching technique for image registration, based on the ability of the nonsubsampled contourlet transform (NSCT) to extract significant features irrespective of their orientation. Correspondence between the feature points extracted from the reference and sensed images is then established using Zernike moments, and the matched point pairs are used to estimate the transformation parameters mapping the sensed image to the reference image. Experimental results demonstrate registration accuracy over a wide range of panning and zooming movements, as well as robustness to noise. Beyond image registration, the proposed method can be used for shape matching and object classification.
Keywords: Image Registration, NSCT, Contourlet Transform, Zernike Moment.
In present-day automation, researchers have used microcomputers and related devices to process physical quantities and to detect cholesterol in blood and biomedical images. The current trend is to use FPGAs instead, as these devices offer many advantages over other programmable devices: they are very fast, implement hardwired logic, and have no operating system, so multiple control loops can run on a single FPGA device at different rates. In this paper, a prototype system is developed to detect the cholesterol-bearing region in an MRI image using a modified Set Partitioning in Hierarchical Trees (SPIHT) wavelet transform and a Radial Basis Function (RBF) network. Each stage of cholesterol detection is displayed on an LCD monitor for a clear view of the enhanced MRI image and the detected cholesterol area. Performance is measured in terms of Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE).
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
PIPELINED ARCHITECTURE OF 2D-DCT, QUANTIZATION AND ZIGZAG PROCESS FOR JPEG IM... (VLSICS Design)
This paper presents the architecture and VHDL design of a two-dimensional discrete cosine transform (2D-DCT) with quantization and zigzag arrangement, used as the core datapath in JPEG image compression hardware. The 2D-DCT is computed using its separability property, so the whole architecture is split into two 1D-DCT stages connected by a transpose buffer. Architectures for the quantization and zigzag stages are also described; quantization is performed using a division operation. The design targets a Xilinx Spartan-3E XC3S500E FPGA, where the 2D-DCT architecture uses 1891 slices, 51 I/O pins, and 8 multipliers, and reaches an operating frequency of 101.35 MHz. One input block of 8 x 8 elements of 8 bits each is processed in 6604 ns, with a pipeline latency of 140 clock cycles.
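The separability property the architecture exploits is that an 8x8 2D DCT equals a 1D DCT applied along one dimension, a transpose, and a 1D DCT applied along the other. A floating-point sketch of that identity (a numerical illustration only, not the fixed-point hardware datapath):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix C, so a 1D DCT of vector v is C @ v."""
    k = np.arange(n).reshape(-1, 1)
    m = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def dct2_separable(block):
    """2D DCT as two 1D passes with a transpose in between,
    mirroring the transpose-buffer structure of the hardware."""
    c = dct_matrix(block.shape[0])
    stage1 = c @ block.T   # 1D DCT along one dimension
    stage2 = c @ stage1.T  # transpose buffer, then the second 1D pass
    return stage2          # equals c @ block @ c.T
```

A quick sanity check: the DC coefficient of the orthonormal 8x8 DCT is the block sum divided by 8.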
Compressed Medical Image Transfer in Frequency Domain (CSCJournals)
A common approach to medical image compression begins by separating the region of interest (ROI) from the background, then applies lossless compression to the ROI and lossy compression to the background. The compressed files (ROI and background) are transmitted between server and clients over different communication media (local host, intranet, and Internet). In this work, a medical image transfer coding (MITC) scheme based on a lossless Haar wavelet transform is proposed. The scheme is first tested on an intranet (for both ROI and background) so that its results can be compared with Internet tests. An adaptive quantization algorithm is applied to the quasi-lossless ROI wavelet coefficients, while uniform quantization is applied to the lossy background wavelet coefficients. Finally, the retained quantization indices are entropy encoded with an optimal variable-length coding algorithm. The test results indicate that the proposed MITC performs much better over an intranet than over the Internet in terms of transfer time, while the quality of the reconstructed medical image remains constant regardless of the communication medium. For the best parameter settings, a compressed medical image file (760 KB reduced to 19.38 KB) is transmitted over the Internet (bandwidth = 1024 kbps) with a transfer time of 0.156 s, while the uncompressed file takes 6.192 s.
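A "lossless Haar wavelet transform" of the kind referenced above is commonly realized as the integer S-transform, which is exactly invertible and therefore suitable for the losslessly coded ROI. A one-level sketch on a 1D integer signal of even length (an assumed formulation; the paper's exact variant is not specified):

```python
import numpy as np

def haar_forward(x):
    """Integer Haar (S-transform): low-pass averages l and detail differences h.
    Exactly invertible, so the ROI can be coded without any loss."""
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    h = a - b
    l = b + (h >> 1)  # == floor((a + b) / 2); arithmetic shift gives floor
    return l, h

def haar_inverse(l, h):
    """Recover the even/odd samples exactly from (l, h)."""
    a = l + ((h + 1) >> 1)
    b = a - h
    x = np.empty(l.size + h.size, dtype=np.int64)
    x[0::2], x[1::2] = a, b
    return x
```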
Leaky Blood-Brain Barrier a Harbinger of Alzheimer's Disease - presentation made at Alzforum's live webinar of February 17, 2015. See details at: http://www.alzforum.org/webinars/leaky-blood-brain-barrier-harbinger-alzheimers
Presentation delivered by Dr. Carol Manning at the live webinar hosted by AlzPossible at www.alzpossible.org on the 17th of March, 2014.
Many algorithms have been developed to find sparse representations over redundant dictionaries or transforms. This paper presents a novel method for compressive sensing (CS)-based image compression using a sparse basis built on the CDF 9/7 wavelet transform. The measurement matrix is applied to three levels of wavelet transform coefficients of the input image for compressive sampling. Three different measurement matrices are used: a Gaussian matrix, a Bernoulli matrix, and a random orthogonal matrix. Orthogonal matching pursuit (OMP) and basis pursuit (BP) are applied to reconstruct each wavelet transform level separately. Experimental results demonstrate that the proposed method gives better compressed-image quality than existing methods in terms of the proposed image quality evaluation indexes and other objective measurements (PSNR/UIQI/SSIM).
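Orthogonal matching pursuit, one of the two reconstruction algorithms mentioned, greedily picks the dictionary atom most correlated with the current residual and then re-solves a least-squares problem on the support found so far. A compact sketch (generic OMP, not the authors' code; the stopping rule of exactly k iterations is an assumption):

```python
import numpy as np

def omp(A, y, k):
    """Recover an approximately k-sparse x from y ≈ A @ x (A: m x n)."""
    support = []
    residual = y.copy()
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # atom most correlated with what is still unexplained
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least squares restricted to the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x
```

On a tiny hand-checkable dictionary with two orthogonal atoms and one oblique atom, OMP recovers the true sparse coefficients exactly.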
AN EFFICIENT WAVELET BASED FEATURE REDUCTION AND CLASSIFICATION TECHNIQUE FOR... (ijcseit)
This paper proposes an improved feature reduction and classification technique to identify mild and severe dementia from brain MRI data. Manual interpretation of changes in brain volume by visual examination by a radiologist or physician may lead to missed diagnoses when large numbers of MRIs are analyzed. To avoid such human error, an automated intelligent classification system is proposed that classifies brain MRIs, after identifying abnormal MRI volumes, for the diagnosis of dementia. Advanced classification techniques using Support Vector Machines (SVMs) tuned by Particle Swarm Optimisation (PSO) and by a Genetic Algorithm (GA) are compared, and feature reduction by wavelets and by PCA is analysed. The analysis shows that the proposed PSO-based SVM classifier is more efficient than the GA-trained SVM, and that wavelet-based feature reduction yields better results than PCA.
Performance Evaluation of Percent Root Mean Square Difference for ECG Signals... (CSCJournals)
Electrocardiogram (ECG) signal compression plays a vital role in biomedical applications: compression detects and removes redundant information from the ECG signal, and wavelet transform methods are powerful tools for signal and image compression and decompression. This paper presents a comparative study of ECG signal compression with and without a preprocessing step on the ECG data. Performance and efficiency are reported in terms of the percent root-mean-square difference (PRD). Finally, a new PRD technique is proposed for performance measurement and compared with the existing PRD technique; the proposed technique achieves a lower PRD with improved results.
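PRD, the metric being compared, measures reconstruction error as a percentage of signal energy. A plain-numpy sketch of the classical definition (the paper's "new PRD" variant is not specified here, so only the standard form is shown):

```python
import numpy as np

def prd(original, reconstructed):
    """Percent root-mean-square difference: lower means the decompressed
    ECG is closer to the original signal."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(
        np.sum((original - reconstructed) ** 2) / np.sum(original ** 2)
    )
```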
COMPRESSED SENSING OF EXCITATORY POSTSYNAPTIC POTENTIAL BIO-SIGNALS (cscpconf)
Reducing the size of biosignal data is important because experiments generate huge amounts of it. In this paper, we efficiently compress excitatory postsynaptic potentials (EPSPs), one type of biosignal. To the best of the authors' knowledge, EPSP compression has not been studied before. An EPSP signal has the property that adjacent samples within a single excitatory postsynaptic potential have similar characteristics. Using this property, we propose a method that removes the temporal redundancy and statistical redundancy of EPSPs. The compressed and reconstructed EPSPs closely match the original signal without loss of analytic information.
This presentation is a starter for folks interested in the implementation and application of compressed sensing (CS) MRI. It includes a Matlab demo and list of well-known resources for CS MRI.
Lung Nodule Segmentation in CT Images using Rotation Invariant Local Binary P...IDES Editor
As the lung cancer is the leading cause of cancer
death in the medical field, Computed Tomography (CT) scan
of the thorax is widely applied in diagnoses for identifying
the lung cancer. In this paper, a technique of rotation invariant
with Local Binary Pattern (LBP) for segmentation of various
lung nodules from the Lung CT cancer data sets is used. This
is tested on various lung data sets from teaching files of
Casimage database and National Cancer Institute (NCI) of
National Biomedical Imaging Archive (NBIA). The results
show the segmented nodules with clear boundaries, which is
helpful in diagnosis of lung cancer. Further, the results are
compared with the watershed segmentation method, which
shows that LBP based method yields better segmentation
accuracy.
Lung Nodule Segmentation in CT Images using Rotation Invariant Local Binary P...
LONI_ASL_DOC
1. UCSF ASL Perfusion Processing Methods
Daniel Cuneo, Derek Flenniken, Duygu Tosun-Turgut, PhD, Norbert Schuff, PhD
November 6, 2012
2. Contents
Overview
Brief Description of ASL
ASL-MRI Pre-Processing Steps
Structural-to-ASL coregistration and nonlinear geometric distortion correction
Partial Volume Correction
Computation of CBF in T1 Space
FreeSurfer ROI Statistics
CBF Output Quality Control (QC) Criteria and Steps
Limitations
Processing Steps Flow Diagram
Simulated T2-weighted MRI
Data Set Information
References
Contact Information
3. Overview
The Center for Imaging of Neurodegenerative Diseases (CIND) processing pipeline for Arterial Spin Label (ASL)
imaging prepares perfusion-weighted images (PWI) and computes a quantitative map of cerebral blood flow
(CBF) together with a regional analysis. Quantification of CBF, and its correspondence to high-resolution
anatomical MRI data, is achieved by combining multiple public-domain tools, such as SPM8 [1], EPI nonlinear
geometric distortion correction [2], various FSL tools [3], FreeSurfer [4], the Insight Toolkit (ITK) [5], and
in-house MATLAB scripts.
In brief, the pipeline performs 1) motion correction of the individual ASL frames, 2) computation of the PWI
as the difference between the mean tagged and mean untagged ASL data sets, 3) alignment of ASL and structural
MRI data, 4) geometric distortion correction, 5) partial volume correction, and 6) CBF quantification in
physical units by normalizing ASL to an estimated blood water density signal.
The outputs include the PWI and a CBF image, both corrected for EPI distortion, in two representations: a) in
native perfusion MRI space and b) in the subject-specific space of the corresponding structural MRI data. Each
output offers users a variety of options for further processing and analysis of the data, such as subject-specific
analysis using anatomical regions of interest or group-specific analysis, such as parametric mapping. In addition,
summary measures of CBF within anatomically defined brain regions are provided separately.
Brief Description of ASL
The concept of ASL provides a means of measuring cerebral blood flow (CBF) completely noninvasively by
exploiting the endogenous spins of arterial water as a proxy for blood flow. Labeling (or tagging) is achieved by
selectively inverting the magnetization of the arterial spins using MRI principles. After a short delay, allowing
the arterial spins to reach the brain, conventional MRI methods are used to map the spatial distribution of the
ASL signal. The signal-to-noise ratio of the ASL signal is inherently low due to the intense background signal
from tissue water. Therefore, the ADNI-2 ASL acquisition protocol requires a set of 52 pairs of repeated tagged
and untagged image frames, which are then averaged to increase the SNR.
Many variants of ASL have been developed over the years, such as continuous, pulsed, and velocity selective
labeling. Here, we describe a pipeline to process a version of pulsed ASL method referred to as “QUIPSS II with
thin-slice TI1 periodic saturation” or “Q2TIPS” [6], augmented by conventional echo-planar imaging (EPI),
which has been used in ADNI-2. In Q2TIPS, arterial spins are labeled within a thick (≈ 10 cm) inversion slab,
placed proximal to the imaging slices, as shown in the figure below. The critical timing parameters of Q2TIPS
for quantifying flow are the ASL bolus duration and the transit delay from the time of inversion to the time of
EPI mapping.
4. Diagram of ASL regions
http://www.birncommunity.org/wp-content/uploads/2009/08/4077_Rasmussen.pdf
ASL-MRI Pre-Processing Steps
1. Motion Correction: The ASL image series is converted from DICOM to NiFTI format. Each frame is
aligned to the first frame to correct for head motion, by a rigid body transformation and least squares
fitting, using SPM8.
2. PWI Computation: The ASL images are arranged in groups of tagged and untagged scans, and the mean
of each group is computed and saved as an image. Then the difference of the mean tagged and mean
untagged images (mean_tagged − mean_untagged) is taken to obtain a PWI. In addition, the very first untagged
ASL image, which provides a fully relaxed MRI signal, is used as a reference of water density and termed
M0. The reference image is primarily used to calibrate the ASL signal for computation of the CBF, and
also as an intermediate frame to estimate the transformation to co-register images from the ASL
MRI and the structural MRI. In this capacity, M0 is used as the blood-water-density proxy, Mblood; see
below.
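The averaging and subtraction in step 2 can be sketched as follows. This is a minimal illustration only (frames represented as flat lists of voxel values, assuming an untagged-first interleaved acquisition order), not the pipeline's MATLAB implementation:

```python
def mean_image(frames):
    """Voxel-wise mean across a list of equally sized frames."""
    n = len(frames)
    return [sum(v) / n for v in zip(*frames)]

def compute_pwi(series):
    """series: frames ordered [untagged, tagged, untagged, tagged, ...].
    Returns (pwi, m0): pwi = mean_tagged - mean_untagged, and m0 is the
    very first untagged frame, later used as the blood-water-density proxy."""
    untagged = series[0::2]
    tagged = series[1::2]
    m0 = series[0]
    mean_t = mean_image(tagged)
    mean_u = mean_image(untagged)
    pwi = [t - u for t, u in zip(mean_t, mean_u)]
    return pwi, m0
```
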
3. Intensity Scaling: The PWI image, as well as the M0 image, is intensity scaled. The PWI is scaled to
account for signal decay during acquisition and also to provide intensities in meaningful physical units.
The M0 is scaled to become Mblood, that is, an estimate of blood water density.
Description of PWI Scaling: PWI intensities are corrected for the longitudinal relaxation of the spin
tracers during data acquisition, including time lags between consecutive image slices, as well as for incomplete
labeling. Assuming an ascending slice order at acquisition with n = 1 indicating the first slice, the intensity
correction for the nth slice is:

PWI_lag-corr = [ PWI_raw / (2 α TI_1) ] · exp[ R_1a ( TI_2 + (n − 1) τ ) ] ,   (1)

where α is the tagging efficiency, R_1a is the longitudinal relaxation rate of blood, TI_1 is the inversion time of
the arterial spins, TI_2 is the total transit time of the spins, n indexes the slice, and τ is the time lag between
slices. Specific intensity correction parameters used in our implementation are reported in Table 1. R_1a is a
biological parameter, whereas TI_1, TI_2, and τ are specific to the ADNI-2 acquisition protocol. The resultant image
in the native ASL image space is provided in units of blood flow.
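Eq. (1) with the Table 1 parameter values can be sketched per voxel as follows (illustrative Python, not the pipeline's MATLAB code; all times in ms):

```python
import math

# Table 1 values (ADNI-2 acquisition protocol)
ALPHA = 0.95         # tagging efficiency
R1A = 1.0 / 1684.0   # longitudinal relaxation rate of blood (1/ms)
TI1 = 700.0          # inversion time of arterial spins (ms)
TI2 = 1900.0         # total transit time of the spins (ms)
TAU = 22.5           # time lag between consecutive slices (ms)

def pwi_lag_corr(pwi_raw, n):
    """Apply the Eq. (1) correction to a raw PWI value in slice n (n = 1 is
    the first acquired slice): undo labeling inefficiency and T1 decay."""
    return pwi_raw / (2.0 * ALPHA * TI1) * math.exp(R1A * (TI2 + (n - 1) * TAU))
```

Later slices receive a larger correction factor, since the label has relaxed for an extra (n − 1)τ by the time they are acquired.
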
Intensity Scaling of M0: The control (M0) image is scaled to estimate a map of the blood magnetization,
Mblood, by correcting for the lower density of tissue water relative to arterial blood water, λ, and for the different
relaxation characteristics of tissue and arterial blood water, as follows:

M_blood = ( M_0 / λ ) · exp[ ( R*_2,tissue − R*_2,blood ) TE ] ,   (2)

where λ is the blood/water ratio in tissue, R*_2,tissue and R*_2,blood are the effective transverse relaxation rates
of tissue and blood, and TE is the echo time. See Table 1 for the parameter values.
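Eq. (2) can likewise be sketched per voxel with the Table 1 values (illustrative only; TE is the protocol echo time, 12 or 13 ms):

```python
import math

LAMBDA = 0.90            # blood/water ratio in tissue
R2S_TISSUE = 1.0 / 44.0  # effective transverse relaxation rate of tissue (1/ms)
R2S_BLOOD = 1.0 / 43.0   # effective transverse relaxation rate of blood (1/ms)

def m_blood(m0, te_ms=12.0):
    """Scale an M0 voxel value into an estimate of blood magnetization,
    correcting for the tissue/blood water ratio and R2* differences."""
    return m0 / LAMBDA * math.exp((R2S_TISSUE - R2S_BLOOD) * te_ms)
```

Since R*_2,tissue is slightly smaller than R*_2,blood in magnitude terms (1/44 < 1/43), the exponential factor is slightly below one.
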
Table 1
Parameter      Value
α              0.95
λ              0.90
τ              22.5 ms
TE             12 ms or 13 ms (protocol specific)
TI_1           700 ms
TI_2           1900 ms
R_1a           1/(1684 ms) [8]
R*_2,tissue    1/(44 ms) [9]
R*_2,blood     1/(43 ms) [10]
Computing a CBF in ASL Space: A CBF map in the ASL space can be formed from M_blood and
PWI_lag-corr using

CBF_ASL = PWI_lag-corr / M_blood .   (3)
Structural-to-ASL coregistration and nonlinear
geometric distortion correction
EPI-based perfusion images suffer from nonlinear geometric distortion due to magnetic susceptibility variations,
whereas structural MR images are less susceptible to geometric distortions. This is a key challenge in achieving
an accurate anatomical match between data from ASL-MRI and structural MRI. We use a variational
image-based approach to correct for these EPI-induced nonlinear geometric distortions in ASL-MRIs.
The geometric distortion correction step is performed in the PWI native space. Instead of using the T2∗ image,
we use a simulated T2 weighted image, made from the T1 in the acquisition series. The T2 simulation process
is briefly described in the appendix.
The processing steps for nonlinear geometric distortion correction are as follows:
• SimT2 image is masked leaving a layer of sulcal CSF around the brain.
• Mblood is skull stripped with BET [7] and coregistered to the masked SimT2 using the normalized mutual
information metric with 9 degrees of freedom.
• The masked SimT2 is downsampled to the Mblood space and named T2rPWI.
• Distortion correction is performed on the Mblood using the T2rPWI, and the correction field is applied to the
PWI.
Partial Volume Correction
We are interested in the blood flow of gray matter tissue (GM), which is bounded by cerebrospinal fluid (CSF)
and white matter (WM) tissues. To correct for variations in both PWIlag−corr and Mblood due to variable
coverage of GM, WM and CSF at each voxel, both images are corrected for the tissue partial volume effects
before computation of a CBF map.
Both PWI_lag-corr and M_blood maps are first upsampled to the T1-weighted MR image space using trilinear
interpolation and the inverse of the affine transformation estimated to coregister the simulated T2 to the masked M_blood.
The PWI_lag-corr intensities are adjusted voxel-wise, according to the GM and WM tissue density distributions
estimated using the 4-tissue segmentation provided by SPM8, and assuming a constant ratio between GM and WM
perfusion. The intensities are corrected according to

PWI_gm = PWI_rT1 · [ β_GM / ( β_GM + 0.4 β_WM ) ] .   (4)

Here, PWI_rT1 is the PWI_lag-corr resliced to the T1 image space, and β_GM and β_WM are the volume fractions of gray
matter and white matter, respectively.
In this partial volume model of PWI_rT1, we assume that the signal at each voxel is a weighted linear combination of
perfusion from GM and WM, with the weighting coefficients expressing perfusion in terms of the corresponding
tissue densities (i.e., β_i for i = GM, WM), and that the ratio of GM to WM perfusion is spatially constant at a
value of 2.5 (Kanetaka et al., 2004) [11]; the factor 0.4 in Eq. (4) is the inverse of this ratio.
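Eq. (4) at a single voxel can be sketched as follows (β values are the tissue volume fractions from the segmentation; illustrative only, not the pipeline code):

```python
def pwi_gm(pwi_rt1, beta_gm, beta_wm):
    """Gray-matter partial volume correction of a resliced PWI voxel.
    0.4 = 1/2.5 encodes the assumed WM:GM perfusion ratio."""
    denom = beta_gm + 0.4 * beta_wm
    if denom == 0.0:
        return 0.0  # no GM/WM content at this voxel; nothing to attribute
    return pwi_rt1 * beta_gm / denom
```

A pure-GM voxel (β_GM = 1, β_WM = 0) is left unchanged; mixed voxels have the WM contribution discounted by the perfusion ratio.
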
A similar PVC is performed for the M_blood. In contrast to the PWI, however, partial volume correction for the tissue
water reference image needs to include CSF. Partial volume correction of the M_blood is computed according to

M_blood-PVC = M_rT1-blood · [ γ⁻¹ GM + γ₁⁻¹ WM + γ₂⁻¹ CSF ] + κ .   (5)

The ratios of tissues to water are

γ = GM/H2O = 0.78 ,  γ₁ = WM/H2O = 0.65 ,  γ₂ = CSF/H2O = 0.97 ,

and

κ = [ 1 − (GM + WM + CSF) ] / 1×10^100 .

The κ parameter is used to regularize the CBF computation (see next section) in the limit where a voxel is
made up entirely of gray matter in the CBF (numerator) image, or entirely of CSF.
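One plausible per-voxel reading of Eq. (5) — each tissue fraction divided by its tissue-to-water ratio, plus the vanishingly small κ — can be sketched as follows. This is an assumption about the garbled formula, not a confirmed reconstruction of the pipeline code:

```python
# Tissue-to-water ratios from the document
GAMMA_GM, GAMMA_WM, GAMMA_CSF = 0.78, 0.65, 0.97

def m_blood_pvc(m_rt1_blood, gm, wm, csf):
    """Partial-volume-corrected blood water reference at a voxel.
    gm, wm, csf are tissue volume fractions (summing to <= 1).
    kappa is vanishingly small but nonzero when the fractions do not
    fill the voxel, keeping the later CBF division well defined."""
    kappa = (1.0 - (gm + wm + csf)) / 1e100
    return m_rt1_blood * (gm / GAMMA_GM + wm / GAMMA_WM + csf / GAMMA_CSF) + kappa
```
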
Computation of CBF in T1 Space
The PWI_PVC is normalized by the M_blood-PVC to express the ASL signal in physical units of blood flow
(ml/100 g/min), yielding CBF according to

CBF = PWI_PVC / M_blood-PVC .   (6)

The M_blood-PVC in the denominator is spatially smoothed using a 3D isotropic Gaussian smoothing kernel, whose
FWHM is the largest voxel dimension, in millimeters, of the native ASL MR image. The normalization also
eliminates spatial B1 inhomogeneity by definition, since both maps are subject to the same B1 inhomogeneity
distribution.
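The stated FWHM can be converted to the Gaussian σ of the smoothing kernel with the standard relation FWHM = 2√(2 ln 2) σ; a minimal sketch (illustrative, not the pipeline's MATLAB code):

```python
import math

def fwhm_to_sigma(fwhm_mm):
    """Gaussian standard deviation corresponding to a given FWHM (in mm),
    using FWHM = 2 * sqrt(2 * ln 2) * sigma (factor ~2.3548)."""
    return fwhm_mm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
```

For example, a 4 mm largest voxel dimension gives σ ≈ 1.7 mm.
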
The CBF image is also thresholded for high-intensity outliers by finding the intensity, C, that corresponds
to 1% of the mean CBF signal. The global mean CBF is computed as the center of the mode in a histogram of
the normalized output. CBF intensities are then truncated such that intensities greater than C are set to C + ϵ_r,
where ϵ_r is a random number between zero and one.
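The truncation step can be sketched as follows; how C is derived from the histogram mode is not reproduced here (the cutoff is simply passed in), so this is illustrative only:

```python
import random

def truncate_outliers(cbf, c, rng=random.random):
    """Replace intensities greater than the cutoff c with c plus a random
    epsilon in [0, 1), leaving all other values untouched."""
    return [v if v <= c else c + rng() for v in cbf]
```
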
The CBF data type is initially double-precision floating point. The final output CBF is saved with a
data type of INT32. To preserve precision through this change of data type, the values are scaled up by 1 × 10^4.
Therefore, all values read from a viewer must be divided by 1 × 10^4 to obtain CBF in units of ml/100 g/min:

CBF = voxel intensity / 1 × 10^4 .   (7)
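The fixed-point round trip of Eq. (7) can be sketched as follows (illustrative; hypothetical helper names, not the pipeline code):

```python
SCALE = 10_000  # CBF values are stored as INT32 after scaling by 1e4

def to_int32(cbf):
    """Scale floating-point CBF values and round to integers for storage."""
    return [int(round(v * SCALE)) for v in cbf]

def from_int32(vals):
    """Recover CBF in ml/100 g/min from stored integer voxel intensities."""
    return [v / SCALE for v in vals]
```

The scaling preserves four decimal places of the CBF value through the integer conversion.
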
FreeSurfer ROI Statistics
In addition to the CBF image, statistics from anatomical regions of interest (ROIs) are also computed from the
CBF image in T1 space. FreeSurfer is used to segment subcortical structures and parcellate the cortical surface
into ROIs. Because the ASL-MRI acquisition might have partial brain coverage (e.g., possible
temporal and occipital cut-offs), FreeSurfer ROIs are truncated with a non-zero CBF brain mask. An Insight
Toolkit (ITK)-based in-house C++ program is then employed to compute the quantities listed below. The
output is a space-delimited text file named ‘ROIperfusion.log’ and contains only regions with a non-zero count
of voxels in the ROI.
• number of voxels in the ROI
• minimum
• maximum
• mean
• median
• standard deviation
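The per-ROI quantities above can be sketched in a few lines of Python; this is an illustrative stand-in for the ITK-based C++ program, operating on a flattened label image and CBF image:

```python
import statistics

def roi_stats(labels, cbf):
    """Group voxels by their FreeSurfer label and report first-order
    statistics per non-empty ROI (count, min, max, mean, median, std)."""
    rois = {}
    for lab, v in zip(labels, cbf):
        rois.setdefault(lab, []).append(v)
    out = {}
    for lab, vals in rois.items():
        out[lab] = {
            "n": len(vals),
            "min": min(vals),
            "max": max(vals),
            "mean": statistics.fmean(vals),
            "median": statistics.median(vals),
            "std": statistics.stdev(vals) if len(vals) > 1 else 0.0,
        }
    return out
```
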
8. CBF Output Quality Control (QC) Criteria and Steps
1. Image Quality QC
(a) Raw Data QC: Pass/Fail
i. This is the only level where we are making a judgment based on the image quality.
ii. All data is run regardless of QC
2. Processing QC: Registration check to determine the success of the processing pipeline
(a) Registration QC: The output image check and the only category that determines exclusion from
uploads.
i. This QC is performed by overlaying the Final Mblood file (“SubjectCode Mblood-EpiCorr-rT1-
PVC-Smoothed.nii”) onto the raw T1 (“SubjectCode T1.nii”).
ii. Pass: Mblood overlays completely onto the T1. Verified by checking brain edges and, if possible,
the ventricles.
iii. Follow-up: anything strange, e.g., brain edges not aligned, ventricles off, or a 3D lateral shift.
iv. Note: We are not doing a QC on the quality of the Mblood image, so any signal loss in the
extremities, such as the inferior temporal or medial orbitofrontal regions, does not affect the
rating. We are checking for a successful registration only.
(b) There is no rating of Fail in the processing QCs because at the moment we are attempting to resolve
all issues that require follow-up.
[Figure: Mblood-EpiCorr-rT1-PVC-Smoothed.nii overlaid on T1 — examples of QC Registration Pass and QC Registration Follow-Up]
9. Limitations
1. DEFORMATION FIELD: The method of variational matching for the distortion correction in ASL does
not account for intensity variations due to pixel aliasing. Thus, ASL values from regions with prominent
susceptibility distortions should be interpreted with caution.
2. INTENSITY SCALING: The current implementation of intensity scaling of PWI and M0 maps does not
account for regional and subject-specific variations in the signal relaxation rates. Moreover, the scaling
of PWI does not account for regional and subject-specific variations in transit delays of the ASL bolus.
Therefore, a bias in scaling is introduced to the extent that signal relaxation rates and transit delays vary
across regions and subjects.
3. PARTIAL VOLUME CORRECTION : The current implementation of partial volume correction does not
account for regional variations in the ratio of gray matter to white matter perfusion nor is the ratio value
driven by the data. Therefore, a bias in gray matter perfusion is introduced to the extent that the ratio
varies across regions and subjects.
OUTPUT SUMMARY
EPI Corrected ASL Space
Theoretical Name ................................................ Filename
PWI_scaled-EpiCorr ................................................ ScaledPWI-aACPC-EpiCorr.nii
M_blood-EpiCorr ................................................ Mblood-EpiCorr.nii
T1 Space
PWI_scaled-EpiCorr-rT1 ................................................ ScaledPWI-aACPC-EpiCorr-rT1.nii
M_blood-EpiCorr-rT1 ................................................ Mblood-EpiCorr-rT1.nii
GM segmentation image ................................................ T1-GmMask.nii
WM segmentation image ................................................ T1-WmMask.nii
CSF segmentation image ................................................ T1-CsfMask.nii
PWI_scaled-EpiCorr-rT1-PVC ................................................ ScaledPWI-aACPC-EpiCorr-rT1-PVC.nii
M_blood-EpiCorr-rT1-PVC ................................................ Mblood-EpiCorr-rT1-PVC.nii
CBF ................................................ ScaledPWI-aACPC-EpiCorr-rT1-PVC-Norm.nii
FreeSurfer ROI Statistics
First-order statistics per ROI ................................................ ROIperfusion.log
11. Appendix: Simulated T2-weighted MRI
A framework to simulate subject-specific T2-weighted MR images was engineered by Dr. Duygu Tosun-Turgut.
The framework is based on the work by Benoit-Cattin et al. [12]. To mimic clinical neuroimaging practice, we
chose the following 2D Turbo Spin Echo (TSE) sequence parameters for the simulated T2-MRI.
Parameter Value
TR 3210 ms
TE 101 ms
slice thickness 3 mm
resolution 1 × 1 mm
flip angle 150◦
bandwidth 185 Hz
GRAPPA 2
In brief, the steps to make a simulated T2-MRI are as follows:
• Create a binary brain tissue mask from FreeSurfer ’aparc+aseg’ anatomical annotation.
• Apply a morphological closing operator followed by a morphological dilation to eliminate holes in brain
mask and to include sulcal CSF.
• Apply resulting brain mask to T1-MRI, yielding skull-stripped T1 image.
• Downsample the skull-stripped T1 image to 1 × 1 × 3 mm³ resolution to mimic clinical neuroimaging
practice.
• Apply a fuzzy c-means clustering algorithm to estimate GM, WM, and CSF tissue density at each image
voxel in the downsampled skull-stripped T1 image space.
• Based on selected acquisition parameters and tissue density estimates, simulate a T2-MRI.
Each 2D axial slice is simulated separately and finally merged back to form a simulated 3D T2-MRI.
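The mask clean-up in the first two steps (a morphological closing followed by a dilation) can be illustrated on a tiny 2D binary mask. This pure-Python 4-connectivity sketch is illustrative only; the pipeline operates on 3D FreeSurfer masks:

```python
NEIGHBORS = ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))  # 4-connectivity + center

def dilate(mask):
    """Set a pixel if any pixel in its neighborhood is set."""
    h, w = len(mask), len(mask[0])
    return [[1 if any(0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
                      for dy, dx in NEIGHBORS) else 0
             for x in range(w)] for y in range(h)]

def erode(mask):
    """Keep a pixel only if its whole neighborhood is set (out-of-bounds
    treated as set, i.e. the image border is padded with ones)."""
    h, w = len(mask), len(mask[0])
    return [[1 if all(not (0 <= y + dy < h and 0 <= x + dx < w) or mask[y + dy][x + dx]
                      for dy, dx in NEIGHBORS) else 0
             for x in range(w)] for y in range(h)]

def close_then_dilate(mask):
    """Closing (dilate, then erode) to fill holes, then a dilation to
    grow the mask outward and include sulcal CSF."""
    return dilate(erode(dilate(mask)))
```

On a 3×3 mask with a single interior hole, the closing fills the hole, mirroring how the brain-mask holes are eliminated before the sulcal CSF layer is added.
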
Data Set Information
UCSF ASL FS Uploaded on November 1st, 2012
References
1. http://www.fil.ion.ucl.ac.uk/spm/
2. Ran Tao, P. Thomas Fletcher, et al. (2009). ”A Variational Image-Based Approach to the Correction
of Susceptibility Artifacts in the Alignment of Diffusion Weighted and Structural MRI.” Inf Process Med
Imaging 21: 664-675.
3. http://www.fmrib.ox.ac.uk/fsl/
4. http://surfer.nmr.mgh.harvard.edu/fswiki/FreeSurferMethodsCitation
5. T.S. Yoo, M. J. Ackerman, W. E. Lorensen, W. Schroeder, V. Chalana, S. Aylward, D. Metaxes, R.
Whitaker. Engineering and Algorithm Design for an Image Processing API: A Technical Report on ITK -
The Insight Toolkit. In Proc. of Medicine Meets Virtual Reality, J. Westwood, ed., IOS Press Amsterdam
pp 586-592 (2002).
6. Luh et al., Magn. Reson. Med. 41:1246-1254 (1999)
7. S.M. Smith, Fast robust automated brain extraction; Human Brain Mapping, 17(3):143-155, November
2002.
8. Lu et al., Magn. Reson. Med. 52(3):679-82 (2004)
9. Donahue et al., Magn. Reson. Med. 56:1261-73;(2006)
10. Zhao et al., Magn. Reson. Med. 58:592-97 (2007)
11. Kanetaka, H., Matsuda, H., Asada, T., Ohnishi, T., Yamashita, F., Imabayashi, E., Tanaka, F., Nakano, S.,
Takasaki, M., 2004. Effects of partial volume correction on discrimination between very early Alzheimer’s
dementia and controls using brain perfusion SPECT. Eur. J. Nucl. Med. Mol. Imaging 31, 975-980.
12. H. Benoit-Cattin, G. Collewet, B. Belaroussi, H. Saint-Jalmes, C. Odet, The SIMRI project: a versatile
and interactive MRI simulator, Journal of Magnetic Resonance, Volume 173, Issue 1, March 2005, Pages
97-115, ISSN 1090-7807, 10.1016/j.jmr.2004.09.027.
Contact Information
This document was prepared by Daniel Cuneo, UCSF/SF VA Medical Center. For more information please
contact Alix Simonson at alix.simonson@va.gov or Diana Truran-Sacrey at diana.truran@ucsf.edu.