Today I will be talking about my Master's thesis work, “Rician Noise Removal in DT-MRI.”
The way I have organized this talk is as follows: I'll start with a brief review of DT-MRI, a quick introduction to the basic principles involved. Then I will talk about the goals of this thesis: what is it that we wanted to accomplish? I will discuss the motivation behind our work: why is noise removal in DT-MRI important in the first place, and why is it important to consider Rician noise in the filtering process? Next I'll talk about various recent filtering approaches, briefly describing each of them. Then I'll describe the new Rician bias correction filtering method that came out of this thesis. We will look at the results and a discussion of those results. I will end with a conclusion, where I summarize and discuss possible directions for future research, and finally questions.
I decided I would start this presentation with an article I came across, published in Technology Review earlier this year. Technology Review is an independent magazine from MIT which comes out with a bimonthly issue featuring the latest research activities and technology trends. This year's March-April issue mentions DT-MRI as one of the top 10 emerging technologies. The point of this slide is that although DT-MRI was introduced in 1986 by Basser and Le Bihan, it is only recently that it has started gaining importance as more and more applications have started making use of it.
This is an image of the white matter fibers in the brain. I show this figure because it is DT-MRI technology that makes it possible to look at images like this.
So what is DT-MRI? In a nutshell, DT-MRI is an imaging technique which computes a 3x3 matrix. This matrix characterizes how water diffuses across fibers in the human brain. The 9 coefficients of this matrix measure the diffusion of water along various spatial directions (the x, y, z directions and the xy, yz and zx planes). This matrix has two important properties: 1. it is symmetric (D = D transpose), and 2. it is positive definite, which essentially implies that all its eigenvalues are positive. The key idea is that water diffuses more “along” the direction of the fibers than across them. So by measuring water diffusion we are actually measuring the orientation of fiber tracts in our brains.
Once we have these diffusion tensors, we typically visualize them in a way that helps us infer the connectivity and orientation of the brain fibres. Tensor visualization is a research area by itself, and I will mention a couple of techniques here. One way to visualize a tensor is to form a cuboid whose dimensions are the eigenvalues of the tensor, as you can see in the left figure. Another is to form an ellipsoid whose semimajor and two semiminor axes are given by the eigenvalues. Gordon Kindlmann came up with a class of functions called superquadrics which are even better for tensor visualization. Notice that if one of the eigenvalues is large, these shapes will appear long and skinny and indicate that the tensors are oriented in that direction. In addition to the shape of the glyph, there are two common tensor-derived measures that are also visualized: 1. the tensor's orientation (the direction of the principal eigenvector), which is also the orientation of the white matter fibre, and 2. the tensor's directionality/anisotropy. The most common measure of a tensor's anisotropy is the FA (fractional anisotropy), which varies from 0 to 1: 0 means completely isotropic and 1 means very high anisotropy. The figure on the right is an example of a coronal slice of the brain where these tensors are visualized using superquadrics. The hue encodes direction (so the red tensors are oriented along x and the blue along y) and saturation encodes anisotropy, so deep shades are more anisotropic than lighter ones.
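The FA measure mentioned above can be computed directly from the three eigenvalues. A minimal sketch (the function name is mine, not from the thesis code):

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three eigenvalues of a diffusion tensor:
    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||, in [0, 1]."""
    ev = np.asarray(evals, dtype=float)
    num = np.sqrt(((ev - ev.mean()) ** 2).sum())
    den = np.sqrt((ev ** 2).sum())
    return 0.0 if den == 0.0 else np.sqrt(1.5) * num / den

print(fractional_anisotropy([1.0, 1.0, 1.0]))    # 0.0: completely isotropic
print(fractional_anisotropy([1.0, 1e-3, 1e-3]))  # close to 1: long and skinny
```

An isotropic tensor (equal eigenvalues) gives FA = 0, while a very elongated one approaches 1, matching the color saturation encoding described above.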
Let's now briefly look at how these tensors are actually computed. The main idea is that you start by placing the patient in a uniform magnetic field and then apply radio-frequency pulses. The echoes from these pulses are measured by a scanner and form what is called the “baseline image”. In the next stage, in addition to the uniform magnetic field, an additional gradient magnetic field is applied in various directions to obtain what are known as diffusion-weighted images (DWIs) or gradient-sensitized images. The point to note here is that if there is more diffusion along a particular gradient direction, the intensity in the corresponding DWI is attenuated with respect to the baseline image. So the greater the attenuation of the signal, the higher the amount of diffusion. (Example: point out the CSF.) Stejskal and Tanner gave an equation which relates the intensity of the baseline image and that of the DWI to the diffusion tensor D. In this equation A_i is the intensity of the ith DWI, A_0 is the intensity of the baseline image, b is the diffusion-weighting factor (the scanner field strength is typically 1-2 Tesla), g_i is a vector denoting the gradient direction, and D is the diffusion tensor matrix. In this equation our only unknown is D: A_i and A_0 are obtained through measurements, and b and g_i are known. If we apply logarithms, the equation can be written as shown. Since D is our unknown and it has six unique coefficients (since D is symmetric), we need a minimum of 6 DWIs to solve for it. Having more DWIs overconstrains the system and we can obtain a better solution using a linear least-squares fit.
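The least-squares fit just described can be sketched as a round trip: simulate noiseless DWIs from a known tensor with the Stejskal-Tanner equation, then re-estimate the tensor. This is an illustration under my own choices (function name, gradient set, b-value), not the thesis implementation:

```python
import numpy as np

def fit_tensor(A0, A, gradients, b):
    """Linear least-squares fit of D from the log Stejskal-Tanner equation
    ln(A_i / A_0) = -b * g_i^T D g_i, with the six unique tensor
    coefficients as unknowns. Needs at least 6 gradient directions."""
    G = np.asarray(gradients, dtype=float)
    # Design-matrix rows: [gx^2, gy^2, gz^2, 2 gx gy, 2 gy gz, 2 gx gz]
    M = np.column_stack([G[:, 0]**2, G[:, 1]**2, G[:, 2]**2,
                         2 * G[:, 0] * G[:, 1],
                         2 * G[:, 1] * G[:, 2],
                         2 * G[:, 0] * G[:, 2]])
    y = -np.log(np.asarray(A, dtype=float) / A0) / b
    d, *_ = np.linalg.lstsq(M, y, rcond=None)
    Dxx, Dyy, Dzz, Dxy, Dyz, Dxz = d
    return np.array([[Dxx, Dxy, Dxz],
                     [Dxy, Dyy, Dyz],
                     [Dxz, Dyz, Dzz]])

# Round trip with 6 gradient directions at 45 degrees in the xy/yz/zx planes.
D_true = np.diag([1.5e-3, 4e-4, 4e-4])
b = 1000.0
g = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1],
              [1, -1, 0], [0, 1, -1], [-1, 0, 1]]) / np.sqrt(2)
A0 = 250.0
A = A0 * np.exp(-b * np.einsum('ij,jk,ik->i', g, D_true, g))  # g_i^T D g_i
print(np.allclose(fit_tensor(A0, A, g, b), D_true))  # True
```

With exactly 6 directions the system is square; with more it is overconstrained and `lstsq` gives the least-squares solution, as the talk notes.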
So what were the goals of this thesis? Simply put, we wanted to be able to answer questions like: What is the best method to remove noise in DT-MRI data? How good are the current filtering methods, and is there one that is better? Is there a better way of doing the filtering?
Let us now look at the motivation for this work. Why do we want to do all this? The answer lies in two questions: 1. Why is DT-MRI filtering important in the first place? And 2. Why do we need to account for Rician noise in the filtering process? Let's look at each of these questions in further detail.
So why perform DT-MRI filtering? The biggest reason is that DT-MRI is inherently plagued by low signal-to-noise ratios. Typically multiple scans are needed to increase SNR, but this is constrained by issues of acquisition time, patient comfort and system throughput. More scans need more time, and this requires the patient to stay still for a longer duration. Also, more scans mean fewer individual patients can be scanned, reducing throughput. (A 2x2x2 mm resolution scan of a human brain on a 3T machine takes around 12 minutes or longer; it is standard procedure to acquire multiple scans and average them to increase SNR.) There are also issues with registration: if the patient moves between the acquisition of two separate slices, these would need to be registered, introducing the possibility of errors. Partial voluming refers to the problem arising from the resolution of acquisition: if the resolution is not fine enough, a voxel will cover a region of non-homogeneous tissue and the acquired signal will be a combination of the signals corresponding to each tissue category. So for all these reasons, SNR in DT-MRI is low, and post-processing methods like filtering are vital to improve it.
The next question we want to answer is why it is important to account for Rician noise in the filtering process. To do this, let's look at three related questions. First, we need to understand what Rician noise is and how it arises in DT-MRI. Then we need to know how Rician noise affects the tensors, and finally we need to look at what previous filtering methods have done to address this problem.
To understand Rician noise, let's look at how it arises in DT-MRI. DWI images are magnitudes of complex-valued signals, so each pixel is the absolute value of a complex number. If we add Gaussian noise to the real and imaginary components of a complex number and take its magnitude, the resulting magnitude image will have noise that is characterized by a Rician distribution. So in this slide A is a complex number, G(sigma) denotes Gaussian noise with standard deviation sigma, and A_0 is the magnitude image. Here A_0 is corrupted with Rician noise.
Mathematically, Rician noise is described by an expression for its probability distribution. Formally, a signal A is said to be corrupted with Rician noise with standard deviation sigma if its pdf (probability distribution function) is given by this expression.
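For concreteness, here is a small sketch of the Rice pdf and of generating Rician-corrupted magnitudes from Gaussian channel noise, as described on the previous slide (NumPy/SciPy assumed; function names are my own):

```python
import numpy as np
from scipy.special import i0  # modified Bessel function of the first kind, order 0

def rician_pdf(x, A, sigma):
    """Rice pdf: p(x | A, sigma) =
    (x / sigma^2) * exp(-(x^2 + A^2) / (2 sigma^2)) * I0(x A / sigma^2)."""
    s2 = sigma ** 2
    return (x / s2) * np.exp(-(x**2 + A**2) / (2 * s2)) * i0(x * A / s2)

def add_rician_noise(A, sigma, rng):
    """Corrupt clean magnitudes A by adding Gaussian noise to the real and
    imaginary channels and taking the magnitude."""
    real = A + rng.normal(0.0, sigma, size=np.shape(A))
    imag = rng.normal(0.0, sigma, size=np.shape(A))
    return np.hypot(real, imag)
```

Note that for a large argument x*A/sigma^2 the unscaled `i0` can overflow; SciPy's exponentially scaled `i0e` is the numerically safer choice in that regime.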
Now if we plot this distribution we get the following graph. This plot shows the Rician pdf for various values of the true signal value A. On the x axis is the measured signal value and on the y axis the probability density. 1. As we can see from the shape of the pdf, the plots are not always symmetric about the true value of the signal: for example, when the true value is 4, the pink plot appears to be symmetric, but when the true value is 0 or 1 (purple or black), the plot has a bias towards the positive side. 2. What this means is that when the true value is smaller, adding noise will result in a noisy value that has a positive bias. Example: when the true value is zero, adding noise will always cause the signal value to become more positive.
Now if we compare the Rician pdf with the Gaussian (normal) distribution, we see that the normal distribution is always symmetric about the true value of the signal, which means that if you add noise to the signal it is equally likely that the noise will cause the signal value to increase or decrease. There is no bias. 3. And finally, if we go back to the Rician distribution, we see that as the true value of the signal becomes larger, the Rice distribution tends towards a normal distribution.
To emphasize the positive bias introduced by the Rician distribution, I did an experiment. I took 10000 samples of a signal and added both Rician and Gaussian noise (with standard deviation sigma = 20) to it. I averaged all the samples and subtracted the original clean value to compute the bias, so essentially Bias = E[noisy signal] - true value. The plot shows the true value of the signal on the x axis and the bias on the y axis. In the plot the green curve is the Gaussian and the red the Rician distribution. We see that for the Gaussian distribution the bias is always close to zero, as the averaging tends to cancel out the noise, but for the Rician distribution the bias is significant for signal values less than 100.
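The bias experiment just described can be reproduced roughly like this. This is a sketch with my own seed and helper names, not the original experiment code:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma, n = 20.0, 10000  # noise level and number of samples, as in the experiment

def rician_bias(A):
    """Bias E[noisy] - A when a magnitude signal A is corrupted by Rician noise."""
    real = A + rng.normal(0, sigma, n)
    imag = rng.normal(0, sigma, n)
    return np.hypot(real, imag).mean() - A

def gaussian_bias(A):
    """Same measurement for plain additive Gaussian noise."""
    return (A + rng.normal(0, sigma, n)).mean() - A

# Rician bias is large for small signals (~ sigma * sqrt(pi/2) ~ 25 at A = 0)
# and fades as A grows; the Gaussian bias stays near zero everywhere.
for A in (0.0, 50.0, 200.0):
    print(A, rician_bias(A), gaussian_bias(A))
```

Sweeping A from 0 upward reproduces the red and green curves on the slide: a large positive Rician bias at low signal values that vanishes as the signal grows.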
Now that we understand Rician noise, let's see how it affects the estimated tensors. Previous studies in the literature have shown that as noise increases, the trace decreases and the FA increases. However, when we performed Monte Carlo simulations using tensors characteristic of those found in the human brain, we found that these effects depend on the tensor (fiber tract) orientation with respect to the gradient directions used in the acquisition. For example, when tensors are aligned with a gradient direction, adding noise can cause the FA to actually go down and the trace to decrease even further.
So in this slide the tensor on top splits the gradient directions, while the lower one is aligned with a gradient direction.
So based on these simulations we concluded that Rician noise can lead to incorrect FA estimation depending on how a person sits inside the scanner, because the FA estimate depends on orientation. This can have important effects on clinical studies based on the computed FA. The bottom line, then, is that it is important to address Rician noise in the filtering approach.
Let's now look at some previous techniques for DT-MRI filtering. All existing DT-MRI filtering techniques can be broadly classified into two categories: 1. those which operate on the DW images, and 2. those which regularize the estimated tensors. In the DWI space, two recent papers are the one by Parker and another by Wang and Vemuri. Both basically perform anisotropic diffusion filtering on the inputs; Wang and Vemuri additionally constrain the estimated tensors to remain positive definite. Just to briefly recap, in anisotropic diffusion we iteratively update the noisy image with its Laplacian weighted by a conductance term, to preserve edges and prevent blurring of features. Edge-preserving anisotropic diffusion was first shown by Perona and Malik (1990). In the tensor domain, we have the work done by Pennec. Pennec introduced a framework which preserves the positive-definiteness of the tensors: they perform anisotropic diffusion on a Riemannian manifold, where each point on the manifold represents a symmetric positive definite tensor. The other paper, by Martin, uses Gaussian Markov random fields and again is equivalent to an anisotropic diffusion scheme on the tensor components. Apart from these, I have seen some work which does median and k-space filtering too, but these methods have not been very popular. Even though all these techniques are effective, none of them has an explicit Rician noise model built into it.
In this section I will describe the Rician bias correction filter that we developed for filtering DT-MRI images. Some key features of this filter: 1. It is a DWI-space filter: it works on the DW images and not on the tensor images. 2. It is based on a maximum a posteriori (MAP) approach to the image reconstruction problem. In statistics, MAP estimation is used to obtain a point estimate of an unobserved quantity based on empirically observed data.
Briefly reviewing MAP-based image reconstruction, any MAP-based reconstruction method has three key components: 1. a prior model, which describes the probability distribution of the unknown image to be reconstructed; 2. a likelihood or noise model, which describes the nature of the noise in the image; and 3. an optimization scheme. The MAP formulation results in an optimization problem which aims to maximize a posterior term based on the empirical data. (The posterior is the probability of the unknown data given some known data.)
This is how we formulate our filtering problem as a MAP optimization. We are given an initial noisy image u0 and we want to estimate a clean image u. We already know that p(u0|u), the probability of a noisy value given a clean value, has a Rician distribution. From Bayes' rule we can write this probability as shown. In this expression, p(u0), the pdf of the noisy image, is fixed and constant. To obtain our filtered image we want to maximize the posterior term p(u|u0).
In practice, we maximize the log of the posterior term. So we apply logarithms to both sides of the previous equation and drop the constant term p(u0), thus replacing the equality with a proportionality, to get the following expression. In this expression, log p(u) is our prior term: it captures some prior knowledge about the filtered image, for example a smoothing criterion. log p(u0|u) is the likelihood term and captures the noise model of the data. The term on the left is the posterior term, which essentially says: what is the probability of the clean image given a noisy image? Our objective is to maximize the posterior term, and to do this we use gradient ascent. For gradient ascent we need to take derivatives.
If we substitute the expression for the Rician distribution into the likelihood term, the expression looks like this. We then take the derivative with respect to the free variable u to get what we call the bias correction term B, or the Rician attachment term.
For the prior term we use the Gibbs prior model, which enforces a smoothing constraint on the filtered image. As you can see, the Gibbs prior is a function of an energy term. This energy enforces a penalty on the image gradient using a conductance function, which allows the image to be smoothed without blurring features/edges. It has been shown that minimizing this energy is equivalent to performing the well-known Perona-Malik anisotropic diffusion on the input image.
If we combine the derivative of the likelihood term with that of the prior (which is the variational derivative of the energy functional), we form an iterative update equation for the image being filtered. From an implementation perspective: 1. This filter can be implemented by modifying a PDE-based filtering scheme like Perona-Malik to add the attachment term after every iteration of the Laplacian update. 2. Since we are filtering a set of 7 or more DWI images, the bias term is calculated for each component of the vector separately and added. 3. The weighting factor lambda acts as an additional tuning parameter which controls the degree of attachment. If we set lambda to zero, the filter becomes a simple vector-valued anisotropic diffusion filter.
In this section I will present the results of the various filtering experiments that were performed. To evaluate the effectiveness of the various filtering techniques, we made a performance comparison. The filters we compared in this study were, in the DWI space, our Rician bias correction filter along with a simple vector-valued anisotropic diffusion filter, and in the tensor space, Pennec's Riemannian-space filter and a simple anisotropic filtering of the tensors. (Here I would like to point out that to perform anisotropic diffusion filtering on tensors, we need to weight the off-diagonal elements by a factor of root 2, to account for the fact that our vector has only 6 components, of which the off-diagonal ones appear twice in the actual 3x3 matrix.)
We used three different error metrics for our performance comparisons: the RMS error in the tensor components, the error in FA, and the error in trace between the filtered and the clean images. We chose these measures because they are the tensor-derived measures widely used for making quantitative calculations in various applications. For example, trace is used in the diagnosis of schizophrenia.
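The three metrics can be computed straightforwardly from stacked tensor fields. A sketch (function name and array layout are my own conventions):

```python
import numpy as np

def tensor_metrics(D_filt, D_clean):
    """The three error metrics between filtered and clean tensor fields,
    given as arrays of shape (..., 3, 3): RMS error in the tensor
    components, mean absolute FA error, mean absolute trace error."""
    rms = np.sqrt(np.mean((D_filt - D_clean) ** 2))

    def fa(D):
        ev = np.linalg.eigvalsh(D)
        m = ev.mean(axis=-1, keepdims=True)
        return np.sqrt(1.5 * ((ev - m) ** 2).sum(-1) / (ev ** 2).sum(-1))

    fa_err = np.mean(np.abs(fa(D_filt) - fa(D_clean)))
    tr = lambda D: np.trace(D, axis1=-2, axis2=-1)
    tr_err = np.mean(np.abs(tr(D_filt) - tr(D_clean)))
    return rms, fa_err, tr_err
```

All three return zero for identical fields, and each isolates a different aspect of the error: overall component fidelity, anisotropy, and mean diffusivity.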
Our first synthetic data set was a cuboidal volume with tensors oriented in two directions. While one group of tensors had major axes which split the gradient directions, the other group was aligned with a gradient. We generated these tensors by first generating the DWIs corresponding to the desired tensor orientations and subsequently estimating the tensors from them. The baseline DWI used in the synthetic data had a signal value of 250, which is close to what is typically found in the white matter of the human brain. We had 6 gradient directions at 45-degree angles in the xy, yz and zx planes. This slide shows the clean image and the noisy image (sigma of around 17, SNR = 15).
These are the results of performing the filtering in the DWI space. On the left is plain anisotropic diffusion filtering; on the right is the Rician bias correction filter. In all the filtering we optimized for the best RMS error in the tensor components, as we felt that was the best measure of overall error. From this image you can see that the Rician filter tends to preserve the trace and FA better, as the tensors on the left seem more smoothed out and faded.
These display the results of filtering in the tensor domain. On the left is filtering in the Euclidean space and on the right is the Riemannian-space tensor filtering. Visually it is difficult to make out any real differences here, but again the tensors seem more smoothed out compared to the Rician filter in the DWI space.
This is a graph of the error in the tensor components using the four different filtering methods. The x axis here is 1/SNR, which means that as we go from left to right noise increases. As we can see, the Rician filter (in blue) gives the best RMS performance among all four.
This is a similar plot but for FA. Again we see that the errors for the Rician filter are the lowest.
Finally, the plot for the errors in trace, which again shows the Rician filter's performance to be better than the others. It is interesting to note that while Euclidean filtering gives the next best performance for trace, for FA the next best is the Riemannian filter. We think that since trace is a linear measure the Euclidean filter performs better on it, while FA, being a nonlinear measure, favours the Riemannian filter.
Our first synthetic data set had just two tensor orientations. Since this is not true of a real data set, we simulated another synthetic data set which captures this variability in direction. We generated a hollow torus with tensors oriented in all possible 3D orientations and filtered it using the same four filtering techniques. The gradient directions and baseline signal value were the same as for the first synthetic data set.
This is our original hollow torus data set. (The torus had a radius of 20 units, an inner tube radius of 7 units and an outer tube radius of 10 units.)
This is the same torus after adding noise with standard deviation 10.
This is the result of performing the Euclidean filtering.
This was the result of the Riemannian-space filtering. The image still looks quite noisy here.
This is the result after performing Anisotropic diffusion which makes the filtered image very regular and smoothed.
And finally the Rician bias correction filter. Even though the filtered image looks noisy, the hue on the tensors is more saturated, indicating that FA is preserved better.
This is the plot of the error in the tensor norm, and again it shows the Rician filter performing better than both the anisotropic DWI and Euclidean-space filtering. The Riemannian-space filtering seems to fare really poorly here, and I think this is because of the need to adjust for negative eigenvalues. Having more variability in direction also hurts its performance, I think.
Apart from those two synthetic data sets, we applied the filtering techniques to a real data set as well. The main problem here was that no ground truth or clean DT-MRI data set was available. Such ground truth data exists for structural MRI (the BrainWeb database) but none for DT-MRI. So in such a situation, how do we evaluate filtering performance? Assuming we have multiple samples of the signal available, one technique would be to average all the samples: the greater the number, the better our estimate. However, such a scheme does not work well, since the noise in the samples is not Gaussian. So we came up with the idea of maximum likelihood estimation. Essentially, if we have multiple samples of the intensity at a voxel, and we know the probability distribution of the noise in these samples, we can maximize the likelihood function as shown. Here x_i is the intensity value of the ith sample, N is the total number of samples and A is the true value of the signal. If we maximize this likelihood term we get an estimate of the true value of the signal. We could use any optimization scheme, like golden section search or Brent's method, to actually do the optimization.
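The ML estimation just described can be sketched as follows, maximizing the summed Rice log-likelihood with SciPy's bounded Brent-style optimizer (the function name, search bounds, and use of the scaled Bessel function `i0e` for numerical stability are my choices):

```python
import numpy as np
from scipy.special import i0e
from scipy.optimize import minimize_scalar

def ml_estimate(samples, sigma):
    """Maximum-likelihood estimate of the true signal A from N Rician-noisy
    samples x_i: maximize sum_i log p(x_i | A, sigma) over A."""
    x = np.asarray(samples, dtype=float)
    s2 = sigma ** 2

    def neg_loglik(A):
        z = x * A / s2
        # log I0(z) = log(i0e(z)) + z : the scaled Bessel avoids overflow
        return -np.sum(np.log(x / s2) - (x**2 + A**2) / (2 * s2)
                       + np.log(i0e(z)) + z)

    res = minimize_scalar(neg_loglik, bounds=(0.0, x.max() + 5 * sigma),
                          method='bounded')
    return res.x

# Averaging Rician samples keeps the positive bias; ML largely removes it.
rng = np.random.default_rng(7)
A, sigma = 50.0, 20.0
x = np.hypot(A + rng.normal(0, sigma, 2000), rng.normal(0, sigma, 2000))
print(x.mean())            # noticeably above 50
print(ml_estimate(x, sigma))
```

This mirrors the comparison on the following slides: the plain average stays biased upward no matter how many samples are taken, while the ML estimate converges to the true value.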
To compare how ML estimation performs against a conventional averaging scheme, we did some experiments. I took a signal with value 100 and added Rician noise to it. I then estimated the true value of the signal from the noisy samples using two different methods: a) averaging all the available samples, and b) using ML to estimate the true value. I repeated this experiment 10000 times and plotted the mean and 95% confidence interval of the estimate for different noise levels. These are the results using a single sample. The x axis here is the noise level, the y axis is the estimated value, and the bars show the standard deviation of the estimate. The red is the averaging and the green the ML estimate. As you can see, as noise increases, both the average estimate and the ML estimate errors increase.
This is the same graph but if we have two samples available...
This is for 4 samples.
This is for 8 samples.
And this is for 16 samples.
And finally 32 samples. Notice that as we increase the number of samples, the confidence intervals for both estimation methods become narrower, but the bias error decreases only for the ML estimate.
For our filtering comparisons on the real data, we used data provided by Dr. Guido Gerig and Dr. Weili Lin from UNC. It consisted of 5 scans of a healthy volunteer at a resolution of 2x2x2 mm on a 3 Tesla scanner. Since the data already had on-scanner averaging, which ruins the Rician distribution, we did not use the ML method for the ground truth; we simply averaged all the data to form a ground truth. However, if we did have more unaveraged data, an ML estimation would give a better ground truth.
This is a coronal slice of the real data that I described.
This is the slice after we add synthetic Rician noise with standard deviation 10.
This is the slice after we ran our Euclidean-space tensor filtering.
This is the result of running the Riemannian space filter.
And now the DWI-space filters: this one is the plain anisotropic diffusion filtering.
And finally, the results of using the Rician bias correction filter. (Notice the darker hues on the Rician-filtered image.)
If we look at the graphs for the error in tensor norm, we see the Rician filter and the aniso-DWI filter perform the best. The Rician filter is better than the aniso filtering; the Riemannian filter's performance is poor.
For FA, again the Rician filter gives better results; however, all the other filtering methods seem to be closely clustered together.
And finally, for the trace, again the Rician filter appears to be slightly better than the others, though not by a huge amount.
So to briefly summarize the performance numbers: 1. The Rician filter does better on RMS error in tensor components for both real and synthetic data. 2. For the real data, the DWI-space filters clearly perform better than filtering in the tensor domain; both aniso-DWI and Rician filtering gave better results. 3. The overall poor performance of the Riemannian filter is because of the need to adjust for negative eigenvalues: the Riemannian filter needs all eigenvalues of its tensors to be positive to start with.
So just to summarize the contributions of this work: a) we demonstrated the effects of the bias introduced by Rician noise on the estimated tensors, and showed how these effects depend on signal level and tensor orientation; b) we formulated a new DWI-space filtering method which explicitly accounts for the Rician bias, based on a MAP approach to image reconstruction; c) we presented a systematic comparison of this new method with other techniques from the literature, using three different error metrics based on common tensor-derived measures.
We showed how a maximum likelihood estimation could be used to generate low noise DWIs from repeated DWI scans and presented a comparison of this technique with a simple averaging technique. Finally we developed a set of tools to filter both DWI and tensor data using several recent filtering techniques along with our own filtering scheme. These could be useful for future experimentation and research.
Finally, future work that could be done: 1. DT-MRI ground truth generation: even though we did a preliminary study of how ML-based estimation could be used to generate low-noise DWIs, we haven't looked at the problem in much detail. Like the BrainWeb database, it would be useful to have a DT-MRI phantom/ground-truth database publicly available to help researchers working on DT-MRI processing. 2. Application-dependent effects: we have not looked at how noise actually affects the diagnostic decisions of clinicians using tensor data in their studies. It would be interesting to study how much noise can actually affect a diagnosis, for example of schizophrenia, which uses the tensor trace. 3. Finally, it would be interesting to see if a Rician model could be incorporated into the tensor estimation process. Current linear/nonlinear estimation processes do not account for the bias while doing tensor estimation.
I would like to end this talk by acknowledging all the people without whose help this work would not have been possible. Firstly my committee: Tom, for being an excellent advisor; Ross and Tolga, for providing valuable input at various points; Gordon, for his help with the Teem toolkit; Josh, for help with ITK; Dr. Guido Gerig and Dr. Weili Lin from UNC, for helping out with the datasets; and all the others who have helped me at various stages of this work. Thank you.
That is it and i will be happy to take questions...
Rician Noise Removal in Diffusion Tensor - MRI <ul><li>Thesis Defense </li></ul><ul><li>Saurav Basu </li></ul><ul><li>School of Computing </li></ul><ul><li>University of Utah </li></ul>
Organization <ul><li>Brief overview of DT MRI </li></ul><ul><li>Goals for this thesis </li></ul><ul><li>Motivation : </li></ul><ul><ul><li>why noise removal ? </li></ul></ul><ul><ul><li>why Rician noise ? </li></ul></ul><ul><ul><li>previous DT-MRI filtering methods </li></ul></ul><ul><li>Rician Bias Correction Filter </li></ul><ul><li>Results and discussion </li></ul><ul><li>Conclusion: summary, future work </li></ul><ul><li>Questions ? </li></ul>
March-April 2006 DT-MRI is the most recent in a series of astonishing breakthroughs in brain imaging
HyperStreamLines used to Visualize White Matter Fibres in the brain
Brief overview of DT-MRI <ul><li>Symmetric </li></ul><ul><li>Positive Definite </li></ul><ul><li>All eigenvalues are positive </li></ul>Diffusion Tensor Imaging technique to compute a 3x3 matrix (D) Characterizes diffusion of water across brain tissue Used to study structure of brain fibres Key: More diffusion along fibres than across fibres
<ul><li>Visualize the eigenvalues of D over the volume to infer connectivity and structure </li></ul>Tensor Orientation: Principal Eigenvector Tensor Anisotropy: directional characteristics
How is the tensor computed? Stejskal-Tanner equation: A_i = A_0 exp(-b g_i^T D g_i). Known: b, g_i. Measured: A_0, A_i. Find D. Most common: linear least squares on ln(A_i / A_0) = -b g_i^T D g_i
Goals for this Thesis: <ul><li>What is the best way to filter DT-MRI data? </li></ul><ul><li>How do current filtering methods compare ? </li></ul><ul><li>Is there a better way of doing filtering? </li></ul>Answer these questions.
Motivation <ul><li>Why is DT - MRI filtering important? </li></ul><ul><li>Why is it important to account for Rician noise in the filtering process? </li></ul>
<ul><li>DT MRI plagued by low SNR </li></ul><ul><ul><li>Multiple Scans needed to increase SNR </li></ul></ul><ul><ul><li>Issues: long acquisition time, patient comfort, system throughput </li></ul></ul><ul><ul><li>Mis-registration issues (motion artifacts) </li></ul></ul><ul><ul><li>Partial voluming (volume averaging): voxel covers a non-homogeneous tissue region </li></ul></ul>Why DT - MRI filtering?
<ul><li>what is Rician noise? how does it arise in DT MRI? </li></ul><ul><li>how does it affect tensors? </li></ul><ul><li>previous filtering methods </li></ul>Why Rician noise removal?
<ul><li>DWI images are magnitudes of complex-valued signals. </li></ul><ul><li>If the real and imaginary components of the signal are assumed to have Gaussian noise, the resulting magnitude image will have Rician distributed noise. </li></ul>Rician noise in DT MRI? A_0 = |A + G(sigma)|, where G(sigma) is zero-mean, stationary Gaussian noise with standard deviation sigma added to the real and imaginary components
Rician Noise A signal is said to be corrupted with Rician noise if the pdf of the noisy signal has a Rice distribution: p(x | A, sigma) = (x / sigma^2) exp(-(x^2 + A^2) / (2 sigma^2)) I_0(x A / sigma^2)
How does Rician noise affect estimated tensors? Previous studies show: noise ↑ ⇒ trace ↓, FA ↑. “However, when we performed Monte Carlo simulations with Rician noise with diffusion tensors characteristic of those in the human brain, we found FA and trace can be incorrectly estimated when tensors are aligned with gradient directions.” Aligned tensors: noise ↑ ⇒ FA ↓, trace ↓
[Figure: tensor splitting, showing a tensor aligned with the gradient direction]
Bottom line: this tells us FA can be overestimated or underestimated depending on how a person sits inside the scanner! It is therefore important to account for Rician noise in the filtering process, as these errors can seriously affect the validity of clinical studies that use FA estimates.
Previous filtering approaches fall into 2 categories. DWI space: 1) Nonlinear smoothing for reduction of systematic errors, Parker (2000). 2) Bayesian regularization using Gaussian Markov random fields, Martin (2004). Tensor space: 1) Riemannian-space filtering, Pennec (2004). 2) Constrained variational approach, Wang and Vemuri (2004). Others: median filtering, k-space (Fourier-domain) methods. Very effective techniques, but they do not explicitly handle Rician noise as part of the filtering process.
<ul><li>DWI-space filter </li></ul><ul><li>Based on the maximum a posteriori (MAP) approach to image reconstruction </li></ul><ul><li>(In statistics, MAP estimation is used to obtain a point estimate of an unobserved quantity based on empirical data.) </li></ul>Rician Bias Correction Filter
MAP Image Reconstruction <ul><li>A Prior Model </li></ul><ul><li>A Likelihood or Noise Model </li></ul><ul><li>Optimization Scheme (maximize posterior) </li></ul>3 Key components
MAP Formulation. Given: noisy image u_0. Output: clean/filtered image u. Known: p(u_0|u) has a Rician distribution. <ul><li>To estimate the clean value we want to maximize p(u|u_0) </li></ul>From Bayes' rule: p(u|u_0) = p(u_0|u) p(u) / p(u_0), where p(u_0) is constant for a given noisy image u_0.
Maximize the log-posterior with gradient ascent: log p(u|u_0) = log p(u_0|u) + log p(u) + const (posterior = likelihood × prior). The prior captures some knowledge about the filtered image, for example by enforcing a smoothing criterion on it. The likelihood captures the noise model on the data. The posterior essentially says: what is the probability of the clean image given that I have a particular noisy image? For gradient ascent we need to take derivatives!
The Likelihood Term: substituting the Rice pdf, log p(u_0|u) = log(u_0/σ²) - (u_0² + u²)/(2σ²) + log I_0(u u_0/σ²). Taking the derivative w.r.t. u gives the Rician attachment term, or bias correction term: d/du log p(u_0|u) = (u_0/σ²) I_1(u u_0/σ²)/I_0(u u_0/σ²) - u/σ².
The Prior Term: we use a Gibbs prior, p(u) = (1/Z) exp(-E(u)), with an energy functional E(u) that enforces smoothness without blurring edges. Its variation yields an edge-preserving smoothing term div(c(|∇u|) ∇u), where c is the conductance weighting factor.
Combining the Rician correction term (the derivative of the likelihood term) with the variation of the energy functional, we get the update equation for the filtered image: u ← u + dt [ div(c(|∇u|) ∇u) + (u_0/σ²) I_1(u u_0/σ²)/I_0(u u_0/σ²) - u/σ² ]. Implementation: modify PDE diffusion filtering to use this modified update term. Note: 1. u is a vector image of 7 or more DWIs. 2. The bias correction term is computed independently for each component of the vector.
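As a rough 2D illustration of this update, here is a gradient-ascent sketch. It assumes a Perona-Malik style conductance and simple explicit time stepping; the parameter names (`lam`, `dt`, `k`, `n_iter`) and the discretization are assumptions for this sketch, not the thesis implementation, which operates on the full vector of DWIs.

```python
import numpy as np
from scipy.special import i0e, i1e

def rician_bias_filter(u0, sigma, lam=1.0, dt=0.1, n_iter=50, k=1.0):
    """Gradient ascent on the log-posterior: edge-preserving smoothing
    (variation of the Gibbs prior energy) plus the Rician attachment term."""
    u = u0.astype(float).copy()
    s2 = sigma * sigma
    for _ in range(n_iter):
        # Edge-preserving smoothing term div(c(|grad u|) grad u), with an
        # assumed Perona-Malik conductance c = exp(-|grad u|^2 / k^2).
        gx = np.gradient(u, axis=0)
        gy = np.gradient(u, axis=1)
        c = np.exp(-(gx ** 2 + gy ** 2) / (k * k))
        smooth = np.gradient(c * gx, axis=0) + np.gradient(c * gy, axis=1)
        # Rician attachment (bias correction) term, the derivative of the
        # log-likelihood; i1e/i0e gives a numerically stable I1/I0 ratio.
        r = u * u0 / s2
        attach = (u0 / s2) * (i1e(r) / i0e(r)) - u / s2
        u = u + dt * (smooth + lam * attach)
    return u

# Demo: a constant image corrupted by Rician noise.
rng = np.random.default_rng(1)
clean = np.full((64, 64), 2.0)
sigma = 1.0
noisy = np.hypot(clean + rng.normal(0, sigma, clean.shape),
                 rng.normal(0, sigma, clean.shape))
filtered = rician_bias_filter(noisy, sigma)
```

On this constant test image the attachment term pulls each pixel toward the bias-corrected signal while the smoothing term suppresses the remaining variance.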
Results: we compared 4 different filtering methods on both synthetic and real data sets. DWI space: 1. Anisotropic diffusion without the Rician attachment term. 2. Rician bias correction filter. Tensor space: 1. Anisotropic diffusion in Euclidean space. 2. Anisotropic diffusion on the Riemannian manifold.
<ul><li>To check whether variability in directions affects results, we generated a torus with tensors oriented in all possible directions. </li></ul><ul><li>Ran the filtering on the torus data set. </li></ul>Synthetic Data Set 2: Hollow Torus
<ul><li>Use repeated scans of the same subject. </li></ul>Real Data Results. Issue: no ground-truth data is available for DT-MRI! How do we evaluate filtering performance quantitatively? Solution, the ML estimate: maximize the likelihood function L(A) = Σ_k log p(x_k|A) over the repeated measurements x_k, where p(x|A) is the Rician pdf (maximized with Brent's or golden-section search).
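A minimal sketch of this ML estimator, assuming scipy's bounded scalar minimizer in place of the exact Brent/golden-section routine used in the thesis (the function names and the search bound are choices made for this sketch):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import i0e

def rice_log_pdf(x, A, sigma):
    """log p(x|A) for the Rice distribution, written with the scaled
    Bessel function i0e(z) = exp(-z) I0(z) for numerical stability."""
    s2 = sigma * sigma
    return np.log(x / s2) - (x - A) ** 2 / (2 * s2) + np.log(i0e(x * A / s2))

def ml_signal(samples, sigma):
    """ML estimate of the underlying signal A from repeated Rician
    measurements of the same voxel, via a bounded 1-D search."""
    samples = np.asarray(samples, dtype=float)
    neg_ll = lambda A: -np.sum(rice_log_pdf(samples, A, sigma))
    hi = samples.max() + 5.0 * sigma
    return minimize_scalar(neg_ll, bounds=(0.0, hi), method="bounded").x

# Demo: the ML estimate corrects the upward bias of plain averaging.
rng = np.random.default_rng(2)
true_A, sigma = 2.0, 1.0
scans = np.hypot(true_A + rng.normal(0, sigma, 2000),
                 rng.normal(0, sigma, 2000))
A_ml = ml_signal(scans, sigma)
```

Because the Rician magnitude mean overshoots the true signal at low SNR, the ML estimate lands closer to the ground truth than the sample mean does, which is the motivation for using it to build reference data.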
ML Estimator versus Averaging for generating Ground truth
<ul><li>5 scans of a healthy volunteer </li></ul><ul><li>Resolution: 2 mm x 2 mm x 2 mm </li></ul><ul><li>3T scanner, scan time 12 mins. </li></ul>Real Data Filtering Results. About the real data: we added Rician noise at SNR levels of 10, 15, and 20 with respect to the white-matter signal level, and ran our filtering methods.
<ul><li>ML method: low noise DWIs </li></ul><ul><li>Filtering tools: </li></ul><ul><ul><li>Rician Bias Correction Filter </li></ul></ul><ul><ul><li>Riemannian Space Tensor Filter </li></ul></ul><ul><ul><li>Anisotropic Diffusion Filter on tensors </li></ul></ul><ul><ul><li>Anisotropic Diffusion Filter on DWIs </li></ul></ul>
<ul><li>DT-MRI ground truth: investigate ML methods </li></ul><ul><li>Noise effects on: </li></ul><ul><ul><li>fiber tractography </li></ul></ul><ul><ul><li>diagnostic decisions </li></ul></ul><ul><li>Rician noise model in tensor estimation </li></ul>Future Work:
Acknowledgments. Advisor: Dr. P. T. Fletcher. Committee: Dr. Ross T. Whitaker, Dr. Tolga Tasdizen. Gordon for help with Deft and teem; Josh for help with ITK. Dr. Guido Gerig and Dr. Wei Lin from UNC for providing the real DT-MRI data. VIPER, NAMIC: funding.