An overview of multi-modal image registration methods and their application with the Insight Toolkit (ITK)
PyData Triangle
November 1st, 2017
Matthew McCormick, PhD, Kitware, Inc
1. An overview of multi-modal image
registration methods and their application
with the Insight Toolkit (ITK)
PyData Triangle
November 1st, 2017
Matthew McCormick, PhD, Kitware, Inc
2. Lessons in Data Analysis
How do we identify corresponding groups within
the same population but with different features?
3. Outline
1. Image registration: definition
2. Multi-modal challenges
3. Approaches to address the challenges
4. Open science implementation with the Insight
Toolkit (ITK)
4. What is image registration?
Image registration finds the spatial transformation that aligns multiple images.
Avants, B. et al. A Unified Image Registration Framework for ITK. https://doi.org/10.1007/978-3-642-31340-0_28
[Figure: fixed image I and moving image J, with J resampled through an affine transform, J(A(x)), and through a deformable transform, J(𝜙(x))]
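As a compact formalization (a sketch, not taken from the slide itself): registration searches a family of transforms $\mathcal{T}$ for the one that optimizes a similarity metric $M$ between the fixed image $I$ and the transformed moving image,

$\hat{\phi} = \operatorname*{arg\,min}_{\phi \in \mathcal{T}} \; M\bigl(I(x),\, J(\phi(x))\bigr),$

where $\mathcal{T}$ may contain rigid, affine, or deformable transforms and $M$ is a dissimilarity measure such as mean squared difference or negative mutual information.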
5. Why do we want the transformation?
• Compare appearance across modalities
https://www.slicer.org/wiki/Documentation/Nightly/Registration/RegistrationLibrary
[Figure: wood cells before and after exposure to moisture]
• Quantify structural changes before and after
treatment
• Track changes over time
• Register to segmented statistical atlas
6. Multi-modal challenges
Intensities do not correspond
https://www.slicer.org/wiki/Documentation:Nightly:Registration:RegistrationLibrary:RegLib_C02
Structural T1-weighted MRI vs. lesion-highlighting FLAIR MRI
7. Multi-modal challenges
Artifacts do not correspond
T2 MRI vs. MR diffusion tensor imaging (DTI)
https://www.slicer.org/wiki/Documentation:Nightly:Registration:RegistrationLibrary:RegLib_C27
8. Multi-modal challenges
Artifacts do not correspond
https://www.slicer.org/wiki/Documentation:Nightly:Registration:RegistrationLibrary:RegLib_C48
Pre-operative MRI vs. intra-operative CT
9. Approach: Mutual information-
based similarity metric
Roche, Alexis. Recalage d'images médicales par inférence statistique [Medical image registration by statistical inference]. PhD thesis, 2001. https://tel.archives-ouvertes.fr/tel-00636180
Mutual Information:
10. Approach: Mutual information-
based similarity metric
Roche, Alexis. Recalage d'images médicales par inférence statistique [Medical image registration by statistical inference]. PhD thesis, 2001. https://tel.archives-ouvertes.fr/tel-00636180
Mutual Information:
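The metric on this slide and the previous one appears as an image in the original deck; reconstructed from the speaker notes at the end of this transcript, it is

$MI(X; Y) = \sum_{i=1}^{n} \sum_{j=1}^{m} P(x_i, y_j) \log_2 \frac{P(x_i, y_j)}{P(x_i)\,P(y_j)} = \mathrm{H}(X) + \mathrm{H}(Y) - \mathrm{H}(X, Y),$

where X and Y are the intensity distributions of the two images and H denotes entropy.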
11. Approach: Pre-process to reduce
noise
• Improves effectiveness of the matching metric
• Use knowledge of target object and
characteristics of the imaging system
Unfiltered and gradient anisotropic diffusion filtered
Mirebeau, J.M., et al. "Anisotropic Diffusion in ITK." https://arxiv.org/pdf/1503.00992.pdf
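A minimal sketch of this kind of edge-preserving pre-filtering, using SimpleITK (a simplified Python interface to ITK); the file names and parameter values are illustrative, not taken from the talk:

import SimpleITK as sitk

# Anisotropic diffusion requires a real-valued pixel type.
image = sitk.ReadImage("input.nii.gz", sitk.sitkFloat32)

# Smooth within homogeneous regions while limiting diffusion across strong
# gradients, so the edges that drive the matching metric are preserved.
smoothed = sitk.GradientAnisotropicDiffusion(
    image,
    timeStep=0.0625,           # stability limit for 3D images
    conductanceParameter=2.0,  # lower values preserve edges more strongly
    numberOfIterations=5,
)

sitk.WriteImage(smoothed, "smoothed.nii.gz")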
12. Approach: Registration of
segmented structures
Segmentation: delineate the object of interest from
an image
Shusil Dangi et al. iCSPlan: https://blog.kitware.com/kitware-fuels-pediatric-surgery-planning-project-with-1-5-million-award/
13. Approach: Registration of
segmented structures
Distance map sum of squares distance metric
Before Registration After Registration
Shusil Dangi et al. iCSPlan: https://blog.kitware.com/kitware-fuels-pediatric-surgery-planning-project-with-1-5-million-award/
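A sketch of this approach with SimpleITK: turn each segmentation into a signed distance map, then minimize the sum of squared differences between the maps. Names and parameter values are illustrative, not from the iCSPlan project:

import SimpleITK as sitk

# Binary segmentations of the same structure in the two images.
fixed_mask = sitk.ReadImage("fixed_label.nii.gz", sitk.sitkUInt8)
moving_mask = sitk.ReadImage("moving_label.nii.gz", sitk.sitkUInt8)

# Signed distance maps: each voxel holds its distance to the segmented surface.
fixed_dist = sitk.SignedMaurerDistanceMap(fixed_mask, squaredDistance=False, useImageSpacing=True)
moving_dist = sitk.SignedMaurerDistanceMap(moving_mask, squaredDistance=False, useImageSpacing=True)

# Sum of squares on the distance maps turns a multi-modal problem into a
# mono-modal one, because both inputs now live in the same (distance) units.
registration = sitk.ImageRegistrationMethod()
registration.SetMetricAsMeanSquares()
registration.SetInterpolator(sitk.sitkLinear)
registration.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200
)
registration.SetOptimizerScalesFromPhysicalShift()
registration.SetInitialTransform(
    sitk.CenteredTransformInitializer(fixed_dist, moving_dist, sitk.Euler3DTransform(),
                                      sitk.CenteredTransformInitializerFilter.GEOMETRY),
    inPlace=False,
)
transform = registration.Execute(fixed_dist, moving_dist)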
14. Approach: Registration of feature
points
Landmark registration
Kim, R., Johnson, J., Williams, N. "Affine Transformation for Landmark Based Registration Initializer in ITK."
http://hdl.handle.net/10380/3299
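A minimal illustration of landmark-based initialization with SimpleITK; the point coordinates are made up for the example:

import SimpleITK as sitk

# Corresponding feature points picked in the fixed and moving images,
# flattened as x1, y1, z1, x2, y2, z2, ... in physical coordinates.
fixed_points = [10.0, 12.5, 30.0,  42.0, 18.0, 31.5,  25.0, 40.0, 28.0,  33.0, 22.0, 45.0]
moving_points = [11.2, 13.0, 29.0,  43.5, 18.2, 30.0,  26.1, 41.0, 27.2,  34.4, 22.5, 44.1]

# Fit an affine transform mapping the moving landmarks onto the fixed landmarks;
# this is typically used to initialize a subsequent intensity-based registration.
initial_transform = sitk.LandmarkBasedTransformInitializer(
    sitk.AffineTransform(3), fixed_points, moving_points
)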
15. Open Source: The Insight Toolkit
(ITK)
• The Insight Segmentation and Registration
Toolkit (ITK) is an open-source, freely available,
cross-platform library for N-dimensional image
analysis
• Extensive suite of algorithms for processing,
registering, segmenting, analyzing, and
quantifying scientific data.
• https://itk.org/
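As a quick orientation (not part of the slide): ITK's Python packages are distributed on PyPI, and with a recent release a minimal session might look like the following; the file names and the choice of filter are only illustrative.

# pip install itk
import itk

image = itk.imread("input.nii.gz")
smoothed = itk.median_image_filter(image, radius=2)
itk.imwrite(smoothed, "denoised.nii.gz")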
20. Approach: Mutual information-
based similarity metric
Ibanez, McCormick, Johnson et al. The ITK Software Guide. 2017. https://itk.org/ITKSoftwareGuide/html/
Mutual Information
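A sketch of mutual-information-driven, multi-modal rigid registration using SimpleITK's wrapping of the ITKv4 registration framework; the images, transform type, and parameter values are illustrative, not the exact pipeline from the Software Guide:

import SimpleITK as sitk

fixed = sitk.ReadImage("fixed_t1.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("moving_flair.nii.gz", sitk.sitkFloat32)

registration = sitk.ImageRegistrationMethod()

# Mattes mutual information compares intensity distributions rather than raw
# intensities, so it tolerates the non-linear, non-monotonic relationship
# between modalities.
registration.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration.SetMetricSamplingStrategy(registration.RANDOM)
registration.SetMetricSamplingPercentage(0.1)
registration.SetInterpolator(sitk.sitkLinear)

registration.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
registration.SetOptimizerScalesFromPhysicalShift()

registration.SetInitialTransform(
    sitk.CenteredTransformInitializer(fixed, moving, sitk.Euler3DTransform(),
                                      sitk.CenteredTransformInitializerFilter.GEOMETRY),
    inPlace=False,
)

transform = registration.Execute(fixed, moving)

# Resample the moving image onto the fixed image grid with the recovered transform.
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)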
21. Approach: Multi-scale registration
• Multi-stage, multi-scale for robustness
• Downsampling without aliasing
Ibanez, McCormick, Johnson et al. The ITK Software Guide. 2017. https://itk.org/ITKSoftwareGuide/html/
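Building on the previous sketch, the coarse-to-fine schedule described on this slide might be configured as follows; the smoothing applied before each shrink is what avoids aliasing when downsampling (the levels and sigmas are illustrative):

# Three levels: quarter resolution, half resolution, full resolution.
registration.SetShrinkFactorsPerLevel(shrinkFactors=[4, 2, 1])
registration.SetSmoothingSigmasPerLevel(smoothingSigmas=[2.0, 1.0, 0.0])
registration.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()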
23. Approach: Registration of feature
points
Select feature points that correspond
Liu et al. "An ITK implementation of a physics-based non-rigid registration method for brain deformation in image-guided surgery."
https://doi.org/10.3389/fninf.2014.00033
Editor's Notes
The outline for today's talk is:
First, let's review what registration is.
Next, we will discuss registration challenges with multi-modal and multi-length scale images.
We will follow the brief introduction of the challenges with an overview of a number of approaches that can be taken to address these challenges.
Finally, we will introduce high quality, open source tools that allow you to apply these approaches in your research.
What is image registration?
Given two images, image registration finds the spatial transformation that aligns them.
Here we have two car images. We can first find an affine transformation that aligns the structures in the cars. We can also find a more complex deformable transformation that aligns the cars.
Notice that registration for real-world problems is challenging because the problem is ill-posed. Why? Noise, artifacts, and structural differences cause ambiguity in correspondence.
So why do we want to perform registration -- why do we want to find the spatial transformation?
There are many situations where registration is critical for quantified research.
It allows us to compare appearance across modalities. For example, we can compare a structural and a functional image.
We can track changes over time. For example, we can quantify motion across the diaphragm.
We can quantify structural changes before and after treatment. For instance, we can quantify the strain that occurs in wood cells after they have been exposed to moisture.
We can also register a sample to a segmented statistical atlas. The atlas allows us to identify labels for structures and to compare sizes and image intensities against a standard.
Now, when we consider multi-modal imaging in general, what are challenges for registration that we encounter?
The first and most obvious challenge is that intensities in general do not correspond.
Regions with high intensity in one modality may have low intensities in the other modality, and regions with moderate intensity in one modality may have extreme intensities in the other modality. In general the intensity relationship is non-linear and non-monotonic.
Another challenge particular to multi-modality imaging is that artifacts do not even correspond.
Artifacts pose an issue for registration, but they are more disruptive when they are not consistent across modalities.
Here, we see that the skull in this brain image is absent from the image on the right, while it has at least some signal in the image on the left. There are also regions of high intensity on the right that are not present on the left.
Another example where artifacts do not correspond: the image on the left has high-intensity artifacts, and the image on the right has its own motion-related artifacts.
Entropy is a measure of the unpredictability of a state or, equivalently, of its average information content.
If log base 2 is used, the units of mutual information are bits. Intuitively, mutual information measures the information that X and Y share: it measures how much knowing one of these variables reduces uncertainty about the other. For example, if X and Y are independent, then knowing X does not give any information about Y and vice versa, so their mutual information is zero. At the other extreme, if X is a deterministic function of Y and Y is a deterministic function of X, then all information conveyed by X is shared with Y: knowing X determines the value of Y and vice versa. As a result, in this case the mutual information is the same as the uncertainty contained in Y (or X) alone, namely the entropy of Y (or X). Moreover, this mutual information is the same as the entropy of X and as the entropy of Y. (A very special case of this is when X and Y are the same random variable.) Mutual information is a measure of the inherent dependence expressed in the joint distribution of X and Y relative to their joint distribution under the assumption of independence. It therefore measures dependence in the following sense: I(X; Y) = 0 if and only if X and Y are independent random variables. This is easy to see in one direction: if X and Y are independent, then p(x, y) = p(x) p(y), and therefore $\log\left(\frac{p(x,y)}{p(x)\,p(y)}\right) = \log 1 = 0$. Moreover, mutual information is nonnegative (I(X; Y) ≥ 0) and symmetric (I(X; Y) = I(Y; X)).
How do we identify groups from the same population with different features?
Marginal distribution: $\mathrm{P}(X)$
Entropy: $\mathrm{H}(X) = -\sum_{i=1}^{n} \mathrm{P}(x_i) \log_2 \mathrm{P}(x_i)$
Joint distribution: $\mathrm{P}(X, Y)$
Mutual information: $MI(X; Y) = \sum_{i=1}^{n} \sum_{j=1}^{m} P(x_i, y_j) \log_2 \left( \frac{P(x_i, y_j)}{P(x_i)\,P(y_j)} \right) = \mathrm{H}(X) + \mathrm{H}(Y) - \mathrm{H}(X, Y)$
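A small, self-contained illustration of these definitions (not from the talk): estimating mutual information in bits from a joint histogram with NumPy.

import numpy as np

def mutual_information(x, y, bins=32):
    # Joint histogram -> joint probability P(X, Y).
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal P(X)
    py = pxy.sum(axis=0, keepdims=True)   # marginal P(Y)
    nonzero = pxy > 0                     # avoid log(0)
    return np.sum(pxy[nonzero] * np.log2(pxy[nonzero] / (px @ py)[nonzero]))

a = np.random.rand(64, 64)
print(mutual_information(a, a))                       # high: an image shares all information with itself
print(mutual_information(a, np.random.rand(64, 64)))  # near zero: independent noise shares almost none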