This document summarizes an automated method for localizing fetal organs in magnetic resonance images. The method uses machine learning to sequentially localize the brain, heart, lungs and liver. It first normalizes fetal size based on gestational age, then localizes the brain and restricts the search for the heart to the region between two spheres centered on it; the heart location in turn guides the search for the lungs and liver inside a third sphere. Features incorporate spatial relationships modeled by Gaussian distributions. Classification predicts organ candidates, regression refines their locations, and a spatial optimization selects the final detection by maximizing regression votes together with a Gaussian model of relative organ positions. Training extracts random cube features around labeled voxels to classify organs.
PyData London 2015 - Localising Organs of the Fetus in MRI Data Using Python
1. Automated Organ Localisation
in Fetal Magnetic Resonance Imaging
K. Keraudren1, B. Kainz1, O. Oktay1, M. Kuklisova-Murgasova2,
V. Kyriakopoulou2, C. Malamateniou2, M. Rutherford2,
J. V. Hajnal2 and D. Rueckert1
1 Biomedical Image Analysis Group, Imperial College London
2 Department of Biomedical Engineering, King's College London
PyData London 2015
2. 1) Background:
Fetal Magnetic Resonance Imaging
Python for medical imaging
2) Localising the brain of the fetus
3) Localising the body of the fetus
4. Magnetic Resonance Imaging (MRI)
MRI scanner
Source: Wikimedia Commons
Huge magnet (1.5T)
Safe: no ionising radiation
High quality images
Slow acquisition process
5. Challenges in fetal MRI
1 Fetal motion
2 Arbitrary orientation of the fetus
3 Variability due to fetal growth
6. Fast MRI acquisition methods
MRI data is acquired as stacks of 2D slices
that freeze in-plane motion
but form an incoherent 3D volume.
7. Retrospective motion correction
Orthogonal stacks of
misaligned 2D slices
3D volume
Localising fetal organs can be used to initialise motion correction.
B. Kainz et al., "Fast Volume Reconstruction from Motion Corrupted Stacks of 2D Slices," in IEEE Transactions on Medical Imaging, 2015.
11. Interfacing IRTK through Cython
What is a medical image?
4D volume of voxel data (X, Y, Z, T)
Spatial information: ImageToWorld and WorldToImage
Why the Image Registration Toolkit (IRTK)?
Same backend as my colleagues:
same conventions
same features & bugs
State-of-the-art algorithms for aligning images
github.com/BioMedIA/IRTK
12. Interfacing IRTK through Cython
Solution:
Subclass numpy arrays
Dictionary attribute holding:
dimension, orientation, origin and pixel size
Access IRTK through cython
Additional benefits:
__getitem__ overloaded to update coordinates when
cropping/slicing
Coordinates preserved when resampling/aligning images
conda install -c kevin-keraudren python-irtk
github.com/BioMedIA/python-irtk
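The subclassing idea above can be sketched in a few lines. This is a hypothetical, simplified stand-in for the real python-irtk class: a plain `header` dictionary carries the metadata, and `__array_finalize__` propagates it through views and slices (the real class also updates the origin when cropping, which this sketch omits):

```python
import numpy as np

class Image(np.ndarray):
    """Hypothetical, simplified stand-in for the python-irtk image class."""
    def __new__(cls, data, header=None):
        obj = np.asarray(data).view(cls)
        obj.header = header if header is not None else {}
        return obj

    def __array_finalize__(self, obj):
        # Called for views and slices: carry the spatial metadata along.
        if obj is None:
            return
        self.header = getattr(obj, 'header', {})

img = Image(np.zeros((10, 10, 10)), header={'pixelSize': (1.0, 1.0, 1.0)})
cropped = img[2:8, 2:8, 2:8]   # still an Image, header preserved
```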
15. Machine learning approach to organ localisation
Learning from annotated examples
Generalise from training database to new subjects
Implicitly model variability:
age
pose (articulated body)
maternal tissues
Small dataset limits capacity to model all age categories:
Infer size from gestational age
16. Training data: fetal brain
59 healthy fetuses, 450 stacks
Annotated boxes for the brain
github.com/kevin-keraudren/crop-boxes-3D
17. Training data: full body
30 healthy & 25 IUGR fetuses
Manual segmentations: brain, heart, lungs, liver and kidneys
M. Damodaram et al., "Foetal Volumetry using Magnetic Resonance Imaging in Intrauterine Growth Restriction," in Early Human Development, 2012.
25. Size constraints for brain detection
[Growth charts: occipitofrontal diameter (OFD) and biparietal diameter (BPD) in mm against gestational age (14-39 weeks), median and 5th/95th centiles.]
26. Localisation results for the fetal brain
[Figure: ground-truth vs detected brain boxes.]
Median error: 5.7 mm
>70% of the brain detected: 100% of cases
Complete brain detected: 85% of cases
Size inferred from gestational age
Runtime: <1 min (desktop PC)
31. Localising the body of the fetus
Brain: largest organ, ellipsoidal shape
Lungs & liver: irregular shapes
Motivates a 3D approach despite motion corruption (only coarse localisation)
32. Localising the body of the fetus
1) Size normalisation based on gestational age
2) Sequential localisation of fetal organs
3) Image features steered by the fetal anatomy
33. Size normalisation
[Figure: fetuses at 24, 30 and 38 weeks; plot of heart-brain distance in mm (scanner coordinates) against gestational age (20-40 weeks) for healthy and IUGR fetuses, with radii R1 and R2.]
A single model can be trained across all gestational ages.
34. Size normalisation
[Figure: the same plot with the heart-brain distance in voxels (image grid): after size normalisation the distance is roughly constant across gestational ages.]
A single model can be trained across all gestational ages.
35. Size normalisation
The crown-rump length (CRL), estimated from the gestational age, is used to normalise the size of the fetus.
[Growth chart: crown-rump length in mm against gestational age (12-42 weeks), median and 5th/95th centiles.]
Resampling factor: CRLga / CRL30
36. Sequential search
[Figure: two spheres of radii R1 and R2 centered on the brain.]
The heart lies between two spheres of radii R1 and R2 centered on the brain.
The lungs and liver lie inside a sphere of radius R3 centered on the heart.
R1, R2 and R3 are independent of gestational age thanks to size normalisation.
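Restricting the heart search to the shell between the two spheres can be sketched as a boolean mask over the size-normalised volume. The shape, centre and radii below are made-up voxel values, not the talk's parameters:

```python
import numpy as np

def shell_mask(shape, center, r1, r2):
    # True for voxels whose distance to `center` lies in [r1, r2]
    grid = np.indices(shape).reshape(3, -1).T.astype(float)
    d = np.linalg.norm(grid - np.asarray(center, dtype=float), axis=1)
    return ((d >= r1) & (d <= r2)).reshape(shape)

# Made-up values: a 64^3 size-normalised volume, brain at the centre
mask = shell_mask((64, 64, 64), center=(32, 32, 32), r1=10, r2=20)
```

The same helper with a lower radius of 0 gives the sphere of radius R3 around the heart.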
39. Image descriptor built from rectangle features
[Figure: pairs of randomly offset patches A and B overlaid on the image.]
Each binary test: if µ(IA) > µ(IB) then 1 else 0
Repeating the test for many random pairs yields a binary descriptor, e.g. 1 1 0 1 0 ...
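A minimal sketch of such a descriptor, assuming mean-intensity comparisons between randomly offset cubic patches (the image, patch size and number of tests are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((32, 32, 32))  # stand-in for a 3D MRI patch

def patch_mean(img, corner, size):
    x, y, z = corner
    return img[x:x + size, y:y + size, z:z + size].mean()

def binary_test(img, a, b, size):
    # 1 if region A is on average brighter than region B, else 0
    return int(patch_mean(img, a, size) > patch_mean(img, b, size))

# The descriptor is the vector of such tests at random offsets
corners_a = rng.integers(0, 28, size=(8, 3))
corners_b = rng.integers(0, 28, size=(8, 3))
descriptor = [binary_test(img, a, b, size=4)
              for a, b in zip(corners_a, corners_b)]
```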
44. Integral image for axis-aligned cube features

ii(x, y) = Σ_{x' ≤ x, y' ≤ y} I(x', y')

i_img = np.cumsum(np.cumsum(np.cumsum(img, 2), 1), 0)

[Figure: 2D example with corner lookups 1, 2, 3, 4 and quadrant sums A, B, C, D:
1 = A
2 = A + B
3 = A + C
4 = A + B + C + D
D = 4 - 3 - 2 + 1]

Compute the sum of pixels over an image patch independently of the patch size:
In 2D, 4 table lookups
In 3D, 8 table lookups
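To illustrate the 8-lookup claim in 3D, here is a small sketch: zero-pad the triple cumulative sum at the low end so lookups at index 0 are valid, then apply 3D inclusion-exclusion (function names are mine, not the talk's):

```python
import numpy as np

def integral_image_3d(img):
    ii = np.cumsum(np.cumsum(np.cumsum(img, 2), 1), 0)
    # Zero-pad at the low end so corner lookups at index 0 are valid
    return np.pad(ii, ((1, 0), (1, 0), (1, 0)))

def box_sum(ii, x0, x1, y0, y1, z0, z1):
    # Sum of img[x0:x1, y0:y1, z0:z1] from 8 table lookups
    # (3D inclusion-exclusion: sign flips with each lowered corner).
    return (ii[x1, y1, z1]
            - ii[x0, y1, z1] - ii[x1, y0, z1] - ii[x1, y1, z0]
            + ii[x0, y0, z1] + ii[x0, y1, z0] + ii[x1, y0, z0]
            - ii[x0, y0, z0])

img = np.arange(27.0).reshape(3, 3, 3)
ii = integral_image_3d(img)
```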
52. Steerable features
At training time, 3D features are extracted in a coordinate system (u0, v0) aligned with the fetal anatomy.
53. Steerable features
At test time, 3D features are extracted in a rotated coordinate system: the brain fixes a point, with the plane Pu randomly oriented, while the heart fixes an axis.
55. Classification then regression: heart
Classification
J. Gall and V. Lempitsky, "Class-specific Hough Forests for Object Detection," in CVPR, 2009.
56. Classification then regression: heart
Regression
J. Gall and V. Lempitsky, "Class-specific Hough Forests for Object Detection," in CVPR, 2009.
58. Classification then regression: lungs & liver
Classification
J. Gall and V. Lempitsky, "Class-specific Hough Forests for Object Detection," in CVPR, 2009.
59. Classification then regression: lungs & liver
Regression
J. Gall and V. Lempitsky, "Class-specific Hough Forests for Object Detection," in CVPR, 2009.
60. Spatial optimization of candidate organs
[Figure: sagittal, coronal and transverse planes with brain, heart, left lung, right lung and liver highlighted.]
For each candidate location for the heart, hypotheses are formulated for the position of the lungs & liver.
The final detection is obtained by maximizing:
the regression votes p(x_l)
the relative positions of organs, modeled as Gaussian distributions (x̄_l, Σ_l):

Σ_{l ∈ L} λ p(x_l) + (1 − λ) exp( −(1/2) (x_l − x̄_l)ᵀ Σ_l⁻¹ (x_l − x̄_l) )

where l ∈ L = { heart, left lung, right lung, liver }
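A toy sketch of this objective for one hypothesised configuration (the organ names, λ = 0.5 and the Gaussian parameters below are illustrative stand-ins, not the talk's values):

```python
import numpy as np

def config_score(candidates, votes, means, covs, lam=0.5):
    """Score one hypothesised configuration of organ centres.

    `candidates`, `means`, `covs` map an organ name to its candidate
    position, mean relative position and covariance; `votes` maps it to
    the regression vote p(x_l).
    """
    score = 0.0
    for organ, x in candidates.items():
        d = np.asarray(x, dtype=float) - means[organ]
        prior = np.exp(-0.5 * d @ np.linalg.inv(covs[organ]) @ d)
        score += lam * votes[organ] + (1 - lam) * prior
    return score

# Toy example: two organs, candidates exactly at their mean positions
means = {'heart': np.zeros(3), 'liver': np.array([30.0, 0.0, 0.0])}
covs = {k: 25.0 * np.eye(3) for k in means}
votes = {'heart': 1.0, 'liver': 0.8}
s = config_score({k: means[k] for k in means}, votes, means, covs)
```

The final detection is then the configuration with the highest score over all heart candidates.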
61. Implementation: training

# predefined set of cube features
offsets = np.random.randint( -o_size, o_size+1, size=(n_tests,3) )
sizes = np.random.randint( 0, d_size+1, size=(n_tests,1) )

X = []
Y = []
for l in range(nb_labels):
    pixels = np.argwhere(np.logical_and(narrow_band>0, seg==l))
    pixels = pixels[np.random.randint( 0,
                                       pixels.shape[0],
                                       n_samples )]
    u,v,w = get_orientation_training( pixels, organ_centers )
    x = extract_features( pixels, w, v, u )
    y = seg[pixels[:,0],
            pixels[:,1],
            pixels[:,2]]
    X.extend(x)
    Y.extend(y)

clf = RandomForestClassifier(n_estimators=100) # scikit-learn
clf.fit(X,Y)
65. Implementation: testing

def get_orientation( brain, pixels ):
    u = brain - pixels
    u /= np.linalg.norm( u, axis=1 )[...,np.newaxis]
    # np.random.rand() returns random floats in the interval [0, 1)
    v = 2*np.random.rand( pixels.shape[0], 3 ) - 1
    v -= (v*u).sum(axis=1)[...,np.newaxis]*u
    v /= np.linalg.norm( v, axis=1 )[...,np.newaxis]
    w = np.cross( u, v )
    # u and v are perpendicular unit vectors, so ||w|| = 1
    return u, v, w

[Figure: brain, pixel and the resulting axes u, v, w.]
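As a quick sanity check, the function can be restated self-contained and its output verified: u, v, w should form an orthonormal frame for every pixel (the brain and pixel coordinates below are made up):

```python
import numpy as np

def get_orientation(brain, pixels):
    # u points from each pixel towards the brain
    u = brain - pixels
    u /= np.linalg.norm(u, axis=1)[..., np.newaxis]
    # v: random direction, made orthogonal to u, then normalised
    v = 2 * np.random.rand(pixels.shape[0], 3) - 1
    v -= (v * u).sum(axis=1)[..., np.newaxis] * u
    v /= np.linalg.norm(v, axis=1)[..., np.newaxis]
    w = np.cross(u, v)
    return u, v, w

brain = np.array([50.0, 50.0, 50.0])
pixels = np.array([[10.0, 20.0, 30.0], [60.0, 40.0, 55.0]])
u, v, w = get_orientation(brain, pixels)
```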
72. Implementation: testing

img = irtk.imread(...) # Python interface to IRTK
proba = irtk.zeros(img.get_header(), dtype='float32')
...
pixels = np.argwhere(narrow_band>0)
u,v,w = get_orientation(brain_center, pixels)

# img is 3D so all features cannot fit in memory at once:
# use chunks
for i in xrange(0, pixels.shape[0], chunk_size):
    j = min(i+chunk_size, pixels.shape[0])
    x = extract_features( pixels[i:j], w[i:j], v[i:j], u[i:j] )
    pr = clf_heart.predict_proba(x)
    for dim in xrange(nb_labels):
        proba[dim,
              pixels[i:j,0],
              pixels[i:j,1],
              pixels[i:j,2]] = pr[:,dim]
75. How to reduce the runtime?
Tweak parameters: number of trees, number of features, evaluate every two pixels, ...
Use a Random Forest implementation for sliding windows:

struct SlidingWindow {
    pixeltype* img;
    int shape0, shape1, shape2;
    int x, y, z;
    void set( int _x, int _y, int _z );
    pixeltype mean( int cx, int cy, int cz,
                    int dx, int dy, int dz );
};

template <class PointType, class TestType>
class RandomForest;
79. Conclusion
Automated localisation of fetal organs in MRI using Python:
Brain, heart, lungs & liver
Training one model across all ages and orientations
MSER & SIFT from OpenCV
Image processing from scikit-image & scipy.ndimage
SVM and Random Forest from scikit-learn
And Cython for interfacing with C++