Segmentation-free Full Voxel Matching for 3D Image
Registration
Toshimitsu Suzuki and Ryuichi Oka
Department of Computer Engineering and Science, University of Aizu
Aizu-Wakamatsu City, Fukushima, Japan
Abstract—We propose a new algorithm for 3D registration called
three-dimensional continuous dynamic programming (3DCDP).
3DCDP carries out segmentation-free and optimal matching between
two 3D voxel patterns. The proposed method realizes matching not
only between the surfaces of 3D objects but also between the full
voxels inside them. Matching is carried out between a reference 3D
image, representing a partial object, and an input 3D image that
includes parts similar to the reference image. The reference 3D
image is matched with the corresponding parts of the input 3D image
in a segmentation-free manner; that is, no segmentation of either 3D
pattern is required before applying 3DCDP. We conducted
experiments verifying the matching ability by obtaining voxel
correspondences between two 3D images.
Keywords- 3D registration; segmentation; full voxel matching;
optimal matching; dynamic programming
I. INTRODUCTION
Alignment of 3D data is widely used for registration of
medical data, data mapping in mixed reality, recognition of 3D
data, and so on. We propose a new algorithm for non-linear
alignment between 3D images. The alignment is carried out as
segmentation-free, optimal matching. Conventional methods
[1,2,3] for 3D alignment handle only the surface parts of 3D
objects. Our method conducts full voxel alignment between two
3D images, including their inside parts. Moreover, no
segmentation of the two 3D images is required. A reference
image can be aligned with the inside part of the input 3D image,
as shown in Fig. 1.
II. 3D CONTINUOUS DYNAMIC PROGRAMMING
A. Model of 3D Continuous Dynamic Programming
The matching problem is formulated as follows.
A reference 3D image is represented by g(l,m,n), 1 ≤ l ≤ L, 1 ≤ m ≤ M, 1 ≤ n ≤ N.
An input 3D image is represented by f(i,j,k), 1 ≤ i ≤ I, 1 ≤ j ≤ J, 1 ≤ k ≤ K.
The local distance between voxels of the two images is defined by d(i,j,k,l,m,n) = ||f(i,j,k) - g(l,m,n)||.
Then three mapping functions, from the reference image to the input image, are defined by i = i(l,m,n), j = j(l,m,n), k = k(l,m,n).
The minimum value of the evaluation function is defined for each (i,j,k) ∈ I×J×K by
D(i,j,k) = min over {i(·), j(·), k(·)} of Σ_(l,m,n) d(i(l,m,n), j(l,m,n), k(l,m,n), l, m, n),   (1)
where the minimization is taken over all admissible mappings whose endpoint satisfies (i(L,M,N), j(L,M,N), k(L,M,N)) = (i,j,k).
The value of (1) is obtained by the proposed algorithm,
called three-dimensional continuous dynamic programming
(3DCDP), which is an extended version of continuous dynamic
programming (CDP) [4,5,6].
Figure 1. Segmentation-free and full voxel matching between 3D images
by 3DCDP.
B. 2D Continuous Dynamic Programming
The algorithm of 3DCDP is composed of three two-
dimensional continuous dynamic programming (2DCDP)
algorithms [6]. 2DCDP is itself an expanded version of CDP
and performs full pixel correspondence between two images,
keeping the two-dimensional relative positions between
neighboring pixels. The detailed algorithm of 2DCDP is
described in [6]. The key framework of 2DCDP is described
using the following notation:
Notation of 2DCDP:
f(i,j): input image, g(m,n): reference image,
I: width of the input image, J: height of the input image,
M: width of the reference image,
N: height of the reference image,
d(i,j,m,n) = ||f(i,j) - g(m,n)||: local distance.
The minimum value of the evaluation function of 2DCDP
is formulated, for each (i,j) ∈ I×J, as
D(i,j) = min over {i(·), j(·)} of Σ_(m,n) d(i(m,n), j(m,n), m, n),   (2)
where the minimization is taken over admissible mappings i = i(m,n), j = j(m,n) whose endpoint satisfies (i(M,N), j(M,N)) = (i,j).
The solution algorithm based on dynamic programming for
obtaining the minimum value of the accumulated local
distances is given by
D(i,j,m,n) = d(i,j,m,n) + min over (i',j',m',n') ∈ P(i,j,m,n) of D(i',j',m',n'),   (3)
where P(i,j,m,n) denotes the set of admissible predecessor points determined by the local constraints.
The five variables of (3) are graphically indicated in Fig. 2.
Figure 2. The recursive algorithm of 2DCDP involves five
variables. The computation scheme is a tensor field.
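The structure of a CDP-style recursion, a local distance plus the minimum of the accumulated values over admissible predecessors, with a free starting point and an argmin spotting step, can be illustrated with a toy one-dimensional analogue (this is a simplified sketch for intuition, not the 2DCDP algorithm of [6]; the predecessor set used here is an assumption):

```python
import numpy as np

def cdp_spot_1d(x, r):
    """Toy 1D analogue of continuous DP spotting: find where the
    reference r best matches inside the input x, with a free starting
    point and a monotonic, locally warped alignment.  Illustrative
    sketch only, not the 2DCDP/3DCDP algorithm itself."""
    T, M = len(x), len(r)
    D = np.full((T, M), np.inf)       # accumulated local distances
    for t in range(T):
        for m in range(M):
            d = abs(x[t] - r[m])      # local distance d(t, m)
            if m == 0:
                D[t, m] = d           # free start: matching may begin at any t
                continue
            best = np.inf
            if t >= 1:
                best = min(D[t - 1, m - 1], D[t - 1, m])
            best = min(best, D[t, m - 1])   # assumed predecessor set
            D[t, m] = d + best
    t_star = int(np.argmin(D[:, M - 1]))    # spotting: best end position
    return t_star, float(D[t_star, M - 1])

x = [0.0, 0.1, 5.0, 6.0, 7.0, 0.2, 0.0]
r = [5.0, 6.0, 7.0]
t_star, score = cdp_spot_1d(x, r)
print(t_star, score)  # 4 0.0
```

Because the first reference element may start anywhere in the input, no prior segmentation of x is needed; the exact occurrence of r inside x is found with zero accumulated distance.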
C. Algorithm of 3D Continuous Dynamic Programming
The algorithm of 3DCDP is a combination of three
2DCDPs, as shown in the upper figure of Fig. 3. One of the
three 2DCDPs is indicated in the lower figure of Fig. 3.
Figure 3. The schematic representation of 3DCDP. The upper figure
shows the three composed planes, each of which corresponds to a 2DCDP
algorithm. One of the three 2DCDPs is shown in the lower figure.
The main recursive equation of 3DCDP, for calculating the
minimum value of the accumulated local distances defined by (1), is
given by
D(i,j,k,l,m,n) = d(i,j,k,l,m,n) + min over (i',j',k',l',m',n') ∈ P(i,j,k,l,m,n) of D(i',j',k',l',m',n'),   (4)
where the predecessor set P(i,j,k,l,m,n) is determined by combining
the recursions of the three 2DCDP planes of Fig. 3 under the local
constraints shown in Fig. 4.
Assume that a point (l,m,n) of the reference 3D image
corresponds to a point (i,j,k) of the input 3D image; then the
optimally accumulated local distance is represented by
D(i,j,k,l,m,n). This value is recursively updated by the 3D
dynamic programming of (4). In this process, the local constraints
shown in Fig. 4 are used.
D. Spotting and extraction of full voxel corresponding data
The computation of 3DCDP results in the following value for
each (i,j,k) ∈ I×J×K of the input 3D image:
D(i,j,k) = D(i,j,k,L,M,N).   (6)
Then the spotting point (i*,j*,k*) ∈ I×J×K is determined by
(i*,j*,k*) = argmin over (i,j,k) of D(i,j,k).   (7)
The spotting point is regarded as the best-matching voxel point
of the input 3D image: it yields the minimum accumulated
distance and corresponds to the voxel point (L,M,N) of
the reference 3D image. All of the remaining corresponding
pairs between the reference and input 3D images are then
obtained by back tracing the optimal matching paths
starting from the pair ((i*,j*,k*), (L,M,N)), as shown in Fig. 5.
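The spotting and back-tracing steps can be sketched with a toy one-dimensional analogue: predecessor pointers stored during the recursion are traced back from the spotting point to recover all corresponding pairs (an illustrative analogue of (6), (7), and Fig. 5, not the full 3DCDP procedure; the predecessor set and free starting point are assumptions of this sketch):

```python
import numpy as np

def spot_and_backtrace(x, r):
    """Spotting (argmin over accumulated distance) and back tracing of
    the optimal matching path in a toy 1D setting."""
    T, M = len(x), len(r)
    D = np.full((T, M), np.inf)       # accumulated local distances
    prev = np.full((T, M, 2), -1)     # predecessor pointers for back tracing
    for t in range(T):
        for m in range(M):
            d = abs(x[t] - r[m])
            if m == 0:
                D[t, m] = d           # free starting point
                continue
            cands = [((t, m - 1), D[t, m - 1])]
            if t >= 1:
                cands += [((t - 1, m - 1), D[t - 1, m - 1]),
                          ((t - 1, m), D[t - 1, m])]
            (pt, pm), best = min(cands, key=lambda c: c[1])
            D[t, m] = d + best
            prev[t, m] = (pt, pm)
    t_star = int(np.argmin(D[:, M - 1]))      # spotting point
    path, t, m = [], t_star, M - 1            # back trace from the end
    while True:
        path.append((int(t), int(m)))
        if m == 0:
            break
        t, m = prev[t, m]
    return t_star, path[::-1]

x = [0.0, 0.1, 5.0, 6.0, 7.0, 0.2]
r = [5.0, 6.0, 7.0]
t_star, path = spot_and_backtrace(x, r)
print(t_star, path)  # 4 [(2, 0), (3, 1), (4, 2)]
```

The recovered path gives one (input, reference) index pair per reference element, which is the 1D counterpart of obtaining all voxel correspondence pairs by back tracing from the spotting point.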
Figure 4. A set of local constraints. Each one indicates a
combination of local scaling, ranging from 1/2-times shrinkage to 2-times
expansion, and local rotation, ranging from -45 to +45 degrees.
Figure 5. Back tracing, starting from the spotting point, for obtaining all
corresponding data between the input and reference 3D images.
III. EXPERIMENTAL RESULTS
A. Experiment 1
We used a 3D image of size 30×30×6 voxels, shown in
Fig. 6, as both the reference and the input 3D image when
applying 3DCDP. Theoretically, 100% correspondence
accuracy is expected. However, the experimental accuracy is
less than 100%, as shown in Fig. 7.
Figure 6. A sample 3D image.
Figure 7. Accuracy of voxel correspondence in experiment 1. The
horizontal axis indicates the voxel distance of the error; the vertical axis
indicates the voxel count of the histogram.
B. Experiment 2
The second experiment is the application of 3DCDP for
spotting a reference image in the input image as shown in Fig.8.
Figure 8. Left: the input 3D image (20×20×20); right: the
reference 3D image (15×15×15).
The result is shown in Fig. 9. The upper and middle figures
show two views of slice representations of the input and the
reference 3D images, respectively. The lower figure shows a
slice representation of the two voxel-wise corresponded 3D
images: the left is a part of the input image, and the right is the
reference image. We can see that the reference image is spotted
in the inside part of the input image.
Figure 9. Two views of slice representations of the input (upper) and
reference (middle) 3D images. Corresponded images of input and reference (lower).
IV. CONCLUSION
We proposed a new alignment algorithm for segmentation-
free full voxel matching between two 3D images, including
their inside voxels, and showed two experimental results.
REFERENCES
[1] T. Masuda and K. Ikeuchi, "Registration and deformation of 3D shape
data through parameterized formulation," IPSJ Transactions on
Computer Vision and Image Media, vol. 48, no. SIG9, June 2007.
[2] P. J. Besl and N. D. McKay, "A method for registration of 3-D
shapes," IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 14, no. 2, February 1992.
[3] Q. Jin, Y. Cheng, C. Guo, G. Li, and Y. Sato, "Point to point registration
based on MRI sequences," Proceedings of the 2009 WRI Global
Congress on Intelligent Systems, vol. 3, pp. 381-384, 2009.
[4] R. Oka, "Spotting method for classification of real world data," The
Computer Journal, vol. 41, no. 8, pp. 559-565, 1998.
[5] Y. Yaguchi, K. Iseki, and R. Oka, "Two-dimensional continuous dynamic
programming for spotting recognition of image," Proceedings of MIRU,
pp. 708-714, July 2008.
[6] R. Oka, Y. Yaguchi, and S. Mizoe, "General scheme of continuous dynamic
programming: optimal full pixel matching for spotting image,"
IEICE Technical Report, PRMU2010-87, IBISML2010-59, 2012.
Proceedings of 3DSA2013, P3-4