HIGH PERFORMANCE MUTUAL INFORMATION
FOR MEDICAL IMAGE REGISTRATION
B. Angu Ramesh, PG Scholar,
Department of ECE
St. Xavier’s Catholic College of Engineering
Nagercoil, TamilNadu, INDIA-629003
anguramesh@yahoo.in
Abstract:-
Multimodal image registration is a class
of algorithms that finds correspondence between
images from different modalities, which do not
exhibit the same characteristics; finding accurate
correspondence within acceptable computational
time remains a challenge. To deal with this, mutual
information (MI)-based registration has been a
preferred choice. However, MI has some
limitations. First, MI-based registration often fails
when there are local intensity variations in the
volumes. Second, MI considers only the statistical
intensity relationships between the two volumes
and ignores the spatial and geometric information
of the pixels. This work addresses these limitations
by incorporating spatial and geometric information
via the Harris operator. In particular, the method
focuses on registration between a high-resolution
image and a low-resolution image (MRI/CT). The
MI cost function is computed in regions with large
spatial variation, such as corners, using geometric
information derived from the Harris operator
through the local autocorrelation function. The
robustness and accuracy of the proposed method
are demonstrated through experiments on synthetic
and clinical data. The method runs on a GeForce
GTX 580 graphics processing unit (GPU) using the
compute unified device architecture (CUDA);
exploitation of on-chip memory increases the
parallel execution efficiency from 4% to 46%. The
proposed method provides accurate registration
and yields better performance than standard
registration methods.
Index terms:-
Image Registration, Harris Operator,
Graphics Processing Unit (GPU)
C. John Moses M.E., (PhD), Assistant Professor
Department of ECE
St. Xavier’s Catholic College of Engineering
Nagercoil, TamilNadu, INDIA-629003
erjohnmoses@gmail.com
I. INTRODUCTION
Image processing methods, which are
possibly able to visualize objects inside the human
body, are of special interest. Advances in computer
science have led to reliable and efficient image
processing methods useful in medical diagnosis,
treatment planning and medical research. In clinical
diagnosis using medical images, integration of
useful data obtained from separate images is often
desired. The images need to be geometrically
aligned for better observation. This procedure of
mapping points from one image to corresponding
points in another image is called Image Registration.
It has a wide range of uses, but is mainly applied in
radiological imaging. The images might be acquired
with different sensors, or with the same sensor at
different times. Image registration may be
categorized depending on the application: by
modality as single- or multi-modal, and by
dimensionality as 2D/2D, 2D/3D, or 3D/3D.
II. MAJOR CONSIDERATION
A. Spatial Domain:
These methods match intensity patterns or
features in the images; an operator chooses
corresponding control points in the images [1] and
warps one image such that functionally homologous
regions from different subjects are brought as close
together as possible. The advantage is that the
algorithm simultaneously minimizes the mean
squared difference between the template and the
source image. The disadvantages are that there is no
exact match between structure and function, there
may not be enough information in the images, and
the method is computationally expensive, requiring
challenging high-dimensional optimization.
B. Frequency Domain:
These methods find the transformation
parameters for registration, such as translation,
rotation, and scaling. Applying the phase correlation
method to a pair of images produces a third image
containing a single peak; the location of the peak
corresponds to the relative translation between the
two images [1]. Correlation and geometric
projection techniques are used to extract the
rotational and translational parameters. The
advantages are that no initial estimate and no
feature matching are required. The disadvantage is
that many transforms, such as the FFT and
correlation of image histograms, are required to
achieve a result.
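The phase correlation step described above can be sketched as follows. This is a minimal NumPy illustration, not code from the paper, and it assumes periodic (circular) shifts between the two images:

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the (row, col) translation of image b relative to image a.

    The normalized cross-power spectrum is inverse-transformed; the
    resulting correlation surface contains a single sharp peak whose
    location gives the relative translation.
    """
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    cross_power = np.conj(A) * B
    cross_power /= np.abs(cross_power) + 1e-12  # normalize; avoid divide-by-zero
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peak positions beyond half the image size wrap around to negative shifts.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)
```

For example, an image circularly shifted by (5, −3) pixels is recovered exactly from the peak location, illustrating why no initial estimate is needed.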
C. Intensity Based Method:
These methods compare intensity patterns in
images via correlation metrics and a cost function.
They can register either whole images or sub-images,
mapping certain pixels in each image to the same
location based on relative intensity patterns [9]. The
advantage of this approach is that it eliminates
feature extraction; the disadvantage is that it operates
without attempting to detect salient objects.
D. Feature Based Method:
These methods find the correspondence
between image features such as points, lines, and
contours. Control points may be the points
themselves, the endpoints or centres of line features,
the centres of gravity of regions, etc. The advantages
are fast computing algorithms and no need for an
initial guess [9]. The disadvantages are that they can
handle only global motion models, and they tend to
give the best results only for satellite images.
E. Transformation Models:
These models relate the target image space
to the reference image space. There are two classes:
rigid (linear) and non-rigid (elastic) transformations.
A rigid transformation [1] is characterized
by its degrees of freedom, namely translation and
rotation. For 2D-to-2D registration there are three
degrees of freedom (two translations and one
rotation); for 3D-to-3D registration there are six
(three translations and three rotations). It preserves
all distances, the straightness of lines, and all
non-zero angles between straight lines, as shown in
figure 1.3. The advantage of this model is that it is
simple to specify; the disadvantage is that it can
correct only rotational and translational differences.
A non-rigid transformation [1] is capable of
locally warping the target image to align with the
reference image, and many more degrees of freedom
are required. The two general categories are
intersubject and intrasubject registration, which
apply not only to non-rigid anatomy but also to rigid
anatomy. Here distances change but lines remain
straight, as shown in figure 1.4. The advantage of
this model lies in the fact that the feature matching
and mapping function design steps of registration
are done simultaneously.
III. SIMILARITY MEASURE
Similarity measurement is a process that
quantifies the degree of similarity between intensity
patterns in two images; it depends on the modality
of the images to be registered. Image similarity
measures include cross-correlation, sum of squared
intensity differences, and ratio image uniformity,
which are commonly used for same-modality
registration, and mutual information (MI),
normalized MI, and entropy cross-correlation,
which are used for multi-modality registration. The
goal is to maximize the similarity between the two
images while maintaining smoothness. Mutual
information is the most popular image similarity
measure for registration of multimodality images.
Generally, similarity measures can be
classified into two categories [9]: feature based and
intensity based. Feature-based methods are
computationally efficient, but they rely on manual
intervention, which is often required to improve
accuracy. Intensity-based methods are more accurate
than feature-based ones, but they totally ignore the
spatial relationships required to achieve successful
registration, and MI alone cannot fix this problem.
IV. PROPOSED METHOD
The proposed method is based on a
methodology of computing MI that incorporates
spatial and geometric information by splitting the
image into a set of non-overlapping regions using
the Harris operator through the local autocorrelation
function, in order to perform registration on
spatially meaningful regions.
Figure: 1 Incorporating the Harris operator with MI (moving image and target image → Harris operator → mutual information → transformation → optimization)
A. Harris Operator:
The Harris operator [9] is a corner detector
based on interest-point detection. A point can be
recognized easily by looking at the intensity values
within an image, and the change of appearance in
the neighbourhood of a pixel can be observed;
because the feature is extracted with respect to the
intensity pattern, there is the possibility of
incorporating both spatial and geometric
information.
In computer vision we usually need to find
matching points between different frames of an
environment. Once we know how two images relate
to each other, we can use both images to extract
information from them. When we say matching
points we are referring, in a general sense, to
characteristics in the scene that we can recognize
easily. We call these characteristics features, such as
flat regions,
edges, and corners (interest points), which are
shown in figure 3.2 and figure 3.3. Corners are more
special than the other two because, being the
intersection of two edges, a corner represents a point
at which the directions of those two edges change.
Hence the gradient of the image (in both directions)
has a high variation there, which can be used to
detect it.
Since corners represent a variation in the
gradient of the image, we look for this “variation”.
Consider a grayscale image I. We sweep a window
w(x, y) (with displacement Δx in the x direction and
Δy in the y direction) over the image I and calculate
the variation of intensity.
(1). Auto Correlation Function:

$$Q(x,y) = \begin{bmatrix} \sum_{W} I_x(x,y)^2 & \sum_{W} I_x(x,y)\,I_y(x,y) \\ \sum_{W} I_x(x,y)\,I_y(x,y) & \sum_{W} I_y(x,y)^2 \end{bmatrix} = \begin{bmatrix} A & B \\ B & C \end{bmatrix} \qquad (1)$$

Here W is the window function (3 × 3) and Ix, Iy are the partial derivatives of the image I(x, y) along the row and column directions.

$$c(x, y, \Delta x, \Delta y) = \begin{bmatrix} \Delta x & \Delta y \end{bmatrix} Q(x,y) \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} \qquad (2)$$

(Δx, Δy) is the shifted (neighbourhood) displacement of the pixel (x, y).
(2). Harris Operator:

$$\lambda_1 \lambda_2 = \det Q(x,y) = AC - B^2; \qquad \lambda_1 + \lambda_2 = \operatorname{trace}\, Q(x,y) = A + C$$

$$H = \lambda_1 \lambda_2 - 0.04\,(\lambda_1 + \lambda_2)^2 \qquad (3)$$

Here det is the matrix determinant of Q, trace is the sum of its diagonal elements, and λ1, λ2 are the curvatures (eigenvalues of the intensity structure) in the x and y directions.
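As an illustration of equations (1)–(3), the following minimal NumPy sketch (not code from the paper) computes the Harris response map using central-difference gradients and a 3 × 3 box window; the choice of gradient operator is an assumption:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response H = det(Q) - k * trace(Q)^2 per pixel,
    following equations (1)-(3): Q is built from products of the image
    gradients summed over a 3x3 window W."""
    img = img.astype(float)
    # Central-difference gradients: axis 0 (rows) is y, axis 1 (cols) is x.
    Iy, Ix = np.gradient(img)
    # Products of derivatives.
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    # Sum each product over the 3x3 window W (edge-padded box filter).
    def box3(m):
        p = np.pad(m, 1, mode='edge')
        return sum(p[i:i + m.shape[0], j:j + m.shape[1]]
                   for i in range(3) for j in range(3))
    A, C, B = box3(Ixx), box3(Iyy), box3(Ixy)
    # H = lambda1*lambda2 - k*(lambda1 + lambda2)^2 = det(Q) - k*trace(Q)^2
    return A * C - B * B - k * (A + C) ** 2
```

On a white square against a black background, the response is strongly positive at the square's corners, negative along the edges, and zero in flat regions, matching the eigenvalue interpretation given in the results section.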
Figure: 2 Overall block diagram of the Harris operator

Some test images and medical images were
processed; the x-direction term A(x,y), the
y-direction term C(x,y), the diagonal term B(x,y),
and the corner-detected images with their
corresponding input image I(x,y) are shown in
figure 3.
Figure: 3 Example of a test image (a) and a retinal image (e) processed
by the autocorrelation function for corner detection through the
Harris operator. (b) and (f) are the outputs of A(x,y), (c) and (g)
the outputs of C(x,y), and (d) and (h) the outputs of B(x,y)
B. Mutual Information:
Mutual information [2] is a quantity that
measures the relationship between two random
variables that are sampled simultaneously. In
particular, it measures how much of the information
in the target image is shared with, or similar to, the
reference image.
Mutual information considers both the joint
entropy H(A,B) and the individual entropies H(A)
and H(B).
(1). Single entropy:

$$H(A) = -\sum_{a} P_A(a) \log_{10} P_A(a) \qquad (4)$$

(2). Joint entropy:

$$H(A,B) = -\sum_{a,b} P_{AB}(a,b) \log_{10} P_{AB}(a,b) \qquad (5)$$

(3). Mutual information:

$$I(A,B) = H(A) + H(B) - H(A,B) \qquad (6)$$
Here the single- and joint-entropy equations are
replaced by the Harris operator equation: for the
reference image, H(A) is equated with H as given in
equation (3), and the same is done for the target
image H(B).
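Equations (4)–(6) can be illustrated with a short NumPy sketch (not code from the paper); the probabilities are estimated from a joint intensity histogram, and the bin count of 32 is an arbitrary choice:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information I(A,B) = H(A) + H(B) - H(A,B), equations (4)-(6),
    estimated from a joint intensity histogram of the two images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = hist / hist.sum()            # joint probability P_AB(a, b)
    p_a = p_ab.sum(axis=1)              # marginal P_A(a)
    p_b = p_ab.sum(axis=0)              # marginal P_B(b)
    def entropy(p):
        p = p[p > 0]                    # 0 * log 0 is taken as 0
        return -np.sum(p * np.log10(p)) # log base 10, as in eq. (4)-(5)
    return entropy(p_a) + entropy(p_b) - entropy(p_ab.ravel())
```

An image compared against itself yields maximal MI, while two unrelated images yield a value near zero, which is why maximizing MI drives the alignment.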
C. B-Spline Transformation:
Figure: 4 2-D example of the B-spline deformation model: (a)
mesh of control points with uniform spacing δ placed over the
image domain and (b) 4δ × 4δ neighbourhood domain Di,j
affected by control point φi,j .
As shown in Fig. 4(a), a B-spline free-form
deformation represents a non-rigid transformation
by manipulating a mesh of control points overlaid on
the image domain Ω. Let Φ be a 3-D mesh of control
points φi,j,k ∈ Φ, and let δ be the initial distance
between the control points. A non-rigid
transformation T of any voxel (x, y, z) ∈ Ω is then
calculated by its surrounding 4 × 4 × 4
neighbourhood of control points as follows:
$$T(x, y, z) = \sum_{l=0}^{3} B_l(u)\,\varphi_{i+l} \qquad (7)$$

$$\varphi_{i+l} = \sum_{m=0}^{3} \sum_{n=0}^{3} B_m(v)\,B_n(w)\,\varphi_{i+l,\,j+m,\,k+n} \qquad (8)$$

where Bl (0 ≤ l ≤ 3) represents the l-th basis
function of cubic B-splines [3], i = ⌊x/δ⌋ − 1,
j = ⌊y/δ⌋ − 1, k = ⌊z/δ⌋ − 1, u = x/δ − ⌊x/δ⌋,
v = y/δ − ⌊y/δ⌋, and w = z/δ − ⌊z/δ⌋. In other
words, B-spline deformations are locally controlled
because each control point φi,j,k affects only its
4δ × 4δ × 4δ neighbourhood subdomain Di,j,k, as
shown in Fig. 4(b).
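The cubic B-spline basis functions Bl and the deformation of equation (7) can be sketched in one dimension as follows; this is an illustrative reduction of the 3-D formulas, not code from the paper:

```python
def bspline_basis(u):
    """Cubic B-spline basis functions B_0..B_3 used in equations (7)-(8),
    evaluated at the relative position u in [0, 1)."""
    return (
        (1 - u) ** 3 / 6.0,
        (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
        (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
        u ** 3 / 6.0,
    )

def deform_1d(x, phi, delta):
    """1-D analogue of equation (7): transform coordinate x using the
    control-point array phi with uniform spacing delta."""
    i = int(x // delta) - 1                 # index of first relevant control point
    u = x / delta - x // delta              # relative position within the cell
    B = bspline_basis(u)
    return sum(B[l] * phi[i + l] for l in range(4))
```

Because the four basis functions sum to 1 at every u, a mesh of identical control-point values reproduces that value exactly; moving one control point perturbs only the 4δ neighbourhood around it, which is the local-control property described above.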
Note that the value of φ𝑖+𝑙 is identical for
all voxels in one row located within the same cell of
the mesh Φ [6]. Because such voxels have the same
coordinates y and z, they are transformed according
to the same indexes j and k (i.e., the same control
points) and the same relative positions v and w
within the cell (i.e., the same coefficients).
Consequently, Rohlfing and Maurer [6] reduced the
amount of computation by reusing (8) between
voxels.
D. Optimization:
Rueckert’s algorithm [7] employs steepest
descent optimization to find the optimal
transformation parameters Φ that minimize the cost
function C. To estimate the gradient vector ∇C =
∂C/∂Φ with respect to the transformation parameters
Φ, the algorithm computes a local gradient
∂C/∂φi,j,k for each control point φi,j,k ∈ Φ by using
the finite-difference approximation. Because the
deformation of φ𝑖,𝑗,𝑘 affects only its neighbourhood
domain Di,j,k, a pre-computation technique [8] is
useful to accelerate this gradient computation. That
is, a joint histogram of unaffected region Ω − D𝑖,𝑗,𝑘
is computed in advance and is then compounded
with a local-joint histogram of the affected region
D𝑖,𝑗,𝑘 for each control point displacement. This pre-
computation technique reduces the computational
requirement to 1/6 because joint histograms are
computed for six displacements (±x, ±y, ±z) per
control point.
The optimization [2] procedure mentioned
previously is accelerated with a multi resolution
representation that organizes both the images and
the control point mesh in a hierarchy. The image
resolution γ and the control point spacing δ are then
progressively refined at each level of the hierarchy.
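The finite-difference gradient step of the steepest-descent optimization described above can be sketched as follows. This is a minimal Python illustration with a generic cost function; it omits the pre-computation and multi-resolution techniques of [8] and [2]:

```python
import numpy as np

def finite_difference_gradient(cost, phi, h=1e-4):
    """Approximate dC/dphi for each control-point parameter by central
    differences, as in Rueckert's steepest-descent optimization."""
    grad = np.zeros_like(phi)
    for idx in np.ndindex(phi.shape):
        phi[idx] += h
        c_plus = cost(phi)
        phi[idx] -= 2 * h
        c_minus = cost(phi)
        phi[idx] += h                       # restore the original value
        grad[idx] = (c_plus - c_minus) / (2 * h)
    return grad

def steepest_descent(cost, phi, step=0.1, iters=100):
    """Repeatedly move the control points against the gradient."""
    for _ in range(iters):
        phi = phi - step * finite_difference_gradient(cost, phi)
    return phi
```

Each parameter requires two cost evaluations here; the pre-computation technique cited above reduces this burden by reusing the joint histogram of the unaffected region for each displacement.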
E. Compute Unified Device Architecture:
In general, CUDA programs [5] consist of
host code and device code, which run on a CPU and
a GPU, respectively. The host code typically
invokes the device code on the GPU to accelerate the
time consuming part of the application. The device
code can be implemented as a function called kernel.
The GPU executes a kernel with tens of thousands
of CUDA threads to achieve acceleration by
exploiting the data parallelism in the application.
These threads compose a series of thread blocks to
adapt their organization to the hierarchical processor
architecture [4] deployed for the GPU. A thread
block is then partitioned into a series of warps. A
warp contains 32 threads, which are executed in a
single-instruction, multiple-thread (SIMT) manner
[10]. On account of this SIMT execution, a branch
within a warp can result in a thread divergence,
which significantly lowers the efficiency of parallel
execution.
Threads belonging to the same thread block
are allowed to share small capacity but fast memory.
In the current architecture, the maximum size of
shared memory a thread block can allocate is 48 kB.
By contrast, any thread can access large capacity but
slow off-chip memory. Off-chip memory can be
allocated as a texture, which provides hardware
accelerated interpolation of Texel values. This
ability is useful to accelerate deformation of the
floating image on a GPU.
CUDA provides a synchronization
mechanism for threads of the same thread block, but
not for those of different thread blocks. One special
exception is the family of atomic operations, which
is useful to count the number of intensities for
histogram computation. Atomic operations are
available to both on-chip and off-chip memories;
however, they can cause thread serialization. The
only way to achieve global synchronization is to
finish and restart the running kernel. However, this
increases the amount of off-chip memory access
because register files and shared memory are cleared
at the end of kernel execution. From this point of
view, a series of kernel invocations should be
unified into a single invocation if the kernel can be
implemented without global synchronization.
The important GPU concepts that are
strongly related to our registration algorithm are
fourfold: saving off-chip memory accesses by data
reuse in shared memory; reducing the amount of
off-chip memory access by kernel unification;
maximizing GPU resource utilization by texture-
based interpolation; and maximizing the efficiency
of SIMT execution by avoiding thread divergence.
V. RESULT AND DISCUSSION
For a human it is easy to identify a
“corner”, but algorithms require a mathematical
detection. Chris Harris and Mike Stephens in 1988
improved upon Moravec's corner detector by taking
into account the differential of the corner score with
respect to direction directly, instead of using shifted
patches. Moravec considered only shifts in discrete
45-degree steps, whereas Harris considered all
directions.
The Harris detector has proved to be more
accurate in distinguishing between edges and
corners. A circular Gaussian window is used to
reduce noise, and the local autocorrelation function
is used to find the correlation between the original
position and the shifted position. The Harris
equation provides both eigenvalues, for the x and y
directions: when both eigenvalues are large, the
point is a corner (interest point); when only one of
them is large, the point lies on an edge; and when
both are small, the point lies in a flat region. In this
way the features are extracted with respect to the
intensity pattern.
The Harris operator equation replaces both
the entropy and joint-entropy functions in the MI
similarity measure in order to locate the matching
points between the reference and target images. A
B-spline transformation is used to estimate the
deformation field, which depends on maximization
of the similarity between the two images to be
registered, and Rueckert's algorithm is used to
optimize the transformed image.
The experimental machine had a quad-core
Intel Core i5-2500K processor with 16 GB RAM
and a 512-core NVIDIA GeForce GTX 580
graphics card with 1.5 GB VRAM. The graphics
card was connected via a PCI Express bus
(generation 2). We used CUDA 4.2 [10] running on
Windows 7 to accelerate the performance of
registration.
VI. CONCLUSION
The aim of this project is to improve the
performance of mutual information in order to
achieve efficient image registration. Our main
challenge is to incorporate spatial and geometric
information into the MI similarity measure. The
image features are extracted with respect to the
intensity pattern of the image using the Harris
corner detector, which is derived from the local
autocorrelation function. In this way the system is
able to incorporate spatial and geometric
information during the mutual-information
similarity measurement process, in order to achieve
better, more accurate registration. A CUDA-based
GeForce GTX 580 graphics processing unit is used
to accelerate the performance of registration.
REFERENCES
[1]. A. Ardeshir Goshtasby, “2D and 3D Image
Registration,” John Wiley & Sons, Inc., ISBN 0-
471-64954-6, 2005.
[2]. Kei Ikeda et al., “Efficient Acceleration of Mutual
Information Computation for Nonrigid Registration
Using Cuda,” In IEEE Journal of Biomedical and
Health Informatics, Vol. 18, No. 3, May 2014.
[3]. Lee.S, G. Wolberg, and S. Y. Shin, “Scattered data
interpolation with multilevel B-splines,” IEEE
Trans. Vis. Comput. Graph, vol. 3, no. 3, pp. 228–
244, Jul. 1997.
[4]. Lindholm.E, J. Nickolls, S. Oberman, and J.
Montrym, “NVIDIA Tesla: A unified graphics and
computing architecture,” IEEE Micro, vol. 28, no. 2,
pp. 39–55, Mar. 2008.
[5]. NVIDIA Corporation. (2012, Apr.). CUDA
Programming Guide Version 4.2. [Online].
Available: http://developer.nvidia.com/cuda/
[6]. Rohlfing.T and C. R. Maurer, “Nonrigid image
registration in shared-memory multiprocessor
environments with application to brains, breasts, and
bees,” IEEE Trans. Inf. Technol. Biomed., vol. 7,
no. 1, pp. 16–25, Mar. 2003.
[7]. Rueckert.D, L. I. Sonoda, C. Hayes, D. L. G. Hill,
M. O. Leach, and D. J. Hawkes, “Nonrigid
registration using free-form deformations:
Application to breast MR images,” IEEE Trans.
Med. Imag., vol. 18, no. 8, pp. 712–721, Aug. 1999.
[8]. Studholme.C, R. T. Constable, and J. S. Duncan,
“Accurate alignment of functional EPI data to
anatomical MRI using a physics-based distortion
model,” IEEE Trans. Med. Imag., vol. 19, no. 11,
pp. 1115–1127, Nov. 2000.
[9]. J. Woo, M. Stone and J. L. Prince, “Multimodal
registration via mutual information incorporating
geometric and spatial context,” IEEE
Transactions on Image Processing, vol. 24, no. 2,
February 2015.