Hybrid Pixel-Based Method
for Multimodal Medical Image Fusion
Based on Integration of Pulse-Coupled
Neural Network (PCNN) and Genetic
Algorithm (GA)
R. Indhumathi, S. Nagarajan, and K. P. Indira
1 Introduction
Medical imaging plays a vital role in medical diagnosis and treatment. However,
each imaging modality yields information only in a limited domain. Studies have been
carried out to analyze information collected from distinct modalities of the same patient. This led to
the introduction of image fusion in the field of medicine and the progression of image
fusion techniques. Image fusion is characterized as the amalgamation of significant
data from numerous images and their incorporation into fewer images, generally
a solitary one. This fused image will be more instructive and precise than the indi-
vidual source images that have been utilized, and the resultant fused image comprises
paramount information. The main objective of image fusion is to incorporate all the
essential data from source images which would be pertinent and comprehensible for
human and machine recognition. Image fusion is the strategy of combining images
from distinct modalities into a single image [1]. The resultant image is utilized in a
variety of applications such as medical diagnosis, tumor identification and surgical
treatment [2]. Before fusing images from two distinct modalities, it is essential to
preserve the features so that the fused image is free from inconsistencies or artifacts
in the output.
Medical images can be obtained from distinct modalities such as computed tomog-
raphy (CT), magnetic resonance imaging (MRI), positron emission tomography
(PET), single-photon emission computed tomography (SPECT) and X-ray. For instance, X-ray
R. Indhumathi (B) · S. Nagarajan
EEE Department, Jerusalem College of Engineering, Chennai 600100, India
e-mail: indhuraja.phd@gmail.com
S. Nagarajan
e-mail: nagu_shola@yahoo.com
K. P. Indira
ECE Department, BVC Engineering College, Allavaram, India
e-mail: kpindiraphd@gmail.com
© Springer Nature Singapore Pte Ltd. 2021
S. Patnaik et al. (eds.), Advances in Machine Learning and Computational Intelligence,
Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-5243-4_82
and computed tomography (CT) are used to provide information about dense struc-
tures like bones, whereas magnetic resonance imaging (MRI) is used to provide
information about soft tissues, while positron emission tomography (PET) provides
information about metabolic activity taking place within the body. Hence, it is
necessary to integrate information from distinct images into a single image for a
comprehensive view. Multimodal medical image fusion helps to diagnose the disease and
also reduces the cost of storage by amalgamating numerous source images into a
single image.
Image fusion can be accomplished at three levels—pixel level, feature level and
decision level [3]. Pixel-level image fusion is usually performed by combining pixel
values from individual source images. Feature-level image fusion performs fusion
only after segmenting the source image into numerous features such as pixel intensity
and edges. Decision-level image fusion is a high-level fusion strategy which utilizes
fuzzy rule, heuristic algorithms, etc. In this paper, image fusion has been performed
on pixel level owing to their advantages such as easy implementation and improved
efficiency.
Image fusion techniques are partitioned into two major categories—spatial-
domain and frequency-domain techniques [4, 5]. Spatial-domain techniques are
further categorized into average and maximum fusion algorithms and principal
component analysis. Frequency-domain techniques are further categorized into pyramid-
based decomposition and wavelet transforms.
Averaging method is a simple image fusion strategy where the output is deter-
mined by calculating the average value of each pixel [6]. Though easy to understand,
the averaging method yields an output with low contrast and a washed-out appearance.
The choose-maximum fusion rule selects the maximum pixel value from the source
images as the output. This in turn yields a highly focused output [7]. PCA is a
vector space transformation methodology which reduces multi-dimensional datasets
to lower dimensions; it is a powerful tool for analyzing image data, since interpretation
is tedious for data of higher dimensions. Frequency-domain strategies are further
categorized into pyramidal method and wavelet-based transforms. Pyramidal method
consists of bands of source images. Each level will be usually smaller compared to its
predecessor. This results in higher levels concentrated on higher spatial frequencies.
Pyramidal method is further classified into [8] Gaussian pyramid method [9], Lapla-
cian pyramid method, ratio of low-pass pyramid method and morphological pyramid
method. In the Gaussian pyramid, each level is a low-pass-filtered version of its predecessor,
while in the Laplacian pyramid (LP), each level is a bandpass-filtered version of its
predecessor. In the ratio of low-pass pyramid, the
image at each level is the ratio between two successive levels in Gaussian pyramid
method. The morphological pyramid strategy uses morphological filters to extract the
details of an image without causing any adverse effects.
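The averaging and choose-maximum rules described above can be sketched in a few lines. The following is an illustrative NumPy sketch, not code from the paper; the image values are toy data:

```python
import numpy as np

def fuse_average(a, b):
    """Average fusion: each output pixel is the mean of the two source pixels."""
    return (a.astype(np.float64) + b.astype(np.float64)) / 2.0

def fuse_choose_max(a, b):
    """Choose-maximum fusion: each output pixel is the larger source pixel."""
    return np.maximum(a, b)

# Two toy 2x2 "images"
img1 = np.array([[10, 200], [50, 90]], dtype=np.uint8)
img2 = np.array([[30, 100], [80, 60]], dtype=np.uint8)
avg = fuse_average(img1, img2)    # [[ 20. 150.] [ 65.  75.]]
mx = fuse_choose_max(img1, img2)  # [[ 30 200] [ 80  90]]
```

The averaged result shows the low-contrast tendency noted above (extreme values are pulled toward the middle), while the choose-maximum result keeps the brightest, most focused pixel from either source.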
Wavelet image fusion techniques are further classified into [10] discrete wavelet
transform (DWT) [11], stationary wavelet transform (SWT) [12] and non-subsampled
contourlet transform (NSCT). The DWT technique decomposes an image into low-
and high-frequency components. Low-frequency components provide approxima-
tion information, while high-frequency components provide detailed information
contained within an image. DWT strategy suffers from a major drawback called shift
variance. In order to provide a shift-invariant output, Stationary wavelet transform
(SWT) was introduced. This technique employs an upsampling strategy, so the
decomposed image has the same size as the input image. Though these wavelet
transforms perform well overall, they fail to perform well along edges. To
provide information along edges, non-subsampled contourlet transform (NSCT) was
proposed which is a geometric analysis procedure which increases localization, shift
invariance, etc. In spite of providing various advantages, NSCT technique requires
proper filter tuning and proper reconstruction filters for any particular application
[13]. To overcome the above disadvantages, neural network strategies were intro-
duced. Pulse-coupled neural network (PCNN) is a neural network technique inspired
by the synchronous pulse bursts observed in the cerebral visual cortex of some
mammals.
Pulse-coupled neural network (PCNN) is one of the most commonly used image fusion
strategies for medical diagnosis and treatment [14]. Initially, manual adjustment was
done to tune PCNN variables. Lu et al. [14] have utilized a distinct strategy where
pulse-coupled neural network (PCNN) was optimized by utilizing multi-swarm fruit
fly optimization algorithm (MFOA). Quality assessment was utilized as hybrid fitness
function which enhances the performance of MFOA. Experimental results illustrate
the effectiveness of the proposed strategy.
Gai et al. [15] have put forward a novel image fusion technique which utilized
pulse-coupled neural network (PCNN) and non-subsampled shearlet transform
(NSST). Initially, the images were decomposed by utilizing non-subsampled shearlet
transform (NSST) strategy. This decomposes the image into low- and high-frequency
components. Low-frequency components have been fused by utilizing improved
sparse representation method. This technique eliminates the detailed information by
using Sobel operator, while information preservation has been done by using guided
filter. High-frequency components have been fused by utilizing pulse-coupled neural
network (PCNN) strategy. Finally, inverse transform is done to yield the fused output.
The effectiveness of the proposed strategy has been validated against seven different
fusion methods. The author has also fused information from three distinct modalities
to justify the superiority of the proposed method. Subjective and objective analyses
illustrate the effectiveness of the proposed method.
A multimodal fusion approach which utilizes non-subsampled shearlet trans-
form (NSST) and simplified pulse-coupled neural network model (S-PCNN) was
put forward by Hajer et al. [16]. Initially, the images were transformed into YIQ
components and then disintegrated into low- and high-frequency
components using the NSST strategy. The low-frequency components were fused using
weight region standard deviation (SD) and local energy, and high-frequency compo-
nents are fused by utilizing the S-PCNN strategy; finally, the inverse NSST and inverse
YIQ transforms are applied. The final discussion illustrates that the proposed strategy outper-
forms quantitatively in terms of performance measures such as mutual information,
entropy, SD, fusion quality and spatial frequency.
Jia et al. [17] have put forward a novel framework which utilized improved adap-
tive PCNN. PCNN is a technique that emerged from the visual cortex of mammals
and has proved to be very suitable in the field of image fusion. The source images
were initially fed to the parallel PCNN, and the gray value of the image was utilized
to trigger PCNN. Meanwhile, sum-modified Laplacian was chosen as the evaluation
function, and the linking strength of neuron which corresponds to PCNN was evalu-
ated. The ignition map was generated after ignition of PCNN. The clearer part of the
images was chosen to yield the fused image. Quantitative and qualitative analyses
illustrated that the proposed strategy outperformed the existing strategies.
Wang et al. [18] have put forward an image fusion technique which
utilized discrete wavelet transform (DWT) and dual-channel pulse-coupled neural
network (PCNN). For fusing low-frequency coefficients, choosing maximum fusion
rule has been utilized, while spatial frequency of high-frequency components has
been chosen to motivate dual-channel PCNN. Finally, inverse DWT has been utilized
to yield the fused image. Visual and quantitative analyses illustrated the superiority
of the proposed approach over other image fusion strategies.
Arif et al. [19] observed that existing image fusion strategies lacked the capa-
bility to produce a fused image preserving the complete information content
of the individual source images, and proposed a combination of curvelet trans-
form and genetic algorithm (GA) to yield a fused image. Curvelet transform helped in
preserving the information along the edges, and genetic algorithm helped to acquire
the fine details from the source images. Quantitative analysis demonstrated that the
proposed strategy outperformed the existing baseline strategies.
Fu et al. [20] have put forward a novel image fusion approach which utilized
non-subsampled contourlet transform (NSCT) and pulse-coupled neural network
(PCNN) jointly in the image fusion algorithm. High- and low-frequency coefficients
have been processed using a modified PCNN, and the degree of matching between
the input images has been utilized in the fusion rules. Finally, inverse NSCT has been
employed to reconstruct the fused image. Experimental analysis illustrated that the
proposed strategy outperformed wavelet, contourlet and traditional PCNN methods
in terms of higher mutual information content. Also, the proposed strategy preserved
edge as well as texture information, thereby including more information content in
the fused image. The author concluded by stating that the selection of parameters
for image fusion merits deeper research.
Image fusion is the strategy in which the input from multiple images is combined
to yield an efficient fused image. Lacewell et al. [21] have put forward a strategy
which utilized combination of discrete wavelet transform and genetic algorithm to
produce an efficient fused image. DWT has been utilized to extract features, while
genetic algorithm (GA) has been utilized to yield an enhanced output. Quantitative
and comparison analyses illustrated that the proposed strategy produced superior
results in terms of mutual information and root mean square error.
Wang et al. [22] have put forward a novel image fusion approach which utilizes
pulse-coupled neural network and wavelet-based contourlet transform. In order to
motivate PCNN, spatial high frequency in WBCT has been utilized. High-frequency
coefficients can be selected by utilizing weighted method of firing times. Wavelet
transform strategies perform better at isolated discontinuities but not along curved
edges, especially for 3D images. In order to overcome the above drawbacks, PCNN
has been utilized, which performs better for higher-dimensional images. Experimental
analysis illustrated that WBCT-PCNN performed better from both subjective and
objective analyses.
From the literature survey, it is inferred that the combination of two distinct image
fusion techniques provides better results in terms of both quality and quantity. Though
a solution may be obtained by utilizing any image fusion strategy, the optimal
solution can be obtained only by utilizing genetic algorithm (GA). Hence, an attempt
has been made to integrate the advantages of both PCNN and GA to yield an output
image that is superior in both qualitative and quantitative analyses.
The rest of the paper is organized as follows: Sect. 2 provides a detailed expla-
nation of the pulse-coupled neural network (PCNN), Sect. 3 describes the genetic
algorithm, Sect. 4 presents the proposed methodology, the proposed algorithm is
provided in Sect. 5, qualitative and quantitative analyses are provided in Sect. 6
and the conclusion is provided in Sect. 7.
2 Pulse-Coupled Neural Network (PCNN)
A new type of neural network, distinct from traditional neural network strategies,
is the pulse-coupled neural network (PCNN) [23]. PCNN was developed from the
synchronous pulse bursts observed in the cerebral visual cortex of some mammals. A
gathering of neurons is usually associated to frame PCNN. Each neuron correlates
to a pixel value whose intensity is contemplated as external stimulant. Every neuron
interfaces with another neuron in such a manner that a single-layer two-dimensional
cluster of PCNN is constituted. When linking coefficient beta is zero, every neuron
pulses naturally due to external stimulant. When beta is nonzero, the neurons are
associated mutually. When a neuron fires, its output stimulates the adjacent
neurons, leading them to pulse before their natural period. The output of the captured
neurons in turn influences the neurons associated with them, changing their
internal activity and outputs. When the iteration terminates, the outputs of every stage
are added to get an aggregate output known as the firing map.
There are two primary issues in the existing image fusion strategies [24]. Firstly,
pyramid and wavelet transform strategies treat each pixel in an individual manner
rather than considering the relationship between them. Further, the images should
be entirely registered before fusion.
In order to overcome the above drawbacks, PCNN has been proposed [25]. Fusion
strategy usually takes place in two major ways—either by choosing the better pixel
value or by outlining a main–subsidiary network pair for the different source images,
thereby choosing the output of the main PCNN as the fusion result.
The pulse-coupled neural network comprises three compartments: receptive field,
linking part or modulation and pulse generator.
Receptive field is an essential part which receives input signals from neigh-
boring neurons and external sources. It consists of two internal channels—feeding
compartment (F) and linking compartment (L). Compared to feeding compartment,
the linking inputs have a quicker characteristic response time constant. In order to
generate the total internal activity (U), the biased linking inputs are modulated
with the feeding inputs. The net result constitutes the linking/modulation
part. At last, pulse generator comprises step function generator and a threshold signal
generator.
The ability of neurons in the network to respond to an external stimulus is known
as firing, which occurs when the internal activity of a neuron exceeds a certain threshold
value. Initially, the output of the neuron is set to 1. The threshold value then starts decaying until the
neuron's next firing. The output generated is iteratively fed
back with a delay of a single iteration. As soon as the threshold exceeds the internal
activity (U), the output is reset to zero. A temporal series of pulse outputs is
generated by PCNN after n number of iterations which carries the data about the
input images. Input stimulus which corresponds to pixel’s color intensity is given to
feeding compartment. The pulse output of PCNN helps to make a decision on the
content of the image.
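The dynamics described above (feeding and linking compartments, internal activity U, a decaying threshold that resets on firing, and an accumulated firing map) can be illustrated with a simplified PCNN. This is a sketch only; the decay constants, linking strength beta and amplitude values below are illustrative assumptions, not the parameters used in the paper:

```python
import numpy as np

def neighbour_sum(Y):
    """Sum of the 8-connected neighbours of each pixel (zero-padded borders)."""
    P = np.pad(Y, 1)
    return (P[:-2, :-2] + P[:-2, 1:-1] + P[:-2, 2:] +
            P[1:-1, :-2]               + P[1:-1, 2:] +
            P[2:, :-2]  + P[2:, 1:-1]  + P[2:, 2:])

def pcnn_firing_map(S, n_iter=10, beta=0.2, aF=0.1, aL=1.0, aT=0.3,
                    VF=0.5, VL=0.2, VT=20.0):
    """Run a simplified PCNN on stimulus S (a normalised image) and
    return the accumulated firing map over n_iter iterations."""
    F = np.zeros_like(S); L = np.zeros_like(S)
    Y = np.zeros_like(S); T = np.ones_like(S)  # pulse output and threshold
    fire_map = np.zeros_like(S)
    for _ in range(n_iter):
        F = np.exp(-aF) * F + VF * neighbour_sum(Y) + S   # feeding compartment
        L = np.exp(-aL) * L + VL * neighbour_sum(Y)       # linking compartment
        U = F * (1.0 + beta * L)                          # internal activity
        Y = (U > T).astype(S.dtype)                       # fire where U exceeds threshold
        T = np.exp(-aT) * T + VT * Y                      # threshold decays, resets on firing
        fire_map += Y                                     # accumulate pulses into firing map
    return fire_map
```

A fusion rule can then select, for each pixel, the source image whose neuron fired more often, which is the sense in which the firing map "carries the data about the input images".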
Initially, double-precision operation is performed on the acquired input CT and
PET images. In order to reduce the memory requirements of an image, unsigned
integers (uint8 or uint16) can be utilized. An image whose data matrix is of class
uint8 or uint16 is known as an 8-bit or 16-bit image, respectively.
Though a grayscale image carries no color information, the aggregate amount of
emitted light for each pixel can still be distinguished: a small amount of light yields
dark pixels, while a larger amount yields bright pixels. On conversion from RGB to
grayscale, the RGB value of each pixel is taken and reduced to a single output value
which reflects the brightness of the pixel.
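A common way to implement this conversion is a weighted sum of the three channels. The BT.601 luminance weights used below are a standard choice, assumed here for illustration; the paper does not state which weights it uses:

```python
import numpy as np

def rgb_to_gray(rgb):
    """Collapse an RGB pixel array of shape (..., 3) to one brightness value per pixel."""
    weights = np.array([0.299, 0.587, 0.114])  # ITU-R BT.601 luminance weights
    return rgb.astype(np.float64) @ weights
```

For example, a pure white pixel (255, 255, 255) maps to brightness 255, while a pure green pixel keeps only its 0.587 share of the total.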
Normalization (norm) converts the scale of pixel intensity values and is known
as contrast stretching or histogram stretching. Normalization can be determined by
using the following formula
I_norm = (I_abs − I_min) / (I_max − I_min)
where
abs represents absolute value
min represents minimum value
max represents maximum value
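In code, this is a standard min–max contrast stretch; the guard for constant images below is an added assumption to avoid division by zero:

```python
import numpy as np

def normalize(img):
    """Min-max (contrast) stretch of pixel intensities to the range [0, 1]."""
    img = np.abs(img.astype(np.float64))   # I_abs
    lo, hi = img.min(), img.max()          # I_min, I_max
    if hi == lo:                           # constant image: avoid divide-by-zero
        return np.zeros_like(img)
    return (img - lo) / (hi - lo)          # I_norm
```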
Each neuron in firing pulse model comprises receptive field, modulation field and
pulse generator.
Two important features are necessary to fire a pulse generator—spiking cortex
model (SCM) and synaptic weight matrix. SCM has been demonstrated in compliance
with Weber–Fechner law, since it has higher sensitivity for low-intensity stimulant
and lower sensitivity for high-intensity stimulant. In order to improve the perfor-
mance and make the output reachable, synaptic weight matrix is applied to linking
field and sigmoid function is applied in firing pulse. PCNN comprises neuron capture
property, whereby the firing of any neuron causes nearby neurons of similar
luminance to fire as well. This property makes information coupling and transmission
occur automatically, which makes PCNN well suited for image
fusion.
In this model, one of the original images is chosen as input to the main PCNN
network randomly and another image as input to the subsidiary network. The firing
information about the subsidiary network is transferred to the main PCNN network
with the help of information couple and transmission properties. By doing so, image
fusion can be figured out. When a neuron is fired, the firing information about the
subsidiary network is communicated to the adjacent neurons and neurons of the main
PCNN network. The capturing property of PCNN makes it suitable for image fusion.
Eventually, the output obtained is transformed to uint8 format, and finally, the fused
image is obtained.
3 Genetic Algorithm
Genetic algorithm (GA) is a heuristic search algorithm used to solve the problems
of optimization [26]. The essence of GA is to simulate the evolution of nature, i.e., a
process in which a species experiences selective evolution and genetic inheritance.
At first, a random group is formed, and then, with mutual competition and genetic
development, the group goes through the following operation process: selection,
crossover, mutation, etc. The subgroup who has better fitness will survive and form
a new generation. The process cycles continuously until the fittest subgroups are
formed. The surviving groups are supposed to well adapt to the living conditions.
Thus, the genetic algorithm is actually a type of random search algorithm. Moreover,
it is nonlinear and parallelizable, so it has great advantages when compared with the
traditional optimization algorithms.
Four entities that help to define a GA problem are the representation of the candi-
date solutions, the fitness function, the genetic operators to assist in finding the
optimal or near optimal solution and specific knowledge of the problem such as
variables [27]. GA utilizes the simplest representation, reproduction and diversity
strategy. Optimization with GA is performed through natural exchange of genetic
material between parents. Offspring are formed from parent genes, and the fitness of the
offspring is evaluated. Only the best-fitting individuals are allowed to breed.
Image fusion based on GA consists of three types of genetic operations—
crossover, mutation and replication [28]. The procedure is as follows:
1. Encode the unknown fusion weight and define the objective function f(xi).
Sum-modified Laplacian (SML) is used as the fitness function.
2. N stands for the initial population size of fusion weight. Pm represents the
probability of mutation and Pc the probability of crossover.
3. Generate randomly a feature array whose length is L to form an initial fusion
weight group.
4. Follow the steps below and conduct the iterative operation until the termination condition
is achieved.
(a) Calculate the adaptability of an individual in the group.
(b) On the basis of adaptability, Pc and Pm, operate crossover, mutation and
replication.
5. The best-fitting individuals in the surviving subgroup are elected as the result.
So, the optimal fusion weight is obtained.
From the steps above, the optimal fusion weights of the images to be fused can
be obtained after several iterations. However, since this method does not take into
account the relationship between the images to be fused, a lot of time is wasted to
search for the fusion weights in the clear focal regions, which leads to low accuracy
of the fusion.
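The selection, crossover and mutation loop above can be sketched for a single scalar fusion weight Q, with SML as the fitness function as stated in step 1. The real-coded operators and the population, generation and probability values below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def sml(img):
    """Sum-modified Laplacian: a sharpness measure used as the GA fitness."""
    dx = np.abs(2 * img[1:-1, 1:-1] - img[:-2, 1:-1] - img[2:, 1:-1])
    dy = np.abs(2 * img[1:-1, 1:-1] - img[1:-1, :-2] - img[1:-1, 2:])
    return float((dx + dy).sum())

def ga_fusion_weight(img1, img2, pop=20, gens=30, pc=0.8, pm=0.1):
    """Evolve a scalar fusion weight Q (fused = Q*img1 + (1-Q)*img2)
    that maximises the SML of the fused image."""
    Q = rng.random(pop)                       # initial population of weights
    for _ in range(gens):
        fit = np.array([sml(q * img1 + (1 - q) * img2) for q in Q])
        # fitness-proportional selection (replication)
        p = fit / fit.sum() if fit.sum() > 0 else np.full(pop, 1 / pop)
        parents = rng.choice(Q, size=pop, p=p)
        # arithmetic crossover between consecutive parents
        children = parents.copy()
        for i in range(0, pop - 1, 2):
            if rng.random() < pc:
                a = rng.random()
                children[i] = a * parents[i] + (1 - a) * parents[i + 1]
                children[i + 1] = a * parents[i + 1] + (1 - a) * parents[i]
        # mutation: small random perturbation, clipped to [0, 1]
        mut = rng.random(pop) < pm
        children[mut] = np.clip(children[mut] + rng.normal(0, 0.1, mut.sum()), 0, 1)
        Q = children
    fit = np.array([sml(q * img1 + (1 - q) * img2) for q in Q])
    return float(Q[fit.argmax()])             # best-fitting surviving individual
```

Because the fitness is evaluated on the fused image alone, the search does spend effort on weights in already-clear regions, which is exactly the inefficiency the paragraph above points out.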
4 Optimization of Image Fusion Using PCNN and GA
A new image fusion algorithm using pulse-coupled neural network (PCNN) with
genetic algorithm (GA) optimization has been proposed which uses the firing
frequency of neurons to process the image fusion in PCNN. Since the result
of image fusion is affected by the neuron parameters, this algorithm is made dependent
only on the image gradient and independent of the other parameters. The PCNN is
built in each high-frequency sub-band to simulate the biological activity of human
visual system. On comparing with traditional algorithms where the linking strength
of each neuron is set constant or continuously changed according to features of each
pixel, here, the linking strength as well as the linking range is determined by the
prominence of corresponding low-frequency coefficients, which not only reduces
the calculation of parameters but also flexibly makes good use of global features of
images.
Registration has been conducted on the images to be fused; since the image focusing
differs, the fusion coefficients (Q) of the two pixels in the same position
have a certain hidden relationship [29]. Besides, the constraint condition of the fusion
coefficient is Q1 + Q2 = 1, Q1 > 0, Q2 > 0, so modeling for either coefficient is
enough. Therefore, in the process of wavelet transformation, the PCNN for fusion
weight coefficient (Q) of each pixel of each layer in a hierarchical order from high
to low has been built. Q stands for fusion coefficient and S stands for hidden state.
In this model, S is defined as three states, i.e., S ∈ {1, 2, 3}. When the pixel is
in the focused region of image 1, or when located much nearer the focal plane of
image 1 than that of image 2, S is set as 1. When the pixel is in the focused region
of image 2, or if it is located much nearer the focal plane of image 2 than that of
image 1, S is set as 3. If the pixel is not in the focused region of image 1 or image 2
and there is no obvious difference between the distances from focal plane of image 1
and that of image 2, S is set as 2.
In addition, if the fusion coefficient in the neighboring region is greater than 0.9
or less than 0.1, then S is set to be 1 or 3, respectively. The state transfer matrices from the parent
node Si to the sub-node Si+1 are defined on this basis.
Since the low-frequency details in the clear area are more abundant than those in the
unclear area, the method tries to get fusion weights from high scale to low scale
by using GA after the process of wavelet transformation. Meanwhile, the author
constructs a PCNN for each fusion weight in each layer and figures out its hidden
states layer by layer with the help of the fusion weights calculated by GA. PCNN
is a single-layered, two-dimensional, laterally connected neural network of pulse-
coupled neurons where the inputs to the neuron are given by feeding and linking
inputs. Feeding input is the primary input from the neuron's receptive area. The neuron's
receptive area consists of the neighboring pixels of the corresponding pixel in the input
image. Linking input is the secondary input of lateral connections with neighboring
neurons. The difference between these inputs is that the feeding connections have
a slower characteristic response time constant than the linking connections. Guided
by the hidden states of the fusion weights in the upper layer, the author gets the
values directly from the clear area of the next layer without GA. In this way, the
population size in the process of GA is reduced, which contributes a lot to improving
the precision of calculation in the same amount of time.
5 Algorithm
Step 1: Apply PCNN to N layers of the image and then figure out the fusion
coefficient of Layer N by using GA. Set variate i = 0.
Step 2: Search the neighboring region of Layer N − i for the pixels, whose fusion
coefficients are greater than 0.9 or less than 0.1. Set Si = 1 or 3.
Step 3: Search Layer N − (i + 1) for the pixels whose parent node are Si = 1 or
3, and then, the Qs of these pixels are set to be 1 or 0, and accordingly, Si + 1 are
set to be 1 or 3.
Step 4: Search Layer N − (i + 1) and find out the pixels whose parent node are
Si = 2 and then apply GA to work out the fusion coefficients of these pixels. Set
their Si + 1 to be 2, and set variate i = i + 1. After that, go back to Step 2 and
circulate the operation.
Step 5: Circulate Step 2, Step 3 and Step 4. Work out the fusion coefficients until
the last layer.
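The state bookkeeping in Steps 2–4 can be sketched as follows. The thresholds 0.9 and 0.1 come from the text above; the 2×2 parent-to-child correspondence between successive layers is an assumption made purely for illustration:

```python
import numpy as np

def assign_states(Q):
    """Map fusion coefficients to hidden states: S=1 (clearly image 1),
    S=3 (clearly image 2), S=2 (undecided, needs GA)."""
    S = np.full(Q.shape, 2, dtype=int)
    S[Q > 0.9] = 1
    S[Q < 0.1] = 3
    return S

def propagate(S_parent):
    """Children of decided parents inherit Q=1 or Q=0 directly
    (each parent is assumed to cover a 2x2 block in the next, finer layer);
    only children of undecided parents (S=2) still need GA."""
    S_child = np.kron(S_parent, np.ones((2, 2), dtype=int))
    Q_child = np.where(S_child == 1, 1.0, np.where(S_child == 3, 0.0, np.nan))
    return S_child, Q_child  # NaN marks pixels still to be optimised by GA
```

Skipping GA for the inherited pixels is what shrinks the GA population workload per layer, which is the efficiency gain claimed in Sect. 4.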
6 Results and Discussion
6.1 Quantitative Analysis
Percentage Residual Difference (PRD): Percentage residual difference reflects the
degree of deviation between source image and the fused image [24]. Lower the value
of PRD, higher is the quality of the image. On comparing PRD values of PCNN
and GA with PCNN, it is inferred that PCNN and GA offer better results for all 16
datasets (Table 2).
Table 1 Objective analysis of PCNN and hybridization of PCNN and GA. For each of
the 16 CT/PET dataset pairs, the table presents the source CT image, the source PET
image, the PCNN (gradient) fused image and the PCNN & GA fused image. (Image
grid not reproducible in text.)
Root Mean Square Error (RMSE): RMSE is a measure of difference between
predicted value and the actual value [25]. Lower the value of RMSE, higher is the
quality of the image. On comparing RMSE values of PCNN and GA with PCNN, it
is inferred that PCNN and GA offer better results for all 16 datasets (Table 2).
Peak Signal-to-Noise Ratio (PSNR): PSNR is the ratio between maximum
possible power of a signal and the power of corrupting noise affecting the image.
The quality of an image will be better if the value of PSNR is high. On comparing
PSNR values of PCNN and GA with PCNN, it is inferred that PCNN and GA offer
better results for all 16 datasets (Table 3).
Entropy: Entropy reflects the amount of information content which is available
in the fused image. Higher the value of entropy, higher is the quality of fused image.
On comparing entropy values of PCNN and GA with PCNN, it is inferred that PCNN
and GA offer better results for almost all datasets.
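The four measures can be computed as follows. This is a sketch; the PRD normalization by the reference-image energy is one common definition, assumed here since the paper does not give the formula:

```python
import numpy as np

def rmse(ref, fused):
    """Root mean square error between reference and fused images."""
    return float(np.sqrt(np.mean((ref.astype(np.float64) - fused) ** 2)))

def prd(ref, fused):
    """Percentage residual difference, relative to the reference energy."""
    ref = ref.astype(np.float64)
    return float(100.0 * np.sqrt(((ref - fused) ** 2).sum() / (ref ** 2).sum()))

def psnr(ref, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    e = rmse(ref, fused)
    return float('inf') if e == 0 else float(20 * np.log10(peak / e))

def entropy(img):
    """Shannon entropy of an 8-bit image's gray-level histogram, in bits."""
    hist = np.bincount(img.ravel().astype(np.uint8), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

Lower PRD and RMSE and higher PSNR and entropy correspond to the better fusion quality reported in Tables 2 and 3.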
7 Conclusion and Future Scope
A new image fusion algorithm using pulse-coupled neural network (PCNN) with
genetic algorithm (GA) optimization has been proposed which uses the firing
frequency of neurons to process the image fusion in PCNN. The performance of
the proposed algorithm has been evaluated using sixteen sets of computed tomog-
raphy (CT) and positron emission tomography (PET) images obtained from Bharat
Scans. Qualitative and quantitative analyses demonstrate that “optimization of image
fusion using pulse-coupled neural network (PCNN) and genetic algorithm (GA)”
outperforms PCNN technique.
The proposed strategy can be extended to merge color images since color carries
remarkable information and our eyes can observe even minute variations in color.
With the emerging advances in remote airborne sensors, ample and assorted informa-
tion is accessible in the fields of resource investigation, environmental monitoring and
disaster prevention. The existing strategies discussed in the literature survey intro-
duce distortion in color. The algorithm proposed can be extended to fuse remote
sensing images obtained from optical, thermal, multispectral and hyperspectral
sensors without any color distortion.
Multimodal medical image fusion has been implemented with static images in
the proposed work. At present, fusion of multimodal video sequences generated by
a network of multimodal sources is turning out to be progressively essential for
surveillance purposes, navigation and object tracking applications. The integral data
provided by these sensors should be merged to yield a precise gauge so as to serve
more efficiently in distinct tasks such as detection, recognition and tracking. From the
fused output, it is conceivable to produce a precise representation of the recognized
scene which in turn finds its use in variety of applications.
Table 2 Comparison analysis of PRD and RMSE for PCNN and hybridization of PCNN and GA

Percentage residual difference (PRD)
Datasets PCNN (gradient) Hybridization of PCNN and GA
1 0.4273 3.3080e−008
2 0.3893 4.8480e−008
3 0.4878 4.8667e−008
4 0.4283 1.5920e−008
5 0.3807 5.0838e−008
6 0.4216 7.4327e−008
7 0.4041 7.2718e−008
8 0.3904 8.0547e−008
9 0.1121 6.9992e−008
10 46.6390 5.1606e−008
11 0.1795 6.5642e−008
12 6.9132 4.8696e−008
13 0.1654 4.4340e−008
14 1.1723 5.2005e−008
15 0.1393 6.8369e−008
16 1.5822e−004 7.0856e−008
Root mean square error (RMSE)
Datasets PCNN (gradient) Hybridization of PCNN and GA
1 0.0057 5.4255e−013
2 0.0054 6.9413e−013
3 0.0085 6.7392e−013
4 0.0094 2.2869e−013
5 0.0069 8.2856e−013
6 0.0091 8.7955e−013
7 0.0076 8.5858e−013
8 0.0070 8.2650e−013
9 0.0089 9.0324e−013
10 0.0098 7.6707e−013
11 0.0110 9.1334e−013
12 0.0085 8.2101e−013
13 0.0088 6.7181e−013
14 3.2809e−004 4.552e−013
15 0.0077 8.7183e−013
16 0.0049 5.7605e−013
Table 3 Comparison analysis of PSNR and entropy for PCNN and hybridization of PCNN and GA

Peak signal-to-noise ratio (PSNR)
Datasets PCNN (gradient) Hybridization of PCNN and GA
1 54.5123 55.7234
2 55.0001 57.2346
3 53.4523 54.8976
4 56.1234 57.1042
5 55.6321 57.1234
6 56.1235 57.8432
7 55.4567 56.7046
8 55.1732 56.5460
9 57.8432 58.4389
10 57.2341 58.8975
11 57.6574 59.1004
12 55.6978 57.9874
13 54.2054 55.5512
14 56.1800 58.1254
15 57.8358 58.2657
16 55.8526 57.2116
Entropy
Datasets PCNN (gradient) Hybridization of PCNN and GA
1 7.3120 8.1423
2 7.4250 8.4146
3 7.3690 8.0799
4 7.9854 8.3523
5 7.8453 8.0733
6 8.0001 8.0452
7 7.7785 8.3483
8 7.4567 8.4209
9 7.3001 8.2272
10 7.5254 7.6642
11 7.3001 7.3740
12 7.9784 8.1282
13 7.8546 8.1151
14 7.8945 8.2251
15 7.9000 8.0205
16 8.1234 8.6886
866 R. Indhumathi et al.
References
1. P. Hill, M. Ebrahim Al-Mualla, D. Bull, Perceptual image fusion using wavelets. IEEE Trans.
Image Process. 26(3), 1076–1088
2. N. Mittal, et al., Decomposition & Reconstruction of Medical Images in Matlab Using Different
Wavelet Parameters, in 1st International Conference on Futuristic Trend In Computational
Analysis and Knowledge Management. IEEE (2015). ISSN 978-1-4799-8433-6/15
3. K.P. Indira, et al., Impact of Co-efficient Selection Rules on the Performance of Dwt Based
Fusion on Medical Images, in International Conference On Robotics, Automation, Control
and Embedded Systems-Race. IEEE (2015)
4. Y. Yang, M. Ding, S. Huang, Y. Que, W. Wan, M. Yang, J. Sun, Multi-Focus Image Fusion Via
Clustering PCA Based Joint Dictionary Learning, vol. 5, pp.16985–16997, Sept 2017
5. A. Ellmauthaler, C.L. Pagliari, et al., Image fusion using the undecimated wavelet transform
with spectral factorization and non orthogonal filter banks. IEEE Trans. Image Process. 22(3),
1005–1017 (2013)
6. V. Bhateja, H. Patel, A. Krishn, A. Sahu, Multimodal medical image sensor fusion framework
using cascade of wavelet and contourlet transform domains. IEEE Sens. J. 15(12), 6783–6790
(2015)
7. B. Erol, M. Amin, Generalized PCA Fusion for Improved Radar Human Motion Recognition,
in IEEE Radar Conference (RadarConf), Boston, MA, USA (2019), pp. 1–5
8. V.S. Petrovic, C.S. Xydeas, Gradient based multi resolution image fusion. IEEE Trans. Image
Process. 13(2), 228–237 (2004)
9. P.J. Burt, E.H. Adelson, The Laplacian pyramid as a compact image code. IEEE Trans.
Commun. 31(4), 532–540 (1983)
10. J. Tian, L. Chen, Adaptive multi-focus image fusion using a waveletbased statistical sharpness
measure. Signal Process. 92(9), 2137–2146 (2012)
11. M.D. Nandeesh, M. Meenakshi, A Novel Technique of Medical Image Fusion Using Stationary
Wavelet Transform and Principal Component Analysis, in 2015 International Conference on
Smart Sensors and Systems (IC-SSS), Bangalore (2015), pp. 1–5
12. Q.M. Gaurav Bhatnagar, W. Jonathan, Z. Liu, Directive contrast based multimodal Medical
image fusion in NSCT domain. IEEE Trans. Multimedia 15(5), 1014–1024 (2013)
13. S. Das. M.K. Kundu, A neuro-fuzzy approach for medical image fusion. IEEE Trans. Biomed.
Eng. 60(12), 3347–3353 (2013)
14. T. Lu, C. Tian, X. Kai, Exploiting quality-guided adaptive optimization for fusing multimodal
medical images. IEEE Access 7, 96048–96059 (2019)
15. D. Gai, X. Shen, H. Cheng, H. Chen, Medical image fusion via PCNN based on edge preser-
vation and improved sparse representation in NSST domain. IEEE Access 7, 85413–85429
16. O. Hajer, O. Mourali, E. Zagrouba, Non-subsampled shearlet transform based MRI and PET
brain image fusion using simplified pulse coupled neural network and weight local features in
YIQ colour space. IET Image Proc. 12(10), 1873–1880 (2018)
17. Y. Jia, C. Rong, Y. Wang, Y. Zhu, Y. Yang, A Multi-Focus Image Fusion Algorithm Using
Modified Adaptive PCNN Model, in 12th International Conference on Natural Computation,
Fuzzy Systems and Knowledge Discovery (ICNC-FSKD). IEEE (2016), pp. 612–617
18. N. Wang, W. Wang, An Image Fusion Method Based on Wavelet and Dual-Channel Pulse
Coupled Neural Network, in 2015 IEEE International Conference on Progress in Informatics
and Computing (PIC) (2015), pp. 270–274
19. M. Arif, N. Aniza Abdullah, S. Kumara Phalianakote, N. Ramli, M. Elahi, Maximizing Informa-
tion of Multimodality Brain Image Fusion using Curvelet Transform with Genetic Algorithm, in
IEEE 2014 International Conference on Computer Assisted System in Health (CASH) (2014),
pp. 45–51
20. L. Fu, L. Yifan, L. Xin, Image Fusion Based on Nonsubsampled Contourlet Transform and
Pulse Coupled Neural Networks, in IEEE Fourth International Conference on Intelligent
Computation Technology and Automation, vol. 2 (2011), pp. 180–183
Hybrid Pixel-Based Method for Multimodal Medical Image Fusion … 867
21. C.W Lacewell, M. Gebril, R. Buaba, A. Homaifar, Optimization of Image Fusion using
Genetic Algorithm and Discrete Wavelet Transform, in Proceedings of the IEEE 2010 National
Aerospace and Electronics Conference (NAECON) (2010), pp. 116–121
22. X. Wang, L. Chen, Image Fusion Algorithm Based on Spatial Frequency-Motivated Pulse
Coupled Neural Networks in Wavelet Based Contourlet Transform Domain, in 2nd Confer-
ence on Environmental Science and Information Application Technology, vol. 2. IEEE (2010),
pp. 411–414
23. Y. Yang, J. Dang, Y. Wang, Medical Image Fusion Method Based on Lifting Wavelet Transform
and Dual-channel PCNN, in 9th IEEE Conference on Industrial Electronics and Applications
(2014), pp. 1179–1182
24. Y. Wang, J. Dang, Q. Li, S. Li, Multimodal Medical Image Fusion Using Fuzzy Radial Basis
Function Neural Networks, in IEEE, Proceedings of the 2007 International Conference on
Wavelet Analysis and Pattern Recognition, vol. 2 (2007), pp. 778–782
25. T. Li, Y. Wang, Multi scaled combination of MR and SPECT images in neuroimaging: a simplex
method based variable-weight fusion. Comput. Method Programs Biomed. 105:35–39
26. C.W Lacewell, M. Gebril, R. Buaba, A., Optimization of Image Fusion using Genetic Algorithm
and Discrete Wavelet Transform, in Proceedings of the IEEE 2010 National Aerospace and
Electronics Conference (NAECON) (2010), pp. 116–121
27. R. Gupta, D. Awasthi, Wave-Packet Image Fusion Technique Based on Genetic Algorithm,
in IEEE, 5th International Conference on Confluence The Next Generation Information
Technology Summit (2014), pp. 280–285
28. A. Krishn, V. Bhateja, Himanshi, A. Sahu, Medical Image Fusion Using Combination of
PCA and Wavelet Analysis, in IEEE International Conference on Advances in Computing,
Communications and Informatics (2014), pp. 986–991
29. A. Sahu, V. Bhateja, A. Krishn, Himanshi, Medical Image Fusion with Laplacian Pyra-
mids, in IEEE, 2014 International Conference on Medical Imaging, m-Health and Emerging
Communication Systems (2014), pp. 448–453

More Related Content

What's hot

Image Fusion and Image Quality Assessment of Fused Images
Image Fusion and Image Quality Assessment of Fused ImagesImage Fusion and Image Quality Assessment of Fused Images
Image Fusion and Image Quality Assessment of Fused ImagesCSCJournals
 
Techniques of Brain Cancer Detection from MRI using Machine Learning
Techniques of Brain Cancer Detection from MRI using Machine LearningTechniques of Brain Cancer Detection from MRI using Machine Learning
Techniques of Brain Cancer Detection from MRI using Machine LearningIRJET Journal
 
International Journal of Image Processing (IJIP) Volume (2) Issue (1)
International Journal of Image Processing (IJIP) Volume (2) Issue (1)International Journal of Image Processing (IJIP) Volume (2) Issue (1)
International Journal of Image Processing (IJIP) Volume (2) Issue (1)CSCJournals
 
IRJET - Fusion of CT and MRI for the Detection of Brain Tumor by SWT and Prob...
IRJET - Fusion of CT and MRI for the Detection of Brain Tumor by SWT and Prob...IRJET - Fusion of CT and MRI for the Detection of Brain Tumor by SWT and Prob...
IRJET - Fusion of CT and MRI for the Detection of Brain Tumor by SWT and Prob...IRJET Journal
 
FUZZY SEGMENTATION OF MRI CEREBRAL TISSUE USING LEVEL SET ALGORITHM
FUZZY SEGMENTATION OF MRI CEREBRAL TISSUE USING LEVEL SET ALGORITHMFUZZY SEGMENTATION OF MRI CEREBRAL TISSUE USING LEVEL SET ALGORITHM
FUZZY SEGMENTATION OF MRI CEREBRAL TISSUE USING LEVEL SET ALGORITHMAM Publications
 
Development of algorithm for identification of maligant growth in cancer usin...
Development of algorithm for identification of maligant growth in cancer usin...Development of algorithm for identification of maligant growth in cancer usin...
Development of algorithm for identification of maligant growth in cancer usin...IJECEIAES
 
Literature Survey on Detection of Brain Tumor from MRI Images
Literature Survey on Detection of Brain Tumor from MRI Images Literature Survey on Detection of Brain Tumor from MRI Images
Literature Survey on Detection of Brain Tumor from MRI Images IOSR Journals
 
Comparison of Image Segmentation Algorithms for Brain Tumor Detection
Comparison of Image Segmentation Algorithms for Brain Tumor DetectionComparison of Image Segmentation Algorithms for Brain Tumor Detection
Comparison of Image Segmentation Algorithms for Brain Tumor DetectionIJMTST Journal
 
Comparative Assessments of Contrast Enhancement Techniques in Brain MRI Images
Comparative Assessments of Contrast Enhancement Techniques in Brain MRI ImagesComparative Assessments of Contrast Enhancement Techniques in Brain MRI Images
Comparative Assessments of Contrast Enhancement Techniques in Brain MRI Imagesijtsrd
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
 
Development and Comparison of Image Fusion Techniques for CT&MRI Images
Development and Comparison of Image Fusion Techniques for CT&MRI ImagesDevelopment and Comparison of Image Fusion Techniques for CT&MRI Images
Development and Comparison of Image Fusion Techniques for CT&MRI ImagesIJERA Editor
 
Brain tumor detection by scanning MRI images (using filtering techniques)
Brain tumor detection by scanning MRI images (using filtering techniques)Brain tumor detection by scanning MRI images (using filtering techniques)
Brain tumor detection by scanning MRI images (using filtering techniques)Vivek reddy
 
3D Segmentation of Brain Tumor Imaging
3D Segmentation of Brain Tumor Imaging3D Segmentation of Brain Tumor Imaging
3D Segmentation of Brain Tumor ImagingIJAEMSJORNAL
 

What's hot (19)

Image Fusion and Image Quality Assessment of Fused Images
Image Fusion and Image Quality Assessment of Fused ImagesImage Fusion and Image Quality Assessment of Fused Images
Image Fusion and Image Quality Assessment of Fused Images
 
Techniques of Brain Cancer Detection from MRI using Machine Learning
Techniques of Brain Cancer Detection from MRI using Machine LearningTechniques of Brain Cancer Detection from MRI using Machine Learning
Techniques of Brain Cancer Detection from MRI using Machine Learning
 
International Journal of Image Processing (IJIP) Volume (2) Issue (1)
International Journal of Image Processing (IJIP) Volume (2) Issue (1)International Journal of Image Processing (IJIP) Volume (2) Issue (1)
International Journal of Image Processing (IJIP) Volume (2) Issue (1)
 
Ea4301770773
Ea4301770773Ea4301770773
Ea4301770773
 
IRJET - Fusion of CT and MRI for the Detection of Brain Tumor by SWT and Prob...
IRJET - Fusion of CT and MRI for the Detection of Brain Tumor by SWT and Prob...IRJET - Fusion of CT and MRI for the Detection of Brain Tumor by SWT and Prob...
IRJET - Fusion of CT and MRI for the Detection of Brain Tumor by SWT and Prob...
 
FUZZY SEGMENTATION OF MRI CEREBRAL TISSUE USING LEVEL SET ALGORITHM
FUZZY SEGMENTATION OF MRI CEREBRAL TISSUE USING LEVEL SET ALGORITHMFUZZY SEGMENTATION OF MRI CEREBRAL TISSUE USING LEVEL SET ALGORITHM
FUZZY SEGMENTATION OF MRI CEREBRAL TISSUE USING LEVEL SET ALGORITHM
 
Q04503100104
Q04503100104Q04503100104
Q04503100104
 
Development of algorithm for identification of maligant growth in cancer usin...
Development of algorithm for identification of maligant growth in cancer usin...Development of algorithm for identification of maligant growth in cancer usin...
Development of algorithm for identification of maligant growth in cancer usin...
 
Literature Survey on Detection of Brain Tumor from MRI Images
Literature Survey on Detection of Brain Tumor from MRI Images Literature Survey on Detection of Brain Tumor from MRI Images
Literature Survey on Detection of Brain Tumor from MRI Images
 
Comparison of Image Segmentation Algorithms for Brain Tumor Detection
Comparison of Image Segmentation Algorithms for Brain Tumor DetectionComparison of Image Segmentation Algorithms for Brain Tumor Detection
Comparison of Image Segmentation Algorithms for Brain Tumor Detection
 
Comparative Assessments of Contrast Enhancement Techniques in Brain MRI Images
Comparative Assessments of Contrast Enhancement Techniques in Brain MRI ImagesComparative Assessments of Contrast Enhancement Techniques in Brain MRI Images
Comparative Assessments of Contrast Enhancement Techniques in Brain MRI Images
 
Report (1)
Report (1)Report (1)
Report (1)
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
 
Development and Comparison of Image Fusion Techniques for CT&MRI Images
Development and Comparison of Image Fusion Techniques for CT&MRI ImagesDevelopment and Comparison of Image Fusion Techniques for CT&MRI Images
Development and Comparison of Image Fusion Techniques for CT&MRI Images
 
Brain tumor detection by scanning MRI images (using filtering techniques)
Brain tumor detection by scanning MRI images (using filtering techniques)Brain tumor detection by scanning MRI images (using filtering techniques)
Brain tumor detection by scanning MRI images (using filtering techniques)
 
F010224446
F010224446F010224446
F010224446
 
3D Segmentation of Brain Tumor Imaging
3D Segmentation of Brain Tumor Imaging3D Segmentation of Brain Tumor Imaging
3D Segmentation of Brain Tumor Imaging
 
vol.4.1.2.july.13
vol.4.1.2.july.13vol.4.1.2.july.13
vol.4.1.2.july.13
 
BRAIN CANCER CLASSIFICATION USING BACK PROPAGATION NEURAL NETWORK AND PRINCIP...
BRAIN CANCER CLASSIFICATION USING BACK PROPAGATION NEURAL NETWORK AND PRINCIP...BRAIN CANCER CLASSIFICATION USING BACK PROPAGATION NEURAL NETWORK AND PRINCIP...
BRAIN CANCER CLASSIFICATION USING BACK PROPAGATION NEURAL NETWORK AND PRINCIP...
 

Similar to Hybrid Pixel-Based Method for Multimodal Medical Image Fusion Based on Integration of Pulse-Coupled Neural Network (PCNN) and Genetic Algorithm (GA)

INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSIONINFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSIONIJCI JOURNAL
 
Optimal Coefficient Selection For Medical Image Fusion
Optimal Coefficient Selection For Medical Image FusionOptimal Coefficient Selection For Medical Image Fusion
Optimal Coefficient Selection For Medical Image FusionIJERA Editor
 
MR Image Compression Based on Selection of Mother Wavelet and Lifting Based W...
MR Image Compression Based on Selection of Mother Wavelet and Lifting Based W...MR Image Compression Based on Selection of Mother Wavelet and Lifting Based W...
MR Image Compression Based on Selection of Mother Wavelet and Lifting Based W...ijma
 
AN ANN BASED BRAIN ABNORMALITY DETECTION USING MR IMAGES
AN ANN BASED BRAIN ABNORMALITY DETECTION USING MR IMAGESAN ANN BASED BRAIN ABNORMALITY DETECTION USING MR IMAGES
AN ANN BASED BRAIN ABNORMALITY DETECTION USING MR IMAGEScscpconf
 
Mr image compression based on selection of mother wavelet and lifting based w...
Mr image compression based on selection of mother wavelet and lifting based w...Mr image compression based on selection of mother wavelet and lifting based w...
Mr image compression based on selection of mother wavelet and lifting based w...ijma
 
IRJET- Brain Tumor Detection using Hybrid Model of DCT DWT and Thresholding
IRJET- Brain Tumor Detection using Hybrid Model of DCT DWT and ThresholdingIRJET- Brain Tumor Detection using Hybrid Model of DCT DWT and Thresholding
IRJET- Brain Tumor Detection using Hybrid Model of DCT DWT and ThresholdingIRJET Journal
 
Neuroendoscopy Adapter Module Development for Better Brain Tumor Image Visual...
Neuroendoscopy Adapter Module Development for Better Brain Tumor Image Visual...Neuroendoscopy Adapter Module Development for Better Brain Tumor Image Visual...
Neuroendoscopy Adapter Module Development for Better Brain Tumor Image Visual...IJECEIAES
 
Classification of MR medical images Based Rough-Fuzzy KMeans
Classification of MR medical images Based Rough-Fuzzy KMeansClassification of MR medical images Based Rough-Fuzzy KMeans
Classification of MR medical images Based Rough-Fuzzy KMeansIOSRJM
 
IRJET- Brain Tumor Detection and Classification with Feed Forward Back Propag...
IRJET- Brain Tumor Detection and Classification with Feed Forward Back Propag...IRJET- Brain Tumor Detection and Classification with Feed Forward Back Propag...
IRJET- Brain Tumor Detection and Classification with Feed Forward Back Propag...IRJET Journal
 
Brain Tumor Extraction from T1- Weighted MRI using Co-clustering and Level Se...
Brain Tumor Extraction from T1- Weighted MRI using Co-clustering and Level Se...Brain Tumor Extraction from T1- Weighted MRI using Co-clustering and Level Se...
Brain Tumor Extraction from T1- Weighted MRI using Co-clustering and Level Se...CSCJournals
 
Optimizing Problem of Brain Tumor Detection using Image Processing
Optimizing Problem of Brain Tumor Detection using Image ProcessingOptimizing Problem of Brain Tumor Detection using Image Processing
Optimizing Problem of Brain Tumor Detection using Image ProcessingIRJET Journal
 
BRAIN TUMOR CLASSIFICATION IN 3D-MRI USING FEATURES FROM RADIOMICS AND 3D-CNN...
BRAIN TUMOR CLASSIFICATION IN 3D-MRI USING FEATURES FROM RADIOMICS AND 3D-CNN...BRAIN TUMOR CLASSIFICATION IN 3D-MRI USING FEATURES FROM RADIOMICS AND 3D-CNN...
BRAIN TUMOR CLASSIFICATION IN 3D-MRI USING FEATURES FROM RADIOMICS AND 3D-CNN...IAEME Publication
 
DIRECTIONAL CLASSIFICATION OF BRAIN TUMOR IMAGES FROM MRI USING CNN-BASED DEE...
DIRECTIONAL CLASSIFICATION OF BRAIN TUMOR IMAGES FROM MRI USING CNN-BASED DEE...DIRECTIONAL CLASSIFICATION OF BRAIN TUMOR IMAGES FROM MRI USING CNN-BASED DEE...
DIRECTIONAL CLASSIFICATION OF BRAIN TUMOR IMAGES FROM MRI USING CNN-BASED DEE...IRJET Journal
 
Paper id 28201445
Paper id 28201445Paper id 28201445
Paper id 28201445IJRAT
 
BRAINREGION.pptx
BRAINREGION.pptxBRAINREGION.pptx
BRAINREGION.pptxVISHALAS9
 
Application of Hybrid Genetic Algorithm Using Artificial Neural Network in Da...
Application of Hybrid Genetic Algorithm Using Artificial Neural Network in Da...Application of Hybrid Genetic Algorithm Using Artificial Neural Network in Da...
Application of Hybrid Genetic Algorithm Using Artificial Neural Network in Da...IOSRjournaljce
 

Similar to Hybrid Pixel-Based Method for Multimodal Medical Image Fusion Based on Integration of Pulse-Coupled Neural Network (PCNN) and Genetic Algorithm (GA) (20)

INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSIONINFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION
 
Optimal Coefficient Selection For Medical Image Fusion
Optimal Coefficient Selection For Medical Image FusionOptimal Coefficient Selection For Medical Image Fusion
Optimal Coefficient Selection For Medical Image Fusion
 
Mm2
Mm2Mm2
Mm2
 
MR Image Compression Based on Selection of Mother Wavelet and Lifting Based W...
MR Image Compression Based on Selection of Mother Wavelet and Lifting Based W...MR Image Compression Based on Selection of Mother Wavelet and Lifting Based W...
MR Image Compression Based on Selection of Mother Wavelet and Lifting Based W...
 
AN ANN BASED BRAIN ABNORMALITY DETECTION USING MR IMAGES
AN ANN BASED BRAIN ABNORMALITY DETECTION USING MR IMAGESAN ANN BASED BRAIN ABNORMALITY DETECTION USING MR IMAGES
AN ANN BASED BRAIN ABNORMALITY DETECTION USING MR IMAGES
 
Mr image compression based on selection of mother wavelet and lifting based w...
Mr image compression based on selection of mother wavelet and lifting based w...Mr image compression based on selection of mother wavelet and lifting based w...
Mr image compression based on selection of mother wavelet and lifting based w...
 
IRJET- Brain Tumor Detection using Hybrid Model of DCT DWT and Thresholding
IRJET- Brain Tumor Detection using Hybrid Model of DCT DWT and ThresholdingIRJET- Brain Tumor Detection using Hybrid Model of DCT DWT and Thresholding
IRJET- Brain Tumor Detection using Hybrid Model of DCT DWT and Thresholding
 
M010128086
M010128086M010128086
M010128086
 
Neuroendoscopy Adapter Module Development for Better Brain Tumor Image Visual...
Neuroendoscopy Adapter Module Development for Better Brain Tumor Image Visual...Neuroendoscopy Adapter Module Development for Better Brain Tumor Image Visual...
Neuroendoscopy Adapter Module Development for Better Brain Tumor Image Visual...
 
Classification of MR medical images Based Rough-Fuzzy KMeans
Classification of MR medical images Based Rough-Fuzzy KMeansClassification of MR medical images Based Rough-Fuzzy KMeans
Classification of MR medical images Based Rough-Fuzzy KMeans
 
L045047880
L045047880L045047880
L045047880
 
IRJET- Brain Tumor Detection and Classification with Feed Forward Back Propag...
IRJET- Brain Tumor Detection and Classification with Feed Forward Back Propag...IRJET- Brain Tumor Detection and Classification with Feed Forward Back Propag...
IRJET- Brain Tumor Detection and Classification with Feed Forward Back Propag...
 
Fuzzy c-means
Fuzzy c-meansFuzzy c-means
Fuzzy c-means
 
Brain Tumor Extraction from T1- Weighted MRI using Co-clustering and Level Se...
Brain Tumor Extraction from T1- Weighted MRI using Co-clustering and Level Se...Brain Tumor Extraction from T1- Weighted MRI using Co-clustering and Level Se...
Brain Tumor Extraction from T1- Weighted MRI using Co-clustering and Level Se...
 
Optimizing Problem of Brain Tumor Detection using Image Processing
Optimizing Problem of Brain Tumor Detection using Image ProcessingOptimizing Problem of Brain Tumor Detection using Image Processing
Optimizing Problem of Brain Tumor Detection using Image Processing
 
BRAIN TUMOR CLASSIFICATION IN 3D-MRI USING FEATURES FROM RADIOMICS AND 3D-CNN...
BRAIN TUMOR CLASSIFICATION IN 3D-MRI USING FEATURES FROM RADIOMICS AND 3D-CNN...BRAIN TUMOR CLASSIFICATION IN 3D-MRI USING FEATURES FROM RADIOMICS AND 3D-CNN...
BRAIN TUMOR CLASSIFICATION IN 3D-MRI USING FEATURES FROM RADIOMICS AND 3D-CNN...
 
DIRECTIONAL CLASSIFICATION OF BRAIN TUMOR IMAGES FROM MRI USING CNN-BASED DEE...
DIRECTIONAL CLASSIFICATION OF BRAIN TUMOR IMAGES FROM MRI USING CNN-BASED DEE...DIRECTIONAL CLASSIFICATION OF BRAIN TUMOR IMAGES FROM MRI USING CNN-BASED DEE...
DIRECTIONAL CLASSIFICATION OF BRAIN TUMOR IMAGES FROM MRI USING CNN-BASED DEE...
 
Paper id 28201445
Paper id 28201445Paper id 28201445
Paper id 28201445
 
BRAINREGION.pptx
BRAINREGION.pptxBRAINREGION.pptx
BRAINREGION.pptx
 
Application of Hybrid Genetic Algorithm Using Artificial Neural Network in Da...
Application of Hybrid Genetic Algorithm Using Artificial Neural Network in Da...Application of Hybrid Genetic Algorithm Using Artificial Neural Network in Da...
Application of Hybrid Genetic Algorithm Using Artificial Neural Network in Da...
 

More from Dr.NAGARAJAN. S

EEE graduates career options
EEE graduates career optionsEEE graduates career options
EEE graduates career optionsDr.NAGARAJAN. S
 
Induction motor fed by fault tolerant inverter fed induction motor drive
Induction motor fed by fault tolerant inverter fed induction motor driveInduction motor fed by fault tolerant inverter fed induction motor drive
Induction motor fed by fault tolerant inverter fed induction motor driveDr.NAGARAJAN. S
 
An overview of sustainable path way for the global energy transition
An overview of sustainable path way for the global energy transitionAn overview of sustainable path way for the global energy transition
An overview of sustainable path way for the global energy transitionDr.NAGARAJAN. S
 
DETECTION OF INTERTURN FAULT IN THREE PHASE SQUIRREL CAGE INDUCTION MOTOR USI...
DETECTION OF INTERTURN FAULT IN THREE PHASE SQUIRREL CAGE INDUCTION MOTOR USI...DETECTION OF INTERTURN FAULT IN THREE PHASE SQUIRREL CAGE INDUCTION MOTOR USI...
DETECTION OF INTERTURN FAULT IN THREE PHASE SQUIRREL CAGE INDUCTION MOTOR USI...Dr.NAGARAJAN. S
 
curricular and cocurricular event details during lockdown period
curricular and cocurricular event details during lockdown periodcurricular and cocurricular event details during lockdown period
curricular and cocurricular event details during lockdown periodDr.NAGARAJAN. S
 
PhD viva voce Presentation
PhD viva voce PresentationPhD viva voce Presentation
PhD viva voce PresentationDr.NAGARAJAN. S
 
Detection of Broken Bars in Three Phase Squirrel Cage Induction Motor using F...
Detection of Broken Bars in Three Phase Squirrel Cage Induction Motor using F...Detection of Broken Bars in Three Phase Squirrel Cage Induction Motor using F...
Detection of Broken Bars in Three Phase Squirrel Cage Induction Motor using F...Dr.NAGARAJAN. S
 
Hybrid Green Energy Systems for Uninterrupted Electrification
Hybrid Green Energy Systems for Uninterrupted ElectrificationHybrid Green Energy Systems for Uninterrupted Electrification
Hybrid Green Energy Systems for Uninterrupted ElectrificationDr.NAGARAJAN. S
 
Current mode controlled fuzzy logic based inter leaved cuk converter SVM inve...
Current mode controlled fuzzy logic based inter leaved cuk converter SVM inve...Current mode controlled fuzzy logic based inter leaved cuk converter SVM inve...
Current mode controlled fuzzy logic based inter leaved cuk converter SVM inve...Dr.NAGARAJAN. S
 
Detection and analysis of eccentricity
Detection and analysis of eccentricityDetection and analysis of eccentricity
Detection and analysis of eccentricityDr.NAGARAJAN. S
 

More from Dr.NAGARAJAN. S (20)

EEE graduates career options
EEE graduates career optionsEEE graduates career options
EEE graduates career options
 
Induction motor fed by fault tolerant inverter fed induction motor drive
Induction motor fed by fault tolerant inverter fed induction motor driveInduction motor fed by fault tolerant inverter fed induction motor drive
Induction motor fed by fault tolerant inverter fed induction motor drive
 
An overview of sustainable path way for the global energy transition
An overview of sustainable path way for the global energy transitionAn overview of sustainable path way for the global energy transition
An overview of sustainable path way for the global energy transition
 
DETECTION OF INTERTURN FAULT IN THREE PHASE SQUIRREL CAGE INDUCTION MOTOR USI...
DETECTION OF INTERTURN FAULT IN THREE PHASE SQUIRREL CAGE INDUCTION MOTOR USI...DETECTION OF INTERTURN FAULT IN THREE PHASE SQUIRREL CAGE INDUCTION MOTOR USI...
DETECTION OF INTERTURN FAULT IN THREE PHASE SQUIRREL CAGE INDUCTION MOTOR USI...
 
Ph.D Abstract
Ph.D Abstract  Ph.D Abstract
Ph.D Abstract
 
curricular and cocurricular event details during lockdown period
curricular and cocurricular event details during lockdown periodcurricular and cocurricular event details during lockdown period
curricular and cocurricular event details during lockdown period
 
PhD viva voce Presentation
PhD viva voce PresentationPhD viva voce Presentation
PhD viva voce Presentation
 
Ph.D thesis sample
Ph.D thesis samplePh.D thesis sample
Ph.D thesis sample
 
Bus Building Algorithm
Bus Building AlgorithmBus Building Algorithm
Bus Building Algorithm
 
NBA Presentation-EEE
NBA Presentation-EEENBA Presentation-EEE
NBA Presentation-EEE
 
Power System Analysis
Power System AnalysisPower System Analysis
Power System Analysis
 
Power System Analysis
Power System AnalysisPower System Analysis
Power System Analysis
 
Power System Analysis
Power System AnalysisPower System Analysis
Power System Analysis
 
Power System Analysis
Power System AnalysisPower System Analysis
Power System Analysis
 
Power System Analysis
Power System AnalysisPower System Analysis
Power System Analysis
 
Power System Analysis
Power System Analysis Power System Analysis
Power System Analysis
 
Detection of Broken Bars in Three Phase Squirrel Cage Induction Motor using F...
Detection of Broken Bars in Three Phase Squirrel Cage Induction Motor using F...Detection of Broken Bars in Three Phase Squirrel Cage Induction Motor using F...
Detection of Broken Bars in Three Phase Squirrel Cage Induction Motor using F...
 
Hybrid Green Energy Systems for Uninterrupted Electrification
Hybrid Green Energy Systems for Uninterrupted ElectrificationHybrid Green Energy Systems for Uninterrupted Electrification
Hybrid Green Energy Systems for Uninterrupted Electrification
 
Current mode controlled fuzzy logic based inter leaved cuk converter SVM inve...
Current mode controlled fuzzy logic based inter leaved cuk converter SVM inve...Current mode controlled fuzzy logic based inter leaved cuk converter SVM inve...
Current mode controlled fuzzy logic based inter leaved cuk converter SVM inve...
 
Detection and analysis of eccentricity
Detection and analysis of eccentricityDetection and analysis of eccentricity
Detection and analysis of eccentricity
 

Recently uploaded

APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSKurinjimalarL3
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
e-mail: kpindiraphd@gmail.com

© Springer Nature Singapore Pte Ltd. 2021
S. Patnaik et al. (eds.), Advances in Machine Learning and Computational Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-5243-4_82
and computed tomography (CT) provide information about dense structures such as bones, whereas magnetic resonance imaging (MRI) provides information about soft tissues, and positron emission tomography (PET) provides information about metabolic activity taking place within the body. Hence, it is necessary to integrate information from distinct images into a single image for a comprehensive view. Multimodal medical image fusion helps to diagnose disease and also reduces storage cost by amalgamating numerous source images into a single image.

Image fusion can be accomplished at three levels: pixel level, feature level and decision level [3]. Pixel-level image fusion is usually performed by combining pixel values from the individual source images. Feature-level image fusion performs fusion only after segmenting the source images into features such as pixel intensity and edges. Decision-level image fusion is a high-level fusion strategy which utilizes fuzzy rules, heuristic algorithms, etc. In this paper, image fusion has been performed at the pixel level owing to its advantages of easy implementation and improved efficiency.

Image fusion techniques are partitioned into two major categories: spatial-domain and frequency-domain techniques [4, 5]. Spatial-domain techniques include the averaging and maximum fusion algorithms and principal component analysis (PCA). Frequency-domain techniques include pyramid-based decomposition and wavelet transforms.

The averaging method is a simple image fusion strategy where each output pixel is determined by calculating the average value of the corresponding source pixels [6]. Though easy to understand, averaging yields an output with low contrast and a washed-out appearance. The choose-maximum fusion rule selects the maximum pixel value from the source images as the output, which in turn yields a highly focused output [7].
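A minimal sketch of these two spatial-domain rules in Python (an illustration only, assuming NumPy and two pre-registered grayscale source images of equal size):

```python
import numpy as np

def fuse_average(img1, img2):
    """Averaging rule: each output pixel is the mean of the source pixels."""
    return (img1.astype(float) + img2.astype(float)) / 2.0

def fuse_maximum(img1, img2):
    """Choose-maximum rule: each output pixel is the larger source pixel."""
    return np.maximum(img1, img2)
```

On CT/MRI pairs, averaging tends to dull the contrast of structures visible in only one modality, while choose-maximum keeps the brighter, more salient pixel from either source.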
PCA is a vector space transformation methodology which reduces multi-dimensional datasets to lower dimensions; it is a powerful tool for analyzing graphical data, since interpretation is tedious for data of higher dimensions.

Frequency-domain strategies are further categorized into pyramidal methods and wavelet-based transforms. A pyramidal method consists of bands of the source images, each level usually smaller than its predecessor, so that higher levels concentrate on higher spatial frequencies. Pyramidal methods are further classified into [8] the Gaussian pyramid method [9], the Laplacian pyramid (LP) method, the ratio-of-low-pass pyramid method and the morphological pyramid method. Each level of a Gaussian pyramid is a low-pass-filtered version of its predecessor, while each level of an LP is a bandpass-filtered version of its predecessor. In the ratio-of-low-pass pyramid, the image at each level is the ratio between two successive levels of the Gaussian pyramid. The morphological pyramid strategy uses morphological filters to extract the details of an image without causing adverse effects.

Wavelet image fusion techniques are further classified into [10] the discrete wavelet transform (DWT) [11], the stationary wavelet transform (SWT) [12] and the non-subsampled contourlet transform (NSCT). The DWT decomposes an image into low- and high-frequency components: low-frequency components provide approximation information, while high-frequency components provide the detailed information contained within an image. The DWT strategy suffers from a major drawback called shift
variance. In order to provide a shift-invariant output, the stationary wavelet transform (SWT) was introduced. This technique employs an upsampling strategy so that the decomposed image has the same size as the input image. Though these wavelet transforms perform well in general, they fail along edges. To capture information along edges, the non-subsampled contourlet transform (NSCT) was proposed, a geometric analysis procedure which improves localization, shift invariance, etc. In spite of these advantages, the NSCT technique requires proper filter tuning and proper reconstruction filters for any particular application [13].

To overcome the above disadvantages, neural network strategies were introduced. The pulse-coupled neural network (PCNN) is a neural network technique derived from the synchronous pulse emergence in the cerebral visual cortex of some mammals, and it is one of the most commonly used image fusion strategies for medical diagnosis and treatment [14]. Initially, PCNN variables were tuned by manual adjustment. Lu et al. [14] utilized a distinct strategy in which the PCNN was optimized by a multi-swarm fruit fly optimization algorithm (MFOA), with a hybrid fitness function based on quality assessment that enhances the performance of the MFOA. Experimental results illustrate the effectiveness of that strategy.

Gai et al. [15] put forward a novel image fusion technique which utilized the PCNN and the non-subsampled shearlet transform (NSST). The images were first decomposed by the NSST into low- and high-frequency components. The low-frequency components were fused using an improved sparse representation method.
In this technique, the detailed information is extracted using the Sobel operator, while information is preserved using a guided filter. The high-frequency components were fused using the PCNN, and finally the inverse transform is applied to yield the fused output. The effectiveness of the method was validated against seven different fusion methods, and the authors also fused information from three distinct modalities to justify its superiority. Subjective and objective analyses illustrate the effectiveness of the method.

A multimodal fusion approach which utilizes the NSST and a simplified pulse-coupled neural network model (S-PCNN) was put forward by Hajer et al. [16]. The images were first transformed into YIQ components and then decomposed into low- and high-frequency components using the NSST. The low-frequency components were fused using weighted regional standard deviation (SD) and local energy, the high-frequency components were fused using the S-PCNN, and finally the inverse NSST and inverse YIQ transforms were applied. The results illustrate that this strategy performs well quantitatively in terms of measures such as mutual information, entropy, SD, fusion quality and spatial frequency.

Jia et al. [17] put forward a novel framework which utilized an improved adaptive PCNN. The PCNN is a technique that emerged from models of the visual cortex of mammals and has proved to be very suitable for image fusion. The source images
were initially fed to the parallel PCNN, and the gray value of the image was utilized to trigger the PCNN. Meanwhile, the sum-modified Laplacian was chosen as the evaluation function, and the linking strength of each neuron in the PCNN was evaluated. An ignition map was generated after ignition of the PCNN, and the clearer parts of the images were chosen to yield the fused image. Quantitative and qualitative analyses illustrated that this strategy outperformed the existing strategies.

Wang et al. [18] put forward an image fusion technique which utilized the discrete wavelet transform (DWT) and a dual-channel pulse-coupled neural network (PCNN). For fusing the low-frequency coefficients, the choose-maximum fusion rule was utilized, while the spatial frequency of the high-frequency components was chosen to motivate the dual-channel PCNN. Finally, the inverse DWT was applied to yield the fused image. Visual and quantitative analyses illustrated the superiority of this approach over other image fusion strategies.

Arif et al. [19] observed that existing image fusion strategies lacked the capability to produce a fused image preserving the complete information content of the individual source images, and proposed a combination of the curvelet transform and a genetic algorithm (GA) to yield a fused image. The curvelet transform helped preserve information along the edges, and the genetic algorithm helped acquire the fine details from the source images. Quantitative analysis demonstrated that this strategy outperformed the existing baseline strategies.

Fu et al. [20] put forward a novel image fusion approach which utilized the non-subsampled contourlet transform (NSCT) and the pulse-coupled neural network (PCNN) jointly. High- and low-frequency coefficients were processed using a modified PCNN.
The degree of matching between the input images is utilized in the fusion rules. Finally, the inverse NSCT is employed to reconstruct the fused image. Experimental analysis illustrated that this strategy outperformed wavelet, contourlet and traditional PCNN methods in terms of mutual information content. The strategy also preserved edge and texture information, thereby including more information in the fused image. The authors concluded that the selection of parameters for image fusion merits deeper research.

Image fusion is the strategy in which the input from multiple images is combined to yield an efficient fused image. Lacewell et al. [21] put forward a strategy which utilized a combination of the discrete wavelet transform and a genetic algorithm: the DWT was utilized to extract features, while the GA was utilized to yield an enhanced output. Quantitative and comparative analyses illustrated that this strategy produced superior results in terms of mutual information and root mean square error.

Wang et al. [22] put forward a novel image fusion approach which utilizes the pulse-coupled neural network and the wavelet-based contourlet transform (WBCT). In order to motivate the PCNN, the spatial high frequency in the WBCT is utilized, and the high-frequency coefficients are selected using a weighted method of firing times. Wavelet transform strategies perform well at isolated discontinuities but not along curved edges, especially for 3D images. To overcome this drawback, the PCNN has been utilized, which performs better for higher-dimensional images. Experimental
analysis illustrated that WBCT-PCNN performed better in both subjective and objective analyses.

From the literature survey, it is inferred that the combination of two distinct image fusion techniques provides better results in terms of both quality and quantity. Though a solution may be obtained by utilizing any image fusion strategy, an optimal solution can be obtained by utilizing a genetic algorithm (GA). Hence, an attempt has been made to integrate the advantages of both the PCNN and the GA to yield an output image that is superior in both qualitative and quantitative analyses.

The rest of the paper is organized as follows: Sect. 2 provides a detailed explanation of the pulse-coupled neural network (PCNN), Sect. 3 describes the genetic algorithm, Sect. 4 presents the proposed methodology, Sect. 5 provides the proposed algorithm, and the subsequent sections present the qualitative and quantitative analyses and the conclusion.

2 Pulse-Coupled Neural Network (PCNN)

A new type of neural network, distinct from traditional neural network strategies, is the pulse-coupled neural network (PCNN) [23]. The PCNN is developed from the synchronous pulse emergence in the cerebral visual cortex of some mammals. A collection of neurons is connected to form a PCNN. Each neuron corresponds to a pixel, whose intensity is treated as the external stimulus. Every neuron interfaces with other neurons in such a manner that a single-layer, two-dimensional array of PCNN neurons is constituted. When the linking coefficient beta is zero, every neuron pulses naturally due to its external stimulus. When beta is nonzero, the neurons are mutually coupled: when a neuron fires, its output contributes to the adjacent neurons, leading them to pulse before their natural period. The output of the captured neurons influences the other neurons associated with them to change their internal activity and outputs.
When the iteration terminates, the output of every stage is added to obtain an aggregate yield known as the firing map.

There are two primary issues in the existing image fusion strategies [24]. First, pyramid and wavelet transform strategies treat each pixel individually rather than considering the relationships between pixels. Further, the images must be fully registered before fusion. To overcome these drawbacks, the PCNN has been proposed [25]. Fusion usually takes place in one of two major ways: either by choosing the better pixel value, or by outlining a major-minor network for the different inputs and choosing the yield of the first PCNN as the fusion result.

The pulse-coupled neural network comprises three compartments: the receptive field, the linking (modulation) part and the pulse generator. The receptive field is an essential part which receives input signals from neighboring neurons and external sources. It consists of two internal channels: the feeding compartment (F) and the linking compartment (L). Compared to the feeding compartment, the linking inputs have a quicker characteristic response time constant. In order to
generate the total internal activity (U), the biased and weighted linking inputs are multiplied with the feeding inputs; the net result constitutes the linking/modulation part. Finally, the pulse generator comprises a step function generator and a threshold signal generator.

The ability of the neurons in the network to respond to an external stimulus is known as firing, which occurs when the internal activity of a neuron exceeds a certain threshold value. Initially, the output of the neuron is set to 1. The threshold then starts decaying until the next internal activity of the neuron. The generated output is iteratively fed back with a delay of a single iteration. As soon as the threshold exceeds the internal activity (U), the output is reset to zero. A temporal series of pulse outputs is generated by the PCNN after n iterations, which carries the data about the input images. The input stimulus, which corresponds to the pixel's intensity, is given to the feeding compartment. The pulse output of the PCNN helps to make a decision on the content of the image.

Initially, a double-precision operation is performed on the acquired input CT and PET images. In order to reduce the memory requirements of an image, unsigned integers (uint8 or uint16) can be utilized; an image whose data matrix has class uint8 or uint16 is known as an 8-bit or 16-bit image, respectively. Though the colors emitted cannot be differentiated in a grayscale image, the aggregate amount of emitted light for each pixel can be partitioned, since a small amount of light corresponds to dark pixels, while a larger amount corresponds to bright pixels. On conversion from RGB to grayscale, the RGB values of each pixel are reduced to a single value which reflects the brightness of the pixel. Normalization converts the scale of pixel intensity values and is also known as contrast stretching or histogram stretching.
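The preprocessing and firing behavior described above can be sketched as follows. This is a deliberately simplified model: the parameter values (beta, the decay rate and the threshold magnitude), the wrap-around border handling and the multiplicative threshold decay are assumptions for illustration, not the chapter's tuned settings.

```python
import numpy as np

def normalize(img):
    """Contrast stretching: I_norm = (I_abs - I_min) / (I_max - I_min)."""
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min())

def pcnn_fire_map(img, beta=0.2, decay=0.8, v_theta=5.0, n_iter=8):
    """Accumulate a firing map from a simplified PCNN.

    F: feeding input (pixel intensity); L: linking input (sum of the
    previous pulses of the 8 neighbors); U = F * (1 + beta * L) is the
    internal activity; theta is a dynamic threshold that decays each
    iteration and is raised wherever a neuron fires.
    """
    F = normalize(img)
    Y = np.zeros_like(F)          # pulse output of the previous iteration
    theta = np.ones_like(F)       # dynamic threshold
    fire_map = np.zeros_like(F)   # accumulated firing counts
    for _ in range(n_iter):
        # linking input: 8-neighbor sum of previous pulses
        # (wrap-around borders, for brevity)
        L = sum(np.roll(np.roll(Y, dy, 0), dx, 1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0))
        U = F * (1.0 + beta * L)             # modulation
        Y = (U > theta).astype(float)        # fire where activity exceeds threshold
        theta = theta * decay + v_theta * Y  # decay, then raise where fired
        fire_map += Y
    return fire_map
```

The accumulated `fire_map` plays the role of the firing map mentioned above: brighter pixels fire earlier and more often, and coupled neighbors with similar intensity are pulled into firing together.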
Normalization can be determined using the following formula:

I_norm = (I_abs − I_min) / (I_max − I_min)

where abs denotes the absolute (original) pixel value, min the minimum value and max the maximum value.

Each neuron in the firing pulse model comprises a receptive field, a modulation field and a pulse generator. Two features are necessary to fire the pulse generator: the spiking cortex model (SCM) and the synaptic weight matrix. The SCM has been demonstrated to comply with the Weber-Fechner law, since it has higher sensitivity to low-intensity stimuli and lower sensitivity to high-intensity stimuli. In order to improve the performance and make the output reachable, the synaptic weight matrix is applied to the linking field and a sigmoid function is applied to the firing pulse. The PCNN possesses a neuron capture
property, which causes the firing of any neuron to make nearby neurons of similar luminance fire as well. This property makes information coupling and transmission occur automatically, which makes the PCNN well suited for image fusion. In this model, one of the original images is randomly chosen as the input to the main PCNN network and the other image as the input to the subsidiary network. The firing information of the subsidiary network is transferred to the main PCNN network with the help of the information coupling and transmission properties; by doing so, image fusion can be carried out. When a neuron fires, the firing information of the subsidiary network is communicated to the adjacent neurons and to the neurons of the main PCNN network. The capturing property of the PCNN makes it suitable for image fusion. Eventually, the output obtained is transformed to uint8 format, and the fused image is obtained.

3 Genetic Algorithm

The genetic algorithm (GA) is a heuristic search algorithm used to solve optimization problems [26]. The essence of the GA is to simulate the evolution of nature, i.e., a process in which a species experiences selective evolution and genetic inheritance. At first, a random population is formed; then, through mutual competition and genetic development, the population goes through the following operations: selection, crossover, mutation, etc. The subgroups with better fitness survive and form a new generation. The process cycles continuously until the fittest subgroups are formed; the surviving groups are those best adapted to the living conditions. Thus, the genetic algorithm is actually a type of random search algorithm. Moreover, it is nonlinear and parallelizable, so it has great advantages compared with traditional optimization algorithms.
Four entities help to define a GA problem: the representation of the candidate solutions, the fitness function, the genetic operators that assist in finding the optimal or near-optimal solution, and specific knowledge of the problem, such as its variables [27]. The GA utilizes simple representation, reproduction and diversity strategies. Optimization with a GA is performed through the natural exchange of genetic material between parents: offspring are formed from parent genes, the fitness of the offspring is evaluated, and only the best-fitting individuals are allowed to breed.

Image fusion based on the GA consists of three types of genetic operations: crossover, mutation and replication [28]. The procedure is as follows:

1. Encode the unknown image weight and define the objective function f(xi). The sum-modified Laplacian (SML) is used as the fitness function of the GA.
2. N stands for the initial population size of fusion weights, Pm represents the probability of mutation and Pc the probability of crossover.
3. Randomly generate a feature array of length L to form the initial fusion weight group.
4. Follow the steps below and conduct the iterative operation until the termination condition is achieved:
(a) Calculate the fitness of each individual in the group.
(b) On the basis of the fitness, Pc and Pm, apply crossover, mutation and replication.
5. The best-fitting individuals in the surviving subgroup are selected as the result, yielding the optimal fusion weight.

From the steps above, the optimal fusion weights of the images to be fused can be obtained after several iterations. However, since this method does not take into account the relationship between the images to be fused, a lot of time is wasted searching for the fusion weights in the clear focal regions, which leads to low accuracy of the fusion.

4 Optimization of Image Fusion Using PCNN and GA

A new image fusion algorithm using a pulse-coupled neural network (PCNN) with genetic algorithm (GA) optimization has been proposed, which uses the firing frequency of the neurons to carry out image fusion in the PCNN. Since the result of image fusion is affected by the neuron parameters, this algorithm depends only on the image gradient and is independent of the other parameters. A PCNN is built in each high-frequency sub-band to simulate the biological activity of the human visual system. Compared with traditional algorithms, where the linking strength of each neuron is set constant or continuously changed according to the features of each pixel, here the linking strength as well as the linking range is determined by the prominence of the corresponding low-frequency coefficients, which not only reduces the calculation of parameters but also flexibly makes use of the global features of the images. Registration has been conducted on the images to be fused; since the image focusing differs, the fusion coefficients (Q) of the two pixels in the same position have a certain hidden relationship [29].
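The GA procedure of Sect. 3 can be sketched as below for a single fusion weight Q. The population size, the crossover/mutation rates and the blend-style operators are illustrative assumptions; the chapter's actual fitness is the sum-modified Laplacian of the fused result, for which a toy function is substituted here.

```python
import random

def ga_optimize(fitness, n_pop=20, pc=0.8, pm=0.1, n_gen=50, seed=1):
    """Search for a fusion weight Q in (0, 1) maximizing `fitness(Q)`.

    Selection keeps the fitter half of the population, crossover blends
    two parents, and mutation perturbs a weight.  Because the weights
    are constrained so that they sum to 1, only one weight per pixel
    needs to be modeled.
    """
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(n_pop)]
    for _ in range(n_gen):
        pop.sort(key=fitness, reverse=True)        # selection: rank by fitness
        survivors = pop[: n_pop // 2]              # fitter half breeds
        children = []
        while len(survivors) + len(children) < n_pop:
            p1, p2 = rng.sample(survivors, 2)
            q = (p1 + p2) / 2 if rng.random() < pc else p1  # crossover
            if rng.random() < pm:                           # mutation
                q = min(0.999, max(0.001, q + rng.gauss(0, 0.1)))
            children.append(q)
        pop = survivors + children
    return max(pop, key=fitness)
```

For example, with the toy fitness `lambda q: -(q - 0.7) ** 2` the search settles near Q = 0.7; in the chapter's setting the fitness would instead score the sharpness of the image fused with weight Q.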
Moreover, the constraint on the fusion coefficients is Q1 + Q2 = 1 with Q1 > 0 and Q2 > 0, so modeling either coefficient is enough. Therefore, in the process of wavelet transformation, a PCNN is built for the fusion weight coefficient (Q) of each pixel of each layer, in hierarchical order from high to low. Q stands for the fusion coefficient and S stands for the hidden state. In this model, S takes one of three states, i.e., S ∈ {1, 2, 3}. When the pixel is in the focused region of image 1, or when it is located much nearer the focal plane of image 1 than that of image 2, S is set to 1. When the pixel is in the focused region of image 2, or when it is located much nearer the focal plane of image 2 than that of image 1, S is set to 3. If the pixel is not in the focused region of either image and there is no obvious difference between its distances from the focal planes of image 1 and image 2, S is set to 2.
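The three-state assignment can be expressed as a small decision rule. The sketch below is an assumption for illustration: it classifies directly from an already-estimated coefficient Q using the 0.9/0.1 cut-offs given in the text, leaving the underlying focus measures abstract.

```python
def hidden_state(q):
    """Map a fusion coefficient Q in [0, 1] to a hidden state S.

    S = 1: the pixel clearly belongs to the focused region of image 1
           (Q near 1, i.e., image 1 dominates the fusion),
    S = 3: the pixel clearly belongs to the focused region of image 2
           (Q near 0),
    S = 2: no obvious difference between the two images.
    """
    if q > 0.9:
        return 1
    if q < 0.1:
        return 3
    return 2
```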
In addition, if the fusion coefficient in the neighboring region is greater than 0.9 or less than 0.1, then S is set to 1 or 3, respectively. The state transfer matrices from the parent node Si to the sub-node Si+1 are defined as follows. Since the low-frequency details in a clear area are richer than those in an unclear area, the method obtains the fusion weights from high scale to low scale by applying GA after the wavelet transformation. Meanwhile, a PCNN is constructed for each fusion weight in each layer, and its hidden states are figured out layer by layer with the help of the fusion weights calculated by GA. A PCNN is a single-layered, two-dimensional, laterally connected neural network of pulse-coupled neurons in which each neuron receives a feeding input and a linking input. The feeding input is the primary input from the neuron's receptive area, which consists of the neighboring pixels of the corresponding pixel in the input image. The linking input is the secondary input from lateral connections with neighboring neurons. The difference between these inputs is that the feeding connections have a slower characteristic response time constant than the linking connections. Guided by the hidden states of the fusion weights in the upper layer, the values in the clear area of the next layer are obtained directly without GA. In this way, the population size in the GA process is reduced, which contributes considerably to improving the precision of calculation in the same amount of time.

5 Algorithm

Step 1: Apply PCNN to the N layers of the image and compute the fusion coefficients of Layer N using GA. Set the variate i = 0.
Step 2: Search the neighboring region of Layer N − i for pixels whose fusion coefficients are greater than 0.9 or less than 0.1. Set Si = 1 or 3, respectively.
Step 3: Search Layer N − (i + 1) for the pixels whose parent node has Si = 1 or 3; the Qs of these pixels are set to 1 or 0, and accordingly Si+1 is set to 1 or 3.
Step 4: Search Layer N − (i + 1) for the pixels whose parent node has Si = 2, and apply GA to work out the fusion coefficients of these pixels. Set their Si+1 to 2, set the variate i = i + 1, and then go back to Step 2 and repeat the operation.
Step 5: Repeat Steps 2, 3 and 4, working out the fusion coefficients until the last layer is reached.
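The five steps above can be sketched as a coarse-to-fine pass over the decomposition layers. This is a hedged outline, not the paper's code: `ga_weight` is a hypothetical stand-in for the GA search of one coefficient, layers are stored coarsest-first as 2-D lists, and the dyadic parent indexing (r//2, c//2) is an assumption about the wavelet layout.

```python
def fuse_hierarchy(layers, ga_weight):
    """Return per-layer fusion coefficients Q and hidden states S.

    layers: Layer N (coarsest) first; each layer is a 2-D list of
    per-pixel features consumed by the hypothetical ga_weight().
    """
    qs, ss = [], []
    for i, layer in enumerate(layers):               # Layer N down to the last layer
        rows, cols = len(layer), len(layer[0])
        q = [[0.0] * cols for _ in range(rows)]
        s = [[2] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                if i == 0:
                    q[r][c] = ga_weight(layer[r][c])       # Step 1: GA on Layer N
                else:
                    ps = ss[i - 1][r // 2][c // 2]         # parent's hidden state
                    if ps == 1:
                        q[r][c] = 1.0                      # Step 3: clearly image 1
                    elif ps == 3:
                        q[r][c] = 0.0                      # Step 3: clearly image 2
                    else:
                        q[r][c] = ga_weight(layer[r][c])   # Step 4: undecided, run GA
                # Step 2: threshold the coefficient into a hidden state.
                if q[r][c] > 0.9:
                    s[r][c] = 1
                elif q[r][c] < 0.1:
                    s[r][c] = 3
        qs.append(q)                                  # Step 5: continue to the last layer
        ss.append(s)
    return qs, ss
```

Only the pixels whose parents are in state 2 ever reach the GA, which is how the population size, and hence the search time, shrinks at each finer layer.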
6 Results and Discussion

6.1 Quantitative Analysis

Percentage Residual Difference (PRD): The percentage residual difference reflects the degree of deviation between the source image and the fused image [24]. The lower the PRD, the higher the quality of the image. On comparing the PRD values of PCNN and GA with those of PCNN alone, it is inferred that PCNN and GA offer better results for all 16 datasets (Table 1).

Table 1 Objective analysis of PCNN and hybridization of PCNN and GA (image grid for datasets 1–16; columns: CT, PET, PCNN gradient, PCNN & GA)
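PRD, together with the RMSE, PSNR and entropy metrics discussed below, can be computed as in the following sketch for 8-bit grayscale images given as flat pixel lists. The exact normalisations follow the common textbook definitions and may differ slightly from the implementations behind Tables 1–3.

```python
import math

def rmse(src, fused):
    """Root mean square error between source and fused pixels."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(src, fused)) / len(src))

def prd(src, fused):
    """Percentage residual difference: residual energy relative to source energy."""
    num = sum((a - b) ** 2 for a, b in zip(src, fused))
    den = sum(a ** 2 for a in src)
    return 100.0 * math.sqrt(num / den)

def psnr(src, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB for a given peak pixel value."""
    e = rmse(src, fused)
    return float("inf") if e == 0 else 20.0 * math.log10(peak / e)

def entropy(img, levels=256):
    """Shannon entropy (bits) of the image's gray-level histogram."""
    hist = [0] * levels
    for v in img:
        hist[int(v)] += 1
    n = float(len(img))
    return -sum((h / n) * math.log2(h / n) for h in hist if h)
```

Lower PRD and RMSE, and higher PSNR and entropy, indicate a better fusion, which is the direction of the comparisons reported in the tables.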
Root Mean Square Error (RMSE): RMSE is a measure of the difference between the predicted value and the actual value [25]. The lower the RMSE, the higher the quality of the image. On comparing the RMSE values of PCNN and GA with those of PCNN alone, it is inferred that PCNN and GA offer better results for all 16 datasets (Table 2).

Peak Signal-to-Noise Ratio (PSNR): PSNR is the ratio between the maximum possible power of a signal and the power of the corrupting noise affecting the image. The quality of an image is better when the PSNR is high. On comparing the PSNR values of PCNN and GA with those of PCNN alone, it is inferred that PCNN and GA offer better results for all 16 datasets (Table 3).

Entropy: Entropy reflects the amount of information content available in the fused image. The higher the entropy, the higher the quality of the fused image. On comparing the entropy values of PCNN and GA with those of PCNN alone, it is inferred that PCNN and GA offer better results for almost all datasets.

7 Conclusion and Future Scope

A new image fusion algorithm using a pulse-coupled neural network (PCNN) with genetic algorithm (GA) optimization has been proposed, which uses the firing frequency of neurons to perform the image fusion in the PCNN. The performance of the proposed algorithm has been evaluated using sixteen sets of computed tomography (CT) and positron emission tomography (PET) images obtained from Bharat Scans. Qualitative and quantitative analyses demonstrate that "optimization of image fusion using pulse-coupled neural network (PCNN) and genetic algorithm (GA)" outperforms the PCNN technique.

The proposed strategy can be extended to merge color images, since color carries remarkable information and our eyes can observe even minute variations in color. With the emerging advances in remote airborne sensors, ample and assorted information is accessible in the fields of resource investigation, environmental monitoring and disaster prevention.
The existing strategies discussed in the literature survey introduce distortion in color. The proposed algorithm can be extended to fuse remote sensing images obtained from optical, thermal, multispectral and hyperspectral sensors without any color distortion. Multimodal medical image fusion has been implemented with static images in the proposed work. At present, the fusion of multimodal video sequences generated by a network of multimodal sources is becoming progressively essential for surveillance, navigation and object tracking applications. The integral data provided by these sensors should be merged to yield a precise estimate so as to serve more efficiently in distinct tasks such as detection, recognition and tracking. From the fused output, it is possible to produce a precise representation of the recognized scene, which in turn finds use in a variety of applications.
Table 2 Comparison analysis of PRD and RMSE for PCNN and hybridization of PCNN and GA

Percentage residual difference (PRD)

Datasets   PCNN (gradient)   Hybridization of PCNN and GA
1          0.4273            3.3080e−008
2          0.3893            4.8480e−008
3          0.4878            4.8667e−008
4          0.4283            1.5920e−008
5          0.3807            5.0838e−008
6          0.4216            7.4327e−008
7          0.4041            7.2718e−008
8          0.3904            8.0547e−008
9          0.1121            6.9992e−008
10         46.6390           5.1606e−008
11         0.1795            6.5642e−008
12         6.9132            4.8696e−008
13         0.1654            4.4340e−008
14         1.1723            5.2005e−008
15         0.1393            6.8369e−008
16         1.5822e−004       7.0856e−008

Root mean square error (RMSE)

Datasets   PCNN (gradient)   Hybridization of PCNN and GA
1          0.0057            5.4255e−013
2          0.0054            6.9413e−013
3          0.0085            6.7392e−013
4          0.0094            2.2869e−013
5          0.0069            8.2856e−013
6          0.0091            8.7955e−013
7          0.0076            8.5858e−013
8          0.0070            8.2650e−013
9          0.0089            9.0324e−013
10         0.0098            7.6707e−013
11         0.0110            9.1334e−013
12         0.0085            8.2101e−013
13         0.0088            6.7181e−013
14         3.2809e−004       4.552e−013
15         0.0077            8.7183e−013
16         0.0049            5.7605e−013
Table 3 Comparison analysis of PSNR and entropy for PCNN and hybridization of PCNN and GA

Peak signal-to-noise ratio (PSNR)

Datasets   PCNN (gradient)   Hybridization of PCNN and GA
1          54.5123           55.7234
2          55.0001           57.2346
3          53.4523           54.8976
4          56.1234           57.1042
5          55.6321           57.1234
6          56.1235           57.8432
7          55.4567           56.7046
8          55.1732           56.5460
9          57.8432           58.4389
10         57.2341           58.8975
11         57.6574           59.1004
12         55.6978           57.9874
13         54.2054           55.5512
14         56.1800           58.1254
15         57.8358           58.2657
16         55.8526           57.2116

Entropy

Datasets   PCNN (gradient)   Hybridization of PCNN and GA
1          7.3120            8.1423
2          7.4250            8.4146
3          7.3690            8.0799
4          7.9854            8.3523
5          7.8453            8.0733
6          8.0001            8.0452
7          7.7785            8.3483
8          7.4567            8.4209
9          7.3001            8.2272
10         7.5254            7.6642
11         7.3001            7.3740
12         7.9784            8.1282
13         7.8546            8.1151
14         7.8945            8.2251
15         7.9000            8.0205
16         8.1234            8.6886
References

1. P. Hill, M. Ebrahim Al-Mualla, D. Bull, Perceptual image fusion using wavelets. IEEE Trans. Image Process. 26(3), 1076–1088
2. N. Mittal, et al., Decomposition & Reconstruction of Medical Images in Matlab Using Different Wavelet Parameters, in 1st International Conference on Futuristic Trend in Computational Analysis and Knowledge Management. IEEE (2015). ISSN 978-1-4799-8433-6/15
3. K.P. Indira, et al., Impact of Coefficient Selection Rules on the Performance of DWT Based Fusion on Medical Images, in International Conference on Robotics, Automation, Control and Embedded Systems (RACE). IEEE (2015)
4. Y. Yang, M. Ding, S. Huang, Y. Que, W. Wan, M. Yang, J. Sun, Multi-Focus Image Fusion via Clustering PCA Based Joint Dictionary Learning, vol. 5, pp. 16985–16997, Sept 2017
5. A. Ellmauthaler, C.L. Pagliari, et al., Image fusion using the undecimated wavelet transform with spectral factorization and non-orthogonal filter banks. IEEE Trans. Image Process. 22(3), 1005–1017 (2013)
6. V. Bhateja, H. Patel, A. Krishn, A. Sahu, Multimodal medical image sensor fusion framework using cascade of wavelet and contourlet transform domains. IEEE Sens. J. 15(12), 6783–6790 (2015)
7. B. Erol, M. Amin, Generalized PCA Fusion for Improved Radar Human Motion Recognition, in IEEE Radar Conference (RadarConf), Boston, MA, USA (2019), pp. 1–5
8. V.S. Petrovic, C.S. Xydeas, Gradient-based multi-resolution image fusion. IEEE Trans. Image Process. 13(2), 228–237 (2004)
9. P.J. Burt, E.H. Adelson, The Laplacian pyramid as a compact image code. IEEE Trans. Commun. 31(4), 532–540 (1983)
10. J. Tian, L. Chen, Adaptive multi-focus image fusion using a wavelet-based statistical sharpness measure. Signal Process. 92(9), 2137–2146 (2012)
11. M.D. Nandeesh, M. Meenakshi, A Novel Technique of Medical Image Fusion Using Stationary Wavelet Transform and Principal Component Analysis, in 2015 International Conference on Smart Sensors and Systems (IC-SSS), Bangalore (2015), pp. 1–5
12. G. Bhatnagar, Q.M.J. Wu, Z. Liu, Directive contrast based multimodal medical image fusion in NSCT domain. IEEE Trans. Multimedia 15(5), 1014–1024 (2013)
13. S. Das, M.K. Kundu, A neuro-fuzzy approach for medical image fusion. IEEE Trans. Biomed. Eng. 60(12), 3347–3353 (2013)
14. T. Lu, C. Tian, X. Kai, Exploiting quality-guided adaptive optimization for fusing multimodal medical images. IEEE Access 7, 96048–96059 (2019)
15. D. Gai, X. Shen, H. Cheng, H. Chen, Medical image fusion via PCNN based on edge preservation and improved sparse representation in NSST domain. IEEE Access 7, 85413–85429 (2019)
16. O. Hajer, O. Mourali, E. Zagrouba, Non-subsampled shearlet transform based MRI and PET brain image fusion using simplified pulse coupled neural network and weight local features in YIQ colour space. IET Image Proc. 12(10), 1873–1880 (2018)
17. Y. Jia, C. Rong, Y. Wang, Y. Zhu, Y. Yang, A Multi-Focus Image Fusion Algorithm Using Modified Adaptive PCNN Model, in 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD). IEEE (2016), pp. 612–617
18. N. Wang, W. Wang, An Image Fusion Method Based on Wavelet and Dual-Channel Pulse Coupled Neural Network, in 2015 IEEE International Conference on Progress in Informatics and Computing (PIC) (2015), pp. 270–274
19. M. Arif, N. Aniza Abdullah, S. Kumara Phalianakote, N. Ramli, M. Elahi, Maximizing Information of Multimodality Brain Image Fusion Using Curvelet Transform with Genetic Algorithm, in IEEE 2014 International Conference on Computer Assisted System in Health (CASH) (2014), pp. 45–51
20. L. Fu, L. Yifan, L. Xin, Image Fusion Based on Nonsubsampled Contourlet Transform and Pulse Coupled Neural Networks, in IEEE Fourth International Conference on Intelligent Computation Technology and Automation, vol. 2 (2011), pp. 180–183
21. C.W. Lacewell, M. Gebril, R. Buaba, A. Homaifar, Optimization of Image Fusion Using Genetic Algorithm and Discrete Wavelet Transform, in Proceedings of the IEEE 2010 National Aerospace and Electronics Conference (NAECON) (2010), pp. 116–121
22. X. Wang, L. Chen, Image Fusion Algorithm Based on Spatial Frequency-Motivated Pulse Coupled Neural Networks in Wavelet Based Contourlet Transform Domain, in 2nd Conference on Environmental Science and Information Application Technology, vol. 2. IEEE (2010), pp. 411–414
23. Y. Yang, J. Dang, Y. Wang, Medical Image Fusion Method Based on Lifting Wavelet Transform and Dual-Channel PCNN, in 9th IEEE Conference on Industrial Electronics and Applications (2014), pp. 1179–1182
24. Y. Wang, J. Dang, Q. Li, S. Li, Multimodal Medical Image Fusion Using Fuzzy Radial Basis Function Neural Networks, in Proceedings of the 2007 International Conference on Wavelet Analysis and Pattern Recognition, vol. 2. IEEE (2007), pp. 778–782
25. T. Li, Y. Wang, Multi-scaled combination of MR and SPECT images in neuroimaging: a simplex method based variable-weight fusion. Comput. Methods Programs Biomed. 105, 35–39
26. C.W. Lacewell, M. Gebril, R. Buaba, A. Homaifar, Optimization of Image Fusion Using Genetic Algorithm and Discrete Wavelet Transform, in Proceedings of the IEEE 2010 National Aerospace and Electronics Conference (NAECON) (2010), pp. 116–121
27. R. Gupta, D. Awasthi, Wave-Packet Image Fusion Technique Based on Genetic Algorithm, in IEEE 5th International Conference on Confluence: The Next Generation Information Technology Summit (2014), pp. 280–285
28. A. Krishn, V. Bhateja, Himanshi, A. Sahu, Medical Image Fusion Using Combination of PCA and Wavelet Analysis, in IEEE International Conference on Advances in Computing, Communications and Informatics (2014), pp. 986–991
29. A. Sahu, V. Bhateja, A. Krishn, Himanshi, Medical Image Fusion with Laplacian Pyramids, in IEEE 2014 International Conference on Medical Imaging, m-Health and Emerging Communication Systems (2014), pp. 448–453