International Journal of Informatics and Communication Technology (IJ-ICT)
Vol.9, No.3, December 2020, pp. 195~204
ISSN: 2252-8776, DOI: 10.11591/ijict.v9i3.pp195-204
Journal homepage: http://ijict.iaescore.com
A neuro fuzzy image fusion using block based feature level method
S. Mary Praveena, R. Kanmani, A. K. Kavitha
Department of ECE, Sri Ramakrishna Institute of Technology, India
Article Info ABSTRACT
Article history:
Received Dec 28, 2018
Revised Mar 20, 2019
Accepted Jan 12, 2020
Image fusion is a subfield of image processing in which two or more images are combined to produce a single image in which all the objects are in focus. Image fusion is performed for multi-sensor and multi-focus images of the same scene. Multi-sensor images of a scene are captured by different sensors, whereas multi-focus images are captured by the same sensor. In multi-focus images, the objects in the scene that are closer to the camera are in focus while the farther objects are blurred; conversely, when the farther objects are in focus, the closer objects become blurred. To obtain an image in which all the objects are in focus, image fusion is performed either in the spatial domain or in a transform domain. In recent times, the applications of image processing have grown immensely. Owing to the limited depth of field of optical lenses, especially those with greater focal lengths, it is often impossible to capture a single image in which all the objects are in focus. Fusion therefore plays an important role in subsequent image processing tasks such as image segmentation, edge detection, stereo matching and image enhancement. Hence, a novel feature-level multi-focus image fusion technique that fuses multi-focus images is proposed. Results of extensive experiments are presented to highlight the efficiency and utility of the proposed technique. The work further compares fuzzy-based image fusion with the neuro fuzzy fusion technique, along with quality evaluation indices.
Keywords:
Feed forward neural network
Fusion
Multi-focus images
Optimal block
Variance
This is an open access article under the CC BY-SA license.
Corresponding Author:
S. Mary Praveena,
Department of ECE,
Sri Ramakrishna Institute of Technology,
Pachapalayam (Post), Perur Chettipalayam, Coimbatore – 641010, India.
praveena.infant@gmail.com
1. INTRODUCTION
A wide variety of data acquisition devices are available at present, and hence image fusion has
become an important subarea of image processing. There are sensors which cannot generate images of all
objects at various distances with equal clarity. Thus several images of a scene are captured, with focus on
different parts of it [1]. With the availability of multi-sensor data in many fields such as remote sensing,
medical imaging, machine vision and military applications, sensor fusion has emerged as a new and
promising research area. The current definition of sensor fusion is very broad and the fusion can take place at
the signal, pixel, feature, and symbol level. The goal of image fusion is to create new images that are more
suitable for the purposes of human visual perception, object detection and target recognition. In this paper we
address the problem of pixel-level fusion [2, 3], the so-called image fusion problem. To achieve an image where all the objects are in focus, image fusion is performed either in the spatial domain or in a transform domain. Spatial-domain techniques operate directly on the pixel values.
Transform-domain techniques, in contrast, first map the source images into a multi-scale or multi-resolution representation. After applying certain operations on the transformed images, the fused image is created by taking the inverse transform. Image
fusion is generally performed at three different levels of information representation including pixel level,
feature level and decision level. In pixel-level image fusion, simple mathematical operations such as
maximum or average are applied to the pixel values of the sources to generate the fused image [4, 5]. However, these techniques usually smooth sharp edges or leave blurring effects in the fused image. In feature-level multi-focus image fusion, the source images are first segmented into different regions and the feature values of these regions are calculated; the regions used to generate the fused image are then selected according to some fusion rule. In decision-level image fusion, the objects in the source images are first detected, and the fused image is then generated using a suitable fusion algorithm [6, 7]. In this paper, a new method for multi-focus image fusion is proposed.
2. FUZZY BASED IMAGE FUSION
Fuzzy image processing is not a unique theory; it is the collection of all approaches that understand, represent and process images, their segments and their features as fuzzy sets. The representation and processing depend on the selected fuzzy technique and on the problem to be solved. It has three main stages:
− Image fuzzification (using membership functions to describe a situation)
− Modification of membership values (application of fuzzy rules)
− Image defuzzification (obtaining the crisp or actual results)
The coding of image data (fuzzification) and the decoding of the results (defuzzification) are the steps that make it possible to process images with fuzzy techniques. The main power of fuzzy image processing lies in the
middle step (modification of membership values). After the image data are transformed from gray-level plane
to the membership plane (fuzzification), appropriate fuzzy techniques modify the membership values. Multi-
sensor data fusion can be performed at four different processing levels, according to the stage at which the
fusion takes place: signal level, pixel level, feature level, and decision level. Figure 1 illustrates the concept of the four fusion levels [8-10].
Figure 1. An overview of categorization of the fusion algorithms
(1) Signal-level fusion: signals from different sensors are combined to create a new signal with a better signal-to-noise ratio than the original signals. (2) Pixel-level fusion: fusion is performed on a pixel-by-pixel basis, generating a fused image in which the information associated with each pixel is determined from a set of pixels in the source images, so as to improve the performance of image processing tasks such as segmentation. (3) Feature-level fusion: fusion at the feature level requires the extraction of objects recognized in the various data sources, that is, of salient features that depend on their environment such as pixel intensities, edges or textures; these similar features from the input images are then fused. (4) Decision-level fusion: objects in the source images are first detected and then combined using a suitable fusion algorithm.
2.1. Steps in fuzzy image fusion
The original image in the gray level plane is subjected to fuzzification and the modification of
membership functions is carried out in the membership plane. The result is the output image obtained after
the defuzzification process. The algorithm for pixel-level image fusion using fuzzy logic is given as
follows [11].
a. Read the first image into variable I1 and find its size (rows: r1, columns: c1).
b. Read the second image into variable I2 and find its size (rows: r2, columns: c2).
c. Variables I1 and I2 hold the images in matrix form, where each pixel gray-level value lies in the range 0 to 255.
d. Compare the rows and columns of both input images. If the two images are not of the same size, select portions of the same size.
e. Convert the images into column form with C = r1×c1 entries.
f. Make a fuzzy inference system file which takes the two input images.
g. Decide the number and type of membership functions for both input images by tuning the membership functions.
h. The input images in the antecedent are resolved to a degree of membership ranging from 0 to 255.
i. Make fuzzy if-then rules for the input images which resolve the two antecedents to a single output number from 0 to 255 (a minimal sketch of this pipeline is given after the list).
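As an illustration of steps e-i, the following minimal sketch fuses two registered gray-scale arrays using hand-rolled triangular membership functions and a single averaging rule; the membership shapes and the rule base are illustrative assumptions, not the exact fuzzy inference system of the paper.

import numpy as np

def tri_mf(x, a, b, c):
    # Triangular membership function over intensities in [0, 255].
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_fuse(i1, i2):
    i1 = i1.astype(np.float64)
    i2 = i2.astype(np.float64)
    # Fuzzification: degree to which each pixel is "bright".
    m1 = tri_mf(i1, 0.0, 255.0, 510.0)
    m2 = tri_mf(i2, 0.0, 255.0, 510.0)
    # Defuzzification: membership-weighted average of the two sources.
    return ((m1 * i1 + m2 * i2) / (m1 + m2 + 1e-9)).astype(np.uint8)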
In the proposed method, an image set of 10 different images is used to train the neural network. Every image is first divided into a number of blocks and features are calculated for each block, as shown in Figure 2. The block size plays an important role in distinguishing the blurred and un-blurred regions from each other. After
dividing the images into blocks, the feature values of every block of all the images are calculated and a
features file is created. A sufficient number of feature vectors are used to train the neural network. The
trained neural network is then used to fuse any set of multi-focus images.
Figure 2. Block diagram of the proposed method
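A minimal sketch of the block decomposition step described above, assuming a square block size supplied by the caller; the paper itself selects the block size adaptively per image.

import numpy as np

def split_into_blocks(img, block_size):
    # Collect non-overlapping block_size x block_size tiles, dropping
    # any partial tiles at the right and bottom borders.
    h, w = img.shape
    blocks = []
    for r in range(0, h - block_size + 1, block_size):
        for c in range(0, w - block_size + 1, block_size):
            blocks.append(img[r:r + block_size, c:c + block_size])
    return blocks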
2.2. Neuro fuzzy based image fusion
In the field of artificial intelligence, Neuro-Fuzzy refers to combinations of artificial neural
networks and fuzzy logic. Neuro-Fuzzy composite results in a hybrid intelligent system that synergizes these
two techniques by combining the human-like reasoning style of fuzzy systems with the learning and
connectionist structure of neural networks. Neuro-Fuzzy hybridization is widely termed as Fuzzy Neural
Network (FNN) or Neuro-Fuzzy System (NFS) in the literature. Neuro-Fuzzy system incorporates the
human-like reasoning style of fuzzy systems through the use of fuzzy sets and a linguistic model consisting
of a set of IF-THEN fuzzy rules. The strength of neuro-fuzzy systems involves two contradictory requirements in fuzzy modeling: interpretability versus accuracy. In practice, one of the two properties prevails. The neuro-fuzzy modeling research field is accordingly divided into two areas: linguistic fuzzy modeling, focused on interpretability (mainly the Mamdani model), and precise fuzzy modeling, focused on accuracy (mainly the Takagi-Sugeno-Kang (TSK) model).
3. ALGORITHM FOR NEURO FUZZY BASED IMAGE FUSION
The algorithm for pixel-level image fusion using neuro fuzzy logic is given as follows (a sketch of the data preparation is given after the list).
− Read the first image into variable I1 and find its size (rows: z1, columns: s1).
− Read the second image into variable I2 and find its size (rows: z2, columns: s2).
− Variables I1 and I2 hold the images in matrix form, where each pixel value lies in the range 0 to 255. Use a gray colormap.
− Compare the rows and columns of both input images. If the two images are not of the same size, select portions of the same size.
− Convert the images into column form with C = z1×s1 entries.
− Form the training data, a matrix with three columns whose entries in each column run from 0 to 255 in steps of 1.
− Form the check data, a matrix of the pixels of the two input images in column format.
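A minimal sketch of the two data matrices described above, assuming the three training columns are (input 1, input 2, desired output) and that the desired output equals the inputs' common gray level; the paper does not spell out the target column, so that choice is illustrative only.

import numpy as np

levels = np.arange(0, 256, dtype=np.float64)             # 0 to 255 in steps of 1
train_data = np.column_stack([levels, levels, levels])   # 256 x 3 matrix

def check_data(i1, i2):
    # Pixels of the two input images in column format.
    return np.column_stack([i1.ravel(), i2.ravel()])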
In feature-level image fusion, the selection of different features is an important task. In multi-focus
images, some of the objects are clear (in focus) and some are blurred (out of focus). The authors have
used five different features to characterize the information level contained in a specific portion of the image.
This feature set includes Variance, Energy of Gradient, Contrast Visibility, Spatial Frequency and Canny
Edge information.
3.1. Features selection
Contrast Visibility: It measures the deviation of a block of pixels from the block's mean value and therefore relates to the clarity level of the block. The visibility of the image block is obtained using (1):

V = (1/(m×n)) Σ_{(i,j)∈Bk} |I(i,j) − μk| / μk   (1)

where V is the contrast visibility, μk is the mean of block Bk, m×n is the size of the block, and I(i,j) is the pixel value at row i and column j of the image.
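A direct transcription of (1) in numpy, assuming block is a 2-D array of pixel values; the small epsilon guards against a zero block mean.

import numpy as np

def contrast_visibility(block):
    mu = block.astype(np.float64).mean()
    return np.abs(block - mu).mean() / (mu + 1e-9)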
Variance: Variance is used to measure the extent of focus in an image block. It is the mathematical expectation of the squared deviations from the mean. A pseudo center-weighted local variance in the neighborhood of an image pixel determines the amplification factor multiplying the difference between the image pixel and its blurred counterpart before it is combined with the original image. It is calculated using (2):

V = (1/(m×n)) Σ_{i=1..m} Σ_{j=1..n} (I(i,j) − μ)²   (2)

where V is the variance, μ is the mean value of the image block, I(i,j) is the pixel value at row i and column j, and m×n is the block size. A high value of variance indicates a greater extent of focus in the image block.
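Equation (2) in numpy; np.var computes exactly this mean of squared deviations from the block mean.

import numpy as np

def block_variance(block):
    return np.var(block.astype(np.float64))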
Spatial Frequency: Spatial frequency measures the activity level in an image. It is used to calculate the frequency changes along the rows and columns of the image. Spatial frequency refers to the number of cycles (pairs of light and dark bars) per degree of visual angle. One-third of a millimetre is a convenient unit of retinal distance because an image of this size is said to subtend one degree of visual angle on the retina. To give an example, an index fingernail casts an image of this size
when the nail is viewed at arm's length; a typical human thumb (the entire width, not just the nail) casts an image about twice as big, two degrees of visual angle. The size of the retinal image cast by an object depends on the distance of that object from the eye: as the distance between the eye and an object decreases, the object's image subtends a greater visual angle. The unit employed to express spatial frequency is the number of cycles that fall within one degree of visual angle. Spatial frequency is measured using (3):
SF = √(RF² + CF²)   (3)

where

RF = √[ (1/(m×n)) Σ_{i=1..m} Σ_{j=2..n} (I(i,j) − I(i,j−1))² ]
CF = √[ (1/(m×n)) Σ_{j=1..n} Σ_{i=2..m} (I(i,j) − I(i−1,j))² ]

and SF is the spatial frequency, RF is the row frequency, CF is the column frequency, m×n is the size of the image, and I(i,j) is the pixel value at row i and column j.
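Equation (3) in numpy, assuming img is a 2-D gray-scale array; np.mean divides by the number of differences rather than by m×n, a negligible boundary discrepancy.

import numpy as np

def spatial_frequency(img):
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean((img[:, 1:] - img[:, :-1]) ** 2))  # row frequency
    cf = np.sqrt(np.mean((img[1:, :] - img[:-1, :]) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)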
Energy of Gradient (EOG): It is also used to measure the amount of focus in an image block. It is calculated using (4):

EOG = Σ_{i=1..m−1} Σ_{j=1..n−1} (f_i² + f_j²)   (4)

where

f_i = f(i+1, j) − f(i, j)
f_j = f(i, j+1) − f(i, j)

EOG is the energy of gradient, f_i is the energy along the rows, f_j is the energy along the columns, and m×n is the size of the image.
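Equation (4) in numpy: squared forward differences along rows and columns, summed over the interior of the block.

import numpy as np

def energy_of_gradient(f):
    f = f.astype(np.float64)
    fi = f[1:, :-1] - f[:-1, :-1]   # f(i+1, j) - f(i, j)
    fj = f[:-1, 1:] - f[:-1, :-1]   # f(i, j+1) - f(i, j)
    return np.sum(fi ** 2 + fj ** 2)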
Edge Information: The edge pixels in an image block can be found using the Canny edge detector, which returns 1 if the current pixel belongs to some edge in the image and 0 otherwise. The edge feature is simply the number of edge pixels contained within the image block. Edge detection is a fundamental tool in image processing and computer vision, particularly in the areas of feature detection and feature extraction, which aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The purpose of detecting sharp changes in image brightness is to capture important events and changes in the properties of the world. Edges extracted from non-trivial images are often hampered by fragmentation (edge curves that are not connected), missing edge segments, and false edges that do not correspond to interesting phenomena in the image, all of which complicate the subsequent task of interpreting the image data. Edge detection is one of the fundamental steps in image processing, image analysis, image pattern recognition, and computer vision.
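A sketch of the edge feature using OpenCV's Canny detector; the hysteresis thresholds (100, 200) are illustrative defaults, not values taken from the paper.

import cv2
import numpy as np

def edge_count(block):
    # cv2.Canny marks edge pixels with 255 and all others with 0.
    edges = cv2.Canny(block.astype(np.uint8), 100, 200)
    return int(np.count_nonzero(edges))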
3.2. Artificial neural network
Artificial neural networks (ANNs) have proven to be a more powerful and self-adaptive method of
pattern recognition as compared to traditional linear and simple nonlinear analyses. The ANN-based method
employs a nonlinear response function that iterates many times in a special network structure in order to learn
the complex functional relationship between input and output training data. The input layer has several
neurons, which represent the feature factors extracted and normalized from image A and image B. The
hidden layer has several neurons and the output layer has one neuron (or more). The ith neuron of the
input layer connects with the jth neuron of the hidden layer by weight Wij, and weight between the jth neuron
of the hidden layer and the tth neuron of output layer is Vjt (in this case t = 1). The weighting function is
used to simulate and recognize the response relationship between features of fused image and corresponding
feature from original images (image A and image B). As the first step of ANN-based data fusion, two
registered images are decomposed into several blocks of size M×N. Then, features of the
corresponding blocks in the two original images are extracted, and the normalized feature vector incident to
neural networks can be constructed. The features used here to evaluate the fusion effect are normally spatial
frequency, visibility, and edge. The next step is to select some vector samples to train neural networks. An
ANN is a universal function approximator that directly adapts to any nonlinear function defined by a
representative set of training data. Once trained, the ANN model can remember a functional relationship and
be used for further calculations. For these reasons, the ANN concept has been adopted to develop strongly
nonlinear models for multiple sensors data fusion.
3.3. Feed forward neural network
A neuron can have any number of inputs from one to n, where n is the total number of inputs. The inputs may be represented as x1, x2, x3, ..., xn, and the corresponding weights as w1, w2, w3, ..., wn. The summation of the weights multiplied by the inputs is a = x1w1 + x2w2 + x3w3 + ... + xnwn. Assuming arrays of inputs and weights have already been initialized as x[n] and w[n], the activation can be computed as:

double activation = 0;
for (int i = 0; i < n; i++) {
    activation += x[i] * w[i];   // accumulate the weighted inputs
}

If the activation exceeds the threshold, the output is 1; otherwise the output is 0.
One way of organising the neurons is a design called a feed-forward network. It gets its name from the way the neurons in each layer feed their output forward to the next layer until the final output of the neural network is obtained. A feed-forward neural network is first trained with the block features of ten pairs of multi-focus images. A feature set including spatial frequency, contrast visibility, edges, variance and energy of gradient is used to define the clarity of the image block. The block size is determined adaptively for each image. The trained neural network is then used to fuse any pair of multi-focus images, as in the sketch below.
3.4. Quantitative measures
There are different quantitative measures used to evaluate the performance of fusion techniques. We use three: root mean square error (RMSE), peak signal-to-noise ratio (PSNR) and entropy (H).
3.4.1. Root mean square error
The analytical performance studies aim to assess image fusion performance quantitatively in a straightforward manner. The root mean square error (RMSE), defined by the deviations between the reference image pixel values R(i, j) and the fused image pixel values F(i, j), is computed as
RMSE = √[ (1/(m×n)) Σ_{i=1..m} Σ_{j=1..n} (R(i,j) − F(i,j))² ]   (5)

where m×n is the input image size. An RMSE of 0 corresponds to complete reconstruction, i.e., the fused multi-focus image exactly matches the reference image.
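Equation (5) in numpy, assuming a reference image is available.

import numpy as np

def rmse(ref, fused):
    d = ref.astype(np.float64) - fused.astype(np.float64)
    return np.sqrt(np.mean(d ** 2))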
3.4.2. Entropy
Entropy is a measure of the amount of uncertainty in the image. It is given by

H = − Σ_{i=0..L−1} p_i log2 p_i   (6)

where L is the number of gray levels and p_i is the probability of gray level i, i.e., its normalized histogram count.
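Equation (6) in numpy: the Shannon entropy of the 8-bit gray-level histogram, with empty bins skipped so that 0·log 0 is treated as 0.

import numpy as np

def entropy(img):
    counts = np.bincount(img.ravel().astype(np.int64), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))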
3.4.3. Peak signal to noise ratio
It determines the degree of resemblance between the reference and fused images; a bigger value indicates a better fusion result.

PSNR = 20 log10 [ L² / ( (1/(m×n)) Σ_{i=1..m} Σ_{j=1..n} (R(i,j) − F(i,j))² ) ]   (7)
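Equation (7) in numpy, with L = 255 for 8-bit images. Note that the more common convention is 10·log10(L²/MSE); the form below follows the paper's 20·log10 with L² in the numerator.

import numpy as np

def psnr(ref, fused, L=255):
    mse = np.mean((ref.astype(np.float64) - fused.astype(np.float64)) ** 2)
    return 20.0 * np.log10(L ** 2 / mse)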
4. PERFORMANCE EVALUATION
The input image is divided into blocks, the five features are extracted, and the blocks are selected by the trained feed-forward neural network. The performance of the existing average, maximum, minimum and PCA-based techniques is compared with the results of the proposed technique. Experimental results are obtained to evaluate the performance of the proposed technique. The simulation is carried out using MATLAB 7.5; the simulation window is shown in Figure 3(h). Figures 3(c)-(g) show the results of fusing the inputs by averaging, minimum, maximum, PCA and the block-based feature-level method respectively. Table 1 lists the entropy of the various methods, from which it can be seen that the proposed technique gives better results than the previous methods. Evaluation measures are used to assess the quality of the fused images, which are evaluated taking the following parameters into consideration.
Figure 3. Medical images (CT and MRI) fused by fuzzy logic and neuro fuzzy logic. (a) Input CT image, (b)
Input MRI image, (c) Fused by averaging, (d) Fused by minimum, (e) Fused by maximum, (f) Fused by
PCA, (g) Fused by proposed method, (h) Simulation window
Table 1. Results of quantitative measures of medical images
Quantitative Measures   Average   Minimum   Maximum   PCA     Block Based Method
Entropy (dB)            6.38      4.25      5.83      8.86    20.78
RMSE                    62.67     74.76     74.76     21.02   32.23
PSNR (dB)               16.58     16.20     16.20     17.4    34.53
4.1. Image quality index
Image quality index (IQI) measures the similarity between two images (I1 and I2) and its value ranges from -1 to 1. IQI is equal to 1 if both images are identical.
4.2. Mutual information measure
Mutual information measure (MIM) furnishes the amount of information of one image contained in another, which gives a guideline for selecting the best fusion method. Given two images M(i, j) and N(i, j), it is computed as

I_MN = Σ_x Σ_y P_MN(x, y) log2 [ P_MN(x, y) / (P_M(x) P_N(y)) ]

where P_M(x) and P_N(y) are the probability density functions of the individual images, and P_MN(x, y) is the joint probability density function.
4.3. Fusion factor
Given two images A and B and their fused image F, the fusion factor (FF) is defined as

FF = I_AF + I_BF   (8)

where I_AF and I_BF are the MIM values between each input image and the fused image. A higher value of FF indicates that the fused image contains a reasonably good amount of the information present in both images. However, a high value of FF does not imply that the information from both images is symmetrically fused.
4.3.1. Fusion symmetry
Fusion symmetry (FS) is an indication of the degree of symmetry in the information content drawn from both images. The quality of the fusion technique depends on the degree of fusion symmetry. Since FS is the symmetry factor, when the sensors are of good quality, FS should be as low as possible so that the fused image derives features from both input images [11-14]. If any of the sensors is of low quality, it is better to maximize FS than to minimize it.
4.3.2. Fusion index
The study proposes a parameter called the fusion index, derived from the fusion symmetry and the fusion factor. The fusion index (FI) is defined as

FI = I_AF / I_BF   (9)

where I_AF is the mutual information index between the multispectral image and the fused image, and I_BF is the mutual information index between the panchromatic image and the fused image. The quality of the fusion technique depends on the degree of the fusion index. A sketch of MIM, FF and FI follows.
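A sketch of MIM, FF and FI estimated from image histograms; the 64-bin histogram and the base-2 logarithm are illustrative choices, as the paper does not fix them.

import numpy as np

def mim(a, b, bins=64):
    # Joint and marginal probability estimates from a 2-D histogram.
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = h / h.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

def fusion_factor(a, b, fused):
    return mim(a, fused) + mim(b, fused)      # FF = I_AF + I_BF, eq. (8)

def fusion_index(a, b, fused):
    return mim(a, fused) / mim(b, fused)      # FI = I_AF / I_BF, eq. (9)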
5. RESULTS AND DISCUSSIONS
There are many typical applications for image fusion. Modern spectral scanners gather up to several hundred spectral bands, which can be visualized and processed individually or fused
into a single image, depending on the image analysis task. In this section, the input images are fused using the fuzzy logic approach [15-17]. Our experimental results show that the neuro fuzzy logic-based image fusion approach provides better performance than fuzzy-based image fusion for the first two test cases, while fuzzy-based image fusion performs better for the third test case; further investigation is needed to resolve this discrepancy. The image quality index (IQI), the similarity between the reference and fused images, is higher for the first two cases (0.9999, 0.9829, and 0.3182) than the values obtained from the fuzzy-based fusion technique (0.9758, 0.9824, and 0.8871). The higher fusion factor (FF) values for the first two examples (2.8115, 3.3438, 1.0109) obtained from the neuro fuzzy based fusion approach indicate that the fused image contains a reasonably good amount of the information present in both images, compared to the FF values (1.0965, 2.1329, 1.9864) obtained from the fuzzy-based fusion approach. The mutual information measure (MIM) values (1.4656, 1.5079, 0.7634), the amount of information of one image contained in another, are also significantly better, which shows that the neuro fuzzy based fusion method
preserves more information than fuzzy-based image fusion. The other evaluation measures, root mean square error (RMSE) with lower values, and peak signal-to-noise ratio (PSNR) and correlation coefficient (CC) with higher values (0.9459, 0.8979, 0.1265), obtained from the neuro fuzzy based fusion approach are also comparatively better for the first two cases. The entropy values (7.2757, 7.3202, 4.4894), measuring the amount of information that characterizes the input image, are likewise better for the first two examples with the neuro fuzzy based image fusion technique.
Figure 4 shows the medical images (CT and MRI brain) fused by different image fusion techniques and the proposed method; Figures 4(a) and 4(b) are the input CT and MRI images respectively. Table 2 shows the results of the quantitative measures for the medical (brain) images. The values of each quality assessment parameter for all the mentioned fusion approaches are given in Table 3.
Figure 4. Medical images (CT and MRI Brain) fused by different image fusion techniques and the proposed
method. (a) Input CT image, (b) Input MRI image, (c) Fused by averaging, (d) Fused by minimum, (e) Fused
by maximum, (f) Fused by PCA, (g) Fused by contrast
Table 2. Results of quantitative measures of medical images (brain)
S.No   Method                                        Entropy (dB)
1      Averaging                                     6.8981
2      Minimum                                       2.0906
3      Maximum                                       6.7582
4      PCA                                           8.4271
5      Block based feature level method (proposed)   8.9427
Table 3. Evaluation indices for image fusion based on fuzzy and neuro fuzzy logic approaches
Method                 IQI     MIM     FF      FS      FI      RMSE     PSNR     Entropy  CC      SF
Fuzzy fusion (Ex 1)    0.8758  0.4328  1.0965  0.0779  0.7303  44.8448  15.0966  4.4881   0.5833  10.4567
Fuzzy fusion (Ex 2)    0.8824  0.8582  2.1379  0.0518  0.8122  48.9935  14.3280  5.3981   0.7598  16.9749
Fuzzy fusion (Ex 3)    0.8871  1.5576  1.9864  0.2841  3.6325  42.4850  15.5661  5.8205   0.7761  25.4711
Neuro fuzzy (Ex 1)     0.9999  1.3656  2.8115  0.0213  1.0837  14.9554  24.6348  7.2757   0.9459  25.5698
Neuro fuzzy (Ex 2)     0.9729  1.5079  3.3438  0.0490  0.8213  34.4662  17.3829  7.3202   0.8979  37.1169
Neuro fuzzy (Ex 3)     0.2182  0.7634  1.0109  0.2431  2.8926  68.5319  11.4129  4.4894   0.1265  16.9926
6. CONCLUSION
In this paper, a block-based feature-level multi-focus image fusion technique is proposed for fusing images that are not fully in focus. A feed-forward neural network is first trained with the block features of pairs of multi-focus images. A feature set including spatial frequency, contrast visibility, edges, variance and energy of gradient is used to define the clarity of each image block. The block size is determined adaptively for each image. The trained neural network is then used to fuse any pair of multi-focus images. The performance of the existing average, maximum, minimum and PCA-based techniques is compared with the results of the proposed technique. Experimental results show that the proposed technique performs better than the existing techniques: image fusion using fuzzy logic gives a considerable improvement in the quality of the fusion system, and neuro fuzzy based image fusion preserves more texture information.
REFERENCES
[1] H. Li, B. S. Manjunath and S. K. Mitra, "Multi-sensor image fusion using the wavelet transform," Graphical Models and Image Processing, vol. 57, no. 3, pp. 235-245, 1995.
[2] P. J. Burt and R. J. Kolczynski, "Enhanced image capture through fusion," in Proceedings of the Fourth International Conference on Computer Vision, Berlin, Germany, pp. 173-182, 1993.
[3] Yufeng Zheng, Edward A. Essock and Bruce C. Hansen, "An advanced image fusion algorithm based on wavelet transform: incorporation with PCA and morphological processing," Proceedings of the SPIE, vol. 5298, pp. 177-187, 2004.
[4] I. Bloch, "Information combination operators for data fusion: a review with classification," IEEE Transactions on Systems, Man, and Cybernetics - Part A, vol. 26, pp. 52-67, 1996.
[5] H. A. Eltoukhy and S. Kavusi, "A computationally efficient algorithm for multi-focus image reconstruction," Proceedings of SPIE Electronic Imaging, vol. 33, pp. 45-51, 2003.
[6] Abdul Basit Siddiqui, M. Arfan Jaffar, Ayyaz Hussain and Anwar M. Mirza, "Block-based feature-level multi-focus image fusion," IEEE Signal Processing, vol. 2, pp. 134-139, 2010.
[7] H. Li, B. S. Manjunath and S. K. Mitra, "Multisensor image fusion using the wavelet transform," Graphical Models and Image Processing, vol. 57, no. 3, pp. 235-245, 1995.
[8] Ishita De and Bhabatosh Chanda, "A simple and efficient algorithm for multifocus image fusion using morphological wavelets," Signal Processing, vol. 31, pp. 924-936, 2006.
[9] Gonzalo Pajares and Jesus Manuel de la Cruz, "A wavelet-based image fusion tutorial," Pattern Recognition, vol. 37, no. 9, pp. 1855-1872, 2004.
[10] H. J. Heijmans and J. Goutsias, "Nonlinear multiresolution signal decomposition schemes. II. Morphological wavelets," IEEE Transactions on Image Processing, vol. 9, pp. 1897-1913, 2000.
[11] X. H. Yang, F. Z. Huang and G. Liu, "Urban remote image fusion using fuzzy rules," in Proceedings of the Eighth International Conference on Machine Learning and Cybernetics, pp. 101-109, 2009.
[12] A. Toet, "Image fusion by a ratio of low pass pyramid," Pattern Recognition Letters, vol. 9, no. 4, pp. 245-253, 1989.
[13] Xinman Zhang, Jiuqiang Han and Peifei Liu, "Restoration and fusion optimization scheme of multifocus image using genetic search strategies," Optica Applicata, vol. 35, no. 4, 2005.
[14] X. Yang, W. Yang and J. Pei, "Different focus points images fusion based on wavelet decomposition," in Proceedings of the Third International Conference on Information Fusion, vol. 1, pp. 3-8, 2000.
[15] S. Mukhopadhyay and B. Chanda, "Fusion of 2D gray scale images using multiscale morphology," Pattern Recognition, vol. 34, pp. 1939-1949, 2001.
[16] V. P. S. Naidu and J. R. Raol, "Pixel-level image fusion using wavelets and principal component analysis," Defence Science Journal, vol. 58, no. 3, pp. 338-352, May 2008.
[17] Othman Khalifa, "Wavelet coding design for image data compression," International Arab Journal of Information Technology, vol. 2, no. 2, pp. 118-128, 2005.
 
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
 
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptxVulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
 
Science&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdfScience&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdf
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptx
 
The transition to renewables in India.pdf
The transition to renewables in India.pdfThe transition to renewables in India.pdf
The transition to renewables in India.pdf
 
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort ServiceHot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
 

06 17443 an neuro fuzzy...

  • 1. International Journal of Informatics and Communication Technology (IJ-ICT) Vol.9, No.3, December 2020, pp. 195~204 ISSN: 2252-8776, DOI: 10.11591/ijict.v9i3.pp195-204  195 Journal homepage: http://ijict.iaescore.com A neuro fuzzy image fusion using block based feature level method S. Mary Praveena, R. Kanmani, A. K. Kavitha Department of ECE, Sri Ramakrishna Institute of Technology, India Article Info ABSTRACT Article history: Received Dec 28, 2018 Revised Mar 20, 2019 Accepted Jan 12, 2020 Image fusion is a sub field of image processing in which more than one images are fused to create an image where all the objects are in focus. The process of image fusion is performed for multi-sensor and multi-focus images of the same scene. Multi-sensor images of the same scene are captured by different sensors whereas multi-focus images are captured by the same sensor. In multi-focus images, the objects in the scene which are closer to the camera are in focus and the farther objects get blurred. Contrary to it, when the farther objects are focused then closer objects get blurred in the image. To achieve an image where all the objects are in focus, the process of images fusion is performed either in spatial domain or in transformed domain. In recent times, the applications of image processing have grown immensely. Usually due to limited depth of field of optical lenses especially with greater focal length, it becomes impossible to obtain an image where all the objects are in focus. Thus, it plays an important role to perform other tasks of image processing such as image segmentation, edge detection, stereo matching and image enhancement. Hence, a novel feature-level multi-focus image fusion technique has been proposed which fuses multi-focus images. Thus, the results of extensive experimentation performed to highlight the efficiency and utility of the proposed technique is presented. The proposed work further explores comparison between fuzzy based image fusion and neuro fuzzy fusion technique along with quality evaluation indices. Keywords: Feed forward neural network Fusion Multi-focus images Optimal block Variance This is an open access article under the CC BY-SA license. Corresponding Author: S. Mary Praveena, Department of ECE, Sri Ramakrishna Institute of Technology, Pachapalayam (Post), Perur Chettipalayam, Coimbatore – 641010, India. praveena.infant@gmail.com 1. INTRODUCTION A wide variety of data acquisition devices are available at present, and hence image fusion has become an important subarea of image processing. There are sensors which cannot generate images of all objects at various distances with equal clarity. Thus several images of a scene are captured, with focus on different parts of it [1]. With the availability of multi-sensor data in many fields such as remote sensing, medical imaging, machine vision and military applications, sensor fusion has emerged as a new and promising research area. The current definition of sensor fusion is very broad and the fusion can take place at the signal, pixel, feature, and symbol level. The goal of image fusion is to create new images that are more suitable for the purposes of human visual perception, object detection and target recognition. In this paper we address the problem of pixel-level fusion [2, 3] or the so-called image fusion problem. To achieve an image where all the objects are in focus, the process of images fusion is performed either in special domain or in transformed domain. 
Spatial domain includes the techniques which directly incorporate the pixel values.
In the transformed domain, multi-scale or multi-resolution approaches are used: after applying certain operations on the transformed images, the fused image is created by taking the inverse transform. Image fusion is generally performed at three different levels of information representation: pixel level, feature level and decision level. In pixel-level image fusion, simple mathematical operations such as maximum or average are applied to the pixel values of the source images to generate the fused image [4, 5]; a minimal example is sketched at the end of this section. However, these techniques usually smooth sharp edges or leave blurring effects in the fused image. In feature-level multi-focus image fusion, the source images are first segmented into different regions and the feature values of these regions are calculated; using some fusion rule, regions are then selected to generate the fused image. In decision-level image fusion, the objects in the source images are first detected, and the fused image is then generated by a suitable fusion algorithm [6, 7]. In this paper, a new method for multi-focus image fusion is proposed.
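The following is a minimal sketch of the pixel-level operations just described, fusing two pre-registered, equally sized grayscale images by averaging and by maximum selection. NumPy and OpenCV are assumed to be available, and the file names are placeholders.

import numpy as np
import cv2

# Load two pre-registered multi-focus images as grayscale arrays
# (file names are placeholders)
i1 = cv2.imread("focus_near.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
i2 = cv2.imread("focus_far.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# Pixel-level fusion by averaging: simple, but tends to smooth sharp edges
fused_avg = (i1 + i2) / 2.0

# Pixel-level fusion by maximum selection: keeps the brighter of the two pixels
fused_max = np.maximum(i1, i2)

cv2.imwrite("fused_avg.png", fused_avg.astype(np.uint8))
cv2.imwrite("fused_max.png", fused_max.astype(np.uint8))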
2. FUZZY BASED IMAGE FUSION
Fuzzy image processing is not a single unique theory; it is the collection of all approaches that understand, represent and process images, their segments and their features as fuzzy sets. The representation and processing depend on the selected fuzzy technique and on the problem to be solved. Fuzzy image processing has three main stages:
− Image fuzzification (using membership functions to describe the image data)
− Modification of membership values (application of fuzzy rules)
− Image defuzzification (obtaining the crisp, actual result)
The coding of the image data (fuzzification) and the decoding of the results (defuzzification) are the steps that make it possible to process images with fuzzy techniques. The main power of fuzzy image processing lies in the middle step: after the image data are transformed from the gray-level plane to the membership plane (fuzzification), appropriate fuzzy techniques modify the membership values. Multi-sensor data fusion can be performed at four different processing levels, according to the stage at which the fusion takes place: signal level, pixel level, feature level and decision level. Figure 1 illustrates the categorization of the fusion algorithms into these four levels [8-10].

Figure 1. An overview of the categorization of the fusion algorithms

(1) Signal level fusion. In signal-based fusion, signals from different sensors are combined to create a new signal with a better signal-to-noise ratio than the original signals.
(2) Pixel level fusion. Pixel-based fusion is performed on a pixel-by-pixel basis. It generates a fused image in which the information associated with each pixel is determined from a set of pixels in the source images, so as to improve the performance of image processing tasks such as segmentation.
(3) Feature level fusion. Fusion at the feature level requires the extraction of objects recognized in the various data sources, i.e. of salient features that depend on their environment, such as pixel intensities, edges or textures. These similar features from the input images are fused.

2.1. Steps in fuzzy image fusion
The original image in the gray-level plane is subjected to fuzzification, and the modification of the membership values is carried out in the membership plane. The result is the output image obtained after the defuzzification process. The algorithm for pixel-level image fusion using fuzzy logic is given as follows [11]; a minimal sketch of this fuzzify-apply-rules-defuzzify pipeline is given at the end of this section.
a. Read the first image into variable I1 and find its size (rows: r1, columns: c1).
b. Read the second image into variable I2 and find its size (rows: r2, columns: c2).
c. Variables I1 and I2 are images in matrix form where each pixel gray-level value is in the range from 0 to 255.
d. Compare the rows and columns of both input images. If the two images are not of the same size, select the portions which are of the same size.
e. Convert the images into column form with C = r1×c1 entries.
f. Make a fuzzy inference system file which takes the two input images.
g. Decide the number and type of membership functions for both input images by tuning the membership functions.
h. The input images in the antecedent are resolved to a degree of membership ranging from 0 to 255.
i. Make fuzzy if-then rules for the input images which resolve the two antecedents to a single number from 0 to 255.

In the proposed method, an image set of 10 different images is used to train the neural network. Every image is first divided into a number of blocks and features are calculated as shown in Figure 2. The block size plays an important role in distinguishing the blurred and un-blurred regions from each other. After dividing the images into blocks, the feature values of every block of all the images are calculated and a features file is created. A sufficient number of feature vectors are used to train the neural network. The trained neural network is then used to fuse any set of multi-focus images.

Figure 2. Block diagram of the proposed method
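As promised above, here is a minimal sketch of the fuzzification, rule evaluation and defuzzification steps, written in plain NumPy. The triangular membership functions, the two-rule base and the singleton output levels (0 and 255) are illustrative assumptions, not the rule base tuned in the experiments.

import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_fuse(i1, i2):
    """Fuse two gray-level arrays (0-255) with a tiny two-rule fuzzy system."""
    # Fuzzification: degree to which each pixel is 'dark' or 'bright'
    dark1, bright1 = trimf(i1, -1.0, 0.0, 128.0), trimf(i1, 127.0, 255.0, 256.0)
    dark2, bright2 = trimf(i2, -1.0, 0.0, 128.0), trimf(i2, 127.0, 255.0, 256.0)
    # Rule evaluation: IF I1 is dark AND I2 is dark THEN output is dark (min as AND);
    # IF I1 is bright OR I2 is bright THEN output is bright (max as OR)
    w_dark = np.minimum(dark1, dark2)
    w_bright = np.maximum(bright1, bright2)
    # Defuzzification: weighted average of the singleton output levels 0 and 255
    return (w_dark * 0.0 + w_bright * 255.0) / (w_dark + w_bright + 1e-9)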
2.2. Neuro fuzzy based image fusion
In the field of artificial intelligence, neuro-fuzzy refers to combinations of artificial neural networks and fuzzy logic. The neuro-fuzzy combination results in a hybrid intelligent system that synergizes the two techniques by joining the human-like reasoning style of fuzzy systems with the learning and connectionist structure of neural networks. Neuro-fuzzy hybridization is widely termed Fuzzy Neural Network (FNN) or Neuro-Fuzzy System (NFS) in the literature. A neuro-fuzzy system incorporates the human-like reasoning style of fuzzy systems through the use of fuzzy sets and a linguistic model consisting of a set of IF-THEN fuzzy rules. The strength of neuro-fuzzy systems involves two contradictory requirements in fuzzy modeling: interpretability versus accuracy; in practice, one of the two properties prevails. Neuro-fuzzy research in fuzzy modeling is accordingly divided into two areas: linguistic fuzzy modeling, focused on interpretability (mainly the Mamdani model), and precise fuzzy modeling, focused on accuracy (mainly the Takagi-Sugeno-Kang (TSK) model).

3. ALGORITHM FOR NEURO FUZZY BASED IMAGE FUSION
The algorithm for pixel-level image fusion using neuro fuzzy logic is given as follows.
− Read the first image into variable I1 and find its size (rows: z1, columns: s1).
− Read the second image into variable I2 and find its size (rows: z2, columns: s2).
− Variables I1 and I2 are images in matrix form where each pixel value is in the range from 0 to 255. Use a gray color map.
− Compare the rows and columns of both input images. If the two images are not of the same size, select the portions which are of the same size.
− Convert the images into column form with C = z1×s1 entries.
− Form the training data, a matrix with three columns whose entries in each column run from 0 to 255 in steps of 1.
− Form the check data, a matrix of the pixels of the two input images in column format.

In feature-level image fusion, the selection of different features is an important task. In multi-focus images, some of the objects are clear (in focus) and some objects are blurred (out of focus). Five different features are used to characterize the information level contained in a specific portion of the image: variance, energy of gradient, contrast visibility, spatial frequency and Canny edge information.

3.1. Features selection
Contrast visibility: contrast visibility calculates the deviation of a block of pixels from the block's mean value and therefore relates to the clearness level of the block. The visibility of the image block B_k is obtained using (1):

CV = \frac{1}{m \times n} \sum_{i=1}^{m} \sum_{j=1}^{n} \frac{|I(i,j) - \mu_k|}{\mu_k}    (1)

where CV is the contrast visibility, \mu_k the mean of the block B_k, m \times n the size of the block, and I(i,j) its pixel values.

Variance: variance is used to measure the extent of focus in an image block. It is the mathematical expectation of the average squared deviation from the mean. A pseudo center-weighted local variance in the neighborhood of an image pixel determines the amplification factor multiplying the difference between the image pixel and its blurred counterpart before it is combined with the original image. It is calculated using (2):

V = \frac{1}{m \times n} \sum_{i=1}^{m} \sum_{j=1}^{n} (I(i,j) - \mu)^2    (2)

where V is the variance, \mu the mean value of the image block, I(i,j) its pixel values, and m \times n the block size. A high value of variance indicates a greater extent of focus in the image block.
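A direct transcription of (1) and (2) for a single image block might look as follows; this is a minimal NumPy sketch, with a small epsilon added as an assumption to guard against division by zero in dark blocks.

import numpy as np

def contrast_visibility(block):
    """Eq. (1): mean absolute deviation from the block mean, normalized by it."""
    mu = block.mean()
    return float(np.abs(block - mu).mean() / (mu + 1e-9))

def variance(block):
    """Eq. (2): average squared deviation from the block mean."""
    mu = block.mean()
    return float(((block - mu) ** 2).mean())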
Spatial frequency: spatial frequency measures the activity level in an image and is used to calculate the frequency changes along the rows and columns of the image. In the context of human vision, spatial frequency refers to the number of pairs of light and dark bands per unit of visual angle. One-third of a millimetre is a convenient unit of retinal distance because an image of this size is said to subtend one degree of visual angle on the retina. To give an example, an index fingernail casts an image of this size
when the nail is viewed at arm's length; a typical human thumb (not just the nail but its entire width) casts an image about twice as big, i.e. two degrees of visual angle. The size of the retinal image cast by an object depends on the distance of that object from the eye: as the distance between the eye and the object decreases, the object's image subtends a greater visual angle. The unit employed to express spatial frequency is the number of cycles that fall within one degree of visual angle. Spatial frequency is measured using (3):

SF = \sqrt{(RF)^2 + (CF)^2}    (3)

where

RF = \sqrt{\frac{1}{m \times n} \sum_{i=1}^{m} \sum_{j=2}^{n} [I(i,j) - I(i,j-1)]^2}

CF = \sqrt{\frac{1}{m \times n} \sum_{i=2}^{m} \sum_{j=1}^{n} [I(i,j) - I(i-1,j)]^2}

and SF is the spatial frequency, RF the row frequency, CF the column frequency, m \times n the size of the image, and I(i,j) its pixel values.

Energy of gradient (EOG): the energy of gradient is also used to measure the amount of focus in an image block. It is calculated using (4):

EOG = \sum_{i=1}^{m-1} \sum_{j=1}^{n-1} (f_i^2 + f_j^2)    (4)

where f_i = f(i+1, j) - f(i, j) and f_j = f(i, j+1) - f(i, j) are the gradients along the rows and columns respectively, and m \times n is the size of the image.

Edge information: the edge pixels in an image block can be found using the Canny edge detector, which returns 1 if the current pixel belongs to some edge in the image and 0 otherwise. The edge feature is simply the number of edge pixels contained within the image block. Edge detection is a fundamental tool in image processing and computer vision, particularly in the areas of feature detection and feature extraction, which aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The purpose of detecting sharp changes in image brightness is to capture important events and changes in the properties of the world. Edges extracted from non-trivial images are often hampered by fragmentation (edge curves that are not connected), missing edge segments, and false edges that do not correspond to interesting phenomena in the image, all of which complicate the subsequent task of interpreting the image data. Edge detection is therefore one of the fundamental steps in image processing, image analysis, image pattern recognition and computer vision.
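The remaining three block features translate directly into code. The sketch below follows (3) and (4) and counts Canny edge pixels with OpenCV; the Canny thresholds (100, 200) are illustrative assumptions, not values taken from the paper.

import numpy as np
import cv2

def spatial_frequency(block):
    """Eq. (3): RMS combination of row and column first differences."""
    b = block.astype(np.float64)
    m, n = b.shape
    rf2 = np.sum(np.diff(b, axis=1) ** 2) / (m * n)  # row frequency squared
    cf2 = np.sum(np.diff(b, axis=0) ** 2) / (m * n)  # column frequency squared
    return float(np.sqrt(rf2 + cf2))

def energy_of_gradient(block):
    """Eq. (4): sum of squared forward differences along both axes."""
    b = block.astype(np.float64)
    fi = b[1:, :-1] - b[:-1, :-1]  # gradient along the rows
    fj = b[:-1, 1:] - b[:-1, :-1]  # gradient along the columns
    return float(np.sum(fi ** 2 + fj ** 2))

def edge_count(block):
    """Edge feature: number of Canny edge pixels (thresholds are assumed)."""
    edges = cv2.Canny(block.astype(np.uint8), 100, 200)
    return int(np.count_nonzero(edges))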
3.2. Artificial neural network
Artificial neural networks (ANNs) have proven to be a more powerful and self-adaptive method of pattern recognition than traditional linear and simple nonlinear analyses. An ANN-based method employs a nonlinear response function that iterates many times in a special network structure in order to learn the complex functional relationship between the input and output training data. The input layer has several neurons which represent the feature factors extracted and normalized from image A and image B; the hidden layer has several neurons, and the output layer has one neuron (or more). The ith neuron of the input layer connects with the jth neuron of the hidden layer with weight Wij, and the weight between the jth neuron of the hidden layer and the tth neuron of the output layer is Vjt (in this case t = 1). The weighting function is used to simulate and recognize the response relationship between the features of the fused image and the corresponding features of the original images (image A and image B). As the first step of ANN-based data fusion, the two registered images are decomposed into several blocks of size M×N. Then, the features of the corresponding blocks in the two original images are extracted and the normalized feature vector fed to the neural network is constructed. The features used here to evaluate the fusion effect are normally spatial frequency, visibility and edges. The next step is to select some vector samples to train the neural network. An ANN is a universal function approximator that directly adapts to any nonlinear function defined by a representative set of training data. Once trained, the ANN model can remember a functional relationship and be used for further calculations. For these reasons, the ANN concept has been adopted to develop strongly nonlinear models for the fusion of data from multiple sensors.

3.3. Feed forward neural network
A neuron can have any number of inputs from one to n, where n is the total number of inputs. The inputs may therefore be represented as x1, x2, x3, ..., xn, and the corresponding weights as w1, w2, w3, ..., wn. The activation is the summation of the inputs multiplied by their weights: a = x1w1 + x2w2 + x3w3 + ... + xnwn. Assuming arrays of inputs and weights have already been initialized as x[n] and w[n], the activation can be computed as

double activation = 0;
for (int i = 0; i < n; i++) {
    activation += x[i] * w[i];   // weighted sum of the inputs
}
int output = (activation > threshold) ? 1 : 0;   // step activation

If the activation exceeds the threshold the output is 1; otherwise the output is 0. One way of organising the neurons is a design called a feed forward network, which gets its name from the way the neurons in each layer feed their output forward to the next layer until the final output of the neural network is obtained. A feed forward neural network is first trained with the block features of ten pairs of multi-focus images. A feature set including spatial frequency, contrast visibility, edges, variance and energy of gradient is used to define the clarity of each image block. The block size is determined adaptively for each image. The trained neural network is then used to fuse any pair of multi-focus images.
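Putting the pieces together, the following sketch shows how a small feed forward network could select, block by block, the sharper of two source blocks from the five clarity features. The feature helpers are those sketched in Section 3.1; the network size, the random initialization and the decision rule are illustrative assumptions, and in practice the weights would be learned by standard backpropagation on feature vectors labeled with which source block is in focus.

import numpy as np

def feature_vector(block):
    """Stack the five clarity features (helpers sketched in Section 3.1)."""
    return np.array([variance(block), energy_of_gradient(block),
                     contrast_visibility(block), spatial_frequency(block),
                     edge_count(block)], dtype=np.float64)

# A tiny one-hidden-layer feed forward network; sizes are assumptions,
# and the weights shown here are untrained random values.
rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.standard_normal((8, 5)), np.zeros(8)
W2, b2 = 0.1 * rng.standard_normal((1, 8)), np.zeros(1)

def forward(x):
    """Forward pass: hidden tanh layer, sigmoid output in (0, 1)."""
    h = np.tanh(W1 @ x + b1)
    return float((1.0 / (1.0 + np.exp(-(W2 @ h + b2))))[0])

def fuse_block(block_a, block_b):
    """Pick the source block the network judges to be in focus."""
    x = feature_vector(block_a) - feature_vector(block_b)
    return block_a if forward(x) >= 0.5 else block_b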
3.4. Quantitative measures
Different quantitative measures are used to evaluate the performance of fusion techniques. Three measures are used here: root mean square error (RMSE), peak signal-to-noise ratio (PSNR) and entropy (He).

3.4.1. Root mean square error
The analytical performance studies aim to assess image fusion performance quantitatively in a straightforward manner. The root mean square error (RMSE), defined by the deviations between the reference image pixel values R(i, j) and the fused image pixel values F(i, j), is computed as

RMSE = \sqrt{\frac{1}{m \times n} \sum_{i=1}^{m} \sum_{j=1}^{n} [R(i,j) - F(i,j)]^2}    (5)

where m \times n is the input image size. A value of 0 corresponds to complete image reconstruction: a perfect fused image obtained through accurate reconstruction of the multi-focus inputs with respect to the reference image.

3.4.2. Entropy
Entropy is known to be a measure of the amount of uncertainty in the image. It is given by

H = -\sum_{i=0}^{L-1} p_i \log_2 p_i    (6)

where L is the number of gray levels and p_i is the normalized histogram count of gray level i (as returned, for example, by MATLAB's imhist).

3.4.3. Peak signal to noise ratio
The PSNR determines the degree of resemblance between the reference and fused images; a bigger value indicates a better fusion result. It is given by

PSNR = 20 \log_{10} \left[ \frac{L^2}{\frac{1}{m \times n} \sum_{i=1}^{m} \sum_{j=1}^{n} [R(i,j) - F(i,j)]^2} \right]    (7)
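Read alongside (5)-(7), the three quantitative measures can be sketched as follows. L = 256 gray levels is assumed, the reference image is assumed to be available, and the PSNR follows the form printed in (7).

import numpy as np

def rmse(r, f):
    """Eq. (5): root mean square error between reference r and fused f."""
    d = r.astype(np.float64) - f.astype(np.float64)
    return float(np.sqrt(np.mean(d ** 2)))

def entropy(img, levels=256):
    """Eq. (6): Shannon entropy of the gray-level histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                     # skip empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))

def psnr(r, f, levels=256):
    """Eq. (7): peak signal-to-noise ratio, in the form printed above."""
    d = r.astype(np.float64) - f.astype(np.float64)
    return float(20.0 * np.log10(levels ** 2 / np.mean(d ** 2)))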
4. PERFORMANCE EVALUATION
The input image is divided into blocks, and the five features extracted from each block are fed to the feed forward neural network. The performance of the existing average, maximum, minimum and PCA based techniques is compared with the results of the proposed technique, and experimental results are obtained to evaluate its performance. The simulation is carried out using MATLAB 7.5; the simulation window is shown in Figure 3(h). Figures 3(c)-(g) show the images fused by averaging, minimum, maximum, PCA and the proposed block-based feature level method respectively. Table 1 shows the entropy of the various methods, and Figure 5 shows the graphical performance of the same, from which it can be seen that the proposed technique gives better results than the previous methods. Evaluation measures are used to assess the quality of the fused image; the fused images are evaluated taking the following parameters into consideration.

Figure 3. Medical images (CT and MRI) fused by fuzzy logic and neuro fuzzy logic: (a) input CT image, (b) input MRI image, (c) fused by averaging, (d) fused by minimum, (e) fused by maximum, (f) fused by PCA, (g) fused by the proposed method, (h) simulation window

Table 1. Results of quantitative measures of medical images

Quantitative measure   Average   Minimum   Maximum   PCA     Block based method
Entropy (dB)           6.38      4.25      5.83      8.86    20.78
RMSE                   62.67     74.76     74.76     21.02   32.23
PSNR (dB)              16.58     16.20     16.20     17.40   34.53

4.1. Image quality index
The image quality index (IQI) measures the similarity between two images (I1 and I2) and its value ranges from -1 to 1; IQI is equal to 1 if both images are identical.

4.2. Mutual information measure
The mutual information measure (MIM) furnishes the amount of information of one image contained in another and gives guidelines for selecting the best fusion method. Given two images M(i, j) and N(i, j), it is defined as

MIM = \sum_{x} \sum_{y} P_{MN}(x,y) \log_2 \frac{P_{MN}(x,y)}{P_M(x) P_N(y)}

where P_M(x) and P_N(y) are the probability density functions of the individual images and P_{MN}(x, y) is the joint probability density function.

4.3. Fusion factor
Given two images A and B and their fused image F, the fusion factor (FF) is

FF = I_{AF} + I_{BF}    (8)

where I_{AF} and I_{BF} are the MIM values between the respective input images and the fused image. A higher value of FF indicates that the fused image contains a moderately good amount of the information present in both images. However, a high value of FF does not imply that the information from both images is symmetrically fused.

4.3.1. Fusion symmetry
Fusion symmetry (FS) is an indication of the degree of symmetry in the information content taken from both images. The quality of a fusion technique depends on the degree of fusion symmetry. Since FS is the symmetry factor, when the sensors are of good quality FS should be as low as possible so that the fused image derives features from both input images [11-14]. If either sensor is of low quality, it is better to maximize FS than to minimize it.

4.3.2. Fusion index
This study proposes a parameter called the fusion index, derived from the fusion symmetry and the fusion factor. The fusion index (FI) is defined as

FI = I_{AF} / I_{BF}    (9)

where I_{AF} is the mutual information index between the multispectral image and the fused image and I_{BF} is the mutual information index between the panchromatic image and the fused image. The quality of a fusion technique depends on the degree of the fusion index.
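As a sketch of how MIM, the fusion factor (8) and the fusion index (9) might be computed, the following estimates mutual information from a 2-D joint gray-level histogram; this is a generic estimator under an assumed binning, not necessarily the one used by the authors.

import numpy as np

def mutual_information(a, b, bins=256):
    """Estimate the MIM of two images from their joint gray-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                     # joint probability P_MN
    px = pxy.sum(axis=1, keepdims=True)           # marginal P_M
    py = pxy.sum(axis=0, keepdims=True)           # marginal P_N
    nz = pxy > 0                                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_factor(img_a, img_b, fused):
    """Eq. (8): FF = I_AF + I_BF."""
    return mutual_information(img_a, fused) + mutual_information(img_b, fused)

def fusion_index(img_a, img_b, fused):
    """Eq. (9): FI = I_AF / I_BF."""
    return mutual_information(img_a, fused) / mutual_information(img_b, fused)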
5. RESULTS AND DISCUSSION
There are many typical applications for image fusion. Modern spectral scanners gather up to several hundred spectral bands, which can be visualized and processed individually or fused into a single image, depending on the image analysis task. In this section, the input images are fused using the fuzzy logic and neuro fuzzy approaches [15-17]. The experimental results show that the neuro fuzzy logic-based image fusion approach provides better performance than fuzzy based image fusion for the first two test cases, while fuzzy based fusion performs better for the third test case; further investigation is needed to resolve this issue. The image quality index (IQI), the similarity between the reference and fused images, is higher for the first two cases with the neuro fuzzy approach (0.9999, 0.9829 and 0.3182) than with the fuzzy based fusion technique (0.9758, 0.9824 and 0.8871). The higher fusion factor (FF) values for the first two examples (2.8115, 3.3438, 1.0109) obtained with the neuro fuzzy based approach indicate that the fused image contains a moderately good amount of the information present in both input images, compared to the FF values (1.0965, 2.1329, 1.9864) obtained with the fuzzy based approach. The mutual information measure (MIM) values (1.4656, 1.5079, 0.7634), the amount of information of one image contained in another, are also significantly better, which shows that the neuro fuzzy based fusion method preserves more information than fuzzy based image fusion.
The other evaluation measures obtained with the neuro fuzzy based approach, root mean square error (RMSE) with lower values and peak signal-to-noise ratio (PSNR) and correlation coefficient (CC) with higher values (0.9459, 0.8979, 0.1265), are also comparatively better for the first two cases. The entropy values (7.2757, 7.3202, 4.4894), characterizing the amount of information in the image, are likewise better for the first two examples with the neuro fuzzy based technique. Figure 4 shows the medical images (CT and MRI brain) fused by different image fusion techniques and by the proposed method; Figures 4(a) and 4(b) are the input CT and MRI images respectively. Table 2 shows the results of the quantitative measures for the medical (brain) images, and the values of each quality assessment parameter for all the fusion approaches considered are given in Table 3.

Figure 4. Medical images (CT and MRI brain) fused by different image fusion techniques and the proposed method: (a) input CT image, (b) input MRI image, (c) fused by averaging, (d) fused by minimum, (e) fused by maximum, (f) fused by PCA, (g) fused by contrast

Table 2. Results of quantitative measures of medical images (brain)

S.No   Method                                          Entropy (dB)
1      Averaging                                       6.8981
2      Minimum                                         2.0906
3      Maximum                                         6.7582
4      PCA                                             8.4271
5      Block based feature level method (proposed)     8.9427
Table 3. Evaluation indices for image fusion based on fuzzy and neuro fuzzy logic approaches

Method                      IQI      MIM      FF       FS       FI       RMSE      PSNR      Entropy   CC       SF
Fuzzy fusion (Ex 1)         0.8758   0.4328   1.0965   0.0779   0.7303   44.8448   15.0966   4.4881    0.5833   10.4567
Fuzzy fusion (Ex 2)         0.8824   0.8582   2.1379   0.0518   0.8122   48.9935   14.3280   5.3981    0.7598   16.9749
Fuzzy fusion (Ex 3)         0.8871   1.5576   1.9864   0.2841   3.6325   42.4850   15.5661   5.8205    0.7761   25.4711
Neuro fuzzy fusion (Ex 1)   0.9999   1.3656   2.8115   0.0213   1.0837   14.9554   24.6348   7.2757    0.9459   25.5698
Neuro fuzzy fusion (Ex 2)   0.9729   1.5079   3.3438   0.0490   0.8213   34.4662   17.3829   7.3202    0.8979   37.1169
Neuro fuzzy fusion (Ex 3)   0.2182   0.7634   1.0109   0.2431   2.8926   68.5319   11.4129   4.4894    0.1265   16.9926

6. CONCLUSION
In this paper, a block-based feature-level multi-focus image fusion technique is proposed for fusing images that are not fully in focus. A feed forward neural network is first trained with the block features of pairs of multi-focus images. A feature set including spatial frequency, contrast visibility, edges, variance and energy of gradient is used to define the clarity of each image block, and the block size is determined adaptively for each image. The trained neural network is then used to fuse any pair of multi-focus images. The performance of the existing average, maximum, minimum and PCA based techniques is compared with that of the proposed technique, and the experimental results show that the proposed technique performs better than the existing ones. The results also show that the proposed fuzzy logic based fusion gives a considerable improvement in the quality of the fusion system, and that neuro fuzzy based image fusion preserves more texture information.

REFERENCES
[1] H. Li, B. S. Manjunath, and S. K. Mitra, "Multi-sensor image fusion using the wavelet transform," Graphical Models and Image Processing, vol. 57, no. 3, pp. 235-245, 1995.
[2] P. J. Burt and R. J. Kolczynski, "Enhanced image capture through fusion," in Proceedings of the Fourth International Conference on Computer Vision, Berlin, Germany, vol. 3, pp. 173-182, 1993.
[3] Y. Zheng, E. A. Essock, and B. C. Hansen, "An advanced image fusion algorithm based on wavelet transform: incorporation with PCA and morphological processing," Proceedings of the SPIE, vol. 5298, pp. 177-187, 2004.
[4] I. Bloch, "Information combination operators for data fusion: a review with classification," IEEE Transactions on Systems, Man, and Cybernetics, Part A, vol. 26, pp. 52-67, 1996.
[5] H. A. Eltoukhy and S. Kavusi, "A computationally efficient algorithm for multi-focus image reconstruction," Proceedings of SPIE Electronic Imaging, vol. 33, pp. 45-51, 2003.
[6] A. B. Siddiqui, M. A. Jaffar, A. Hussain, and A. M. Mirza, "Block-based feature-level multi-focus image fusion," IEEE Signal Processing, vol. 2, pp. 134-139, 2010.
[7] H. Li, B. S. Manjunath, and S. K. Mitra, "Multisensor image fusion using the wavelet transform," Graphical Models and Image Processing, vol. 57, no. 3, pp. 235-245, 1995.
[8] I. De and B. Chanda, "A simple and efficient algorithm for multifocus image fusion using morphological wavelets," Signal Processing, vol. 31, pp. 924-936, 2006.
[9] G. Pajares and J. M. de la Cruz, "A wavelet-based image fusion tutorial," Pattern Recognition, vol. 37, no. 9, pp. 1855-1872, 2004.
[10] H. J. Heijmans and J. Goutsias, "Nonlinear multiresolution signal decomposition schemes. II. Morphological wavelets," IEEE Transactions on Image Processing, vol. 9, pp. 1897-1913, 2000.
[11] X. H. Yang, F. Z. Huang, and G. Liu, "Urban remote image fusion using fuzzy rules," in Proceedings of the Eighth International Conference on Machine Learning and Cybernetics, pp. 101-109, 2009.
[12] A. Toet, "Image fusion by a ratio of low-pass pyramid," Pattern Recognition Letters, vol. 9, no. 4, pp. 245-253, 1989.
[13] X. Zhang, J. Han, and P. Liu, "Restoration and fusion optimization scheme of multifocus image using genetic search strategies," Optica Applicata, vol. XXXV, no. 4, 2005.
[14] X. Yang, W. Yang, and J. Pei, "Different focus points images fusion based on wavelet decomposition," in Proceedings of the Third International Conference on Information Fusion, vol. 1, pp. 3-8, 2000.
[15] S. Mukhopadhyay and B. Chanda, "Fusion of 2D gray scale images using multiscale morphology," Pattern Recognition, vol. 34, pp. 1939-1949, 2001.
[16] V. P. S. Naidu and J. R. Raol, "Pixel-level image fusion using wavelets and principal component analysis," Defence Science Journal, vol. 58, no. 3, pp. 338-352, May 2008.
[17] O. Khalifa, "Wavelet coding design for image data compression," International Arab Journal of Information Technology, vol. 2, no. 2, pp. 118-128, 2005.