International Journal of Information Sciences and Techniques (IJIST) Vol.3, No.1, January 2013
DOI: 10.5121/ijist.2013.3104
ADOPTION AND IMPLEMENTATION OF SELF-ORGANIZING FEATURE MAP FOR IMAGE FUSION
Dr. Anna Saro Vijendran¹ and G. Paramasivam²

¹ Director, SNR Institute of Computer Applications, SNR Sons College, Coimbatore, Tamilnadu, INDIA.
saroviji@rediffmail.com

² Asst. Professor, Department of Computer Applications, SNR Sons College, Coimbatore, Tamilnadu, INDIA.
pvgparam@yahoo.co.in
ABSTRACT
A different image fusion algorithm based on the self-organizing feature map is proposed in this paper, aiming to produce quality images. Image fusion integrates complementary and redundant information from multiple images of the same scene to create a single composite image that contains all the important features of the original images. The resulting fused image is thus more suitable for human and machine perception and for further image processing tasks. Existing fusion techniques based on direct operation on either pixels or segments fail to produce fused images of the required quality and are mostly application specific. Existing segmentation algorithms become complicated and time consuming when multiple images are to be fused. A new method of segmenting and fusing grayscale images using Self-Organizing Feature Maps (SOM) is proposed in this paper. The Self-Organizing Feature Map is used to produce multiple slices of the source and reference images based on various combinations of gray levels, and these slices can be fused dynamically depending on the application. The proposed technique is applied and analyzed for the fusion of multiple images. The technique is robust in the sense that there is no loss of information, owing to the properties of Self-Organizing Feature Maps; noise removal in the source images is done during the processing stage, and fusion of multiple images is carried out dynamically to get the desired results. Experimental results demonstrate that, for quality multifocus image fusion, the proposed method performs better than some popular image fusion methods in both subjective and objective quality.
KEYWORDS
Image Fusion, Image Segmentation, Self-Organizing Feature Maps, Codebook Generation, Multifocus Images, Gray Scale Images
1. INTRODUCTION

Nowadays, image fusion has become an important subarea of image processing. For one object or scene, multiple images can be taken from one or multiple sensors. These images usually contain complementary information. Image fusion is the process of combining information from two or more images of a scene into a single composite image that is more informative and more suitable for visual perception or computer processing. The objective of image fusion is to reduce uncertainty and minimize redundancy in the output while maximizing the information relevant to a particular application or task. Image fusion has become a common term within medical diagnostics and treatment. Given the same set of input images, different fused images may be created depending on the specific application and what is considered relevant information. There are several benefits to image fusion, such as wider spatial and temporal coverage, decreased uncertainty, improved reliability and increased robustness of system performance. Often a single sensor cannot produce a complete representation of a scene. Successful image fusion significantly reduces the amount of data to be viewed or processed without significantly reducing the amount of relevant information.
Image fusion algorithms can be categorized into pixel, feature and symbolic levels. Pixel-level algorithms work either in the spatial domain [1, 2] or in the transform domain [3, 4, 5]. Although pixel-level fusion is a local operation, transform-domain algorithms create the fused image globally: changing a single coefficient in the transformed fused image changes all image values in the spatial domain. As a result, while enhancing properties in some image areas, undesirable artifacts may be created in other areas. Algorithms that work in the spatial domain can focus on desired image areas, limiting change elsewhere. Multiresolution analysis is a popular approach to pixel-level fusion. Burt [6] and Kolczynski [7] used filters of increasing spatial extent to generate a sequence of images from each input image, separating information observed at different resolutions. At each position in the transform image, the value in the pyramid showing the highest saliency was then taken, and an inverse transform of the composite image was used to create the fused image. In a similar manner, various wavelet transforms can be used to fuse images. The discrete wavelet transform (DWT) has been used in many applications to fuse images [4]. The dual-tree complex wavelet transform (DT-CWT), first proposed by Kingsbury [8], was improved by Nikolov [9] and Lewis [10] to outperform most other gray-scale image fusion methods.
Feature-based algorithms typically segment the images into regions and fuse the regions using their various properties [10–12]. Feature-based algorithms are usually less sensitive to signal-level noise [13]. Toet [3] first decomposed each input image into a set of perceptually relevant patterns; the patterns were then combined to create a composite image containing all relevant patterns. A mid-level fusion algorithm was developed by Piella [12, 15], where the images are first segmented and the obtained regions are then used to guide the multiresolution analysis.
Recently, methods have been proposed to fuse multifocus source images using divided blocks or segmented regions instead of single pixels [16, 17, 18]. All the segmented region-based methods depend strongly on the segmentation algorithm. Unfortunately, the segmentation algorithms, which are of vital importance to fusion quality, are complicated and time-consuming. The common transform approaches for fusion of multifocus images include the discrete wavelet transform (DWT) [19], the curvelet transform [20] and the nonsubsampled contourlet transform (NSCT) [21]. Recently, a multifocus image fusion and restoration algorithm based on sparse representation has been proposed by Yang and Li [22]. A multifocus image fusion method based on homogeneity similarity and focused-region detection was proposed in 2011 by Huafeng Li, Yi Chai, Hongpeng Yin and Guoquan Liu [23].

Most traditional image fusion methods are based on the assumption that the source images are noise free, and they perform well when that assumption is satisfied. Traditional noisy-image fusion methods usually denoise the source images first and then fuse the denoised images. The multifocus image fusion and restoration algorithm proposed by Yang and Li [22] performs well with both noisy and noise-free images, and outperforms traditional fusion methods in terms of fusion quality and noise reduction in the fused output. However, this scheme is complicated and time-consuming, especially when the source images are noise free. The image fusion algorithm based on homogeneity similarity proposed by Huafeng Li, Yi Chai, Hongpeng Yin and Guoquan Liu [23] aims at solving the fusion problem for clean and noisy multifocus images. Further, in any region-based fusion algorithm, the fusion results are affected by the performance of the segmentation algorithm. The various segmentation algorithms are based on thresholding and clustering, but the partition criteria used by these algorithms often generate undesired segmented regions.
In order to overcome the aforementioned problems, a new segmentation method using Self-Organizing Feature Maps is proposed in this paper; it enables images to be fused dynamically to the desired degree of information retrieval depending on the application. The proposed algorithm is suitable for any type of image, whether noisy or clean. The method is simple, and since the mapping of the image is carried out by a Self-Organizing Feature Map, all the information in the images is preserved. The images used in image fusion are assumed to be already registered.
The outline of this paper is as follows. Section 2 briefly introduces the Self-Organizing Feature Map. Section 3 describes the algorithm for codebook generation using Self-Organizing Feature Maps. Section 4 describes the proposed fusion method. Section 5 details the experimental analysis, and Section 6 concludes the paper.
2. SELF-ORGANIZING FEATURE MAP
The Self-Organizing Feature Map (SOM) is a special class of artificial neural network based on competitive learning. It is built around a one- or two-dimensional lattice of neurons that captures the important features contained in the input. The Kohonen technique creates a network that stores information in such a way that any topological relationships within the training set are maintained. In addition to clustering the data into distinct regions, Kohonen maps place regions with similar properties near one another.

The primary benefit is that the network learns autonomously, without requiring the system to be well defined. The network does not stop learning but continues to adapt to changing inputs; this plasticity allows it to adapt as the environment changes. A particular advantage over other artificial neural networks is that the system is well suited to parallel computation: the only global knowledge required by each neuron is the current input to the network and the position within the array of the neuron that produced the maximum output.
Kohonen networks are grids of computing elements, which makes it possible to identify the immediate neighbours of a unit. This is important because, during learning, the weights of the computing units and their neighbours are updated; the objective of such a learning approach is that neighbouring units learn to react to closely related signals.

Unlike many other types of network, a Self-Organizing Feature Map does not need a target output to be specified. Instead, where the node weights match the input vector, that area of the lattice is selectively optimized to more closely resemble the data of the class to which the input vector belongs. From an initial distribution of random weights, and over many iterations, the Self-Organizing Feature Map eventually settles into a map of stable zones. Each zone is effectively a feature classifier, and the output is a type of feature map of the input space. In the trained network, blocks of similar values represent the individual zones. Any new, previously unseen input vector presented to the network will stimulate nodes in the zone with similar weight vectors. Training occurs in several steps and over many iterations.
Each node's weights are initialized. A vector is chosen at random from the set of training data and presented to the lattice. Every node is examined to determine which one's weights are most like the input vector; the winning node is commonly known as the Best Matching Unit (BMU). The radius of the neighbourhood of the Best Matching Unit is then calculated. This value starts large, typically set to the 'radius' of the lattice, and diminishes at each time-step. Any nodes found within this radius are deemed to be inside the Best Matching Unit's neighbourhood. Each neighbouring node's weights are adjusted to make them more like the input vector: the closer a node is to the Best Matching Unit, the more its weights are altered. The procedure is repeated for all input vectors over a number of iterations.

Prior to training, each node's weights must be initialized, typically to small standardized random values. To determine the Best Matching Unit, one method is to iterate through all the nodes and calculate the Euclidean distance between each node's weight vector and the current input vector; the node with the weight vector closest to the input vector is tagged as the Best Matching Unit. After the Best Matching Unit has been determined, the next step is to calculate which of the other nodes lie within its neighbourhood; all these nodes will have their weight vectors altered in the next step. A unique feature of the Kohonen learning algorithm is that the area of the neighbourhood shrinks over time, eventually to the size of a single node.

Once the radius is known, all the nodes in the lattice are examined to determine whether they lie within it. Every node within the Best Matching Unit's neighbourhood (including the Best Matching Unit itself) has its weight vector adjusted.
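As an illustration of this training loop, the following is a minimal Python/NumPy sketch (not part of the original paper). It assumes a square lattice, a random-sampling schedule, a linearly shrinking learning rate and neighbourhood radius, and a Gaussian influence inside the radius; these are common but not the only possible choices.

import numpy as np

def train_som(data, grid_size=16, n_iter=1000, lr0=0.5, seed=0):
    # data: (num_samples, dim) array of training vectors.
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_size, grid_size, dim))            # random initial weights
    coords = np.stack(np.meshgrid(np.arange(grid_size), np.arange(grid_size),
                                  indexing="ij"), axis=-1)        # lattice coordinates
    radius0 = grid_size / 2.0
    for t in range(n_iter):
        x = data[rng.integers(len(data))]                         # random training vector
        # Best Matching Unit: node whose weight vector is closest to x.
        bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)),
                               (grid_size, grid_size))
        frac = 1.0 - t / n_iter                                   # both schedules shrink over time
        lr, radius = lr0 * frac, max(radius0 * frac, 1.0)
        # Nodes inside the neighbourhood move towards x; closer nodes move more.
        grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
        influence = np.exp(-(grid_dist ** 2) / (2.0 * radius ** 2))
        influence[grid_dist > radius] = 0.0
        weights += lr * influence[..., None] * (x - weights)
    return weights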
In the Self-Organizing Feature Map, the neurons are placed at the lattice nodes; the lattice may take different shapes: a rectangular grid, a hexagonal grid, or even a random topology.

Figure 1. Self-Organizing Feature Map architecture

The neurons become selectively tuned to various input patterns in the course of the competitive learning process. The locations of the tuned (winning) neurons tend to become ordered with respect to one another in such a way that a meaningful coordinate system for different input features is created over the lattice.
SOM Neural Network Training GUI:
3. CODE BOOK GENERATION USING SELF-ORGANIZING FEATURE MAP
Consider a two-dimensional input image pattern to be mapped onto a two-dimensional spatial organization of neurons located at positions (i, j) on a rectangular lattice of size n x n. Thus, for a set of n x n points on the two-dimensional plane, there are n^2 neurons N_ij, 1 ≤ i, j ≤ n, and each neuron N_ij has an associated weight vector W_ij. In the Self-Organizing Feature Map, the neuron whose weight vector W_ij has the minimum distance to the input vector X is the winner neuron (k, l), identified using the following equation:

||X - W_kl|| = min_{1≤i≤n} min_{1≤j≤n} ||X - W_ij||                (1)

After the position of the winner neuron (k, l) is located in the two-dimensional plane, the winner neuron and its neighbourhood neurons are adjusted using the Self-Organizing Feature Map learning rule:

W_ij(t+1) = W_ij(t) + α [X - W_ij(t)]                (2)

where α is Kohonen's learning rate, which controls the stability and the rate of convergence. The winner weight vector reaches equilibrium when W_ij(t+1) = W_ij(t). The neighbourhood of neuron N_ij is chosen arbitrarily; it can be a square or a circular zone around N_ij of arbitrarily chosen radius.
Algorithm
1: The image A(i, j) of size 2^N x 2^N is divided into blocks, each of size 2^n x 2^n pixels, n < N.
2: A Self-Organizing Feature Map network is created with a codebook consisting of M neurons (m_i : i = 1, 2, ..., M). The M neurons are arranged in a hexagonal lattice, and each neuron has an associated weight vector W_i = [w_i1, w_i2, ..., w_i,2^(2n)].
3: The weight vectors of all the neurons in the lattice are initialized with small random values.
4: The learning input patterns (image blocks) are applied to the network. Kohonen's competitive learning process identifies the winning neurons that best match the input blocks. The best-matching criterion is the minimum Euclidean distance between the vectors. Hence, the mapping process Q that identifies the neuron best matching an input block X is determined by the following condition:

Q(X) = arg min_i ||X - W_i||,   i = 1, 2, ..., M                (3)

5: At equilibrium, there are m winner neurons per block, i.e. m codewords per block; hence the whole image is represented using m codewords.
6: The indices of the obtained codewords are stored; the set of indices of all winner neurons is stored along with the codebook.
7: The reconstructed image blocks, of the same size as the original ones, are restored from the indices of the codewords.
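The steps above can be sketched in Python/NumPy as follows, reusing the train_som routine sketched in Section 2. These helpers (image_to_blocks, build_codebook) are illustrative and not part of the paper; block size, grid size and iteration count are example values, and the hexagonal lattice of step 2 is approximated by a square grid for simplicity.

def image_to_blocks(image, block=4):
    # Step 1: split a 2^N x 2^N image into flattened block x block vectors.
    h, w = image.shape
    return (image.reshape(h // block, block, w // block, block)
                 .swapaxes(1, 2)
                 .reshape(-1, block * block)
                 .astype(float))

def build_codebook(image, block=4, grid_size=16, n_iter=5000):
    # Steps 2-6: train the SOM on the image blocks, then store the
    # codebook (neuron weights) and the winning-neuron index per block.
    vectors = image_to_blocks(image, block)
    weights = train_som(vectors, grid_size=grid_size, n_iter=n_iter)
    codebook = weights.reshape(-1, block * block)                 # M codewords
    dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=-1)
    indices = np.argmin(dists, axis=1)                            # equation (3)
    return codebook, indices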
4. PROPOSED METHOD OF FUSION
Let us consider two pre-registered 8-bit grayscale images A and B of the same scene or object.

The first image, A, is decomposed into sub-images and given as input to the Self-Organizing Feature Map network. In order to preserve all the gray values of the image, the codebook size for an 8-bit image is chosen to be the maximum possible number of gray levels, namely 256. Since the weight values after training must represent the input gray levels, random values ranging from 0 to 255 are assigned as initial weights. When sub-images of size 4 x 4 are taken as the input vector, there are 16 nodes in the input layer, and the Kohonen layer consists of 256 nodes arranged in a 16 x 16 array. The input layer takes as input the gray-level values of all 16 pixels of the gray-level block. The weights between node j of the Kohonen layer and the input layer form the weight matrix; for all 256 nodes we have W_ji for j = 0, 1, ..., 255 and i = 0, 1, ..., 15. Once the weights are initialized randomly, the network is ready for training.
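Under this specific configuration (4 x 4 blocks, 16 input nodes, a 16 x 16 Kohonen layer of 256 neurons), the hypothetical build_codebook helper sketched in Section 3 would be invoked roughly as follows; image_a is assumed to be an 8-bit grayscale NumPy array whose sides are multiples of 4, and the random initial weights would be scaled to the 0-255 range rather than [0, 1) as in the earlier sketch.

# Hypothetical invocation for an 8-bit grayscale array image_a (e.g. 128 x 128):
codebook_a, indices_a = build_codebook(image_a, block=4, grid_size=16)
# 256 codewords, each a 16-element vector of gray levels for one 4 x 4 block.
assert codebook_a.shape == (256, 16)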
The image block vectors are mapped to the weight vectors. The neighbourhood is chosen initially, say 5 x 5, and then reduced gradually to find the best matching node. The Self-Organizing Feature Map generates the codebook according to the weight updates. The set of indices of all the winner neurons for the blocks, along with the codebook, is stored for retrieval.

Image A is retrieved by generating the weight vectors of each neuron from the index values, which give the pixel values of the image. For each index value the corresponding neuron is found, and the weight vector connecting that neuron to the input layer is generated. The values of the neuron weights are the gray levels for the block, and these gray-level values are displayed as pixels. Thus image A is recovered in its original form.
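A matching reconstruction sketch, again using the hypothetical helpers above: each stored index selects its codeword, and the codeword's 16 weights are written back as the gray levels of the corresponding 4 x 4 block.

def reconstruct_image(codebook, indices, image_shape, block=4):
    # Step 7: paste each block's codeword back into its original position.
    h, w = image_shape
    blocks = codebook[indices].reshape(h // block, w // block, block, block)
    return blocks.swapaxes(1, 2).reshape(h, w)

image_a_restored = reconstruct_image(codebook_a, indices_a, image_a.shape)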
Now image B is given as input to the network. Since the features in images A and B are the same, when B is presented to the trained network the codebook for image B is generated in minimal simulation time without loss of information. Also, since the images are registered, the index values representing the pixel positions are the same in the two images. For the same index value, the neuron weights of the two images are compared and the higher value is selected. The procedure is repeated for all indices until a weight vector of optimal strength is generated. The image retrieved from this optimal weight vector is the fused image, which represents all the gray values at their optimal levels. The procedure can be repeated in various combinations with multiple images until the desired result is achieved.
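A minimal sketch of the fusion step is given below, building on the hypothetical helpers above. Both images are quantised against the codebook obtained from image A (one way of "presenting B to the trained network"), and for each block index the element-wise maximum of the two codewords is kept; the paper does not pin down the exact comparison, so this max-selection rule is only one plausible reading of "the higher value is selected".

def fuse_images(image_a, image_b, block=4, grid_size=16):
    # Codebook trained on image A; image B is quantised against the same codebook.
    codebook, idx_a = build_codebook(image_a, block, grid_size)
    vecs_b = image_to_blocks(image_b, block)
    idx_b = np.argmin(np.linalg.norm(vecs_b[:, None, :] - codebook[None, :, :],
                                     axis=-1), axis=1)
    # For each block, keep the larger of the two codeword values (assumed rule).
    fused_blocks = np.maximum(codebook[idx_a], codebook[idx_b])
    h, w = image_a.shape
    fused = (fused_blocks.reshape(h // block, w // block, block, block)
                         .swapaxes(1, 2).reshape(h, w))
    return np.clip(fused, 0, 255).astype(np.uint8)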
5. EXPERIMENTAL ANALYSIS AND RESULTS
The experimental analysis of the proposed algorithm has been performed using a large number of images of different content, evaluated with the objective parameters discussed in this paper. In order to evaluate fusion performance, the first experiment is performed on a set of perfectly registered multifocus source images. The dataset consists of multifocus image pairs of different scenes covering text, text plus objects, and objects only, which allows the proposed algorithm to be evaluated in a true sense. The proposed algorithm is simulated in Matlab 7 and is evaluated based on the quality of the fused image obtained. The robustness of the proposed algorithm, that is, its ability to produce consistently good-quality fused images for different categories of images such as standard images, medical images and satellite images, has been evaluated. The average computational time to generate the final fused image from source images of size 128 x 128 using the proposed algorithm is 96 seconds. The quality of the fused image relative to the source images has been compared in terms of RMSE and PSNR values. The experimental results obtained for fusion of grayscale images using the proposed algorithm are shown in Table 1. The source multifocus images and the fused images of different types are shown in Figures 1 to 4. The histograms and difference images for the Lena and Bacteria images are shown in Figures 5 and 6, respectively.
Table 1.a  RMSE of the source and fused images

Images          Image A    Image B    Fused Image
Lena            6.2856     6.0162     3.3205
Bacteria        6.8548     6.421      6.227
Satellite map   4.0043     5.325      3.8171
MRI-Head        4.3765     4.9824     1.5163

Table 1.b  PSNR (dB) of the source and fused images

Images          Image A    Image B    Fused Image
Lena            30.381     31.701     38.0481
Bacteria        28.194     27.989     32.5841
Satellite map   32.077     31.582     33.2772
MRI-Head        30.787     33.901     45.3924
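The RMSE and PSNR figures in Table 1 can be computed with the standard definitions sketched below (MAX = 255 for 8-bit images); the paper does not state its exact formulas, so these are the conventional ones.

def rmse(reference, test):
    # Root mean squared error between two grayscale images.
    diff = reference.astype(float) - test.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(reference, test, max_val=255.0):
    # Peak signal-to-noise ratio in dB: 20 * log10(MAX / RMSE).
    err = rmse(reference, test)
    return float("inf") if err == 0 else 20.0 * np.log10(max_val / err)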
Figure 1.a  Figure 1.b  Figure 1.c
Figure 1. Lena

Figure 2.a  Figure 2.b  Figure 2.c
Figure 2. Bacteria

Figure 3.a  Figure 3.b  Figure 3.c
Figure 3. Satellite map

Figure 4.a  Figure 4.b  Figure 4.c
Figure 4. MRI-Head

Images (a) and (b) are the multifocus source images; image (c) is the fused version of (a) and (b).
Figure 5. Histograms for the Lena and Bacteria images

Figure 6. Difference images between the fused image and the source images: (g) difference between (a) and (c), (h) difference between (b) and (c), (i) difference between (d) and (f), (j) difference between (e) and (f).

6. CONCLUSIONS

In this paper a simple method for the fusion of images has been proposed. The advantage of the Self-Organizing Feature Map is that, after training, the weight vectors not only represent the image block cluster centroids but also preserve two main features: topologically neighbouring blocks in the input space are mapped to topologically neighbouring neurons in the codebook, and the distribution of the weight vectors of the neurons reflects the distribution of the input vectors. Hence there is no loss of information in the process. The proposed method is dynamic in the sense that, depending on the application, the optimal weight vectors are generated; redundant values as well as noise can be ignored in this process, resulting in shorter simulation time. The method can also be extended to colour images.

REFERENCES

[1] S. Li, J.T. Kwok, Y. Wang, Using the discrete wavelet frame transform to merge Landsat TM and SPOT panchromatic images, Information Fusion 3 (2002) 17–23.
[2] A. Goshtasby, 2-D and 3-D Image Registration for Medical, Remote Sensing, and Industrial Applications, Wiley Press, 2005.
[3] A. Toet, Hierarchical image fusion, Machine Vision and Applications 3 (1990) 1–11.
[4] H. Li, S. Manjunath, S. Mitra, Multisensor image fusion using the wavelet transform, Graphical Models and Image Processing 57 (3) (1995) 235–245.
[5] S.G. Nikolov, D.R. Bull, C.N. Canagarajah, M. Halliwell, P.N.T. Wells, Image fusion using a 3-D wavelet transform, in: Proc. 7th International Conference on Image Processing and Its Applications, 1999, pp. 235–239.
[6] P.J. Burt, in: A. Rosenfeld (Ed.), Multiresolution Image Processing and Analysis, Springer-Verlag, Berlin, 1984, pp. 6–35.
[7] P.J. Burt, R.J. Kolczynski, Enhanced image capture through fusion, in: International Conference on Computer Vision, 1993, pp. 173–182.
[8] N. Kingsbury, Image processing with complex wavelets, in: Silverman, J. Vassilicos (Eds.), Wavelets: The Key to Intermittent Information, Oxford University Press, 1999, pp. 165–185.
[9] S.G. Nikolov, P. Hill, D.R. Bull, C.N. Canagarajah, Wavelets for image fusion, in: A. Petrosian, F. Meyer (Eds.), Wavelets in Signal and Image Analysis, Kluwer Academic Publishers, The Netherlands, 2001, pp. 213–244.
[10] J.J. Lewis, R.J. O'Callaghan, S.G. Nikolov, D.R. Bull, C.N. Canagarajah, Region-based image fusion using complex wavelets, in: Proceedings of the 7th International Conference on Information Fusion, Stockholm, Sweden, June 28–July 1, 2004, pp. 555–562.
[11] Z. Zhang, R. Blum, Region-based image fusion scheme for concealed weapon detection, in: ISIF Fusion Conference, Annapolis, MD, July 2002.
[12] G. Piella, A general framework for multiresolution image fusion: from pixels to regions, Information Fusion 4 (2003) 259–280.
[13] G. Piella, A region-based multiresolution image fusion algorithm, in: Proceedings of the 5th International Conference on Information Fusion, Annapolis, MD, July 8–11, 2002, pp. 1557–1564.
[14] S.G. Nikolov, D.R. Bull, C.N. Canagarajah, 2-D image fusion by multiscale edge graph combination, in: Proceedings of the 3rd International Conference on Information Fusion, Paris, France, July 10–13, 2000, pp. 16–22.
[15] G. Piella, A general framework for multiresolution image fusion from pixels to regions, Information Fusion 4 (2003) 259–280.
[16] H. Wei, Z.L. Jing, Pattern Recognition Letters 28 (4) (2007) 493.
[17] S.T. Li, J.T. Kwok, Y.N. Wang, Pattern Recognition Letters 23 (8) (2002) 985.
[18] V. Aslanta, R. Kurban, Expert Systems with Applications 37 (12) (2010) 8861.
[19] Y. Chai, H.F. Li, M.Y. Guo, Optics Communications 248 (5) (2011) 1146.
[20] S.T. Li, B. Yang, Pattern Recognition Letters 29 (9) (2008) 1295.
[21] Q. Zhang, B.L. Guo, Signal Processing 89 (2009) 1334.
[22] B. Yang, S.T. Li, IEEE Transactions on Instrumentation and Measurement 59 (4) (2010) 884.
[23] H. Li, Y. Chai, H. Yin, G. Liu, Multifocus image fusion and denoising scheme based on homogeneity similarity, Optics Communications, September 2011.
ACKNOWLEDGMENTS
The authors thank the Management of SNR Sons Institutions for allowing us to utilize their resources for our research work. Our sincere thanks to our Principal and Secretary, Dr. H. Balakrishnan, M.Com., M.Phil., Ph.D., for his support and encouragement. The second author gratefully thanks his guide, Dr. Anna Saro Vijendran, MCA, M.Phil., Ph.D., Director of MCA, SNR Sons College, Coimbatore-6, for her valuable guidance and for providing proper solutions to critical situations in this research work. The authors also thank the Associate Editor and reviewers for their encouragement and valued comments, which helped improve the quality of the paper.
AUTHORS' BIOGRAPHIES

Dr. Anna Saro Vijendran received the Ph.D. degree in Computer Science from Mother Teresa Women's University, Tamilnadu, India, in 2009. She has 20 years of teaching experience and is currently working as the Director, MCA, at SNR Sons College, Coimbatore, Tamilnadu, India. She has presented and published many papers in international and national conferences and has authored and co-authored more than 30 refereed papers. Her professional interests are image processing, image fusion, data mining and artificial neural networks.

G. Paramasivam is a Ph.D. (part-time) research scholar. He is currently an Assistant Professor at SNR Sons College, Coimbatore, Tamilnadu, India. He has 10 years of teaching experience. His technical interests include image fusion and artificial neural networks.
The role of big data in decision making.
ankuprajapati0525
 
MCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdfMCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdf
Osamah Alsalih
 
Water Industry Process Automation and Control Monthly - May 2024.pdf
Water Industry Process Automation and Control Monthly - May 2024.pdfWater Industry Process Automation and Control Monthly - May 2024.pdf
Water Industry Process Automation and Control Monthly - May 2024.pdf
Water Industry Process Automation & Control
 
CME397 Surface Engineering- Professional Elective
CME397 Surface Engineering- Professional ElectiveCME397 Surface Engineering- Professional Elective
CME397 Surface Engineering- Professional Elective
karthi keyan
 
weather web application report.pdf
weather web application report.pdfweather web application report.pdf
weather web application report.pdf
Pratik Pawar
 
Courier management system project report.pdf
Courier management system project report.pdfCourier management system project report.pdf
Courier management system project report.pdf
Kamal Acharya
 
Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024
Massimo Talia
 
road safety engineering r s e unit 3.pdf
road safety engineering  r s e unit 3.pdfroad safety engineering  r s e unit 3.pdf
road safety engineering r s e unit 3.pdf
VENKATESHvenky89705
 
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxCFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
R&R Consult
 
DESIGN A COTTON SEED SEPARATION MACHINE.docx
DESIGN A COTTON SEED SEPARATION MACHINE.docxDESIGN A COTTON SEED SEPARATION MACHINE.docx
DESIGN A COTTON SEED SEPARATION MACHINE.docx
FluxPrime1
 
Student information management system project report ii.pdf
Student information management system project report ii.pdfStudent information management system project report ii.pdf
Student information management system project report ii.pdf
Kamal Acharya
 
COLLEGE BUS MANAGEMENT SYSTEM PROJECT REPORT.pdf
COLLEGE BUS MANAGEMENT SYSTEM PROJECT REPORT.pdfCOLLEGE BUS MANAGEMENT SYSTEM PROJECT REPORT.pdf
COLLEGE BUS MANAGEMENT SYSTEM PROJECT REPORT.pdf
Kamal Acharya
 
J.Yang, ICLR 2024, MLILAB, KAIST AI.pdf
J.Yang,  ICLR 2024, MLILAB, KAIST AI.pdfJ.Yang,  ICLR 2024, MLILAB, KAIST AI.pdf
J.Yang, ICLR 2024, MLILAB, KAIST AI.pdf
MLILAB
 
Architectural Portfolio Sean Lockwood
Architectural Portfolio Sean LockwoodArchitectural Portfolio Sean Lockwood
Architectural Portfolio Sean Lockwood
seandesed
 
ASME IX(9) 2007 Full Version .pdf
ASME IX(9)  2007 Full Version       .pdfASME IX(9)  2007 Full Version       .pdf
ASME IX(9) 2007 Full Version .pdf
AhmedHussein950959
 
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
MdTanvirMahtab2
 
Forklift Classes Overview by Intella Parts
Forklift Classes Overview by Intella PartsForklift Classes Overview by Intella Parts
Forklift Classes Overview by Intella Parts
Intella Parts
 

Recently uploaded (20)

Automobile Management System Project Report.pdf
Automobile Management System Project Report.pdfAutomobile Management System Project Report.pdf
Automobile Management System Project Report.pdf
 
Event Management System Vb Net Project Report.pdf
Event Management System Vb Net  Project Report.pdfEvent Management System Vb Net  Project Report.pdf
Event Management System Vb Net Project Report.pdf
 
LIGA(E)11111111111111111111111111111111111111111.ppt
LIGA(E)11111111111111111111111111111111111111111.pptLIGA(E)11111111111111111111111111111111111111111.ppt
LIGA(E)11111111111111111111111111111111111111111.ppt
 
The role of big data in decision making.
The role of big data in decision making.The role of big data in decision making.
The role of big data in decision making.
 
MCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdfMCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdf
 
Water Industry Process Automation and Control Monthly - May 2024.pdf
Water Industry Process Automation and Control Monthly - May 2024.pdfWater Industry Process Automation and Control Monthly - May 2024.pdf
Water Industry Process Automation and Control Monthly - May 2024.pdf
 
CME397 Surface Engineering- Professional Elective
CME397 Surface Engineering- Professional ElectiveCME397 Surface Engineering- Professional Elective
CME397 Surface Engineering- Professional Elective
 
weather web application report.pdf
weather web application report.pdfweather web application report.pdf
weather web application report.pdf
 
Courier management system project report.pdf
Courier management system project report.pdfCourier management system project report.pdf
Courier management system project report.pdf
 
Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024
 
road safety engineering r s e unit 3.pdf
road safety engineering  r s e unit 3.pdfroad safety engineering  r s e unit 3.pdf
road safety engineering r s e unit 3.pdf
 
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxCFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
 
DESIGN A COTTON SEED SEPARATION MACHINE.docx
DESIGN A COTTON SEED SEPARATION MACHINE.docxDESIGN A COTTON SEED SEPARATION MACHINE.docx
DESIGN A COTTON SEED SEPARATION MACHINE.docx
 
Student information management system project report ii.pdf
Student information management system project report ii.pdfStudent information management system project report ii.pdf
Student information management system project report ii.pdf
 
COLLEGE BUS MANAGEMENT SYSTEM PROJECT REPORT.pdf
COLLEGE BUS MANAGEMENT SYSTEM PROJECT REPORT.pdfCOLLEGE BUS MANAGEMENT SYSTEM PROJECT REPORT.pdf
COLLEGE BUS MANAGEMENT SYSTEM PROJECT REPORT.pdf
 
J.Yang, ICLR 2024, MLILAB, KAIST AI.pdf
J.Yang,  ICLR 2024, MLILAB, KAIST AI.pdfJ.Yang,  ICLR 2024, MLILAB, KAIST AI.pdf
J.Yang, ICLR 2024, MLILAB, KAIST AI.pdf
 
Architectural Portfolio Sean Lockwood
Architectural Portfolio Sean LockwoodArchitectural Portfolio Sean Lockwood
Architectural Portfolio Sean Lockwood
 
ASME IX(9) 2007 Full Version .pdf
ASME IX(9)  2007 Full Version       .pdfASME IX(9)  2007 Full Version       .pdf
ASME IX(9) 2007 Full Version .pdf
 
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
 
Forklift Classes Overview by Intella Parts
Forklift Classes Overview by Intella PartsForklift Classes Overview by Intella Parts
Forklift Classes Overview by Intella Parts
 

Image fusion algorithms can be categorized into pixel, feature and symbolic levels. Pixel-level algorithms work either in the spatial domain [1, 2] or in the transform domain [3, 4, 5]. Although pixel-level fusion is a local operation, transform-domain algorithms create the fused image globally: changing a single coefficient in the transformed fused image changes all image values in the spatial domain. As a result, while enhancing properties in some image areas, undesirable artifacts may be created in other areas. Algorithms that work in the spatial domain can focus on desired image areas while limiting change elsewhere. Multiresolution analysis is a popular method in pixel-level fusion. Burt [6] and Kolczynski [7] used filters of increasing spatial extent to generate a sequence of images from each source image, separating information observed at different resolutions. At each position in the transform image, the value in the pyramid showing the highest saliency was taken, and an inverse transform of the composite was used to create the fused image. In a similar manner, various wavelet transforms can be used to fuse images. The discrete wavelet transform (DWT) has been used in many applications to fuse images [4]. The dual-tree complex wavelet transform (DT-CWT), first proposed by Kingsbury [8], was improved by Nikolov [9] and Lewis [10] to outperform most other gray-scale image fusion methods.

Feature-based algorithms typically segment the images into regions and fuse the regions using their various properties [10–12]. Feature-based algorithms are usually less sensitive to signal-level noise [13]. Toet [3] first decomposed each input image into a set of perceptually relevant patterns, which were then combined to create a composite image containing all relevant patterns. A mid-level fusion algorithm was developed by Piella [12, 15], where the images are first segmented and the obtained regions are then used to guide the multiresolution analysis. Recently, methods have been proposed to fuse multifocus source images using divided blocks or segmented regions instead of single pixels [16, 17, 18]. All the segmented region-based methods depend strongly on the segmentation algorithm. Unfortunately, the segmentation algorithms, which are of vital importance to fusion quality, are complicated and time-consuming. The common transform approaches for fusion of multifocus images include the discrete wavelet transform (DWT) [19], the curvelet transform [20] and the nonsubsampled contourlet transform (NSCT) [21]. Recently, a new multifocus image fusion and restoration algorithm based on sparse representation has been proposed by Yang and Li [22]. A multifocus image fusion method based on homogeneity similarity and focused region detection was proposed in 2011 by Huafeng Li, Yi Chai, Hongpeng Yin and Guoquan Liu [23].
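To make the transform-domain, choose-max fusion strategy referred to above concrete, the following is a minimal single-level DWT fusion sketch in Python using NumPy and PyWavelets; the function name dwt_max_fusion, the Haar ('db1') wavelet and the averaging of the approximation band are illustrative assumptions of this sketch, not details taken from any cited method.

```python
import numpy as np
import pywt

def dwt_max_fusion(img_a, img_b, wavelet="db1"):
    """Fuse two registered grayscale images with a single-level 2-D DWT and a
    choose-max rule on the detail coefficients (illustrative sketch only)."""
    ca_a, (ch_a, cv_a, cd_a) = pywt.dwt2(img_a.astype(float), wavelet)
    ca_b, (ch_b, cv_b, cd_b) = pywt.dwt2(img_b.astype(float), wavelet)

    def pick(x, y):
        # Keep the coefficient with the larger magnitude at every position.
        return np.where(np.abs(x) >= np.abs(y), x, y)

    # Average the low-frequency band; choose-max on each detail band.
    ca_f = (ca_a + ca_b) / 2.0
    fused = pywt.idwt2((ca_f, (pick(ch_a, ch_b), pick(cv_a, cv_b), pick(cd_a, cd_b))), wavelet)
    return np.clip(fused, 0, 255)
```

Called as dwt_max_fusion(img_a, img_b) on two registered grayscale arrays of equal size, the sketch returns a fused array of the same size.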
Most of the traditional image fusion methods are based on the assumption that the source images are noise free, and they perform well when this assumption is satisfied. Traditional noisy-image fusion methods usually denoise the source images first and then fuse the denoised images. The multifocus image fusion and restoration algorithm proposed by Yang and Li [22] performs well with both noisy and noise-free images, and outperforms traditional fusion methods in terms of fusion quality and noise reduction in the fused output. However, this scheme is complicated and time-consuming, especially when the source images are noise free. The image fusion algorithm based on homogeneity similarity proposed by Huafeng Li, Yi Chai, Hongpeng Yin and Guoquan Liu [23] aims at solving the fusion problem of clean and
noisy multifocus images. Further, in any region-based fusion algorithm, the fusion results are affected by the performance of the segmentation algorithm. The various segmentation algorithms are based on thresholding and clustering, but the partition criteria used by these algorithms often generate undesired segmented regions.

In order to overcome the above problems, a new method for segmentation using Self-organizing Feature Maps, which consequently helps in fusing images dynamically to the desired degree of information retrieval depending on the application, is proposed in this paper. The proposed algorithm is suitable for any type of image, either noisy or clean. The method is simple, and since the mapping of the image is carried out by Self-organizing Feature Maps, all the information in the images is preserved. The images used in image fusion should already be registered. A novel image fusion algorithm based on the self organizing feature map is proposed in this paper. The outline of this paper is as follows: Section 2 briefly introduces the Self-organizing Feature Map. Section 3 describes the algorithm for codebook generation using Self Organizing Feature Maps. Section 4 describes the proposed method of fusion. Section 5 details the experimental analysis and Section 6 gives the conclusion of this paper.

2. SELF-ORGANIZING FEATURE MAP

The Self-organizing Feature Map (SOM) is a special class of Artificial Neural Network based on competitive learning. It is an ingenious Artificial Neural Network built around a one- or two-dimensional lattice of neurons for capturing the important features contained in the input. The Kohonen technique creates a network that stores information in such a way that any topological relationships within the training set are maintained. In addition to clustering the data into distinct regions, regions of similar properties are put to good use by the Kohonen maps. The primary benefit is that the network learns autonomously, without the requirement that the system be well defined. The system does not stop learning but continues to adapt to changing inputs; this plasticity allows it to adapt as the environment changes. A particular advantage over other artificial neural networks is that the system appears well suited to parallel computation. Indeed, the only global knowledge required by each neuron is the current input to the network and the position within the array of the neuron which produced the maximum output.

Kohonen networks are grids of computing elements, which allows identifying the immediate neighbours of a unit. This is very important, since during learning the weights of computing units and their neighbours are updated. The objective of such a learning approach is that neighbouring units learn to react to closely related signals. A Self-organizing Feature Map does not need a target output to be specified, unlike many other types of network. Instead, where the node weights match the input vector, that area of the lattice is selectively optimized to more closely resemble the data of the class of which the input vector is a member. From an initial distribution of random weights, and over many iterations, the Self-organizing Feature Map eventually settles into a map of stable zones. Each zone is effectively a feature classifier, and the output is a type of feature map of the input space.
In the trained network, the blocks of similar values represent the individual zones. Any new, previously unseen input vectors presented to the network will stimulate nodes in the zone with similar weight vectors. Training occurs in several steps and over many iterations. Each node's weights are initialized. A vector is chosen at random from the set of training data and presented to the lattice. Every node is examined to calculate which one's weights are most like the input vector. The winning node is commonly known as the Best Matching Unit (BMU). The
radius of the neighbourhood of the Best Matching Unit is now calculated. This value starts large, typically set to the 'radius' of the lattice, and diminishes at each time-step. Any nodes found within this radius are deemed to be inside the Best Matching Unit's neighbourhood. Each neighbouring node's weights are adjusted to make them more like the input vector; the closer a node is to the Best Matching Unit, the more its weights get altered. The procedure is repeated for all input vectors for a number of iterations.

Prior to training, each node's weights must be initialized, typically to small standardized random values. To determine the Best Matching Unit, one method is to iterate through all the nodes and calculate the Euclidean distance between each node's weight vector and the current input vector. The node with the weight vector closest to the input vector is tagged as the Best Matching Unit. After the Best Matching Unit has been determined, the next step is to calculate which of the other nodes are within the Best Matching Unit's neighbourhood; all these nodes will have their weight vectors altered in the next step. A unique feature of the Kohonen learning algorithm is that the area of the neighbourhood shrinks over time to the size of just one node. Once the radius is known, all the nodes in the lattice are examined to determine whether they lie within the radius or not. If a node is found to be within the neighbourhood, its weight vector is adjusted. Every node within the Best Matching Unit's neighbourhood (including the Best Matching Unit itself) has its weight vector adjusted. In a Self-organizing Feature Map, the neurons are placed at the lattice nodes; the lattice may take different shapes: rectangular grid, hexagonal, or even random topology.

Figure 1. Self Organizing Feature Map Architecture

The neurons become selectively tuned to various input patterns in the course of the competitive learning process. The locations of the neurons so tuned (i.e. the winning neurons) tend to become ordered with respect to each other in such a way that a meaningful coordinate system for different input features is created over the lattice.
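As an illustration of the training procedure just described (random initialisation, Euclidean-distance search for the Best Matching Unit, a shrinking neighbourhood and weight adjustment), here is a minimal NumPy sketch; the Gaussian neighbourhood function, the exponential decay schedules and the function name train_som are assumptions of this sketch, not details prescribed by the paper.

```python
import numpy as np

def train_som(data, grid=(16, 16), iters=5000, lr0=0.5, seed=0):
    """Minimal SOM training loop: pick a random sample, find the Best Matching
    Unit, and pull the BMU and its (shrinking) neighbourhood towards the sample."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    dim = data.shape[1]
    weights = rng.random((rows, cols, dim))      # small random initial weights
    radius0 = max(rows, cols) / 2.0              # initial neighbourhood radius
    yy, xx = np.mgrid[0:rows, 0:cols]            # lattice coordinates of the neurons

    for t in range(iters):
        x = data[rng.integers(len(data))]
        # Best Matching Unit: node whose weight vector is closest to the input.
        dists = np.linalg.norm(weights - x, axis=2)
        bi, bj = np.unravel_index(np.argmin(dists), dists.shape)
        # Radius and learning rate decay over time (assumed exponential schedules).
        radius = radius0 * np.exp(-t / iters)
        lr = lr0 * np.exp(-t / iters)
        # Gaussian neighbourhood centred on the BMU; nearby nodes move the most.
        grid_dist2 = (yy - bi) ** 2 + (xx - bj) ** 2
        h = np.exp(-grid_dist2 / (2.0 * radius ** 2))
        weights += lr * h[..., None] * (x - weights)
    return weights
```

Feeding train_som a matrix whose rows are training vectors (for example flattened image blocks) returns the trained lattice of weight vectors.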
SOM Neural Network Training GUI

3. CODE BOOK GENERATION USING SELF-ORGANIZING FEATURE MAP

A two-dimensional input image pattern is to be mapped onto a two-dimensional spatial organization of neurons located at positions (i, j) on a rectangular lattice of size n x n. Thus, for a set of n x n points on the two-dimensional plane, there are n^2 neurons Nij, 1 ≤ i, j ≤ n, and for each neuron Nij there is an associated weight vector denoted Wij. In the Self-organizing Feature Map, the neuron with minimum distance between its weight vector Wij and the input vector X is the winner neuron (k, l), identified using the following equation:

||X - Wkl|| = min_{1 ≤ i ≤ n} min_{1 ≤ j ≤ n} ||X - Wij||    (1)

After the position of the winner neuron is located in the two-dimensional plane, the winner neuron and its neighbourhood neurons are adjusted using the Self-organizing Feature Map learning rule:

Wij(t+1) = Wij(t) + α (X - Wij(t))    (2)

where α is Kohonen's learning rate, which controls the stability and the rate of convergence. The winner weight vector reaches equilibrium when Wij(t+1) = Wij(t). The neighbourhood of neuron Nij is chosen arbitrarily; it can be a square or a circular zone around Nij of arbitrarily chosen radius.

Algorithm

1: The image A(i, j) of size 2^N x 2^N is divided into blocks, each of size 2^n x 2^n pixels, n < N.
2: A Self-organizing Feature Map network is created with a codebook consisting of M neurons (mi, i = 1, 2, ..., M). The M neurons are arranged in a hexagonal lattice, and for each neuron there is an associated weight vector Wi = [wi1 wi2 ... wi,2^{2n}].

3: The weight vectors of all the neurons in the lattice are initialized with small random values.

4: The learning input patterns (image blocks) are applied to the network. Kohonen's competitive learning process identifies the winning neurons that best match the input blocks. The best-matching criterion is the minimum Euclidean distance between the vectors. Hence, the mapping process Q that identifies the neuron best matching the input block X is determined by the following condition:

Q(X) = arg min_i ||X - Wi||,  i = 1, 2, ..., M    (3)

5: At equilibrium, there are m winner neurons per block, or m codewords per block. Hence, the whole image is represented using m codewords.

6: The indices of the obtained codewords are stored. The set of indices of all winner neurons is stored along with the codebook.

7: The reconstructed image blocks, of the same size as the original ones, are restored from the indices of the codewords.

4. PROPOSED METHOD OF FUSION

Consider two pre-registered 8-bit grayscale images A and B of the same scene or object. The first image A is decomposed into sub-images and given as input to the Self-organizing Feature Map neural network. In order to preserve all the gray values of the image, the codebook size for an 8-bit image is chosen to be the maximum possible number of gray levels, namely 256. Since the weight values after training have to represent the input gray levels, random values ranging from 0 to 255 are assigned as initial weights. When sub-images of size 4 x 4 are taken as the input vector, there are 16 nodes in the input layer and the Kohonen layer consists of 256 nodes arranged in a 16 x 16 array. The input layer takes as input the gray-level values of all 16 pixels of the gray-level block. The weights between node j of the Kohonen layer and the input layer form the weight matrix; for all 256 nodes we have Wji for j = 0, 1, ..., 255 and i = 0, 1, ..., 15. Once the weights are initialized randomly, the network is ready for training. The image block vectors are mapped to the weight vectors. The neighbourhood is initially chosen, say 5 x 5, and then reduced gradually to find the best matching node. The Self-organizing Feature Map generates the codebook according to the weight updates. The set of indices of all the winner neurons for the blocks is stored for retrieval along with the codebook.

The image A is retrieved by generating the weight vectors of each neuron from the index values, which gives the pixel values of the image. For each index value the connected neuron is found, and the weight vector of that neuron to the input layer is generated. The values of the neuron weights are the gray levels of the block, and the gray-level values thus obtained are displayed as pixels. Thus image A is recovered in its original form. Now image B is given as input to the neural network. Since the features in images A and B are the same, when B is given as input to the trained network the codebook for image B is generated in minimum simulation time and without loss of information.
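Before the fusion rule is completed below, the block-wise codebook generation of Algorithm 1, in the 4 x 4 block and 16 x 16 lattice configuration just described, can be sketched as follows; the helper names (image_to_blocks, build_codebook, blocks_to_image) and the reuse of the hypothetical train_som function from the earlier sketch are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def image_to_blocks(img, n=4):
    """Split a grayscale image (height and width divisible by n) into
    non-overlapping n x n blocks, each flattened to a vector of n*n gray values."""
    h, w = img.shape
    return img.reshape(h // n, n, w // n, n).swapaxes(1, 2).reshape(-1, n * n).astype(float)

def blocks_to_image(blocks, shape, n=4):
    """Inverse of image_to_blocks: reassemble flattened n x n blocks into an image."""
    h, w = shape
    return blocks.reshape(h // n, w // n, n, n).swapaxes(1, 2).reshape(h, w)

def build_codebook(img, n=4, grid=(16, 16)):
    """Train a SOM on the image blocks and return the codebook (flattened neuron
    weights) together with the index of the winning codeword for every block."""
    blocks = image_to_blocks(img, n)
    weights = train_som(blocks, grid=grid)    # hypothetical helper from the earlier sketch
    codebook = weights.reshape(-1, n * n)     # e.g. 256 codewords of length 16
    dists = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2)
    indices = np.argmin(dists, axis=1)        # winning codeword index per block
    return codebook, indices
```

For an 8-bit 128 x 128 image this yields 1024 blocks of 16 gray values each, quantised against a 256-entry codebook, matching the configuration described above.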
Also, since the images are registered, the index values, which represent the positions of pixels, are the same in the two images. For the same index value, the neuron weights of the two images are compared and the higher value is selected. The procedure is repeated for all indices until a weight vector of optimal strength is generated. The image retrieved from this optimal weight vector is the fused image, which represents all the gray values at their optimal levels. The procedure can be repeated in various combinations with multiple images until the desired result is achieved.

5. EXPERIMENTAL ANALYSIS AND RESULTS

The experimental analysis of the proposed algorithm has been performed using a large number of images with different content and the objective parameters discussed in this paper. In order to evaluate the fusion performance, the first experiment is performed on one set of perfectly registered multifocus source images. The dataset consists of multifocus images. The use of different pairs of multifocus images of different scenes, covering text, text+object and object-only categories, allows the proposed algorithm to be evaluated in a true sense. The proposed algorithm is simulated in Matlab 7 and is evaluated based on the quality of the fused image obtained. The robustness of the proposed algorithm, that is, its ability to obtain consistently good-quality fused images across different categories of images such as standard images, medical images and satellite images, has been evaluated. The average computational time to generate the final fused image from source images of size 128 x 128 using the proposed algorithm is 96 seconds. The quality of the fused image has been compared with the source images in terms of RMSE and PSNR values. The experimental results obtained for fusion of grayscale images using the proposed algorithm are shown in Table 1. The source multifocus images and the fused images of different types are shown in Figures 1 to 4. The histograms and image differences for the Lena and Bacteria images are shown in Figures 5 and 6 respectively.

Table 1.a RMSE values of the source and fused images

Images          Image A   Image B   Fused Image
Lena            6.2856    6.0162    3.3205
Bacteria        6.8548    6.421     6.227
Satellite map   4.0043    5.325     3.8171
MRI-Head        4.3765    4.9824    1.5163

Table 1.b PSNR values (dB) of the source and fused images

Images          Image A   Image B   Fused Image
Lena            30.381    31.701    38.0481
Bacteria        28.194    27.989    32.5841
Satellite map   32.077    31.582    33.2772
MRI-Head        30.787    33.901    45.3924
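One possible reading of the fusion rule of Section 4 (for each block index, compare the corresponding codeword weights of the two registered images and keep the higher values) and of the RMSE and PSNR measures reported in Table 1 is sketched below. It reuses the hypothetical helpers from the previous sketches, trains a separate codebook for each image for simplicity (the paper instead presents image B to the network already trained on image A), and the element-wise maximum is an assumption about how "the higher value is selected".

```python
import numpy as np

def fuse_images(img_a, img_b, n=4, grid=(16, 16)):
    """Fuse two registered grayscale images: encode each with a SOM codebook and,
    for every block position, keep the element-wise maximum of the two codewords."""
    cb_a, idx_a = build_codebook(img_a, n, grid)   # hypothetical helper from the previous sketch
    cb_b, idx_b = build_codebook(img_b, n, grid)   # simplification; the paper reuses the network trained on A
    fused_blocks = np.maximum(cb_a[idx_a], cb_b[idx_b])   # registered images -> aligned block positions
    return blocks_to_image(fused_blocks, img_a.shape, n)

def rmse(ref, img):
    """Root-mean-square error between a reference image and a test image."""
    return float(np.sqrt(np.mean((ref.astype(float) - img.astype(float)) ** 2)))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images (undefined for identical images)."""
    return float(20.0 * np.log10(peak / rmse(ref, img)))
```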
Figure 1. Lena; Figure 2. Bacteria; Figure 3. Satellite map; Figure 4. MRI-Head. In each figure, images (a) and (b) are the multifocus source images and image (c) is the fused version of (a) and (b).
Figure 5. Histograms for the Lena and Bacteria images.

Figure 6. Difference images between the fused image and the source images: (g) difference between (a) and (c), (h) difference between (b) and (c), (i) difference between (d) and (f), (j) difference between (e) and (f).

6. CONCLUSIONS

In this paper a simple method of fusion of images has been proposed. The advantage of the Self-organizing Feature Map is that, after training, the weight vectors not only represent the image block cluster centroids but also preserve two main features. Topologically neighbouring blocks in the input space are mapped to topologically neighbouring neurons in the codebook, and the distribution of the weight vectors of the neurons reflects the distribution of the vectors in the input space. Hence there will not be any loss of information in the process. The proposed method is dynamic in the sense that, depending on the application, the optimal weight vectors are generated, and redundant values as well as noise can be ignored in the process, resulting in less simulation time. The method can also be extended to colour images.

REFERENCES

[1] S. Li, J.T. Kwok, Y. Wang, Using the discrete wavelet frame transform to merge Landsat TM and SPOT panchromatic images, Information Fusion 3 (2002) 17–23.
[2] A. Goshtasby, 2-D and 3-D Image Registration for Medical, Remote Sensing, and Industrial Applications, Wiley Press, 2005.
[3] A. Toet, Hierarchical image fusion, Machine Vision and Applications 3 (1990) 1–11.
[4] H. Li, S. Manjunath, S. Mitra, Multisensor image fusion using the wavelet transform, Graphical Models and Image Processing 57 (3) (1995) 235–245.
[5] S.G. Nikolov, D.R. Bull, C.N. Canagarajah, M. Halliwell, P.N.T. Wells, Image fusion using a 3-D wavelet transform, in: Proc. 7th International Conference on Image Processing and Its Applications, 1999, pp. 235–239.
[6] P.J. Burt, in: A. Rosenfeld (Ed.), Multiresolution Image Processing and Analysis, Springer-Verlag, Berlin, 1984, pp. 6–35.
[7] P.J. Burt, R.J. Kolczynski, Enhanced image capture through fusion, in: International Conference on Computer Vision, 1993, pp. 173–182.
[8] N. Kingsbury, Image processing with complex wavelets, in: Silverman, J. Vassilicos (Eds.), Wavelets: The Key to Intermittent Information, Oxford University Press, 1999, pp. 165–185.
[9] S.G. Nikolov, P. Hill, D.R. Bull, C.N. Canagarajah, Wavelets for image fusion, in: A. Petrosian, F. Meyer (Eds.), Wavelets in Signal and Image Analysis, Kluwer Academic Publishers, The Netherlands, 2001, pp. 213–244.
[10] J.J. Lewis, R.J. O'Callaghan, S.G. Nikolov, D.R. Bull, C.N. Canagarajah, Region-based image fusion using complex wavelets, in: Proceedings of the 7th International Conference on Information Fusion, Stockholm, Sweden, June 28–July 1, 2004, pp. 555–562.
[11] Z. Zhang, R. Blum, Region-based image fusion scheme for concealed weapon detection, in: ISIF Fusion Conference, Annapolis, MD, July 2002.
[12] G. Piella, A general framework for multiresolution image fusion: from pixels to regions, Information Fusion 4 (2003) 259–280.
[13] G. Piella, A region-based multiresolution image fusion algorithm, in: Proceedings of the 5th International Conference on Information Fusion, Annapolis, MD, July 8–11, 2002, pp. 1557–1564.
[14] S.G. Nikolov, D.R. Bull, C.N. Canagarajah, 2-D image fusion by multiscale edge graph combination, in: Proceedings of the 3rd International Conference on Information Fusion, Paris, France, July 10–13, 2000, vol. 1, pp. 16–22.
[15] G. Piella, A general framework for multiresolution image fusion from pixels to regions, Information Fusion 4 (2003) 259–280.
[16] H. Wei, Z.L. Jing, Pattern Recognition Letters 28 (4) (2007) 493.
[17] S.T. Li, J.T. Kwok, Y.N. Wang, Pattern Recognition Letters 23 (8) (2002) 985.
[18] V. Aslanta, R. Kurban, Expert Systems with Applications 37 (12) (2010) 8861.
[19] Y. Chai, H.F. Li, M.Y. Guo, Optics Communications 248 (5) (2011) 1146.
[20] S.T. Li, B. Yang, Pattern Recognition Letters 29 (9) (2008) 1295.
[21] Q. Zhang, B.L. Guo, Signal Processing 89 (2009) 1334.
[22] B. Yang, S.T. Li, IEEE Transactions on Instrumentation and Measurement 59 (4) (2010) 884.
[23] H. Li, Y. Chai, H. Yin, G. Liu, Multifocus image fusion and denoising scheme based on homogeneity similarity, Optics Communications, September 2011.

ACKNOWLEDGMENTS

The authors thank the Management of SNR Sons Institutions for allowing us to utilize their resources for our research work. Our sincere thanks to our Principal & Secretary Dr. H. Balakrishnan, M.Com., M.Phil., Ph.D., for his support and encouragement of our research work. My grateful thanks to my guide Dr.
Anna Saro Vijendran, MCA., M.Phil., Ph.D., Director of MCA, SNR Sons College, Coimbatore-6, for her valuable guidance and for giving me many suggestions and proper solutions for critical situations in our research work. The authors thank the Associate Editor and reviewers for their encouragement and valued comments, which helped in improving the quality of the paper.
AUTHORS' BIOGRAPHY

Dr. Anna Saro Vijendran received the Ph.D. degree in Computer Science from Mother Teresa Women's University, Tamilnadu, India, in 2009. She has 20 years of teaching experience and is currently working as the Director, MCA, at SNR Sons College, Coimbatore, Tamilnadu, India. She has presented and published many papers in international and national conferences and has authored and co-authored more than 30 refereed papers. Her professional interests are image processing, image fusion, data mining and artificial neural networks.

G. Paramasivam is a Ph.D. (part-time) research scholar. He is currently an Assistant Professor at SNR Sons College, Coimbatore, Tamilnadu, India. He has 10 years of teaching experience. His technical interests include image fusion and artificial neural networks.