Wavelet-based Image Fusion
Term Paper
Report
CE 672
Machine Data Processing of Remotely Sensed Data
Submitted by: 
Umed Paliwal
10327774
Abstract
The objective of image fusion is to combine information from multiple images of the same
scene. The result of image fusion is a new image that is more suitable for human and
machine perception, or for further image-processing tasks such as segmentation, feature
extraction and object recognition. Different fusion methods have been proposed in the
literature, including multiresolution analysis. This paper is on image fusion based on wavelet
decomposition, i.e. a multiresolution image fusion approach. Images with the same or
different resolutions can be fused, whether from range sensing, visual CCD, infrared, thermal
or medical sensors. Over the past decade, a significant amount of research has been
conducted concerning the application of wavelet transforms in image fusion. In this paper, an
introduction to wavelet transform theory and an overview of image fusion techniques are
given. The results from wavelet-based methods can be improved further by applying more
sophisticated models for injecting detail information; however, these schemes often have
greater setup requirements.
Table of Figures
Figure 2.1 Three-level one-dimensional discrete wavelet transform
Figure 2.2 Image at decomposition levels 1 and 2
Figure 3.1 Flowchart of the methodology
Figure 4.1 The design of the user interface
Figure 5.1 Fusion example
1. Introduction
It is often desirable to fuse images from different sources, acquired at different times, or
otherwise having different characteristics. Various methods have been developed to perform
image fusion. The standard image fusion techniques, such as those that use the IHS, PCA
and Brovey transforms, can however often produce poor results, at least in comparison with
the ideal output of the fusion. New approaches, or improvements on existing approaches, are
regularly being proposed to address particular problems with the standard techniques. Most
recently, the potential benefits of wavelet-based image fusion methods have been explored in
a variety of fields and for a variety of purposes.
Wavelet theory has developed since the beginning of the last century. It was first applied to
signal processing in the 1980s, and over the past decade it has been recognized as having
great potential in image processing applications. Wavelet transforms are essentially
extensions of the idea of high-pass filtering. In visual terms, image detail is a result of high
contrast between features, for example a light rooftop and dark ground, and high contrast in
the spatial domain corresponds to high values in the frequency domain. Frequency
information can be extracted by applying Fourier transforms; however, it is then no longer
associated with any spatial information. Wavelet transforms can therefore be more useful
than Fourier transforms, since they are based on functions that are localized in both space
and frequency. The detail information that is extracted from one image using wavelet
transforms can be injected into another image using one of a number of methods, for example
substitution, addition, or a selection method based on either frequency or spatial context.
Furthermore, the wavelet function used in the transform can be designed to have specific
properties that are useful in the particular application of the transform.
Experiments with wavelet-based fusion schemes have, for the most part, produced positive
results, although there are some negative aspects, such as the introduction of artifacts in the
fused image when decimated algorithms are used. In earlier studies, wavelet-based schemes
were generally assessed in comparison to standard schemes; more recent studies propose
hybrid schemes, which use wavelets to extract the detail information from one image and
standard image transformations to inject it into another image, or propose improvements in
the method of injecting information. These approaches seem to achieve better results than
either the standard image fusion schemes (e.g. IHS, PCA) or the standard wavelet-based
image fusion schemes (e.g. substitution, addition); however, they involve greater
computational complexity.
Wavelet theory and wavelet analysis is a relatively recent branch of mathematics. The first,
and simplest, wavelet was developed by Alfred Haar in 1909. The Haar wavelet belongs to
the group of wavelets known as Daubechies wavelets, which are named after Ingrid
Daubechies, who proved the existence of wavelet families whose scaling functions have
certain useful properties, namely compact support over an interval, at least one non-vanishing
moment, and orthogonal translates. Because of its simplicity, the Haar wavelet is
useful for illustrating the basic concepts of wavelet theory but has limited utility in
applications.
φ_Haar(x) = 1,  0 ≤ x < 1
            0,  otherwise          (1)

ψ_Haar(x) = 1,   0 ≤ x < 1/2
            −1,  1/2 ≤ x < 1
            0,   otherwise         (2)
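As an illustration, Eqs. (1) and (2) translate directly into code. The following is a minimal Python sketch (the report's own implementation is in MATLAB; the function names here are illustrative only):

```python
def haar_phi(x):
    """Haar scaling (father) function of Eq. (1): 1 on [0, 1), 0 elsewhere."""
    return 1.0 if 0.0 <= x < 1.0 else 0.0

def haar_psi(x):
    """Haar wavelet (mother) function of Eq. (2): +1 on [0, 1/2), -1 on [1/2, 1)."""
    if 0.0 <= x < 0.5:
        return 1.0
    if 0.5 <= x < 1.0:
        return -1.0
    return 0.0
```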
Over the past decade, there has been an increasing amount of research into the applications of
wavelet transforms to remote sensing, particularly in image fusion. It has been found that
wavelets can be used to extract detail information from one image and inject it into another,
since this information is contained in high frequencies and wavelets can be used to select a set
of frequencies in both time and space. The resulting merged image, which can in fact be a
combination of any number of images, contains the best characteristics of all the original
images.
2. Wavelet Transform Theory
2.1. Wavelet family
Wavelets can be described in terms of two groups of functions: wavelet functions and scaling
functions. It is also common to refer to them as families: the wavelet function is the “mother”
wavelet, the scaling function is the “father” wavelet, and transformations of the parent
wavelets are “daughter” and “son” wavelets.
2.1.1. Wavelet functions
Generally, a wavelet family is described in terms of its mother wavelet, denoted as Ψ(x). The
mother wavelet must satisfy certain conditions to ensure that its wavelet transform is stably
invertible. These conditions are:
∫ |ψ(x)|² dx = 1          (3)

∫ |ψ(x)| dx < ∞           (4)

∫ ψ(x) dx = 0             (5)
The conditions specify that the function must be an element of L2(R), and in fact must have
normalized energy, that it must be an element of L1(R), and that it have zero mean. The third
condition allows the addition of wavelet coefficients without changing the total flux of the
signal. Other conditions might be specified according to the application. For example, the
wavelet function might need to be continuous, or continuously differentiable, or it might
need to have compact support over a specific interval, or a certain number of vanishing
moments. Each of these conditions affects the results of the wavelet transform.
To apply a wavelet function, it must be scaled and translated. Generally, a normalization
factor is also applied so that the daughter wavelet inherits all of the properties of the mother
wavelet. A daughter wavelet ψ_{a,b}(x) is defined by the equation

ψ_{a,b}(x) = |a|^(−1/2) ψ((x − b)/a)          (6)

where a, b ∈ R and a ≠ 0; a is called the scaling or dilation factor and b is called the translation
factor. In most practical applications it is necessary to place limits on the values of a and b. A
common choice is a = 2^(−j) and b = 2^(−j) k, where j and k are integers. The resulting equation is

ψ_{j,k}(x) = 2^(j/2) ψ(2^j x − k)          (7)

This choice of dilation and translation factors is called dyadic sampling. Changing j by one
corresponds to changing the dilation by a factor of two, and changing k by one corresponds
to a shift of 2^(−j).
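Eqs. (6) and (7) can be sketched in a few lines of Python, using the Haar mother wavelet as a concrete example (helper names are illustrative, not part of the report's MATLAB code). With dyadic sampling a = 2^(−j), b = 2^(−j) k, the daughter wavelet of Eq. (6) reduces to the form of Eq. (7):

```python
def haar_psi(x):
    """Haar mother wavelet, used here only as a concrete example."""
    if 0.0 <= x < 0.5:
        return 1.0
    if 0.5 <= x < 1.0:
        return -1.0
    return 0.0

def daughter(psi, a, b):
    """Eq. (6): psi_{a,b}(x) = |a|^(-1/2) * psi((x - b) / a)."""
    return lambda x: abs(a) ** -0.5 * psi((x - b) / a)

def dyadic(psi, j, k):
    """Eq. (7): psi_{j,k}(x) = 2^(j/2) * psi(2^j * x - k)."""
    return lambda x: 2.0 ** (j / 2) * psi(2.0 ** j * x - k)
```

For a = 2^(−1), b = 0 the two constructions agree at every x, which confirms that Eq. (7) is just Eq. (6) under dyadic sampling.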
2.1.2. Scaling functions
In discrete wavelet transforms, a scaling function, or father wavelet, is needed to cover the
low frequencies. If the mother wavelet is regarded as a high-pass filter, then the father
wavelet, denoted as φ(x), should be a low-pass filter. To ensure that this is the case, it cannot
have any vanishing moments. It is useful to specify that, in fact, the father wavelet have a
zeroth moment, or mean, equal to one:

∫ φ(x) dx = 1          (8)

Multiresolution analysis makes use of a closed and nested sequence of subspaces, which is
dense in L2(R): each subsequent subspace is at a higher resolution and contains all the
subspaces at lower resolutions. Since the father wavelet is in V0, it, as well as the mother
wavelet, can be expressed as a linear combination of the basis functions φ_{1,k}(x) for V1:

φ(x) = Σ_k h_k φ_{1,k}(x)          (9)

ψ(x) = Σ_k l_k φ_{1,k}(x)          (10)
2.2. Wavelet transforms
Wavelet transforms provide a framework in which a signal is decomposed, with each level
corresponding to a coarser resolution, or lower frequency band. There are two main groups
of transforms, continuous and discrete. Discrete transforms are more commonly used and can
be subdivided into various categories. Although a review of the literature produces a number
of different names and approaches for wavelet transformations, most fall into one of the
following three categories: decimated, undecimated, and non-separated.
2.2.1. Continuous wavelet transform
A continuous wavelet transform is performed by applying an inner product to the signal and
the wavelet functions. The dilation and translation factors are elements of the real line. For a
particular dilation a and translation b, the wavelet coefficient Wf(a,b) for a signal f can be
calculated as
W_f(a, b) = ⟨f, ψ_{a,b}⟩ = ∫ f(x) ψ*_{a,b}(x) dx          (11)

Wavelet coefficients represent the information contained in a signal at the corresponding
dilation and translation. The original signal can be reconstructed by applying the inverse
transform:

f(x) = (1/C_w) ∫∫ W_f(a, b) ψ_{a,b}(x) db (da/a²)          (12)

where C_w is the normalization factor of the mother wavelet. Although the continuous wavelet
transform is simple to describe mathematically, both the signal and the wavelet function
must have closed forms, which makes it difficult or impractical to apply. The discrete
wavelet transform is used instead.
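Eq. (11) can still be approximated numerically by replacing the integral with a Riemann sum. A small Python sketch (illustrative only; for a real-valued wavelet the complex conjugate in Eq. (11) can be dropped). Taking f equal to the Haar wavelet itself, with a = 1 and b = 0, the coefficient should approximate the wavelet's unit energy from Eq. (3):

```python
def haar_psi(x):
    if 0.0 <= x < 0.5:
        return 1.0
    if 0.5 <= x < 1.0:
        return -1.0
    return 0.0

def cwt_coeff(f, psi, a, b, lo=-2.0, hi=2.0, n=4000):
    """Riemann-sum approximation of Eq. (11) for a real-valued wavelet:
    W_f(a, b) = integral of f(x) * |a|^(-1/2) * psi((x - b) / a) dx."""
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + i * dx
        total += f(x) * abs(a) ** -0.5 * psi((x - b) / a) * dx
    return total

W = cwt_coeff(haar_psi, haar_psi, 1.0, 0.0)   # approximately 1.0 by Eq. (3)
```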
2.2.2. Discrete wavelet transform
The term discrete wavelet transform (DWT) is a general term, encompassing several different
methods. It must be noted that the signal itself is continuous; discrete refers to discrete sets of
dilation and translation factors and discrete sampling of the signal. For simplicity, it will be
assumed that the dilation and translation factors are chosen so as to have dyadic sampling,
but the concepts can be extended to other choices of factors.
At a given scale J, a finite number of translations are used in applying multiresolution
analysis to obtain a finite number of scaling and wavelet coefficients. The signal can be
represented in terms of these coefficients as
f(x) = Σ_k c_{Jk} φ_{Jk}(x) + Σ_{j=1}^{J} Σ_k d_{jk} ψ_{jk}(x)          (13)

where c_{Jk} are the scaling coefficients and d_{jk} are the wavelet coefficients. The first term in Eq.
(13) gives the low-resolution approximation of the signal while the second term gives the
detail information at resolutions from the original down to the current resolution J. The
process of applying the DWT can be represented as a bank of filters. At each level of
decomposition, the signal is split into high frequency and low frequency components; the low
frequency components can be further decomposed until the desired resolution is reached.
When multiple levels of decomposition are applied, the process is referred to as
multiresolution decomposition. In practice when wavelet decomposition is used for image
fusion, one level of decomposition can be sufficient, but this depends on the ratio of the
spatial resolutions of the images being fused (for dyadic sampling, a 1:2 ratio is needed).
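The filter-bank view of Eq. (13) can be sketched in a few lines of Python (the report's implementation is in MATLAB; the Haar filters and function names here are illustrative). Each level splits the current approximation into a coarser approximation cA and detail coefficients cD, and only cA is decomposed further:

```python
from math import sqrt

def haar_step(signal):
    """One level of the decimated Haar filter bank:
    low-pass + downsample -> cA, high-pass + downsample -> cD."""
    cA = [(signal[i] + signal[i + 1]) / sqrt(2) for i in range(0, len(signal), 2)]
    cD = [(signal[i] - signal[i + 1]) / sqrt(2) for i in range(0, len(signal), 2)]
    return cA, cD

def haar_decompose(signal, levels):
    """Multiresolution decomposition: keep splitting the low-frequency part."""
    details = []
    cA = list(signal)
    for _ in range(levels):
        cA, cD = haar_step(cA)
        details.append(cD)          # details[0] is the finest level
    return cA, details
```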
2.2.2.1. Decimated
The conventional DWT can be applied using either a decimated or an undecimated
algorithm. In the decimated algorithm, the signal is downsampled after each level of
transformation. In the case of a two-dimensional image, downsampling is performed by
keeping one out of every two rows and columns, making the transformed image one quarter
of the original size and half the original resolution. The decimated algorithm can therefore be
represented visually as a pyramid, where the spatial resolution becomes coarser as the image
becomes smaller. Further discussion of the DWT will be primarily with respect to two-dimensional
images, keeping in mind that the concepts can be simplified to the one-dimensional
case.
The wavelet and scaling filters are one-dimensional, necessitating a two-stage process for
each level in the multiresolution analysis: the filtering and downsampling are first applied to
the rows of the image and then to its columns. This produces four images at the lower
resolution: one approximation image and three wavelet coefficient, or detail, images. A, HD,
VD and DD are the sub-images produced after one level of transformation. The A sub-image
is the approximation image and results from applying the scaling, or low-pass, filter to both
rows and columns. A subsequent level of transformation would be applied only to this sub-image.
The HD sub-image contains the horizontal details (from low-pass on rows, high-pass
on columns), the VD sub-image contains the vertical details (from high-pass on rows, low-pass
on columns) and the DD sub-image contains the diagonal details (from the high-pass, or
wavelet, filter on both rows and columns).
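The two-stage row/column process above can be sketched as follows (Python, using the orthonormal Haar filters; a simplified illustrative stand-in, not the report's MATLAB code). For a constant image, all three detail bands are zero and the approximation is the constant scaled by 2:

```python
from math import sqrt

def step(v):
    """Decimated Haar filtering of a 1-D sequence: low- and high-pass halves."""
    lo = [(v[i] + v[i + 1]) / sqrt(2) for i in range(0, len(v), 2)]
    hi = [(v[i] - v[i + 1]) / sqrt(2) for i in range(0, len(v), 2)]
    return lo, hi

def transpose(m):
    return [list(r) for r in zip(*m)]

def dwt2_level(img):
    """One decimated 2-D level: A, HD, VD, DD sub-images (rows first, then columns)."""
    L, H = [], []
    for row in img:                      # stage 1: filter and downsample each row
        lo, hi = step(row)
        L.append(lo); H.append(hi)
    def filter_cols(mat):                # stage 2: filter and downsample each column
        los, his = [], []
        for col in transpose(mat):
            lo, hi = step(col)
            los.append(lo); his.append(hi)
        return transpose(los), transpose(his)
    A, HD = filter_cols(L)               # low-pass rows -> approximation, horizontal detail
    VD, DD = filter_cols(H)              # high-pass rows -> vertical, diagonal detail
    return A, HD, VD, DD
```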
Fig. 2.1. Three-level one-dimensional discrete wavelet transform
Fig. 2.2. a) Image at first decomposition level b) Image at second decomposition level
The decimated algorithm is not shift-invariant, which means that it is sensitive to shifts of the
input image. The decimation process also has a negative impact on the linear continuity of
spatial features that do not have a horizontal or vertical orientation. These two factors tend to
introduce artifacts when the algorithm is used in applications such as image fusion.
Image source: G. Pajares, J.M. de la Cruz, A wavelet-based image fusion tutorial, Pattern
Recognition 37 (2004) 1855–1872.
2.2.2.2. Undecimated
The undecimated algorithm addresses the issue of shift-invariance. It does so by suppressing
the downsampling step of the decimated algorithm and instead upsampling the filters by
inserting zeros between the filter coefficients. Algorithms in which the filter is upsampled
are called "à trous", meaning "with holes". As with the decimated algorithm, the filters are
applied first to the rows and then to the columns. In this case, however, although the four
images produced (one approximation and three detail images) are at half the resolution of the
original, they are the same size as the original image. The approximation images from the
undecimated algorithm are therefore represented as levels in a parallelepiped, with the
spatial resolution becoming coarser at each higher level and the size remaining the same. The
undecimated algorithm is redundant, meaning some detail information may be retained in
adjacent levels of transformation. It also requires more space to store the results of each level
of transformation and, although it is shift-invariant, it does not resolve the problem of feature
orientation.
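The filter upsampling that defines the "à trous" scheme is simple to state in code. A Python sketch (function name illustrative), inserting 2^level − 1 zeros between the taps of a filter h:

```python
def upsample_filter(h, level):
    """'A trous' upsampling: insert 2**level - 1 zeros between adjacent filter taps."""
    gap = 2 ** level - 1
    out = []
    for i, tap in enumerate(h):
        out.append(tap)
        if i < len(h) - 1:               # no zeros after the last tap
            out.extend([0.0] * gap)
    return out
```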
A previous level of approximation, at resolution J−1, can be
reconstructed exactly by applying the inverse transform to all four images at resolution J and
combining the resulting images. Essentially, the inverse transform involves the same steps as
the forward transform, but they are applied in the reverse order. In the decimated case, this
means upsampling the approximation and detail images and applying reconstruction filters,
which are inverses of the decomposition scaling and wavelet filters, first by columns and then
by rows. For example, first the columns of the VD image would be upsampled and the
inverse scaling filter would be applied, then the rows would be upsampled and the inverse
wavelet filter would be applied. The original image is reconstructed by applying the inverse
transform to each decomposed level in turn, starting from the level at the coarsest
resolution, until the original resolution is reached. Reconstruction in the undecimated case is
similar, except that instead of upsampling the images, the filters are downsampled before
each application of the inverse filters.
Shift-invariance is necessary in order to compare and combine wavelet coefficient images.
Without shift-invariance, slight shifts in the input signal will produce variations in the
wavelet coefficients that might introduce artifacts in the reconstructed image. Shift-variance
is caused by the decimation process and can be resolved by using the undecimated
algorithm. However, the other problem with standard discrete wavelet transforms is their poor
directional selectivity, meaning poor representation of features with orientations that are not
horizontal or vertical, which is a result of separate filtering in these directions (Kingsbury,
1999).
2.2.2.3. Non-separated
One approach for dealing with shift-variance is to use a non-separated, two-dimensional
wavelet filter derived from the scaling function. This produces only two images: one
approximation image, also called the scale frame, and one detail image, called the wavelet
plane. The wavelet plane is computed as the difference between the original and the
approximation images and contains all the detail lost as a result of the wavelet
decomposition. As with the undecimated DWT, a coarser approximation is achieved by
upsampling the filter at each level of decomposition; correspondingly, the filter is
downsampled at each level of reconstruction. Some redundancy between adjacent levels of
decomposition is possible in this approach, but since it is not decimated, it is shift-invariant,
and since it does not involve separate filtering in the horizontal and vertical directions, it
better preserves feature orientation.
2.3 Image fusion
The objective of image fusion is to produce a single image containing the best aspects of the
fused images. Some desirable aspects include high spatial resolution and high spectral
resolution (multispectral and panchromatic satellite images), areas in focus (microscopy
images), functional and anatomic information (medical images), different spectral
information (optical and infrared images), or colour information and texture information
(multispectral and synthetic aperture radar images).
2.3.1 Image Fusion Scheme
The wavelet transform contains the low-high bands, the high-low bands and the high-high
bands of the image at different scales, plus the low-low band of the image at the coarsest level.
Except for the low-low band, which has all positive transform values, larger transform values
in the other bands correspond to sharper brightness changes and thus to the
salient features in the image, such as edges, lines and region boundaries. Therefore, a good
integration rule is to select the larger (in absolute value) of the two wavelet coefficients at each
point. Subsequently, a composite image is constructed by performing an inverse wavelet
transform based on the combined transform coefficients.
Since the wavelet transform provides both spatial and frequency domain localization, the
effect of the maximum fusion rule can be illustrated in the following two aspects. If the same
object appears more distinctly (in other words, with better contrast) in image A than in image
B, after fusion the object from image A will be preserved. In a different scenario, suppose the
outer boundary of the object appears more distinctly in image A while the inner boundary of
the object appears more distinctly in image B. As a result, the object in image A looks visually
larger than the corresponding object in image B. In this case the wavelet transform
coefficients of the object in images A and B will be dominant at different resolution levels.
Based on the maximum selection rule, both the outer structure from image A and the inner
structure from image B will be preserved in the fused image.
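The maximum selection rule described above amounts to an element-wise comparison of absolute values. A minimal Python sketch (illustrative name; the report's MATLAB implementation uses wfusimg's built-in methods):

```python
def fuse_max_abs(c1, c2):
    """At each position, keep the wavelet coefficient with the larger absolute value."""
    return [[a if abs(a) >= abs(b) else b for a, b in zip(r1, r2)]
            for r1, r2 in zip(c1, c2)]
```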
3. Methodology
The steps involved in fusion of images through wavelet transform are given below.
1) Get the images to be fused
2) Apply the wavelet transform on both the images through chosen wavelet at the
desired level
3) Get the approximation and detail coefficients for both the images
4) Merge the coefficients by desired fusion rule
5) Apply Inverse discrete wavelet transform on the merged coefficients and get the
fused image
Fig.3.1 Flowchart of the Methodology
Image source: H. Li, B.S. Manjunath, S.K. Mitra, Multisensor image fusion using the
wavelet transform, Graphical Models and Image Processing 57(3) (1995) 235–245.
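The five steps above can be sketched end to end in Python with a one-level Haar transform (a simplified, illustrative stand-in for the MATLAB/wfusimg implementation; here the approximation coefficients are merged by mean and the detail coefficients by maximum absolute value, one of the rule combinations offered in the GUI):

```python
from math import sqrt

def step(v):                              # forward Haar filtering of a 1-D sequence
    lo = [(v[i] + v[i + 1]) / sqrt(2) for i in range(0, len(v), 2)]
    hi = [(v[i] - v[i + 1]) / sqrt(2) for i in range(0, len(v), 2)]
    return lo, hi

def istep(lo, hi):                        # exact inverse of step()
    v = []
    for a, d in zip(lo, hi):
        v += [(a + d) / sqrt(2), (a - d) / sqrt(2)]
    return v

def T(m):
    return [list(r) for r in zip(*m)]

def dwt2(img):                            # steps 2-3: transform, collect coefficients
    L, H = [], []
    for row in img:
        lo, hi = step(row)
        L.append(lo); H.append(hi)
    def cols(mat):
        los, his = [], []
        for col in T(mat):
            lo, hi = step(col)
            los.append(lo); his.append(hi)
        return T(los), T(his)
    A, HD = cols(L)
    VD, DD = cols(H)
    return A, HD, VD, DD

def idwt2(A, HD, VD, DD):                 # step 5: inverse transform
    def icols(lo_m, hi_m):
        return T([istep(lo, hi) for lo, hi in zip(T(lo_m), T(hi_m))])
    L = icols(A, HD)
    H = icols(VD, DD)
    return [istep(lo, hi) for lo, hi in zip(L, H)]

def fuse(img1, img2):                     # step 4: merge coefficients, then invert
    mean = lambda m, n: [[(a + b) / 2 for a, b in zip(r, s)] for r, s in zip(m, n)]
    amax = lambda m, n: [[a if abs(a) >= abs(b) else b for a, b in zip(r, s)]
                         for r, s in zip(m, n)]
    A1, H1, V1, D1 = dwt2(img1)
    A2, H2, V2, D2 = dwt2(img2)
    return idwt2(mean(A1, A2), amax(H1, H2), amax(V1, V2), amax(D1, D2))
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check that the forward and inverse transforms form an exact pair.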
4. Results
Based on the above stated methodology a MATLAB code has been written with a graphical
user interface to fuse the images.
4.1 User Interface
4.1.1 Wavelets
A choice of wavelets has been given to the user:
1) Daubechies wavelet - db1
2) Coiflets - coif1
3) Symlets - sym2
4) Discrete Meyer - dmey
5) Biorthogonal - bior1.1
6) Reverse biorthogonal - rbio1.1
The user can select up to 10 decomposition levels.
4.1.2 Fusion Method
The user has six options for the fusion method, for both the approximation coefficients and
the detail coefficients: maximum, minimum, mean, Image 1, Image 2, and random. These
merge the two approximation or detail structures obtained from Images 1 and 2 element-wise
by taking, respectively, the maximum, the minimum, the mean, the first element, the second
element, or a randomly chosen element.
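The six merge options can be expressed as a small table of element-wise rules. A Python sketch (the GUI itself passes the chosen method name to MATLAB's wfusimg; the dictionary below is only an illustration):

```python
import random

RULES = {
    'max':  lambda a, b: max(a, b),
    'min':  lambda a, b: min(a, b),
    'mean': lambda a, b: (a + b) / 2,
    'img1': lambda a, b: a,                       # take the element from Image 1
    'img2': lambda a, b: b,                       # take the element from Image 2
    'rand': lambda a, b: random.choice((a, b)),   # pick either at random
}

def merge(m1, m2, rule):
    """Apply one of the six element-wise fusion rules to two coefficient arrays."""
    f = RULES[rule]
    return [[f(a, b) for a, b in zip(r1, r2)] for r1, r2 in zip(m1, m2)]
```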
Fig. 4.1 The design of user interface
Shown below are the input images and the output fused image. As can be seen in the image 1
the left portion of the image is blurred and in the image 2 the middle portion is blurred. The
output image is much better than both the input images.
Fig. 5.1. a) and b) Input images c) Fused image using the biorthogonal wavelet transform, four
levels of decomposition, minimum rule for the approximation coefficients and maximum rule
for the detail coefficients.
5. Conclusions
Wavelet transforms isolate frequencies in both time and space, allowing detail information to
be easily extracted from satellite imagery. A number of different schemes have been proposed
to inject this detail information into multispectral imagery, ranging from simple substitution
to complex formulas based on the statistical properties of the imagery. While even the
simplest wavelet-based fusion scheme tends to produce better results than standard fusion
schemes such as IHS and PCA, further improvement is evident with more sophisticated
wavelet-based fusion schemes. The drawback is greater computational
complexity, and often parameters must be set up before the fusion scheme can be applied.
Another strategy for improving the quality of results is to combine a standard fusion scheme
with a wavelet-based fusion scheme; however, this also has limitations: IHS, for instance, can
only be applied to three bands at a time. The type of algorithm that is used to apply the
wavelet transform can also affect the quality of the result: decimation disturbs the linear
continuity of spatial features and introduces artifacts in the fused image, whereas non-decimated
algorithms require more memory space during processing but do not introduce as
many artifacts. Each wavelet-based fusion scheme has its own set of advantages and
limitations. More comprehensive testing is required in order to fully assess under what
conditions each one is most appropriate.
6. References
G. Pajares, J.M. de la Cruz, A wavelet-based image fusion tutorial, Pattern Recognition 37
(2004) 1855–1872.
V.P.S. Naidu, J.R. Raol, Pixel-level image fusion using wavelets and principal component
analysis, Defence Science Journal 58(3) (2008) 338–352.
E.J. Stollnitz, T.D. DeRose, D.H. Salesin, Wavelets for Computer Graphics: Theory and
Applications, Morgan Kaufmann Publishers Inc.
H. Li, B.S. Manjunath, S.K. Mitra, Multisensor image fusion using the wavelet transform,
Graphical Models and Image Processing 57(3) (1995) 235–245.
7. MATLAB Code
function varargout = wavelet(varargin)
% WAVELET MATLAB code for wavelet.fig
%   WAVELET, by itself, creates a new WAVELET or raises the existing
%   singleton*.
%
%   H = WAVELET returns the handle to a new WAVELET or the handle to
%   the existing singleton*.
%
%   WAVELET('CALLBACK',hObject,eventData,handles,...) calls the local
%   function named CALLBACK in WAVELET.M with the given input arguments.
%
%   WAVELET('Property','Value',...) creates a new WAVELET or raises the
%   existing singleton*. Starting from the left, property value pairs are
%   applied to the GUI before wavelet_OpeningFcn gets called. An
%   unrecognized property name or invalid value makes property application
%   stop. All inputs are passed to wavelet_OpeningFcn via varargin.
%
%   *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
%   instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help wavelet
% Last Modified by GUIDE v2.5 29-Mar-2014 21:32:17

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @wavelet_OpeningFcn, ...
                   'gui_OutputFcn',  @wavelet_OutputFcn, ...
                   'gui_LayoutFcn',  [], ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT
% --- Executes just before wavelet is made visible.
function wavelet_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to wavelet (see VARARGIN)

% Choose default command line output for wavelet
handles.output = hObject;

% Update handles structure
guidata(hObject, handles);

% UIWAIT makes wavelet wait for user response (see UIRESUME)
% uiwait(handles.figure1);
% --- Outputs from this function are returned to the command line.
function varargout = wavelet_OutputFcn(hObject, eventdata, handles)
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;
% --- Executes on button press in pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Display the first input image
axes(handles.axes1)
imshow('image1.jpg');

% --- Executes on button press in pushbutton2.
function pushbutton2_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Display the second input image
axes(handles.axes2)
imshow('image2.jpg');
% --- Executes on selection change in popupmenu1.
function popupmenu1_Callback(hObject, eventdata, handles)
% hObject    handle to popupmenu1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Hints: contents = cellstr(get(hObject,'String')) returns popupmenu1
%        contents as cell array
%        contents{get(hObject,'Value')} returns selected item from popupmenu1

% Map the selected menu entry to a wavelet name
a = get(handles.popupmenu1, 'value');
switch a
    case 1
        handles.x1 = 'db1';
    case 2
        handles.x1 = 'coif1';
    case 3
        handles.x1 = 'sym2';
    case 4
        handles.x1 = 'dmey';
    case 5
        handles.x1 = 'bior1.1';
    case 6
        handles.x1 = 'rbio1.1';
end
guidata(hObject, handles)
% --- Executes during object creation, after setting all properties.
function popupmenu1_CreateFcn(hObject, eventdata, handles)
% hObject    handle to popupmenu1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called
% Hint: popupmenu controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end
% --- Executes on selection change in popupmenu2.
function popupmenu2_Callback(hObject, eventdata, handles)
% hObject    handle to popupmenu2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Hints: contents = cellstr(get(hObject,'String')) returns popupmenu2
%        contents as cell array
%        contents{get(hObject,'Value')} returns selected item from popupmenu2

% The menu value is the decomposition level itself
b = get(handles.popupmenu2, 'value');
handles.x2 = b;
guidata(hObject, handles)
% --- Executes during object creation, after setting all properties.
function popupmenu2_CreateFcn(hObject, eventdata, handles)
% hObject    handle to popupmenu2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called
% Hint: popupmenu controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end
% --- Executes on button press in pushbutton3.
function pushbutton3_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton3 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% --- Executes on selection change in popupmenu3.
function popupmenu3_Callback(hObject, eventdata, handles)
% hObject    handle to popupmenu3 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Hints: contents = cellstr(get(hObject,'String')) returns popupmenu3
%        contents as cell array
%        contents{get(hObject,'Value')} returns selected item from popupmenu3

% Map the selection to the fusion method for the approximation coefficients
c = get(handles.popupmenu3, 'value');
switch c
    case 1
        handles.x3 = 'max';
    case 2
        handles.x3 = 'min';
    case 3
        handles.x3 = 'mean';
    case 4
        handles.x3 = 'img1';
    case 5
        handles.x3 = 'img2';
    case 6
        handles.x3 = 'rand';
end
guidata(hObject, handles)
% --- Executes during object creation, after setting all properties.
function popupmenu3_CreateFcn(hObject, eventdata, handles)
% hObject    handle to popupmenu3 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called
% Hint: popupmenu controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end
% --- Executes on selection change in popupmenu4.
function popupmenu4_Callback(hObject, eventdata, handles)
% hObject    handle to popupmenu4 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Hints: contents = cellstr(get(hObject,'String')) returns popupmenu4
%        contents as cell array
%        contents{get(hObject,'Value')} returns selected item from popupmenu4

% Map the selection to the fusion method for the detail coefficients
d = get(handles.popupmenu4, 'value');
switch d
    case 1
        handles.x4 = 'max';
    case 2
        handles.x4 = 'min';
    case 3
        handles.x4 = 'mean';
    case 4
        handles.x4 = 'img1';
    case 5
        handles.x4 = 'img2';
    case 6
        handles.x4 = 'rand';
end
guidata(hObject, handles)
% --- Executes during object creation, after setting all properties.
function popupmenu4_CreateFcn(hObject, eventdata, handles)
% hObject    handle to popupmenu4 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called
% Hint: popupmenu controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end
% --- Executes on button press in pushbutton4.
function pushbutton4_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton4 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Gather the user's choices and fuse the two images
p = handles.x1;   % wavelet name
q = handles.x2;   % decomposition level
r = handles.x3;   % fusion method for approximation coefficients
s = handles.x4;   % fusion method for detail coefficients
X1 = imread('image1.jpg');
X2 = imread('image2.jpg');
XFUS = wfusimg(X1, X2, p, q, r, s)/255;
axes(handles.axes5)
imshow(XFUS);