DEPARTMENT OF COMPUTER ENGINEERING
FR. CONCEICAO RODRIGUES COLLEGE OF ENGINEERING
UNIVERSITY OF MUMBAI
Under the guidance of
Prof. Ms. Ibtisam Mogul
Department Of Computer Engineering
Fr.Conceicao Rodrigues College Of Engineering
This is to certify that the project entitled
Submitted in partial fulfillment of the requirements of the degree of BE in Computer
Engineering, is approved.
Internal Examiner External Examiner
Today, there is almost no area of technical endeavor that is not impacted in some way
by digital image processing. With the increasing support for graphics and images by
computers, image processing has found applications in fields such as forensics,
medicine and surgery.
One of these applications is Face Morphing.
Face synthesis deals with the metamorphosis of one image into another.
Our project aims at developing software that contains a database of parts of several
faces, with the functionality of combining various parts of different faces. The
software alters a few characteristics of an image to produce the desired image.
Despite the research in morphing, there has not been much development seen in this area.
• DIGITAL IMAGE PROCESSING
• PROBLEM DEFINITION
• EXISTING SYSTEM WE ARE TRYING TO BETTER
• SCOPE AND APPLICATIONS
An image can be defined as two-dimensional function, f(x,y), where x and y are spatial
(plane) coordinates, and the amplitude of f at any pair of coordinates ( x, y ) is called the
intensity or gray level of the image at the point. When x, y and the amplitude values of
f are all finite, discrete quantities, we call the image a digital image. The field of digital
image processing refers to processing digital images by means of a digital computer.
Note that a digital image is composed of a finite number of elements each of which has
a particular location and value. These elements are referred to as picture elements,
image elements and pixels. Pixel is the term used to denote the elements of a digital image.
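As a small illustration (the container type and accessor name here are our own, not from the report), a grayscale digital image can be held as a finite matrix of intensity values, with f(x, y) simply reading the gray level of the pixel at spatial coordinates (x, y):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// A digital image: a finite matrix of intensity values, where each
// element (pixel) has a definite location (x, y) and a gray-level value.
using Image = std::vector<std::vector<std::uint8_t>>;

// The two-dimensional function f(x, y): the intensity (gray level)
// of the image at spatial coordinates (x, y).
std::uint8_t f(const Image& img, std::size_t x, std::size_t y) {
    return img[y][x];  // row y, column x
}
```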
1.2 DIGITAL IMAGE PROCESSING
Digital Image processing is a vast field which involves many sub-branches like image
enhancement, information extraction, pattern recognition. This is the field which has
touched all aspects of human life. Digital Image processing stretches its application
right from visible light to infra-red, x-ray as well Ultra Violet Image processing.
Digital image processing is divided into three levels on a continuum: low-level,
mid-level and high-level processing.
o Low Level processes involve primitive operations such as image preprocessing
to reduce noise, contrast enhancement and image sharpening. A low level
process is characterized by the fact that both inputs and outputs are images.
o Mid-level processing involves tasks such as segmentation (partitioning an image
into regions or objects) description of those objects to reduce them to form
suitable for computer processing, and classification [recognition] of individual
objects. A mid level process is characterized by the fact that its inputs generally
are images but its outputs are attributes extracted from those images. [e.g. edges,
contours and the identity of individual objects]
o High-level processing involves “making-sense” of an ensemble of recognized
objects, as in image analysis, and at the far end of the continuum, performing the
cognitive functions normally associated with vision.
[Fig 1.1: Components of a general-purpose image processing system — image displays, computer, mass storage, hardcopy devices, specialized image processing hardware and image processing software (courtesy of "Digital Image Processing", Gonzalez and Woods)]
1.3 PROBLEM DEFINITION
Reproduction of human faces envisioned by different individuals is still limited to
artists trying to reproduce others' thoughts, which is time consuming. Our project
combines the power of the computer with the artist's talent and helps in reproducing
human faces through Digital Image Processing.
The project aims at developing software that contains a database of several faces with
the functionality of combining various features. It alters a few original characteristics
of the images to produce the desired image.
It contains a database of faces divided in three parts:
• The first part consists of hair, forehead and eyes.
• The second part contains the nose.
• The third part consists of lips and chin.
The user is presented with a screen that displays all the parts of various faces stored in the
database. The user then chooses one image for each of the three parts. Once the decision
is made, the software then takes charge. The software aims at combining these three parts
from three different faces to produce a new face that looks very natural. It does so by
utilizing the powerful features and functionality of Image Processing coupled with the
Image Object Library provided in Visual C++.
The software also provides the user with an option to resize any of the parts’ dimensions
to further achieve the envisioned look. The entire database is maintained by the user and
the user is provided the freedom of adding/removing new faces’ parts to the database to
improve the range and accuracy of the software’s ability to replicate human faces.
1.4 EXISTING SYSTEM WE ARE TRYING TO BETTER
There are many professional applications that can create video effects. They produce
great animations that are used in million-dollar movies. But, there is a downside. These
programs require a million-dollar operator. You need to spend years studying a
professional video editing package to get yourself familiar with its interface and
features. Even a simplified morphing program is not suitable for morphing faces just for
fun. To correctly morph one face into another, you need to mark numerous spots first.
You have to point out the subject's eyes, nose, lips and other notable facial areas.
Marking the spots manually ensures the highest quality, but it may take hours.
Our program is designed to turn face morphing into real child’s play. It guesses the
basic spots automatically. In most cases, the automatically guessed spots produce
animation that is perfectly acceptable for amateur videos or web graphics.
The program is fully automated too. A series of pictures are loaded and the program
automatically recognizes the images as faces. Then the morphing process begins and the
program renders the animation. The result is automatically saved in the source folder
which can later be used for printing, distribution etc.
Our program makes an attempt to provide a cheaper and better alternative to
FaceMorpher which is termed as the best morphing utility in the market.
1.5 SCOPE & APPLICATIONS
Our project finds applications in the following fields:
This software can aid in visualizing faces of criminals as per descriptions provided by
the witness who has a faint image of the criminal in his mind. The design of our
software conforms to international requirements as stated by researchers. This need was
further justified by an article published in NEW SCIENTIST magazine: "Face
morphing could catch criminals".
When performing a cosmetic surgery for say enhancement of nose, a surgeon needs to
give the patient an idea about his post-surgery look. This task is highly facilitated when
various features of the face can be edited as per the patient's requirements, helping the
patient accurately visualize his post-surgery look.
The patient puts his face into the database and then edits various features of the face
with the help of the functionality provided by the software. This enables the patient to
properly specify his requirements, and the surgeon gets a better idea about the surgery to be performed.
A barber can provide his customers better service by allowing them to experiment with
various hairstyles before selecting one.
Customers can superimpose hair from other face images onto their own face image.
REVIEW OF LITERATURE
• SYSTEM STUDY
• BLURRING OF EDGES
• AVERAGING FILTER OPERATIONS
• DEALING WITH COLOR IMAGES
• HISTOGRAM SPECIFICATION
2.1 SYSTEM STUDY
The overall architecture for the “New Face Tool” comprises various components
[Fig. 2.1: Block Diagram of System — blocks: image files, expansion of images, combining processed parts, blurring the edges, face synthesis]
Our major goal during mosaicking images is to minimize the edge formation at the
point of combining the two images. While combining two images we are assuming a
predefined overlap limit, which would determine the thickness of the edge formed at the
overlap. The actual procedure of combining the two images involves creating three
matrices. These are used for the following purposes.
• To store the content of the first image file; this is the first of the two faces to be combined.
• To store the content of the second image file; this is the second of the two images to be combined.
• Finally, the third matrix is used to store the overlapped image consisting of the mosaic formed by combining the two images.
The first two of these matrices are formed directly from the two images to be combined,
by simply reading the two image files into them. The sizes of these matrices can be
determined using the width and height attributes of each of the two images. Having read
the two images into the matrices, we can then combine them by flooding the third matrix
in the following manner. First we use the matrix corresponding to the upper face to flood
the third matrix; then we start flooding the target matrix using the matrix corresponding
to the lower face, starting from a point in the target matrix such that the overlap of the
two images comes to the predetermined limit of overlapping. Having mosaicked the two
images, the intensities of pixels contained in the edge would be large due to the combined
intensities of the two faces, and hence would require an averaging operation to bring down
their intensities to a level which is very close to the average intensity of the combined face.
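The matrix-flooding procedure above can be sketched as follows for grayscale matrices (the function name `combine` follows the report's terminology, but the exact signature is our assumption; here the overlap band is simply averaged rather than separately filtered):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

using Image = std::vector<std::vector<std::uint8_t>>;

// Mosaic two equal-width face parts with a fixed overlap (in rows).
// The target matrix is flooded with the upper image first, then with the
// lower image starting `overlap` rows before the upper image ends; rows
// in the overlap band are averaged to bring their intensities down.
Image combine(const Image& upper, const Image& lower, std::size_t overlap) {
    std::size_t width = upper[0].size();
    std::size_t start = upper.size() - overlap;      // where the lower part begins
    Image out(start + lower.size(), std::vector<std::uint8_t>(width, 0));

    for (std::size_t y = 0; y < upper.size(); ++y)   // flood with the upper face
        out[y] = upper[y];

    for (std::size_t y = 0; y < lower.size(); ++y) { // flood with the lower face
        for (std::size_t x = 0; x < width; ++x) {
            if (y < overlap)                         // overlap band: average the two
                out[start + y][x] = static_cast<std::uint8_t>(
                    (out[start + y][x] + lower[y][x]) / 2);
            else
                out[start + y][x] = lower[y][x];
        }
    }
    return out;
}
```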
Here we are using the technique of weighted averages for expanding or contracting the
images.
In the case of expansion, the distance between adjacent pixels in the compressed grid
corresponding to the expanded image is less than unity, as compared to the unit spacing
between two pixels in the original image.
Similarly, for compressing an image we have to expand an imaginary grid which fits on
the smaller version of the image; hence the distance between adjacent pixels in the
expanded grid corresponding to the shrunken image is more than unity. Thus the
spacing between the pixels in the grid will be :
Space_x_direction = original_x_resolution/new_x_resolution
Space_y_direction = original_y_resolution/new_y_resolution
To implement the weighted-averages algorithm, we have to take into consideration the
intensities of all the pixels surrounding the pixel under consideration; hence the new
intensity of every pixel is calculated by interpolating the intensities of its neighbors.
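A minimal sketch of this weighted-average (bilinear) interpolation, using the grid-spacing formulas above (the function name and matrix representation are illustrative, not the report's actual code):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

using Image = std::vector<std::vector<std::uint8_t>>;

// Resize by fitting an imaginary grid over the source image: the spacing
// between grid points is original_resolution / new_resolution in each
// direction (< 1 when expanding, > 1 when shrinking).  Each output pixel
// takes a weighted average of the four source pixels around its grid point.
Image resize(const Image& src, std::size_t newH, std::size_t newW) {
    double spaceY = static_cast<double>(src.size()) / newH;      // Space_y_direction
    double spaceX = static_cast<double>(src[0].size()) / newW;   // Space_x_direction
    Image dst(newH, std::vector<std::uint8_t>(newW, 0));
    for (std::size_t y = 0; y < newH; ++y) {
        for (std::size_t x = 0; x < newW; ++x) {
            double gy = y * spaceY, gx = x * spaceX;             // position on the grid
            std::size_t y0 = static_cast<std::size_t>(gy);
            std::size_t x0 = static_cast<std::size_t>(gx);
            std::size_t y1 = std::min(y0 + 1, src.size() - 1);
            std::size_t x1 = std::min(x0 + 1, src[0].size() - 1);
            double fy = gy - y0, fx = gx - x0;                   // fractional offsets
            double v = (1 - fy) * ((1 - fx) * src[y0][x0] + fx * src[y0][x1])
                     + fy       * ((1 - fx) * src[y1][x0] + fx * src[y1][x1]);
            dst[y][x] = static_cast<std::uint8_t>(v + 0.5);      // nearest intensity
        }
    }
    return dst;
}
```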
2.4 BLURRING OF EDGES
The image is passed through a Low Pass Spatial Filter to reduce edge formation at the
boundary of the three parts of the face. This gives a smooth look to the face and helps in
producing a gradual transition of skin tone from the first part to the third part thereby
giving a natural look to the face.
During the process of morphing of two separate faces, we would have to minimize edge
formed at the region of combination of the two images. This can be done by applying a
low pass filter or can be achieved by any other smoothing operation.
2.5 MORPHING :
Morphing is a special effect in motion pictures and animations that changes (or
morphs) one image into another through a seamless transition. Most often it is used to
depict one person turning into another through some magical or technological means or
as part of a fantasy or surreal sequence.
2.6 AVERAGING FILTER OPERATIONS
2.6.1 Smoothing Spatial Filters
Smoothing filters are used for noise reduction. Blurring is used in preprocessing steps,
such as removal of small details from an image prior to (large) object extraction and
bridging of small gaps in lines or curves. Noise reduction can be accomplished by
blurring with a linear filter and also by non-linear filtering.
2.6.2 Smoothing Linear Filters
Since edges, which are often desirable features of an image, are characterized by sharp
transitions in gray levels, averaging filters have the undesirable side effect that they
blur edges. Another application of this type of process includes the smoothing of false
contours that result from using an insufficient number of gray levels.
1/9 ×   1 1 1
        1 1 1
        1 1 1
AVERAGING (BOX) FILTER MASK

1/16 ×  1 2 1
        2 4 2
        1 2 1
WEIGHTED AVERAGE (LOW PASS) FILTER MASK
A major use of averaging filters is in the reduction of "irrelevant" detail in an image. By
"irrelevant" we mean pixel regions that are small with respect to the size of the filter mask.
The figure above shows two 3x3 smoothing filters. Use of the first filter yields the
standard average of the pixels under the mask. This can best be seen by substituting the
coefficients of the mask in

R = (1/9) Σ zi ,  i = 1 to 9

which is the average of the gray levels of the pixels in the 3x3 neighborhood defined by
the mask. Note that, instead of being 1/9, the coefficients of the filter are all 1's. The
idea here is that it is computationally more efficient to have coefficients valued 1. At the
end of the filtering process the entire image is divided by 9. An m x n mask would have a
normalizing constant equal to 1/mn. A spatial averaging filter in which all coefficients
are equal is sometimes called a box filter.
The second mask shown in the figure is a little more interesting. This mask is called a
weighted average, terminology used to indicate that pixels are multiplied by different
coefficients, thus giving more importance (weight) to some pixels at the expense of
others. In the second mask the pixel at the center is multiplied by a higher value than
any other, thus giving this pixel more importance in the calculation of the average. The
other pixels are inversely weighted as a function of their distance from the center of the
mask. The diagonal terms are further away from the center than the orthogonal
neighbors (by a factor of √2) and thus are weighted less than these immediate neighbors
of the center pixel. The basic strategy behind weighting the center point the highest and
then reducing the value of the coefficients as a function of increasing distance from the
origin is simply an attempt to reduce the blurring in the smoothing process. We could
have picked other weights to accomplish the same general objective. However, the sum
of all the coefficients in the second mask is equal to 16, an attractive feature for
computer implementation because it is an integer power of 2. In practice, it is generally
difficult to see differences between images smoothed by either of the masks shown
above, or similar arrangements, because the area these masks span at any one location
in an image is small.
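Either mask can be applied as sketched below: integer coefficients are summed first and the total divided by the normalizing constant (9 for the box filter, 16 for the weighted mask) at the end, as described above. Border handling is simplified here (border pixels are left unchanged), which is our assumption rather than the report's:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

using Image = std::vector<std::vector<std::uint8_t>>;

// Convolve an image with a 3x3 smoothing mask of integer coefficients.
// All products are summed and the total divided by the normalizing
// constant at the end — cheaper than fractional coefficients.
Image smooth(const Image& src, const int mask[3][3], int norm) {
    Image dst = src;  // border pixels stay unchanged in this sketch
    for (std::size_t y = 1; y + 1 < src.size(); ++y) {
        for (std::size_t x = 1; x + 1 < src[0].size(); ++x) {
            int sum = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    sum += mask[dy + 1][dx + 1] * src[y + dy][x + dx];
            dst[y][x] = static_cast<std::uint8_t>(sum / norm);
        }
    }
    return dst;
}

const int BOX[3][3]      = {{1, 1, 1}, {1, 1, 1}, {1, 1, 1}};  // divide by 9
const int WEIGHTED[3][3] = {{1, 2, 1}, {2, 4, 2}, {1, 2, 1}};  // divide by 16
```

Note how the weighted mask pulls the result closer to the original center value than the box filter does, which is exactly the reduced-blurring effect discussed above.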
2.7 DEALING WITH COLOR IMAGES
Color images consist of three components: RED, GREEN and BLUE, i.e. RGB. The
combination of these three colors forms a color image. The operations that can be
performed on gray-scale images can also be performed on color images. To work with
color images we first extract the individual RED, GREEN and BLUE components from
the BMP image and form a separate matrix for each color component.
We form each of these component matrices by extracting the RGB bytes individually
and appending into the corresponding matrix i.e. 1st, 4th, 7th ……...bytes form the red
component matrix, the 2nd , 5th , 8th ………bytes form the green component matrix, and
the 3rd , 6th , 9th ………bytes form the blue component matrix.
All the required operations are then performed on the individual component matrices.
Then they are combined together to form the final resultant modified color image. The
combination process is reverse of that extraction.
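The extraction and recombination can be sketched as below (each component matrix is flattened to one vector for brevity, and the struct/function names are ours; note also that BMP files actually store pixel bytes in blue-green-red order, so which matrix holds "red" depends on that convention):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Split an interleaved pixel-byte stream into three component matrices,
// so each can be processed separately; interleaving them back is the
// reverse of the extraction.  The 1st, 4th, 7th … bytes go to the first
// component, the 2nd, 5th, 8th … to the second, and so on.
struct Planes {
    std::vector<std::uint8_t> c0, c1, c2;  // one matrix per color component
};

Planes extract(const std::vector<std::uint8_t>& data) {
    Planes p;
    for (std::size_t i = 0; i + 2 < data.size(); i += 3) {
        p.c0.push_back(data[i]);
        p.c1.push_back(data[i + 1]);
        p.c2.push_back(data[i + 2]);
    }
    return p;
}

std::vector<std::uint8_t> interleave(const Planes& p) {
    std::vector<std::uint8_t> data;
    for (std::size_t i = 0; i < p.c0.size(); ++i) {
        data.push_back(p.c0[i]);
        data.push_back(p.c1[i]);
        data.push_back(p.c2[i]);
    }
    return data;
}
```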
A convenient way to view this is as color planes (faces or cross sections of the cube).
This can be accomplished by simply fixing one of the three colors and allowing the
other two to vary. For instance, a cross-sectional plane through the center of the cube
and parallel to the GB plane is the plane (127, G, B) for G, B = 0, 1, 2, …, 255. The figure below shows
the RED, BLUE and GREEN component images for this color plane. Thus it can be
seen that the color plane can be generated using these components.
[Fig 2.2: RGB Color Plane for Display — monitor RGB image of the cross-sectional color plane, and the three hidden surface planes of the cube (R=0), (G=0), (B=0), shown as Red, Green and Blue component images]
2.8 HISTOGRAM SPECIFICATION
The histogram of a digital image with gray levels in the range [0, L-1] is a discrete
function h(rk) = nk, where rk is the kth gray level and nk is the number of pixels in
the image having gray level rk. The method used to generate a processed image that
has a specified histogram is called histogram matching or histogram specification.
The histogram equalization transformation is

sk = T(rk) = Σ (j=0 to k) nj/n = Σ (j=0 to k) pr(rj) ;  k = 0, 1, 2, …, L-1

where pr(rj) is the probability that a pixel has gray level j.
Thus for equalizing the histogram of an input image we first construct a probability
array prob[i] storing probability of the pixel in the input image having gray level i,
where 0<=i<=255. The transformation function T is then applied to ‘prob’ array. Each
value of prob[i] is then mapped to the nearest integer to give new_level[i]. The pixels
having gray level 'i' in the input image are now assigned gray level new_level[i],
resulting in the histogram-equalized image. This process is applied separately on each
component (Red, Green and Blue) of the input image to obtain the histogram-equalized color image.
For specifying the histogram of the input image to the histogram of a target image,
firstly the histograms of both the input and target images are equalized by the
above-mentioned procedure, resulting in the data structures new_level1[i] and
new_level2[i] corresponding to the input and target image respectively. A pixel having
gray level 'i' in the input image is then mapped to gray level 'j' if gray level 'i' of the
input image and gray level 'j' of the target image are equalized to the same gray level,
that is, if new_level1[i] = new_level2[j].
The above-mentioned process is applied separately on each component (Red, Green and
Blue) of the input image to obtain the histogram-specified color image.
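The equalization step can be sketched for a single 8-bit component as follows (the array names follow the description above; the rounding to the nearest output level is as stated, and the function name is ours):

```cpp
#include <array>
#include <cassert>
#include <cstdint>
#include <vector>

// Histogram equalization for one 8-bit component: build the probability
// array prob[i], accumulate it through the transformation
// s_k = sum_{j<=k} p_r(r_j), and round each cumulative value to the
// nearest output gray level new_level[i].
std::array<int, 256> equalize_levels(const std::vector<std::uint8_t>& pixels) {
    std::array<double, 256> prob{};
    for (std::uint8_t p : pixels)
        prob[p] += 1.0 / pixels.size();   // probability of gray level p

    std::array<int, 256> new_level{};
    double cumulative = 0.0;
    for (int i = 0; i < 256; ++i) {
        cumulative += prob[i];            // the transformation T(r_i)
        new_level[i] = static_cast<int>(cumulative * 255.0 + 0.5);  // nearest level
    }
    return new_level;
}
```

For specification, running this once on the input and once on the target yields the new_level1/new_level2 arrays, and input level i maps to the target level j whose equalized value matches.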
SRS (SOFTWARE REQUIREMENT SPECIFICATIONS)
The basic issues that the SRS shall address are the following:
Our project aims at developing software that contains a database of parts of
several faces with the functionality of combining various parts of different faces.
The software alters a few characteristics of the original image to produce the desired image.
3.2 External interfaces
Different parts of the face are stored in a database.
1. The first part consists of the forehead.
2. The second part contains the nose section.
3. The third part contains the lower face.
The user needs to select one part each from the above and the software displays
the resulting face image.
The considerations made while designing our project are as follows:
1. Images should be of the same size.
2. Images should not be black and white.
3.4 Design constraints
One of the constraints is that in our project there is a user-controlled database.
Hence proper handling of database is extremely important.
The other constraint is that our project can handle only two-dimensional images.
• FEASIBILITY STUDY
• RISK ANALYSIS
• TASK SET
4.1 FEASIBILITY STUDY
The feasibility study looks at three main areas of feasibility for the project:
4.1.1 Economic Feasibility
From the cost benefit analysis of the hardware and software requirements carried out we
can conclude that the system is economically feasible. The cost in developing the project
is negligible when compared to the features provided by it.
The best software currently available in the market, FaceMorpher, comes with a price tag
of Rs. 3000/-. This amount is prohibitive for an individual intending to use the software
for personal use. The software is also packed with many features that are not useful
for personal use. Hence, our software aims to bridge the gap between low-cost face
morphing software and the availability of all the basic features that are essential to
reproduce a moderate-to-good quality image as the output.
4.1.2 Time Feasibility
From the schedule calculations in the estimation we can conclude that the system
development is feasible in terms of time. The project development can be completed in a
reasonable time span.
Though the analysis phase of the project took a longer time than was previously
allocated, the coding process was sped up with all the members of the group working
extensively and collectively.
4.1.3 Technical Feasibility
From the requirement study we can conclude that the system development is technically
feasible. The technical analysis of the system proves that the application can be
developed within the analyzed resources.
Since the analysis phase was conducted in a detailed manner, resource allocation was
done as per requirements and there were no developmental issues.
Software project estimation is a form of problem solving; i.e., developing a cost and
effort estimate for a software project.
Since LOC-based estimates are programming-language dependent, they penalize
well-designed but shorter programs, and their use in estimation models requires a level
of detail that is difficult to achieve before analysis and design are complete. We
therefore use FP-based estimation, which takes the functionality delivered by the
application as the normalization value.
4.2.1 FP-BASED ESTIMATION
FP-based estimation focuses on information domain values such as inputs, outputs,
inquiries, files and external interfaces. For the purpose of this estimate, the
complexity weighting factor is assumed to be average. The table below presents the
result of this estimate:
Information Domain Value        Count   Weight  FP-count
Number of Inputs                  3       4       12
Number of Outputs                 3       5       15
Number of Inquiries               3       4       12
Number of Files                   7      10       70
Number of External Interfaces     0       7        0
Count total                                      109
The estimated FP is derived using the formula:

FP(estimated) = count total × [0.65 + 0.01 × ΣFi]

where ΣFi is the sum of the complexity adjustment values computed by estimating the
following weighting factors.
Backup and recovery 0
Data communication 5
Distributed processing 0
Performance critical 4
Existing operating environment 3
On-line data entry 0
Input transactions over multiple screens 0
Master files update online 0
Information domain values complex 0
Internal Processing complex 3
Code design for reuse 4
Conversion/installation in design 3
Multiple installations 0
Applications designed for change 5
Complexity adjustment factor = [0.65 + 0.01 × 27] = 0.92
Finally the estimate of FP is derived:
FP(estimated) = 109 × 0.92 = 100.28
4.2.2 EFFORT ESTIMATION
Effort estimation is required to find the number of people required to complete the project
over the duration of the project. The effort estimation of our project is as follows:
1> Labour rate per month = $100
2> Total cost = 31.25 × 100.28 = $3133.75
Thus the estimated effort required for FACE MORPHING is 3 persons over a period of 5
months.
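The arithmetic above can be checked with a small helper (the function name is ours; the count total of 109 and adjustment sum of 27 come from the tables above):

```cpp
#include <cassert>
#include <cmath>

// FP-based estimate: count total from the information-domain table,
// scaled by the complexity adjustment factor [0.65 + 0.01 * sum(Fi)].
double fp_estimate(int count_total, int sum_fi) {
    return count_total * (0.65 + 0.01 * sum_fi);
}
```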
4.3 RISK ANALYSIS
4.3.1 RISK TABLE
RISK                         CATEGORY   PROBABILITY   IMPACT
Schedule slips               PS         0.6           3
Competition in the market    BU         0.8           2
User-controlled database     PS         0.5           2

PS – project size            1 – Catastrophic
BU – business                2 – Critical
                             3 – Marginal
                             4 – Negligible
4.3.2 RMMM PLAN (Risk Mitigation Monitoring and Management)
Risk id – abc Date – 12/10/07 Probability – 60% Impact – marginal
Description: It is possible that the development of the software falls behind schedule.
Mitigation: Ensure project is running on schedule
Monitoring: Periodic meetings to co-ordinate work schedule
Management: Overlook work development and increase pace of work completion
Current status: Project is completed within schedule
Originator: Abhinav Mehrotra Assigned: Abhinav Mehrotra
Risk id – 123 Date – 2/2/08 Probability – 80% Impact – critical
Description: There is a good possibility of the software facing stiff competition in the
market.
Mitigation: Make the s/w fare better amongst other 2D rendering s/w.
Monitoring: Check for regular developments made in other rival softwares
Management: Periodically surveying the market for new features present in other s/w
Current status: Product stands up to features provided by most of the rival s/w
Originator: Karan Modi Assigned: Karan Modi
Risk id – xyz Date – 12/3/08 Probability – 50% Impact – critical
Description: Improper handling of the database could stop the software from functioning
in the proper manner.
Mitigation: Provide a user manual
Monitoring: Simplify the management of database
Management: Provide assistance/training to customer prior to s/w operation
Current status: Arrangements made for providing training to customers.
Originator: Akshay Suresh Assigned: Akshay Suresh
4.4 TASK SET
Task sets are designed to accommodate the different types of projects and different
degrees of rigor.
The different software project types are:
Concept Development Projects: These are initiated to explore some new business
concept or application of some new technology.
New Application Development: These projects are undertaken as a consequence of a
specific customer request.
Application Enhancement Projects: These occur when existing software undergoes
major modifications to function, performance or interfaces that are observable by the end user.
Application Maintenance Projects: These correct, adapt or extend existing software in
ways that may not be immediately obvious to the end user.
Re-engineering Projects: These are undertaken with the intent of rebuilding an
existing legacy system in whole or in part.
4.4.1 DEGREES OF RIGOR
Four different degrees of rigor can be defined:
Casual: All process framework activities are applied, but only a minimum task set is
required. Umbrella tasks are minimized and documentation requirements are reduced.
Structured: Umbrella activities necessary to ensure high quality will be applied.
Strict: The full process will be applied for this project with a degree of discipline that
will ensure high quality. All umbrella activities will be applied and robust work products
will be produced.
Quick Reaction: Because of an emergency situation only those tasks essential to
maintaining good quality will be applied.
4.4.2 Adaptation Criteria
Adaptation criteria are used to determine the recommended degree of rigor with which
the software process should be applied to a project. They are as follows:
Size of the project
Number of potential users
Stability of requirements
Ease of customer/developer communication
Embedded and non embedded characteristics
4.4.3 Computation of the task set selector
1. Review each of the adaptation criteria and assign the appropriate grade (1-5).
2. Review the weighting factor assigned to each of the criteria. The value of a
   weighting factor ranges from 0.8 to 1.2.
3. Multiply the grade by the weighting factor and the entry point multiplier.
4. Place the result of the multiplication in the project column, compute the average of
   all entries and place the result in the total task set selector.
4.4.4 TASK SET SELECTOR
                                        Entry Point Multiplier
Adaptation Criteria     Grade  Weight  CD  NA  AE  AM  RE  Product
Proj. size                2    1.20     0   1   1   1   1    2.4
No. of users              2    1.10     0   1   1   1   1    2.2
Business                  2    1.10     0   1   1   1   1    2.2
Appn.                     3    0.90     0   1   1   0   0    2.7
Stab. of requirements     5    1.20     0   1   1   1   1    6.0
Ease of comm.             5    0.90     1   1   1   1   1    4.5
Technology                4    0.90     1   1   0   0   1    3.6
Perf.                     4    0.80     0   1   1   0   1    3.2
Embedded/non-embedded     1    1.20     1   1   1   0   1    1.2
Proj. staffing            1    1.00     1   1   1   1   1    1.0
Interoperability          3    1.10     0   1   1   1   1    3.3
Re-eng factors            3    1.20     0   1   0   0   1    0

(CD = concept development, NA = new application development, AE = application
enhancement, AM = application maintenance, RE = re-engineering)
Taking the average we obtain 32.3/12=2.69
Since the T.S.S. value lies in the range 1.0 < T.S.S. < 3.0, the degree of rigor is
structured. (It can be strict as well; the final decision is made by considering all the
project factors.)
• DATA DESIGN
• DATA FLOW DIAGRAM LEVEL 0
• DATA FLOW DIAGRAM LEVEL 1
• USE CASE DIAGRAM
• PROCEDURAL DESIGN
5.1 DATA DESIGN
Data design creates a model of data and information that is represented at a high level
of abstraction. We consider the following set of principles for data specification:
1> The systems analysis principles applied to function and behaviour should also be
applied to data.
2> All data structures and the operations to be performed on each should be identified.
3> A data dictionary should be established and used to define both data and program design.
4> Low-level data design decisions should be deferred until late in the design process.
5> A software design should support the specification and realization of abstract data types.
[Fig: Data Flow Diagram Level 0]
[Fig: Data Flow Diagram Level 1 — raw image parts flow through the processes Process
the Image, Adjust the Size, Standardize the Image and Merge the Faces, drawing on the
Image Files store and ending at the Display Unit]
5.2 PROCEDURAL DESIGN
The different procedures required are as follows:
Search for files: The prototype should have a login screen where the user has to enter a
valid user id and password. The system sends this data to the database, where it is
validated against the stored records.
Database required: The search contents for this purpose are required to be stored in a
database.
Upload images: A user who wants to upload files into the database must first log in
before uploading.
Selection of three images: The user selects three different parts of the face from the
library of images stored in the database to obtain the morphed image from the software.
The sequence of operations is as follows:
o Wrap the BMP files for the three parts selected in three ExpandContract objects.
o Expand two of the three parts using the 'expand' function to make the widths
of all parts equal.
o Wrap the BMP files containing the expanded parts in three Specification objects.
o Specify the histogram of each part to the target histogram using the 'specify'
function.
o Wrap the BMP files containing the histogram-specified parts in the three Overlap
objects.
o Combine the three histogram-specified parts using the 'combine' function.
o Wrap the BMP file containing the combined face in a Specification object.
o Specify the histogram of the combined face to the target histogram using the
'specify' function.
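On plain matrices, the combine-and-blur steps of this sequence reduce to something like the sketch below (histogram specification is omitted, and `merge_parts` is our stand-in for the Overlap object's 'combine' followed by 'averageEdges'; the real program operates on wrapped BMP files):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

using Image = std::vector<std::vector<std::uint8_t>>;

// Stack equal-width face parts top to bottom with a fixed row overlap;
// rows in each overlap band are averaged to blur the seam between parts.
Image merge_parts(const std::vector<Image>& parts, std::size_t overlap) {
    Image out = parts[0];
    for (std::size_t p = 1; p < parts.size(); ++p) {
        const Image& next = parts[p];
        std::size_t start = out.size() - overlap;      // where the next part begins
        out.resize(start + next.size(),
                   std::vector<std::uint8_t>(out[0].size(), 0));
        for (std::size_t y = 0; y < next.size(); ++y)
            for (std::size_t x = 0; x < next[0].size(); ++x)
                out[start + y][x] = (y < overlap)
                    ? static_cast<std::uint8_t>(        // seam: average the two parts
                          (out[start + y][x] + next[y][x]) / 2)
                    : next[y][x];
    }
    return out;
}
```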
5.3 DESIGN OVERVIEW
The executable files for image processing programs are created using VC++.
1. The following steps will create a new project:
• Select File | New | Project
• Select Win32 Console Application
• Set the project location to the directory containing the header files and
• Set the project name
• Select OK to begin creating the project
2. Then we specify the kind of console application
• Select an empty project
• Click finish
• Click OK to create the project workspace
3. Once the project is created, we add files to the project, as follows:
• Select Project | Add to project | Files
• Highlight the .cpp files in the project
• Select OK to add the files to the project
4. Once the files have been added to the project, you build and execute the result as:
• To build the program, select Build | Build projectname.exe
• To execute the program, select Build | Execute projectname.exe
CODING AND IMPLEMENTATION
• CODE DESCRIPTION
The language used for implementation is VC++. Our code uses the Microsoft Foundation
Classes (MFC), a C++ class library designed for creating GUI programs. MFC wraps the
Win32 API, providing a higher-level, more portable programming interface. Our project
(named Display) is an SDI (Single Document Interface) application that uses four main
classes.
• Document class
• View class
• Main frame Window class.
• Application class.
The program's tasks are divided among these four classes, and AppWizard creates
separate source files for each class.
The Display document class is named CDisplayDoc and is derived from the MFC class
CDocument. The CDisplayDoc header file is named DisplayDoc.h and the
implementation file is named DisplayDoc.cpp. The document class is responsible for
storing the program data.
The display view class is named CDisplayView and is derived from the MFC class
CView. The CDisplayView header file is named DisplayView.h and the implementation
is named DisplayView.cpp. The view class is responsible for displaying the program
data, processing input from the user, and managing the view window. The file
DisplayView.cpp contains the main program logic.
The Display main frame window class is named CMainFrame and is derived from the
MFC class CFrameWnd. The CMainFrame header file is named MainFrm.h and the
implementation file is named MainFrm.cpp. The main frame window class manages the
main program window, which is the frame window that contains the view window.
The Display application class is named CDisplayApp and is derived from the MFC
class CWinApp. The CDisplayApp header file is named Display.h and the
implementation file is Display.cpp. The application class manages the program as a
whole; it performs general tasks such as initializing the program and performing final
cleanup.
The file DisplayView.cpp includes a header file project.h, which contains class
definitions of five classes that are created:
The class ImageProcessor provides functions for copying the BMP header from a source
file to a destination file, copying the data of a BMP file from source to destination,
copying the data of a BMP file into an array, copying data from an array into a
destination BMP file, closing a BMP file, and extracting the height and width of a BMP
file. The constructor of the class allocates memory and opens the BMP file.
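The report does not reproduce the ImageProcessor source, so the following is only a minimal sketch of how the height and width of a BMP file might be extracted; the names getBmpSize and readLE32 are illustrative, not taken from the project. In the standard BMP layout, the 14-byte file header is followed by the BITMAPINFOHEADER, placing the 32-bit little-endian width and height at byte offsets 18 and 22.

```cpp
#include <cstdint>
#include <vector>

// Decode a little-endian 32-bit integer from raw bytes.
static int32_t readLE32(const unsigned char* p) {
    return static_cast<int32_t>(p[0] | (p[1] << 8) | (p[2] << 16) |
                                (static_cast<uint32_t>(p[3]) << 24));
}

// Extract the image width and height from an in-memory BMP file.
// Offset 18 = width, offset 22 = height (both in the BITMAPINFOHEADER).
bool getBmpSize(const std::vector<unsigned char>& file,
                int32_t& width, int32_t& height) {
    if (file.size() < 26 || file[0] != 'B' || file[1] != 'M')
        return false;                      // not a valid BMP file
    width  = readLE32(&file[18]);
    height = readLE32(&file[22]);
    return true;
}
```

In the actual project these fields would presumably be read from the file stream opened by the constructor rather than from a memory buffer.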
The class ExpandContract inherits from ImageProcessor and provides a function to
expand or contract a source BMP file by any extent specified.
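Expanding or contracting an image is essentially scaling. A minimal nearest-neighbour sketch of the idea follows; scaleImage and its parameters are illustrative assumptions, since the project's actual routine is not shown in the report.

```cpp
#include <cstddef>
#include <vector>

// Nearest-neighbour scaling of a grayscale image stored row-major.
// factor > 1 expands the image; 0 < factor < 1 contracts it.
std::vector<unsigned char> scaleImage(const std::vector<unsigned char>& src,
                                      int w, int h, double factor,
                                      int& outW, int& outH) {
    outW = static_cast<int>(w * factor);
    outH = static_cast<int>(h * factor);
    std::vector<unsigned char> dst(static_cast<size_t>(outW) * outH);
    for (int y = 0; y < outH; ++y) {
        int sy = static_cast<int>(y / factor);     // nearest source row
        for (int x = 0; x < outW; ++x) {
            int sx = static_cast<int>(x / factor); // nearest source column
            dst[static_cast<size_t>(y) * outW + x] =
                src[static_cast<size_t>(sy) * w + sx];
        }
    }
    return dst;
}
```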
The class Overlap inherits from ImageProcessor and provides a function “combine” that
takes three source BMP files as input and combines them into a single BMP file using a
predefined overlap limit. It also provides a function ‘averageEdges’, which blurs the
edges between the overlapped parts using an averaging operator.
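The body of ‘averageEdges’ is not reproduced in the report; a simple averaging operator over a horizontal seam might look like the sketch below. The name averageSeam and its parameters are illustrative, not the project's actual interface.

```cpp
#include <vector>

// Smooth the horizontal seam left where two image parts meet: every
// pixel within `band` rows of `seamRow` is replaced by the average of
// the pixel directly above and the pixel directly below it.
void averageSeam(std::vector<unsigned char>& img, int w, int h,
                 int seamRow, int band) {
    std::vector<unsigned char> out = img;   // read from img, write to out
    for (int y = seamRow - band; y <= seamRow + band; ++y) {
        if (y <= 0 || y >= h - 1) continue; // need both neighbours
        for (int x = 0; x < w; ++x) {
            int above = img[(y - 1) * w + x];
            int below = img[(y + 1) * w + x];
            out[y * w + x] = static_cast<unsigned char>((above + below) / 2);
        }
    }
    img = out;
}
```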
The class Histogram inherits from ImageProcessor and provides a function which first
constructs the histogram of the source BMP file, equalizes the histogram, and stores the
equalized image in the destination BMP file.
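The classic equalization algorithm builds the histogram, accumulates it into a cumulative distribution, and maps gray level g to round(255 · cdf(g)/N). The sketch below shows this for an 8-bit grayscale image; the function name equalize is illustrative, not the project's.

```cpp
#include <cstdint>
#include <vector>

// Histogram equalisation for an 8-bit grayscale image held in a vector.
void equalize(std::vector<unsigned char>& img) {
    if (img.empty()) return;

    // 1. Build the histogram of gray levels.
    uint32_t hist[256] = {0};
    for (unsigned char p : img) ++hist[p];

    // 2. Accumulate it into a cumulative distribution.
    uint32_t cdf[256];
    uint32_t run = 0;
    for (int g = 0; g < 256; ++g) { run += hist[g]; cdf[g] = run; }

    // 3. Map each level to round(255 * cdf(g) / N) and remap the pixels.
    const double n = static_cast<double>(img.size());
    unsigned char map[256];
    for (int g = 0; g < 256; ++g)
        map[g] = static_cast<unsigned char>(255.0 * cdf[g] / n + 0.5);
    for (unsigned char& p : img) p = map[p];
}
```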
The class Specification inherits from ImageProcessor and provides a function which first
constructs the histogram of the source BMP file, matches it to the target histogram,
and stores the histogram-specified image in the destination BMP file.
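Histogram specification (matching) remaps the source image so that its gray-level distribution approximates a given target histogram: both cumulative distributions are normalised, and each source level is sent to the target level with the nearest CDF value. The sketch below assumes this standard algorithm; specifyHistogram is an illustrative name.

```cpp
#include <cmath>
#include <vector>

// Remap an 8-bit grayscale image so its distribution approximates
// targetHist (256 non-negative weights, not necessarily normalised).
void specifyHistogram(std::vector<unsigned char>& img,
                      const double targetHist[256]) {
    if (img.empty()) return;
    double srcCdf[256] = {0}, tgtCdf[256] = {0};
    for (unsigned char p : img) srcCdf[p] += 1.0;   // raw counts first

    double sRun = 0, tRun = 0, tTotal = 0;
    for (int g = 0; g < 256; ++g) tTotal += targetHist[g];
    for (int g = 0; g < 256; ++g) {
        sRun += srcCdf[g] / img.size(); srcCdf[g] = sRun;
        tRun += targetHist[g] / tTotal; tgtCdf[g] = tRun;
    }

    // For each source level, pick the target level with the closest CDF.
    unsigned char map[256];
    for (int g = 0; g < 256; ++g) {
        int best = 0;
        double bestDiff = 2.0;
        for (int t = 0; t < 256; ++t) {
            double d = std::fabs(tgtCdf[t] - srcCdf[g]);
            if (d < bestDiff) { bestDiff = d; best = t; }
        }
        map[g] = static_cast<unsigned char>(best);
    }
    for (unsigned char& p : img) p = map[p];
}
```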
The function ‘Make Face’ in the file DisplayView.cpp uses the three parts selected and
makes the face using the functionality provided by the five classes in project.h.
CODE FOR DISPLAYVIEW.CPP
// Global Variables
CButton *moreimages=new CButton();
// CDisplayView construction/destruction
• TEST REPORTS
• TEST PLAN
• TEST PROCEDURES
• TEST CASES
7.1 TEST REPORTS
This section focuses on the different types of testing techniques used to check the
correctness of the software. Exhaustive testing cannot be applied to this project,
because each module is a small part of the whole system and has to work in
co-ordination with the other modules mentioned earlier.
7.2 TEST PLAN
Testing is a set of activities that can be planned in advance and conducted
systematically. For this reason, a template for software testing – a set of steps into
which we can place specific test-case design techniques and testing methods – should
be defined for the software process.
The templates for software testing have the following generic characteristics:
• Testing begins at the component level and works outward toward the
integration of the entire computer-based system.
• Different testing techniques are appropriate at different points in time.
• Testing and debugging must be accommodated in any testing strategy.
• A successful test is one that uncovers an as-yet-undiscovered error.
A strategy for software testing must accommodate low-level tests that are necessary to
verify that a small source-code segment has been correctly implemented, as well as
high-level tests that validate major system functions against customer requirements.
7.3 TESTING PROCEDURES
7.3.1 White Box Testing:
• The sequential flow of each statement has been checked.
• In branch testing, both the true and the false branches are tested.
• The computation taking place at every step is checked to see whether the
desired results are achieved.
• Debugging of the overall program code was done by stepping through the
code using the F8 key.
• Thus, the internal logic of the program is tested efficiently.
7.3.2 Black Box Testing:
• The appropriate image is entered into the forms and the validations are
checked.
• The dependency, or flow of data, from one form to another is verified.
• The proper linking of each form from the main form is also verified.
• The actions associated with the buttons are verified.
• Once a test transaction has been done, the data is checked in the
database to verify the correctness of the data computation.
• Thus, inputs were provided to the application and the results obtained were
matched with the desired output.
7.3.3 Unit testing:
• The modules, events and subroutines in the program are initially identified,
e.g. Head, Nose, Chin.
• The rules related to each module are tested and verified individually
against all possible values.
7.3.4 Integration Testing:
A bottom-up approach to integration testing is used in the project.
• The modules in the program are initially identified, e.g. Head, Nose, Chin.
• Every module is tested and verified against the set of all possible
inputs.
• Finally, the overall program is tested for efficient functionality of the
application as a whole.
7.3.5 Regression Testing:
• Each time a new frame was added to the second module, regression
testing was performed to make sure that the newly added frame worked
well with the existing software.
Thus, in varied ways the testing part was accomplished.
7.4 TEST CASES:
Test case 1: Variable-sized image parts were stored in the database – face part:
200x200, nose part: 400x400 and lips part: 120x300.
Problem: The software failed to combine the three parts because pre-defined matrices
had been allocated for buffering the images, so the images were clipped.
Solution: The matrices were allocated dynamically instead of being pre-allocated, which
enabled the use of variable-sized images and solved the problem.
Test case 2: Variable-sized images were used to morph one face into another – some
images were smaller, while others were larger.
Problem: In some cases the final resultant image was of a smaller dimension than was
desirable.
Solution: The user was allowed to customize the size of each image part, enabling him
to create results of a user-defined size.
Test case 3: Three different parts were selected and then combined to form the
resultant morphed image.
Problem: In some cases the final resultant image exhibited a bright band of light at the
edges between the parts.
Solution: The resultant image was passed through a low-pass filter to reduce the
brightness and improve the blurring of the edges.
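The low-pass filtering step can be sketched as a simple 3x3 averaging (box) filter, which attenuates bright bands and softens hard edges; lowPass3x3 is an illustrative name, not the project's actual routine.

```cpp
#include <vector>

// 3x3 low-pass (box) filter: each interior pixel becomes the mean of
// its 3x3 neighbourhood. Border pixels are copied through unchanged.
std::vector<unsigned char> lowPass3x3(const std::vector<unsigned char>& src,
                                      int w, int h) {
    std::vector<unsigned char> dst = src;   // borders keep source values
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            int sum = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    sum += src[(y + dy) * w + (x + dx)];
            dst[y * w + x] = static_cast<unsigned char>(sum / 9);
        }
    }
    return dst;
}
```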
• GUI SHOWING THE FIRST SCREEN (DATABASE OF IMAGES)
• GUI SHOWING THE SECOND SCREEN (MORE IMAGES)
• GUI SHOWING THE MORPHED IMAGE
• GUI SHOWING THE CUSTOMIZE SCREEN
• GUI SHOWING THE RESULTANT IMAGE
• GUI SHOWING AN ERROR SCREEN
The first screen, giving the options for selecting the head, nose and lips.
Second screen giving more options
The resultant morphed image after combining the parts.
Dialog box asking for customizing information:
Final image after customizing:
The screen showing an error if any of the 3 parts are not selected.
RESULTS AND DISCUSSION
From the perspective of a critical review, the major problem with our project currently
is color equalization: because the image uses the RGB color model, equalization also
affects the hue and saturation components of the image, which causes some distortion
in the final image. Hence a major improvement would be to perform color equalization
using the HSI model. Another improvement would be to include image-warping
techniques, which could be used to adjust some of the features such as the nose and
ears.
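The HSI idea can be sketched without a full color-space conversion: equalize only the intensity I = (R+G+B)/3 and rescale each pixel's RGB values proportionally, so the colour ratios (and hence hue and saturation) are left untouched. The RGB struct and equalizeIntensity below are illustrative assumptions, not project code.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct RGB { unsigned char r, g, b; };

// Equalise only the intensity channel of an RGB image, then scale each
// pixel's RGB components by Inew/Iold to preserve the colour ratios.
void equalizeIntensity(std::vector<RGB>& img) {
    if (img.empty()) return;

    // Per-pixel intensity I = (R+G+B)/3.
    std::vector<unsigned char> I(img.size());
    for (size_t i = 0; i < img.size(); ++i)
        I[i] = static_cast<unsigned char>((img[i].r + img[i].g + img[i].b) / 3);

    // Histogram-equalise the intensity channel.
    uint32_t hist[256] = {0};
    for (unsigned char v : I) ++hist[v];
    uint32_t run = 0;
    unsigned char map[256];
    for (int g = 0; g < 256; ++g) {
        run += hist[g];
        map[g] = static_cast<unsigned char>(255.0 * run / I.size() + 0.5);
    }

    // Rescale RGB by the intensity ratio, clamping to 255.
    for (size_t i = 0; i < img.size(); ++i) {
        double oldI = std::max<int>(I[i], 1);   // avoid division by zero
        double s = map[I[i]] / oldI;
        img[i].r = static_cast<unsigned char>(std::min(255.0, img[i].r * s));
        img[i].g = static_cast<unsigned char>(std::min(255.0, img[i].g * s));
        img[i].b = static_cast<unsigned char>(std::min(255.0, img[i].b * s));
    }
}
```

A proper HSI implementation would convert explicitly, equalize I, and convert back; the proportional rescaling above is only the simplest approximation of that idea.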
Due to severe constraints on the time we had for implementing this project, some extra
features could not be implemented and are therefore mentioned here as future work.
The future work for our project involves using the HSI color model to achieve color
equalization of the image. We also have to provide for handling images of female
faces. Finally, techniques from image warping could be used to provide extra
functionality for adjusting the various features of a face.
1. Rafael C. Gonzalez and Richard E. Woods, “Digital Image Processing”,
Prentice Hall, 2002.
2. A. K. Jain, “Fundamentals of Digital Image Processing”, Prentice Hall.
3. Praffenburget & Gutzman, “VC++ Bible”, Hungry Minds, 1998.
4. David J. Kruglinski, “Inside Visual C++”, Microsoft Press, 2000.
5. “Visual C++ in 21 Days”, Sams Publishing, 1998.
6. Roger Pressman, “Software Engineering”, Nandu Publications, 5th Edition.
7. Damian Carrington, “New Scientist Print Edition”, Glasgow, 2001.
9.3 WEB SITES: