IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 15, Issue 5 (Nov. - Dec. 2013), PP 61-68
www.iosrjournals.org
A Novel approach for Graphical User Interface development and
real time Object and Face Tracking using Image Processing and
Computer Vision Techniques implemented in MATLAB
Satyajit Mahanta1, Souvik Ghosal2, Partha Das3, Aditi Datta4, Saurav Debnath5, Sauvik Das Gupta6
1,2,3,4,5 West Bengal University of Technology, Kolkata, West Bengal, India
6 School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK, USA
Abstract: Image Processing is an advancement in the signal processing domain where the input is an image or video and the output can be an image, a video, or data in some other form. With daily advances in Human-Computer Interaction, Image Processing, along with Computer Vision, has become an integral part of any intelligent system. Both are based mainly on the workings of the visual perception function of the human brain, and most algorithms are designed around this central concept. The major application of Computer Vision is in the field of intelligent machines and robotic vision, where machines are made intelligent by sensing their surroundings visually and optimizing their behaviour accordingly. Through computer vision, robots are made capable of sensing their surroundings, just like a human.
Index Terms: Image Processing, GUI, Object Tracking, Face Tracking, Edge Detection.
I. Introduction
In Image Processing and Computer Vision, the basic idea is to mimic the human visual perception powers and to find the mathematical models behind them [1] [2] [3]. Having achieved this, the next challenge is to implement these models in viable real-world systems and applications. In this paper, we implement object detection and tracking along with face detection and tracking in real time. We first start with detecting and tracking objects and gradually expand this system to controlling a real-world object, here the iRobot Create. Though multiple algorithms have been developed for detecting objects, we base our approach on colour-based segmentation. Through this we detect an object and its centroid and, over the course of multiple frames, track it based on its colour. We then apply the findings to real-world systems (Arduino and iRobot Create), gaining the ability to control them wirelessly. We develop the system further until we have complete control of the robot: as the user changes the object's position, the robot moves simultaneously in that direction.
We then move on to Computer Vision, where we implement face detection and tracking using the highly efficient Viola-Jones [4] [5] algorithm. For the tracking part, we first detect the face and, based on that and the skin-tone information of the nose (the part of the face with the fewest background pixels), we apply the CAMShift [6] algorithm to track the face over time. After the initial findings, we expand the system to detect and track multiple faces.
Lastly, we gather all our findings and systems in a unified GUI [7] in order to give the user a graphical user interface for ease of access to the advanced functions.
II. GUI for Manipulating Images
A GUI (Graphical User Interface) provides the familiar point-and-click control of a software application; it is an effective design environment because the user does not need to know the programming language in which the software was built in order to use it. Thus a GUI is an interface that can literally be handled by anybody. In this article we discuss a GUI that we have created in MATLAB's graphical user interface environment, and we also discuss briefly how to make such a GUI in MATLAB. This GUI is all about the editing of pictures. An overview of the GUI is given below, but first the procedure for making a GUI is outlined in a nutshell.
2.1: Making a GUI in MATLAB – In MATLAB there are broadly four ways to make a GUI, namely basic graphics, animation, Handle Graphics objects, and creation of a GUI using GUIDE. Here we demonstrate GUI creation using the 'guide' command, which opens a window from which a new GUI can be created or an existing GUI opened. Let us give a brief demonstration of making a new GUI.
Below, Figure 1 is a chart of the various ways of making a GUI in MATLAB:
Figure 1: Basic ways to implement a GUI
Here we use the last of these, creating a GUI using GUIDE, to make a new one. There are some typical steps involved in the process of making a GUI. The diagram shown below demarcates the typical steps to be followed:
Figure 2: Basic steps to make a GUI using GUIDE in MATLAB
Now let us illustrate what these steps really mean. The first step is "Designing the GUI": the user must have a clear mental picture of what he or she is going to design before starting. Next comes "Laying out the GUI" [8] [9]: in this step the user creates a layout of the preferred design in the Layout Editor. Then comes "Programming the GUI": here the user writes the logic for each control in MATLAB's programming language. The last step is "Saving & Running the GUI". One thing to remember when making a GUI is that the design laid out in the Layout Editor is shown as the external interface; buttons, sliders, and many other controls can be incorporated into the design, and these controls are available on the left side of the Layout Editor. The user of the GUI does not write any code to make a particular button active. Instead, the code is placed under the button's callback function, so that when the button is pressed MATLAB runs the program written under that function.
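The button/callback pattern described above is not specific to MATLAB GUIDE. A minimal, language-agnostic sketch of the idea (here in Python with hypothetical names, purely for illustration; the paper's actual GUI is built in MATLAB) is:

```python
# Sketch of the callback idea behind GUIDE push buttons: each button
# stores a function, and "pressing" the button runs that function.

class Button:
    def __init__(self, label, callback):
        self.label = label          # text shown on the button
        self.callback = callback    # code run when the button is pressed

    def press(self):
        # The GUI framework, not the user, invokes the stored callback.
        return self.callback()

# Hypothetical 'Resize' button: the logic lives in the callback.
resize_btn = Button("Resize", lambda: "image resized")
```

In GUIDE, the same idea appears as an auto-generated callback function in the GUI's M-file, into which the user writes the button's logic.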
2.2: Details about the GUI – We have used several functions and operations in our design. First, there are a few buttons (technically known as push buttons) at the top left corner of our design, namely 'Resize' (to resize an image to the user's desired resolution), 'Noise', 'Image adjust', 'Rotate', 'Blend', etc. The operations incorporated in our design include morphological operations, image segmentation, and so on. A few of them are discussed here to give an idea of what these functions do. First, a screenshot of our design is shown below. In this design we show the tweaking of the Red, Green and Blue (RGB) values of a sample image (left), and the resulting image (right).
Figure 3: The GUI design during a real-time RGB value tweaking operation on an image
Next is the 'Blend' push button. Clicking this button lets us lay any two pictures (regardless of resolution) over one another, i.e. merge them together. First the user loads one picture into the GUI main frame; this is assigned as the first image, onto which a second image will be blended. Pressing the button then opens a window to load the other image. We have designed this function so that, when the second image's dimensions do not match the first's, the second image is first resized to the resolution of the first image and the merge is then performed. Two distinct images and their blended form are shown below.
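The resize-then-merge logic can be sketched as follows. This is an illustrative approximation with made-up helper names, not the GUI's actual MATLAB code, and it assumes single-channel images stored as nested lists of pixel values:

```python
def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize of a 2-D list of pixel values."""
    old_h, old_w = len(img), len(img[0])
    return [[img[r * old_h // new_h][c * old_w // new_w]
             for c in range(new_w)] for r in range(new_h)]

def blend(img1, img2, alpha=0.5):
    """Blend img2 onto img1: resize img2 to img1's dimensions first,
    then mix the two pixel-wise with weight alpha."""
    h, w = len(img1), len(img1[0])
    img2 = resize_nearest(img2, h, w)
    return [[round(alpha * img1[r][c] + (1 - alpha) * img2[r][c])
             for c in range(w)] for r in range(h)]
```

A real implementation would apply the same mixing to each of the R, G and B planes.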
Figure 4: The Blend Operation
Another very useful Image Processing feature integrated into our GUI is the ability to find edges in a given image [10]. This functionality is widely used in many Computer Vision techniques, and there are multiple algorithms for finding edges. In our GUI we have included most of the available algorithms. During our work, however, we found that since these algorithms operate only on greyscale (black & white) images, a considerable amount of information is lost when edges must be found in a real-world colour image, as it is first converted to greyscale. To reduce this information loss, we have implemented a simple new algorithm, based on the original algorithms, that finds edges in colour images, as shown in the flow diagram (Figure 5). As can be seen from Figure 7, edges of objects that were lost in the general approach are quite visible with the new approach. The weakest edges are denoted by red, followed by green and blue; the most prominent edges are marked in white. These can be used or discarded (Figure 8) as required by simple thresholding on the component.
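The per-channel idea can be sketched as follows. This is an illustrative approximation using a crude gradient operator, not the exact algorithm of Figure 5: run an edge detector on each of the R, G and B planes separately, then grade each pixel by how many channels report an edge there, so that agreement across 1, 2 or all 3 channels corresponds to the weak-to-prominent levels described above:

```python
def channel_edges(ch, thresh):
    """Binary edge map of one channel via a simple gradient magnitude."""
    h, w = len(ch), len(ch[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h - 1):
        for c in range(w - 1):
            # Sum of horizontal and vertical differences at (r, c).
            g = abs(ch[r][c + 1] - ch[r][c]) + abs(ch[r + 1][c] - ch[r][c])
            out[r][c] = 1 if g > thresh else 0
    return out

def colour_edges(rgb, thresh=30):
    """Edge strength = number of R, G, B channels that see an edge (0-3),
    so no single-channel edge is lost to a greyscale conversion."""
    maps = [channel_edges(ch, thresh) for ch in rgb]
    h, w = len(rgb[0]), len(rgb[0][0])
    return [[sum(m[r][c] for m in maps) for c in range(w)] for r in range(h)]
```

Thresholding the resulting strength map (e.g. keeping only values of 2 or 3) discards the weak edges, as in Figure 8.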
Figure 5: Proposed algorithm for the new Edge Detection
Figure 6: Original colour image
Figure 7: (left) Result after normal edge detection (Canny); (right) result after the new edge detection algorithm
Figure 8: Applying thresholding after the new edge detection algorithm to discard the weak edges
III. Image Processing
3.1: Object Detection & Tracking – As with any living being, in order to track an object we first need to identify and detect it. There are many approaches to this; we base ours on the colour of the object. For this we first initialize the camera, the primary input of our processing system, to start grabbing images and returning them in the RGB colour space. We then set up a loop that iterates through each frame of the input stream, since a video is essentially a stream of still images. Each grabbed frame is first converted to greyscale, which is then subtracted from the required colour channel to segment it. After this we apply filters to remove noise and hence minimize errors due to external conditions. Having done this, we find the areas of the remaining objects, which are now only the objects with the required colour. We thus obtain an array containing the centroid and area of every object of the desired colour, and use this array to draw a box around the detected object(s). Since the centroid is found in every frame, the object is tracked along its motion over the entire duration of the stream.
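The per-frame measurement step can be sketched as follows. This is a minimal illustration (assuming the segmentation and filtering have already produced a binary mask), not the paper's MATLAB code, which uses the Image Processing Toolbox for this:

```python
def detect_object(mask):
    """Return (area, centroid) of the 'on' pixels in a binary mask, as
    produced by the colour segmentation and filtering steps above."""
    pts = [(r, c) for r, row in enumerate(mask)
                  for c, v in enumerate(row) if v]
    if not pts:
        return 0, None          # nothing of the target colour in frame
    area = len(pts)             # area = number of segmented pixels
    cy = sum(r for r, _ in pts) / area
    cx = sum(c for _, c in pts) / area
    return area, (cx, cy)
```

Running this on every frame yields the stream of centroids and areas that the tracking and control stages consume.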
Figure 9: Proposed scheme for Object detection & tracking
Figure 10: Objects being detected & tracked
3.2: Object-Based Control – Once we are able to detect and track an object, we can easily use the data so obtained to control real-world systems. For the preliminary stage we chose a basic Arduino board with an LED. The idea was to make the LED light up when the object was in a given range on the X-Y axes and turn off when it went out of range. We used the difference between the previous and present coordinates of the tracked object's centroid. By analysing the result of this operation we can detect whether the object moved left or right, or up or down: for example, if the difference along the X axis is greater than along the Y axis, we can say the object moved along the X axis, and vice versa. Differences within a small threshold are ignored, to prevent the system from becoming so sensitive to motion that it responds to even slight hand jerks.
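The centroid-differencing rule can be sketched as follows (a hypothetical helper with an arbitrary dead-zone threshold, illustrating the logic rather than reproducing the MATLAB implementation; image coordinates are assumed, so y grows downward):

```python
def classify_motion(prev, curr, thresh=5):
    """Classify the dominant motion axis from two successive centroids,
    ignoring jitter below the threshold (slight hand jerks)."""
    dx = curr[0] - prev[0]
    dy = curr[1] - prev[1]
    if max(abs(dx), abs(dy)) < thresh:
        return "none"                      # inside the dead zone: ignore
    if abs(dx) > abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"      # y axis points down in images
```

The returned label is what would drive the LED (or, later, the iRobot) in the corresponding direction.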
3.3: Object-Based Control of the iRobot – Once the control algorithm had been tested, we applied the findings to control the iRobot over a Bluetooth Access Module (BAM), which gives us a large range of wireless operability. Just as with the Arduino board, we detected the motion using the same algorithm (Figure 11): for movement along the Y axis we made the robot turn clockwise/anti-clockwise, and for the X axis we made it go forward and backward, thus covering 2-D movement of the object. We then further expanded the dimensionality of the iRobot's movement by tracking the object's area as well: if the area increases, the object is coming closer to the camera, while a decrease signifies the object moving away along the Z axis. With this in mind we made the iRobot back away when the object moved closer and approach when the object moved away, thus allowing 3-D motion control of the iRobot.
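The Z-axis rule can be sketched as follows (a hypothetical helper with an arbitrary threshold and made-up command names; the actual drive commands are issued to the iRobot over the BAM link):

```python
def z_command(prev_area, curr_area, thresh=50):
    """Map the tracked object's area change to a drive command:
    a growing area means the object is approaching the camera, so the
    robot backs away, and vice versa; small changes are ignored."""
    d = curr_area - prev_area
    if abs(d) < thresh:
        return "hold"                       # area roughly constant
    return "backward" if d > 0 else "forward"
```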
Figure 11: Proposed scheme for controlling iRobot through Object Detection
(Flow-chart steps: grab image and return in RGB space → convert to grey scale → subtract from required colour channel → apply filters → find area and object centroid.)
3.4: Object Motion Mimicking – After controlling the iRobot in three dimensions, we further expanded its capability by making the robot mimic the object's movement, freeing it from rigid straight-line movements. The robot's movement thus follows the pattern plotted by the movement of the object. The basic idea behind the algorithm is that every movement in a Cartesian plane can be described with coordinates, and when translated to the real world it can be described by a change of angle along with forward, backward, or sideways movement. So, as in the previous algorithm, we detect which direction the object is moving, and along with that we also calculate the angle at which it moved. This is easily found by calculating the slope between the two centroid positions and normalising it to a value between –π and π radians. As before, we keep a threshold to ignore unwanted hand jerks. We then make the iRobot turn through the calculated angle along with the direction of movement, thus mimicking the motion of the object in the real world.
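The slope-to-angle step can be sketched in one line: the two-argument arctangent of the centroid displacement already returns an angle normalised to (−π, π]. This is an illustration of the described computation, not the paper's MATLAB code (where `atan2` plays the same role):

```python
import math

def motion_angle(prev, curr):
    """Angle of the centroid's displacement between two frames,
    normalised to (-pi, pi] radians by atan2."""
    return math.atan2(curr[1] - prev[1], curr[0] - prev[0])
```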
3.5: Following a Map – From all this, we also observed that we can make the iRobot follow a map or pattern in the real world based on a map or pattern drawn as an image. The basic idea is that, as in the mimicking algorithm, the iRobot follows a pattern, but instead of taking its input from the object's movement, it takes a pre-drawn pattern in a binary image. A second, zero-initialized matrix (the reference) with the same dimensions as the image is maintained to track looping. Starting from the starting point, the program samples its neighbouring pixels, which can be in front or to the sides. If one of those pixels is set to 1 and the corresponding pixel in the reference is not, the iRobot moves in that direction and updates the corresponding pixel in the reference. The entire process repeats until the end of the map is reached (there are no more pixels to move forward to) or an already-traversed pixel (per the reference) is met. If the robot encounters a cross-section (overlapping paths), it randomly decides on a direction and updates the reference; the next time it arrives there, because the reference has been updated, only one way remains. Thus the robot is made to follow a given map/pattern even with loops. The algorithm was further extended by making the reference increase the weight it deposits on a traversed path, like an ant; this tracks the number of loops the robot is required to perform. A scale factor is also kept for map-to-real-world distance scaling; it can be tweaked to set how much real-world distance each pixel designates.
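The map-walking loop can be sketched as follows. This is a simplified illustration of the described reference-matrix scheme (4-connected neighbours, and a deterministic first-choice at crossings where the paper picks randomly), not the actual MATLAB implementation:

```python
def follow_map(grid, start):
    """Walk a binary path image from `start`, using a zero-initialised
    reference matrix of the same size to avoid re-traversing pixels.
    Stops when no unvisited path pixel is adjacent (end of map, or a
    loop has closed)."""
    h, w = len(grid), len(grid[0])
    ref = [[0] * w for _ in range(h)]      # the reference matrix
    r, c = start
    ref[r][c] = 1
    path = [(r, c)]
    while True:
        moves = [(r + dr, c + dc)
                 for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                 if 0 <= r + dr < h and 0 <= c + dc < w
                 and grid[r + dr][c + dc] == 1
                 and ref[r + dr][c + dc] == 0]
        if not moves:
            break                          # end of map or loop closed
        r, c = moves[0]                    # at a crossing: pick first here
        ref[r][c] = 1                      # mark as traversed
        path.append((r, c))
    return path
```

Each step of the returned path would be multiplied by the scale factor and sent to the robot as a real-world move.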
IV. Face Detection
4.1: The Viola-Jones Algorithm – In 2001, Paul Viola and Michael Jones proposed a framework, the Viola-Jones algorithm, that uses a variant of AdaBoost, Haar features, and the integral image to detect faces in a given image. The algorithm can also be used to detect other parts of the face, such as the eyes, nose, or upper body. Although it was motivated by the problem of face detection, it works as a general object detector and can be trained to detect a wide variety of objects.
4.2: Methodology – We have used the Viola-Jones algorithm to implement a real-time face detection system. As illustrated in the flow chart in Figure 12, we first initialize the cascade object detector to detect a face. There are various ways to detect a face with the Viola-Jones algorithm; the detector can be initialized for different parts of the face, such as the eyes, nose, or upper body. Since the face is distinguished mainly by skin tone, we base our approach on this and first convert the captured image to the Hue channel, as this carries the skin-tone information and gives a much better detection rate. Once the face is detected, our next objective is to track it. We use a histogram-based tracker, which tracks an object based on pixel density over time, to follow the subject's face. To get better and more consistent tracking results, we first extract the face and detect the nose within it using the Viola-Jones algorithm, all on the Hue channel data. We use the nose rather than the whole face for tracking because the nose is the most prominent part of a human face; moreover, the nose region contains the fewest background pixels and shows the least variance due to external conditions such as illumination, as can easily be seen in the skin-tone information of the Hue channel. Once the nose is detected, we initialize the tracker with it and extrapolate the tracked information to account for the whole face. We further expanded the system to detect multiple faces, using the same principle looped many times over. The system so developed was integrated into the GUI developed and discussed in the first part of this paper. Figure 13 shows the system detecting and tracking multiple faces at once.
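The Hue-channel extraction that underlies the skin-tone step can be sketched as follows. This is a stand-alone illustration using Python's standard `colorsys` module (the MATLAB implementation would use its own RGB-to-HSV conversion); hue is returned in [0, 1):

```python
import colorsys

def hue_channel(rgb_img):
    """Hue plane of an RGB image (8-bit values), as used for the
    skin-tone based face and nose detection above."""
    return [[colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0]
             for (r, g, b) in row] for row in rgb_img]
```

Skin pixels cluster in a narrow band of this plane regardless of brightness, which is why detection and tracking on the Hue channel is more robust to illumination changes.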
Figure 12: Proposed method for Face Detection & Tracking
Figure 13: Multiple faces being detected and tracked simultaneously
V. Discussion & Future Prospects
In this paper we have implemented many functions in the GUI that are widely used in the field of image processing, and we have also tried to improve some of them. However, not all of the functions can be shown in one screenshot, as there are other sub-steps involved. Image Processing opens up a variety of Human-Computer Interaction that is limited only by the human imagination, and the work shown here can easily be extended to various real-world situations. For example, the iRobot map-following function could be embedded in a robot that performs delivery tasks, like a peon, in a disaster-stricken area where normal transportation is hindered: the topography of the area can be mapped using conventional methods, and an automated delivery or even rescue system can then be set up quickly by chalking out a map based on that topography for the robot to follow. The motion-mimicking part can be integrated here to make the process real time, for instance to guide people inside a collapsed building by drawing the escape path and making a guide robot follow it. This is just one way of applying the work done here to the real world. The algorithms that have been implemented still have room for improvement. For example, in the basic object detection model, instead of scanning the entire image for the next position of the tracked object, we could set up a probabilistic boundary within which to search first, and scan the entire image only if the object is not found there. This would yield an even better, faster, and more efficient detection and tracking algorithm.
Along the same line, we have face detection and tracking. Needless to say, this has an ever-growing demand in the field of security, but as more powerful hardware is miniaturized, face detection and tracking are also creeping into our daily lives. Smart TVs and other electronics, for example, can regulate themselves for efficient energy management by automatically tracking viewers, turning themselves off or going into sleep mode when no person is in their field of service. Advanced gesture recognition systems can also be built on facial feature detection: one example out of many is a user controlling robots or other electronics with just the movement of their head, which the electronics can track. It is only a matter of time before electronics give us personalized services based on who we are.
Acknowledgements
We would like to thank ESL, www.eschoollearning.net, Kolkata for giving us a platform to work on this project.
References
[1] Richard Szeliski, Computer Vision: Algorithms and Applications, Springer, 2010.
[2] R. Fisher, K. Dawson-Howe, A. Fitzgibbon, C. Robertson and E. Trucco, Dictionary of Computer Vision and Image Processing, John Wiley, 2005.
[3] F. Tanner, B. Colder, C. Pullen, D. Heagy, C. Oertel and P. Sallee, Overhead Imagery Research Data Set (OIRDS) – an annotated data library and tools to aid in the development of computer vision algorithms, June 2009.
[4] Liming Wang, Jianbo Shi, Gang Song and I-fan Shen, Object Detection Combining Recognition and Segmentation, Fudan University, Shanghai, PRC, 200433, obj_det_liming_accv07.pdf.
[5] P. Viola and M. Jones, Robust Real-time Object Detection, IJCV, 2001, http://en.wikipedia.org/wiki/Caltech_101#cite_note-Viola_Jones-
[6] D. Comaniciu, V. Ramesh and P. Meer, Real-time tracking of non-rigid objects using mean shift, in Proc. Conf. Computer Vision and Pattern Recognition, pages II: 142–149, Hilton Head, SC, June 2000.
[7] Scott T. Smith, MATLAB Advanced GUI Development, Dog Ear Publishing, 2006.
[8] Yair Moshe, GUI with MATLAB, Signal and Image Processing Laboratory, May 2004.
[9] Rafael C. Gonzalez, Richard E. Woods and Steven L. Eddins, Digital Image Processing Using MATLAB, 2004.
[10] Scott E. Umbaugh, Digital Image Processing and Analysis: Human and Computer Vision Applications with CVIPtools, CRC Press, November 19, 2010.

More Related Content

What's hot

computer graphics
computer graphicscomputer graphics
computer graphics
ashpri156
 
Introduction to graphics
Introduction to graphicsIntroduction to graphics
Introduction to graphics
Sowmya Jyothi
 
Introduction to computer graphics
Introduction to computer graphicsIntroduction to computer graphics
Introduction to computer graphics
sonal_badhe
 
Applications of cg
Applications of cgApplications of cg
Applications of cg
Ankit Garg
 
Panorama Technique for 3D Animation movie, Design and Evaluating
Panorama Technique for 3D Animation movie, Design and EvaluatingPanorama Technique for 3D Animation movie, Design and Evaluating
Panorama Technique for 3D Animation movie, Design and Evaluating
IOSRjournaljce
 
Sketch2presentation
Sketch2presentationSketch2presentation
Sketch2presentation
jin.fan
 
IRJET- 3D Drawing with Augmented Reality
IRJET- 3D Drawing with Augmented RealityIRJET- 3D Drawing with Augmented Reality
IRJET- 3D Drawing with Augmented Reality
IRJET Journal
 
01 introduction image processing analysis
01 introduction image processing analysis01 introduction image processing analysis
01 introduction image processing analysis
Rumah Belajar
 
Introduction to Computer Graphics
Introduction to Computer GraphicsIntroduction to Computer Graphics
Introduction to Computer Graphics
Budditha Hettige
 
Animation
AnimationAnimation
Animation
Vasu Mathi
 
From Image Processing To Computer Vision
From Image Processing To Computer VisionFrom Image Processing To Computer Vision
From Image Processing To Computer Vision
Joud Khattab
 
Mouse Simulation Using Two Coloured Tapes
Mouse Simulation Using Two Coloured Tapes Mouse Simulation Using Two Coloured Tapes
Mouse Simulation Using Two Coloured Tapes
ijistjournal
 
Introduction to computer graphics
Introduction to computer graphicsIntroduction to computer graphics
Introduction to computer graphics
Kamal Acharya
 
Gesture Gaming on the World Wide Web Using an Ordinary Web Camera
Gesture Gaming on the World Wide Web Using an Ordinary Web CameraGesture Gaming on the World Wide Web Using an Ordinary Web Camera
Gesture Gaming on the World Wide Web Using an Ordinary Web Camera
IJERD Editor
 
Computer Graphics
Computer GraphicsComputer Graphics
Introduction to computer graphics
Introduction to computer graphicsIntroduction to computer graphics
Introduction to computer graphics
Amandeep Kaur
 
Feature Extraction of Gesture Recognition Based on Image Analysis for Differe...
Feature Extraction of Gesture Recognition Based on Image Analysis for Differe...Feature Extraction of Gesture Recognition Based on Image Analysis for Differe...
Feature Extraction of Gesture Recognition Based on Image Analysis for Differe...
IJERA Editor
 
Introduction to computer graphics
Introduction to computer graphicsIntroduction to computer graphics
Introduction to computer graphics
Partnered Health
 
3 d computer graphics software
3 d computer graphics software3 d computer graphics software
3 d computer graphics software
Afnan Asem
 

What's hot (19)

computer graphics
computer graphicscomputer graphics
computer graphics
 
Introduction to graphics
Introduction to graphicsIntroduction to graphics
Introduction to graphics
 
Introduction to computer graphics
Introduction to computer graphicsIntroduction to computer graphics
Introduction to computer graphics
 
Applications of cg
Applications of cgApplications of cg
Applications of cg
 
Panorama Technique for 3D Animation movie, Design and Evaluating
Panorama Technique for 3D Animation movie, Design and EvaluatingPanorama Technique for 3D Animation movie, Design and Evaluating
Panorama Technique for 3D Animation movie, Design and Evaluating
 
Sketch2presentation
Sketch2presentationSketch2presentation
Sketch2presentation
 
IRJET- 3D Drawing with Augmented Reality
IRJET- 3D Drawing with Augmented RealityIRJET- 3D Drawing with Augmented Reality
IRJET- 3D Drawing with Augmented Reality
 
01 introduction image processing analysis
01 introduction image processing analysis01 introduction image processing analysis
01 introduction image processing analysis
 
Introduction to Computer Graphics
Introduction to Computer GraphicsIntroduction to Computer Graphics
Introduction to Computer Graphics
 
Animation
AnimationAnimation
Animation
 
From Image Processing To Computer Vision
From Image Processing To Computer VisionFrom Image Processing To Computer Vision
From Image Processing To Computer Vision
 
Mouse Simulation Using Two Coloured Tapes
Mouse Simulation Using Two Coloured Tapes Mouse Simulation Using Two Coloured Tapes
Mouse Simulation Using Two Coloured Tapes
 
Introduction to computer graphics
Introduction to computer graphicsIntroduction to computer graphics
Introduction to computer graphics
 
Gesture Gaming on the World Wide Web Using an Ordinary Web Camera
Gesture Gaming on the World Wide Web Using an Ordinary Web CameraGesture Gaming on the World Wide Web Using an Ordinary Web Camera
Gesture Gaming on the World Wide Web Using an Ordinary Web Camera
 
Computer Graphics
Computer GraphicsComputer Graphics
Computer Graphics
 
Introduction to computer graphics
Introduction to computer graphicsIntroduction to computer graphics
Introduction to computer graphics
 
Feature Extraction of Gesture Recognition Based on Image Analysis for Differe...
Feature Extraction of Gesture Recognition Based on Image Analysis for Differe...Feature Extraction of Gesture Recognition Based on Image Analysis for Differe...
Feature Extraction of Gesture Recognition Based on Image Analysis for Differe...
 
Introduction to computer graphics
Introduction to computer graphicsIntroduction to computer graphics
Introduction to computer graphics
 
3 d computer graphics software
3 d computer graphics software3 d computer graphics software
3 d computer graphics software
 

Viewers also liked

F012475664
F012475664F012475664
F012475664
IOSR Journals
 
Comparison of an Isolated bidirectional Dc-Dc converter with and without a Fl...
Comparison of an Isolated bidirectional Dc-Dc converter with and without a Fl...Comparison of an Isolated bidirectional Dc-Dc converter with and without a Fl...
Comparison of an Isolated bidirectional Dc-Dc converter with and without a Fl...
IOSR Journals
 
F1803013034
F1803013034F1803013034
F1803013034
IOSR Journals
 
L1803027588
L1803027588L1803027588
L1803027588
IOSR Journals
 
I010336070
I010336070I010336070
I010336070
IOSR Journals
 
A0530106
A0530106A0530106
A0530106
IOSR Journals
 
T130403141145
T130403141145T130403141145
T130403141145
IOSR Journals
 
F1303053542
F1303053542F1303053542
F1303053542
IOSR Journals
 
Educational Process Mining-Different Perspectives
Educational Process Mining-Different PerspectivesEducational Process Mining-Different Perspectives
Educational Process Mining-Different Perspectives
IOSR Journals
 
B012651523
B012651523B012651523
B012651523
IOSR Journals
 
In this paper, we implement object detection and tracking, along with face detection and tracking, in real time. We first start off with detecting and tracking objects and gradually expand this system to controlling a real-world object, here the iRobot Create. Though multiple algorithms have been developed for detecting objects, we base our approach on colour-based segmentation. Through this we detect an object and its centroid and, over the course of multiple frames, track it based on its colour. We then apply the findings to real-world systems (an Arduino board and the iRobot Create), ending up with the ability to control them wirelessly. We develop the system further to give complete control of the robot: as the user changes the object's position, the robot moves simultaneously in that direction. Moving on to Computer Vision, we implement face detection and tracking using the highly efficient Viola-Jones algorithm [4] [5]. For the tracking part, we first detect the face and, based on that and the skin-tone information of the nose (the part of the face with the fewest background pixels), we apply a CAMShift algorithm [6] to track the face over time. After the initial findings, we expand the system to detect and track multiple faces. Lastly, we gather all our findings and systems into a unified GUI [7], giving the user a graphical user interface for easy access to the advanced functions.

II. GUI for Manipulating Images
A GUI (Graphical User Interface) provides popular point-and-click control of a software application. It is an effective design environment because the user does not need to know the programming language in which the software was built in order to use it. Thus, a GUI is an interface that can be handled by virtually anybody.
In this article we discuss a GUI that we have created in MATLAB's Graphical User Interface (GUI) environment, and we also briefly describe how to build such a GUI in MATLAB. This GUI is all about editing pictures. An overview of the GUI is given below, but first the procedure for making one is outlined in a nutshell.

2.1: Making a GUI in MATLAB - In MATLAB there are broadly four ways to make a GUI, namely Basic Graphics, Animation, Handle Graphics Objects, and creation of a GUI using GUIDE. Here we demonstrate GUI creation using the 'guide' command. This command opens a window from which a new GUI can be created or an existing GUI opened. Let us give a brief demonstration of making a new GUI.
Below, Figure 1 is a chart of the various ways of making a GUI in MATLAB:

Figure 1: Basic ways to implement a GUI

Here we use the last one, i.e. 'Creating a GUI using GUIDE', to make a new GUI. There are some typical steps involved in the process; the diagram below shows the steps to be followed:

Figure 2: Basic steps to make a GUI using GUIDE in MATLAB

Now, let us illustrate what these steps really mean. The first step is "Designing the GUI". Here the user must have a clear mental picture of what he or she is going to design, and must start the design accordingly. Next comes "Laying out the GUI" [8] [9]. In this step the user creates a layout of the preferred design in the Layout Editor. Then comes "Programming the GUI". In this step the user must create the logic for their own preferences and write that logic in the MATLAB programming language. The last step is "Saving & Running the GUI". One thing to remember when making a GUI is that the design done in the Layout Editor is shown as the external interface, and buttons, sliders and many other controls can be incorporated into the design. These various controls are available on the left side of the Layout Editor. The user of the GUI does not use any programming language to make a particular button active. Instead, the code is placed under the callback bearing the title of the button, so that when the button is pressed, MATLAB runs the program written under that function.

2.2: Details about the GUI - We have used several functions and operations in our design.
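The callback idea above (code placed "under the title of the button" runs when that button is pressed) can be sketched as follows. This is a hypothetical Python illustration, not the paper's MATLAB/GUIDE code; the handler names and signatures are invented for the example.

```python
# Hypothetical sketch of callback dispatch: each button title maps to a
# handler function, and "pressing" a button runs the code registered
# under that title. (Illustrative Python, not the paper's MATLAB.)

def resize_callback(image, width, height):
    # Placeholder handler: report the requested resize.
    return f"resized to {width}x{height}"

def rotate_callback(image, degrees):
    # Placeholder handler: report the requested rotation.
    return f"rotated by {degrees} degrees"

# The GUI layout only declares the buttons; behaviour lives in the
# handlers registered against each button's title.
callbacks = {
    "Resize": resize_callback,
    "Rotate": rotate_callback,
}

def press(button, image, *args):
    # Simulate a button press: look up and run the registered handler.
    return callbacks[button](image, *args)

print(press("Resize", None, 320, 240))  # resized to 320x240
print(press("Rotate", None, 90))        # rotated by 90 degrees
```

In GUIDE the same pattern appears as auto-generated `Callback` functions in the GUI's M-file, one per control.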
First, there are a few buttons (technically known as push buttons) at the top left corner of our design, namely 'Resize' (to resize an image to the user's desired resolution), 'Noise', 'Image adjust', 'Rotate', 'Blend', etc. The operations we have incorporated in our design include morphological operations, image segmentation, etc. A few of them are discussed here to give an idea of what these functions are and what they do. A screenshot of our design is shown below. In this design we show the tweaking of the Red, Green and Blue (RGB) values of a sample image (left), and the resulting image (right).

Figure 3: The GUI design during a real-time RGB value tweaking operation on images
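The RGB tweaking shown in Figure 3 amounts to scaling each colour channel by a gain and clipping back to the valid range. A minimal sketch (in Python rather than the paper's MATLAB, with the image represented as nested lists of RGB tuples) might look like this:

```python
# Hedged sketch of per-channel RGB tweaking: every pixel's R, G and B
# values are scaled by slider-style gains, then clipped to 0-255.
# (Illustrative Python; the paper's GUI does this in MATLAB.)

def tweak_rgb(image, r_gain, g_gain, b_gain):
    """image: list of rows, each row a list of (r, g, b) tuples."""
    def clip(v):
        # Keep the scaled value inside the valid 8-bit range.
        return max(0, min(255, int(v)))
    return [[(clip(r * r_gain), clip(g * g_gain), clip(b * b_gain))
             for (r, g, b) in row]
            for row in image]

sample = [[(100, 150, 200), (10, 20, 30)]]
print(tweak_rgb(sample, 1.5, 1.0, 0.5))
# [[(150, 150, 100), (15, 20, 15)]]
```

In the actual GUI, the gains would come from slider positions and the result would be redrawn in the right-hand axes after each adjustment.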
Next is the 'Blend' push button. Clicking this button lets us overlay any two pictures (of any resolutions) so that they are merged together. First the user loads one picture into the GUI main frame; this is assigned as the first image, onto which a second image will be blended. Pressing the button then opens a window to load the second image. We have designed this function so that when the second image differs in dimensions from the first, it is first resized to the resolution of the first image and then merged. Two distinct images and their blended form are shown below.

Figure 4: The Blend operation

Another very useful Image Processing feature integrated into our GUI is the ability to find edges in a given image [10]. This functionality is widely used in many Computer Vision techniques, and multiple algorithms exist for finding edges. In our GUI we have included most of the available algorithms. During our work, however, we found that since these algorithms work only with black-and-white images, a considerable amount of information is lost when finding edges in a real-world colour image, as it must first be converted to black and white. To reduce this information loss, we have implemented a simple new algorithm, based on the original algorithms, that finds edges in colour images, as shown in the flow diagram (Figure 5). As can be seen in the results (Figure 7), edges of objects that were lost in the general approach are quite visible with the new approach. The weakest edges are denoted by red, followed by green and blue; the most prominent edges are marked in white. These can be kept or discarded (Figure 8) as required, by simple thresholding on the component.
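The colour-edge idea described above can be sketched as follows: detect edges independently in the R, G and B planes and recombine them, so an edge found in all three planes renders white while a channel-specific edge keeps that channel's colour. This is a hedged Python illustration with a deliberately crude one-dimensional gradient detector standing in for the real edge operators (Canny, Sobel, etc.) the GUI uses:

```python
# Minimal sketch of per-channel edge detection and recombination.
# (Illustrative Python; the paper implements this in MATLAB with the
# standard edge-detection algorithms applied per channel.)

def channel_edges(plane, thresh=50):
    # Crude 1-D gradient detector: mark a pixel if the intensity jump
    # to its right neighbour exceeds the threshold.
    h, w = len(plane), len(plane[0])
    return [[1 if x + 1 < w and abs(plane[y][x + 1] - plane[y][x]) > thresh
             else 0
             for x in range(w)]
            for y in range(h)]

def colour_edges(r, g, b, thresh=50):
    er, eg, eb = (channel_edges(p, thresh) for p in (r, g, b))
    h, w = len(r), len(r[0])
    # Encode edge strength by which channels detected it: an edge in all
    # three channels comes out white (255, 255, 255).
    return [[(255 * er[y][x], 255 * eg[y][x], 255 * eb[y][x])
             for x in range(w)]
            for y in range(h)]

# Tiny example: a boundary visible only in red, and one in all channels.
r = [[0, 200, 200, 0]]
g = [[0,   0, 200, 0]]
b = [[0,   0, 200, 0]]
print(colour_edges(r, g, b))
# [[(255, 0, 0), (0, 255, 255), (255, 255, 255), (0, 0, 0)]]
```

Thresholding on the combined result then discards the single-channel (weak) edges while keeping the white (strong) ones, matching Figure 8.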
Figure 5: Proposed algorithm of the new edge detection

Figure 6: Original colour image

Figure 7: (left) Result after normal edge detection (Canny); (right) result after the new edge detection algorithm

Figure 8: Applying thresholding after the new edge detection algorithm to discard the weak edges

III. Image Processing
3.1: Object Detection & Tracking - As with any living being, in order to track an object we first need to identify and detect that object. There are many approaches to this; we base ours on the colour of the object. First, we initialize the camera, the primary input of our processing system, to start grabbing images and return them in the RGB colour space. We then set up a loop that iterates through each frame of the input stream, since a video is essentially a stream of still images. We convert each grabbed frame to grayscale and then segment the colour channel from it. After this we apply filters to remove noise and hence minimize errors due to external conditions. Having done this, we find the areas of the regions that remain, which are now only the objects with the required colour. We then obtain an array containing the centroid and area of every object with the desired colour, and use this array to draw a box around the detected object(s). Since the centroid is found in every frame, the object is tracked through its motion over the entire duration of the stream.
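The per-frame segmentation step above (subtract the grayscale image from the desired colour channel, threshold, then take the centroid of the surviving pixels) can be sketched as follows. This is a hedged Python illustration of the colour-segmentation idea, not the paper's MATLAB code; the threshold value is an assumed parameter:

```python
# Hedged sketch of colour-based segmentation and centroid finding for
# a single frame. A pixel belongs to the "red object" when its red
# channel stands well above its gray level. (Illustrative Python.)

def red_centroid(image, thresh=40):
    """image: list of rows of (r, g, b); returns (x, y) centroid or None."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, (r, g, b) in enumerate(row):
            gray = (r + g + b) / 3.0
            # Threshold the channel-minus-gray difference to keep only
            # strongly red pixels and suppress neutral background.
            if r - gray > thresh:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # no object of the desired colour in this frame
    return (sum(xs) / len(xs), sum(ys) / len(ys))

frame = [
    [(0, 0, 0), (250, 10, 10), (250, 10, 10)],
    [(0, 0, 0), (250, 10, 10), (250, 10, 10)],
]
print(red_centroid(frame))  # (1.5, 0.5)
```

Running this on every frame of the stream, and drawing a box around each centroid, yields the tracking behaviour shown in Figure 10; a real implementation would also filter the mask to remove noise before taking the centroid.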
Figure 9: Proposed scheme for object detection & tracking

Figure 10: Objects being detected & tracked

3.2: Object-Based Control - Once we are able to detect and track an object, we can easily use the resulting data to control real-world systems. For the preliminary stage we chose a basic Arduino board with an LED. The idea was to make the LED light up when the object was within a given range on the X-Y axes and turn off when it went out of range. We used the difference between the previous and present coordinates of the centroid of the tracked object. By analysing this difference we can detect whether the object moved left or right, up or down. For example, if the difference along the X axis is greater than along the Y axis, we can say the object moved along the X axis, and vice versa. Differences within a small threshold are ignored, to prevent the system from becoming too sensitive to motion and responding to even slight hand jerks.

3.3: Object-Based Control of the iRobot - Once the control algorithm had been tested, we applied the findings to control an iRobot over a Bluetooth Access Module (BAM), which gives us a large range of wireless operability. Just as with the Arduino board, we detected motion using the same algorithm (Figure 11); for movement along the Y axis we made the robot turn clockwise or anti-clockwise, while for the X axis we made it go forward and backward, thus responding to 2-D movement of the object. We then further expanded the dimensions of the iRobot's movement by also tracking the object's area. If the area increases, the object is coming closer to the camera, while a decrease signifies the object moving away from the camera along the Z axis.
With this in mind, we made the iRobot back away when the object moved closer and approach when the object moved away from the camera, thus allowing 3-D motion control of the iRobot.

Figure 11: Proposed scheme for controlling iRobot through Object Detection
(Flowchart steps: grab image in RGB space → convert to grey scale → subtract from required colour channel → apply filters → find area and object centroid.)
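The full 3-D mapping of section 3.3 — X-axis motion to drive, Y-axis motion to turn, area change to the Z-axis retreat/approach rule — can be sketched as follows. The command names and both threshold values are illustrative assumptions, not the iRobot protocol itself:

```python
def irobot_command(prev_c, curr_c, prev_area, curr_area,
                   jitter=5, area_jitter=50):
    """Map object motion between two frames to a drive command.
    Z-axis rule: the robot backs away when the object approaches
    (area grows) and approaches when it recedes."""
    if abs(curr_area - prev_area) > area_jitter:
        return "retreat" if curr_area > prev_area else "approach"
    dx = curr_c[0] - prev_c[0]
    dy = curr_c[1] - prev_c[1]
    if max(abs(dx), abs(dy)) <= jitter:
        return "stop"                       # ignore hand jerks
    if abs(dx) > abs(dy):
        return "forward" if dx > 0 else "backward"    # X axis: drive
    return "clockwise" if dy > 0 else "anticlockwise" # Y axis: turn

print(irobot_command((0, 0), (30, 5), 400, 410))  # forward
print(irobot_command((0, 0), (0, 0), 400, 900))   # retreat
```

In a real system the returned token would be translated into a serial command sent over the Bluetooth Access Module.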
3.4: Object Motion Mimicking - After controlling the iRobot in three dimensions, we further expanded its capability by making the robot mimic the object's movement, freeing it from rigid straight-line movements. The robot's movement thus follows the pattern plotted by the movement of the object. The basic idea behind the algorithm is that each movement in a Cartesian plane can be described by coordinates, and when translated to the real world it can be described by a change of angle together with forward, backward or sideways movement. So, as in the previous algorithm, we detect which direction the object is moving in, and we also calculate the angle through which it moved. This is easily found by calculating the slope between the two centroid positions and normalizing it to a value between −π and π radians. As before, we keep a threshold to ignore unwanted jerks of the hand. We then make the iRobot turn through the calculated angle along the direction of movement, thus mimicking the motion of the object in the real world.

3.5: Following a map - From all this, we also observed that we can make the iRobot follow a map or pattern in the real world based on one drawn as an image. The basic idea is that, as in the mimicking algorithm, the iRobot follows a pattern, but instead of the input coming from the object's movement, it is a pre-drawn pattern in a binary image. A zero-initialized matrix (the reference) with the same dimensions as the image is maintained to track looping. From the starting point, the program samples the next pixel, which can be in front of or to the sides of the current one. If one of these pixels is set to 1 and the corresponding pixel is not yet set to 1 in the reference, the iRobot moves in that direction.
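The traversal rule just stated — move to a neighbouring path pixel only if it is set in the image but not yet set in the reference — can be sketched in outline as follows. This is a minimal sketch: the fixed neighbour order (the paper resolves cross-sections randomly) and the stop-at-dead-end condition are assumptions made to keep the example deterministic:

```python
def follow_map(grid, start):
    """Walk a binary map (1 = path) from `start`, marking visited
    cells in a zero-initialised reference matrix."""
    h, w = len(grid), len(grid[0])
    ref = [[0] * w for _ in range(h)]
    y, x = start
    ref[y][x] = 1
    path = [start]
    while True:
        moved = False
        for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):  # ahead/sides
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w
                    and grid[ny][nx] == 1 and ref[ny][nx] == 0):
                ref[ny][nx] = 1        # deposit weight on traversed cell
                y, x = ny, nx
                path.append((y, x))
                moved = True
                break
        if not moved:                  # dead end, or loop fully traversed
            return path

# An L-shaped path drawn in a 3x3 binary image:
grid = [[1, 1, 0],
        [0, 1, 0],
        [0, 1, 1]]
print(follow_map(grid, (0, 0)))  # [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]
```

Each step of the returned path would then be scaled by the map-to-real-world factor and issued to the robot.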
When the iRobot moves, it updates the corresponding pixel in the reference too. The entire process is repeated until the end of the map is reached (when there are no more pixels to move forward to) or a pixel that has already been traversed (according to the reference) is met. If the robot encounters a cross-section (an overlapping of paths), it randomly decides on a direction and updates the reference; the next time it arrives there, because the reference has been updated, it has only one way to go. Thus the robot is made to follow a given map/pattern even when it contains loops. The algorithm was further extended by making the reference increase the weight it deposits on a traversed path, like an ant, which tracks the number of loops the robot is required to perform. A scale factor is also kept for map-to-real-world distance scaling; it can be tweaked to set how much real-world distance each pixel designates.

IV. Face Detection
4.1: Viola-Jones Algorithm - In 2001, Paul Viola and Michael Jones proposed a framework, the Viola-Jones algorithm, that uses a variant of AdaBoost, Haar features and the integral image to detect faces in a given image. The algorithm can also be used to detect other parts of the face, such as the eyes, nose or upper body. Although it was motivated by the problem of face detection, it is a general object detector and can be trained to detect a wide variety of objects.

4.2: Methodology - We have used the Viola-Jones algorithm to implement a real-time face detection system. As illustrated in the flow chart in Figure 12, we first initialize the cascade object detector to detect a face. The detector can be initialized to detect different parts of the face, such as the eyes, nose or upper body.
Since the face is distinguished mainly by skin tone, we base our approach on this and first convert the captured image to the Hue channel, as this carries the skin-tone information and gives a much better detection rate. Once the face is detected, our next objective is to track it. We implemented a histogram-based tracker, which tracks an object based on its pixel distribution over time, to track the subjects' faces. To get better and more stable tracking results, we first extract the face and detect the nose within it using the Viola-Jones algorithm, all on the Hue channel data. We specifically track the nose rather than the whole face because the nose is the most prominent part of a human face: it contains the fewest background pixels and shows the least variance due to external conditions such as illumination, which can easily be seen in the skin-tone information obtained through the Hue channel. Once the nose is detected, we initialize the tracker with it and extrapolate the tracked region to cover the face. We further extended the system to detect multiple faces, using the same principle looped many times over. The system so developed was integrated into the GUI developed and discussed in the first part of this paper. Figure 13 shows the system detecting and tracking multiple faces at once.
Figure 12: Proposed method for Face Detection & Tracking
Figure 13: Multiple faces being detected and tracked simultaneously

V. Discussion & Future Prospects
In this paper we have implemented in the GUI many functions that are widely used in the field of image processing, and we have also tried to improve some of them. Not all of the functions could be shown in one screenshot, as there are further sub-steps involved in the process. Image Processing opens up a range of Human-Computer interaction limited only by the imagination, and the work shown here can easily be extended to various real-world situations. For example, the map-following iRobot function could be embedded in a robot that performs messenger-like tasks in a disaster-stricken area where normal transportation is hindered. The topography of the area can be mapped using conventional methods, and an automated delivery or even rescue system can then be set up quickly by chalking out a map, based on that topography, for the robot to follow. The motion-mimicking part can be integrated here to make the process real-time, for instance guiding people trapped in a collapsed building by drawing the escape path and having a guide robot follow it. This is just one way of applying the work presented here. The algorithms that have been implemented still have room for improvement. For example, in the basic object detection model, instead of scanning the entire image for the next position of the tracked object, we could set up a probabilistic boundary within which to search for the next position. If the object is not found there, we scan the entire image.
This would make detection and tracking faster and more efficient.
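A minimal sketch of that suggested improvement, assuming a simple per-pixel match test and a square search window (both illustrative choices):

```python
def search_with_window(frame, last_pos, match, radius=10):
    """Look for the tracked object near its last position first;
    fall back to a full scan only if the local window misses.
    `match` tests a single pixel; `last_pos` is (x, y)."""
    h, w = len(frame), len(frame[0])
    lx, ly = last_pos
    # Local window, clipped to the frame bounds:
    for y in range(max(0, ly - radius), min(h, ly + radius + 1)):
        for x in range(max(0, lx - radius), min(w, lx + radius + 1)):
            if match(frame[y][x]):
                return (x, y)
    # Miss: fall back to scanning the entire frame.
    for y in range(h):
        for x in range(w):
            if match(frame[y][x]):
                return (x, y)
    return None

frame = [[0] * 40 for _ in range(40)]
frame[8][12] = 1                               # object near its last position
print(search_with_window(frame, (10, 10), lambda p: p == 1))  # (12, 8)
```

Since the object rarely moves far between consecutive frames, the window hit is the common case and the full scan becomes the exception.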
Along the same lines, we have face detection and tracking. Needless to say, this has an ever-growing demand in the field of security, but as powerful hardware is progressively miniaturized, face detection and tracking are also creeping into our daily lives. Smart TVs and other electronics can be made more efficient by automatically tracking viewers and regulating themselves for efficient energy management, for example turning themselves off or going into sleep mode when they see no person in their field of service. Advanced gesture recognition systems can also be developed using facial-feature detection: one example among many is a user controlling robots or other electronics with just the movement of their head, which the electronics track. It is only a matter of time before we have electronics that give us personalized services based on who we are.

Acknowledgements
We would like to thank ESL, www.eschoollearning.net, Kolkata for giving us a platform to work on this project.

References
[1] Richard Szeliski, Computer Vision: Algorithms and Applications, 2010.
[2] R. Fisher, K. Dawson-Howe, A. Fitzgibbon, C. Robertson, E. Trucco, Dictionary of Computer Vision and Image Processing, John Wiley, 2005.
[3] F. Tanner, B. Colder, C. Pullen, D. Heagy, C. Oertel, and P. Sallee, Overhead Imagery Research Data Set (OIRDS) – an annotated data library and tools to aid in the development of computer vision algorithms, June 2009.
[4] Liming Wang, Jianbo Shi, Gang Song, I-fan Shen, Object Detection Combining Recognition and Segmentation, Fudan University, Shanghai, PRC, 200433, obj_det_liming_accv07.pdf.
[5] P. Viola and M. Jones, Robust Real-time Object Detection, IJCV, 2001, http://en.wikipedia.org/wiki/Caltech_101#cite_note-Viola_Jones-
[6] D. Comaniciu, V. Ramesh and P. Meer, Real-time tracking of non-rigid objects using mean shift, in Proc. Conf. Computer Vision and Pattern Recognition, pages II: 142–149, Hilton Head, SC, June 2000.
[7] Scott T. Smith, MATLAB Advanced GUI Development, Dog Ear Publishing, 2006.
[8] Yair Moshe, GUI with MATLAB, Signal and Image Processing Laboratory, May 2004.
[9] Rafael C. Gonzalez, Richard E. Woods and Steven L. Eddins, Digital Image Processing Using MATLAB, 2004.
[10] Scott E. Umbaugh, Digital Image Processing and Analysis: Human and Computer Vision Applications with CVIPtools, CRC Press, November 19, 2010.