SIM UNIVERSITY
          SCHOOL OF SCIENCE AND TECHNOLOGY




   DEVELOPMENT OF AN OBJECT DETECTION
   PROGRAM FOR A CAMERA FOR A MICRO-
           AERIAL VEHICLE




             STUDENT: AZLI ERWIN AZIZ (PI NO. E0806919)
             SUPERVISOR: SUTTHIPHONG SRIGRAROM
             PROJECT CODE: JAN2012/EAS/021


                 A project report submitted to SIM University
           In partial fulfilment of the requirements for the degree of
                Bachelor of Engineering in Aerospace System
                                       January 2012


[EAS 499 Capstone Project]
[Development of Object Detection Program for Micro Aerial Vehicles]      1
Abstract

Today, with the advancement of microprocessor technology, unmanned aerial
vehicles have been increasing in popularity with private agencies and hobby
enthusiasts. These unmanned aerial vehicles have also shrunk in size and now come
in different rotary forms. Instead of the conventional fixed-wing platform,
unmanned aerial vehicles have taken on new and unconventional configurations:
tri-motor and quad-motor copters have gained popularity in the market. These new
platforms make take-off and landing easier, as the vehicles do not need a long
runway and have attributes similar to those of a traditional helicopter. As global
positioning system (GPS) receivers have become readily accessible to the public,
autonomous aerial vehicles have also emerged. Fitted with GPS receivers and
transmitters, these micro aerial vehicles can fly autonomously to waypoints
specified by the user. However, owing to limitations of GPS such as weak signal
strength indoors, autonomous flight indoors is restricted.


The aim of this project is to develop an object detection program for a digital camera
that can be used in conjunction with a microprocessor on a micro aerial vehicle for
autonomous flight in an indoor environment. The object detection program creates
object boundaries and edges found in the video recorded by the camera, thereby
establishing the boundaries of the environment. This project uses video and image
analysis techniques to build colour correlations and find the edges of objects. The
results of the analysis can then be passed to the microprocessor for processing,
enabling the micro aerial vehicle to head towards or avoid objects in its
surroundings autonomously.


This report covers the objectives of the project, literature research on image
analysis and project management aspects. The main focus is on the
conceptualisation, development, implementation and testing of the object
detection software in MATLAB. Finally, conclusions, recommendations, critical
reviews and reflections on this project are also discussed.




Acknowledgements
I would like to take this opportunity to offer my sincere appreciation to my project
supervisor Dr Sutthiphong Srigrarom (Spot), for his guidance, patience and support
throughout this entire capstone project. During this time, Dr Spot showed the utmost
patience and support even when progress was slow and many challenges were faced.
He never failed to clarify my doubts and assist me in any way possible.


Special thanks to Mr Koh Pak Keng, head of the BEHAS program, Ann Lee and all
other friends in UNISIM for their support and practical suggestions towards this
project. They have made my learning journey in UNISIM a very fulfilling and
enriching experience.


To all lecturers, professors and support staff of the School of Science and
Technology, thank you for directly or indirectly helping me complete my project. To
my managers, who have encouraged and supported me throughout my 4.5 years of
learning in UNISIM.


Last of all, I would like to thank my family members, fiancée and loved ones who
have shown great care and concern for me during this period of studies and have
given me moral support to complete this project. With their support and
encouragement, I was able to persevere despite having other commitments.




Table of Contents
Abstract .................................................................................................................................. 2
Acknowledgements ............................................................................................................ 3
List of figures ........................................................................................................................ 6
Chapter 1:                Introduction ............................................................................................... 7
   1.1 Background and Motivation ............................................................................... 7
   1.2 Objectives ...............................................................................................................................10
   1.3 Scope ........................................................................................................................................10
   1.4 Proposed Approach ............................................................................................................11
   1.5 Layout of the project report ............................................................................................12
Chapter 2:                Literature review ................................................................................... 13
   2.1 Fundamentals of image analysis....................................................................................13
     2.1.1 Analogue Image definition: ..................................................................................... 13
     2.1.2 Digital Image definition:.......................................................................................... 13
     2.1.3 Binary Image Definition .......................................................................................... 14
     2.1.4 Sampling ........................................................................................................................................ 14
     2.1.5 Quantisation................................................................................................................................. 15
     2.1.6 Grey level Histogram ................................................................................................................ 15
   2.2 Image analysis Methodology ...........................................................................................16
      2.2.1 Image Segmentation ................................................................................................. 16
      2.2.2 Point-dependent thresholding ................................................................................. 17
      2.2.3 Neighbourhood-dependent method ....................................................................... 18
      2.2.4 Edge Detection........................................................................................................... 18
      2.2.5 Morphological operations ........................................................................................ 20
      2.2.6 Representation of objects ........................................................................................ 20
Chapter 3 Software and Hardware Required ........................................................ 21
   3.1 Software ..................................................................................................................................21
   3.2 Hardware ...............................................................................................................................22
Chapter 4 Program Design and Development ....................................................... 23
   4.1 Project Requirement ..........................................................................................................23
   4.2 Object Detection Program Overview............................................................................23
   4.3 Design overview ..................................................................................................................24
   4.4 Software Design Procedures ...........................................................................................25
     4.4.1 Initialisation of image .............................................................................................................. 25
     4.4.2 Image Conversion ...................................................................................................................... 26
     4.4.3 Edge Detectors ............................................................................................................................ 28
      4.4.4 Display of Data .......................................................................................................... 29
Chapter 5 Testing and Evaluation.............................................................................. 30
   5.1 Object Detection Program Test (Using Pre-recorded Video) ..............................30
   5.2 Object Detection Program Test (Using live acquisition Video) ..........................31
   5.3 Field Test of Final design ..................................................................................................32
     5.3.1 Object Test .................................................................................................................................... 32
     5.3.2 Object Test Evaluation ............................................................................................................. 34
     5.3.3 Lighting Test ................................................................................................................................ 34
     5.3.4 Lighting Test Evaluation ......................................................................................................... 36
Chapter 6 Recommendations and Conclusion ....................................................... 38


6.1 Summary ................................................................................................................................38
   6.2 Overall Conclusion ..............................................................................................................38
   6.3 Recommendation ................................................................................................................38
7. Review and Reflection ............................................................................................... 40
References:......................................................................................................................... 42
Appendix A – Gantt chart .............................................................................................. 43
Appendix B – Initialize image code ..............................................................................
Appendix C – Initial Design Simulink Code ............................................................. 44
Appendix D – Final Design Simulink Code .............................................................. 45




List of figures
Figure 1: Quad-Copter .............................................................................................................................. 7
Figure 2: UAV camera system ................................................................................................................. 9
Figure 3: Image analysis flowchart ........................................................................................................ 11
Figure 4: Digital Image Sample of a continuous image ......................................................................... 14
Figure 5: Grey-Level Histogram ............................................................................................................ 16
Figure 6: Image Segmentation using Grey-level Histogram .................................................................. 17
Figure 7: Adaptive Thresholding (non-uniform illumination)............................................................. 18
Figure 8: Edge Detection ........................................................................................................................ 20
Figure 9: MATLAB Environment .......................................................................................................... 21
Figure 10: MATLAB Simulink Design Environment ............................................................................ 22
Figure 11: Standard Webcam and Digital Camera ................................................................................. 22
Figure 12: Vision Control Motion System ............................................................................................. 23
Figure 13: Object Detection Program Workflow ................................................................................... 24
Figure 14: Image import to MATLAB ................................................................................................... 25
Figure 15: Displaying of Image in MATLAB ........................................................................................ 26
Figure 16: Binary Image Conversion ..................................................................................................... 27
Figure 17: Binary Histogram .................................................................................................................. 27
Figure 18: Grey scaled image and histogram ......................................................................................... 28
Figure 19: Edge Detector ........................................................................................................ 28
Figure 20: Corridor Video Snapshot ...................................................................................................... 30
Figure 21: Result Display (Pre-recorded video) ..................................................................................... 31
Figure 22: Live feed design output ......................................................................................................... 32
Figure 23: Bottle Test ............................................................................................................................. 33
Figure 24: Table and Chair test ..............................................................................................
Figure 25: Office environment ............................................................................................................... 35
Figure 26: Outdoor Environment ........................................................................................... 36
Figure 27: Outdoor Environment 2 ........................................................................................ 36




Chapter 1: Introduction
1.1 Background and Motivation

Unmanned aerial vehicles (UAVs) have been used by government armed forces and
in military applications throughout the world for many years now. UAVs served
mainly as surveillance and security tools for these agencies across the world.
However, UAVs have grown in popularity among hobby enthusiasts, private
industries and agencies in recent years. This growth occurred with the advancement
of microprocessor technology: the cost and size of these processors have greatly
reduced and their processing capabilities have increased dramatically. UAVs these
days are easier to fly because microprocessors help to control the stability and flight
of these vehicles. Programmable microprocessors coupled with other components
such as sensors enable UAVs to fly autonomously.


Today, UAVs come in different shapes and sizes, and conventional flight platforms
have been pushed aside. Aside from the traditional helicopter and fixed-wing
aeroplanes, multi-rotor vehicles have been gaining popularity. Some examples of
these unconventional vehicles are the tri-copter and the quad-copter. Unlike a
traditional helicopter, these vehicles use more than one rotary motor to function. An
example of a quad-copter can be seen in figure 1 below.




Figure 1: Quad-Copter

Quad-copters have attributes similar to those of a traditional helicopter, with
additional stability control. With four motors spinning in opposite directions, speed
and attitude control depend solely on the power distributed to the motors by the
microprocessor. Flight stability of the quad-copter has also increased with four
motors. Programmable microprocessors with an auto-stabilisation function help to
stabilise these vehicles in flight as well.


Currently, UAVs are readily available in the market and are fully customisable and
configurable to perform all sorts of different functions. These UAVs can be manually
controlled using a traditional radio transmitter or Bluetooth- and Wi-Fi-enabled
smartphones and tablet computers. The user can also configure them to perform
specified automated tasks by pre-programming the microprocessor and assigning
each task to a switch on the controller. The UAV will then carry out that automated
task when the switch is enabled.


In an outdoor environment, with the aid of global positioning system (GPS) modules,
UAVs can perform automated tasks such as flying a specific route defined by the
user or circling a perimeter the user specifies. With the aid of barometers and sonar
sensors, UAVs are able to maintain specific heights to hover and deploy. Thus, with
these sensors and GPS modules, UAVs are able to function fully autonomously in an
outdoor environment.


However, in an indoor environment the scenario is different, owing to the limitations
of GPS triangulation: GPS satellite signals are weak indoors. Thus, UAVs are not
able to function fully autonomously, as they cannot determine their exact location
relative to the indoor environment. To overcome this issue, UAVs have to be fitted
with additional sensors that help locate their position relative to the environment.
These additional sensors include a camera system that records real-time video of the
environment.


Developing an object detection program that detects key surfaces and reference
features of the environment will enable the UAV to locate its position relative to the


surroundings. Used in conjunction with a camera system, the results from the
program can be passed to the main controller module of the UAV, enabling
autonomous flight, and hence automated functions, in indoor environments. The
figure below shows an example of a UAV camera system.




                                   Figure 2: UAV camera system




1.2 Objectives

The objective of this project is to develop an object detection program for the camera
systems of UAVs or micro aerial vehicles. The program takes video input from the
camera and detects objects in the video, allowing the vehicle to fly autonomously
towards or away from obstacles. The information processed by this program enables
the UAV to detect objects or key surfaces in its surroundings.

1.3 Scope

This project consists of the following stages of development:
    •   Video recording phase (to simulate real-time video from the camera system
        on the micro aerial vehicle)
    •   Video rendering phase (splitting the video into still picture frames for
        analysis)
    •   Initial development of the program
    •   Correlating successive frames of the video, using programming software such
        as MATLAB, to find the differences between frames
    •   Testing and trial phase (videos containing different objects are used to test
        the robustness of the program)
    •   Implementation of corrections from the testing and trial data


    A Gantt chart of key milestones and major targets is attached in Appendix A.
    This chart shows the detailed breakdown of the project into its different phases
    and the time proposed to accomplish each phase. It helps to manage the overall
    progress of the project from the design phase to the final development. Keeping
    close attention to this chart will ensure the project is completed in a timely and
    organised manner.




1.4 Proposed Approach

The proposed method for tackling this project is in line with image and video
analysis techniques. There will be a few stages in the development of the object
detection program; the main component of the project is the detection of object
boundaries in an image or video. The flow chart in figure 3 below shows the
development process of a video or image analysis program, and this program will
follow that flow chart closely. Six major phases of the project have been identified
and will be explained in further detail in later sections of this report.




                                 Figure 3: Image analysis flowchart




1.5 Layout of the project report

Chapter 1:     Introduction and Background


Chapter 2:     Literature review on image analysing methods and concepts.


Chapter 3:     Hardware and Software Required


Chapter 4:     Program Design and Development


Chapter 5:     Testing and Evaluation


Chapter 6:     Recommendation and Conclusions


Chapter 7:     Review and Reflections




Chapter 2:           Literature review

2.1 Fundamentals of image analysis
Before development of the object detection program can take place, an understanding
of the fundamentals of digital images has to be established.
Image analysis and manipulation can be divided into three stages:
    •    Image processing (image in => image out)
    •    Image analysis (image in => measurement out)
    •    Image understanding (image in => high-level description out)
In this chapter, the concepts of image processing and image analysis shown above
are discussed.

2.1.1 Analogue Image definition:

An analogue image can be defined as a function of two variables, for example
A(x, y), where 'A' is the amplitude of the image at a real coordinate position (x, y).
The variable 'A' can represent any parameter of the image, such as brightness,
contrast or colour. Images are often considered to contain sub-images, referred to as
regions of interest (ROIs) or simply regions. This reflects the fact that images
frequently contain a collection of objects, each of which can be the basis for a region.
Specific image processing operations can be applied individually to these regions.
Thus, in an image, different regions can be processed differently and independently
from each other.
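The idea of cutting out a region so it can be processed on its own can be sketched in a few lines of Python (the report's own tooling is MATLAB; the function name here is illustrative):

```python
def region_of_interest(image, top, left, height, width):
    """Extract a rectangular region of interest (ROI) from an image
    stored as a list of rows, so it can be processed independently."""
    return [row[left:left + width] for row in image[top:top + height]]

# A 4 x 4 image; extract the 2 x 2 region starting at row 1, column 1.
img = [[0, 1, 2, 3],
       [4, 5, 6, 7],
       [8, 9, 10, 11],
       [12, 13, 14, 15]]
roi = region_of_interest(img, 1, 1, 2, 2)  # [[5, 6], [9, 10]]
```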



2.1.2 Digital Image definition:

A digital image can be defined as a numeric representation of a two-dimensional
image. Digital images are usually represented in binary format and can be derived in
a 2D space through a sampling process applied to an analogue image. This sampling
process is also referred to as digitisation. A 2D continuous image A(x, y) is divided
into N rows and M columns and has a finite set of digital values. The intersection of
a row and a column is termed a pixel. Pixels are the smallest individual elements of
an image, holding quantised values that represent the




brightness of a given colour at any specific point. An example of a digital image with
its rows and columns can be seen in figure 4 below.

                          Figure 4: Digital Image Sample of a continuous image
                          (a grid of rows and columns; each cell, or pixel, holds
                          the average brightness of its area)


As seen in figure 4 above, the image is divided into an equal number of rows and
columns; in this case there are 16 rows and 16 columns. Each cell in the image is the
intersection of a column and a row and, as mentioned earlier, is referred to as a pixel.
The value of each pixel is the average brightness within the pixel, rounded to the
nearest integer.
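As a sketch of this digitisation step (a hypothetical Python helper, not the report's MATLAB code), each pixel is the cell-averaged brightness of a continuous image A(x, y), rounded to the nearest integer:

```python
def sample_image(A, width, height, rows, cols, sub=4):
    """Digitise a continuous image A(x, y) on [0, width] x [0, height]
    into a rows x cols pixel grid. Each pixel is the average brightness
    of A over its cell (estimated from sub x sub sample points),
    rounded to the nearest integer, as described in the text."""
    pixels = []
    for r in range(rows):
        row = []
        for c in range(cols):
            total = 0.0
            for i in range(sub):
                for j in range(sub):
                    x = (c + (i + 0.5) / sub) * width / cols
                    y = (r + (j + 0.5) / sub) * height / rows
                    total += A(x, y)
            row.append(round(total / (sub * sub)))
        pixels.append(row)
    return pixels

# A horizontal brightness ramp as the "analogue" image, sampled at 16 x 16:
ramp = lambda x, y: 255.0 * x / 16.0
img = sample_image(ramp, 16, 16, 16, 16)
```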

2.1.3 Binary Image Definition

A binary image is an image that has two possible values for each pixel. Typically the
two colours shown in a binary image are white and black. Binary images are also
called bi-level images and have pixel values of 0 or 1. Binary images are typically
used for image analysis and comparisons.
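A binary image can be produced from a grey-scale one by comparing each pixel against a cut-off (a minimal Python sketch; the cut-off of 128 is an arbitrary choice):

```python
# Convert a grey-scale image (values 0-255) to a bi-level image of 0s and 1s.
grey = [[12, 200, 130],
        [90, 255, 40]]
binary = [[1 if v >= 128 else 0 for v in row] for row in grey]
# binary == [[0, 1, 1], [0, 1, 0]]
```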

2.1.4 Sampling

Before an analogue image can be processed, it has to be digitised into a two-
dimensional digital image. This process of digitising an analogue image is called
sampling: the image is represented by measurements taken at regularly spaced
intervals. Two important criteria for the sampling process are:
•   Sampling interval
        o The distance between sample points (pixels)
•   Tessellation


        o The pattern of the sampling points
The resolution of an image can be expressed as the number of pixels present in the
image. If the number of pixels in an image is too small, individual pixels become
visible and other undesired effects, such as aliasing and noise, appear. Image noise is
a random variation of brightness and colour information in digital images and is
usually a form of electrical noise. Noise can be generated by the circuitry of digital
cameras and is usually undesirable during image processing, as it adds spurious
information to the image. To reduce noise, the image has to be smoothed, typically
with Gaussian smoothing.
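Gaussian smoothing replaces each pixel with a weighted average of its neighbourhood. A minimal Python sketch using the common 3 x 3 kernel (the kernel and the border handling are illustrative choices, not the report's implementation):

```python
# 3 x 3 Gaussian-like kernel with weights summing to 16.
KERNEL = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]

def gaussian_smooth(image):
    """Smooth the interior of an image (list of rows) with the 3 x 3
    kernel above; border pixels are copied unchanged for simplicity."""
    rows, cols = len(image), len(image[0])
    out = [list(row) for row in image]  # start from a copy
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            acc = 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    acc += KERNEL[dr + 1][dc + 1] * image[r + dr][c + dc]
            out[r][c] = acc // 16
    return out

# A noise spike in an otherwise flat image is averaged down:
flat = [[10] * 5 for _ in range(5)]
flat[2][2] = 90  # noise
smooth = gaussian_smooth(flat)
```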

2.1.5 Quantisation

Quantisation is a process that compresses a range of values into a single quantum
value. It is mainly used to reduce the number of colours required to display a digital
image, thus enabling a reduction in the file size of the image. In this process, an
analogue-to-digital converter is required to transform brightness values into a range
of integers. The maximum number M in that range (i.e. 0 to M) is limited by the
analogue-to-digital converter and the computer, and is determined by the number of
bits used to represent the value of each pixel. This in turn determines the number of
grey levels in the image. The process of representing the amplitude of the 2D signal
at a given coordinate as an integer value with L different grey levels is referred to as
amplitude quantisation. If too few bits are used, visible steps appear between grey
levels.
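Amplitude quantisation with b bits maps a brightness in [0, v_max] onto 2^b integer grey levels, with maximum grey level M = 2^b - 1. A Python sketch under that reading of the text:

```python
def quantise(value, v_max, bits):
    """Map an analogue brightness in [0, v_max] onto one of 2**bits
    integer grey levels (amplitude quantisation). The maximum grey
    level is M = 2**bits - 1."""
    levels = 2 ** bits                              # number of grey levels
    return int(value / v_max * (levels - 1) + 0.5)  # round to nearest

# With 8 bits there are 256 grey levels, 0..255:
quantise(0.0, 1.0, 8)   # -> 0
quantise(1.0, 1.0, 8)   # -> 255
quantise(0.5, 1.0, 8)   # -> 128
```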

2.1.6 Grey level Histogram

Grey-level histograms are one of the simplest yet most effective tools used to analyse
images: a grey-level histogram can reveal faulty settings in an image digitiser, a
check that is almost impossible to perform without digital hardware. Below is an
example of a typical grey-level histogram of an image.




Figure 5: Grey-Level Histogram

A grey-level histogram is a function showing, for each grey level, the number of
pixels in the image that have that grey level. In this figure, the grey-scale image is
summarised as a grey-level histogram.
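Computing a grey-level histogram amounts to counting pixels per grey level (a minimal Python sketch, not the report's MATLAB code):

```python
def grey_histogram(image, levels=256):
    """Count how many pixels in the image have each grey level."""
    hist = [0] * levels
    for row in image:
        for v in row:
            hist[v] += 1
    return hist

img = [[0, 0, 255],
       [128, 0, 255]]
hist = grey_histogram(img)
# hist[0] == 3, hist[128] == 1, hist[255] == 2; counts sum to the pixel count
```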

2.2 Image analysis Methodology

Image analysis comprises different operations and functions. Three main functions
are commonly used in image analysis:
•   Segmentation
    1. Thresholding
    2. Edge detection
•   Morphological operations
•   Representation of objects

2.2.1 Image Segmentation

Image segmentation is the operation of distinguishing important objects from the
background (or from unimportant objects). To distinguish simple objects from the
background and determine their areas, grey-level histograms can be used. Figure 6
below shows an example of image segmentation using a grey-level histogram.




Figure 6: Image Segmentation using Grey-level Histogram

Image segmentation separates the dark object from the bright background, as shown
in figure 6 above.
There are two common methods of segmentation:
    •   Point-dependent thresholding method
            o Thresholding (semi-thresholding)
            o Adaptive thresholding
    •   Neighbourhood-dependent method
            o Edge detection
            o Boundary tracking
            o Template matching

2.2.2 Point-dependent thresholding

The point-dependent method thresholds each pixel based only on its own grey level,
locating groups of pixels with similar properties. Thresholding assigns the value 0 to
pixels whose grey level is less than the threshold and the value 255 to pixels whose
grey level is higher than the threshold. This segments the image into two regions, one
corresponding to the background and the other to the object. Thresholding is
therefore used to discriminate the background from an object in an image. This is
straightforward if the image has a bimodal grey-level histogram. For more complex
images, this method is inaccurate and a different method of thresholding is required.
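The 0/255 assignment described above can be sketched as a short Python function over an image stored as a list of rows (an illustration, not the report's MATLAB code; the equal-to-threshold case is treated as object here):

```python
def threshold(image, t):
    """Global thresholding: pixels with grey level below t become 0
    (background); pixels at or above t become 255 (object)."""
    return [[0 if v < t else 255 for v in row] for row in image]

img = [[10, 200],
       [90, 140]]
threshold(img, 128)  # -> [[0, 255], [0, 255]]
```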
Adaptive thresholding is a more complex process for discriminating objects from
backgrounds. In practice, images rarely have bimodal grey-level histograms, so
point-dependent thresholding is insufficient. This


is due to random noise captured in the image, varying illumination when the image
was taken, and objects of different sizes and shapes in the image.
Adaptive thresholding is very useful for images with non-uniform illumination.
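A simple adaptive scheme thresholds each pixel against the mean of its local window rather than one global value. A hedged Python sketch (the window size k and offset c are illustrative choices, not the report's parameters):

```python
def adaptive_threshold(image, k=1, c=0):
    """Threshold each pixel against the mean grey level of its local
    (2k+1) x (2k+1) window, clipped at the image borders. A pixel
    becomes 255 (object) if it exceeds the local mean minus the
    offset c, so the threshold adapts to local illumination."""
    rows, cols = len(image), len(image[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for col in range(cols):
            window = [image[rr][cc]
                      for rr in range(max(0, r - k), min(rows, r + k + 1))
                      for cc in range(max(0, col - k), min(cols, col + k + 1))]
            local_mean = sum(window) / len(window)
            out[r][col] = 255 if image[r][col] > local_mean - c else 0
    return out
```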




                     Figure 7: Adaptive Thresholding (non-uniform illumination)


2.3 Neighbourhood-dependent method

The neighbourhood-dependent method covers the three types of operations mentioned
earlier in this chapter: edge detection, boundary tracking and template matching. A
neighbourhood operation generates each "output" pixel on the basis of the pixel at
the corresponding position in the input image and of its neighbouring pixels. The
size of the neighbourhood may vary; however, many techniques use 3 x 3 or 5 x 5
neighbourhoods centred at the input pixel. Neighbourhood operations play a key role
in modern digital image processing.
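The neighbourhood operation described above can be sketched as follows (Python for illustration; a 3 x 3 mean filter serves as the example operation, with the neighbourhood clipped at the image border):

```python
def mean_filter_3x3(image):
    """A neighbourhood-dependent operation: each output pixel is computed
    from the input pixel and its 3 x 3 neighbourhood, here as the integer
    mean of the neighbourhood values."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) // len(vals)
    return out

# A single bright noise pixel is spread out and strongly attenuated.
noisy = [[0, 0, 0],
         [0, 9, 0],
         [0, 0, 0]]
print(mean_filter_3x3(noisy))
```

The same loop structure, with a different rule inside it, underlies the edge detection and morphological operations discussed below.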

2.4 Edge Detection


Edge detection is an image processing tool aimed at identifying points in a digital
image at which the image brightness changes sharply or has discontinuities.
Identifying sharp changes in brightness captures important events and changes in
the properties of the image. Discontinuities in image brightness are likely to
correspond to:
    •   Discontinuities in depth


    •   Discontinuities in surface orientation
    •   Change in material properties
    •   Variation in scene illumination


The result of edge detection applied to an image may be a set of connected curves
that indicate the boundaries of objects, the boundaries of surfaces, and curves that
correspond to changes in surface orientation.
Several edge detection methods are commonly used for image analysis. They can be
classified into the following categories:


    1. Search based (discrete approximation of the gradient, followed by
        thresholding of the gradient-norm image – first derivative)
    2. Zero-crossing based (second derivative)
    3. Band-pass filtering
    4. Compass operators


Search-based methods detect edges by first computing a measure of edge strength,
usually a first-derivative expression such as the gradient magnitude, and then
searching for local directional maxima of the gradient magnitude using a computed
estimate of the local orientation of the edge, usually the gradient direction.
Zero-crossing methods, on the other hand, search for zero crossings in a
second-order derivative expression computed from the image, usually the zero
crossings of a non-linear differential expression.
A vital pre-processing step for edge detection is the noise reduction mentioned in
section 2.1.3, commonly performed with a Gaussian smoothing operation.
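As a sketch of the search-based (first-derivative) approach, the following applies the "Prewitt" kernels, the same operator used later in this project, to estimate the gradient magnitude (Python for illustration; MATLAB's edge detectors perform this internally):

```python
# Prewitt kernels: horizontal and vertical first-derivative estimates.
PX = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
PY = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

def prewitt_magnitude(image):
    """Step one of a search-based detector: estimate the gradient
    magnitude at each interior pixel.  Thresholding this magnitude
    (step two) then marks the edge pixels."""
    h, w = len(image), len(image[0])
    mag = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(PX[j][i] * image[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            gy = sum(PY[j][i] * image[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            mag[y][x] = (gx * gx + gy * gy) ** 0.5
    return mag

# A vertical step edge: the gradient magnitude peaks along the step.
step = [[0, 0, 90, 90]] * 4
print(prewitt_magnitude(step))
```

The magnitude is large exactly where the brightness changes from one column to the next and zero in the flat regions.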


Another important operation during edge detection is edge thinning, a technique that
removes unwanted spurious points from the edges in an image. It is usually employed
after the image has been filtered for noise, an edge detector has been applied to
detect the edges of objects in the image, and the result has been thresholded with an
appropriate value.
An example of edge detection applied to an image can be seen in figure 8 below.



Figure 8: Edge Detection


2.5 Morphological operations


Morphology is a broad set of image processing operations that process an image
based on shapes. A morphological operation applies a structuring element to an input
image, creating an output image of the same size in which the value of each pixel is
based on a comparison of the corresponding input pixel with its neighbours. The two
basic morphological operations are dilation and erosion: dilation adds pixels to the
boundaries of objects in an image, while erosion removes pixels from object
boundaries.
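Dilation and erosion can be sketched for binary images with a 3 x 3 square structuring element (a Python illustration, not MATLAB's optimised implementations):

```python
def _morph(image, combine):
    """Apply a 3 x 3 square structuring element at every pixel and reduce
    the neighbourhood with `combine` (any -> dilation, all -> erosion)."""
    h, w = len(image), len(image[0])
    return [[1 if combine(image[j][i]
                          for j in range(max(0, y - 1), min(h, y + 2))
                          for i in range(max(0, x - 1), min(w, x + 2))) else 0
             for x in range(w)] for y in range(h)]

def dilate(image):
    """Dilation: a pixel becomes 1 if ANY neighbourhood pixel is 1,
    growing object boundaries."""
    return _morph(image, any)

def erode(image):
    """Erosion: a pixel stays 1 only if ALL neighbourhood pixels are 1,
    shrinking object boundaries."""
    return _morph(image, all)

# A 3 x 3 square of object pixels: erosion leaves only its centre,
# while dilation grows it to fill the whole 5 x 5 image.
square = [[0, 0, 0, 0, 0],
          [0, 1, 1, 1, 0],
          [0, 1, 1, 1, 0],
          [0, 1, 1, 1, 0],
          [0, 0, 0, 0, 0]]
```

Running erosion followed by dilation (an "opening") on this kind of data is a common way to remove small noise specks while preserving larger objects.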

2.6 Representation of objects

Once thresholding and edge detectors have been applied to an image, it is necessary
to store information about each object for use in later processing, so each object
must be assigned its own identifier. There are three ways of doing so:


    •   Object membership map (the value of each pixel encodes the sequence
        number of the object it belongs to)
    •   Line segment coding (objects are represented as collections of chords
        oriented parallel to the image lines)
    •   Boundary chain code (defines only the position of the object boundary)
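As an illustration of the third option, a boundary chain code stores only a starting point plus one direction code per step along the boundary, rather than every pixel position (a Python sketch using the common Freeman 8-direction convention):

```python
# Freeman 8-direction codes for steps between neighbouring pixels,
# keyed by (row change, column change): 0 = east, 2 = north, 4 = west, 6 = south.
DIRECTIONS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
              (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(boundary):
    """Encode an ordered list of (row, col) boundary pixels as the
    sequence of direction codes between consecutive pixels."""
    return [DIRECTIONS[(y2 - y1, x2 - x1)]
            for (y1, x1), (y2, x2) in zip(boundary, boundary[1:])]

# A tiny 2 x 2 object traced clockwise back to its start.
print(chain_code([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]))  # → [0, 6, 4, 2]
```

Four codes describe the whole closed boundary, which is far more compact than a full membership map for large objects.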




Chapter 3 Software and Hardware Required
3.1 Software

One of the requirements stated by the university is to use the proposed software,
MATLAB. MATLAB stands for MATrix LABoratory and is developed by MathWorks
Inc. (www.mathworks.com). MATLAB is a high-level interactive language for
numerical computation, visualisation and programming, with capabilities to analyse
data, develop algorithms and create models and applications. Its language and
object-oriented programming concepts are similar to those of C++. The figure below
shows the MATLAB programming environment.




                                 Figure 9: MATLAB Environment


MATLAB incorporates built-in maths functions, language features and tools that are
faster to work with than traditional languages such as C++ and Java. MATLAB offers
various toolboxes that give the user different sets of tools for different applications;
for this project, we will be using the image acquisition, image analysis and video
analysis toolboxes. Other examples include the aerospace toolbox and the control
system toolbox. MATLAB also provides a graphical interface for creating
algorithms, called Simulink, which lets the user build programs from pre-defined
block sets. Figure 10 below shows the Simulink design environment.




Figure 10: MATLAB Simulink Design Environment

With a library of block sets, the user can create different types of programs by
linking these blocks together.

3.2 Hardware

To develop the object detection program, image-capturing devices are required to
simulate the camera system of a UAV or micro aerial vehicle.
Currently, two methods of image capture are available commercially:
    •   Digital cameras
    •   Analogue cameras
Digital cameras are available in various resolutions and interface directly with the
computer. Analogue cameras, on the other hand, require an additional frame-grabber
or TV tuner card to interface with the computer. Digital cameras give high-quality,
low-noise images, unlike analogue cameras, which have lower sensitivity and
produce lower-quality images.
We will therefore use a standard digital camera and a webcam. A webcam is a video
camera that feeds images in real time to a computer via USB, the Internet or Wi-Fi.
A standard webcam and digital camera can be seen in the figure below.




                              Figure 11: Standard Webcam and Digital Camera



Chapter 4 Program Design and Development
4.1 Project Requirement

To develop an object detection program for a UAV or micro aerial vehicle, we need
to review the purpose and function of the program.


Requirements
    •   Grab and capture images and video from an image-recording device
    •   Correlate frames of captured images (histogram)
    •   Detect object boundaries, surfaces and textures


Based on the literature review of image analysis techniques, we can create a
program on these methodologies and concepts to fulfil the requirements stated above.

4.2 Object Detection Program Overview

This section describes the design overview of the object detection program, which
forms part of a Vision-Controlled Motion (VCM) system. VCM is used in robotics
and, in our case, in an unmanned aerial vehicle (UAV) or micro aerial vehicle. The
VCM system is shown graphically in the figure below.




Figure 12: Vision Control Motion System




VCM is a very important concept, and our object detection program is the image
analysis portion of the VCM system shown in figure 12. The program analyses
images and videos acquired from an image-capturing device, which gives the UAV
the ability to sense its surroundings. User-defined inputs will then allow the UAV to
avoid obstacles and
identify key surfaces and track user-defined objects autonomously. In an indoor
environment, the ability to sense the surroundings is the key to autonomous flight.

4.3 Design overview

The primary design objective of the program is to read and acquire images and
videos from the image-capturing device attached to the microprocessor. The program
then needs to filter and convert these images and videos into usable data for
analysis. Once the data has been converted, analysis operators can be applied and
the results displayed. A detailed workflow of the program can be found in the figure
below.




Figure 13: Object Detection Program Workflow




4.4 Software Design Procedures

The object detection program consists of several critical processes and procedures.
The list below shows the steps involved in creating the program:
    1. Initialise video data to program
    2. Convert video and image data for processing
    3. Apply edge detectors
    4. Display output data
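The four steps can be sketched as a pipeline (a Python illustration of the data flow only; in the actual project these stages are Simulink blocks, and every function name here is hypothetical):

```python
def initialise(source):
    """Step 1: read the captured image data (a nested list stands in
    for a file or camera frame here)."""
    return source

def to_binary(image, t=128):
    """Step 2: convert the grey-scale data to a binary image."""
    return [[1 if p >= t else 0 for p in row] for row in image]

def detect_edges(image):
    """Step 3: mark pixels whose right or lower neighbour differs
    (a minimal stand-in for a real edge detector)."""
    h, w = len(image), len(image[0])
    return [[1 if ((x + 1 < w and image[y][x] != image[y][x + 1]) or
                   (y + 1 < h and image[y][x] != image[y + 1][x])) else 0
             for x in range(w)] for y in range(h)]

def display(image):
    """Step 4: output the result (printed here; video displays in the
    real program)."""
    for row in image:
        print(''.join('#' if v else '.' for v in row))

frame = [[10, 10, 200, 200],
         [10, 10, 200, 200],
         [10, 10, 200, 200]]
display(detect_edges(to_binary(initialise(frame))))
```

Each stage feeds the next, mirroring the left-to-right flow of blocks in the Simulink design.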


As described above, there are four critical steps in our program. The initial step is to
initialise an image or video that was captured with a digital camera and uploaded to
the computer. Once initialised, conversions to binary and grey-scale images are
carried out and an edge detector is applied to the converted images. Lastly, the
images are displayed with the detected edges. Detailed procedures for each process
are described in the following sections. For this project, we will be using the
Simulink function of MATLAB mentioned in the previous chapter of this report.

4.4.1 Initialisation of image


The initial step of our object detection program is to sample images captured by the
webcam. This step involves initialising the image or video in the MATLAB software
so that pre-recorded videos or images can be read. Below, an image is imported into
MATLAB.




Figure 14: Image import to MATLAB


In the figure above, an image was imported into MATLAB and its description is
shown, including the size, bit depth and class of the image. Once the image has been
imported, we can show a preview of it in MATLAB. Figure 15 below shows a
preview of the imported image.




                             Figure 15: Displaying of Image in MATLAB


By importing the image, we can then apply the different analysis techniques
discussed in chapter 2 to it.

4.4.2 Image Conversion

Once the image has been imported, we can convert it to a binary image and a
grey-scale image. This process is a vital step in image analysis, as discussed in
chapter 2. Figure 16 below shows the image from figure 15 converted to a binary
image. In converting to a binary image, each pixel is given a new value of 0 or 1:
the black areas in the picture represent pixels with value 0, while the white areas
represent pixels with value 1.




Figure 16: Binary Image Conversion


Next, we can display the histogram of the binary image. The figure below shows the
histogram of the binary image above, i.e. the number of pixels that have the value
0 or 1.




                                   Figure 17: Binary Histogram

As explained in chapter 2, binary images have pixel values of either 0 or 1, as can be
seen in the histogram in figure 17. The next conversion step is to create a grey-scale
image, which replaces the colour pixels of the image with grey shades from white to
black. Once the image has been converted, we take the grey-level histogram of the
image, as shown in figure 18 below.




Figure 18: Grey-scale Image and Histogram
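The grey-scale conversion and grey-level histogram above can be sketched as follows (Python for illustration; the luminance weights shown are the standard ones that MATLAB's rgb2gray also uses):

```python
def to_greyscale(rgb_image):
    """Replace each (R, G, B) pixel with one grey level using the
    standard luminance weighting 0.299 R + 0.587 G + 0.114 B."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b)
             for (r, g, b) in row] for row in rgb_image]

def grey_histogram(image, levels=256):
    """Count how many pixels fall at each grey level, as used for the
    grey-level histograms shown in this chapter."""
    counts = [0] * levels
    for row in image:
        for p in row:
            counts[p] += 1
    return counts

# A pure red pixel and a white pixel map to distinct grey levels.
grey = to_greyscale([[(255, 0, 0), (255, 255, 255)]])
hist = grey_histogram(grey)
```

The histogram is simply a per-level pixel count, so peaks in it correspond to large regions of similar brightness in the image.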

4.4.3 Edge Detectors

Applying edge detectors is the next important step of our object detection program.
The following figure shows the above image after edge segmentation, which creates
boundaries around the objects in the image. This is an important component of our
program, as object boundaries and surfaces are critical information for a micro
aerial vehicle.




                                     Figure 19: Edge Detector




By applying an edge detector to the image, we can see the object boundaries in the
image; the white lines in figure 19 show these boundaries. In the figure above, a
"Sobel" edge detector was applied to a binary image. Edge detectors can only be
applied to grey-scale or binary images, which is why the conversion step is so
important.

4.4.4 Display of Data

The final step of the object detection program is to display the output image as
usable data for the UAV. The image can be written as a binary file or as a set of
codes for the microprocessor, which can then process the data and instruct the UAV
to fly towards or avoid structures in the environment. For this project, however, we
will display the output using the software's video display screen, and we will also
use statistical methods such as histograms to view the data.




Chapter 5 Testing and Evaluation
5.1 Object Detection Program Test (Using Pre-recorded Video)

The initial program design takes the first approach in the design flowchart of the
overall object detection program: it uses a pre-recorded video that was captured by a
digital camera and then uploaded to the computer. A video of an indoor environment
was taken with a digital camera; to simulate structures such as pillars and ceilings,
we used a video of a typical corridor. Some images from this corridor can be seen in
the figure below.




                                Figure 20: Corridor Video Snapshot



The pre-recorded video is then initialised in the program and converted into usable
images for analysis. Once this is done, an edge detector is applied to the images. For
this design, a "Prewitt" edge detector is used. It calculates the gradient of the image
intensity at each point, giving the possible increase from light to



dark and the rate of change in each direction. The output of this video is displayed
using video displays and a colour histogram. The figure below shows a snapshot of
this program design and its output.




Figure 21: Result Display (Pre-recorded video)




5.2 Object Detection Program Test (Using live acquisition Video)

The second design of the object detection program uses a live video feed instead of
pre-recorded video. This approach uses a standard webcam connected to the
computer via USB, and processes and converts live images instantly. As mentioned
in the flowchart of the design overview, this is the second method of image
acquisition. The process takes place in real time: any objects in the video are
instantaneously analysed and an output image is displayed. Real-time processing
analyses the images at the same rate as they occur. In the context of a UAV or micro
aerial vehicle, real-time processing is required to process images captured by the
UAV's camera system; with it, the on-board microprocessor can analyse data from
the object detection program while the UAV is in flight, enabling autonomous
functionality. This will therefore be the final design of the object detection program.
The first step in this design is to initialise the webcam in the program. Once that is
done, the webcam grabs frames at 30 frames per second. These images will then be
converted and



an edge detector will be applied to these images. The output data will be displayed
as images and a colour histogram. The "Prewitt" edge detector mentioned in the first
design is also applied here. The figure below shows a snapshot of this design's
output.




Figure 22: Live feed design output



From the output above, real-time images were processed and the output was
displayed in the form of video displays. These displays include the original image,
the binary image, the grey-scale image, the edge image and, lastly, an overlay of the
edges on the grey-scale image. A colour histogram of the original image is also
shown.
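The per-frame processing described in this section can be sketched as a loop (a Python illustration; `frames` stands in for the live webcam feed, and the simple foreground counter is a hypothetical stand-in for the real analysis stage):

```python
def process_stream(frames, analyse):
    """Apply the detection pipeline to each frame as it arrives.  At 30
    frames per second the pipeline has roughly 33 ms per frame; here
    `frames` is a list standing in for the live webcam feed."""
    results = []
    for frame in frames:
        # Convert the grey-scale frame to binary, then analyse it.
        binary = [[1 if p >= 128 else 0 for p in row] for row in frame]
        results.append(analyse(binary))
    return results

def count_foreground(binary):
    """A hypothetical stand-in for the real analysis stage: count the
    foreground pixels in each processed frame."""
    return sum(v for row in binary for v in row)

feed = [[[200, 10], [10, 10]],
        [[200, 200], [10, 10]]]
print(process_stream(feed, count_foreground))  # → [1, 2]
```

The key constraint of real-time operation is that the body of the loop must finish before the next frame arrives.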

5.3 Field Test of Final design

To verify the final design of the object detection program, a field test was carried
out. Using different objects, the output image of the program was recorded and
analysed to assess the robustness of the program's edge detector. To test the design
further, the test was repeated under different lighting conditions, an important
factor for indoor environments.

5.3.1 Object Test

During this test, different objects were used to simulate common objects found in an
indoor environment. The first object used was a common drinking bottle found in an


indoor environment. The figure below shows the output for the bottle after being
processed by the object detection program; a histogram of the colours in the video
is also displayed.




Figure 23: Bottle Test

Next, tables and chairs were used to simulate an indoor environment. As in the
previous test, the tables and chairs were placed in front of the camera and the output
data was shown in the video display. The figure below shows the output data for the
chair and table after processing.




                                Figure 24: Indoor Table and chairs



5.3.2 Object Test Evaluation

The object test satisfies our requirement that the object detection program detect
object boundaries and edges. From the tests carried out, we can see in figure 23 (the
bottle test) that the full shape of the bottle was detected and its boundary outlined.
The colour histogram also shows an increase in the R channel, confirming that the
bottle, which is red, is being detected. Using the colour histogram, we can also
gauge the distance of the object from the camera: if the amplitude of the colours
increases and the colour distribution widens, the object is near the camera. This
result is very satisfactory for the project. From figure 24, it is evident that the
surfaces and boundaries of the table and chairs were detected and outlined in the
output display. In the binary image display, the walls of the indoor environment
were given a value of 1 and displayed as white, while the rest of the environment
was given a value of 0 and displayed as black. The boundaries of the table and chair
were detected and displayed in the edges display. This showed that the program
performed its desired function; however, the program did not detect the shadow on
the wall.

5.3.3 Lighting Test


The lighting test is the next step in verifying the functionality of the object detection
program. For this test, we simulated an indoor environment under two distinct
lighting conditions, low lighting and bright lighting. This simulates a typical office
environment in which the UAV or micro aerial vehicle may be deployed for
surveillance and monitoring. The figure below shows the output data of the program
in a normal office environment.




Figure 25: Office environment

The last test was carried out in a brightly lit environment, simulating outdoor
conditions. This test captures the output data of the object detection program as it
would be if coupled with a UAV or micro aerial vehicle that has outdoor
autonomous functionality. Outdoor functionality usually relies on GPS
triangulation; however, if the program is able to detect surfaces in the environment,
it can also be used as an obstacle avoidance system for the UAV or micro aerial
vehicle. The figures below show the output data of the object detection program in
an outdoor environment.




Figure 26: Outdoor Environment




Figure 27: Outdoor Environment 2

5.3.4 Lighting Test Evaluation

The lighting test shows that the object detection program fulfils the requirements of
our project: the program is able to detect the surfaces and boundaries of objects in
the image. The downside, however, is that the program is unable to fully detect the
boundaries of objects at low illumination levels. From the output display in figure
25 above, it is observed that key structures of the environment were detected. Wall
boundaries and picture frames on the walls were



detected. However, the boundary between the door and the wall was not detected,
owing to a shadow cast on the door. This shows that in low-light environments the
object detection is not robust enough to detect object boundaries: low-light objects
were not detected and their edges were not shown in the output display. From the
results in figures 26 and 27 of the outdoor environment test, surfaces and object
boundaries were detected; however, as in the indoor environment, objects at low
light levels were not. The outdoor results show that this program can be used in an
outdoor application as an obstacle avoidance system.




Chapter 6 Recommendations and Conclusion

6.1 Summary

After all the tests and evaluations carried out, it has been determined that the object
detection program is able to detect object boundaries both in pre-recorded video and
in live video captured by a webcam. The initial design of the program detected
object boundaries in a pre-recorded video taken with a digital camera; the second
and final design detected object boundaries in real time via a USB webcam attached
to the computer. In summary, this project is a success, in that both designs of the
program are able to detect object boundaries.


6.2 Overall Conclusion

Overall, the object detection program satisfies the requirements of our project stated
in section 4.1 of this report. The program is able to initialise images and videos,
whether pre-recorded or real-time; convert them into grey-scale and binary images;
and, finally, detect object surfaces and boundaries using image analysis techniques
such as segmentation and edge detection.


6.3 Recommendation

From the data collected in our tests and evaluations, there are several areas in which
this object detection program can be improved to make the project more successful.


Firstly, the object detection program does not have a graphical user interface (GUI)
for the user to select which outputs to display. The program currently displays all
outputs simultaneously when it starts, and to certain users this




information may not be critical. I would recommend future work on a GUI for easier
operation of the program.


Another area that can be improved is the base algorithm of the code. From our test
results, the program is able to detect objects with sufficient light levels; however, if
an object is poorly lit, it is not detected and edge operators cannot be applied to it.
Further work on light analysis in video images should be done to improve the
robustness of the program, enabling it to detect objects at different light levels
appropriately. Object tracking could also be added and would be useful for the
program: with object tracking, the user could specify pre-defined objects for the
UAV to track.


Lastly, the program should be implemented on an actual UAV or micro aerial
vehicle. Our program is just one part of a vision-controlled motion (VCM) system,
and to see its true potential it should be implemented and tested on a UAV. As our
program was simulated on a computer, the outputs were video images; in an actual
UAV implementation the output will be different, as it will need to be analysed by
the microprocessor controlling the UAV. Getting the microprocessor to analyse data
from this program, and getting the UAV to react to the results, will be challenging.


Although we are now at the end of the project, testing of the program will continue
until the Poster Presentation day, and some improvements will be implemented and
tested. If another student takes up this project, we hope that the improvements and
suggestions presented in this report will be taken into consideration.




7. Review and Reflection
This project has been an interesting and exciting learning adventure for me.
Everything from the initial stages of research to the actual implementation of image
analysis concepts in the program was a challenge.


At the beginning of the project, I spent countless hours researching image analysis
techniques and concepts. Understanding how digital images are processed in various
computer systems was very enriching. As I had no prior experience in digital image
processing, I had to be clear on how digital images are processed and what kinds of
operators were needed to fulfil the requirements of the project.


Once that was done, I faced another daunting task: writing the program code and
developing the program. As I am not strong in programming, this task proved to be
a nightmare. I referred to tutorial books and attended additional lessons on
MATLAB, researched MATLAB's full capabilities, and discovered the different
toolboxes and functions built into it. With that, I decided to take full advantage of
the Simulink function in MATLAB, which uses block diagrams to create a program;
as I am more of a visual person, programming with Simulink enabled me to develop
the program for this project. When the program was done, analysing its output data
was a challenge. Fortunately, with all the research done on image analysis concepts,
I was able to point out the limitations of the program.


Throughout this entire project, time management was the toughest part. As a
part-time student, juggling work and family commitments was very challenging.
Time management was a crucial skill that I developed during the course of this
project; prioritising important events in the project and having the self-discipline to
accomplish these targets was a valuable lesson learnt.


In the course of this project, many milestones were reached through the use of the
Gantt chart. Different goals were set to keep the project on track, and difficulties
were solved along the way. The problems faced along the way strengthened


my analytical and problem-solving skills. One major problem-solving skill learnt
during the course of this project was contingency planning: one significant event
during the project was a computer crash, and I had to rectify the problem quickly
and find alternative plans to continue with the project.


This journey proved to be a very enriching experience for me, and despite the many
challenges and obstacles along the way, I am very glad that I took up this project. I
had to pick up the concepts of image analysis very quickly, as I had no knowledge
of them in the past. At every stage of the project there was something new to learn,
and every day I made new discoveries related to it. Overall, this journey has been a
programming adventure for me.








Appendix A – Gantt chart




Appendix B – Initial Design Simulink Code




Appendix C – Final Design Simulink Code
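The Simulink model itself appears only as a figure in the original report and did not survive text extraction. As an illustrative sketch only, and not the author's actual model, the pipeline the report describes in Chapter 4 (frame acquisition, greyscale conversion, edge detection, display of data) could be expressed in plain MATLAB as follows; the video file name is hypothetical:

```matlab
% Illustrative sketch only -- not the author's actual Simulink design.
% Mirrors the pipeline described in Chapter 4: acquire each frame,
% convert it to greyscale, apply an edge detector, and display the result.
vid = VideoReader('corridor.avi');   % hypothetical pre-recorded test video
while hasFrame(vid)
    frame = readFrame(vid);          % acquire the next RGB frame
    grey  = rgb2gray(frame);         % image conversion (Section 4.4.2)
    edges = edge(grey, 'sobel');     % Sobel edge detector (Section 4.4.3)
    imshow(edges);                   % display of data (Section 4.4.4)
    drawnow;
end
```

In the Simulink design, the same stages would map onto blocks from the Computer Vision System Toolbox, e.g. a video source block, a colour space conversion block, and an edge detection block wired in sequence.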





More Related Content

What's hot (7)

AIAA_StudentUndergraduateTeam_SilverBird
AIAA_StudentUndergraduateTeam_SilverBirdAIAA_StudentUndergraduateTeam_SilverBird
AIAA_StudentUndergraduateTeam_SilverBird
 
Rotorcraft flying handbook
Rotorcraft flying handbookRotorcraft flying handbook
Rotorcraft flying handbook
 
Development of a mechanical maintenance training simulator in OpenSimulator f...
Development of a mechanical maintenance training simulator in OpenSimulator f...Development of a mechanical maintenance training simulator in OpenSimulator f...
Development of a mechanical maintenance training simulator in OpenSimulator f...
 
ALMAULY_WALEED_LINKREPORT
ALMAULY_WALEED_LINKREPORTALMAULY_WALEED_LINKREPORT
ALMAULY_WALEED_LINKREPORT
 
Faa h-8083-1
Faa h-8083-1Faa h-8083-1
Faa h-8083-1
 
Scorecard - A Case study of the Joint Strike Fighter Program
Scorecard - A Case study of the Joint Strike Fighter ProgramScorecard - A Case study of the Joint Strike Fighter Program
Scorecard - A Case study of the Joint Strike Fighter Program
 
NASA RFP
NASA RFPNASA RFP
NASA RFP
 

Similar to Camara for uav jan2012 eas 021

AI Powered Helmet Detection for Enhanced Road Safety Thesis.pdf
AI Powered Helmet Detection for Enhanced Road Safety Thesis.pdfAI Powered Helmet Detection for Enhanced Road Safety Thesis.pdf
AI Powered Helmet Detection for Enhanced Road Safety Thesis.pdf
ABBUSINESS1
 
Mobile Friendly Web Services - Thesis
Mobile Friendly Web Services - ThesisMobile Friendly Web Services - Thesis
Mobile Friendly Web Services - Thesis
Niko Kumpu
 
masteroppgave_larsbrusletto
masteroppgave_larsbruslettomasteroppgave_larsbrusletto
masteroppgave_larsbrusletto
Lars Brusletto
 
Project Final Report Ismail MIM IT13078010 SHUID 24048259_final
Project Final Report Ismail MIM IT13078010 SHUID 24048259_finalProject Final Report Ismail MIM IT13078010 SHUID 24048259_final
Project Final Report Ismail MIM IT13078010 SHUID 24048259_final
Ismail Iqbal
 
Final_Semester_Project _Report
Final_Semester_Project _ReportFinal_Semester_Project _Report
Final_Semester_Project _Report
Sriram Raghavan
 
Application of terrestrial 3D laser scanning in building information modellin...
Application of terrestrial 3D laser scanning in building information modellin...Application of terrestrial 3D laser scanning in building information modellin...
Application of terrestrial 3D laser scanning in building information modellin...
Martin Ma
 

Similar to Camara for uav jan2012 eas 021 (20)

Bike sharing android application
Bike sharing android applicationBike sharing android application
Bike sharing android application
 
ROAD POTHOLE DETECTION USING YOLOV4 DARKNET
ROAD POTHOLE DETECTION USING YOLOV4 DARKNETROAD POTHOLE DETECTION USING YOLOV4 DARKNET
ROAD POTHOLE DETECTION USING YOLOV4 DARKNET
 
ML for blind people.pptx
ML for blind people.pptxML for blind people.pptx
ML for blind people.pptx
 
Motion capture for Animation
Motion capture for AnimationMotion capture for Animation
Motion capture for Animation
 
AI Powered Helmet Detection for Enhanced Road Safety Thesis.pdf
AI Powered Helmet Detection for Enhanced Road Safety Thesis.pdfAI Powered Helmet Detection for Enhanced Road Safety Thesis.pdf
AI Powered Helmet Detection for Enhanced Road Safety Thesis.pdf
 
Mini Project- 3D Graphics And Visualisation
Mini Project- 3D Graphics And VisualisationMini Project- 3D Graphics And Visualisation
Mini Project- 3D Graphics And Visualisation
 
Mobile Friendly Web Services - Thesis
Mobile Friendly Web Services - ThesisMobile Friendly Web Services - Thesis
Mobile Friendly Web Services - Thesis
 
Tr1546
Tr1546Tr1546
Tr1546
 
Real Time Moving Object Detection for Day-Night Surveillance using AI
Real Time Moving Object Detection for Day-Night Surveillance using AIReal Time Moving Object Detection for Day-Night Surveillance using AI
Real Time Moving Object Detection for Day-Night Surveillance using AI
 
masteroppgave_larsbrusletto
masteroppgave_larsbruslettomasteroppgave_larsbrusletto
masteroppgave_larsbrusletto
 
Project Final Report Ismail MIM IT13078010 SHUID 24048259_final
Project Final Report Ismail MIM IT13078010 SHUID 24048259_finalProject Final Report Ismail MIM IT13078010 SHUID 24048259_final
Project Final Report Ismail MIM IT13078010 SHUID 24048259_final
 
Object and pose detection
Object and pose detectionObject and pose detection
Object and pose detection
 
bhargav_flowing-fountain
bhargav_flowing-fountainbhargav_flowing-fountain
bhargav_flowing-fountain
 
FULLTEXT01
FULLTEXT01FULLTEXT01
FULLTEXT01
 
Final_Semester_Project _Report
Final_Semester_Project _ReportFinal_Semester_Project _Report
Final_Semester_Project _Report
 
Project final report
Project final reportProject final report
Project final report
 
Saksham seminar report
Saksham seminar reportSaksham seminar report
Saksham seminar report
 
pgdip-project-report-final-148245F
pgdip-project-report-final-148245Fpgdip-project-report-final-148245F
pgdip-project-report-final-148245F
 
INDOOR AND OUTDOOR NAVIGATION ASSISTANCE SYSTEM FOR VISUALLY IMPAIRED PEOPLE ...
INDOOR AND OUTDOOR NAVIGATION ASSISTANCE SYSTEM FOR VISUALLY IMPAIRED PEOPLE ...INDOOR AND OUTDOOR NAVIGATION ASSISTANCE SYSTEM FOR VISUALLY IMPAIRED PEOPLE ...
INDOOR AND OUTDOOR NAVIGATION ASSISTANCE SYSTEM FOR VISUALLY IMPAIRED PEOPLE ...
 
Application of terrestrial 3D laser scanning in building information modellin...
Application of terrestrial 3D laser scanning in building information modellin...Application of terrestrial 3D laser scanning in building information modellin...
Application of terrestrial 3D laser scanning in building information modellin...
 

More from M.L. Kamalasana

คำว่า หน้าที่
คำว่า หน้าที่คำว่า หน้าที่
คำว่า หน้าที่
M.L. Kamalasana
 
คุณสมบัตินายทหารเรือไทย
คุณสมบัตินายทหารเรือไทยคุณสมบัตินายทหารเรือไทย
คุณสมบัตินายทหารเรือไทย
M.L. Kamalasana
 
การสงครามของไทยยุคปัจจุบัน
การสงครามของไทยยุคปัจจุบันการสงครามของไทยยุคปัจจุบัน
การสงครามของไทยยุคปัจจุบัน
M.L. Kamalasana
 
บรรยายยานบินเบาะอากาศ
บรรยายยานบินเบาะอากาศบรรยายยานบินเบาะอากาศ
บรรยายยานบินเบาะอากาศ
M.L. Kamalasana
 
ชลัมพ์ Theory ยานเบาะอากาศ
ชลัมพ์ Theory ยานเบาะอากาศชลัมพ์ Theory ยานเบาะอากาศ
ชลัมพ์ Theory ยานเบาะอากาศ
M.L. Kamalasana
 
คู่มือการใช้งาน Narai uav ฉบับแก้ไข pdf.
คู่มือการใช้งาน  Narai uav ฉบับแก้ไข pdf.คู่มือการใช้งาน  Narai uav ฉบับแก้ไข pdf.
คู่มือการใช้งาน Narai uav ฉบับแก้ไข pdf.
M.L. Kamalasana
 
แถลง Uav กลุ่ม 3 ทอ.
แถลง Uav กลุ่ม 3 ทอ.แถลง Uav กลุ่ม 3 ทอ.
แถลง Uav กลุ่ม 3 ทอ.
M.L. Kamalasana
 

More from M.L. Kamalasana (15)

ภาพรวมของการพัฒนาการบริหารจัดการภาครัฐ (New)
ภาพรวมของการพัฒนาการบริหารจัดการภาครัฐ (New)ภาพรวมของการพัฒนาการบริหารจัดการภาครัฐ (New)
ภาพรวมของการพัฒนาการบริหารจัดการภาครัฐ (New)
 
Balancescorecard
BalancescorecardBalancescorecard
Balancescorecard
 
คำว่า หน้าที่
คำว่า หน้าที่คำว่า หน้าที่
คำว่า หน้าที่
 
Technic
TechnicTechnic
Technic
 
Defense 120217
Defense  120217Defense  120217
Defense 120217
 
คุณสมบัตินายทหารเรือไทย
คุณสมบัตินายทหารเรือไทยคุณสมบัตินายทหารเรือไทย
คุณสมบัตินายทหารเรือไทย
 
การสงครามของไทยยุคปัจจุบัน
การสงครามของไทยยุคปัจจุบันการสงครามของไทยยุคปัจจุบัน
การสงครามของไทยยุคปัจจุบัน
 
World war 2
World war 2World war 2
World war 2
 
บรรยายยานบินเบาะอากาศ
บรรยายยานบินเบาะอากาศบรรยายยานบินเบาะอากาศ
บรรยายยานบินเบาะอากาศ
 
ชลัมพ์ Theory ยานเบาะอากาศ
ชลัมพ์ Theory ยานเบาะอากาศชลัมพ์ Theory ยานเบาะอากาศ
ชลัมพ์ Theory ยานเบาะอากาศ
 
คู่มือการใช้งาน Narai uav ฉบับแก้ไข pdf.
คู่มือการใช้งาน  Narai uav ฉบับแก้ไข pdf.คู่มือการใช้งาน  Narai uav ฉบับแก้ไข pdf.
คู่มือการใช้งาน Narai uav ฉบับแก้ไข pdf.
 
แถลง Uav กลุ่ม 3 ทอ.
แถลง Uav กลุ่ม 3 ทอ.แถลง Uav กลุ่ม 3 ทอ.
แถลง Uav กลุ่ม 3 ทอ.
 
Cg vs n
Cg vs nCg vs n
Cg vs n
 
Mega trend
Mega trendMega trend
Mega trend
 
ยกภูเขาไปอ่าว
ยกภูเขาไปอ่าวยกภูเขาไปอ่าว
ยกภูเขาไปอ่าว
 

Camara for uav jan2012 eas 021

  • 1. SIM UNIVERSITY SCHOOL OF SCIENCE AND TECHNOLOGY DEVELOPMENT OBJECT DETECTION PROGRAM FOR A CAMERA FOR MICRO- AERIAL VEHICLE STUDENT: AZLI ERWIN AZIZ (PI NO. E0806919) SUPERVISOR: SUTTHIPHONG SRIGRAROM PROJECT CODE: JAN2012/EAS/021 A project report submitted to SIM University In partial fulfilment of the requirements for the degree of Bachelor of Engineering in Aerospace System January 2012 [EAS 499 Capstone Project] [Development of Object Detection Program for Micro Aerial Vehicles] 1
  • 2. Abstract Today, with the advancements of microprocessor technology, unmanned aerial vehicles have been increasing in popularity with private agencies and hobby enthusiast. These unmanned aerial vehicles have also reduced in size and come in different forms of rotary vehicles. Instead of the conventional fixed wing platform, unmanned aerial vehicles have taken new forms and unconventional platforms. Tri- motor and quad-motor copters have gained popularity in the market. These new platforms make it easier to take-off and land as these aerial vehicles do not need a long runway and have attributes similar to that of a traditional helicopter. As global positioning satellites (GPS) are becoming readily accessible to the public, autonomous aerial vehicles have also sprung out. Attaching GPS receivers and transmitter, these micro aerial vehicles can fly autonomously to designated way-paints specified by the user. However, due to limitations of GPS such as weak signal strength indoors, autonomous flying is restrictive indoors. The aim of this project is to develop an object detection program for a digital camera that can be used in conjunction with a microprocessor on a micro aerial vehicle for autonomous flight in an indoor environment. The object detection program functions to create object boundaries and edges found in the video recorded by the camera. Thus creating boundaries of an environment. This project will use video and image analysis techniques to build colour co-relations and edges of the objects. The results from the analysis can then be passed to the microprocessor for processing. This then enables the micro aerial vehicle to be able to avoid and head towards objects in its surrounding autonomously. This report will cover on the objectives of the project, literature research on image analysis and project management aspects. 
The main focus will be on conceptualization, development, implementation and testing of the developed object detection software in MATLAB. Finally, topics such as conclusions, recommendations, critical reviews and reflections of this project will also be discussed. [EAS 499 Capstone Project] [Development of Object Detection Program for Micro Aerial Vehicles] 2
  • 3. Acknowledgements I would like to take this opportunity to offer my sincere appreciation to my project supervisor Dr Sutthiphong Srigrarom (Spot), for his guidance, patience and support through out this entire capstone project. During this time, Dr Spot showed outmost patience and support even when there were slow progress and when many challenges were faced. He never failed to clarify my doubts and assist me in any way possible. Special thanks to Mr Koh Pak Keng head of the BEHAS program, Ann lee and all other friends in UNISIM for their support and practical suggestions towards this project. They have made my learning journey in UNISIM a very fulfilling and enriching experience. To all lectures, professors and support staff of the School of science and technology for help directly or indirectly in helping me complete my project. To my managers, who have encouraged and supported me throughout my 4.5 years of learning in UNISIM. Last of all, I would like to thank my family members, fiancée and loved ones who have shown great care and concern for me during this period of studies and have given me moral support to complete this project. With their support and encouragements, I was able to preserve on despite having other commitments. [EAS 499 Capstone Project] [Development of Object Detection Program for Micro Aerial Vehicles] 3
  • 4. Table of Contents Abstract .................................................................................................................................. 2 Acknowledgements ............................................................................................................ 3 List of figures ........................................................................................................................ 6 Chapter 1: Introduction ............................................................................................... 7 1.1 Background on Desired motivation ............................................................................... 7 1.2 Objectives ...............................................................................................................................10 1.3 Scope ........................................................................................................................................10 1.4 Proposed Approach ............................................................................................................11 1.5 Layout of the project report ............................................................................................12 Chapter 2: Literature review ................................................................................... 13 2.1 Fundamentals of image analysis....................................................................................13 2.1 .1 Analog Image definition:........................................................................................................ 13 2.1.2 Digital Image definition:.......................................................................................................... 13 2.1.3: Binary Image Definition ......................................................................................................... 
14 2.1.4 Sampling ........................................................................................................................................ 14 2.1.5 Quantisation................................................................................................................................. 15 2.1.6 Grey level Histogram ................................................................................................................ 15 2.2 Image analysis Methodology ...........................................................................................16 2.1 Image Segmentation..................................................................................................................... 16 2.2 Point independent thresh holding.......................................................................................... 17 2.3 Neighbour dependant method ................................................................................................. 18 2.4 Edge Detection................................................................................................................................ 18 2.5 Morphological operations .......................................................................................................... 20 2.6 Representation of objects .......................................................................................................... 20 Chapter 3 Software and Hardware Required ........................................................ 21 3.1 Software ..................................................................................................................................21 3.2 Hardware ...............................................................................................................................22 Chapter 4 Program Design and Development ....................................................... 
23 4.1 Project Requirement ..........................................................................................................23 4.2 Object Detection Program Overview............................................................................23 4.3 Design overview ..................................................................................................................24 4.4 Software Design Procedures ...........................................................................................25 4.4.1 Initialisation of image .............................................................................................................. 25 4.4.2 Image Conversion ...................................................................................................................... 26 4.4.3 Edge Detectors ............................................................................................................................ 28 4.4.4 Display of Data ..................................................................................................................29 Chapter 5 Testing and Evaluation.............................................................................. 30 5.1 Object Detection Program Test (Using Pre-recorded Video) ..............................30 5.2 Object Detection Program Test (Using live acquisition Video) ..........................31 5.3 Field Test of Final design ..................................................................................................32 5.3.1 Object Test .................................................................................................................................... 32 5.3.2 Object Test Evaluation ............................................................................................................. 34 5.3.3 Lighting Test ................................................................................................................................ 
34 5.3.4 Lighting Test Evaluation ......................................................................................................... 36 Chapter 6 Recommendations and Conclusion ....................................................... 38 [EAS 499 Capstone Project] [Development of Object Detection Program for Micro Aerial Vehicles] 4
  • 5. 6.1 Summary ................................................................................................................................38 6.2 Overall Conclusion ..............................................................................................................38 6.3 Recommendation ................................................................................................................38 7. Review and Reflection ............................................................................................... 40 References:......................................................................................................................... 42 Appendix A – Gantt chart .............................................................................................. 43 Appendix B – Initialize image code ......................... Error! Bookmark not defined. Appendix C – Initial Design Simulink Code ............................................................. 44 Appendix D – Final Design Simulink Code .............................................................. 45 [EAS 499 Capstone Project] [Development of Object Detection Program for Micro Aerial Vehicles] 5
  • 6. List of figures Figure 1: Quad-Copter .............................................................................................................................. 7 Figure 2: UAV camera system ................................................................................................................. 9 Figure 3: Image analysis flowchart ........................................................................................................ 11 Figure 4: Digital Image Sample of a continuous image ......................................................................... 14 Figure 5: Grey-Level Histogram ............................................................................................................ 16 Figure 6: Image Segmentation using Grey-level Histogram .................................................................. 17 Figure 7: Adaptive Thresh holding (non-uniform illumination)............................................................. 18 Figure 8: Edge Detection ........................................................................................................................ 20 Figure 9: MATLAB Environment .......................................................................................................... 21 Figure 10: MATLAB Simulink Design Environment ............................................................................ 22 Figure 11: Standard Webcam and Digital Camera ................................................................................. 22 Figure 12: Vision Control Motion System ............................................................................................. 23 Figure 13: Object Detection Program Workflow ................................................................................... 24 Figure 14: Image import to MATLAB ................................................................................................... 
25 Figure 15: Displaying of Image in MATLAB ........................................................................................ 26 Figure 16: Binary Image Conversion ..................................................................................................... 27 Figure 17: Binary Histogram .................................................................................................................. 27 Figure 18: Grey scaled image and histogram ......................................................................................... 28 Figure 19:Edge Detector......................................................................................................................... 28 Figure 20: Corridor Video Snapshot ...................................................................................................... 30 Figure 21: Result Display (Pre-recorded video) ..................................................................................... 31 Figure 22: Live feed design output ......................................................................................................... 32 Figure 23: Bottle Test ............................................................................................................................. 33 Figure 24: Table and Chair test .............................................................. Error! Bookmark not defined. Figure 25: Office environment ............................................................................................................... 35 Figure 26: Out door Environment .......................................................................................................... 36 Figure 27: Out door Environment 2 ....................................................................................................... 36 [EAS 499 Capstone Project] [Development of Object Detection Program for Micro Aerial Vehicles] 6
  • 7. Chapter 1: Introduction 1.1 Background on Desired motivation Unmanned Aerial vehicles (UAV) have been used with government armed forces and military applications throughout the world for many years now. UAVs’ served mainly as surveillance and security methods for these agencies across the world. However, UAVs’ have grown in popularity among hobby enthusiast and private industries and agencies in recent years. This growth in UAVs’ occurred with the advancement of micro processing technology. The cost and size of these processors greatly reduced and their processing capabilities have increased dramatically. UAVs’ these days are easier to fly due to microprocessors as they help to control stability and flight of these vehicles. Programmable microprocessors coupled with other components such as sensors enable UAVs’ to fly autonomously. Today, UAVs’ come in different shapes and sizes and conventional flight have been pushed aside. Aside from the traditional helicopter and fixed wing aeroplanes, multi- rotor vehicles have been gaining popularity. Some examples of these unconventional vehicles are the tri-copter and quad-copters. These vehicles unlike a traditional helicopter use more then one rotary motor to function. An example of a quad-copter can be seen in figure 1.1 below. Figure 1: Quad-Copter [EAS 499 Capstone Project] [Development of Object Detection Program for Micro Aerial Vehicles] 7
  • 8. Quad-copters have attributes similar to that of a traditional helicopter with additional stability control. With four motors spinning in opposite directions, speed and attitude controls are depended solely on power distribution to the motors by the microprocessor. Flight stability of the quad copter has also increased with four motors. Programmable microprocessors with auto stabilization function help to stabilize these vehicles in flight as well. Currently, UAVs’ are readily accessible in the market and are fully customisable and configurable to perform all sorts of different functions. These UAVs can be manually controlled using a traditional radio transmitter or Bluetooth and Wi-Fi enabled smartphones and tablet computers. The user can also configure them to perform specified automated task by pre-programming the microprocessor and assigning that task to a switch on the controller. The UAV will then carry out that automated task when the switch has been enabled. In an outdoor environment, with the aid of global positioning satellite (GPS) modules, UAVs’ can perform automated task such as flying a specific route defined by the user and circling around a perimeter specified by the user. With the aid of barometers and sonar-sensors, UAVs’ are able to maintain specific heights to hover and deploy. Thus, with these sensors and GPS modules, UAVs’ are able to fully function autonomously in an outdoor environment. However, in an indoor environment, the scenario is different due to limitations of GPS triangulations. This is due to weak signal strength of GPS satellite in an indoor environment. Thus, UAVs’ are not able to function fully autonomously as they cannot detect their exact location in reference to that of indoor environment. To overcome this issue in an indoor environment, UAVs have to be installed with additional sensors that help locate its position in reference to that of the environment. 
These additional sensors include a camera system that records real time video of the environment. By developing an object detection program that detects key surfaces and references of the environment will enable the UAV to locate its position in reference to the [EAS 499 Capstone Project] [Development of Object Detection Program for Micro Aerial Vehicles] 8
  • 9. surroundings. Used in conjunction with a camera system, results from the program can then be passed to the main controller module of the UAV thus enabling autonomous flight in indoor environments. This will then enable automation function of UAVs’ in an indoor environment. The figure below shows an example of a UAV camera system. Figure 2: UAV camera system [EAS 499 Capstone Project] [Development of Object Detection Program for Micro Aerial Vehicles] 9
  • 10. 1.2 Objectives The objective of this project is to develop an object detection program for UAVs or micro aerial vehicle camera systems. The program will take video inputs from the camera and detect objects in the video allowing the UAV or micro aerial vehicle to be able to fly autonomously towards or away from obstacles. The information processed by this program will enable the UAV to detect objects or key surfaces in its surrounding. 1.3 Scope This project will consists of the following stages of development: • Video recording phase (to simulate real time video of the camera system in the micro aerial vehicle) • Video rendering phase (splitting the video into still picture frames for analysis) • Initial development of the program • Co-relate each frame of the video using programming software such as mat lab to find difference in the frame. • Testing and trail phase (different videos with different objects are tested to test the robustness of the program) • Implementation corrections from testing and trial data A Gantt chart for key milestones and major target has been attached in Appendix A. This chart shows the detail breakdown of the project in different phases and the time proposed to accomplish the required phase of the project. This chart helps to manage the overall progress of the project from the design phase to the final development of the project. Keeping close attention to this chart will ensure the project is completed in a timely and organised manner. [EAS 499 Capstone Project] [Development of Object Detection Program for Micro Aerial Vehicles] 10
  • 11. 1.4 Proposed Approach The proposed method for tackling this project will be inline with image and video analysing techniques. There will be a few stages for the development of the object detection program. The main component to this project will be the detection of object boundaries in an image or video. The flow chart below in figure 3 will show the development process of video or image analysing program and this program will follow closely to this development flow chart. There are 6 major phases in this project that have been identified and will be explained in further detail in the sub categories of this report. Figure 3: Image analysis flowchart [EAS 499 Capstone Project] [Development of Object Detection Program for Micro Aerial Vehicles] 11
  • 12. 1.5 Layout of the project report Chapter 1: Introduction and Background Chapter 2: Literature review on image analysing methods and concepts. Chapter 3: Hardware and Software Required Chapter 4: Program Design and Development Chapter5: Testing and Evaluation Chapter 6: Recommendation and Conclusions Chapter 7: Review and Reflections [EAS 499 Capstone Project] [Development of Object Detection Program for Micro Aerial Vehicles] 12
Chapter 2: Literature Review

2.1 Fundamentals of image analysis
Before development of the object detection program can take place, an understanding of the fundamentals of digital images has to be established. Image analysis and manipulation can be divided into three stages:
• Image processing (image in => image out)
• Image analysis (image in => measurement out)
• Image understanding (image in => high-level description out)
This chapter discusses the image processing and image analysis stages shown above.

2.1.1 Analogue image definition
An analogue image can be defined as a function of two variables, for example A(x, y), where A is the amplitude of the image at the real coordinate position (x, y). The amplitude A can represent any parameter of the image, such as brightness, contrast or colour. Some concepts consider images to contain sub-images, which can be referred to as regions of interest (ROIs) or simply regions. In this view, an image frequently contains a collection of objects, each of which can be the basis for a region. Specific image processing operations can be applied individually to these regions, so different regions of an image can be processed differently and independently of each other.

2.1.2 Digital image definition
A digital image can be defined as a numeric representation of a two-dimensional image. Digital images are usually represented in binary format and are derived in a 2D space through a sampling process applied to an analogue image; this sampling process is also referred to as digitisation. A 2D continuous image A(x, y) is divided into N rows and M columns and takes a finite set of digital values. The intersection of a row and a column is termed a pixel. Pixels are the smallest individual elements in an image, holding quantised values that represent the brightness of a given colour at a specific point. An example of a digital image with rows and columns can be seen in figure 4 below.

Figure 4: Digital image sample of a continuous image (columns and rows of pixels, each holding an average brightness)

As seen in figure 4 above, the image is divided into rows and columns; in this case there are 16 rows and 16 columns. Each cell in the image is the intersection of a column and a row and is referred to as a pixel, as mentioned earlier. The value of each pixel is the average brightness within the pixel, rounded to the nearest integer value.

2.1.3 Binary image definition
A binary image is an image that has two possible values for each pixel. Typically the two colours shown in a binary image are black and white. Binary images are also called bi-level images and have pixel values of 0 or 1. Binary images are typically used for image analysis and comparisons.

2.1.4 Sampling
Before an analogue image can be processed, it has to be digitised into a two-dimensional digital image. This process is called sampling: the image is represented by measurements taken at regularly spaced intervals. Two important criteria for the sampling process are:
• Sampling interval
o Distance between sample points (pixels)
• Tessellation
o The pattern of sampling points

The resolution of an image can be expressed as the number of pixels present in the image. If the number of pixels is too small, individual pixels become visible and other undesired effects, such as aliasing and noise, appear. Image noise is a random variation of brightness and colour information in digital images and is usually an artefact of electrical noise. Noise can be generated by the circuitry of digital cameras and is undesirable during image processing, as it adds spurious information to the image. To reduce noise, the image has to pass through a smoothing stage, typically Gaussian smoothing.

2.1.5 Quantisation
Quantisation is a process that compresses a range of values to a single quantum value. It is mainly used to reduce the number of colours required to display a digital image, thus enabling a reduction in the file size of the image. In this process, an analogue-to-digital converter transforms brightness values into a range of integers. The maximum number in that range (e.g. 0 to M) is limited by the analogue-to-digital converter and the computer, where M is determined by the number of bits used to represent the value of each pixel. This in turn determines the number of grey levels in the image. Representing the amplitude of the 2D signal at a given coordinate as an integer value with L different grey levels is referred to as amplitude quantisation. If there are too few bits per pixel, visible steps appear between grey levels.

2.1.6 Grey-level histogram
Grey-level histograms are one of the simplest yet most effective tools used to analyse images: a grey-level histogram can reveal faulty settings in an image digitiser, which is almost impossible to detect without digital hardware. Below is an example of a typical grey-level histogram of an image.

Figure 5: Grey-level histogram

A grey-level histogram is a function showing, for each grey level, the number of pixels in the image that have that level. In this figure, the grey-scaled image is shown as a grey-level histogram.

2.2 Image analysis methodology
Image analysis comprises different operations and functions. Three main functions are commonly used in image analysis:
• Segmentation
1. Thresholding
2. Edge detection
• Morphological operations
• Representation of objects

2.2.1 Image segmentation
To distinguish simple objects from the background and determine their area, we can use grey-level histograms. Image segmentation is the operation of distinguishing important objects from the background (or from unimportant objects). Figure 6 below shows an example of image segmentation using a grey-level histogram.
Figure 6: Image segmentation using a grey-level histogram

Image segmentation separates the dark object from the bright background, as shown in figure 6 above. There are two common approaches to segmentation:
• Point-independent thresholding methods
o Thresholding (semi-thresholding)
o Adaptive thresholding
• Neighbourhood-dependent methods
o Edge detection
o Boundary tracking
o Template matching

2.2.2 Point-independent thresholding
Point-independent thresholding operates by locating groups of pixels with similar properties. Thresholding assigns the value 0 to pixels whose grey level is less than the threshold and the value 255 to pixels whose grey level is higher than the threshold. This segments the image into two regions, one corresponding to the background and the other to the object; thresholding is therefore used to discriminate an object from the background in an image. This is straightforward if the image has a bimodal grey-level histogram. For more complex images this method is inaccurate and a different approach is required.

Adaptive thresholding is a more sophisticated process for discriminating grey levels from backgrounds. In practice, grey-level histograms are rarely bimodal, so point-independent thresholding is often insufficient. This is due to random noise captured in the image, varying illumination when the image is taken, and objects of different sizes and shapes within the image. Adaptive thresholding is very useful for images with non-uniform illumination.

Figure 7: Adaptive thresholding (non-uniform illumination)

2.3 Neighbourhood-dependent methods
Neighbourhood-dependent methods consist of the three types of operations mentioned earlier in this chapter: edge detection, boundary tracking and template matching. A neighbourhood-dependent operation generates an output pixel on the basis of the pixel at the corresponding position in the input image and of its neighbouring pixels. The size of the neighbourhood may vary; many techniques use 3 x 3 or 5 x 5 neighbourhoods centred at the input pixel. Neighbourhood operations play a key role in modern digital image processing.

2.4 Edge detection
Edge detection is a tool used in image processing that aims to identify points in a digital image at which the image brightness changes sharply or has discontinuities. Identifying sharp changes in brightness captures important events and changes in the properties of the image. Discontinuities in image brightness are likely to correspond to:
• Discontinuities in depth
• Discontinuities in surface orientation
• Changes in material properties
• Variations in scene illumination

The result of edge detection applied to an image is a set of connected curves that indicate the boundaries of objects and surfaces, as well as curves that correspond to changes in surface orientation. The edge detection methods commonly used in image analysis fall into categories such as:
1. Search-based (discrete approximation of the gradient, followed by thresholding of the gradient-norm image – first derivative)
2. Zero-crossing based (second derivative)
3. Band-pass filtering
4. Compass operators

A search-based method detects edges by first computing a measure of edge strength, usually a first-derivative expression such as the gradient magnitude, and then searching for local directional maxima of the gradient magnitude using a computed estimate of the local orientation of the edge, usually the gradient direction. A zero-crossing method, on the other hand, searches for zero crossings in a second-order derivative expression computed from the image, usually the zero crossings of a non-linear differential expression.

A vital pre-processing step for edge detection is noise reduction, mentioned in section 2.1.4, commonly performed with a Gaussian smoothing operation. Another important operation during edge detection is edge thinning, a technique to remove unwanted spurious points on the edges of the image. It is usually employed after the image has been filtered for noise, an edge detector has been applied, and the image has been smoothed using an appropriate threshold value. An example of edge detection applied to an image can be seen in figure 8 below.

Figure 8: Edge detection

2.5 Morphological operations
Morphology is a broad set of image processing operations that process an image based on shapes. A morphological method applies a structuring element to an input image, creating an output image of the same size. The value of each pixel in the output image is based on a comparison of the corresponding pixel in the input image with its neighbours. A morphological operation usually consists of two basic operations, dilation and erosion. Dilation adds pixels to the boundaries of objects in an image, while erosion removes pixels from object boundaries.

2.6 Representation of objects
Once thresholding and edge detectors have been applied to an image, it is necessary to store information about each object for use in later processing, so each object needs to be assigned an identifier. There are three ways of doing so:
• Object membership map (the value of each pixel encodes the sequence number of the object)
• Line segment coding (objects represented as collections of chords oriented parallel to image lines)
• Boundary chain code (defines only the position of the object boundary)
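The thresholding rule described in section 2.2.2 — grey levels below the threshold become 0, those above become 255 — can be sketched in a few lines. This is a minimal pure-Python illustration on a list-of-lists "image" (the project's own implementation is in MATLAB/Simulink and is not shown in this report):

```python
# Global (point-independent) thresholding, section 2.2.2:
# pixels below the threshold become 0 (background),
# pixels at or above it become 255 (object).

def threshold(image, t):
    """image: 2-D list of grey levels (0-255); returns a 0/255 image."""
    return [[0 if p < t else 255 for p in row] for row in image]

grey = [
    [12,  40, 200],
    [35, 180, 220],
    [10,  25, 240],
]
binary = threshold(grey, 128)
# Bright object pixels are now 255; dark background pixels are 0.
```

This works well exactly when the histogram is bimodal, as the text notes; a single value of `t` cannot separate object from background under uneven lighting.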
Chapter 3: Software and Hardware Required

3.1 Software
One of the requirements stated by the university is to use the proposed software, MATLAB. MATLAB, short for MATrix LABoratory, is developed by MathWorks, Inc. (www.mathworks.com). MATLAB is a high-level interactive language for numerical computation, visualisation and programming. It has capabilities to analyse data, develop algorithms, and create models and applications. Its language and object-oriented programming concepts are similar to those of C++. The figure below shows the MATLAB programming environment.

Figure 9: MATLAB environment

MATLAB incorporates built-in maths functions, language features and tools that make development faster than in traditional languages such as C++ and Java. MATLAB offers various toolboxes that give the user different sets of tools for different applications; for this project, we will be using the image acquisition, image analysis and video analysis toolboxes. MATLAB also provides a graphical interface for creating algorithms, called Simulink. Simulink allows the user to create programs using pre-defined block sets. MATLAB also has further toolboxes that enhance specific functions, such as the Aerospace Toolbox and the Control System Toolbox. Figure 10 below shows the Simulink design environment.

Figure 10: MATLAB Simulink design environment

With a library of block sets, the user can create different types of programs by linking these blocks.

3.2 Hardware
To develop the object detection program, an image-capturing device is required to simulate the camera system of a UAV or micro aerial vehicle. Two types of image-capturing devices are currently available commercially:
• Digital cameras
• Analogue cameras

Digital cameras are available in various resolutions and interface directly with a computer. Analogue cameras, on the other hand, require an additional frame-grabber or TV tuner card to interface with the computer. Digital cameras give high-quality, low-noise images, unlike analogue cameras, which have lower sensitivity and produce lower-quality images. We will therefore use a standard digital camera and a webcam. A webcam is a video camera that feeds images in real time to a computer via USB, the Internet or Wi-Fi. A standard webcam and digital camera can be seen in the figure below.

Figure 11: Standard webcam and digital camera
Chapter 4: Program Design and Development

4.1 Project requirements
To develop an object detection program for a UAV or micro aerial vehicle, we need to review the purpose and function of the program. The requirements are to:
• Grab and capture images and video from an image-recording device
• Correlate frames of captured images (histograms)
• Detect object boundaries, surfaces and textures

Based on the literature review of image analysis techniques, we can create a program on these methods and concepts to fulfil the requirements stated above.

4.2 Object detection program overview
This section describes the design overview of the object detection program. The object detection program comes under the Vision Controlled Motion (VCM) system. VCM is used in robotics and, in our case, in an unmanned aerial vehicle (UAV) or micro aerial vehicle. The VCM system is shown graphically in the figure below.

Figure 12: Vision Controlled Motion system

VCM is a very important concept, and our object detection program is a part of this system: it is the image analysis portion of the VCM system shown in figure 12. The program analyses images and videos acquired from an image-capturing device, which gives the UAV the ability to sense its surroundings. User-defined inputs then allow the UAV to avoid obstacles, identify key surfaces and track user-defined objects autonomously. In an indoor environment, the ability to sense the surroundings is the key to autonomous flight.

4.3 Design overview
The primary design objective of the program is to read and acquire images and videos from the image-capturing device attached to the microprocessor. The program then needs to filter and convert these images and videos into usable data for analysis. Once the data has been converted, analysis operators can be applied and the results displayed. A detailed workflow of the program can be found in the figure below.

Figure 13: Object detection program workflow
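The workflow in figure 13 — acquire the image, convert it to grey scale, apply an analysis operator, display the result — can be sketched end-to-end on a tiny synthetic image. This is a hedged pure-Python illustration, not the report's MATLAB/Simulink implementation; the grey-conversion weights are the usual Rec. 601 luma coefficients, an assumption since the report does not specify them:

```python
# Sketch of the workflow: (1) image data in, (2) grey-scale conversion,
# (3) a crude edge-strength measure, (4) result out.

def to_grey(rgb_image):
    # Weighted average of the R, G, B channels (Rec. 601 weights).
    return [[round(0.299*r + 0.587*g + 0.114*b) for (r, g, b) in row]
            for row in rgb_image]

def edge_strength(grey, x, y):
    # Crude horizontal + vertical difference at an interior pixel;
    # stands in for a real operator such as Prewitt or Sobel.
    gx = grey[y][x+1] - grey[y][x-1]
    gy = grey[y+1][x] - grey[y-1][x]
    return abs(gx) + abs(gy)

# A white bar on a black background.
rgb = [[(0, 0, 0)] * 4,
       [(0, 0, 0), (255, 255, 255), (255, 255, 255), (0, 0, 0)],
       [(0, 0, 0)] * 4]
grey = to_grey(rgb)
print(edge_strength(grey, 1, 1))  # -> 255: strong response at the border
```

The real program replaces each step with a Simulink block, but the data flow is the same.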
4.4 Software design procedures
The object detection program consists of several critical processes and procedures. The steps involved in creating the program are as follows:
1. Initialise video data in the program
2. Convert video and image data for processing
3. Apply edge detectors
4. Display output data

As listed above, there are four critical steps in our program. The initial step is to initialise an image or video that was captured using a digital camera and uploaded to the computer. Once initialised, conversions to binary and grey-scale images are carried out, and an edge detector is applied to the converted images. Lastly, the images are displayed with their detected edges. Detailed procedures for each process are described in the following sections. For this project, we will be using the Simulink function of MATLAB mentioned in the previous chapter.

4.4.1 Initialisation of the image
The initial step for our object detection program is to sample images captured by the webcam. This step involves initialising the image or video in the MATLAB software so that pre-recorded videos or images can be read. The figure below shows an image being imported into MATLAB, together with the image description.

Figure 14: Image imported into MATLAB

In the figure above, an image was imported into MATLAB and its description is shown, which includes the size, bit depth and class of the image. Once the image has been imported, we can show a preview of it in MATLAB. Figure 15 below shows a preview of the imported image.

Figure 15: Displaying an image in MATLAB

With the image imported, we can apply the different analysis techniques discussed in chapter 2.

4.4.2 Image conversion
Once we have imported the image, we can convert it to a binary image and a grey-scale image. This is a vital step in image analysis, as discussed in the fundamentals of image analysis in chapter 2. Figure 16 below shows the image above converted to a binary image. By converting the image to binary, we have changed the value of each pixel to a new value of 0 or 1: the black areas in the picture represent pixels with value 0, while the white areas represent pixels with value 1.

Figure 16: Binary image conversion

Next, we can display the histogram of the binary image. The figure below shows the colour histogram of the binary image above, counting the number of pixels that hold the value 0 or 1.

Figure 17: Binary histogram

As explained in chapter 2, binary images have pixel values of either 0 or 1, which can be seen in the colour histogram in figure 17. The next conversion step is to create a grey-scale image, which replaces the colour pixels of the image with grey shades from white to black. Once the image has been converted, we take the grey-level histogram of the image, as shown in figure 18 below.

Figure 18: Grey-scale image and histogram

4.4.3 Edge detectors
Applying edge detectors is the next important step of our object detection program. The following figure shows the above image after edge segmentation. Edge segmentation creates boundaries around objects in the image. This is an important component of our program, as object boundaries and surfaces are important for a micro aerial vehicle.

Figure 19: Edge detector

By applying an edge detector to the image, we can see the object boundaries: the white lines in figure 19 show the boundaries detected in the image. In the figure above, an edge detector, in this case a Sobel operator, was applied to a binary image. Edge detectors can only be applied to a grey-scale or binary image, so the conversion step is very important.

4.4.4 Display of data
The final step for the object detection program is to display the output image as usable data for the UAV. The image can be written as a binary file or as a set of codes for the microprocessor. With this data, the microprocessor can compute the results and instruct the UAV to fly towards or avoid structures in the environment. For this project, however, we will display the output image using the video display screen of the software. We will also use statistical tools such as histograms to view the data.
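The gradient-based edge detectors used in this chapter and the next (Sobel, Prewitt) estimate the horizontal and vertical intensity gradients with two 3 x 3 kernels and combine them into an edge strength. A minimal pure-Python sketch of the Prewitt operator follows; the report's own edge detection is done by MATLAB blocks, so this is illustrative only:

```python
# Prewitt operator: two 3x3 kernels estimate the horizontal (gx) and
# vertical (gy) intensity gradients; edge strength is the magnitude.
import math

PREWITT_X = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
PREWITT_Y = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

def prewitt_magnitude(grey, x, y):
    """Gradient magnitude at an interior pixel (x, y) of a 2-D list."""
    gx = gy = 0
    for j in range(3):
        for i in range(3):
            p = grey[y + j - 1][x + i - 1]
            gx += PREWITT_X[j][i] * p
            gy += PREWITT_Y[j][i] * p
    return math.hypot(gx, gy)

# A vertical light-to-dark step: columns 0-1 bright, columns 2-3 dark.
step = [[255, 255, 0, 0]] * 4
mag = prewitt_magnitude(step, 1, 1)        # 765.0: pixel sits on the edge
flat = prewitt_magnitude([[7] * 4] * 4, 1, 1)  # 0.0: uniform region
```

Thresholding this magnitude image then yields the white boundary lines seen in the edge displays.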
Chapter 5: Testing and Evaluation

5.1 Object detection program test (using pre-recorded video)
The first program design takes the first approach in the design flowchart of the overall object detection program. This approach uses a pre-recorded video that was recorded with a digital camera and then uploaded to the computer. A video of an indoor environment was taken using a digital camera; to simulate structures such as pillars and ceilings, we used a video of a typical corridor. Some images of this corridor can be seen in the figure below.

Figure 20: Corridor video snapshot

The pre-recorded video is initialised in the program and converted into usable images for analysis. Once this is done, an edge detector is applied to the images. For this design, a Prewitt edge detector is used: it calculates the gradient of the image intensity at each point, giving the possible increase from light to dark and the rate of change in each direction. The output of this video is displayed using video displays and a colour histogram. The figure below shows a snapshot of this program design and its output.

Figure 21: Result display (pre-recorded video)

5.2 Object detection program test (using live video acquisition)
The second design of the object detection program uses a live video feed instead of pre-recorded video. This approach uses a standard webcam connected to the computer via USB and processes and converts live images instantly. As mentioned in the flowchart of the design overview, this is the second method of image acquisition. This process takes place in real time: any objects in the video are instantaneously analysed, and an output image is displayed. Real-time processing analyses the images at the same speed as they occur. In the context of a UAV or micro aerial vehicle, real-time processing is required to process images captured by the UAV's camera system. With real-time processing, the on-board microprocessor is able to analyse data from the object detection program while the UAV is in flight, enabling autonomous functionality. This is therefore the final design for the object detection program. The first step in this design is to initialise the webcam in the program. Once that is done, the webcam grabs frames at 30 frames per second. These frames are then converted, and an edge detector is applied to them. The output data is displayed as images and a colour histogram. The Prewitt edge detector mentioned in the first design is applied here as well. The figure below shows a snapshot of this design's output.

Figure 22: Live feed design output

From the output above, real-time images were processed and the output was displayed in video displays. These displays include the original image, the binary image, the grey-scale image, the edge image and, lastly, an overlay of the edges on the grey-scale image. A colour histogram of the original image is also shown.

5.3 Field test of final design
To verify the final design of the object detection program, a field test was done. Using different objects, the output of the program was recorded and analysed to assess the robustness of the program's edge detector. To test the design further, the test was carried out under different lighting conditions, an important factor for indoor environments.

5.3.1 Object test
During this test, different objects were used to simulate common objects found in an indoor environment. The first object used was a common drinking bottle. The figure below shows the output for the bottle after being processed by the object detection program; a histogram of the colours in the video is also displayed.

Figure 23: Bottle test

Next, tables and chairs were used to simulate an indoor environment. As in the previous test, the tables and chairs were placed in front of the camera and the output data was displayed in the video display. The figure below shows the output data for the chairs and table after processing.

Figure 24: Indoor tables and chairs
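The colour histograms shown alongside these test displays simply count how many pixels fall at each level in each channel; the grey-level histogram of section 2.1.6 is the same idea on a single channel. A minimal pure-Python sketch (illustrative only; the report generates its histograms with MATLAB display blocks):

```python
# Grey-level histogram: map each grey level to the number of pixels
# holding that level (section 2.1.6). The same per-channel count gives
# the colour histograms used in the test displays.
from collections import Counter

def grey_histogram(grey):
    """grey: 2-D list of grey levels; returns {level: pixel count}."""
    return Counter(p for row in grey for p in row)

h = grey_histogram([[0, 0, 255],
                    [0, 128, 255]])
# h[0] == 3, h[128] == 1, h[255] == 2
```

A peak rising in one channel, as with the red bottle in figure 23, is exactly a jump in one of these per-level counts.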
5.3.2 Object test evaluation
The object test satisfies our requirement for the object detection program to detect object boundaries and edges. From the tests carried out, we can see in figure 23 (the bottle test) that the full shape of the bottle was detected and its boundaries outlined. The colour histogram also reflects an increase in the R channel, showing that the bottle, which is red in colour, is being detected. Using the colour histogram, we can gauge the distance of the object from the camera: as the amplitudes of the colours increase and the colour distribution widens, the object is near the camera. This test result is very satisfactory for the project. From figure 24, it is evident that the surfaces and boundaries of the table and chairs were detected and outlined in the output display. In the binary image display, it is observed that the walls of the indoor environment were given a value of 1 and displayed as white, while the rest of the environment was given a value of 0 and displayed as black. The boundaries of the table and chairs were detected and displayed in the edges display, showing that the program performed its desired function. However, the program did not detect the shadow on the wall.

5.3.3 Lighting test
The lighting test is the next step in verifying the functionality of the object detection program. For this test, we simulated an indoor environment with two distinct lighting conditions, low lighting and bright lighting. This test simulates a typical office environment in which the UAV or micro aerial vehicle may be placed for surveillance and monitoring. The figure below depicts the output data of the program in a normal office environment.

Figure 25: Office environment

The last and final test carried out was in a brightly lit environment, simulating the outdoors. This test captures the output data of the object detection program as if it were coupled with a UAV or micro aerial vehicle with outdoor autonomous functionality. Outdoor functionality usually uses GPS triangulation; however, if the program is able to detect surfaces in the environment, it can be used as an obstacle avoidance system for the UAV or micro aerial vehicle. The figures below show the output data of the object detection program in an outdoor environment.

Figure 26: Outdoor environment
Figure 27: Outdoor environment 2

5.3.4 Lighting test evaluation
From the lighting test, the object detection program fulfils the requirements of our project: it is able to detect surfaces and the boundaries of objects in the image. The downside of the program, however, is that it is unable to fully detect the boundaries of objects at low illumination levels. From the output display in figure 25, it is observed that key structures of the environment were detected, including wall boundaries and picture frames on the walls. However, the boundary between the door and the wall was not detected because of the shadow cast on the door. This shows that in low-lighting environments the object detection is not robust enough to detect object boundaries: low-light objects were not detected, and their edges were not shown in the output display. From the results in figures 26 and 27 of the outdoor environment test, surfaces and object boundaries were detected; however, as in the indoor environment, objects at low light levels were not detected. The outdoor results show that this program can be used in an outdoor application as an obstacle avoidance system.
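The low-light weakness observed above is the situation section 2.2.2 flags for adaptive thresholding, where each pixel is compared against a local statistic rather than one global threshold. A minimal mean-based sketch in pure Python; the window radius and the offset `c` are illustrative values chosen here, not parameters from the report:

```python
# Adaptive (local-mean) thresholding, per section 2.2.2: a pixel becomes
# foreground (255) only if it is noticeably brighter than the mean of its
# local window. Radius and offset c are illustrative assumptions.

def adaptive_threshold(grey, radius=1, c=5):
    h, w = len(grey), len(grey[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [grey[j][i]
                      for j in range(max(0, y - radius), min(h, y + radius + 1))
                      for i in range(max(0, x - radius), min(w, x + radius + 1))]
            local_mean = sum(window) / len(window)
            out[y][x] = 255 if grey[y][x] > local_mean + c else 0
    return out

shaded = [[10, 10, 10],
          [10, 30, 10],
          [10, 10, 10]]
out = adaptive_threshold(shaded)  # only the bright centre pixel becomes 255
```

Because the threshold follows the local illumination, a shadowed door edge that defeats a single global threshold can still be separated from its surroundings.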
Chapter 6: Recommendations and Conclusion

6.1 Summary
After every test and evaluation that was carried out, it has been determined that the object detection program is able to detect object boundaries both in pre-recorded video and in live video captured by a webcam. The initial design of the program was able to detect object boundaries in a pre-recorded video taken with a digital camera. The second and final design was able to detect object boundaries in real time via a USB webcam attached to the computer. In summary, this project is a success, in that both designs of the program are able to detect object boundaries.

6.2 Overall conclusion
Overall, the object detection program satisfies the requirements of our project stated in section 4.1 of this report. The program is able to initialise images and videos, whether pre-recorded or real-time. It is then able to convert these images and videos into grey-scale and binary images. Finally, it is able to detect object surfaces and boundaries in the images and videos using image analysis techniques such as segmentation and the application of an edge detector.

6.3 Recommendations
From the data collected in our tests and evaluations, there are many areas in which this object detection program can be improved to make the project more successful.

Firstly, the object detection program does not have a graphical user interface (GUI) for the user to select which outputs to display. The program currently displays all outputs simultaneously when it starts, and for some users this information may not be critical. I would recommend future work on a GUI for easy operation of the program.

Another area that can be improved is the base algorithm of the code. From the results of our tests, the program is able to detect objects at sufficient light levels; however, if an object is poorly lit, it is not detected and edge operators cannot be applied to it. Further exploration of light analysis in video images should be done to improve the robustness of the program, enabling it to detect objects at different light levels appropriately. Object tracking can also be improved and would be useful for the program: with object tracking, the user can specify pre-defined objects for the UAV to track.

Lastly, actual implementation of the program on a UAV or micro aerial vehicle should be done. Our program is just one part of a Vision Controlled Motion (VCM) system, and to see its true potential it should be implemented and tested with a UAV. As our program was simulated on a computer, the output displays were video images; in an actual implementation on a UAV, the results will be different, as they will need to be analysed by the microprocessor controlling the UAV. Getting the microprocessor to analyse data from this program and getting the UAV to react to the results will be challenging.

Though we are now at the end of the project, testing of the program will still be carried out until the poster presentation day. Some improvements will be implemented and tested. If another student takes up this project, we hope that the improvements and suggestions presented in this report will be taken into consideration.
• 40. 7. Review and Reflection This project has been an interesting and exciting learning adventure for me. Everything from the initial stages of research to the actual implementation of image analysis concepts in the program was a challenge. At the beginning of the project, I spent countless hours researching image analysis techniques and concepts. Understanding how digital images are processed in various computer systems was very enriching. As I had no prior experience in digital image processing, I had to be clear on how digital images were processed and what kinds of operators were needed to fulfil the requirements of the project. Once that was done, I was faced with the daunting task of writing the program code and developing the program. As I am not strong in programming algorithms, this task proved to be a nightmare. I referred to tutorial books and attended additional lessons on MATLAB. I researched the full capability of MATLAB and discovered the different toolboxes and functions that are built into it. With that, I decided to take full advantage of Simulink, the MATLAB environment that uses block diagrams to create a program. As I am more of a visual person, programming in Simulink enabled me to develop the program for this project. When the program was done, analysing its output data was a challenge; fortunately, with all the research done on image analysis concepts, I was able to point out the limitations of the program. Throughout this entire project, time management was the toughest part. As a part-time student, juggling work and family commitments was very challenging. Time management was a crucial skill that I developed during the course of this project; prioritising important events in the project and having the self-discipline to accomplish these targets was a valuable lesson learnt. In the course of this project, many milestones were reached through the use of the Gantt chart.
Different goals were set to keep the project on track, and difficulties were resolved along the way. The problems faced strengthened
• 41. my analytical and problem-solving skills. One major problem-solving skill learnt during the course of this project was contingency planning. A major event during this period was a computer crash; I had to rectify the problem quickly and find alternative plans to continue with the project. This journey proved to be a very enriching experience for me, and despite the many challenges and obstacles along the way, I am very glad I took up this project. I had to pick up concepts of image analysis very quickly, as I had no prior knowledge of them. At every single stage of the project there was something new to learn, and every day I made new discoveries with regard to the project. Overall, this journey has been a programming adventure for me.
• 42. References:
• Segmentation and Boundary Detection Using Multiscale Intensity Measurements (Dept. of Computer Science and Applied Mathematics, The Weizmann Institute of Science, Rehovot 76100, Israel)
• Fundamentals of Image Processing (Ian T. Young, Jan J. Gerbrands, Lucas J. van Vliet)
• EdgeFlow: A Technique for Boundary Detection and Image Segmentation (Wei-Ying Ma and B. S. Manjunath)
• Computer Vision System Toolbox (www.Mathworks.com)
• Fundamentals of Image Processing I (David Holburn, University Engineering Department, Cambridge)
• Fundamentals of Image Processing II (David Holburn, University Engineering Department, Cambridge)
• Fundamentals of Image Analysis (David Holburn, University Engineering Department, Cambridge)
• Introduction to Video Analysis Using MATLAB (Deepak Malani, REC Calicut; Anant Malewar, IIT Bombay)
• 43. Appendix A – Gantt chart
• 44. Appendix B – Initial Design Simulink Code
• 45. Appendix C – Final Design Simulink Code