DUBLIN INSTITUTE OF TECHNOLOGY
School of Electrical Engineering Systems
Bachelor of Engineering Technology:
Electrical and Control Engineering (DT009)
Final Year Project, 2015
Student Name: Ben O’Brien
Student Number: D11128055
Project Title: Robotic Luggage
Robotic Luggage 2015
II
Declaration Statement:
I, the undersigned, declare that this report is entirely my own written work, except where
otherwise accredited, and has not been submitted for academic assessment to any other
university or institution.
Name: Ben O’Brien
Signature: ____________________ Date: _________________
Abstract
Suitcase technology has changed over the past 20 years: from suitcases that were
carried, to suitcases with a single pair of rollers, to suitcases that roll effortlessly on four
casters. What is next for the suitcase? The next evolution could be a robotic one. The problem
addressed in this work is how to make a suitcase correctly follow the path of its user using
electronics. This problem contains many smaller problems, which are discussed later on.
The objective of this project is to have a suitcase follow its owner's path and regulate its
distance by making use of electronic equipment, such as cameras and microcomputers, together with
programming skills.
This thesis examines a way of getting a suitcase to follow its owner by using a
Raspberry Pi computer, its camera module, the Python programming language, and
GPIO pins to control the motors which drive the suitcase.
Table of Contents
Chapter 1: Introduction
1.1 Introduction
Chapter 2: Literature Review
2.1 Literature Review
2.2 Block Diagram
Chapter 3: Design
3.1 Image Analysis
3.2 Motor Control
3.3 Distance Control
3.4 Chassis Design
3.5 The Final Product
Chapter 4: Testing
4.1 Testing
Chapter 5: Discussion & Evaluation
5.1 Discussion & Evaluation
Chapter 6: Conclusion
6.1 Conclusion
Bibliography
Appendix
Chassis Design Drawings
OpenCV Installation Commands
Scipy/Numpy/Matplotlib Installation Commands
Raspberry Pi Schematics
SN754410NE Data Sheet (First Page)
Sample Code
Chapter 1: Introduction
1.1 Introduction
Luggage retail is a very important market throughout the world. By 2015, the global
luggage market is forecast to generate about 31.62 billion U.S. dollars. The retail sales value of the
travel bag segment worldwide is expected to reach about 12.89 billion U.S. dollars in 2015, making it
the largest segment of the luggage market (Statista 2015a).
The timeline shown below in Figure 1.1, taken from the statistics portal Statista.com (Statista
2015b), depicts the compound annual growth rate of the luggage market's retail sales worldwide between
2010 and 2015, by region.
Figure 1.1: Compound Annual Growth of Luggage Market’s Retail Sales
The timeline shown below in Figure 1.2, taken from the statistics portal Statista.com (Statista
2015c), shows the retail sales value of the global luggage market in 2006 and 2010 and provides a forecast
for 2015, by region.
Figure 1.2: Retail Sales Value of the Global Luggage Market in 2006 and 2010 and Forecast
for 2015, by Region.
At the moment, nobody sells a robotic suitcase as part of their product line, although that
hasn't stopped individuals from trying to create one. The closest thing on the market is "smart luggage",
which has fancy features like Bluetooth tracking, digital locking and phone charging.
The problem faced in this work is how to make a suitcase correctly follow the path of a user using
electronics. This problem contains many smaller problems: taking and processing pictures quickly
enough to make this method of following effective; making sure the suitcase does not bump into the user
and stops when the user stops; deciding what to do if the suitcase cannot find its user; and differentiating
between users if multiple cases are in use.
The solution to this problem examined in this thesis is to use a Raspberry Pi to control the two D.C
motors and decide how the case behaves while following its owner, together with a self-designed chassis
and case mount to which the case is attached and pulled along.
The chassis is a "horse and carriage" type design. The wheels of the case are set into a
mount, holding it securely. The mount is then attached to the main chassis, which houses the motors and
other components. A pivot point between the case mount and the main chassis allows the two parts
to turn freely.
The brains of the operation is a Raspberry Pi: a low-cost, credit-card-sized computer that plugs into
a computer monitor or TV and uses a standard keyboard and mouse. This is where the coding is
done and where all commands come from. The Raspberry Pi also has a camera module, which was used
in this work.
The camera takes pictures of the path ahead of the case, looking for one specific thing: three
red L.E.D's located on the back pocket of the case's owner. As part of this project, a
Python program was developed which converts the image to black and white, leaving only what is
sought after, the three red L.E.D's. Every pixel matching the colour output of the three L.E.D's is
changed to white and every other pixel, which is not needed, is changed to black.
This black-and-white image is then used to find which direction the owner is in, by
calculating the average position of the white pixels (the three red L.E.D's) along the X-axis relative to the
centre of the image, which is known from the resolution of the camera (e.g. 640x480). From this
information, the two D.C motors are set either left, right or forward.
The aim of this project is to design and create, within a three-month time frame
(16/02/2015 – 15/05/2015), a robotic suitcase which decides what direction to go from the images it sees
and keeps a distance of one metre from its owner. Along the way, the goals are to learn about the world of
programming, in particular the language Python; to learn more about motor control; to engage in P.I
control; to get to know the Raspberry Pi and its abilities and functions; and to become a better electrical
engineer.
The rest of this work is laid out as follows:
Chapter 2: Literature Review details the general characteristics of various technologies and components
used in the project.
Chapter 3: Design describes the design of the various aspects of the proposed system.
Chapter 4: Testing shows recordings from multiple tests carried out on the system.
Chapter 5: Discussion & Evaluation talks about the problems encountered in this work and also some
future work opportunities.
Chapter 6: Conclusion gives the final comments on what parts of the system worked or did not work and
why.
Chapter 2: Literature Review
2.1 Literature Review
This section gives the reader a background of the methods used to complete the task.
2.1.1 Raspberry Pi
A Raspberry Pi (as seen in Figure 2.1) is a small computer manufactured in the United
Kingdom with a price tag of €25-€35. This particular model is the Model B Revision 2, which is powered
by a 5V micro-USB supply and can draw a current of 700mA-1000mA. It has a wide range of capabilities
and is produced with the intent of being used all over the world by children learning how to program. In
this particular case it was used as the brains of the robotic case. It has many features which appeal to this
particular work: it can be used to write and run the Python program, it has its own camera module, and it
has GPIO pins, labelled in Figure 2.2, replacing the need for a programmable chip (Raspberry Pi 2015a).
Figure 2.1: Raspberry Pi
Figure 2.2: GPIO Numbers
(RasPi.TV 2014)
2.1.2 Raspberry Pi Camera Module
The Raspberry Pi camera module, shown attached to the Raspberry Pi in Figure 2.3, is a
ribbon cable attachment which connects via the CSI port on the Raspberry Pi itself. It has a five megapixel
fixed-focus camera that supports 1080p30, 720p60 and VGA90 video modes, as well as stills capture
(Raspberry Pi 2015b).
Figure 2.3: Raspberry Pi Camera Module
2.1.3 SN754410NE Chip
The SN754410NE chip is a quadruple half-H driver manufactured by Texas Instruments
which has many uses, including driving motors at up to 1 A. It was used to receive signals from the
Raspberry Pi and set the motors accordingly. Below in Figure 2.4 is a diagram of the chip (Texas
Instruments 2015).
Figure 2.4: SN754410NE Chip Diagram
2.1.4 H-Bridge
An H-Bridge is an electronic circuit, made up of four transistors which act as switches, that
enables a voltage to be applied across a load in either direction. H-Bridges are often used in robotics to
allow D.C motors to run forwards and backwards. In this work they are used for exactly that reason. This
information was gathered from the book "Microcontroller Projects Using The Basic Stamp"
(Microcontroller Projects Using The Basic Stamp 2002). Below in Figure 2.5 is a schematic diagram of an
H-Bridge.
Figure 2.5: Schematic Diagram of H-Bridge
2.1.5 Python
Python is a free, high level, general purpose programming language with a vast library that
supports modules and packages that programmers love, not only for the reasons just stated, but because it
is so easily debugged (Python 2015).
2.1.6 OpenCV
OpenCV is a library of programming functions written in C/C++ which is designed for
computational efficiency. In this works case, it is downloaded then a particular library called cv2 which is
imported at the start of the program so its functions can be used for quick analysis of Numpy arrays. This
information was taken from the OpenCV website (OpenCV 2015).
2.1.7 Numpy
Numpy is a highly stable and fast array processing library which is usually used in scientific
computing. It is also downloaded and imported at the start of the program so its powerful N-array
functions can be used. This information was gathered from the Numpy website (Numpy 2015).
2.1.8 Thresholding
Thresholding, by definition, is changing a gray scale image to a black and white image. It
works by scanning through every pixel of the image and changing its value to white (255) or black (0)
based on whether the red green and blue values of the current pixel satisfy certain constraints. A
thresholding value is set, so that any pixel with a value higher will be changed to a 255, and any pixel with
a value lower will be set to 0. The resulting image is in black and white only like the example image in
Figure 2.6. This information was gathered from the book “Learning OpenCV” (Learning OpenCV 2008).
Figure 2.6: Threshold Image Example
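The per-pixel rule described above can be sketched in NumPy. This is a minimal illustration, not the project's actual code; the function name `threshold_image` and the limit values here are hypothetical:

```python
import numpy as np

def threshold_image(img, lower, upper):
    """Return a black-and-white image: pixels whose B, G and R values
    all fall inside [lower, upper] become 255, everything else 0."""
    lower = np.array(lower, dtype=np.uint8)
    upper = np.array(upper, dtype=np.uint8)
    mask = np.all((img >= lower) & (img <= upper), axis=2)
    return np.where(mask, 255, 0).astype(np.uint8)

# A tiny 1x3 BGR image: pure blue, near-white, pure red.
img = np.array([[[255, 0, 0], [250, 250, 255], [0, 0, 255]]], dtype=np.uint8)
bw = threshold_image(img, lower=(200, 200, 200), upper=(255, 255, 255))
# Only the near-white pixel survives: bw is [[0, 255, 0]].
```

Only the middle pixel satisfies all three channel constraints, so it alone becomes white.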
2.1.9 Inkscape
Inkscape is a free and open-source vector graphics editor that can create or edit vector
graphics such as illustrations, diagrams, line arts, charts, logos and even complex paintings (Inkscape
2015). In this work Inkscape was used to design the chassis, case mount and motor casings.
2.1.10 Pulse Width Modulation (PWM)
Pulse Width Modulation, or PWM, is a technique for getting analogue results by digital
means. Digital control is used to create a square wave, a signal switched between on and off (Arduino
2015).
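As a toy illustration of the duty-cycle idea (not code from the project), one period of a PWM signal can be modelled as a list of on/off samples whose average is the effective analogue level:

```python
def pwm_period(duty_cycle, samples=100):
    """One period of a PWM square wave as 1/0 samples.
    duty_cycle is the fraction of the period the output is high."""
    high = round(samples * duty_cycle)
    return [1] * high + [0] * (samples - high)

wave = pwm_period(0.75)          # 75% duty cycle
average = sum(wave) / len(wave)  # the "analogue" level the load sees: 0.75
```

A motor driven this way runs at roughly the average level rather than full speed.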
2.2 Block Diagram
Figure 2.7: Block Diagram of System
The block diagram, shown in Figure 2.7, gives a visual representation of the processes in
the order they happen. The first step is capturing the image as a PiRGBArray using the Raspberry Pi
camera module. This image is then analysed using OpenCV and Numpy for their fast array-analysing
abilities. Once the threshold has been applied and the L.E.D's have been found, it is on to the next step.
The decision logic looks at the average L.E.D position along the X-axis and also the
dispersion of the pixels, which is used to calculate the distance. From this information, the motor function
is chosen.
Each motor behaviour (e.g. forward or left) is defined as a function in the program. Once the
decision logic has chosen the appropriate one at any given time, the GPIO pins that correspond to each
individual motor are set. If the motors are set forward, each motor will have one pin set high and the
other low. This works through the SN754410NE H-Bridge chip, which was explained previously.
This process is repeated over and over to keep track of the target.
Chapter 3: Design
3.1 Image Analysis
Once an image is captured by the camera it must be altered and analysed to find the critical
data needed to allow the Python program to make decisions on which way to set the two D.C motors.
The first step is capturing the image as an array in RGB format using the PiRGBArray
import. This allows one to view the image as a Numpy array of BGR values for each individual pixel in
the image (in Numpy the values appear in the reverse order, BGR; for example, the colour blue in RGB is
(0,0,255), but in a Numpy array it appears in BGR format as (255,0,0)). The camera resolution is set to
640x480 to allow for quicker image processing while still giving adequate quality.
Now that the image has been taken and can be seen as an array of BGR values, an upper limit and a
lower limit for the three parts of each pixel (red, green and blue) are set, and the program thresholds the
image to black and white. By using cv2, the black-and-white image can be viewed. The positions of the
white pixels, which in theory should indicate where the red L.E.D's are, are found using the Numpy
function "np.max" and set into another array. The average of the array of white pixels is found using the
Numpy function "np.mean" to determine where exactly the L.E.D's are in relation to the X-axis of the
image.
The final step in image analysis is working out the dispersion of the pixels, to determine the
distance of the case from the target. The further away the target is, the closer together the white pixels
(L.E.D's) will be. To calculate this value the Numpy functions "np.min" and "np.max" were used to
find the white pixels closest to and furthest from the Y-axis respectively. The minimum value was then
subtracted from the maximum value to give the dispersion.
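The mean-position and dispersion steps can be sketched as follows. This is an illustrative reconstruction, not the project's code: it uses `np.nonzero` to collect the white-pixel coordinates, and `np.mean`, `np.min` and `np.max` on those coordinates as described above:

```python
import numpy as np

def analyse(bw):
    """From a black-and-white (0/255) image, return the average X
    position of the white pixels and their X-axis dispersion."""
    ys, xs = np.nonzero(bw)          # coordinates of all white pixels
    if xs.size == 0:
        return None, 0               # target not found in this frame
    mean_x = float(np.mean(xs))      # average L.E.D position on the X-axis
    dispersion = int(np.max(xs) - np.min(xs))
    return mean_x, dispersion

# Three "L.E.D" blobs at x = 310, 320 and 330 on one row of a 640x480 frame.
bw = np.zeros((480, 640), dtype=np.uint8)
bw[240, [310, 320, 330]] = 255
mean_x, dispersion = analyse(bw)     # mean_x = 320.0, dispersion = 20
```

With the blobs centred on x = 320, the mean lands exactly on the image centre, and the spread of 20 pixels is the dispersion used for distance estimation.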
3.2 Motor Control
The two D.C motors are controlled via an SN754410NE H-Bridge chip. After the Python
program has finished its decision logic, the GPIO pins of the Raspberry Pi send signals to the driver chip
which, in turn, sets the two D.C motors in whatever direction the target is deemed to be. Because of the
chassis design, when one motor is on and the other is not, the motor which is off acts as a pivot point and
the rear of the chassis is dragged around. An idea to solve this problem was to use Pulse Width
Modulation to run the motor which would normally be off at a certain percentage of its full speed, but
the Raspberry Pi only has one hardware PWM output, so this is not possible with the Raspberry Pi alone.
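The per-direction pin settings can be sketched as plain data. The pin numbers and function name here are hypothetical, not taken from the project; on the real hardware each boolean would be written to the H-Bridge driver inputs with RPi.GPIO's `GPIO.output`:

```python
# Hypothetical GPIO pin numbers for the two driver inputs of each motor.
LEFT_PINS = (17, 18)
RIGHT_PINS = (22, 23)

def motor_states(command):
    """Return the high/low state of each driver input pin for a command.
    One pin high and one low runs a motor forward; both low stops it."""
    left_fwd, right_fwd = {
        "forward": (True, True),
        "left":    (False, True),   # left motor off acts as the pivot point
        "right":   (True, False),
        "stop":    (False, False),
    }[command]
    return {
        LEFT_PINS[0]: left_fwd,   LEFT_PINS[1]: False,
        RIGHT_PINS[0]: right_fwd, RIGHT_PINS[1]: False,
    }

states = motor_states("left")   # left motor off, right motor forward
```

Keeping the direction table as data makes it easy to see that "left" turns the chassis by stopping the left motor, which is exactly the pivoting behaviour described above.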
3.3 Distance Control
For this work the goal is to keep the case one metre from the target, to avoid the case
bumping into its owner. This was done by using the dispersion of the L.E.D's to calculate the distance
from the target. Dispersion in this case is how far apart the L.E.D's are from one another in the image.
Testing was carried out, as seen in Chapter 4 of this thesis, to determine the dispersion of the white pixels
at set distances. Finding the dispersion value itself was described previously in the image analysis section
of Chapter 3.
A dispersion value of 11 was found when the target was placed exactly one metre from the chassis,
with the L.E.D average having an X co-ordinate of 320, which is the middle of the image as the image
shape is 640x480; the resulting image is shown below in Figure 3.1. This value was then added to the
Python program as a constraint on each of the functions (e.g. forward, left, right), which does not allow
the motors to drive directly forward while the dispersion value is above the limit.
Figure 3.1: Dispersion of Pixels at 1 Metre
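Putting the centre-of-image direction test and the one-metre dispersion limit together, the decision logic might look like the sketch below. The deadband width is a hypothetical value added for illustration, not a figure from the report:

```python
DISPERSION_AT_1M = 11   # measured dispersion at one metre (Chapter 4)
CENTRE_X = 320          # image is 640 pixels wide
DEADBAND = 40           # hypothetical tolerance before turning

def decide(mean_x, dispersion):
    """Pick a motor command from the L.E.D average X position and the
    pixel dispersion; never drive forward inside the one-metre limit."""
    if mean_x is None:
        return "stop"                      # target not found
    if mean_x < CENTRE_X - DEADBAND:
        return "left"
    if mean_x > CENTRE_X + DEADBAND:
        return "right"
    if dispersion >= DISPERSION_AT_1M:     # closer than one metre
        return "stop"
    return "forward"

decide(320, 9)   # target centred and beyond one metre -> "forward"
```

Note that the dispersion constraint only suppresses the forward command: a centred target that is too close stops the case, while an off-centre target still produces a turn.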
3.4 Chassis Design
The chassis design went through many different iterations at the beginning. As decisions
were made about how things would be done, and as the available parts were taken into consideration,
the final design was chosen.
The decision was made to use a "horse and carriage" style design, which includes a
mounting point for the case that can be swapped for different case designs. This makes it a more
universal product: an add-on to an existing case, rather than a brand-new case whose interior space is
compromised by interior electronics.
The drawings for the chassis were done in Inkscape, a program which allows the user to
create and edit vector graphics. Below are the drawings, which were cut from a sheet of 6mm-thick
plastic with an Epilog 40W laser cutter to create the finished product seen in Figure 3.3.
Figure 3.2: Inkscape Chassis Design
Figure 3.3: Finished Chassis Design
As seen in the picture above, the case mount is attached to the main chassis using a 1mm bolt and
raised up on spacers, which allow the mount to rotate without loosening or tightening the bolt.
The two D.C motors are held in place using casings cut from the same material as the chassis. They
are placed on the underside of the chassis and held in place with four M3x40mm screws.
The extra level on the chassis is for mounting the breadboard. This was done to save space and
keep the chassis size to a minimum. It is held in place using the same M3x40mm screws as the D.C
motors.
The camera casing for the Raspberry Pi camera module is fixed to the front of the chassis with
2mm screws to give full visibility, and a caster ball is fixed at the tail end of the chassis with 2mm screws
to allow full movement.
3.5 The Final Product
Figure 3.4: Finished Product (Side View)
Figure 3.5: Finished Product (Front View)
Chapter 4: Testing
4.1 Testing
4.1.1 Colour Testing
As mentioned in Chapter 3, the colour that needed to be found was thought to be red, so
upper and lower limits of red were set in the program to find the L.E.D's. When this program was run it
gave back results which did not match where the L.E.D's were. Upon closer inspection it was found that
what was actually being picked up as the L.E.D's was in fact the red cable from the variable power supply.
This was realised by using cv2, an interface of OpenCV, to open a preview window showing the
captured, thresholded image. From this it was clear that it was not the L.E.D's being picked up. The
same method was used to view the original image before thresholding took place, and from that it was
observed that the L.E.D's, which are red to the human eye, appear as bright white to the camera, as seen
below in Figure 4.1.
Figure 4.1: Camera Saturation
This white gets dimmer towards the edges. This happens because the camera's sensor is
becoming saturated, and altering the Raspberry Pi's camera brightness and saturation settings makes little
difference. So instead of searching for red pixels, the constraints were modified to a broader
spectrum, giving the results seen below. Figure 4.2 is the result of the upper red limit being set to white
(255,255,255). This found the L.E.D's but allowed much more interference from lights. Figure 4.3 is the
result of the upper limit being set a little lower, at (205,205,255) (remember these values are BGR, as
mentioned in Chapter 3), which reduces the amount of interference from lights.
Figure 4.2: Upper Limit at (255, 255, 255)
Figure 4.3: Upper Limit at (205, 205, 255)
This makes the distance control much more difficult: if one pixel which is not an L.E.D is detected
a few pixels away from the L.E.D's, it makes the dispersion value much higher, and the program
thinks the target is very close to the case and stops the two D.C motors from being set in the forward
direction.
It does not cause many problems for the direction of the case if only a few pixels which are not
L.E.D's are found, as the average value is taken, but if many pixels which are not L.E.D's are picked up
then this will cause problems.
4.1.2 Distance Testing
To work out the dispersion value of the pixels at different distances, the target was placed at a set
distance from the camera and a program was run which gave the average position of the three L.E.D's
in relation to the X-axis and the dispersion of the pixels. This program was run multiple times at each
set distance, while the average L.E.D position was 320 (the centre of the image), and the average
readings were recorded. The results are shown in the table (Figure 4.4) below, along with a graph
(Figure 4.5) of these results and an image (Figure 4.6) of the testing taking place at 0.75m.
Distance from LEDs (m)    Dispersion of LEDs (pixels)
0.25                      46
0.5                       21
0.75                      14
1                         11
1.25                      9
1.5                       7
1.75                      5
2                         3
Figure 4.4: Distance Testing Results
Figure 4.5: Dispersion of LEDs in Pixels vs. Distance from Target (graph; X-axis: distance from
target in metres, Y-axis: dispersion of pixels)
Figure 4.6: Distance Testing at 0.75 Metres
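The measurements in Figure 4.4 can be inverted to estimate distance from a dispersion reading, for example by linear interpolation over the table. This is an illustrative sketch, not the project's code; `estimate_distance` is a hypothetical helper:

```python
import numpy as np

# Measured table from Figure 4.4: dispersion (pixels) at set distances (m).
distances   = [0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0]
dispersions = [46, 21, 14, 11, 9, 7, 5, 3]

def estimate_distance(dispersion):
    """Estimate distance (m) from a dispersion reading by linear
    interpolation; np.interp needs ascending x, so the table is reversed."""
    return float(np.interp(dispersion, dispersions[::-1], distances[::-1]))

estimate_distance(11)   # -> 1.0, matching the one-metre calibration point
```

Readings that fall between table entries are interpolated, so a dispersion of 10 gives a distance between 1.0m and 1.25m.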
4.1.3 Angle Testing
A test of the average pixel position along the X-axis of a 640x480 image was carried out
at 20 degrees right and left of centre, at distances of 1 metre and 1.5 metres. The results are given below
in Figure 4.7.
        20 degrees left    0 degrees    20 degrees right
1m      43.55              320          596.45
1.5m    129.6              320          510.4
Figure 4.7: Angle Testing Results
The aim is to keep the distance from the target at 1 metre. From Figure 4.7 it can be seen that if
the target is 20 degrees to the right or left of centre, it will be near the edge of the image. This limits the
visibility of the camera. This matter is discussed further in the future work part of the discussion
chapter.
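For reference, a pixel position can be converted to an approximate bearing with a pinhole-camera model. The horizontal field of view assumed below is a typical published value for the Pi camera, not a measured one, so the results only roughly match the 20-degree readings above:

```python
import math

IMAGE_WIDTH = 640
ASSUMED_HFOV_DEG = 53.5   # typical Pi camera horizontal field of view

def bearing_degrees(x):
    """Approximate bearing of the target from its X pixel position,
    using a pinhole model (positive = right of image centre)."""
    # Focal length in pixels from the assumed field of view.
    f = (IMAGE_WIDTH / 2) / math.tan(math.radians(ASSUMED_HFOV_DEG / 2))
    return math.degrees(math.atan((x - IMAGE_WIDTH / 2) / f))

bearing_degrees(596.45)   # a positive bearing of roughly 20-25 degrees
```

The model is symmetric about the centre, so the left-of-centre reading 43.55 gives the same magnitude with the opposite sign, consistent with the measurements in Figure 4.7.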
4.1.4 Program Testing
Throughout this work, many different programs were used to try to gain optimal
results. The aim was to capture an image and get the location of the L.E.D's as quickly as possible, then
use this information to control the two D.C motors. In the beginning, the initial program was taking
upwards of 8 seconds to capture and analyse an image, without any motor control. This is far too long a
time frame: for example, if the target was straight ahead and the two D.C motors were set forward, and
the target then made a sharp turn left or right, the two D.C motors would not be set in the correct
direction for another 8 seconds. This means the case would either keep moving forward for those 8
seconds until told to do otherwise, or move forward for a designated amount of time, then stop and wait
until a new image had been captured to be set another way; by this time the target would be much further
away, making this time frame unworkable. One of the reasons behind this long process lay in the image
processing: the image was taken as an RGB file, using the video port of the camera at the full still-image
resolution of 2592x1944, and then converted into an HSV file, as HSV is better at colour rendering than
RGB. The resulting HSV image and the thresholded image were then saved to the computer, which takes
more time. It was a good starting point, but not quick enough.
These earlier, time-consuming methods gave a good idea of what not to do. In the
final version of the program the video port was not used, the resolution of the image was reduced to
640x480, and the image was saved not to the computer's drive but to an in-program stream, which saved
time. Also, the image was not converted to HSV; although HSV is a better method of colour rendering, it
takes too much time in a process which is time-critical. Instead, the RGB values were used in the
thresholding of the image to detect the three red L.E.D's. OpenCV was also made use of in this work, as
its image processing times are said to be superior to those of pure Python according to OpenCV-Python
Tutorials (OpenCV-Python Tutorials 2014).
The goal was to capture an image, analyse it, complete the decision logic and set the two D.C
motors in under 1 second. This was achieved by learning from each of the different programming
attempts.
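A per-frame budget like this can be checked with a simple timing harness. The sketch below is illustrative only: the NumPy thresholding stands in for the full capture-analyse step, and the frame is random data rather than a camera capture:

```python
import time
import numpy as np

def timed_frame(frame):
    """Threshold one frame and report the processing time, mimicking
    the under-one-second capture-analyse budget per cycle."""
    start = time.perf_counter()
    # Stand-in analysis step: keep near-white pixels only.
    bw = np.where(np.all(frame >= 200, axis=2), 255, 0)
    elapsed = time.perf_counter() - start
    return bw, elapsed

# A dummy 640x480 BGR frame in place of a real camera capture.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
bw, elapsed = timed_frame(frame)
```

Wrapping each stage in a timer like this is how the 8-second bottlenecks described above (full-resolution capture, HSV conversion, saving to disk) would show up as separate, measurable costs.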
Chapter 5: Discussion & Evaluation
5.1 Discussion & Evaluation
This section is broken down under two headings: problems, which lists some of the
problems encountered in this work, and future work, which lists things that could be done to
improve on it.
Problems
One of the first and most time-consuming problems encountered in this work
was the installation of the Scipy/Numpy/Matplotlib package. This download and installation was said to
take four hours or more, which proved an understatement: eight hours or more were spent trying to install
the package, with the end result that it did not install properly. Installation was attempted several times
with the same outcome. Another attempt was made after uninstalling all of the partially installed
files and using different commands, which finally downloaded and installed the package successfully.
The set of commands used in the successful installation is shown in the appendix.
The chassis design caused an unforeseen problem. The reason the wheels are mounted on the front
and not under the case itself is to take the load off the shafts of the two D.C motors, which connect into
each wheel, but doing this affected the turning ability. For example, if the chassis turns
left, with the right D.C motor set forward and the left D.C motor off, the left D.C motor
acts as a pivot point and the rear of the chassis is dragged around, probably carrying the whole
case to the side and causing unnecessary stress on the system.
Camera sensor saturation caused a big problem with the program: the threshold levels had to be set
to certain values so that the three red L.E.D's, which appeared as almost white, could be found. This
caused the threshold image to pick up unwanted pixels, such as room lights, throwing off the dispersion
value, which affects the distance control; and if there are many unwanted pixels, the direction can become
a problem too.
Finding every white pixel in the threshold image was also a problem. Initially the command
'maxLoc' was used to show the location of the highest-value pixel, but while there were many pixels of
equal value, this command only returns the first. One solution was to use 'maxLoc' to find the first
pixel, save its location, change that pixel to black using the command 'img[x,y]=[0]', and then
repeat the process to find all values. This seemed very long and drawn out, so after more research it was
found that Numpy had a command, 'np.max', which could be used to find every maximum-value pixel
in the whole image.
The material used for the chassis was plastic, which must be handled carefully, particularly when
bolting objects on and drilling. On occasion it would break if too much pressure was applied, and the
cutting process had to be done again.
In environments such as airports, which have a lot of glass and reflective floors, the image
processing becomes an even bigger problem, as reflections are also picked up by the program. The
program does not take into account that a reflection is not the target; it counts the extra pixels and uses
them in the distance and angle calculations.
Future Work
The following additional considerations and problems could be addressed in future work on
this topic.
To distinguish one case owner from another, each target could flash a unique binary code that only
its own case would recognise.
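One possible sketch of such a scheme, with a hypothetical 4-bit identity code, treats each captured frame as one on/off sample and looks for the case's code in the observed sequence (an illustration of the idea only, not a worked protocol from the report):

```python
def matches_code(frames, code):
    """Check whether a sequence of per-frame on/off observations
    contains this case's unique blink code as a consecutive run."""
    n = len(code)
    return any(frames[i:i + n] == code for i in range(len(frames) - n + 1))

# Hypothetical 4-bit identity code flashed by the owner's L.E.D's.
CODE = [1, 0, 1, 1]
matches_code([0, 1, 0, 1, 1, 0], CODE)   # True: the code appears mid-sequence
```

A second case with a different code would reject this sequence, which is the basis for telling two owners apart in the same room.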
A case-only design with no chassis would be another good step, as it would eliminate the turning
problem and would be an all-in-one device with all components hidden inside. This kind of idea is
shown in the first design in the appendix.
Another piece of future work would be a different camera which could be adjusted to have better
recognition of the L.E.D's without becoming saturated.
The problem with using PWM was that the Raspberry Pi only has one hardware PWM output, but
this function could be passed to a slave device such as an MSP430, a microcontroller produced by Texas
Instruments, which is capable of generating PWM for both of the D.C motors.
Introducing such a slave device would also free up the Raspberry Pi for other options. If the
Raspberry Pi dealt only with the image processing and the slave device handled the decision logic and
motor control, the Raspberry Pi would be free to do things like GPS tracking, an interference alarm and
remote locking, if programmed correctly.
To solve the problem of reflective surfaces, the three red L.E.D's could be arranged to form a
scalene triangle (MathsIsFun 2014), a triangle in which no sides and no angles are equal. An example is
shown below in Figure 5.1.
Figure 5.1: Scalene Triangle
Robotic Luggage 2015
27
The side lengths in Figure 5.1 are in the ratio 14:13:9, which is the same as 1.556:1.444:1. If a mirror image of the L.E.D.s appears underneath the real ones, the program could differentiate between the two by finding the white pixel that sits highest in the image, searching for the next white pixels below it and checking whether their side ratio matches the one given. If it does, the original target has been found and the reflection can be discarded.
The same method could also be used to reject “noise”, i.e. any other unwanted pixels in the image. Every pixel could be evaluated on its relation to the other pixels, and only the ones that fit the ratio criterion would be accepted as the three red L.E.D.s.
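A sketch of that ratio test, assuming the L.E.D. centres have already been extracted as (x, y) pixel coordinates; the 5 % tolerance is an assumption chosen for illustration:

```python
import math
from itertools import combinations

TARGET_RATIO = (1.556, 1.444, 1.0)   # 14:13:9 scaled so the shortest side is 1

def side_ratio(p1, p2, p3):
    """Side lengths of triangle p1-p2-p3, longest first, scaled to the shortest."""
    sides = sorted((math.dist(a, b) for a, b in combinations((p1, p2, p3), 2)),
                   reverse=True)
    return tuple(s / sides[-1] for s in sides)

def is_target(p1, p2, p3, tol=0.05):
    """True if the three points approximately form the 14:13:9 scalene triangle."""
    return all(abs(r - t) <= tol
               for r, t in zip(side_ratio(p1, p2, p3), TARGET_RATIO))

leds = [(0, 0), (14, 0), (10.143, 8.132)]   # pixel centres with sides ~14:13:9
print(is_target(*leds))                      # True
```

Because the ratio is scale-invariant, the test keeps working as the owner moves nearer or farther from the camera; only a mirror image (which preserves the ratio) still needs the highest-pixel check described above.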
Robotic Luggage 2015
28
Chapter 6: Conclusion
6.1 Conclusion
This work set out to solve the problem of how to get a suitcase to correctly follow the path of a user using electronics. The main tasks were to capture an image and analyse it in one second or less, to decide from this information in which direction the target lies and how far away it is, and finally to set the motors accordingly.
The image capture and analysis were completed using the Raspberry Pi camera module along with the Numpy package and thresholding in the Python program. The one flaw in this part of the program is the colour rendering: the three red L.E.D.s saturate the camera's sensor and appear almost white in the picture. Because the upper limits for the BGR values were therefore set to near white, the threshold could, depending on the environment, pick up unwanted parts of the image, e.g. room lighting. This plays havoc with the dispersion of the pixels, making the distance control believe the target is closer than it actually is, which can prevent the two D.C. motors from being set forward.
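The flaw is easy to reproduce numerically. Using the same BGR limits as the sample code in the appendix (lower [6, 6, 255], upper [205, 205, 255]), a saturated L.E.D. pixel passes the test, but so does a bright highlight from room lighting whose red channel is also clipped at 255; the pixel values below are invented for illustration:

```python
import numpy as np

lower = np.array([6, 6, 255])      # B, G, R lower bounds
upper = np.array([205, 205, 255])  # B, G, R upper bounds

def in_range(pixel):
    """The same per-pixel test cv2.inRange applies: lower <= pixel <= upper."""
    p = np.asarray(pixel)
    return bool(np.all((lower <= p) & (p <= upper)))

led_pixel = [180, 170, 255]    # saturated red L.E.D., rendered near-white
lamp_pixel = [200, 190, 255]   # ceiling-light highlight, also red-clipped
dark_pixel = [30, 30, 120]     # unlit background

print(in_range(led_pixel))     # True
print(in_range(lamp_pixel))    # True: the flaw described above
print(in_range(dark_pixel))    # False
```

Any such stray white pixel widens the measured pixel spread and so corrupts the distance estimate.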
The decision logic was completed in a Python program, but because of the sporadic unwanted pixels left by the flaw in the image analysis, the decision made is always correct with respect to the information received, yet not always with respect to where the target actually is.
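The decision step itself is simple enough to restate as a small pure-Python function. This is a sketch: the 280/360 pixel thresholds and the spread limit of 10 are taken from the sample code in the appendix, and a spread at or above the limit is treated as "close enough", stopping the case.

```python
def decide(mean_x, spread, left=280, right=360, max_spread=10):
    """Map the average L.E.D. column and pixel spread to a motor command.

    mean_x is the average x position of the white pixels in a 640-wide
    frame; spread is the distance in pixels between the leftmost and
    rightmost white columns (large spread = target close)."""
    if spread >= max_spread:
        return "stop"
    if mean_x < left:
        return "turn_left"
    if mean_x > right:
        return "turn_right"
    return "forward"

print(decide(150, 4))   # target left of centre
print(decide(320, 4))   # target centred and far away
print(decide(320, 40))  # target close: stop
```

The point made above still holds: with clean inputs this mapping is always correct, so the residual errors come from the pixels fed into it, not from the logic.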
The motor control was completed using a combination of the Python program, the GPIO pins on the Raspberry Pi and an SN754410NE H-bridge chip. The motor functions themselves were set successfully, but again the flaw in the image analysis, carried through into the decision logic, affected whether they were set in the correct direction at all.
Bibliography
(Statista 2015a) http://www.statista.com/topics/1320/luggage-market-worldwide/ last accessed on 13/5/2015
(Statista 2015b) http://www.statista.com/statistics/252842/growth-rate-of-the-luggage-market-worldwide-by-region/ last accessed on 13/5/2015
(Statista 2015c) http://www.statista.com/statistics/252836/retail-sales-value-of-the-global-luggage-market-by-region/ last accessed on 13/5/2015
(OpenCV-Python Tutorials 2014) http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_setup/py_intro/py_intro.html#intro last accessed on 14/5/2015
(RaspberryPi 2015a) https://www.raspberrypi.org/help/what-is-a-raspberry-pi/ last accessed on 14/5/2015
(Adrian Kaehler, Gary Rost Bradski 2008) “Learning OpenCV”, O'Reilly Media, 24 September 2008
(Inkscape 2015) https://inkscape.org/en/ last accessed on 15/5/2015
(OpenCV 2015) http://opencv.org/ last accessed on 14/5/2015
(Numpy 2015) http://www.numpy.org/ last accessed on 14/5/2015
(RasPi.TV 2014) http://raspi.tv/2014/rpi-gpio-quick-reference-updated-for-raspberry-pi-b last accessed on 14/5/2015
(RaspberryPi 2015b) https://www.raspberrypi.org/products/camera-module/ last accessed on 14/5/2015
(Texas Instruments 2015) http://www.ti.com/lit/ds/symlink/sn754410.pdf last accessed on 14/5/2015
(Al Williams 2002) “Microcontroller Projects Using The Basic Stamp”, CRC Press, 2002
(Python 2015) https://www.python.org/doc/essays/blurb/ last accessed on 15/5/2015
(Arduino 2015) http://www.arduino.cc/en/Tutorial/PWM last accessed on 15/5/2015
(MathsIsFun 2014) http://www.mathsisfun.com/definitions/scalene-triangle.html last accessed on 15/5/2015
Appendix
Chassis Design Drawings
Figure A.1
Figure A.2
Figure A.3
Figure A.4
Figure A.5
Figure A.6
OpenCV Installation Commands
Make sure Raspbian is up to date:
sudo apt-get update
sudo apt-get upgrade
Install dependencies
First do this:
sudo apt-get -y install build-essential cmake cmake-curses-gui pkg-config libpng12-0 libpng12-dev libpng++-dev libpng3 libpnglite-dev zlib1g-dbg zlib1g zlib1g-dev pngtools libtiff4-dev libtiff4 libtiffxx0c2 libtiff-tools libeigen3-dev
Then do
sudo apt-get -y install libjpeg8 libjpeg8-dev libjpeg8-dbg libjpeg-progs ffmpeg libavcodec-dev libavcodec53 libavformat53 libavformat-dev libgstreamer0.10-0-dbg libgstreamer0.10-0 libgstreamer0.10-dev libxine1-ffmpeg libxine-dev libxine1-bin libunicap2 libunicap2-dev swig libv4l-0 libv4l-dev python-numpy libpython2.6 python-dev python2.6-dev libgtk2.0-dev
You don’t need the lib1394 libraries, as there is no FireWire on the Raspberry Pi, but something in this list will pull them in anyway (sigh). The install is done in two stages because there is a possibility of broken-package errors if the install order is wrong. It may be fine, but why risk having to fix it?
Install OpenCV
Download OpenCV from http://opencv.org/downloads.html
e.g.
wget -O opencv-2.4.8.zip http://sourceforge.net/projects/opencvlibrary/files/opencv-unix/2.4.8/opencv-2.4.8.zip/download
Unzip and prepare for build
unzip opencv-2.4.8.zip
cd opencv-2.4.8
mkdir release
cd release
ccmake ../
Configuring
press ‘c’ to configure
Once done toggle the options you want.
press ‘c’ again to configure with your new settings
press ‘g’ to generate the Makefile
And finally, build. This will take a long time (about 10 hours!).
make
sudo make install
Scipy/Numpy/Matplotlib Installation Commands
$ sudo apt-get install libblas-dev          ## 1-2 minutes
$ sudo apt-get install liblapack-dev        ## 1-2 minutes
[$ sudo apt-get install python-dev          ## Optional]
[$ sudo apt-get install libatlas-base-dev   ## Optional, speeds up execution]
$ sudo apt-get install gfortran             ## 2-3 minutes
$ sudo apt-get install python-setuptools    ## ?
$ sudo easy_install scipy                   ## 2-3 hours
$ ## the python-scipy package might also work, without building all the dependencies
$ sudo apt-get install python-matplotlib    ## 1 hour
Raspberry Pi Schematics
Figure A.7
SN754410NE Data Sheet (First Page)

SN754410 Quadruple Half-H Driver
SLRS007C – NOVEMBER 1986 – REVISED JANUARY 2015

1 Features
• 1-A output-current capability per driver
• Applications include half-H and full-H solenoid drivers and motor drivers
• Designed for positive-supply applications
• Wide supply-voltage range of 4.5 V to 36 V
• TTL- and CMOS-compatible high-impedance diode-clamped inputs
• Separate input-logic supply
• Thermal shutdown
• Internal ESD protection
• Input hysteresis improves noise immunity
• 3-state outputs
• Minimized power dissipation
• Sink/source interlock circuitry prevents simultaneous conduction
• No output glitch during power up or power down
• Improved functional replacement for the SGS L293

2 Applications
• Stepper motor drivers
• DC motor drivers
• Latching relay drivers

3 Description
The SN754410 is a quadruple high-current half-H driver designed to provide bidirectional drive currents up to 1 A at voltages from 4.5 V to 36 V. The device is designed to drive inductive loads such as relays, solenoids, DC and bipolar stepping motors, as well as other high-current/high-voltage loads in positive-supply applications.
All inputs are compatible with TTL- and low-level CMOS logic. Each output (Y) is a complete totem-pole driver with a Darlington transistor sink and a pseudo-Darlington source. Drivers are enabled in pairs, with drivers 1 and 2 enabled by 1,2EN and drivers 3 and 4 enabled by 3,4EN. When an enable input is high, the associated drivers are enabled and their outputs become active and in phase with their inputs. When the enable input is low, those drivers are disabled and their outputs are off and in a high-impedance state. With the proper data inputs, each pair of drivers forms a full-H (or bridge) reversible drive suitable for solenoid or motor applications.
A separate supply voltage (VCC1) is provided for the logic input circuits to minimize device power dissipation. Supply voltage VCC2 is used for the output circuits.
The SN754410 is designed for operation from −40°C to 85°C.

Device Information(1)
PART NUMBER   PACKAGE (PIN)   BODY SIZE (NOM)
SN754410      PDIP (16)       19.80 mm × 6.35 mm
(1) For all available packages, see the orderable addendum at the end of the datasheet.

4 Simplified Schematic
Sample Code
import cv2
import numpy as np
import time
from picamera.array import PiRGBArray
from picamera import PiCamera
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)
GPIO.setup(7, GPIO.OUT)
GPIO.setup(11, GPIO.OUT)
GPIO.setup(13, GPIO.OUT)
GPIO.setup(15, GPIO.OUT)

def forward():
    GPIO.output(7, True)
    GPIO.output(11, False)
    GPIO.output(13, True)
    GPIO.output(15, False)

def reverse():
    GPIO.output(7, False)
    GPIO.output(11, True)
    GPIO.output(13, False)
    GPIO.output(15, True)

def turn_left():
    GPIO.output(7, False)
    GPIO.output(11, False)
    GPIO.output(13, True)
    GPIO.output(15, False)

def turn_right():
    GPIO.output(7, True)
    GPIO.output(11, False)
    GPIO.output(13, False)
    GPIO.output(15, False)

def stop():
    GPIO.output(7, False)
    GPIO.output(11, False)
    GPIO.output(13, False)
    GPIO.output(15, False)

camera = PiCamera()
camera.resolution = (640, 480)
cv2.namedWindow("preview")              # Debug only; won't be needed

try:
    while True:
        rawCapture = PiRGBArray(camera)
        camera.capture(rawCapture, format="bgr")
        frame = rawCapture.array

        # Threshold: keep pixels whose red channel is saturated and whose
        # blue/green channels are below the near-white limits
        red_lower = np.array([6, 6, 255], np.uint8)
        red_upper = np.array([205, 205, 255], np.uint8)
        red_binary = cv2.inRange(frame, red_lower, red_upper)
        cv2.imshow("preview", red_binary)   # Debug only; won't be needed

        # Collapse the binary image onto the x-axis to find the lit columns
        rbmax = np.max(red_binary, axis=0)
        led_cols = rbmax.nonzero()[0]
        if led_cols.size == 0:
            stop()                          # No target in view
            cv2.waitKey(1)
            continue

        led_mean = np.mean(led_cols)        # Average x position of the L.E.D.s
        distance = np.max(led_cols) - np.min(led_cols)  # Pixel spread
        print(led_mean, distance)           # Debug only; won't be needed

        if (led_mean < 280) and (distance < 10):
            print("turning left")
            turn_left()
        elif (led_mean > 360) and (distance < 10):
            print("turning right")
            turn_right()
        elif (280 <= led_mean <= 360) and (distance < 10):
            print("going forward")
            forward()
        else:
            stop()                          # Close enough: pixels widely spread
        time.sleep(1)
        cv2.waitKey(1)
finally:
    stop()
    GPIO.cleanup()                          # Release the pins once, on exit

More Related Content

Viewers also liked

Boubaddara Youssef: Animer un groupe
Boubaddara Youssef: Animer un groupeBoubaddara Youssef: Animer un groupe
Boubaddara Youssef: Animer un groupeYoussef Boubaddara
 
La Gestion de Configuration des logiciels et du Système d’Information Cours J...
La Gestion de Configuration des logiciels et du Système d’Information Cours J...La Gestion de Configuration des logiciels et du Système d’Information Cours J...
La Gestion de Configuration des logiciels et du Système d’Information Cours J...Jean-Antoine Moreau
 
Cea mai-puternica-rugaciune-de-pe-pamant
Cea mai-puternica-rugaciune-de-pe-pamantCea mai-puternica-rugaciune-de-pe-pamant
Cea mai-puternica-rugaciune-de-pe-pamantNastase Ecaterina
 
neville goddard-puterea-constiintei
 neville goddard-puterea-constiintei neville goddard-puterea-constiintei
neville goddard-puterea-constiinteiNastase Ecaterina
 
Facebook Adverts Manager: Set up and add users
Facebook Adverts Manager: Set up and add usersFacebook Adverts Manager: Set up and add users
Facebook Adverts Manager: Set up and add usersFollowSunday
 

Viewers also liked (6)

Boubaddara Youssef: Animer un groupe
Boubaddara Youssef: Animer un groupeBoubaddara Youssef: Animer un groupe
Boubaddara Youssef: Animer un groupe
 
La Gestion de Configuration des logiciels et du Système d’Information Cours J...
La Gestion de Configuration des logiciels et du Système d’Information Cours J...La Gestion de Configuration des logiciels et du Système d’Information Cours J...
La Gestion de Configuration des logiciels et du Système d’Information Cours J...
 
Cea mai-puternica-rugaciune-de-pe-pamant
Cea mai-puternica-rugaciune-de-pe-pamantCea mai-puternica-rugaciune-de-pe-pamant
Cea mai-puternica-rugaciune-de-pe-pamant
 
neville goddard-puterea-constiintei
 neville goddard-puterea-constiintei neville goddard-puterea-constiintei
neville goddard-puterea-constiintei
 
Facebook Adverts Manager: Set up and add users
Facebook Adverts Manager: Set up and add usersFacebook Adverts Manager: Set up and add users
Facebook Adverts Manager: Set up and add users
 
Trauma Informed Care
Trauma Informed CareTrauma Informed Care
Trauma Informed Care
 

Similar to Thesis Ben O'Brien D11128055

MAKING OF LINE FOLLOWER ROBOT
MAKING OF LINE FOLLOWER ROBOTMAKING OF LINE FOLLOWER ROBOT
MAKING OF LINE FOLLOWER ROBOTPRABHAHARAN429
 
TFG_Oriol_Torta.pdf
TFG_Oriol_Torta.pdfTFG_Oriol_Torta.pdf
TFG_Oriol_Torta.pdfhoussemouni2
 
IRJET- Lane Segmentation for Self-Driving Cars using Image Processing
IRJET-  	  Lane Segmentation for Self-Driving Cars using Image ProcessingIRJET-  	  Lane Segmentation for Self-Driving Cars using Image Processing
IRJET- Lane Segmentation for Self-Driving Cars using Image ProcessingIRJET Journal
 
IRJET- Pick and Place Robot for Color based Sorting
IRJET-  	  Pick and Place Robot for Color based SortingIRJET-  	  Pick and Place Robot for Color based Sorting
IRJET- Pick and Place Robot for Color based SortingIRJET Journal
 
ECET 3640 Group 2 Project Report
ECET 3640 Group 2 Project ReportECET 3640 Group 2 Project Report
ECET 3640 Group 2 Project ReportLogan Isler
 
Automated Fire Extinguisher Robot .pdf
Automated Fire Extinguisher Robot .pdfAutomated Fire Extinguisher Robot .pdf
Automated Fire Extinguisher Robot .pdfSakritapannChakma
 
Machine Vision On Embedded Platform -Report
Machine Vision On Embedded Platform -ReportMachine Vision On Embedded Platform -Report
Machine Vision On Embedded Platform -ReportOmkar Rane
 
Building a-line-following-robot
Building a-line-following-robotBuilding a-line-following-robot
Building a-line-following-robotFahmy Akbar Aparat
 
Building a-line-following-robot
Building a-line-following-robotBuilding a-line-following-robot
Building a-line-following-robotgolapkantidey
 
IRJET- New Generation Multilevel based Atm Security System
IRJET- New Generation Multilevel based Atm Security SystemIRJET- New Generation Multilevel based Atm Security System
IRJET- New Generation Multilevel based Atm Security SystemIRJET Journal
 
Law cost portable machine vision system
Law cost portable machine vision systemLaw cost portable machine vision system
Law cost portable machine vision systemSagarika Muthukumarana
 
Partial Object Detection in Inclined Weather Conditions
Partial Object Detection in Inclined Weather ConditionsPartial Object Detection in Inclined Weather Conditions
Partial Object Detection in Inclined Weather ConditionsIRJET Journal
 
Classroom Attendance using Face Detection and Raspberry-Pi
Classroom Attendance using Face Detection and Raspberry-PiClassroom Attendance using Face Detection and Raspberry-Pi
Classroom Attendance using Face Detection and Raspberry-PiIRJET Journal
 
Final_Report_-_Smart_Door_Lock_PDF.pdf
Final_Report_-_Smart_Door_Lock_PDF.pdfFinal_Report_-_Smart_Door_Lock_PDF.pdf
Final_Report_-_Smart_Door_Lock_PDF.pdfAimanAnuar6
 
IRJET- Review on Colored Object Sorting System using Arduino UNO
IRJET- Review on Colored Object Sorting System using Arduino UNOIRJET- Review on Colored Object Sorting System using Arduino UNO
IRJET- Review on Colored Object Sorting System using Arduino UNOIRJET Journal
 
Mis 589 Success Begins / snaptutorial.com
Mis 589  Success Begins / snaptutorial.comMis 589  Success Begins / snaptutorial.com
Mis 589 Success Begins / snaptutorial.comWilliamsTaylor44
 
Mis 589 Massive Success / snaptutorial.com
Mis 589 Massive Success / snaptutorial.comMis 589 Massive Success / snaptutorial.com
Mis 589 Massive Success / snaptutorial.comStephenson185
 

Similar to Thesis Ben O'Brien D11128055 (20)

MAKING OF LINE FOLLOWER ROBOT
MAKING OF LINE FOLLOWER ROBOTMAKING OF LINE FOLLOWER ROBOT
MAKING OF LINE FOLLOWER ROBOT
 
TFG_Oriol_Torta.pdf
TFG_Oriol_Torta.pdfTFG_Oriol_Torta.pdf
TFG_Oriol_Torta.pdf
 
IRJET- Lane Segmentation for Self-Driving Cars using Image Processing
IRJET-  	  Lane Segmentation for Self-Driving Cars using Image ProcessingIRJET-  	  Lane Segmentation for Self-Driving Cars using Image Processing
IRJET- Lane Segmentation for Self-Driving Cars using Image Processing
 
IRJET- Pick and Place Robot for Color based Sorting
IRJET-  	  Pick and Place Robot for Color based SortingIRJET-  	  Pick and Place Robot for Color based Sorting
IRJET- Pick and Place Robot for Color based Sorting
 
ECET 3640 Group 2 Project Report
ECET 3640 Group 2 Project ReportECET 3640 Group 2 Project Report
ECET 3640 Group 2 Project Report
 
Automated Fire Extinguisher Robot .pdf
Automated Fire Extinguisher Robot .pdfAutomated Fire Extinguisher Robot .pdf
Automated Fire Extinguisher Robot .pdf
 
Computer graphics by bahadar sher
Computer graphics by bahadar sherComputer graphics by bahadar sher
Computer graphics by bahadar sher
 
MAJOR PROJECT
MAJOR PROJECT MAJOR PROJECT
MAJOR PROJECT
 
Machine Vision On Embedded Platform -Report
Machine Vision On Embedded Platform -ReportMachine Vision On Embedded Platform -Report
Machine Vision On Embedded Platform -Report
 
Color Tracking Robot
Color Tracking RobotColor Tracking Robot
Color Tracking Robot
 
Building a-line-following-robot
Building a-line-following-robotBuilding a-line-following-robot
Building a-line-following-robot
 
Building a-line-following-robot
Building a-line-following-robotBuilding a-line-following-robot
Building a-line-following-robot
 
IRJET- New Generation Multilevel based Atm Security System
IRJET- New Generation Multilevel based Atm Security SystemIRJET- New Generation Multilevel based Atm Security System
IRJET- New Generation Multilevel based Atm Security System
 
Law cost portable machine vision system
Law cost portable machine vision systemLaw cost portable machine vision system
Law cost portable machine vision system
 
Partial Object Detection in Inclined Weather Conditions
Partial Object Detection in Inclined Weather ConditionsPartial Object Detection in Inclined Weather Conditions
Partial Object Detection in Inclined Weather Conditions
 
Classroom Attendance using Face Detection and Raspberry-Pi
Classroom Attendance using Face Detection and Raspberry-PiClassroom Attendance using Face Detection and Raspberry-Pi
Classroom Attendance using Face Detection and Raspberry-Pi
 
Final_Report_-_Smart_Door_Lock_PDF.pdf
Final_Report_-_Smart_Door_Lock_PDF.pdfFinal_Report_-_Smart_Door_Lock_PDF.pdf
Final_Report_-_Smart_Door_Lock_PDF.pdf
 
IRJET- Review on Colored Object Sorting System using Arduino UNO
IRJET- Review on Colored Object Sorting System using Arduino UNOIRJET- Review on Colored Object Sorting System using Arduino UNO
IRJET- Review on Colored Object Sorting System using Arduino UNO
 
Mis 589 Success Begins / snaptutorial.com
Mis 589  Success Begins / snaptutorial.comMis 589  Success Begins / snaptutorial.com
Mis 589 Success Begins / snaptutorial.com
 
Mis 589 Massive Success / snaptutorial.com
Mis 589 Massive Success / snaptutorial.comMis 589 Massive Success / snaptutorial.com
Mis 589 Massive Success / snaptutorial.com
 

Thesis Ben O'Brien D11128055

  • 1. DUBLIN INSTITUTE OF TECHNOLOGY School of Electrical Engineering Systems Bachelor of Engineering Technology: Electrical and Control Engineering (DT009) Final Year Project, 2015 Student Name: Ben O’Brien Student Number: D11128055 Project Title: Robotic Luggage
  • 2. Robotic Luggage 2015 II Declaration Statement: I the undersigned declare, that this report is entirely my own written work, except where otherwise accredited, and has not been submitted for academic assessment to any other university or institution. Name: Ben O’Brien Signature: ____________________ Date: _________________
  • 3. Robotic Luggage 2015 III Abstract Suitcase technology has changed over the past 20 years from suitcases that were carried, to suitcases that had a single pair of rollers, to suitcases that roll effortlessly on four casters. What is next for the suitcase? The next evolution could be a robotic one .The problem addressed in this work is how to get a suitcase to correctly follow the path of a user using electronics? This problem contains many more problems which will be mentioned later on. The objective of this project is to have a suitcase follow your path and regulate its distance by making use of electronic equipment such as cameras and micro computers, and programming skills. This thesis examines a way of getting a suitcase to follow its owner by using a Raspberry Pi computer, along with its camera module, Python programming abilities and GPIO Pins to control the motors which will drive the suitcase.
  • 4. Robotic Luggage 2015 IV Table of Contents Chapter 1: Introduction..............................................................................................................2 1.1 Introduction......................................................................................................................2 Chapter 2: Literature Review.....................................................................................................5 2.1 Literature Review.............................................................................................................5 2.2 Block Diagram...............................................................................................................10 Chapter 3: Design ....................................................................................................................11 3.1 Image Analysis...............................................................................................................11 3.2 Motor Control ................................................................................................................11 3.3 Distance Control ............................................................................................................11 3.4 Chassis Design...............................................................................................................13 3.5 The Final Product...........................................................................................................16 Chapter 4: Testing....................................................................................................................18 4.1 Testing............................................................................................................................18 Chapter 5: Discussion & Evaluation........................................................................................25 5.1 Discussion & 
Evaluation................................................................................................25 Chapter 6: Conclusion..............................................................................................................28 6.1 Conclusion .....................................................................................................................28 Bibliography ............................................................................................................................29 Appendix..................................................................................................................................30 Chassis Design Drawings ....................................................................................................30 OpenCV Installation Commands.........................................................................................36 Scipy/Numpy/Matplotlib Installation Commands ...............................................................37 Raspberry Pi Schematics......................................................................................................37 SN75441ONE Data Sheet (First Page)................................................................................38 Sample Code........................................................................................................................40
  • 6. Robotic Luggage 2015 2 Chapter 1: Introduction 1.1 Introduction Luggage retail is a very important market throughout the world. By 2015, the global luggage market is forecasted to generate about 31.62 billion U.S. dollars. The retail sales value of the travel bag segment worldwide is expected to generate about 12.89 billion U.S. dollars in 2015, making it the largest segment of the luggage market (Statista 2015a). The timeline shown below in Figure 1.1 was taken from Statistics portal Statista.com (Statista 2015b) depicts the compound annual growth rate of the luggage market's retail sales worldwide between 2010 and 2015, by region. Figure 1.1: Compound Annual Growth of Luggage Market’s Retail Sales
  • 7. Robotic Luggage 2015 3 This timeline shown below in Figure 1.2 was taken from Statistics portal Statista.com (Statista 2015c) shows the retail sales value of the global luggage market in 2006 and 2010 and provides a forecast for 2015, by region. Figure 1.2: Retail Sales Value of the Global Luggage Market in 2006 and 2010 and Forecast for 2015, by Region. At the moment, nobody is selling a robotic suitcase as part of their line of products, although that hasn’t stopped individuals from trying to create them. The closest thing on the market is “smart luggage” which has many fancy features like Bluetooth tracking, digital locking and phone charging. The problem faced in this work is how to get a suitcase to correctly follow the path of a user using electronics? This problem contains many more problems such as taking and processing pictures quick enough to make this method of following effective, making sure the suitcase doesn’t bump into the user and stops when the user stops, what to do if the suitcase can’t find its user, how to differentiate between users if multiple cases are being used, etc… The solution to this problem examined in this thesis is to use a Raspberry Pi to control the two D.C motors and to decide how the case will behave in following its owner, and a self-designed chassis and case mount to attach the case to and pull it along. The chassis is a “horse and carriage” type design. The wheels of the case would be set into a mount, holding it securely. The mount is then attached to the main chassis which houses the motors and other components. A pivot point between the mount for the case and the main chassis allows the two parts to turn freely.
  • 8. Robotic Luggage 2015 4 A Raspberry Pi, is a low cost, credit-card sized computer that plugs into a computer monitor or TV, and uses a standard keyboard and mouse, as the brains of the operation. This is where the coding is done and all commands will be coming from. The Raspberry Pi also has a camera module which was used in this work. The camera would take pictures of the path ahead of the case looking for one specific thing three red L.E.D’s which are located on the owner of the case’s back pocket. As part of the work of this project, a Python program was developed which gives us the same image in black and white leaving only what is sought after, the three red L.E.D’s. Every pixel containing the colour output of the three L.E.D’s will be changed to white and every other pixel, which is not needed, will be changed to black. This new image of black and white will then be used to find out in which direction the owner is by calculating the average position of the white pixels (three red L.E.D’s) along the x-axis in regard to the centre of the image, which is known from the resolution of the camera (e.g. 640x480). From this information, the two D.C motors are set either left, right or forward. The aim of this project is to design and create a robotic suitcase within a three month time frame (16/02/2015 – 15/05/2015) which will decide what direction to go from images it sees and be able to keep a distance of one meter from its owner, and from this learn about the world of programming, in particular the programming language Python, learn more about motor control, engage in P.I control, get to know the Raspberry Pi and its abilities and functions and become a better electrical engineer. The rest of this work is laid out as follows: Chapter 2: Literature Review details the general characteristics of various technologies and components used in the project. Chapter 3: Design describes the design of the various aspects of the proposed system. 
Chapter 4: Testing shows recordings from multiple tests carried out on the system. Chapter 5: Discussion & Evaluation talks about the problems encountered in this work and also some future work opportunities. Chapter 6: Conclusion gives the final comments on what parts of the system worked or did not work and why.
  • 9. Robotic Luggage 2015 5 Chapter 2: Literature Review 2.1 Literature Review This section gives the reader a background of the methods used to complete the task 2.1.1 Raspberry Pi A Raspberry Pi (as seen in Figure 2.1) is a small computer manufactured in the United Kingdom with a price tag of €25-€35.This particular model is the B Revision 2 which is powered by a 5V micro-USB and can draw a current of 700mA-1000mA. It has a wide range of capabilities and is produced with the intent of it being used all over the world by kids to learn how to program. In this particular case it was used as the brains of the robotic case. It has many features which appeal to this particular work as it can be used to write and run the Python program, it has its own camera module and GPIO pins which are labelled in Figure 2.2, replacing the need of a programmable chip (Raspberry Pi 2015a). Figure 2.1: Raspberry Pi
  • 10. Robotic Luggage 2015 6 Figure 2.2: GPIO Numbers (RasPi.TV 2014) 2.1.2 Raspberry Pi Camera Module The Raspberry Pi camera module, shown attached to the Raspberry Pi in Figure 2.3, is a ribbon cable attachment which connects via the CSI port on the Raspberry Pi itself. It has a five megapixel fixed-focus camera that supports 1080p30, 720p60 and VGA90 video modes, as well as stills capture (Raspberry Pi 2015b).
  • 11. Robotic Luggage 2015 7 Figure 2.3: Raspberry Pi Camera Module 2.1.3 SN75441ONE Chip The SN75441ONE chip is a quadruple half-h driver manufactured by Texas Instruments which has many uses which includes driving motors up to 1 Amp. It was used to receive signals from the Raspberry Pi and set the motors accordingly. Below in Figure 2.4 is a diagram of the chip (Texas Instruments 2015). Figure 2.4: SN75441ONE Chip Diagram
  • 12. Robotic Luggage 2015 8 2.1.4 H-Bridge An H-Bridge is an electronic circuit, made up of four transistors which act as switches, that enables a voltage to be applied across a load in either direction. H-Bridges are often used in robotics to allow D.C motors to run forwards and backwards. In this work they are used for exactly that reason. This information was gathered from the book “`Microcontroller Projects Using The Basic Stamp” (Microcontroller Projects Using The Basic Stamp 2002). Below in Figure 2.5 is a schematic diagram of an H-Bridge. Figure 2.5: Schematic Diagram of H-Bridge 2.1.5 Python Python is a free, high level, general purpose programming language with a vast library that supports modules and packages that programmers love, not only for the reasons just stated, but because it is so easily debugged (Python 2015). 2.1.6 OpenCV OpenCV is a library of programming functions written in C/C++ which is designed for computational efficiency. In this works case, it is downloaded then a particular library called cv2 which is imported at the start of the program so its functions can be used for quick analysis of Numpy arrays. This information was taken from the OpenCV website (OpenCV 2015).
2.1.7 Numpy

Numpy is a highly stable and fast array-processing library, usually used in scientific computing. It is also installed and imported at the start of the program so that its powerful N-dimensional array functions can be used. This information was gathered from the Numpy website (Numpy 2015).

2.1.8 Thresholding

Thresholding, by definition, is changing a greyscale image to a black and white image. It works by scanning through every pixel of the image and changing its value to white (255) or black (0) based on whether the red, green and blue values of the current pixel satisfy certain constraints. A threshold value is set so that any pixel with a higher value is changed to 255, and any pixel with a lower value is set to 0. The resulting image is in black and white only, like the example image in Figure 2.6. This information was gathered from the book "Learning OpenCV" (Adrian Kaehler, Gary Rost Bradski 2008).

Figure 2.6: Threshold Image Example

2.1.9 Inkscape

Inkscape is a free and open-source vector graphics editor that can create or edit vector graphics such as illustrations, diagrams, line art, charts, logos and even complex paintings (Inkscape 2015). In this work Inkscape was used to design the chassis, case mount and motor casings.

2.1.10 Pulse Width Modulation (PWM)

Pulse Width Modulation, or PWM, is a technique for getting analogue results with digital means. Digital control is used to create a square wave, a signal switched between on and off (Arduino 2015).
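The thresholding operation described in section 2.1.8 can be sketched in a few lines of Numpy. This is an illustrative sketch only: the 4x4 "image" and the threshold value of 128 are made up for the example, not taken from the project code.

```python
import numpy as np

# Every pixel at or above the threshold becomes white (255); every
# other pixel becomes black (0). The tiny image is invented for
# illustration; a real frame would be 640x480.
image = np.array([[ 12, 200, 255,  40],
                  [  0, 180, 230,  90],
                  [ 10,  20, 240, 250],
                  [  5,   8,  30, 220]], dtype=np.uint8)

threshold = 128
binary = np.where(image >= threshold, 255, 0).astype(np.uint8)

print(binary)
```

The same effect on a colour image is what cv2.inRange produces later in this work, with a separate limit for each of the blue, green and red channels.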
2.2 Block Diagram

Figure 2.7: Block Diagram of System

The block diagram, shown in Figure 2.7, gives a visual representation of the processes in the order they happen. The first step is capturing the image as a PiRGBArray using the Raspberry Pi camera module. This image is then analysed using OpenCV and Numpy for their fast array-analysing abilities. Once the threshold has been applied and the LEDs have been found, it is on to the next step. The decision logic looks at the average LED position with regard to the x-axis, and also at the dispersion of the pixels, which is used to calculate the distance. From this information the motor function is chosen. Each motor action is defined as a function in the program (e.g. forward or left); once the decision logic chooses the appropriate one for the current frame, the GPIO pins that correspond to each individual motor are set. If the motors are set forward, each motor will have one pin set high and the other low. This works through the SN754410NE H-bridge chip, which was explained previously. This process is repeated over and over to keep track of the target.
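The decision-logic step in the block diagram can be sketched as a single function. The centre window (280 to 360 pixels) and the dispersion limit come from the sample code in the appendix; the function and action names here are illustrative only.

```python
# A hedged sketch of the decision-logic step: pick a motor action from
# the average LED x-position and the pixel dispersion (larger dispersion
# means the target is closer).
def choose_action(mean_x, dispersion, x_min=280, x_max=360, disp_limit=10):
    """Choose a motor function name from the LED position and dispersion."""
    if dispersion >= disp_limit:   # target at or inside the distance limit
        return "stop"
    if mean_x < x_min:             # LEDs left of the centre window
        return "turn_left"
    if mean_x > x_max:             # LEDs right of the centre window
        return "turn_right"
    return "forward"               # centred and far enough away

print(choose_action(320, 5))   # centred and far away -> "forward"
```

In the real program each returned name corresponds to a function that drives the GPIO pins through the H-bridge chip.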
Chapter 3: Design

3.1 Image Analysis

Once an image is captured by the camera it must be altered and analysed to find the critical data that allows the Python program to decide which way to set the two DC motors. The first step is capturing the image as an array in RGB format using the PiRGBArray import. This allows one to view the image as a Numpy array of BGR values for each individual pixel (in Numpy the colour values appear in the reverse order, BGR; for example, the colour blue in RGB is (0,0,255), but in a Numpy array blue appears in BGR format as (255,0,0)). The camera resolution is set to 640x480 to allow for quicker image processing while still giving adequate quality. Now that the image has been taken and can be seen as an array of BGR values, upper and lower limits for the three parts of each pixel (red, green and blue) are set and the program thresholds the image to black and white. By using cv2, the black and white image can be viewed. The positions of the white pixels, which in theory should indicate where the red LEDs are, are found using the Numpy function "np.max" and set into another array. The average of this array of white pixels is found using the Numpy function "np.mean" to determine where exactly the LEDs are in relation to the x-axis of the image. The final step in image analysis is working out the dispersion of the pixels to determine the distance of the case from the target: the further away the target is, the closer together the white pixels (LEDs) will be. To calculate this value the Numpy functions "np.min" and "np.max" were used to find the white pixels closest to and furthest from the y-axis respectively. The minimum value was then subtracted from the maximum value to give the dispersion.

3.2 Motor Control

The two DC motors are controlled via a SN754410NE H-bridge chip.
After the Python program has finished its decision logic, the GPIO pins of the Raspberry Pi send signals to the driver chip which, in turn, sets the two DC motors in whatever direction the target is deemed to be. Because of the chassis design, when one motor is on and the other is not, the motor which is off acts as a pivot point and the rear of the chassis is dragged around. One idea to solve this problem was to use pulse width modulation to run the motor which would normally be off at a certain percentage of its full speed, but the Raspberry Pi has only one hardware PWM output, so this is not possible with the Raspberry Pi alone.

3.3 Distance Control

For this work the goal is to keep the case one metre from the target, to prevent the case bumping into its owner. This was done by using the dispersion of the LEDs to calculate the distance from the target. Dispersion in this case means how far apart the LEDs are from one another in the image. Testing was carried out, as seen in Chapter 4 of this thesis, to determine the dispersion of the white pixels
at set distances. Finding the dispersion value itself was described previously in the image analysis section of Chapter 3. A dispersion value of 11 was found when the target was placed exactly one metre from the chassis with the LED average at an x co-ordinate of 320, which is the middle of the image since the image is 640x480; the resulting image is shown below in Figure 3.1. This value was then added to the Python program as a constraint on each of the motor functions (e.g. forward, left, right), so that the motors could not be set directly forward while the dispersion value was above this limit.

Figure 3.1: Dispersion of Pixels at 1 Metre
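The mean-position and dispersion calculation described in sections 3.1 and 3.3 can be sketched with Numpy. The tiny binary image below is invented for illustration (a real frame would be 640x480); the calculation itself follows the np.max / nonzero / np.min / np.max steps described above.

```python
import numpy as np

# Build a small thresholded (binary) image with three "LED" pixels.
binary = np.zeros((4, 16), dtype=np.uint8)
binary[2, 5] = 255    # leftmost "LED" at column 5
binary[1, 9] = 255    # middle "LED" at column 9
binary[2, 13] = 255   # rightmost "LED" at column 13

# Collapse the image onto the x-axis, then find the white columns.
columns = np.max(binary, axis=0).nonzero()[0]

mean_x = np.mean(columns)                        # average LED x-position
dispersion = np.max(columns) - np.min(columns)   # rightmost minus leftmost

print(mean_x, dispersion)   # 9.0 8
```

In the full program, mean_x drives the left/right decision and dispersion drives the distance constraint.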
3.4 Chassis Design

The chassis design went through many different iterations. As time went on, more decisions were made about how things would be done and which parts were available, and the final design was chosen. The decision was made to use a "horse and carriage" style design, which includes a mounting point for the case that can be swapped over for different case designs. This makes it a more universal product: an add-on to an existing case rather than a brand new case with interior space sacrificed to interior electronics. The drawings for the chassis were done in Inkscape, a program for creating and editing vector graphics. Below are the drawings, which were cut from a sheet of 6 mm thick plastic with an Epilog 40 W laser cutter to create the finished product seen in Figure 3.3.

Figure 3.2: Inkscape Chassis Design
Figure 3.3: Finished Chassis Design
As seen in the picture above, the case mount is attached to the main chassis using a 1mm bolt and raised up on spacers, which allow the mount to rotate without loosening or tightening the bolt. The two DC motors are held in place using casings cut from the same material as the chassis. They are placed on the underside of the chassis and held in place with four M3x40mm screws. The extra level on the chassis is for mounting the breadboard; this was done to save space and keep the chassis size to a minimum, and it is held in place using the same M3x40mm screws as the DC motors. The casing for the Raspberry Pi camera module is fixed to the front of the chassis with 2mm screws to give full visibility, and a caster ball is fixed at the tail end of the chassis with 2mm screws to allow full movement.
3.5 The Final Product

Figure 3.4: Finished Product (Side View)

Figure 3.5: Finished Product (Front View)
Chapter 4: Testing

4.1 Testing

4.1.1 Colour Testing

As mentioned in Chapter 3, the colour to be found was thought to be red, so upper and lower limits of red were set in the program to find the LEDs. When this program was run it gave back results which did not match where the LEDs were. Upon closer inspection it was found that what was actually being picked up as the LEDs was in fact the red cable from the variable power supply. This was realised by using cv2, the Python interface of OpenCV: a preview window was opened which showed the captured, thresholded image, and from this it was clear that it was not the LEDs being picked up. The same method was used to view the original image before thresholding took place, and from that it was observed that the LEDs, which are red to the human eye, appear as bright white to the camera, as seen below in Figure 4.1.

Figure 4.1: Camera Saturation

This white gets dimmer closer to the edge. This happens because the camera's sensor is becoming saturated. Altering the Raspberry Pi camera's brightness and saturation settings makes little
difference to this. So instead of searching for red pixels, the constraints were modified to a broader spectrum, giving the results seen below. Figure 4.2 is the result of the upper limit being set to white (255,255,255). This found the LEDs but allowed much more interference from lights. Figure 4.3 is the result of the upper limit being set a little lower, at (205,205,255) (remember these values are in BGR order, as mentioned in Chapter 3), which reduces the amount of interference from lights.

Figure 4.2: Upper Limit at (255, 255, 255)
Figure 4.3: Upper Limit at (205, 205, 255)

This makes the distance control much more difficult: if even one pixel which is not an LED is detected some way from the LEDs, it makes the dispersion value much higher, and the program will think that the target is very close and stop the two DC motors from being set in the forward direction.
It does not cause many problems for the direction of the case if only a few non-LED pixels are found, since the average value is taken, but if a lot of non-LED pixels are picked up then this will cause problems.

4.1.2 Distance Testing

To work out the dispersion value of the pixels at different distances, the target was placed at a set distance from the camera and a program was run which gave the average position of the three LEDs in relation to the x-axis and the dispersion of the pixels. This program was run multiple times at each set distance, while the average LED position was 320 (the centre of the image), and the average readings were recorded. The results are shown in the table (Figure 4.4) below, along with a graph (Figure 4.5) of these results and an image (Figure 4.6) of the testing taking place at 0.75 m.

Distance from LEDs (m)    Dispersion of LEDs (pixels)
0.25                      46
0.5                       21
0.75                      14
1                         11
1.25                      9
1.5                       7
1.75                      5
2                         3

Figure 4.4: Distance Testing Results

Figure 4.5: Dispersion of LEDs in Pixels vs. Distance from Target
Figure 4.6: Distance Testing at 0.75 Metres
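The calibration table in Figure 4.4 can be turned into a distance estimate by linear interpolation between the measured points. The table values below are taken from Figure 4.4; the helper function itself is an illustrative sketch, not part of the project code.

```python
# (dispersion in pixels, distance in metres), from Figure 4.4.
CALIBRATION = [(46, 0.25), (21, 0.5), (14, 0.75), (11, 1.0),
               (9, 1.25), (7, 1.5), (5, 1.75), (3, 2.0)]

def estimate_distance(dispersion):
    """Interpolate distance (m) from a measured pixel dispersion."""
    pts = sorted(CALIBRATION)          # ascending dispersion
    if dispersion <= pts[0][0]:
        return pts[0][1]               # at or beyond 2 m: clamp
    if dispersion >= pts[-1][0]:
        return pts[-1][1]              # at or inside 0.25 m: clamp
    for (d0, m0), (d1, m1) in zip(pts, pts[1:]):
        if d0 <= dispersion <= d1:
            return m0 + (m1 - m0) * (dispersion - d0) / (d1 - d0)

print(estimate_distance(11))   # 1.0, matching the one-metre set point
```

Note the product of distance and dispersion is roughly constant over the middle of the table, so dispersion falls off approximately as the inverse of distance, as expected for a target of fixed physical size.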
4.1.3 Angle Testing

A test of pixel values along the x-axis of an image with resolution 640x480 was carried out at 20 degrees right and left of centre, at distances of 1 metre and 1.5 metres. The results are given below in Figure 4.7.

          20 degrees left    0 degrees    20 degrees right
1 m       43.55              320          596.45
1.5 m     129.6              320          510.4

Figure 4.7: Angle Testing Results

The distance from the target is intended to be kept at 1 metre. From Figure 4.7 it can be seen that if the target is 20 degrees to the right or left of centre, it will be near the edge of the image. This limits the visibility of the camera. This matter will be discussed further in the future work part of the discussion chapter.

4.1.4 Program Testing

Throughout this whole work, many different programs were used to try to gain optimal results. The aim was to capture an image and get the location of the LEDs as quickly as possible, then use this information to control the two DC motors. In the beginning stages the initial program was taking upwards of 8 seconds to capture and analyse an image, without any motor control. This is too long a time frame: for example, if the target was straight ahead and the two DC motors were set forward, and the target then made a sharp turn left or right, the two DC motors would not be set in the correct direction for another 8 seconds. This means either the case would keep moving forward for those 8 seconds until told to do otherwise, or it would move forward for a set amount of time, then stop and wait until the new image had been captured before being set another way; by that time the target would be a much greater distance away, making this time frame unworkable. One of the reasons behind this long process was in the image processing: the image was taken as an RGB file, using the video port of the camera at the full still-image resolution of 2592 x 1944, and then converted into a HSV file.
This was done because HSV is better at colour rendering than RGB. The resulting HSV image and the thresholded image were then saved to the computer, which takes more time. It was a good starting point but not quick enough; these earlier time-consuming methods gave a good idea of what not to do. In the final version of the program the video port was not used, the resolution of the image was reduced to 640x480 and the image was not saved to the computer's drive but to an in-program stream, which saved time. The image was also not converted to HSV: although HSV is a better method of colour rendering, the conversion takes too much time in a process which is time critical. Instead the RGB values were used in the thresholding of the image to detect the three red LEDs. OpenCV was also made use of in this work, as its image-processing times are said to be superior to those of plain Python according to the OpenCV-Python Tutorials (OpenCV-Python Tutorials 2014).
The goal was to capture an image, analyse it, complete the decision logic and set the two DC motors in under 1 second. This was achieved through successive programming attempts, learning from each one.
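The sub-second loop-time goal described above can be verified by timestamping each pass of the capture-analyse-decide cycle. The sketch below uses a stand-in for the real capture-and-analysis step, since the actual code needs the Raspberry Pi's camera; the 10 ms sleep is an assumption for illustration.

```python
import time

# Stand-in for the real capture, thresholding and decision-logic steps.
def process_frame():
    time.sleep(0.01)   # pretend the whole cycle takes 10 ms

start = time.perf_counter()
process_frame()
elapsed = time.perf_counter() - start

print("cycle time: %.3f s" % elapsed)
assert elapsed < 1.0   # the goal set in this chapter
```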
Chapter 5: Discussion & Evaluation

5.1 Discussion & Evaluation

This section is broken down under two headings: problems, which lists some of the problems encountered in this work, and future work, which lists things that could be done to improve on it.

Problems

One of the first and most time-consuming problems encountered in this work was the installation of the Scipy/Numpy/Matplotlib package. This download and installation was said to take four hours or more, which proved an understatement: eight hours or more were spent trying to install the package, with the end result that it did not install properly. Installation was attempted several times with the same outcome. Another attempt was made after uninstalling all of the partially installed files and using different commands, which finally downloaded and installed the package successfully. The set of commands used in the successful installation is shown in the appendix.

The chassis design caused an unforeseen problem. The reason the wheels are mounted on the front, and not under the case itself, is to take the load off the shafts of the two DC motors which connect into each wheel, but doing this has affected the turning ability. For example, if the chassis is to turn left, with the right DC motor set forward and the left DC motor off, the left DC motor acts as a pivot point and the rear of the chassis is dragged around, probably carrying the whole case to the side and causing unnecessary stress on the system.

Camera sensor saturation caused a big problem with the program: the threshold levels had to be set to particular values so that the three red LEDs, which appeared as almost white, could be found.
This caused the thresholded image to pick up unwanted pixels, such as room lights, throwing off the dispersion value, which affects the distance control; if there are a lot of unwanted pixels the direction can become a problem too.

Finding every white pixel in the thresholded image was also a problem. Initially the command 'maxLoc' was used to show the location of the highest-value pixel, but of the many pixels of equal value this command returns only the first. One solution was to use 'maxLoc' to find the first pixel, save its location, change the value of that pixel to black using 'img[x,y]=[0]', and repeat the process until all values were found. This seemed very long and drawn out, so more research was done and it was found that Numpy's 'np.max' command (together with 'nonzero') could be used to locate every maximum-value pixel in the whole image at once.

The material used for the chassis was plastic, which must be handled carefully, particularly when bolting objects on and drilling. It would, on occasion, break if too much pressure was applied, and the cutting process would then have to be done again.
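The maxLoc limitation described above, and the Numpy alternative, can be sketched directly: one Numpy call locates every maximum-value pixel, with no find / blank-out / repeat loop. The small binary image below is invented for illustration.

```python
import numpy as np

# A tiny thresholded image with three white (255) pixels.
binary = np.zeros((3, 8), dtype=np.uint8)
binary[0, 2] = 255
binary[1, 6] = 255
binary[2, 4] = 255

# np.nonzero returns the coordinates of EVERY maximum-value pixel at
# once, unlike cv2.minMaxLoc, which reports only a single location.
ys, xs = np.nonzero(binary == binary.max())

print(sorted(zip(ys.tolist(), xs.tolist())))   # all three white pixels
```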
In environments such as airports, which have a lot of glass and reflective floors, the image processing becomes an even bigger problem, as reflections are also picked up by the program. The program does not recognise that these are reflections; it counts the extra pixels and uses them in the distance and angle calculations.

Future Work

The following additional considerations and problems could be addressed in future work on this topic.

To distinguish one case owner from another, each target could flash a unique binary code that only its own case would recognise.

A case-only design with no chassis would be another good step, as it would eliminate the turning problem and give an all-in-one device with all components hidden inside. This kind of idea is shown in the first design in the appendix.

Another improvement would be a different camera which could be adjusted to give better recognition of the LEDs without becoming saturated.

The problem with using PWM was that the Raspberry Pi has only one hardware PWM output, but this function could be passed to a slave device such as an MSP430, a microcontroller produced by Texas Instruments that is capable of providing PWM for both DC motors. Introducing such a slave device would also free up the Raspberry Pi: if the Raspberry Pi dealt with only the image processing while the slave device handled the decision logic and motor control, the Raspberry Pi would be free to do things like GPS tracking, an interference alarm and remote locking, if programmed correctly.

To solve the problem of reflective surfaces, the three red LEDs could be placed so as to create a scalene triangle (MathsIsFun 2014), a triangle in which no sides are equal and no angles are equal. An example is shown below in Figure 5.1.

Figure 5.1: Scalene Triangle
The ratio of the side lengths in Figure 5.1, 14:13:9, is the same as 1.556:1.444:1. So if a mirror image of the LEDs appears underneath, the program could differentiate between the two by working out which white pixel has the highest y-value, searching for the next white pixels below it and checking whether the ratios fit the one given. If they do, the original target has been found and the reflection can be discarded. The same method could also be used to reject any "noise" or unwanted pixels in the image: all pixels could be evaluated on their relation to other pixels, and the ones that fit the ratio criteria will be the three red LEDs.
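The scalene-triangle check proposed above can be sketched as follows: three candidate LED positions are accepted only if their side-length ratios match the designed 14:13:9 pattern within a tolerance. The coordinates, tolerance and function names are illustrative assumptions, not part of the project code.

```python
import math

TARGET_RATIO = (14 / 9, 13 / 9, 1.0)   # longest : middle : shortest

def side_ratios(p1, p2, p3):
    """Side lengths of the triangle p1-p2-p3, normalised to the shortest."""
    sides = sorted((math.dist(p1, p2), math.dist(p2, p3), math.dist(p1, p3)),
                   reverse=True)
    return tuple(s / sides[-1] for s in sides)

def is_target(p1, p2, p3, tol=0.05):
    """True if the three points form (approximately) the 14:13:9 triangle."""
    return all(abs(r - t) <= tol
               for r, t in zip(side_ratios(p1, p2, p3), TARGET_RATIO))

# A triangle with sides 14, 13 and 9 is accepted; an isosceles one is not.
print(is_target((0, 0), (14, 0), (10.143, 8.132)))   # True
print(is_target((0, 0), (10, 0), (5, 5)))            # False
```

Because ratios are scale-invariant, the same check works at any distance from the camera, which is what makes it suitable for rejecting reflections and noise.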
Chapter 6: Conclusion

6.1 Conclusion

This work set out to give a solution to the problem: how can a suitcase be made to correctly follow the path of a user using electronics? The main tasks were to capture an image and analyse it in one second or less, then from this information decide which direction the target is in and how far away it is, and finally to set the motors accordingly.

The image capture and analysis was completed using the Raspberry Pi camera module along with the Numpy package and thresholding in the Python program. The one flaw in this part of the program is the colour rendering. The fact that the three red LEDs saturate the camera's sensor and appear almost white in the picture caused a problem: when setting the limits for the BGR values, the upper limit had to be set near white which, depending on the environment, could pick up unwanted parts of the image, e.g. room lighting. This can inflate the dispersion of the pixels, causing the distance control to think the target is closer than it actually is, which could prevent the two DC motors from being set forward.

The decision logic was completed in a Python program, but due to the sporadic nature of the unwanted pixels caused by the flaw in the image analysis, the decision made is always right for the information received, but not always right in relation to where the target actually is.

The motor control was completed using a combination of the Python program, the GPIO pins on the Raspberry Pi and a SN754410NE H-bridge chip. The motor functions were set successfully, but again the flaw in the image analysis, carried through to the decision logic, affected whether they were set in the correct direction at all.
Bibliography

(Statista 2015a) http://www.statista.com/topics/1320/luggage-market-worldwide/ last accessed 13/5/2015
(Statista 2015b) http://www.statista.com/statistics/252842/growth-rate-of-the-luggage-market-worldwide-by-region/ last accessed 13/5/2015
(Statista 2015c) http://www.statista.com/statistics/252836/retail-sales-value-of-the-global-luggage-market-by-region/ last accessed 13/5/2015
(OpenCV-Python Tutorials 2014) http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_setup/py_intro/py_intro.html#intro last accessed 14/5/2015
(RaspberryPi 2015a) https://www.raspberrypi.org/help/what-is-a-raspberry-pi/ last accessed 14/5/2015
(Adrian Kaehler, Gary Rost Bradski 2008) "Learning OpenCV", O'Reilly Media, 24 Sep 2008
(Inkscape 2015) https://inkscape.org/en/ last accessed 15/5/2015
(OpenCV 2015) http://opencv.org/ last accessed 14/5/2015
(Numpy 2015) http://www.numpy.org/ last accessed 14/5/2015
(RasPi.TV 2014) http://raspi.tv/2014/rpi-gpio-quick-reference-updated-for-raspberry-pi-b last accessed 14/5/2015
(RaspberryPi 2015b) https://www.raspberrypi.org/products/camera-module/ last accessed 14/5/2015
(Texas Instruments 2015) http://www.ti.com/lit/ds/symlink/sn754410.pdf last accessed 14/5/2015
(Al Williams 2002) "Microcontroller Projects Using The Basic Stamp", CRC Press, 2002
(Python 2015) https://www.python.org/doc/essays/blurb/ last accessed 15/5/2015
(Arduino 2015) http://www.arduino.cc/en/Tutorial/PWM last accessed 15/5/2015
(MathsIsFun 2014) http://www.mathsisfun.com/definitions/scalene-triangle.html last accessed 15/5/2015
Appendix

Chassis Design Drawings

Figure A.1
OpenCV Installation Commands

Make sure Raspbian is up to date:

sudo apt-get update
sudo apt-get upgrade

Install dependencies. First do this:

sudo apt-get -y install build-essential cmake cmake-curses-gui pkg-config libpng12-0 libpng12-dev libpng++-dev libpng3 libpnglite-dev zlib1g-dbg zlib1g zlib1g-dev pngtools libtiff4-dev libtiff4 libtiffxx0c2 libtiff-tools libeigen3-dev

Then do:

sudo apt-get -y install libjpeg8 libjpeg8-dev libjpeg8-dbg libjpeg-progs ffmpeg libavcodec-dev libavcodec53 libavformat53 libavformat-dev libgstreamer0.10-0-dbg libgstreamer0.10-0 libgstreamer0.10-dev libxine1-ffmpeg libxine-dev libxine1-bin libunicap2 libunicap2-dev swig libv4l-0 libv4l-dev python-numpy libpython2.6 python-dev python2.6-dev libgtk2.0-dev

You don't need the lib1394 libraries, as there is no FireWire on the Raspberry Pi, but something in this list will grab them anyway (sigh). We install in two stages as there is a possibility of broken-package errors if the install order is wrong. It may be fine, but why risk having to fix it.

Install OpenCV. Download OpenCV from http://opencv.org/downloads.html, e.g.

wget http://sourceforge.net/projects/opencvlibrary/files/opencv-unix/2.4.8/opencv-2.4.8.zip/download -O opencv-2.4.8.zip

Unzip and prepare for build:

unzip opencv-2.4.8.zip
cd opencv-2.4.8
mkdir release
cd release
ccmake ../

Configuring:

press 'c' to configure
Once done, toggle the options you want.
press 'c' again to configure with your new settings
press 'g' to generate the Makefile
And finally, build. This will take a long time (about 10 hours!).

make
sudo make install

Scipy/Numpy/Matplotlib Installation Commands

$ sudo apt-get install libblas-dev            ## 1-2 minutes
$ sudo apt-get install liblapack-dev          ## 1-2 minutes
[$ sudo apt-get install python-dev            ## Optional]
[$ sudo apt-get install libatlas-base-dev     ## Optional, speeds up execution]
$ sudo apt-get install gfortran               ## 2-3 minutes
$ sudo apt-get install python-setuptools      ## ?
$ sudo easy_install scipy                     ## 2-3 hours
$ ## the previous step might also work as python-scipy, without all dependencies
$ sudo apt-get install python-matplotlib      ## 1 hour

Raspberry Pi Schematics

Figure A.7
SN754410NE Data Sheet (First Page)

SN754410 SLRS007C – NOVEMBER 1986 – REVISED JANUARY 2015
SN754410 Quadruple Half-H Driver

1 Features
• 1-A Output-Current Capability Per Driver
• Applications Include Half-H and Full-H Solenoid Drivers and Motor Drivers
• Designed for Positive-Supply Applications
• Wide Supply-Voltage Range of 4.5 V to 36 V
• TTL- and CMOS-Compatible High-Impedance Diode-Clamped Inputs
• Separate Input-Logic Supply
• Thermal Shutdown
• Internal ESD Protection
• Input Hysteresis Improves Noise Immunity
• 3-State Outputs
• Minimized Power Dissipation
• Sink/Source Interlock Circuitry Prevents Simultaneous Conduction
• No Output Glitch During Power Up or Power Down
• Improved Functional Replacement for the SGS L293

2 Applications
• Stepper Motor Drivers
• DC Motor Drivers
• Latching Relay Drivers

3 Description
The SN754410 is a quadruple high-current half-H driver designed to provide bidirectional drive currents up to 1 A at voltages from 4.5 V to 36 V. The device is designed to drive inductive loads such as relays, solenoids, DC and bipolar stepping motors, as well as other high-current/high-voltage loads in positive-supply applications. All inputs are compatible with TTL- and low-level CMOS logic. Each output (Y) is a complete totem-pole driver with a Darlington transistor sink and a pseudo-Darlington source. Drivers are enabled in pairs, with drivers 1 and 2 enabled by 1,2EN and drivers 3 and 4 enabled by 3,4EN. When an enable input is high, the associated drivers are enabled and their outputs become active and in phase with their inputs. When the enable input is low, those drivers are disabled and their outputs are off and in a high-impedance state. With the proper data inputs, each pair of drivers forms a full-H (or bridge) reversible drive suitable for solenoid or motor applications. A separate supply voltage (VCC1) is provided for the logic input circuits to minimize device power dissipation. Supply voltage VCC2 is used for the output circuits. The SN754410 is designed for operation from −40°C to 85°C.

Device Information(1)

PART NUMBER    PACKAGE (PIN)    BODY SIZE (NOM)
SN754410       PDIP (16)        19.80 mm × 6.35 mm
(1) For all available packages, see the orderable addendum at the end of the data sheet.

4 Simplified Schematic
Sample Code

import cv2
import numpy as np
import math
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)
GPIO.setup(7, GPIO.OUT)
GPIO.setup(11, GPIO.OUT)
GPIO.setup(13, GPIO.OUT)
GPIO.setup(15, GPIO.OUT)

def forward():
    GPIO.output(7, True)
    GPIO.output(11, False)
    GPIO.output(13, True)
    GPIO.output(15, False)

def reverse():
    GPIO.output(7, False)
    GPIO.output(11, True)
    GPIO.output(13, False)
    GPIO.output(15, True)

def turn_left():
    GPIO.output(7, False)
    GPIO.output(11, False)
    GPIO.output(13, True)
    GPIO.output(15, False)

def turn_right():
    GPIO.output(7, True)
    GPIO.output(11, False)
    GPIO.output(13, False)
    GPIO.output(15, False)

def stop():
    GPIO.output(7, False)
    GPIO.output(11, False)
    GPIO.output(13, False)
    GPIO.output(15, False)
camera = PiCamera()
camera.resolution = (640, 480)
cv2.namedWindow("preview")  # Won't be needed

while 1:
    # Capture a frame from the camera as a BGR Numpy array
    rawCapture = PiRGBArray(camera)
    camera.capture(rawCapture, format="bgr")
    frame = rawCapture.array

    # Threshold the image to keep only the near-white (saturated LED) pixels
    red_lower = np.array([6, 6, 255], np.uint8)
    red_upper = np.array([205, 205, 255], np.uint8)
    red_binary = cv2.inRange(frame, red_lower, red_upper)
    cv2.imshow("preview", red_binary)  # Won't be needed

    # Collapse onto the x-axis and find the columns containing white pixels
    rbmax = np.max(red_binary, axis=0)
    led_posn = rbmax.nonzero()[0]
    if len(led_posn) == 0:  # no LEDs found in this frame
        stop()
        continue
    led_mean = np.mean(led_posn)
    print(led_mean)  # Won't be needed

    # Dispersion: pixel distance between leftmost and rightmost LED
    distance = np.max(led_posn) - np.min(led_posn)
    print(distance)  # Won't be needed

    if (led_mean < 280) & (distance < 10):
        print("turning left")
        turn_left()
        time.sleep(1)
        stop()
    if (led_mean > 360) & (distance < 10):
        print("turning right")
        turn_right()
        time.sleep(1)
        stop()
    if (led_mean < 360) & (led_mean > 280) & (distance < 10):
        print("going forward")
        forward()
        time.sleep(1)
        stop()
    cv2.waitKey(1)

GPIO.cleanup()  # reached only if the loop is broken