ECPE 196: Senior Project II
Professor: Dr. Khoie
Sponsor: Dr. Ross
Spring 2015

Chambers Tour Guide Robot with Vision, Hearing and Speech

Members of the Team:
Jezryl Giron
Bing Zhang
Evan Bomgardner
Karla Duran
Table of Contents
1 Timetable and Distribution of Tasks 8
2 Project Overview 9
2.1 Description of the project 9
2.2 Project Requirements and Specifications 9
3 System Design 11
3.1 Navigation and Obstacle Avoidance System 13
4 Hardware and Software Design Research 16
4.1 Voice Recognition System 16
4.2 Speech System 17
4.3 Drive System 18
4.4 Power System 19
4.5 Obstacle Avoidance System 20
4.6 Navigation System 22
4.7 Structure 26
5 Project Development and Assembly 27
5.1 Speech System, Tour Audio 27
5.2 Navigation System 27
5.3 Drive System 29
5.4 Power System 30
5.5 Speech and Hearing System 30
5.6 Obstacle Avoidance System 31
6 Justifications for Hardware/Software 33
6.1 Drive System 33
6.2 Power System 34
6.3 Robot Structure 35
6.4 Speech and Voice Recognition 36
6.5 Obstacle Avoidance System 36
6.6 Navigation System 39
6.7 Microcontroller Choice 41
7 Results of Fully-assembled System 42
7.1 Assembled System Test Results 42
7.2 Dollar Budget 43
7.3 Power Budget 44
8 Conclusions 45
8.1 USB hub and RPi Power 45
8.2 Microcontroller and RPi communication 45
8.3 Structure 45
8.4 Power System 45
8.5 Drive System 46
9 Applications: Social, Environmental, and Economic 47
9.1 Global and Environmental Impact 48
9.2 Impact of Robotics 49
9.3 Social Impact of Technological Advancements 50
10 Lessons Learned and Future Improvements 53
10.1 Lessons Learned 53
10.1.1 Don’t Trust the inventoried parts 53
10.1.2 Planning is of paramount importance 53
10.1.3 Perfectly functional code doesn’t mean a perfectly functional system 53
10.1.4 Physical systems need to be tested on the ACTUAL site 54
10.2 Design Improvements 54
10.2.1 Voice Recognition Software 54
10.2.2 Future improvements 54
10.2.3 Microphone and Speaker Quality 54
10.2.4 More Stable Wheels 55
10.2.5 Higher Quality Sensors 55
References 56
Table of Figures
Figure 1, Behavior Diagram 10
Figure 2, Functional Diagram 11
Figure 3, 8 Sensor Configuration 12
Figure 4, High Level Flowchart of Obstacle Avoidance Behavior 13
Figure 5, 12V DC to DC Converter Circuit 19
Figure 6, HC-SR04 Ultrasonic Sensor 20
Figure 7, Sample Diagram for Potential Fields 21
Figure 8, Pololu IR Beacon Transceiver Pair 22
Figure 9, Xbee Module Pair 23
Figure 10, Testing the Smart Beacon Technology Kit 24
Figure 11, Robot Structure 25
Figure 12, Beacon Pairing Concept 27
Figure 13, nRF51822 Bluetooth Smart Beacon Kit 28
Figure 14, Robot Driver System 32
Figure 15, DC to DC Converter for Voice Recognition and Speech System 33
Figure 16, Low Current 5V and 3.3V Regulator 34
Figure 17, Back Free Wheel and Driving Wheel 34
Figure 18, Second Floor of the Robot 35
Figure 19, GP2Y0A02YK0F Sharp Long Range IR Sensor 37
Figure 20, HMC5883L Compass Sensor 38
Figure 21, CTC Hallway Ceiling Elevation Model 39
Figure 22, Fully Assembled Robot in Lab 41
List of Tables
Table 1, Timetable of Scheduled Tasks 7
Table 2, Xbee Wireless Distance Calculation 27
Table 3, Speech and Hearing Test Results 30
Table 4, Sensor Range Test Results 30
Table 5, Functional Performance Results 31
Table 6, GPIO Pin Requirements 40
Table 7, Parts List and Cost 42
Table 8, System Power Budget 43
Abstract
The Chambers tour guide robot project is a multidisciplinary engineering project that
incorporates electrical and computer engineering skills to design an autonomous robot that is
capable of providing a physical and verbal tour of the Chambers Technology Center (CTC)
building at the University of the Pacific with minimal human oversight. This robot can
autonomously adjust its position to navigate around static and mobile obstacles while traveling to
and introducing points of interest within the structure. Mobility is achieved by utilizing two
wheels rotated by independent stepper motors and two "free" wheels for stability. Obstacle
avoidance is achieved through the use of ultrasonic sensors positioned at 8 points around the
robot, a compass sensor, and the Bug0 algorithm. Localization is achieved through the use of an
additional sensor pointed toward the ceiling, which takes advantage of the unique physical
structure of the building. Obstacle avoidance and localization sensor data is processed and
interpreted by an Arduino Mega 2560 microcontroller which sends instructions to an Arduino
Pro Mini 328 that controls the stepper motors and propels the robot. Tour participants can
interact with the robot by issuing voice commands and hear verbal descriptions of each tour
destination. Voice command and verbal descriptions will be handled by a Raspberry Pi running
Voicecommand software that will interpret spoken commands and play tour audio based on
location information received from the Arduino Mega 2560.
Chapter 1: Timetable and Distribution of Tasks
The development of the tour guide robot was an in-depth process requiring the
coordination of four team members. The systems, and the specific tasks assigned to each system
operator, are detailed in Table 1 for the Spring 2015 semester.
Table 1: Timetable of Scheduled Tasks
Chapter 2: Project Overview
2.1 Description of the Project
The Chambers tour guide robot project is a multidisciplinary engineering project that
incorporates electrical and computer engineering skills to design an autonomous robot that is
capable of providing a physical and verbal tour of the Chambers Technology Center building at
the University of the Pacific with minimal human oversight. This robot can autonomously adjust
itself based on the surrounding environment by detecting mobile and static obstacles as well as
determining its location within the building. Mobility is achieved by utilizing two wheels
controlled by stepper motors and two "free" wheels for stability. Interaction with the physical
environment is made possible by a suite of sensors located at strategic positions on the robot
chassis. Data from the sensors which will be used to execute the simple but efficient Bug0
behavior for navigating around obstacles. Perfect relative orientation is maintained through the
stepper motor's measured rotations and additional guidance is provided by a compass sensor,
making sure movement will be parallel to the walls. The robot will localize it’s position within
the building through the use of a vertical sensor and the unique architecture of the hallway.
Voice recognition will be utilized to respond to audience commands, opening a degree of
interactivity with its users. All data processing will be handled by an ATmega microcontroller
and a Raspberry Pi microcomputer. Plastic, wood, and metal will be used for the construction of
the robot's chassis to minimize cost, maximize stability, and minimize hazard to the environment.
Per the project specifications, power for the motors and all systems will be provided by
rechargeable batteries, each with a dedicated purpose dictated by the systems' needs. One battery
will provide power to the drive motors, and a separate battery will provide power to the onboard
electrical systems (e.g. the microcontroller and microcomputer). The electrical systems battery
shall incorporate several power-regulating ICs to create the voltage levels needed to drive the
necessary devices.
2.2 Final Project Requirements and Specifications
• The robot structure: The robot should be approximately three feet tall and be able to
navigate its way through the CTC building first floor hallway. The robot should avoid
bumping into objects or people. It should be able to go forward, backward and turn left and
right.
• Tour Structure: The tour should include any laboratory or classroom on the CTC first
floor hallway as well as the Co-Op board. During the tour the robot is not required to
physically leave the hallway. At the end of the tour the robot should return to a set
destination.
• Vision: The robot will utilize sensors to interpret its location and assist in object vs. person
identification.
• Speech: The robot should begin its tour with a greeting. At each tour destination the robot
will communicate, via audio, information provided by the PR department. If an object
encountered during travel is a human being, the robot should communicate a greeting
and/or a request to pass.
• Hearing: The robot should be able to receive verbal instruction to start, stop, and mute.
• Aesthetic: The robot should have a semi-finished look. Building materials should be
modern.
• Power: The robot should run on rechargeable batteries that can be charged via a plug
without having to remove the batteries.
Chapter 3: System Design
Figure 1 depicts the behavioral diagram for the Chambers Technology building tour
guide robot. It is representative of the lowest level of abstraction of the completed system and
displays the system inputs and outputs. For the robot to respond to its environment, three
different inputs are provided. The first, the obstacle input, signals whenever the robot encounters
an object or person that obstructs its path and stops it from continuing with the tour. The second
input comprises the verbal commands that users can give in order to command the robot to start
the tour or inquire about its creators. The third input, the location signal, is used to determine the
actual location or position of the robot within the Chambers Technology Center. The output
signals are produced based on the inputs received; for example, if someone is blocking the
hallway, the robot will check whether there is another possible way of continuing with the tour:
it will look for an open space and may need to turn left, turn right, stop, or go in reverse in order
to continue ahead. The speaking output refers to the specific recording that will be played by the
Raspberry Pi microcomputer based on the location signal received. The identify-room output
determines the specific classroom that the robot has located within the Chambers Technology
Center building.
Figure 1: Behavioral Diagram
Figure 2: Functional Diagram
Figure 2 represents the completed system which will support the required functionality. It
is depicted at a high level of abstraction, showing the data transferred between electrical
components. As shown, speech and hearing are accomplished with the Raspberry Pi
microcomputer, while navigation and motor control are accomplished with individual
microcontrollers. The robot receives verbal commands through the microphone; the Raspberry Pi
then responds by playing a specific recording if it is not able to recognize the instruction that was
given, or sends a signal to the Arduino Mega microcontroller (depicted as Microcontroller_1
Navigation in figure 2) whenever the command "Begin tour" is recognized. In addition to the
signal received from the Raspberry Pi, the Arduino Mega will also receive signals from the
proximity sensors and the compass sensor. Based on these three different inputs,
the Arduino Mega will send a three-bit signal to the Arduino Pro Mini 328 (depicted as
Microcontroller_2 Motor Controller in figure 2), which controls the motors.
3.1 Navigation and Obstacle Avoidance System:
The final sensor configuration is composed of multiple sensors placed around the
robot to provide object proximity data. There are a total of eight Sharp GP2Y0A02YK0F
long range IR sensors placed around the robot. This is the minimum number of
sensors that would provide the robot with reliable obstacle avoidance given the
environment.
Figure 3: 8 Sensor Configuration
Sensor (1) will be the primary sensor detecting obstacles directly in front of the
robot. Since the effective cone of the sensor leaves blind spots for obstacles slightly
displaced from the immediate front of the robot, sensors (2) and (3) are needed. These two
sensors cover the mentioned blind spots and take care of sensing angled walls that might
not reflect the signals back to the frontal sensor (1) because of the reflection angle.
Together, sensors (1), (2) and (3) provide reliable frontal sensing for the robot. Sensors
(4) and (5) take care of the sides, and sensor (6) will help the robot back up when
needed. Sensors (7) and (8) are the major ranging sensors that help the robot
navigate using landmarks on its side.
The frontal sensors on the robot will periodically check for obstacles impeding its
advance. Once an obstacle is detected, the robot will attempt to move around it, turning
to either the left or the right. The side sensors will provide the robot with the information
needed to make an educated choice of direction. The robot will move in that direction
until the path toward the destination is clear. The compass sensor HMC5883L is utilized
to execute measured turns and orientation auto-correction.
To allow moving objects to clear the robot's path naturally, the robot will pause
upon detecting an obstacle. This gives moving obstructions time to pass by and avoid the
robot themselves (if they are capable, which is very likely, since moving obstacles are
usually people). Ideally this will prevent the robot from wasting time attempting to clear
the obstacle on its own. The Bug0 algorithm will be utilized as the overarching
path-planning behavior. Figure 4 details the process below.
Figure 4: High Level Flowchart of Obstacle Avoidance Behavior
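To make the flowchart concrete, the following is a minimal Arduino-style sketch of a Bug0-flavored avoidance loop. It is illustrative only: the reaction distance, the pause length, and the sensor and motor helper stubs are assumptions for the sketch, not the project's actual code.

// Minimal Bug0-style obstacle avoidance loop (illustrative sketch only).
const int OBSTACLE_CM = 40;            // assumed reaction distance
const unsigned long PAUSE_MS = 3000;   // pause so people can walk past

// Hypothetical hardware stubs; real versions would read the IR sensors
// and send command codes to the drive microcontroller.
int  readFrontCm()  { return analogRead(A0) / 4; }  // placeholder scaling
int  readLeftCm()   { return analogRead(A1) / 4; }
int  readRightCm()  { return analogRead(A2) / 4; }
void driveForward() { /* send "forward" code */ }
void stopMotors()   { /* send "stop" code */ }
void turnLeft()     { /* send "turn left" code */ }
void turnRight()    { /* send "turn right" code */ }

void setup() {}

void loop() {
  if (readFrontCm() > OBSTACLE_CM) {   // path clear: keep driving to goal
    driveForward();
    return;
  }
  stopMotors();
  delay(PAUSE_MS);                     // let moving obstacles clear themselves
  if (readFrontCm() > OBSTACLE_CM) return;

  // Still blocked: skirt the obstacle on the more open side until the
  // front clears, then resume driving straight toward the goal.
  if (readRightCm() > readLeftCm()) turnRight(); else turnLeft();
}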
A vertical laser range sensor will watch for the lower ceiling elevations
that mark each of the rooms in the hallway. Detecting these lower elevations triggers a
unique behavior depending on the robot's state. Each state describes the varying
behaviors the robot will execute to work out which actual room it is interacting with.
Unique audio tracks will be played depending on which room the robot is looking at.

At various points in the hallway, auto-correction points will be implemented using
the compass sensor so the robot can adjust for significant offsets caused by unpredictable
wheel performance.
Chapter 4: Research on Different Hardware/Software Designs
4.1 Voice Recognition System:
To meet the project specifications, the hardware that supports the voice
recognition system must be capable of interfacing with a microphone, running or
connecting to software that converts speech to text (STT), and providing general purpose
input and output (GPIO) pins to send digital signals to the other electrical systems. The
software requirement mandates the incorporation of a computer capable of running
programs, and the digital signal output implies that a microcontroller would also be
needed.
The initial and final hardware chosen to support voice recognition is the
Raspberry Pi microcomputer. Alternative microcomputers such as the BeagleBone and
Hummingbird were considered, as they provided more RAM and processor speed.
However, having established that the RPi was capable, there was no need to use these
alternatives, which required more power and were also more expensive. As the RPi's
700 MHz processor and 512 MB of RAM were both adequate to run the already selected
Voicecommand [3] software, and it included a bank of GPIO pins to achieve inter-module
communication, it was chosen to provide the hardware support for the voice recognition
system in the final design.
Regarding the voice recognition software, several open source programs such as
JASPER [1], JULIUS [2], and POCKETSPHINX were initially considered and researched
for feasibility. As all of them used local STT engines, which were occasionally unreliable
and required a lot of local data storage, further research was performed. An alternative
called Voicecommand uses an internet connection to run input through Google's
advanced and reliable STT engine.

Voicecommand has better voice recognition and more support than the initial
software candidates. The software is regularly updated, and problems with
command execution are actively discussed on blogs. As a result, implementing
commands, running software, and interfacing with GPIO pins could all be achieved
more efficiently with Voicecommand. In light of these advantages, Voicecommand is the
software used to achieve speech recognition in the final design.
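To make the hand-off concrete, here is a small illustrative C++ helper of the kind Voicecommand could invoke when it matches a phrase such as "begin tour". It raises one Raspberry Pi GPIO line via the sysfs interface to signal the Arduino Mega. The pin number, the assumption that the pin is pre-exported, and the pulse length are all placeholders for the sketch, not the project's documented wiring.

// begin_tour_signal.cpp -- illustrative sketch only.
// Pulses a (hypothetical) pre-exported GPIO line so the Arduino Mega
// knows the "begin tour" voice command was recognized.
#include <fstream>
#include <thread>
#include <chrono>

int main() {
    // Assumes the pin was already exported and set to output, e.g.:
    //   echo 17 > /sys/class/gpio/export
    //   echo out > /sys/class/gpio/gpio17/direction
    std::ofstream pin("/sys/class/gpio/gpio17/value");
    if (!pin) return 1;                      // GPIO not available

    pin << 1 << std::flush;                  // raise the line
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    pin << 0 << std::flush;                  // drop it again
    return 0;
}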
4.2 Speech System:
Per the speech specification dictated by the project sponsor, the robot must play a
tour audio recording when arriving at each point of interest. To achieve this goal the speech
system must be able to receive signals indicating which point of interest the robot is located at,
and it must have a means of sending different audio signals to a speaker.

Based on the above needs, the hardware used to achieve this system must
have memory capable of storing the needed tour audio files, GPIO pins to receive
location data from other systems, and the capability to turn those digital audio files into
electrical signals that a transducer can "reinterpret" as the unaltered pre-recorded sounds.

Conveniently, the above requirements can all be fulfilled by a microcomputer with
the appropriate peripherals. After brief research the Raspberry Pi (RPi) was chosen to test
the feasibility of this system. As the RPi has a built-in sound card with a 3.5 mm jack, an SD
flash memory slot, 17 GPIO pins, and was already chosen to support the voice
recognition system, it is used to provide hardware support for the final speech system.
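As a sketch of how the playback side can work (illustrative only: the file names, the location-code convention, and the use of aplay are assumptions, not the project's recorded design), a location code received from the navigation system can be mapped to an audio file and handed to the Pi's sound card:

// play_room_audio.cpp -- illustrative sketch, not the project's code.
// Maps a location code (e.g., received from the Arduino Mega) to a
// pre-recorded tour description and plays it through the Pi's audio jack.
#include <cstdlib>
#include <map>
#include <string>

int main(int argc, char** argv) {
    // Hypothetical mapping of location codes to recordings on the SD card.
    const std::map<int, std::string> tracks = {
        {1, "/home/pi/tour/room112.wav"},
        {2, "/home/pi/tour/room113.wav"},
        {3, "/home/pi/tour/room114.wav"},
        {4, "/home/pi/tour/room115.wav"},
    };
    if (argc < 2) return 1;
    auto it = tracks.find(std::atoi(argv[1]));
    if (it == tracks.end()) return 1;        // unknown location code

    // aplay ships with ALSA on Raspbian; unquoted paths assumed safe here.
    std::string cmd = "aplay " + it->second;
    return std::system(cmd.c_str());
}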
The descriptions used for each of the classrooms that are part of the tour of the
Chambers Technology Center are the following:
John T. Chambers, Room 112
The CIMS laboratory is a fully integrated collection of CNC machine tools, robots,
flexible manufacturing system and measurement and product inspection related
equipment designed for teaching and demonstrating state-of-the-practice manufacturing
methods for undergraduate and graduate students. This laboratory supports all types of
student projects and faculty research. It also serves as a resource for local small and
medium-sized companies, and is especially valuable for the courses ENGR 15, MECH 100,
MECH 120, MECH 125, MECH 141, MECH 175, and MECH 191.
John T. Chambers, Room 113
This lab is a large, open lab without partitions, which supports collaboration and
creativity. The lab needs to be flexible in configuration and functionality. Flexibility and
adaptability are critical. Flexible computing is achieved through thin clients such as the
Sun Ray laptops. This allows support for multiple environments, while adding mobility
and wireless networking. In addition to providing state-of-the-art support for
collaborative research and learning, the room can also be used for lecturing.
John T. Chambers, Room 114
This is a hands-on teaching classroom. Each student has a computer workstation so the
instructor can assign problems in class and interact with the students as they solve
problems. Each system is capable of running both Microsoft Windows and some version
of Linux or Unix.
John T. Chambers, Room 115
The two-tier studio classroom is not meant to replace the laboratory experience, but to
enhance it. Instructors are able to introduce material through standard lecture. Lab
workstations for each student can be utilized by simply rotating around. Digital, electrical
and electronics courses utilize the space.
4.3 Drive System:
The drive system must bear the whole weight of the robot and provide enough
force to move it, and it must be easy to control and stable. It receives
signals from the other electrical systems and translates them into commands that dictate a
desired motor movement. The drive system can make the robot go straight, turn right,
turn left, stop, speed up, and slow down.
The drive system utilizes an Arduino Pro Mini 328 microcontroller, two H-bridge
motor drivers, and two electric motors. Based on the results of the location and obstacle
avoidance systems, different navigation instructions are sent to a program function that
sends signals from the Arduino Pro Mini 328 to the H-bridge motor drivers, which turn
the motors. There are different potential driver boards for the motors, each with different
specifications. Based on research, the L298N driver board was considered first; it can
handle a 2 A average current and a 3 A peak current. Another driver board that was
considered is the TB6560, an adjustable driver board that can handle a working current
from 1 A to 3.5 A with a maximum of 4 A. The final drive system uses the TB6560
because the measured working current for the robot is 2.6 A, which exceeds the L298N's
average limit, while the TB6560 handles it with a safety margin. The TB6560 operates
the motors based on 4 data lines connected to the microcontroller.
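For illustration, TB6560-class drivers are typically commanded over step and direction lines; the Arduino-style fragment below shows the idea. The pin numbers, the step rate, and the use of only the CLK/CW subset of the board's 4 data lines are assumptions for the sketch, not the project's recorded wiring.

// Illustrative step/direction control of one stepper through a
// TB6560-class driver board. Pins and timing are assumed values.
const int STEP_PIN = 2;     // CLK+ on the driver (assumed)
const int DIR_PIN  = 3;     // CW+  on the driver (assumed)
const unsigned int STEP_DELAY_US = 800;  // pulse spacing sets motor speed

void setup() {
  pinMode(STEP_PIN, OUTPUT);
  pinMode(DIR_PIN, OUTPUT);
  digitalWrite(DIR_PIN, HIGH);   // pick a rotation direction
}

void loop() {
  // Each pulse advances the motor one step (0.9 degrees here); the pulse
  // frequency sets the RPM, as described in the paragraph above.
  digitalWrite(STEP_PIN, HIGH);
  delayMicroseconds(STEP_DELAY_US);
  digitalWrite(STEP_PIN, LOW);
  delayMicroseconds(STEP_DELAY_US);
}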
Initially two motor types were considered: DC motors and stepper motors. For
either type, the torque must be larger than 20 kg·cm because the robot weighs about
15 kg. The final robot design uses stepper motors. Compared to DC motors, stepper
motors run more accurately because they rotate in controllable 0.9 degree steps. The
motor's speed (RPM) depends on the frequency of step pulses from the microcontroller.
Although a DC motor can handle more torque than a stepper motor, it is not efficient:
when it runs, it loses roughly forty percent of its power. Another problem is that every
DC motor, due to manufacturing and material variation, runs at a slightly different speed,
so a two-wheeled, two-motor drive system would require an additional solution for the
robot to run perfectly straight.
4.4 Power System:
Based on the robot's electrical system requirements, the power system must supply 5V
for the Raspberry Pi and sensors, and 3.3V for the Raspberry Pi's signals, so DC to DC
converter circuits from 12V to 5V and from 12V to 3.3V are needed. As the integrated
circuits available to accomplish this depend on the current limit required, we must look at
the potential current needs of the individual systems. The Raspberry Pi needs 700 mA,
and the microcontrollers in the robot draw less than 0.5 A. The LM7805, LM1085,
and LTC1147CS8 (all IC voltage regulators) will all provide a steady 5V.
The LM7805's output current is 1 A, the LM1085's output current is 3 A, and the
LTC1147CS8's output current is 0.05 A. For the final power system the LM7805 was
used for the microcontrollers and sensors. The LM1085 is used for the Raspberry Pi
because of its power needs, including a peak current draw of about 1 A.

To provide the needed 3.3V the system uses an LM1117, a cost-effective
DC regulator that provides a low current output of 0.8 A, which is ideal as the
components that require this voltage are expected to have a very low current draw. Figure
5 shows the main DC to DC circuit. The Raspberry Pi has its own dedicated regulator;
it draws a large amount of current, so a fan was added for cooling.
Figure 5: 12V DC to DC converter circuit
4.5 Obstacle Avoidance System:
The obstacle avoidance system can be very complex or relatively simple
depending on the needs of the design problem. The shape and geometry of our designated
working location allowed for a simple and efficient method of obstacle avoidance.
The components of a working obstacle avoidance system are the sensors that give the
robot information about its environment and the behavior algorithm the robot
uses to make decisions about the obstacles it sees.
First, the sensors were considered and researched. There were many types of
sensors that we considered using. We considered using Infrared (IR) sensors, Ultrasonic
sensors and even laser range finders.
Infrared sensors are one of the most common types of sensors used for obstacle
detection. IR sensors are relatively cheap and fairly easy to use. They output analog
voltages that scale with the distance of the object in front of them. Generally,
infrared sensors have a narrower beam angle because they use infrared light, and they
have a shorter overall range compared to other types of sensors. IR sensors are very good
at short distance ranging and general proximity detection. However, since they use light,
there are a number of factors that may skew their accuracy. Ambient light, if too strong,
can drastically change a distance reading, and the color or reflectivity of the object also
skews the measurement. For these reasons IR is mainly used indoors, away from the vast
amount of sunlight outdoors.
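To show what reading such a sensor involves, here is a short illustrative Arduino-style conversion for a Sharp analog IR sensor. The power-law constants below are a rough fit of the kind commonly used for the GP2Y0A02YK0F's datasheet curve, not calibrated project values, and the pin assignment is assumed.

// Illustrative distance read for a Sharp analog IR sensor.
// The fit constants are approximate, not project-calibrated values.
#include <math.h>

const int IR_PIN = A0;   // assumed wiring

float readIrCm() {
  float volts = analogRead(IR_PIN) * (5.0 / 1023.0);
  if (volts < 0.4) return 150.0;          // beyond the ~150 cm rated range
  // Approximate inverse of the sensor's voltage-vs-distance curve.
  return 60.5 * pow(volts, -1.1);
}

void setup() { Serial.begin(9600); }

void loop() {
  Serial.println(readIrCm());  // print distance estimate in cm
  delay(100);
}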
Ultrasonic sensors are also very common for detecting obstacles or movement.
They are relatively cheap too, though generally more expensive than IR sensors.
Instead of light, ultrasonic sensors use sound (sonar) to detect the range of
objects in front of them. Because they use sound, ultrasonic sensors generally have a
wider cone or beam angle than IR sensors, and they generally have a longer range.
Ultrasonic sensors are also more accurate than their IR counterparts, particularly at
detecting distances at longer ranges. The only things that can skew the accuracy of an
ultrasonic sensor are sound-absorbent objects, like sponges or soundproofing materials,
and crosstalk or "ghost" signals: stray sound signals that bounced off other walls.
Figure 6: HC-SR04 Ultrasonic Sensor
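For reference, reading an HC-SR04 like the one pictured takes one trigger pulse and one timed echo; the fragment below is the standard illustrative pattern (pin numbers assumed), not code from this project. The constant 58 µs/cm comes from the round-trip speed of sound.

// Illustrative HC-SR04 read: trigger a ping, then time the echo.
const int TRIG_PIN = 9;    // assumed wiring
const int ECHO_PIN = 10;

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  Serial.begin(9600);
}

long readUltrasonicCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);  // 10 us trigger pulse
  digitalWrite(TRIG_PIN, LOW);
  long echoUs = pulseIn(ECHO_PIN, HIGH, 30000);  // time the echo (30 ms timeout)
  return echoUs / 58;   // ~58 us of round trip per cm of range
}

void loop() {
  Serial.println(readUltrasonicCm());
  delay(100);
}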
Laser range finders are very well made and sit higher up the quality sensor
ladder. As a result, laser-based sensors are generally much more expensive than IR and
ultrasonic sensors. They have a significantly longer range than their counterparts
because of the strength of their signal. Since laser sensors use a very concentrated beam,
their cone or beam angle is very narrow compared to other sensors, which gives them a
narrower field of view.
Another component potentially useful for the obstacle avoidance and navigation
system is the compass sensor. A compass sensor measures the magnetic field of the earth
and returns numeric bearings for north, south, east, west and everything else
in between. The compass sensor gives a reasonable approximation of the general
direction, which makes it very useful for making accurate angled turns, auto-correcting
orientation, and making sure the robot moves relatively straight through a hallway.
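For illustration, a digital compass of this kind (such as the HMC5883L chosen later in this report) is read over I2C and converted to a heading as sketched below. The register addresses follow the HMC5883L datasheet; the flat mounting and the absence of calibration or declination correction are simplifying assumptions.

// Illustrative HMC5883L heading read over I2C (no tilt compensation).
#include <Wire.h>
#include <math.h>

const int HMC5883L_ADDR = 0x1E;

void setup() {
  Serial.begin(9600);
  Wire.begin();
  Wire.beginTransmission(HMC5883L_ADDR);
  Wire.write(0x02);  // mode register
  Wire.write(0x00);  // continuous measurement mode
  Wire.endTransmission();
}

void loop() {
  Wire.beginTransmission(HMC5883L_ADDR);
  Wire.write(0x03);  // first data register (X MSB)
  Wire.endTransmission();
  Wire.requestFrom(HMC5883L_ADDR, 6);  // registers arrive as X, Z, Y
  int16_t x = (Wire.read() << 8) | Wire.read();
  int16_t z = (Wire.read() << 8) | Wire.read();
  int16_t y = (Wire.read() << 8) | Wire.read();
  (void)z;  // unused for a flat-mounted robot

  float headingDeg = atan2((float)y, (float)x) * 180.0 / M_PI;
  if (headingDeg < 0) headingDeg += 360.0;
  Serial.println(headingDeg);
  delay(200);
}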
Sensor data alone is not enough: the robot can only avoid obstacles by feeding the
data gathered by the sensors around it into a behavior algorithm. How should it react to
an object in front of it? That is the question behavior algorithms address.
As mentioned at the start of this chapter, the shape and geometry of the
working location were taken into consideration when picking an appropriate algorithm for
the robot. The location is a simple rectangular hallway with minimal static obstacles and
occasional moving obstacles (humans). We looked at the Bug algorithms and the potential
fields algorithm to guide our robot's obstacle avoidance.
Bug algorithms are the simplest of these because the robot just theoretically
follows a straight line toward the destination. When it encounters an object blocking it,
it follows the edge of the object until it can see the destination again, then resumes
movement. Bug algorithms are more straightforward to use, but there may be
challenges with orientation.

Potential fields, on the other hand, is slightly more complex. It treats the whole
location as a sea of vector fields leading toward the destination: it takes the map and the
obstacles on it and computes vectors that lead to the destination, and the robot follows
these vectors until it arrives. Obviously, this algorithm has problems dealing
with moving obstacles.
Figure 7: Sample Diagram for Potential Fields
There are many possible solutions that could help us design this
system, but these are the ones we researched for the senior project.
4.6 Navigation System:
The navigation system provides a way for the robot to travel through the
environment and locate its designated destination with repeatable precision. These are
the different components we considered: we looked at camera solutions and different
types of beacon technology to achieve this functionality.
Most robots use cameras to navigate their surroundings, using line fitting and
visual cues to mark the surroundings and figure out how to navigate toward the
destination. Cameras require a significant amount of processing power and an appropriate
platform that can handle image processing. The demand for acceptable processing power
drives the cost up significantly, with the alternative being to wait a significant time for
the robot to finish interpreting the images it gets from the camera. It is a sophisticated
solution with a sophisticated resource requirement.
An alternative solution we were (slightly) biased toward is the beacon technology
solution. The idea of beacon technology is for there to be a device pair (a beacon
transmitter and a beacon receiver), where the receiver can eventually find its way to the
transmitter using useful signals that the transmitter provides. The transmitter would
ideally be placed at the destination and the receiver on the robot.
We looked at an IR beacon pair. The Pololu IR Beacon Transceiver Pair is a pair
of devices that can detect one another omnidirectionally on one plane. Each device has IR
detectors and transmitters so that it can detect and follow the other wherever
it goes. The one big limitation of this solution was the effective range of the
beacons: the range was not long enough to accommodate the distances in the hallway. Also,
the price was $50.00 for a single pair, so it was fairly costly for just one pair of chips.
Figure 8: Pololu IR Beacon Transceiver Pair
Xbee wireless technology is another solution that we researched. We needed
a wireless implementation of the beacon method, and Xbee seemed a valid candidate.
Xbee allows communication between devices using radio waves, and there is a way to
estimate the distance between Xbee devices using a signal's time of flight (ToF) and its
received signal strength (RSSI). Having this functionality made Xbee a viable
implementation of the beacon method. The range of the Xbee was considerable, even
extending for miles. The extended range might work against our situation, however,
because the lowest distance resolution was around 5 meters, which is far too coarse for
our needs. The Xbee solution wasn't cheap either: for a valid working connection between
two microcontrollers, two Xbee shields are needed to accommodate the two Xbee
modules. Each Xbee module is roughly $25.00 and each shield roughly $15.00, so all in
all it amounts to $80.00 for just one pair of wireless links.
Figure 9: Xbee Module Pair
Long range lasers were also considered. If a sensor could cover the whole
distance of the hallway, we could use it to divide the location into a distance-varying
topographic map, where the distance would be the marker the robot relies on to know
whether it is near a certain room in the hallway. Unfortunately, affordable long range
sensors that can cover the whole hallway don't exist, and the ones that came close were
very expensive. The LIDAR Lite long range laser range finder could cover up to
50 meters (about half of the hallway) but costs $90.00 each.
Another alternative considered for the navigation system was Bluetooth beacon
technology. The Bluetooth smart beacon that we ordered and tested was the nRF51822
Bluetooth Smart Beacon Kit from Nordic Semiconductor. The kit came with an
application, called nRF Beacon, downloadable from the Apple App Store for iOS and
from the Google Play Store for Android. The chip included in the Smart Beacon Kit was
attached to the door of the specific place we wanted the robot to identify. The
application then gave us the option to select the specific event that would be triggered
upon identifying the desired beacon. Each beacon chip had its own identification code, so
different events could be triggered for different beacons. Once the application from
Nordic Semiconductor was opened, it started tracking the signal strength of the chip
attached to the door and reported one of three proximity states: far from, near, or next to
the beacon. The option we selected generated a trigger when the smartphone was next to
the beacon. The only trigger actions provided were playing an alarm or opening an
application called Tasker. After researching the Tasker application we found that it was
capable of running SL4A Python scripts, which was the key to setting a specific trigger
when the beacon was detected. A Python script was written and included in the Tasker
application; whenever the smartphone came next to the beacon attached to the door,
Tasker opened by itself and ran the Python script, which played a specific recording based
on the information received. Figure 10 shows the Smart Beacon technology being tested.

This navigation system was accurate and had the advantage of reusability, since
we only needed to change the positions of the beacons and the Python script to execute
different actions. The downside is that this system would have been more expensive than
the final system, which only requires one sensor able to detect changes in the ceiling
height. Had the Smart Beacon technology been used, the project would have cost more,
since a Smart Beacon was needed for each of the classrooms included in the tour, and
each beacon is $35.00. We would have needed at least 5 Smart Beacons, for a total of
$175.00.
Figure 10: Testing the Smart Beacon Technology Kit
4.7 Structure:
The initial structure design of the robot was an amalgamation of machinable
polymers and metal. However, after attempting to procure these materials and get them
machined, this was determined to be infeasible (the allocated budget to build this listening,
talking, autonomous robot is a mere 300 dollars, and support from the campus machine
shop was difficult to attain). As an alternative that would minimize cost and allow for
easier construction of the components, wood was used.
Figure 11: Robot Structure
Chapter 5: Results of Hardware and Software Testing
5.1 Speech System, Tour Audio
A tour script has been completed and tested. This tour contains valid information
about the tour destinations, provided by the public relations department. Each of the
descriptions listed in section 4.2 was converted to a robot voice recording and stored on
the Raspberry Pi SD card for playback access.
5.2 Navigation System:
Testing the navigation system was fairly challenging because most of the products
we wanted to test for viability were not in our possession right off the bat; there was
a lot of waiting and budget balancing.

We purchased the Pololu IR Beacon Transceiver Pair to test, but unfortunately the
product that was delivered to us was faulty, so the test could not be performed. We
later decided to discard the planned test anyway, because the product did not have enough
theoretical range to work across the long hallway.
We tried to construct our own beacon pair using cheap HC-SR04 ultrasonic
sensors. We programmed one of the sensors to transmit constant sonar pulses
while we disabled the transmitter of the other, so it could only receive the sonar
pulses. Using this pair of sensors we simulated a small-scale scenario: the small robot
prototype passing by a room fitted with the ultrasonic transmitter, with the ultrasonic
receiver pointed in the general direction of the room. We programmed the robot to turn
toward the ultrasonic signal and stop when it was near enough to the beacon. This
simulation worked perfectly at small scale, but we realized we would have to wire all the
transmitters together, because they needed to be synchronized for the receiver to make
sense of the pulses. It didn't seem feasible to run wires across most of the rooms in the
CTC hallway, so this was discarded as a far-fetched idea.
Figure 12: Beacon Pairing Concept
We purchased and tested a set of Xbee devices to evaluate their wireless
communication capabilities. We wanted to measure the distance between two devices that
each had an Xbee module. We used code that took the signal's time of flight (ToF) and
RSSI (signal strength) to determine the approximate distance between the two devices.
The two devices were set at opposite ends of the hallway in the CTC building, and
we took multiple readings while gradually moving one of the devices closer to the other.
The results are recorded in the table below.
Actual Distance (m) Xbee Method Reading (m)
50 62
40 31
30 27
20 23
10 12
Table 2: Xbee Wireless Distance Calculation
The Xbee method had incredible range, but its inaccuracy and unpredictability
were hard to compensate for: rooms two meters apart would be hard to differentiate with
this method of distance measurement. The distance readings also produced absurd
numbers at times. The method does not work well in closed spaces, because the signals
bounce around the walls and skew the signal strength at the time of arrival.
The LIDAR Lite laser range finder was the next thing we tested. This expensive
laser rangefinder has a distance range of 0 to 40 meters. If we could use this rangefinder
to measure the robot's relative distance to one end of the hallway, then it would be viable
to treat the hallway as a distance topography map, where the rooms are marked by their
distances from the end of the hallway. We tested whether it was even possible to measure
that maximum distance with the laser sensor. The big problem was that both
ends of the hallway are predominantly glass, which the laser sensor has trouble detecting.
We purchased and tested the Smart Beacon technology from Nordic
Semiconductor. The component used was the nRF51822 Bluetooth Smart Beacon, which
allowed us to identify a specific place or position by sending a signal to an
application called nRF Beacon installed on a smartphone. We then worked on writing a
routine that passed parameters through the Tasker application into another application
called Arduino Commander, via an SL4A Python script, which sent specific numbers to
our Arduino board based on the beacon detected. This number would then have been
passed to the Raspberry Pi in order to play the specific recording for a lab or classroom
description.
Figure 13: nRF51822 Bluetooth Smart Beacon Kit
5.3 Drive System
After complete assembly of the robot's power and drive system, simple
functionality tests were performed to investigate the robot's ability to move around.
The initial design used a single microcontroller to poll the navigation sensors and operate
the motors via H-bridge drivers; however, performance with this configuration suffered
due to the number of clock cycles required to check the obstacle avoidance sensors,
localization sensors, and the compass sensor. Robot speed and the smoothness of
movement suffered. To compensate, a second microcontroller was added (pictured
in figure 2) to allow for consistent and smooth operation of the stepper motors. This
second microcontroller receives a three-bit signal from the ATmega controller and then
operates the motors accordingly. Testing the configuration with this second
microcontroller confirmed that the robot could move forward, move backward, and
turn both left and right depending on signals from the navigation system.
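To make the three-bit hand-off concrete, here is an illustrative sketch of the receiving side on the drive microcontroller. The pin choices and the code-to-action mapping are assumptions for the example; the project's actual bit assignments are not documented here.

// Illustrative receiver for a 3-bit motion command from the navigation
// microcontroller. Pin numbers and the command encoding are assumed.
const int BIT0_PIN = 4, BIT1_PIN = 5, BIT2_PIN = 6;

enum Command { STOP = 0, FORWARD = 1, BACKWARD = 2, LEFT = 3, RIGHT = 4 };

void setup() {
  pinMode(BIT0_PIN, INPUT);
  pinMode(BIT1_PIN, INPUT);
  pinMode(BIT2_PIN, INPUT);
}

int readCommand() {
  return digitalRead(BIT0_PIN)
       | (digitalRead(BIT1_PIN) << 1)
       | (digitalRead(BIT2_PIN) << 2);  // 3 lines -> up to 8 commands
}

void loop() {
  switch (readCommand()) {
    case FORWARD:  /* step both motors forward */   break;
    case BACKWARD: /* step both motors backward */  break;
    case LEFT:     /* step wheels in opposition */  break;
    case RIGHT:    /* step wheels in opposition */  break;
    default:       /* STOP: hold position */        break;
  }
}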
During these tests another potential flaw of the system became apparent, although
it does not, at this time, affect performance: there is a perceivable stutter that occurs while
the robot turns. During turns the navigation microcontroller constantly checks the
compass sensor to determine how many degrees the turn has progressed, and this
communication results in intermittent pauses of the turn command sent to the drive
system. The result is a slight but perceptible jerkiness during turn execution.
5.4 Power System
During all testing of the power system, the battery dedicated to the motors
worked without fail. It proved capable of running constantly during tests without
noticeable performance degradation. The electronic-component power system did not
perform as smoothly.

Early integration and testing of components showed that the initial power plan
was adequate for all sensor and microcontroller needs. Problems developed when the
Raspberry Pi and powered USB hub were connected: the 12V to 5V DC regulators
chosen were unable to supply the currents needed by all devices. In further testing, the
RPi and USB hub were given a single dedicated regulator, which resulted in the regulator
overheating and power fluctuations. Still more tests with two additional converters (one
for the RPi and one for the USB hub) showed that overheating still occurred. To solve
this problem, heat sinks and a fan were added to the new regulators, which resulted in the
steady, consistent power source that produced the results mentioned in section 5.5.
5.5 Speech and Hearing System
To confirm the performance and dependability of the speech and hearing system,
the Raspberry Pi and all of its peripherals were first powered with the commercial outlet
transformers provided with the USB hub and Raspberry Pi. The system was then started,
allowed to fully boot, tested by performing 3 different voice commands, and then
shut down and unplugged. The same test was then performed with the USB hub and the
Raspberry Pi running off the DC-converted power from the robot's onboard 12 volt
batteries. The test results are displayed in table 3, with a successful test being one in
which the Raspberry Pi fully boots, responds to 3 commands with the appropriate action
or audio playback (such as the dialogue in section 5.1), and then shuts down completely.
Test Conditions Test Results
12V Outlet Power 10 successes in 10 attempts
5V converted battery power 10 successes in 10 attempts
Table 3: Speech and Hearing test results
5.6 Obstacle Avoidance System
We performed many tests to make sure the obstacle avoidance system runs
reliably and efficiently.

The sensor range test was done to compare the different sensors we had on
hand and the ones we had purchased for the project, in terms of range, accuracy, and
cost. We constructed a setup on a flat table with a meter stick lying flat to measure the
distance between the sensor and the obstacle; the obstacle was a purple box two feet tall
and one foot wide. We measured each sensor's minimum range by finding its closest
accurate reading, and its maximum range by looking for the longest "stable" reading we
could get from the sensor while it maintained its accuracy.
Sensor Type Min. Range Max. Range Price Each
Sharp GP2D120x Infrared 1.5” 11.8” (~1ft) $12.00
Sharp GP2Y0A02 Infrared 8” 59” (~5ft) $14.00
HC-SR04 Ultrasonic 1” 156” (13ft) $1.39
LV MaxSonar EZ1 Ultrasonic 6” 254” (21ft) $25.00
Table 4: Sensor Range Test Results
We also tested each sensor on the smaller prototype robot that we constructed.
This test was meant to gauge the viability of each sensor on an actual moving robot: to
reveal whether some sensors were slower than others at getting important readings, or
had consistency issues depending on environmental parameters. We ran a robot behavior
program with basic obstacle avoidance under different ambient lighting settings and
graded each performance based on speed/latency and consistency.
Sensor           Type        Well-lit            Mid-lit             Dark
                             Latency/Consistency Latency/Consistency Latency/Consistency
Sharp GP2D120x   Infrared    10 / 7              10 / 9              8 / 9
Sharp GP2Y0A02   Infrared    10 / 7              10 / 9              8 / 10
HC-SR04          Ultrasonic  10 / 10             10 / 10             10 / 10
LV MaxSonar EZ1  Ultrasonic  10 / 10             10 / 10             10 / 10
Table 5: Functional Performance Test Results
We also tested the CMPS03 compass sensor in the actual hallway. We
implemented a simple test where we walked through the center of the hallway from one
end to the other while taking readings of the compass's orientation. Theoretically, the
readings should be very close to one another, since the hallway follows a single basic
direction the whole way. But this simple test yielded wildly fluctuating readings because
of all the electronics present in the hallway: the readings varied about +/- 20 degrees
from the actual direction. The inaccurate measurements caused by these electronic
devices almost made us discard the compass sensor completely from our design.
Chapter 6: Justifications for Choices of Hardware or Software
6.1 Drive System
As predicted by the research detailed in section 4.3, the NEMA 23 steppers provide the
torque needed to drive the robot at approximately 2.8 km per hour, roughly half
of the average human walking speed of 5 km per hour. The use of two stepper motors has
eliminated any need to monitor rotations and ensure that both wheels turn at the same speed. The
initial needed-torque calculation of 1.4 Nm is exceeded by these motors. Thus the results
show that the drive system design is powerful enough, accurate, and fast enough to meet the
project requirements.
The TB6560 driver board, shown in figure 14, was measured to draw a 2.6 A current.
After the initial problems with procuring power for the voice recognition
and speech system, it was apparent that the temperature of high-current devices must be a
concern. As the TB6560 driver boards have large heat sinks that dissipate heat and prevent
power fluctuations, the prudence of this choice became even more apparent as the project moved
forward. Additionally, alternatives like the L298N can only drive small stepper motors that draw
low current and run at low speeds (as noted in section 4.3); if the motors run too fast, the chip
will overheat and cut off power.
Figure 14: Robot Driver System
6.2 Power system
After the initial tests detailed in chapter 5, during which additional voltage regulators
were added for the voice recognition and speech system, the original concept (to use
rechargeable 12 V batteries and voltage regulators) proved to be a highly effective means of
powering the individual robot systems. The battery life experienced during testing was adequate
to complete more than one tour, and the power provided was stable enough that no
noticeable effects of low power were observed. The final power system uses two LM1085
regulators, one LM7805 regulator, and one LM1117 regulator. The voice recognition and speech
system draws the most current, through its two regulators. The total current flow through the
regulators is almost 2 A (10 watts), and although this caused initial overheating of the LM1085
regulators, the addition of heat sinks and a fan has stabilized the regulators' temperature within a
safe operating region. The final power system configuration has shown itself to be dependable
and robust.
Figure 15: DC to DC converter circuit for voice recognition and speech system
Power for the navigation and drive microcontrollers as well as the navigation sensors is
provided by two DC to DC regulators. Each of these regulators can provide 1A of current. As the
total current draw of the remaining systems is less than 0.5A the power loading requirements for
the LM7805 and LM1117 regulators are below 50% of their maximum available current. As the
sensors and microcontrollers require different voltages these regulators have also proven
dependable at providing needed voltage levels.
Figure 16: Low current 5V and 3.3V regulator
6.3 Robot Structure
The robot has a total of four wheels: two supporting "free wheels" and two drive
wheels that transmit the torque to move the robot. This configuration was created in
response to instability problems that occurred during initial tests, and the result is a working
solution. One freewheel is located at the very front of the robot and a second at the back. The
back freewheel, shown in figure 17, helps take some of the weight off the driving wheels.
Figure 17: Back free wheel and driving wheels
An additional feature of the structure is its physical layout and ability to support the
logistical needs of the other systems. As the microphone and speaker have specific placement
needs, the two-tier platform provided an ideal mounting situation to best maximize their
performance. The bottom platform (pictured in figure 18) provided a sturdy base for mounting
the heavy batteries, placing the sensors, and routing wires to the various components.
Besides the addition of the fourth wheel, the design has undergone only minimal changes
from the original conception. The wood of which it is composed has proven to be a forgiving
substrate material that readily accepts hot glue and duct tape as binding components.
Figure 18: Second floor of robot
6.4 Speech and Voice Recognition System
After fully integrating the voice recognition and speech system into the completed robot,
the performance experienced during tests justifies both the hardware and software choices.
Although other microcomputers might have accomplished the same tasks, the support and
affordability of the Raspberry Pi made it the most cost-effective microcomputer capable of
performing the needed tasks. Despite the ubiquity of similar products such as the BeagleBone
and Hummingbird minicomputers, the Raspberry Pi has performed well at minimal cost.

The software design, which includes a parent C++ program to control the running of the
third-party Voicecommand software and the audio player, has also proven to be an effective way
to manage the robot's speech and voice recognition capabilities. During tests and
troubleshooting it created an easy, one-stop location where all components of both systems were
accessible.
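For illustration, a parent program of this kind can be as simple as a supervisor that launches the recognizer and restarts it if it exits. The sketch below is not the project's actual source; it assumes a voicecommand binary on the PATH, with its startup flags omitted (see the Voicecommand documentation).

// supervisor.cpp -- illustrative parent-program sketch (not project code).
// Keeps the third-party recognizer running and logs each restart.
#include <cstdlib>
#include <iostream>
#include <thread>
#include <chrono>

int main() {
    while (true) {
        std::cout << "starting voicecommand..." << std::endl;
        // Blocks until the recognizer exits; assumes it is on the PATH and
        // already configured to trigger tour actions on matched phrases.
        int status = std::system("voicecommand");
        std::cout << "voicecommand exited with status " << status
                  << ", restarting in 2 s" << std::endl;
        std::this_thread::sleep_for(std::chrono::seconds(2));
    }
}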
6.5 Obstacle Avoidance System
Finalizing the components for the obstacle avoidance system was relatively simple
because of all the tests we had already performed on our alternative design ideas.

For the final design of our obstacle avoidance system, we ended up using the Sharp
GP2Y0A02YK0F long range IR sensor. We had heavily favored the HC-SR04 ultrasonic sensor
for a long time during the design process. Just looking at the tabled results of the sensor tests in
section 5.6, we can clearly see the quality performance of the HC-SR04: it had superior range
and accuracy for a very low price tag. It also doesn't have the weakness of IR sensors, whose
readings can be heavily skewed by ambient factors such as light intensity and object color. The
HC-SR04 was the perfect choice for our robot until we took the assembled robot to the CTC
hallway to test functionality. What we found was that all the ultrasonic sensors we were using
suddenly performed unpredictably: the measured values fluctuated over a range of 30 inches,
which had never happened before. Suddenly the ultrasonic sensors we favored so much became
unreliable. We discovered the reason: the building uses ultrasonic signals for motion detection.
These motion detectors are scattered across the hallway, spreading stray ultrasonic signals that
skewed the accuracy of our sensors. Ultrasonic sensors of any type became unviable for our
situation. We turned to the highest-performing IR sensor from our list, because it was the only
reasonably priced option (laser sensors were impossible to consider on our budget).
Figure 19: GP2Y0A02YK0F Sharp Long Range IR Sensor
The Sharp GP2Y0A02YK0F long range IR sensor performed noticeably better than the
HC-SR04 when used in the CTC hallway. Since the location is indoors, sunlight didn't pose
a big negative factor in using this sensor. Although its maximum range of 150 cm isn't as long as
the HC-SR04's 400 cm, it is long enough for the robot to perform well in the hallway.
We settled on using a compass sensor in our final design because the
functionality it offered was too important to pass up. The compass sensor allows our robot to
execute reasonably accurate "measured" turns. Even though the electronics present in the
hallway skew the general measured orientation along the hallway, readings taken at one
specific point are still reliable enough to veer an accurate number of degrees
from that original reading. This means accurate 90 degree turns are still possible
through the noise. The model we chose is the HMC5883L compass sensor because it was the
cheapest one we could find on the market.
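Because a measured turn only needs the change in heading, a constant magnetic offset at one spot cancels out. The sketch below illustrates the idea; readHeadingDeg() is a stand-in for a real compass read (here stubbed to simulate rotation), and the turn commands are hypothetical, not the project's functions.

// Illustrative "measured turn": rotate until the heading has changed by
// the requested number of degrees relative to the starting reading.
#include <math.h>

// Smallest signed difference between two headings, in (-180, 180].
float headingDelta(float from, float to) {
  return fmod(to - from + 540.0, 360.0) - 180.0;
}

float readHeadingDeg() {      // stub that simulates slow rotation for
  return millis() * 0.05;     // testing; replace with a real compass read
}
void startTurningRight() { /* send "turn right" to the drive system */ }
void stopMotors()        { /* send "stop" */ }

void turnRightDegrees(float degrees) {
  float start = readHeadingDeg();  // only the *relative* change matters,
                                   // so a constant magnetic offset cancels
  startTurningRight();
  while (fabs(headingDelta(start, readHeadingDeg())) < degrees) {
    delay(20);                     // keep turning; avoid hammering the bus
  }
  stopMotors();
}

void setup() { turnRightDegrees(90.0); }
void loop()  {}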
Figure 20: HMC5883L Compass Sensor
As for the behavioral algorithm, we stuck with the simple yet efficient Bug0 algorithm.
The shining characteristic of the Bug0 algorithm is its simplicity. This simplicity speeds up the
robot's function and reaction time to obstacles because the algorithm takes few lines of code to
implement. It shortens the loop cycle considerably, which makes the robot more reactive to its
environment, and it frees up space in the microcontroller's limited memory because it is very
simple to run. And of course, we don't like it only because it's simple; we like it because it works
despite its simplicity. The simple rectangular shape of the hallway and the minimal number of
fixed obstacles along the way allowed for the use of this algorithm. Moving obstacles might
sound like a problem, but because the moving objects are "intelligent" beings with their own
obstacle avoidance, it becomes simple to avoid collisions with them.
6.6 Navigation System
It was a grueling process looking for the beacon technology solution that best fit our
needs. We knew early in our design process that we wanted to implement beacon technology,
because it seemed intuitively simpler than using a camera and a computer to process
images. We wanted a solution that made use of simple ideas to achieve a complex goal. This,
in our opinion, is the beauty of engineering: the brilliance of a design does not stem from how
incredibly complex the solution is, but from how something simple can solve something complex.

We tested a number of beacon technology solutions, but after each test we
realized how far over budget we would be if we used that solution. The concept
was viable, and it was doable using some of our alternative designs, but the basic problem always
boiled down to cost versus performance: the products we needed were always too expensive for
our budget. The Bluetooth solution we tested was very viable, but we would have needed to
purchase a Bluetooth chip and stick one on each of the room doors along the
hallway, which would be pretty expensive to actually implement.
One day, we realized that we could utilize something even simpler than beacon
technology to navigate between rooms. The basic question of navigation was,
"How would the robot know when it is by a room of importance, and how would it know
which room it is?" We noticed something simple about the architecture of the hallway that
offered a solution to this problem: the ceiling elevation is always lower right where
the doors are.
Figure 21: CTC Hallway Ceiling Elevation Model
We decided to use these simple architectural details as landmarks for the robot to determine that it is very near a specific room. Which device would we need to accomplish this? One very cheap range sensor; IR or ultrasonic would do, as long as it can reach far enough to detect the change in ceiling elevation. One vertically oriented sensor is the solution. For our purposes, since we could not use ultrasonic sensors in this hallway, we decided to use the cheapest laser range finder we could find. IR sensors were possible, but their range was too short, and we did not want to make the robot too tall because that might affect stability. The final concept of our navigation system design is to use architectural details of the hallway itself to localize the robot relative to its surroundings.
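A hedged sketch of this landmark scheme follows. The distances, the drop threshold and the helper names (readCeilingDistanceCM(), signalRoomToRPi()) are assumptions chosen for illustration, not values measured in our hallway.

const float NORMAL_CEILING_CM = 250.0;  // assumed reading under the normal ceiling
const float DROP_THRESHOLD_CM = 30.0;   // assumed drop at each doorway soffit

int roomIndex = 0;        // count of doorway landmarks passed so far
bool overSoffit = false;  // true while under a lowered-ceiling section

void updateLocalization() {
  float d = readCeilingDistanceCM();
  if (!overSoffit && d < NORMAL_CEILING_CM - DROP_THRESHOLD_CM) {
    overSoffit = true;            // edge: entered a lowered-ceiling region
    roomIndex++;                  // one more room landmark passed
    signalRoomToRPi(roomIndex);   // tell the RPi which audio track to cue
  } else if (overSoffit && d > NORMAL_CEILING_CM - DROP_THRESHOLD_CM / 2) {
    overSoffit = false;           // edge: back under the normal ceiling
  }
}

The half-threshold on the way back out gives the detector hysteresis, so small fluctuations near the edge of a soffit are not double-counted.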
6.7 Microcontroller Choice
Initially we used the Arduino Uno for demo purposes because it was readily available. Using this microcontroller made it easy to determine which type of microcontroller we really needed. Since we were using 8 proximity sensors, 2 ranging sensors and 1 compass sensor, and each of these sensors requires at least 2 GPIO pins, it was clear that we needed more GPIO pins than the Uno provides; the sensors alone demand at least 22 pins.
Component | GPIO Pins Required
Drive System Microcontroller | 3
Raspberry Pi (Voice Recognition Module) | 5
8 x Proximity Sensors | 16
2 x Ranging Vertical Sensors | 4
Compass Sensor | 3 (I2C)
TOTAL | 31
Table 6: GPIO Pin Requirements
We also needed I2C functionality for the compass sensor and PWM functionality for the IR sensors, which require 8 PWM pins. The Raspberry Pi and the compass sensor could each communicate with the microcontroller over I2C. The Arduino board that fits these requirements is the Arduino Mega 2560.
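For reference, reading the HMC5883L over I2C takes only a few Wire-library calls on the Arduino. The sketch below follows the sensor's documented register map (address 0x1E, continuous measurement mode, data registers ordered X, Z, Y); calibration and tilt compensation are omitted.

#include <Wire.h>
#include <math.h>

const uint8_t HMC5883L_ADDR = 0x1E;

void compassSetup() {
  Wire.begin();
  Wire.beginTransmission(HMC5883L_ADDR);
  Wire.write(0x02);      // mode register
  Wire.write(0x00);      // continuous measurement mode
  Wire.endTransmission();
}

float readHeadingDegrees() {
  Wire.beginTransmission(HMC5883L_ADDR);
  Wire.write(0x03);      // first data output register
  Wire.endTransmission();
  Wire.requestFrom(HMC5883L_ADDR, (uint8_t)6);
  int16_t x = Wire.read() << 8; x |= Wire.read();   // output order is X, Z, Y
  int16_t z = Wire.read() << 8; z |= Wire.read();   // z read only to advance
  int16_t y = Wire.read() << 8; y |= Wire.read();
  float heading = atan2((float)y, (float)x) * 180.0 / PI;
  return heading < 0 ? heading + 360.0 : heading;   // normalize to [0, 360)
}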
Microcontroller Choice: Arduino Mega 2560
● ATmega2560 Microcontroller
● Input Voltage: 7-12 V
● 54 Digital I/O Pins (14 PWM outputs)
● 16 Analog Inputs
● 256 KB Flash Memory
● 16 MHz Clock Speed
The drive system requires a dedicated microcontroller so the wheel system can run continuously; it needs 11 GPIO pins. We decided to use the cheapest Arduino board with enough pins to accommodate this requirement while remaining open to expansion.
Drive System Dedicated Microcontroller: Arduino Pro Mini 328
Chapter 7: Results of Fully-Assembled System:
Figure 22: Fully Assembled Robot in Lab
7.1 Assembled System Test Results
During the final testing period, several areas for potential improvement presented themselves. The localization, voice recognition, object detection, and structure all encountered problems that prevented them from functioning smoothly and consistently for an entire tour. The power system provided good power throughout the majority of these tests, and aside from WiFi connectivity issues, the voice command system proved capable of a few but important robot controls.
The sensors underwent a drastic change in type because of the sensor test that we ran on site. We set the robot in the middle of the hallway at the starting point of the tour. Before starting anything else, we checked whether each sensor was giving reasonably accurate readings by printing the measurements to the serial window. The sensors were producing wildly fluctuating numbers, from around 20 inches to 40 inches, without the robot moving an inch. Even sensors not pointed at anything solid would at times report something at around 15 to 20 inches in front of them, which is clearly a false reading. These skewed readings were caused by stray ultrasonic signals coming from motion detection devices in the hallway. This prompted us to change the ultrasonic sensors to IR sensors.
The exact same calibration test was then run with the GP2Y0A02YK0F IR sensors. The results were considerably better than the ultrasonic sensor numbers. The accuracy of the IR sensors was not fine-tuned to perfection, but their readings were within about 3 inches of the correct value almost all the time.
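One common way to tame fluctuating range readings like these is a median-of-five filter: take five quick samples and report the middle one, which discards isolated spikes. A minimal sketch, assuming a hypothetical readIRDistanceInches() helper that converts the sensor's analog output to inches:

float medianOfFive(int pin) {
  float s[5];
  for (int i = 0; i < 5; i++) s[i] = readIRDistanceInches(pin);
  // Insertion-sort the five samples, then return the middle one.
  for (int i = 1; i < 5; i++) {
    float key = s[i];
    int j = i - 1;
    while (j >= 0 && s[j] > key) { s[j + 1] = s[j]; j--; }
    s[j + 1] = key;
  }
  return s[2];
}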
Most of the tests of the fully assembled system were actual tour runs in which we would observe where the robot performed badly and tweak the code to compensate. Examples of these changes were compass sensor turn values (calibration), delay values for dealing with moving obstacles, and orientation auto-correction values.
7.2 Dollar Budget
All parts that made it into the final CTC tour robot design, along with their quantities, unit costs, and total costs, are listed in Table 7.
Item | Unit Cost ($) | # Used | Total Cost ($)
LM1117 | 0.427 | 1 | 0.427
LM7805 | 0.39 | 1 | 0.39
LM1085 IT | 1.95 | 2 | 3.90
12V 5Ah rechargeable battery | 22.95 | 1 | 22.95
12V 7.2Ah rechargeable battery | 29.36 | 1 | 29.36
Fan | 15.00 | 1 | 15.00
Heat sink | 0.35 | 2 | 0.70
Motor | 24.00 | 2 | 48.00
Arduino Pro Mini | 9.95 | 1 | 9.95
H-bridge | 19.00 | 2 | 38.00
Shaft couplers | 8.00 | 2 | 16.00
Wheels & parts | 23.00 | 2 | 46.00
Motor mounting | 10.00 | 2 | 20.00
Structure | 25.00 | 1 | 25.00
Raspberry Pi 2 | 43.78 | 1 | 43.78
Logitech Speaker | 59.99 | 1 | 59.99
USB Microphone | 25.00 | 1 | 25.00
USB Hub | 22.75 | 1 | 22.75
WiFi Dongle | 11.79 | 1 | 11.79
8 GB SD Card | 14.98 | 1 | 14.98
Arduino Mega 2560 | 13.96 | 1 | 13.96
Sharp Long Range IR Sensor | 7.95 | 8 | 63.60
Range Finder Sensor | 15.00 | 2 | 30.00
HMC5883L Compass Sensor | 7.00 | 1 | 7.00
Total Cost | | | 568.53
Table 7: Parts List and Cost
7.3 Power Budget
The power consumed by each electrical component, the current drawn, and the total power supplied by the robot's onboard batteries are listed in Table 8.
Item | Current (A) | Power (W)
Ultrasonic sensors x8 | 0.048 | 0.24
Microcontroller_1 | 0.043 | 0.215
Microcontroller_2 | 0.035 | 0.175
Compass Sensor | 0.007 | 0.035
Raspberry Pi | 0.5 | 2.5
Speaker & Microphone & WiFi | 0.6 | 3.0
Laser Sensor x2 | 0.184 | 0.93
Fans | 0.15 | 1.8
IR sensors x3 | 0.105 | 0.525
Stepper motors x2 | 5.2 | 62.4
Total power | | 71.8
Table 8: System Power Budget
Chapter 8: Conclusions:
This chapter recaps the system's performance by highlighting the major issues we had to deal with during integration and how each was resolved.
8.1 USB hub and RPi Power
While integrating the voice recognition and speech system into the completed design, the initial belief that it could be powered from the same DC converter used for the other 5-volt components proved incorrect. Eventually, due to the poor performance of the available DC converters, an independent converter was used for each of the USB hub and the Raspberry Pi. Furthermore, it was found that the converters would overheat and cause system instability even when operating well below their maximum current. This was solved by adding a fan and large heat sinks to control the converter temperature.
8.2 Microcontroller and RPi communication
During individual module testing and development, it was assumed that I2C communication would allow the RPi to receive data from the navigation microcontroller to determine which tour audio to play. Upon system integration, however, it was learned that the navigation microcontroller was already set up as an I2C master to communicate with the compass sensor, and when the RPi was connected to the I2C bus, the compass system stopped working. Although the problem might have had a software workaround, given the available digital GPIO pins it was easier to run additional data lines from the navigation microcontroller to the RPi, enabling the transmission of data over a simple 3-bit parallel interface.
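On the Arduino side, the 3-bit interface can be as simple as the sketch below; the pin numbers and the room encoding are assumptions for illustration, not our documented wiring. The RPi polls three GPIO inputs and maps the resulting 0-7 code to a tour audio track.

const int BIT_PINS[3] = {22, 23, 24};   // assumed digital pins wired to the RPi

void setupLocationPins() {
  for (int i = 0; i < 3; i++) pinMode(BIT_PINS[i], OUTPUT);
}

void sendLocationCode(byte code) {      // code in 0..7 identifies the room
  for (int i = 0; i < 3; i++) {
    digitalWrite(BIT_PINS[i], (code >> i) & 1);
  }
}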
8.3 Addition of a Fourth Wheel: Fading to the Right
During early testing, the initial structure design was not capable of fulfilling the project requirements. When the robot turned, the wheels were unstable and would unexpectedly detach. The problem was that the drive wheels not only provided the torque to move the robot but also carried its weight, which cost the stepper motors a great deal of torque. To solve this problem, an additional free wheel was added under the base platform, between the drive wheels, to help support the robot's weight. In that location the free wheel helped the motors share the load. This extended the life of our drive system but created a noticeable fade to the right when the robot was supposed to travel straight ahead.
8.4 Power System
Because of a design error at the beginning, the regulators in the DC-to-DC circuit overheated while running the Raspberry Pi and hub, which drew a great deal of current through them. The solution was to use two regulators in parallel to split the current, and therefore the power dissipation, which lowered the regulator temperature. Heat sinks and fans were also added to cool the DC-to-DC converter circuit. In the end, the whole system consumes around 63 watts in the drive system and 10 watts in the other electronics, and the robot's running time is around 1 hour and 40 minutes.
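As a rough sanity check, assuming (hypothetically) that the 12 V, 7.2 Ah battery feeds the roughly 63 W drive system at its full nameplate capacity: 12 V x 7.2 Ah = 86.4 Wh of stored energy, and 86.4 Wh / 63 W is about 1.4 hours. That is the same order as the observed run time of about 1 hour and 40 minutes, which is plausible given that the motors do not draw their peak current continuously.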
8.5 Undependable Sensors
One major obstacle we faced was the discovery that ultrasonic sensors were not going to be viable in the hallway because other devices there emitted interfering ultrasonic signals. This was frustrating because it blindsided us; we had no way of knowing about it until we tried the sensors on site. Because of this, we struggled to find sensors that lived up to the impressive functionality of the HC-SR04. We could not find anything affordable that matched the range of the ultrasonic sensor, and we needed sensors with a certain maximum range because we had to detect changes in the hallway architecture for localization.
Even after we swapped the ultrasonic sensors for the next best option, the new IR sensors gave us a different type of problem. The Sharp long-range IR sensors occasionally put out absurd distance readings (effectively infinite values). These readings broke our code's behavior because comparing such a value against an integer threshold produced meaningless results and could leave the code stuck.
Chapter 9: Applications in the Real World; Social, Environmental, and Economic Impacts:
Today there is almost no industry untouched by the ubiquitous objects we call robots. By some estimates, 50% of the American workforce have jobs that could be performed by automated machines and are therefore at risk of being lost to robots in the future. Beyond the automated Google car, almost every automaker (and some private universities) has a division of engineers working on automated cars that will eliminate millions of cab and truck driver positions. Likewise, the manufacturing robot from Rethink Robotics can perform almost any repetitive manual labor job. The robot, named Baxter, can actively and quickly learn its duties through on-the-job training by simply observing a supervisor (a capability that until now only human workers had). Even highly skilled positions previously thought inaccessible to robots are being considered potential areas for growth. An engineering firm called Intuitive Surgical and a research effort known as the Raven project have developed robots capable of performing surgery with limited human oversight. Additionally, inroads are constantly being made toward removing the human element from current war robots such as aerial drones. As human lives could someday be in the hands of robots via the scalpel or the guided missile, we must ask: is this a bad thing?
In answering this question, it is important to remember that the history of people being displaced from menial or skilled jobs did not start with the silicon transistor. A thousand years ago the majority of people had one important job: to make food so that they could live; the practice of farming dominated everyone's time. As with robotics, advances in technology displaced the vast majority of people working the fields. Although farming is the clearest example, consider blacksmiths, horse farriers, elevator operators, copy boys, bowling pin setters and milk delivery men: all professions that used to be abundant but dwindled due to advances in technology and methodology. In light of this, we must acknowledge that the termination of an archaic job is not necessarily a curse. Consider how much longer it would have taken for someone to create the printing press had Johannes Gutenberg been too busy sowing seeds and plowing fields to pursue the lofty cause of invention. How many of the developments since, such as lower infant mortality and personal hygiene (which are generally considered good), would have been delayed or never happened in a world without a practical means of documenting and disseminating information? From a global perspective, the inconvenience of being displaced from a job is well worth the extra time and abilities that modern technologies have provided.
As it was with farming, so it will be with autonomous labor. The freedoms provided by the advanced technologies of the past will be bolstered again as more people are freed to consider pursuits more meaningful than driving people across town all day. As robots become more prevalent, a large percentage of current human activity will be eliminated (who wants to brush their teeth and make their own bed anyway?), but people will adapt. Just as today's society has no place for the farmers of the past, who planted seeds by hand one at a time, the future society will have no place for anyone as ignorant as today's average person. By necessity, people will set their sights higher. Considering that just a few centuries ago it was extremely unrealistic to think that everyone would someday know how to read, a global advancement of human intellect as a result of new technology is far from the realm of fantasy. Autonomous robots, like our tour bot, will free humanity from the mundane tasks we are already outgrowing and allow for advancements that currently seem as impossible as an 80% global literacy rate once did.
9.1 The Global and Environmental Impact
The tour bot is capable of the basic functions, such as hearing, vision and speech, that would be found in robots currently deployed around the world. Such robots play many different roles in many different fields, including industrial work, family support and education. As humans, we are realizing that we cannot simply continue to pursue the proliferation of autonomous robots without considering the peripheral consequences. For example, the mass spread of robots could lead to financial problems, environmental problems, and natural resource scarcity; alternatively, robots could provide great profits in the future. Although this senior project, the tour bot, is of a very small scale and not aptly compared to designs currently on the global market, it could be used in small venues such as schools, companies and other public places. The difficulty in identifying the hazards associated with the tour bot project is that it is not directly comparable to robots from companies with more financial support and technical resources.
Currently there are many different kinds of robots at the production level. In the industrial realm there is the robotic arm, and many countries have successful robot enterprises, such as the Swiss firm ABB and FANUC in Japan. Moving away from industrial robots, many small companies create autonomous robots designed to help the family; a common manifestation of these efforts is the intelligent cleaning robot. Beyond family robots, there are also many applications in the medical field, where robots, instead of humans, produce medical products and maintain sanitary, germ-free conditions in clean rooms. In the military, robots take on special tasks, such as mine removal and other jobs that take them into dangerous places.
Despite the aforementioned advances, the new technology has many potential negative side effects. Robot construction requires many manufactured parts, which are not natural materials. After a robot's service life concludes, these materials do not break down in the environment and remain for a very long time. Electrical power sources such as batteries or nuclear fuels pose severe risks to the environment, as toxic battery waste affects the ecosystem every year.
Regardless of the consequences, humans continue to pursue the development of robotics, and recent advancements have created robots that are smarter than ever before. A long-term goal that is closer than ever to being realized is the creation of a robotic brain similar to a human's. IBM has created a brain-like computer chip that can respond to input and perform logical reasoning in a manner similar to a human. However, what will happen in the future if this robot brain exceeds the capabilities of its human counterpart?
The convenience and capabilities offered by robots will make life easier and increase the rate at which the global economy grows. Robotic factory production has already eclipsed that of human-operated factories, but at the same time, using robots instead of human labor will cost countless jobs, as the expense of a robot is far less than that of a person.
9.2 Positive and Negative Impact of Robotics
The development of robotics in this day and age is considered a huge technological advance, and robotics has a long list of positive effects. The very definition of the field points to them: robotics seeks to automate tasks using electronics and machines so that people do not have to do the work themselves. Robotics makes life easier for people in general because robots can work in our place. The great thing about this is not just that we do not have to do the work, but that the work can be done more efficiently than if a human performed the same task. Human beings have many limitations because of our physical bodies; robots can eliminate some of these weaknesses, making them ideal for certain tasks that need to be done. Examples are truly hazardous tasks like bomb disposal and hazmat handling, which reduce risk to human society because fewer people are injured doing them. Other dangerous, and physically demanding, tasks include retrieving or recovering important objects from wreckages or crashes. There are also exploration robots that can traverse conditions hazardous to human health, like outer space and distant planets. Having the help of robots makes life easier for us.
But of course, with the positive also comes the negative. Robots, because they are so efficient (they do not get tired, do not take sick days and make few mistakes), tend to put many people out of work. From a business standpoint, robots are often preferable to humans because of their efficiency and the fact that they can be programmed to do multiple jobs at a time. Even though robots also open up jobs in robot maintenance, those new jobs fail to compensate for the total jobs lost to robotic efficiency. This slowly introduces the possibility of the "robots taking over the world" scenario we so commonly see in sci-fi shows and movies, and it heavily affects the economy because many people lose their jobs as robotics flourishes. Another negative is that working alongside robots is less safe than working with other human beings. Because humans have a higher awareness of danger while robots usually focus only on the task at hand, workplaces shared by people and robots carry higher potential dangers. Since robots have trouble sensing approaching workers, there have been cases in which workers were severely injured or killed by active machinery. The last item on the list of negative effects robotics has on society is the problem of disposing of the materials used to build robots, which are harmful to nature and our environment. In the long run, if enough robots were made to replace all human jobs in the world, the earth might face hazardous effects from these materials being disposed of unsafely, causing health and environmental hazards for the people inhabiting it.
9.3 Social Impacts of Further Technological Advancements
Many authors discuss the advantages and disadvantages of technological advances. Some people favor the creation of new technology, including advanced cell phones (better known as smartphones), tablets, computers and robots. However, some oppose the idea of having new technology because it can damage families and society. Nowadays we can see how technology has changed society just by looking around. We do not have to go far; we can see this behavior on campus every day: students walking and texting at the same time. I have even seen students using their cellphones while riding their skateboards and bikes, which I believe is dangerous. I think we have gotten to a point where this technology forms a big part of our daily life. However, there are also many advantages to using these new technologies. As mentioned above, there are now robots capable of performing surgeries, which is an advantage to society because a higher quality of life can be achieved after going through these procedures.
Several readings discuss the ethical issues caused by technology and the significant role these play in people's lives. The authors argue that in order to achieve progress, we as a society need to believe in the sufficiency of scientific and technological innovation as the basis for general progress: if we can ensure the advance of science and its technologies, then the rest of the aspects involved in developing new technology and applied science will take care of themselves. However, I do not completely agree with the authors, because I believe the consequences of creating new technology do not take care of themselves. We as a society have to find ways to solve the problems created by new technologies. As the authors note, when developing new technologies we need to analyze the situation and determine whether the technology will have a positive or negative impact on our society. I believe that our tour guide robot would have a positive impact, because by using it to give tours of the Chambers building to prospective students, it could inspire students to pursue careers involving science and technology.
There are several aspects to consider when deciding whether a new technology may have a positive or negative impact on our society. For example, we need to ask whether the technology uses materials that will be disposed of in the near future, damaging the environment. Such negative aspects may affect public health if the disposables are toxic or contain chemicals that can damage people's health. That situation would undercut the idea mentioned above, where robots were considered a positive influence on human life because they could be used to save lives when programmed to operate on people and cure them of illness. When considering health, we must think not only of our own health and how the new technology affects us, but also of future problems that may damage the health of our children, since toxic chemicals can end up contaminating water or the soil where fruits and vegetables are cultivated.
Another downside of making technology a priority is that it downgrades the importance we place on our own lives. People sometimes work for long periods to develop new technology, and in doing so they ignore other important aspects of their lives. The problem is that we can never recover lost time, which is one of the reasons time is so valuable. They sometimes forget that they have a family and that there are more important things in life. We sometimes give more importance to technological or scientific advances than to family, society and ethical values. We live in a turbulent and materialistic world, which makes it hard for us to value and admire the most important things in life. We see children who prefer to play videogames, surf the internet or text instead of communicating with their families. Smartphones are now so popular that everybody wants the best technology available, which means not only that the developers of new technology must work harder and dedicate more time to these projects, but that members of society may also need to work harder, or for longer, to afford the new technology. We can now access the internet almost anywhere with a portable device such as a tablet, smartphone or laptop. However, we pay more attention to these new technologies than to the effects that all of these improvements are having on our society.
We support the notion that almost anything is good if used in moderation, as Aristotle suggested: moderation is the key, not just for technology but for many other aspects of our lives. Problems start whenever we begin abusing or overusing these new technologies. Nowadays, at school, at a hockey game or at the movie theater, it is common to see families, friends and couples on their cellphones most of the time, even when they are sitting next to each other. This is improper because it puts our priorities in the wrong order. Overusing new technologies can damage our society.
It seems that nowadays we pay more attention to the development and use of new technologies and forget about the ethical values once considered most important: love, freedom, and family. Technological advances are very important, but we also need to worry about our society. We need to pay more attention to our lives and use technology in a correct manner in order to enjoy freedom, love and family. As a society, we need to learn to balance technology in order to live healthier lives. Technology is not bad if we know how to use it and when to apply it. For example, the internet is now integral to everyday life because it helps society maintain a web of communication; however, when we start abusing it, we end up with a problem, and we need to learn to balance the situation. The positive or negative outcomes and consequences depend solely on how we use the technology.
Chapter 10: Lessons Learned and Future Improvements
Through the application of the skills learned here at the UOP School of Computer Engineering and Computer Science, we now have a deep appreciation for the painstaking research and dedication required to bring a product to market. Likewise, a renewed feeling of confidence has grown in each member of our team as we came together, struggled, stumbled, and eventually thrived while executing our design of an autonomous tour robot. The value of the senior project program cannot be overstated.
10.1 Lessons Learned
10.1.1 Don’t Trust the Inventoried Parts
Throughout the development of the voice recognition and hearing system, problems were frequent yet unpredictable and sporadic. While trying to accomplish simple tasks, for example connecting the RPi to the UOP WiFi system, problems would arise and then disappear without any clear change in the system configuration. During initial testing, the general assumption was that all problems resulted from design or user errors; however, after swapping the old RPi from the University inventory for one purchased from an electronics dealer, many of the problems disappeared. Future designers should use new equipment whenever possible, though with the pittance of a budget (300 dollars) this might not be feasible. A similar problem was encountered with the Arduino Uno obtained from the inventory: several pins were not working. Initially we thought the problem was in the code controlling the I/O pins, or in other components of the circuit, but after purchasing a new Arduino Uno we confirmed that the board from University of the Pacific's inventory was simply defective.
10.1.2 Planning is of paramount importance
The initial design of the robot structure was critically important. The motors selected fulfilled the weight requirement, but we encountered issues with the stability of our initial design: the robot was not able to run well whenever it started moving. Once we observed this behavior, we decided to change our initial structure design, and after several changes were made, the stability problems were solved.
10.1.3 Perfectly functional code doesn’t mean a perfectly functional system
Writing code for the modules was straightforward at times and challenging at others, but completing functional code for a physical system is just the beginning. Even if the code is perfect, hardware is prone to malfunctions and mistakes; the simplest things can go wrong and throw off the performance of the whole system. Modular testing with the physical components of your system is very helpful. Don't assume everything is fine just because the code compiles and runs smoothly.
10.1.4 Physical systems need to be tested on the ACTUAL site
It is always helpful to calibrate systems on the actual site, not in the lab, as soon as possible. In our case, we had no way of knowing that our chosen sensor type was going to be impossible to use until we ran on-site tests.
10.2 Design Improvements
10.2.1 Voice Recognition Software
The open source Voicecommand software utilized for this project is a starting point: it was relatively easy to use and allowed for simple integration with other RPi functions, but it is by no means the best software for speech-to-text conversion. Future iterations of the tour robot should investigate software that responds faster and does not have to verify the command keyword.
10.2.2 Future Improvements
With more time and resources, we could have worked on enabling "Rosy" to understand multiple languages. Moreover, we would have given the robot a more humanoid appearance, or developed a head for it, so that different faces could be displayed based on the environment using LEDs and other mechanical and electronic components.
10.2.3 Microphone and Speaker Quality
Due to the abysmally small project budget, high-quality components that would have greatly improved system performance were unavailable. For example, instead of the available webcam used as a microphone, which has a limited effective range and requires a user to talk directly into the device, a high-quality omnidirectional microphone would allow for better voice recognition. Likewise, the current system uses a USB laptop speaker for speech. This speaker is only clearly heard from one direction and cannot deliver an effective tour on its own. Future iterations could benefit from multiple speakers that broadcast sound 360 degrees around the robot.
10.2.4 More Stable Wheels
With more time and resources, we could have worked on finding a more stable configuration for the wheels. Predictable and smooth movements are only possible with perfectly stable wheels. Having the robot go straight sounds simple, but it is nearly impossible to implement if the wheels are not stable enough.
10.2.5 Higher Quality Sensors
We cannot emphasize enough how much time was wasted making do with lower-quality sensors. Whole blocks of code had to be written to compensate for faulty sensor performance, and important parts of the robot's behavior sometimes hinged on these sensors measuring a wall distance correctly. Higher-quality sensors would be very helpful for us as designers and for the people this robot would serve someday.
References
[1] S. Saha and C. Marsh. (2013). Jasper [Online]. Available: http://jasperproject.github.io/
[2] Algorhythmic, Aon². (2012). Speech Recognition Using The Raspberry Pi [Online]. Available: http://www.aonsquared.co.uk/raspi_voice_control
[3] S. Hickson. (2014). Voicecommand [Online]. Available: http://stevenhickson.blogspot.com/
[4] J. Blum, Exploring Arduino: Tools and Techniques for Engineering Wizardry. Wiley, 2013.
56

More Related Content

What's hot

Arduino Autonomous Robot
Arduino Autonomous Robot Arduino Autonomous Robot
Arduino Autonomous Robot
Dominick Squillace
 
Yuvaraj.K Resume
Yuvaraj.K ResumeYuvaraj.K Resume
Yuvaraj.K Resumeyuvaraj k
 
An introduction to Autonomous mobile robots
An introduction to Autonomous mobile robotsAn introduction to Autonomous mobile robots
An introduction to Autonomous mobile robots
Zahra Sadeghi
 
A Line Follower Robot Using Lego Mindstorm
A Line Follower Robot Using Lego MindstormA Line Follower Robot Using Lego Mindstorm
A Line Follower Robot Using Lego Mindstorm
Mithun Chowdhury
 
ALIAS WP6 Results
ALIAS WP6 ResultsALIAS WP6 Results
ALIAS WP6 Resultsgeigeralias
 
Modular Self assembling robot cubes with SWARM Communication
Modular Self assembling robot cubes with SWARM CommunicationModular Self assembling robot cubes with SWARM Communication
Modular Self assembling robot cubes with SWARM Communication
Sarath S Menon
 
SIMULTANEOUS MAPPING AND NAVIGATION FOR RENDEZVOUS IN SPACE APPLICATIONS
 SIMULTANEOUS MAPPING AND NAVIGATION FOR RENDEZVOUS IN SPACE APPLICATIONS  SIMULTANEOUS MAPPING AND NAVIGATION FOR RENDEZVOUS IN SPACE APPLICATIONS
SIMULTANEOUS MAPPING AND NAVIGATION FOR RENDEZVOUS IN SPACE APPLICATIONS
Nandakishor Jahagirdar
 
MAKING OF LINE FOLLOWER ROBOT
MAKING OF LINE FOLLOWER ROBOTMAKING OF LINE FOLLOWER ROBOT
MAKING OF LINE FOLLOWER ROBOT
PRABHAHARAN429
 
IRJET-Automatic Self-Parking Chair using Nissan Technology
IRJET-Automatic Self-Parking Chair using Nissan TechnologyIRJET-Automatic Self-Parking Chair using Nissan Technology
IRJET-Automatic Self-Parking Chair using Nissan Technology
IRJET Journal
 

What's hot (9)

Arduino Autonomous Robot
Arduino Autonomous Robot Arduino Autonomous Robot
Arduino Autonomous Robot
 
Yuvaraj.K Resume
Yuvaraj.K ResumeYuvaraj.K Resume
Yuvaraj.K Resume
 
An introduction to Autonomous mobile robots
An introduction to Autonomous mobile robotsAn introduction to Autonomous mobile robots
An introduction to Autonomous mobile robots
 
A Line Follower Robot Using Lego Mindstorm
A Line Follower Robot Using Lego MindstormA Line Follower Robot Using Lego Mindstorm
A Line Follower Robot Using Lego Mindstorm
 
ALIAS WP6 Results
ALIAS WP6 ResultsALIAS WP6 Results
ALIAS WP6 Results
 
Modular Self assembling robot cubes with SWARM Communication
Modular Self assembling robot cubes with SWARM CommunicationModular Self assembling robot cubes with SWARM Communication
Modular Self assembling robot cubes with SWARM Communication
 
SIMULTANEOUS MAPPING AND NAVIGATION FOR RENDEZVOUS IN SPACE APPLICATIONS
 SIMULTANEOUS MAPPING AND NAVIGATION FOR RENDEZVOUS IN SPACE APPLICATIONS  SIMULTANEOUS MAPPING AND NAVIGATION FOR RENDEZVOUS IN SPACE APPLICATIONS
SIMULTANEOUS MAPPING AND NAVIGATION FOR RENDEZVOUS IN SPACE APPLICATIONS
 
MAKING OF LINE FOLLOWER ROBOT
MAKING OF LINE FOLLOWER ROBOTMAKING OF LINE FOLLOWER ROBOT
MAKING OF LINE FOLLOWER ROBOT
 
IRJET-Automatic Self-Parking Chair using Nissan Technology
IRJET-Automatic Self-Parking Chair using Nissan TechnologyIRJET-Automatic Self-Parking Chair using Nissan Technology
IRJET-Automatic Self-Parking Chair using Nissan Technology
 

Viewers also liked

Диагностика работы правоохранительных органов по охране общественного порядка...
Диагностика работы правоохранительных органов по охране общественного порядка...Диагностика работы правоохранительных органов по охране общественного порядка...
Диагностика работы правоохранительных органов по охране общественного порядка...
KomitetGI
 
Instrument Tech 3
Instrument Tech 3Instrument Tech 3
Instrument Tech 3Will Lomax
 
DigiPay4Growth: Bristol Prospects
DigiPay4Growth: Bristol ProspectsDigiPay4Growth: Bristol Prospects
DigiPay4Growth: Bristol Prospects
DP4G
 
Results-Based Programme Management
Results-Based Programme ManagementResults-Based Programme Management
Results-Based Programme ManagementTimur Niyazov
 
Secuencia didáctica fracciones 5°
Secuencia didáctica   fracciones 5°Secuencia didáctica   fracciones 5°
Secuencia didáctica fracciones 5°
jorge quiñones
 
Adapting to PPC changes in 2016 and beyond
Adapting to PPC changes in 2016 and beyondAdapting to PPC changes in 2016 and beyond
Adapting to PPC changes in 2016 and beyond
Aaron Levy
 
ARESTHU
ARESTHUARESTHU
ARESTHU
Cetia54
 
What is Word Processing? Powerpoint Presentation PPT
What is Word Processing? Powerpoint Presentation PPT What is Word Processing? Powerpoint Presentation PPT
What is Word Processing? Powerpoint Presentation PPT
Tech
 
Social Group Work with Educational Setting
Social Group Work with Educational Setting Social Group Work with Educational Setting
Social Group Work with Educational Setting
Solomon Raj
 
Forced migration
Forced migrationForced migration
Forced migration
Thasleem MP
 
Relatorio conselho tutelar denilson e jacson
Relatorio conselho tutelar denilson e jacsonRelatorio conselho tutelar denilson e jacson
Relatorio conselho tutelar denilson e jacsonRaquel Becker
 
το φεμινιστικό κίνημα
το φεμινιστικό κίνηματο φεμινιστικό κίνημα
το φεμινιστικό κίνημα4lykeiotrip
 
Αμερικανική και Γαλλική επανάσταση 3
Αμερικανική και Γαλλική επανάσταση 3Αμερικανική και Γαλλική επανάσταση 3
Αμερικανική και Γαλλική επανάσταση 3
argisdrougas
 

Viewers also liked (16)

Диагностика работы правоохранительных органов по охране общественного порядка...
Диагностика работы правоохранительных органов по охране общественного порядка...Диагностика работы правоохранительных органов по охране общественного порядка...
Диагностика работы правоохранительных органов по охране общественного порядка...
 
Creatine Final (1)
Creatine Final (1)Creatine Final (1)
Creatine Final (1)
 
Instrument Tech 3
Instrument Tech 3Instrument Tech 3
Instrument Tech 3
 
DigiPay4Growth: Bristol Prospects
DigiPay4Growth: Bristol ProspectsDigiPay4Growth: Bristol Prospects
DigiPay4Growth: Bristol Prospects
 
Results-Based Programme Management
Results-Based Programme ManagementResults-Based Programme Management
Results-Based Programme Management
 
Secuencia didáctica fracciones 5°
Secuencia didáctica   fracciones 5°Secuencia didáctica   fracciones 5°
Secuencia didáctica fracciones 5°
 
Adapting to PPC changes in 2016 and beyond
Adapting to PPC changes in 2016 and beyondAdapting to PPC changes in 2016 and beyond
Adapting to PPC changes in 2016 and beyond
 
Powerpoint5
Powerpoint5Powerpoint5
Powerpoint5
 
ARESTHU
ARESTHUARESTHU
ARESTHU
 
What is Word Processing? Powerpoint Presentation PPT
What is Word Processing? Powerpoint Presentation PPT What is Word Processing? Powerpoint Presentation PPT
What is Word Processing? Powerpoint Presentation PPT
 
Social Group Work with Educational Setting
Social Group Work with Educational Setting Social Group Work with Educational Setting
Social Group Work with Educational Setting
 
Forced migration
Forced migrationForced migration
Forced migration
 
Αμερικανική επανάσταση
Αμερικανική επανάστασηΑμερικανική επανάσταση
Αμερικανική επανάσταση
 
Relatorio conselho tutelar denilson e jacson
Relatorio conselho tutelar denilson e jacsonRelatorio conselho tutelar denilson e jacson
Relatorio conselho tutelar denilson e jacson
 
το φεμινιστικό κίνημα
το φεμινιστικό κίνηματο φεμινιστικό κίνημα
το φεμινιστικό κίνημα
 
Αμερικανική και Γαλλική επανάσταση 3
Αμερικανική και Γαλλική επανάσταση 3Αμερικανική και Γαλλική επανάσταση 3
Αμερικανική και Γαλλική επανάσταση 3
 

Similar to Report

PC-based mobile robot navigation sytem
PC-based mobile robot navigation sytemPC-based mobile robot navigation sytem
PC-based mobile robot navigation sytem
ANKIT SURATI
 
Summer Training Program Report On Embedded system and robot
Summer Training Program Report On Embedded system and robot Summer Training Program Report On Embedded system and robot
Summer Training Program Report On Embedded system and robot
Arcanjo Salazaku
 
H011114758
H011114758H011114758
H011114758
IOSR Journals
 
Developing a Humanoid Robot Platform
Developing a Humanoid Robot PlatformDeveloping a Humanoid Robot Platform
Developing a Humanoid Robot Platform
Dr. Amarjeet Singh
 
High-Speed Neural Network Controller for Autonomous Robot Navigation using FPGA
High-Speed Neural Network Controller for Autonomous Robot Navigation using FPGAHigh-Speed Neural Network Controller for Autonomous Robot Navigation using FPGA
High-Speed Neural Network Controller for Autonomous Robot Navigation using FPGA
iosrjce
 
Wireless robo Report
Wireless robo  ReportWireless robo  Report
Wireless robo Report
Sumit Saini
 
A Review On AI Vision Robotic Arm Using Raspberry Pi
A Review On AI Vision Robotic Arm Using Raspberry PiA Review On AI Vision Robotic Arm Using Raspberry Pi
A Review On AI Vision Robotic Arm Using Raspberry Pi
Angela Shin
 
Nasa final report
Nasa final reportNasa final report
Nasa final report
Muhammad Mohsin Raza
 
Design and implementation of an sms based robotic system for hole- detection ...
Design and implementation of an sms based robotic system for hole- detection ...Design and implementation of an sms based robotic system for hole- detection ...
Design and implementation of an sms based robotic system for hole- detection ...
eSAT Journals
 
A SMART VOICE CONTROLLED PICK AND PLACE ROBOT.pdf
A SMART VOICE CONTROLLED PICK AND PLACE ROBOT.pdfA SMART VOICE CONTROLLED PICK AND PLACE ROBOT.pdf
A SMART VOICE CONTROLLED PICK AND PLACE ROBOT.pdf
Aakash Sheelvant
 
A SMART VOICE CONTROLLED PICK AND PLACE ROBOT
A SMART VOICE CONTROLLED PICK AND PLACE ROBOTA SMART VOICE CONTROLLED PICK AND PLACE ROBOT
A SMART VOICE CONTROLLED PICK AND PLACE ROBOT
IRJET Journal
 
TFG_Oriol_Torta.pdf
TFG_Oriol_Torta.pdfTFG_Oriol_Torta.pdf
TFG_Oriol_Torta.pdf
houssemouni2
 
Bachelors Project Report
Bachelors Project ReportBachelors Project Report
Bachelors Project Report
Manu Mitra
 
Robot arm ppt
Robot arm pptRobot arm ppt
Robot arm ppt
Minuchaudhari1
 
Embedded system for traffic light control
Embedded system for traffic light controlEmbedded system for traffic light control
Embedded system for traffic light control
Madhu Prasad
 
BLE_Indoor_Report
BLE_Indoor_ReportBLE_Indoor_Report
BLE_Indoor_ReportTianhao Li
 
IRJET - Autonomous Eviscerating BOT using ANT Colony Optimization
IRJET -  	  Autonomous Eviscerating BOT using ANT Colony OptimizationIRJET -  	  Autonomous Eviscerating BOT using ANT Colony Optimization
IRJET - Autonomous Eviscerating BOT using ANT Colony Optimization
IRJET Journal
 
Report on Pick and Place Line following Robot
Report on Pick and Place Line following RobotReport on Pick and Place Line following Robot
Report on Pick and Place Line following Robot
Pradeep Yadav
 

Similar to Report (20)

PC-based mobile robot navigation sytem
PC-based mobile robot navigation sytemPC-based mobile robot navigation sytem
PC-based mobile robot navigation sytem
 
Summer Training Program Report On Embedded system and robot
Summer Training Program Report On Embedded system and robot Summer Training Program Report On Embedded system and robot
Summer Training Program Report On Embedded system and robot
 
Automatic P2R Published Paper P1277-1283
Automatic P2R Published Paper P1277-1283Automatic P2R Published Paper P1277-1283
Automatic P2R Published Paper P1277-1283
 
H011114758
H011114758H011114758
H011114758
 
Developing a Humanoid Robot Platform
Developing a Humanoid Robot PlatformDeveloping a Humanoid Robot Platform
Developing a Humanoid Robot Platform
 
High-Speed Neural Network Controller for Autonomous Robot Navigation using FPGA
High-Speed Neural Network Controller for Autonomous Robot Navigation using FPGAHigh-Speed Neural Network Controller for Autonomous Robot Navigation using FPGA
High-Speed Neural Network Controller for Autonomous Robot Navigation using FPGA
 
Wireless robo Report
Wireless robo  ReportWireless robo  Report
Wireless robo Report
 
A Review On AI Vision Robotic Arm Using Raspberry Pi
A Review On AI Vision Robotic Arm Using Raspberry PiA Review On AI Vision Robotic Arm Using Raspberry Pi
A Review On AI Vision Robotic Arm Using Raspberry Pi
 
Nasa final report
Nasa final reportNasa final report
Nasa final report
 
Design and implementation of an sms based robotic system for hole- detection ...
Design and implementation of an sms based robotic system for hole- detection ...Design and implementation of an sms based robotic system for hole- detection ...
Design and implementation of an sms based robotic system for hole- detection ...
 
A SMART VOICE CONTROLLED PICK AND PLACE ROBOT.pdf
A SMART VOICE CONTROLLED PICK AND PLACE ROBOT.pdfA SMART VOICE CONTROLLED PICK AND PLACE ROBOT.pdf
A SMART VOICE CONTROLLED PICK AND PLACE ROBOT.pdf
 
A SMART VOICE CONTROLLED PICK AND PLACE ROBOT
A SMART VOICE CONTROLLED PICK AND PLACE ROBOTA SMART VOICE CONTROLLED PICK AND PLACE ROBOT
A SMART VOICE CONTROLLED PICK AND PLACE ROBOT
 
TFG_Oriol_Torta.pdf
TFG_Oriol_Torta.pdfTFG_Oriol_Torta.pdf
TFG_Oriol_Torta.pdf
 
Bachelors Project Report
Bachelors Project ReportBachelors Project Report
Bachelors Project Report
 
Robot arm ppt
Robot arm pptRobot arm ppt
Robot arm ppt
 
Embedded system for traffic light control
Embedded system for traffic light controlEmbedded system for traffic light control
Embedded system for traffic light control
 
BLE_Indoor_Report
BLE_Indoor_ReportBLE_Indoor_Report
BLE_Indoor_Report
 
pdfreport
pdfreportpdfreport
pdfreport
 
IRJET - Autonomous Eviscerating BOT using ANT Colony Optimization
IRJET -  	  Autonomous Eviscerating BOT using ANT Colony OptimizationIRJET -  	  Autonomous Eviscerating BOT using ANT Colony Optimization
IRJET - Autonomous Eviscerating BOT using ANT Colony Optimization
 
Report on Pick and Place Line following Robot
Report on Pick and Place Line following RobotReport on Pick and Place Line following Robot
Report on Pick and Place Line following Robot
 

Report

  • 1. 1 . E C P E 1 9 6 : S E N I O R P R O J E C T II P R O F E S S O R : D R . K H O I E S P O N S O R : D R . R O S S S P R I N G 2 0 1 5 C H A M B E R S T O U R G U I D E R O B O T W I T H V I S I O N , H E A R I N G A N D S P E E C H M E M B E R S O F T H E T E A M: J E Z R Y L G I R O N B I N G Z H A N G E V A N B O M G A R D N E R K A R L A D U R A N
  • 2. 2 Table of Contents 1 Timetable and Distribution of Tasks 8 2 Project Overview 9 2.1 Description of the project 9 2.2 Project Requirements and Specifications 9 3 System Design 3.1 Navigation and Obstacle Avoidance System 11 13 4 Hardware and Software Design Research 16 4.1 Voice Recognition System 16 4.2 Speech System 17 4.3 Drive System 18 4.4 Power System 19 4.5 Obstacle Avoidance System 20 4.6 Navigation System 22 4.7 Structure 26 5 Project Development and Assembly 27 5.1 Speech System, Tour Audio 27 5.2 Navigation System 5.3 Drive System 27 29 5.4 Power System 30 5.5 Speech and Hearing System 30 5.6 Obstacle Avoidance System 31 6 Justifications for Hardware/Software 33
  • 3. 3 6.1 Drive System 33 6.2 Power System 34 6.3 Robot Structure 35 6.4 Speech and Voice Recognition 36 6.5 Obstacle Avoidance System 36 6.6 Navigation System 39 6.7 Microcontroller Choice 41 7 Results of Fully-assembled System 42 7.1 Assembled System Test Results 42 7.2 Dollar Budget 43 7.3 Power Budget 44 8 Conclusions 45 8.1 USB hub and RPi Power 45 8.2 Microcontroller and RPi communication 45 8.3 Structure 45 8.4 Power System 45 8.5 Drive System 46 9 Applications, Social, Environmental and Economical 47 9.1 Global and Environmental Impact 48 9.2 Impact of Robotics 49 9.3 Social Impact of Technological Advancements 50 10 Lessons Learned and Future Improvements 53 10.1 Lessons Learned 53 10.1.1 Don’t Trust the inventoried parts 53 10.1.2 Planning is of paramount importance 53 10.1.3 Perfectly functional code doesn’t mean a perfectly functional system 53
  • 4. 4 10.1.4 Physical systems need to be tested on the ACTUAL site 54 10.2 Design Improvements 54 10.2.1 Voice Recognition Software 54 10.2.2 Future improvements 54 10.2.3 Microphone and Speaker Quality 54 10.2.4 More Stable Wheels 55 10.2.5 Higher Quality Sensors 55 References 56
  • 5. 5 Table of Figures Figure 1, Behavior Diagram 10 Figure 2, Functional Diagram Figure 3, 8 Sensor Configuration Figure 4, High Level Flowchart of Obstacle Avoidance Behavior 11 12 13 Figure 5, 12V DC to DC convert circuit 19 Figure 6, HC-SR04 Ultrasonic Sensor Figure 7, Sample Diagram for Potential Fields Figure 8, Pololu IR Beacon Transreceiver Pair Figure 9, Xbee Module Pair 20 21 22 23 Figure 10, Testing the Smart Beacon Technology Kit 24 Figure 11, Robot Structure 25 Figure 12, Beacon Pairing Concept 27 Figure 13, nRF51822 Bluetooth Smart Beacon Kit 28 Figure 14, Robot Driver System 32 Figure 15, DC to DC Converter for Voice Recognition and Speech System 33 Figure 16, Low Current 5V and 3.3V Regulator 34 Figure 17, Back Free Wheel and Driving Wheel 34 Figure 18, Second Floor of the Robot Figure 19, GP2Y0A02YK0F Sharp Long Range IR Sensor Figure 20, HMC5883L Compass Sensor Figure 21, CTC Hallway Ceiling Elevation Model Figure 22, Fully Assembled Robot in Lab 35 37 38 39 41
  • 6. 6
  • 7. 7 List of Tables Table 1, Timetable of Scheduled Tasks Table 2, Xbee Wireless Distance Calculation Table 3, Speech and Hearing Test Results Table 4, Sensor Range Test Results Table 5, Functional Performance Results Table 6, GPIO Pin Requirements Table 7, Parts List and Cost Table 8, System Power Budget 7 27 30 30 31 40 42 43
  • 8. 8 Abstract The Chambers tour guide robot project is a multidisciplinary engineering project that incorporates electrical and computer engineering skills to design an autonomous robot that is capable of providing a physical and verbal tour of the Chambers Technology Center (CTC) building at the University of the Pacific with minimal human oversight. This robot can autonomously adjust its position to navigate around static and mobile obstacles while traveling to and introducing points of interest within the structure. Mobility is achieved by utilizing two wheels rotated by independent stepper motors and two "free" wheels for stability. Obstacle avoidance is achieved through the use of ultrasonic sensors positioned at 8 points around the robot, a compass sensor and the Bug0 algorithm. Localization is achieved through the use of an additional sensor pointed toward the ceiling which takes advantage or the unique physical structure of the building. Obstacle avoidance and localization sensor data is processed and interpreted by an Arduino Mega 2560 microcontroller which sends instructions to an Arduino Pro Mini 328 that controls the stepper motors and propels the robot. Tour participants can interact with the robot by issuing voice commands and hear verbal descriptions of each tour destination. Voice command and verbal descriptions will be handled by a Raspberry Pi running Voicecommand software that will interpret spoken commands and play tour audio based on location information received from the Arduino Mega 2560.
  • 9. 9 Chapter 1: Timetable and Distribution of Tasks The development of the tour guide robot was an in depth process requiring the coordination of 4 different team members. The systems and the specific tasks for each system operator is detailed in table 1 for the Spring 2015 semester. Table 1 : Timetable of scheduled tasks Chapter 2: Project Overview
  • 10. 10 2.1 Descriptionof the Project The Chambers tour guide robot project is a multidisciplinary engineering project that incorporates electrical and computer engineering skills to design an autonomous robot that is capable of providing a physical and verbal tour of the Chambers Technology Center building at the University of the Pacific with minimal human oversight. This robot can autonomously adjust itself based on the surrounding environment by detecting mobile and static obstacles as well as determining its location within the building. Mobility is achieved by utilizing two wheels controlled by stepper motors and two "free" wheels for stability. Interaction with the physical environment is made possible by a suite of sensors located at strategic positions on the robot chassis. Data from the sensors which will be used to execute the simple but efficient Bug0 behavior for navigating around obstacles. Perfect relative orientation is maintained through the stepper motor's measured rotations and additional guidance is provided by a compass sensor, making sure movement will be parallel to the walls. The robot will localize it’s position within the building through the use of a vertical sensor and the unique architecture of the hallway. Voice recognition will be utilized to respond to audience commands opening a degree of interactivity with its users. All data processing will be handled by an ATmega Microcontroller and a Raspberry Pi microcomputer. Plastic, wood and metal will be used for the construction of the robot's chassis to minimize cost, maximize stability and minimize hazard to the environment. Per the project specifications power for the motors and all systems will be provided by rechargeable batteries which have dedicated purposes dictated by the systems needs. One battery will provide power to the drive motor and a separate will provide power to the onboard electrical systems(e.g. the microcontroller and microcomputer). The electrical systems battery system shall incorporate several power regulating IC’s to create the desired voltage level to drive the necessary devices. 2.2 Final ProjectRequirements and Specifications • The robot structure: The robot should be approximately three feet tall and be able to navigate its way through the CTC building first floor hallway. The robot should avoid bumping into objects or people. It should be able to go forward, backward and turn left and right. • Tour Structure: The tour should include any laboratory or classroom on the CTC first floor hallway as well as the Co-Op board. During the tour the robot is not required to physically leave the hallway. At the end of the tour the robot should return to a set destination. • Vision: The robot will utilize sensors to interpret its location and assist in object vs. person identification. • Speech: The robot should begin its tour with a greeting. At each tour destination the robot will communicate via audio information provided by the PR department. If an object is encountered during travel and that object is a human being it should communicate a greeting and/or request to pass.
  • 11. 11 • Hearing: The robot should be able to receive verbal instruction to start, stop, and mute. • Aesthetic: The robot should have a semi finished look. Building materials should be modern. • Power: The robot should run on rechargeable batteries that can be charged via a plug without having to remove the batteries.
  • 12. 12 Chapter 3: System Design Figure 1 depicts the behavioral diagram for the Chambers Technology building tour guide robot. It is representative of the lowest level of abstraction of the completed system and displays the system inputs and outputs. For the robot to respond based on the environment, three different inputs are given: The obstacle input, which signals whenever the robot encounters an object or person that obstructs or blocks the path and stops the robot from continuing with the tour. The second input refers to the verbal commands that users can give the robot in order to command the robot to start the tour or inquire about its creators. The third input, location signal, is used to determine the actual location or position of the robot in the Chambers Technology Center. The output signals are produced based on the inputs received, for example, if someone is blocking the hallway, the robot will try to see if there is another possible way of continuing with the tour; it will look for an open space and it may need to turn left, right, stop, or go in reverse in order to continue ahead. The speaking output refers to the specific recording that will be played by a Raspberry Pi microcomputer based on the location signal received. The identify room output determines the specific classroom that the robot locates within the Chambers Technology Center Building. Figure 1: Behavioral Diagram
  • 13. 13 Figure 2: Functional Diagram Figure 2 represents the completed system which will support the required functionality. It is depicted at a high level of abstraction which demonstrates the effective data transferred between electrical components. As is shown, speech and hearing is accomplished with the Raspberry Pi microcomputer, and navigation and motor control are accomplished with individual microcontrollers. The robot will receive verbal commands through the microphone and then the Raspberry Pi microcomputer will respond by playing a specific recording if it is not able to recognize the instruction that was given or send a signal to the Arduino Mega microcontroller (depicted as Microcontroller_1 Navigation in figure 2) whenever the command “Begin tour” is recognized. In addition to the signal received by the Raspberry Pi, the Arduino Mega will also receive signals from the sensors and the compass sensor. Based on these three different inputs,
the Arduino Mega sends a three-bit signal to the Arduino Pro Mini 328 (depicted as Microcontroller_2 Motor Controller in Figure 2), which controls the motors.

3.1 Navigation and Obstacle Avoidance System:
The final sensor configuration is composed of multiple sensors placed around the robot to provide object proximity data. A total of eight Sharp GP2Y0A02YK0F long range IR sensors are placed around the robot. This is the minimum number of sensors that provides the robot with reliable obstacle avoidance in this environment.

Figure 3: 8 Sensor Configuration

Sensor (1) is the primary sensor, detecting obstacles directly in front of the robot. Since the sensor's effective cone leaves blind spots for obstacles slightly displaced from the immediate front of the robot, sensors (2) and (3) are needed. These two sensors cover those blind spots and handle angled walls that might not reflect signals back to the frontal sensor (1) because of the reflection angle. Together, sensors (1), (2) and (3) provide reliable frontal sensing for the robot. Sensors (4) and (5) cover the sides, and sensor (6) helps the robot back up when needed. Sensors (7) and (8) are the main ranging sensors that help the robot navigate using landmarks at its sides.

The frontal sensors periodically check for obstacles impeding the robot's advance. Once an obstacle is detected, the robot attempts to move around it by turning either left or right; the side sensors provide the information needed to make an educated choice between the two directions. The robot will
move in the chosen direction until the path toward the goal is clear. The HMC5883L compass sensor is used to execute measured turns and to auto-correct orientation.

To allow moving objects to clear the robot's path naturally, the robot pauses upon detecting an obstacle. This gives moving obstructions time to pass by and avoid the robot themselves (which they usually can, since moving obstacles are generally people). Ideally this prevents the robot from wasting time trying to clear the obstacle on its own. The Bug0 algorithm is used as the overarching path-planning behavior. Figure 4 details the process, and a minimal code sketch of the same loop follows the figure.

Figure 4: High Level Flowchart of Obstacle Avoidance Behavior
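The listing below is a minimal sketch of the pause-then-detour loop that Figure 4 describes. The pin numbers, the clearance threshold, the placeholder distance conversions, and the 3-bit command encoding are all illustrative assumptions rather than the project's actual code.

    // Minimal Bug0-style loop: go straight while clear, pause when blocked,
    // then detour toward the more open side. All pins/thresholds are assumed.
    const int CMD_PINS[3] = {2, 3, 4};         // 3-bit command lines to the drive MCU
    enum Cmd { STOP = 0, FORWARD = 1, LEFT = 2, RIGHT = 3 };
    const float CLEAR_CM = 60.0;               // assumed clearance threshold

    void sendDriveCommand(int cmd) {           // drive MCU latches these three lines
      for (int i = 0; i < 3; i++) digitalWrite(CMD_PINS[i], (cmd >> i) & 1);
    }

    // Placeholder reads: a real build converts each Sharp IR voltage to centimeters.
    float frontCm() { return analogRead(A0) * 0.5; }
    float leftCm()  { return analogRead(A1) * 0.5; }
    float rightCm() { return analogRead(A2) * 0.5; }

    void setup() {
      for (int i = 0; i < 3; i++) pinMode(CMD_PINS[i], OUTPUT);
    }

    void loop() {
      if (frontCm() > CLEAR_CM) {        // Bug0: head straight for the goal while clear
        sendDriveCommand(FORWARD);
        return;
      }
      sendDriveCommand(STOP);            // pause so a moving obstacle (a person) can pass
      delay(3000);
      if (frontCm() > CLEAR_CM) return;  // the obstruction cleared itself; resume
      // Still blocked: skirt the obstacle on the more open side, then re-seek the goal.
      sendDriveCommand(leftCm() > rightCm() ? LEFT : RIGHT);
      delay(800);
    }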
A vertical laser range sensor watches for the lower ceiling elevations that mark each of the rooms along the hallway. Detecting one of these lower elevations triggers a behavior that depends on the robot's state; the sequence of states lets the robot work out which room it is actually interacting with, and a unique audio track is played for each room. At various points in the hallway, auto-correction points are implemented using the compass sensor so the robot can adjust for significant offsets caused by unpredictable wheel performance.

Chapter 4: Research on Different Hardware/Software Designs
4.1 Voice Recognition System:
To meet the project specifications, the hardware supporting the voice recognition system must be capable of interfacing with a microphone, running or connecting to speech-to-text (STT) software, and providing general purpose input/output (GPIO) pins to send digital signals to the other electrical systems. The software requirement calls for a computer capable of running programs, and the digital signal output implies that a microcontroller is also needed.

The initial and final hardware chosen to support voice recognition is the Raspberry Pi microcomputer. Alternative microcomputers such as the BeagleBone and Humming Bird were considered, as they provide more RAM and processor speed. However, once the RPi was established as capable, there was no need for these alternatives, which required more power and were more expensive. As the RPi's 700 MHz processor and 512 MB of RAM were both adequate to run the already selected Voicecommand [3] software, and it included a bank of GPIO pins for intermodule communication, it was chosen to provide the hardware support for the voice recognition system in the final design.

Regarding the voice recognition software, several open source programs such as JASPER [1], JULIUS [2], and POCKETSPHINX were initially considered and researched for feasibility. As all of them use local STT engines that were occasionally unreliable and require a lot of local data storage, further research was performed. An alternative called Voicecommand uses an internet connection to run input through Google's advanced and reliable STT engine. Voicecommand offered better voice recognition and more support than the initial candidates: the software was being regularly updated, and problems with command execution were being actively discussed on blogs. As a result, implementing commands, running software, and interfacing with GPIO pins could all be achieved more efficiently with Voicecommand. In light of these advantages, Voicecommand is the software used to achieve speech recognition in the final design.

4.2 Speech System:
Per the speech specification dictated by the project sponsor, the robot must play tour audio when arriving at each point of interest. To achieve this goal the speech system
must be able to receive signals indicating which point of interest the robot has reached, and it must have a means of sending different audio signals to a speaker. The supporting hardware therefore needs memory capable of storing the tour audio files, GPIO pins to receive location data from the other systems, and the capability to turn those digital audio files into electrical signals that a transducer can reproduce as the original recorded sounds. Conveniently, all of the above requirements can be fulfilled by a microcomputer with the appropriate peripherals. After brief research the Raspberry Pi (RPi) was chosen to test the feasibility of this system. As the RPi has a built-in sound card with a 3.5 mm jack, an SD flash memory slot, and 17 GPIO pins, and was already chosen to support the voice recognition system, it provides the hardware support for the final speech system.

The descriptions used for each of the classrooms included in the tour of the Chambers Technology Center are the following:

John T. Chambers, Room 112
The CIMS laboratory is a fully integrated collection of CNC machine tools, robots, a flexible manufacturing system, and measurement and product inspection equipment designed for teaching and demonstrating state-of-the-practice manufacturing methods to undergraduate and graduate students. This laboratory supports all types of student projects and faculty research, and also serves as a resource for local middle and small size companies. It is especially valuable for the courses ENGR 15, MECH 100, MECH 120, MECH 125, MECH 141, MECH 175, and MECH 191.

John T. Chambers, Room 113
This lab is a large, open lab without partitions, which supports collaboration and creativity. The lab needs to be flexible in configuration and functionality; flexibility and adaptability are critical. Flexible computing is achieved through thin clients such as the Sun Ray laptops. This allows support for multiple environments while adding mobility and wireless networking. In addition to providing state-of-the-art support for collaborative research and learning, the room can also be used for lecturing.

John T. Chambers, Room 114
This is a hands-on teaching classroom. Each student has a computer workstation so the instructor can assign problems in class and interact with the students as they solve them. Each system is capable of running both Microsoft Windows and some version of Linux or Unix.
John T. Chambers, Room 115
The two-tier studio classroom is not meant to replace the laboratory experience, but to enhance it. Instructors are able to introduce material through a standard lecture, and lab workstations are available to each student by simply rotating around. Digital, electrical, and electronics courses utilize the space.

4.3 Drive System:
The drive system must bear the whole weight of the robot and provide enough force to move it, while remaining easy to control and stable. It receives signals from the other electrical systems and translates them into commands that dictate the desired motor movement, allowing the robot to go straight, turn right, turn left, stop, speed up, and slow down. The drive system uses an Arduino Pro Mini 328 microcontroller, two H-bridge motor drivers, and two electric motors. Based on the results of the localization and obstacle avoidance systems, navigation instructions are sent to a program function that drives signals from the Arduino Pro Mini 328 to the H-bridge motor drivers, which turn the motors.

There are several candidate driver boards for the motors, each with different specifications. Based on this research, the L298N driver board was considered first; it can handle a 2 A average current and a 3 A peak current. Another board considered was the TB6560, an adjustable driver that can handle a working current from 1 A to 3.5 A with a maximum of 4 A. The final drive system uses the TB6560 because the measured working current for the robot is 2.6 A, which exceeds the L298N's limit, and the TB6560 provides a safety margin. The TB6560 operates the motors based on four data lines connected to the microcontroller.

Initially two motor types were considered: DC motors and stepper motors. For either type, the torque must be larger than roughly 20 kg-cm because the robot weighs about 15 kg. The final robot design uses stepper motors. Compared to DC motors, stepper motors run more accurately because they rotate in controllable 0.9 degree steps, and the motor's speed depends on the frequency of step pulses from the microcontroller (a short sketch of this pulsing follows). Although a DC motor can deliver more torque than a stepper motor, it is less efficient, losing roughly forty percent of its power while running. Another problem is that, due to manufacturing and material variation, every DC motor runs at a slightly different speed, so a two-wheeled, two-motor drive system built on DC motors would require an additional solution to run perfectly straight.
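As referenced above, the sketch below shows how a step/direction driver board such as the TB6560 can be pulsed from the microcontroller. The pin assignments and the 600 Hz step rate are assumptions chosen for illustration, not values taken from the project.

    // One stepper on a step/direction driver (e.g. the TB6560's clock and
    // direction inputs). Pin numbers and the 600 Hz step rate are assumed.
    const int STEP_PIN = 8;
    const int DIR_PIN  = 9;
    const float STEPS_PER_REV = 360.0 / 0.9;   // 400 steps at 0.9 degrees per step

    void setup() {
      pinMode(STEP_PIN, OUTPUT);
      pinMode(DIR_PIN, OUTPUT);
    }

    // Speed is set purely by pulse frequency: stepsPerSecond pulses turn the
    // shaft stepsPerSecond / 400 revolutions per second, identically on both
    // motors -- which is why two steppers track straight without feedback.
    void stepAt(unsigned int stepsPerSecond, bool forward) {
      digitalWrite(DIR_PIN, forward ? HIGH : LOW);
      unsigned long halfPeriodUs = 500000UL / stepsPerSecond;
      digitalWrite(STEP_PIN, HIGH);
      delayMicroseconds(halfPeriodUs);
      digitalWrite(STEP_PIN, LOW);
      delayMicroseconds(halfPeriodUs);
    }

    void loop() {
      stepAt(600, true);                       // ~1.5 revolutions per second
    }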
4.4 Power System:
Based on the robot's electrical system requirements, the power system must supply 5 V for the Raspberry Pi and the sensors, and 3.3 V for the Raspberry Pi's signal levels, so DC-to-DC converter circuits from 12 V to 5 V and from 12 V to 3.3 V are needed. As the integrated circuits available for this depend on the required current limit, the potential current needs of the individual systems must be considered: the Raspberry Pi needs 700 mA, and the microcontrollers each draw less than 0.5 A. The LM7805, LM1085, and LTC1147CS8 (all IC voltage regulators) each provide a steady 5 V; the LM7805's output current is 1 A, the LM1085's is 3 A, and the LTC1147CS8's is 0.05 A. In the final power system, the LM7805 supplies the microcontroller and other sensors, while the LM1085 supplies the Raspberry Pi because of its power needs and the LM1085's ability to handle the RPi's peak current of 1 A. To provide the needed 3.3 V the system uses an LM1117, a cost-effective DC regulator with a 0.8 A output, which is ideal because the components requiring this voltage have a very low current draw. Figure 5 shows the main DC-to-DC circuit. The Raspberry Pi's regulator is separate; it carries a large current, so a fan was added to keep its temperature down.

Figure 5: 12V DC to DC Converter Circuit

4.5 Obstacle Avoidance System:
The obstacle avoidance system can be very complex or relatively simple depending on the needs of the design problem. The shape and geometry of our designated working location allowed for a simple and efficient method of obstacle avoidance.
The components of a working obstacle avoidance system are the sensors that give the robot information about its environment and the behavior algorithm the robot uses to make decisions about the obstacles it sees.

The sensors were considered and researched first. We considered several types: infrared (IR) sensors, ultrasonic sensors, and even laser range finders.

Infrared sensors are among the most common sensors used for obstacle detection. They are relatively cheap and fairly easy to use, outputting analog voltages that scale with the distance of the object in front of them. Because they use infrared light, IR sensors generally have a narrower beam angle and a shorter overall range than other sensor types. They are very good at short distance ranging and general proximity detection. However, since they rely on light, a number of factors can skew their accuracy: significant ambient light can drastically change a distance reading, and the color or reflectivity of the object also affects the measurement. For these reasons IR sensors are mainly used indoors, away from the large amount of sunlight outdoors.

Ultrasonic sensors are also very common for detecting obstacles or movement. They are relatively cheap, though generally more expensive than IR sensors. Instead of light, ultrasonic sensors use sound (sonar) to measure the range of objects in front of them. Because they use sound, they generally have a wider cone or beam angle than IR sensors, as well as a longer range and better accuracy at longer distances. The only things that skew an ultrasonic sensor's accuracy are sound-absorbent objects such as sponges or soundproofing materials, and crosstalk or "ghost" signals, stray sound waves that have bounced off other walls. (A minimal read sketch for this sensor follows Figure 6.)

Figure 6: HC-SR04 Ultrasonic Sensor
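For reference, reading the HC-SR04 pictured in Figure 6 follows the standard trigger/echo sequence sketched below; the pin wiring is an assumption.

    // Standard HC-SR04 read: a 10 us trigger pulse, then time the echo.
    const int TRIG_PIN = 6;                    // assumed wiring
    const int ECHO_PIN = 7;

    void setup() {
      Serial.begin(9600);
      pinMode(TRIG_PIN, OUTPUT);
      pinMode(ECHO_PIN, INPUT);
    }

    float readDistanceCm() {
      digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
      digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
      digitalWrite(TRIG_PIN, LOW);
      // Round-trip echo time; pulseIn returns 0 if nothing echoes within 30 ms.
      unsigned long echoUs = pulseIn(ECHO_PIN, HIGH, 30000);
      return echoUs / 58.0;                    // ~58 us of round trip per centimeter
    }

    void loop() {
      Serial.println(readDistanceCm());
      delay(100);
    }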
Laser range finders are very well made and sit higher up the sensor quality ladder. As a result, laser-based sensors are generally far more expensive than IR or ultrasonic sensors. They have a significantly longer range than their counterparts because of the strength of their signal, and since the laser is a tightly concentrated beam, the cone or beam angle is very narrow, which gives them a narrow field of view.

Another component potentially useful for the obstacle avoidance and navigation systems is the compass sensor. A compass sensor measures the earth's magnetic field and reports a numeric bearing relative to north, south, east, west, and everything in between. It gives a reasonable approximation of heading, which makes it very useful for making accurate angled turns, auto-correcting orientation, and keeping the robot moving relatively straight through a hallway.

The data gathered by the surrounding sensors only lets the robot avoid obstacles when paired with a behavior algorithm: how should it react to an object in front of it? That is the question a behavior algorithm answers. As mentioned at the start of this chapter, the shape and geometry of the working location were taken into consideration when picking an appropriate algorithm. The location is a simple rectangular hallway with minimal static obstacles and occasional moving obstacles (humans). We looked at the Bug algorithms and the potential fields algorithm to guide the robot's obstacle avoidance.

Bug algorithms are the simplest of the candidates: the robot theoretically follows a straight line toward the destination, and when it encounters a blocking object it follows the object's edge until it can see the destination again, then resumes movement. Bug algorithms are straightforward to use, but there may be challenges with orientation.

Potential fields, on the other hand, is slightly more complex. It treats the whole location as a sea of vectors leading toward the destination: given the map and the obstacles on it, the algorithm computes vectors that lead to the goal, and the robot follows these vectors until it arrives. This algorithm has obvious problems dealing with moving obstacles. (A toy numeric illustration of the vector-sum idea follows.)
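To make the vector idea concrete, here is a toy computation of the field at one point: the goal attracts, each obstacle repels with a force that falls off with distance squared, and the robot steers along the sum. The gains and positions are arbitrary illustrative numbers, not values from the project.

    // Toy potential-fields evaluation at a single robot position.
    #include <math.h>
    #include <stdio.h>

    typedef struct { double x, y; } Vec;

    Vec fieldAt(Vec robot, Vec goal, const Vec *obs, int n) {
      Vec v = { goal.x - robot.x, goal.y - robot.y };   // attraction toward the goal
      for (int i = 0; i < n; i++) {
        double dx = robot.x - obs[i].x, dy = robot.y - obs[i].y;
        double d = sqrt(dx * dx + dy * dy) + 1e-9;
        v.x += 5.0 * dx / (d * d * d);                  // unit direction scaled by 1/d^2
        v.y += 5.0 * dy / (d * d * d);
      }
      return v;
    }

    int main(void) {
      Vec robot = {0, 0}, goal = {10, 0}, walls[] = {{5, 0.5}};
      Vec v = fieldAt(robot, goal, walls, 1);
      // The repulsion from the wall at (5, 0.5) bends the heading below the x-axis.
      printf("steer along heading %.1f degrees\n", atan2(v.y, v.x) * 57.2957795);
      return 0;
    }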
Figure 7: Sample Diagram for Potential Fields

There are many possible solutions that could support this system, but the above are the ones we researched for the senior project.

4.6 Navigation System:
The navigation system provides a way for the robot to travel through the environment and locate its designated destination with repeatable precision. We looked at camera solutions and different types of beacon technology to achieve this functionality.

Many robots use cameras to navigate their surroundings, using line fitting and visual cues to mark the environment and work out how to reach the destination. Cameras require a significant amount of processing power and a platform that can handle image processing. That demand raises the cost significantly, with the alternative being to wait a significant time for the robot to finish interpreting the images from the camera. It is a sophisticated solution with correspondingly sophisticated resource requirements.

An alternative solution we were (slightly) biased toward is beacon technology. The idea is to have a device pair, a beacon transmitter and a beacon receiver, where the receiver can find its way to the transmitter using the signals the transmitter provides. The transmitter would ideally be placed at the destination and the receiver on the robot.

We looked at an IR beacon pair. The Pololu IR Beacon Transceiver Pair is a pair of devices that can detect one another omnidirectionally in one plane. Each board has IR detectors and emitters, allowing it to detect and follow the other board wherever it goes. The one big limitation of this solution was the effective range of the
beacons: the range was not long enough to accommodate the distances in the hallway. Also, at $50.00 for a single pair, it was fairly costly for just one pair of boards.

Figure 8: Pololu IR Beacon Transceiver Pair

Xbee wireless technology is another solution we researched. We needed a wireless implementation of the beacon method, and Xbee seemed a valid candidate. Xbee allows communication between devices using radio waves, and the distance between two Xbee devices can be estimated from a signal's time of flight (ToF) and its received signal strength (RSSI). This capability made Xbee a viable approach to the beacon method. The Xbee's range was considerable, extending even through miles. That extended range could work against our situation, however, because the smallest distance resolution was around 5 meters, far too coarse for our needs. The Xbee solution also wasn't cheap: a working link between two microcontrollers requires two Xbee shields to carry the two Xbee modules. Each module is roughly $25.00 and each shield roughly $15.00, so a single wireless pair amounts to about $80.00.

Figure 9: Xbee Module Pair

Long-range lasers were also considered for this solution. If a sensor could cover the whole length of the hallway, we could use it to divide the
location into a distance-varying topographic map, where distance is the marker the robot relies on to know it is near a certain room in the hallway. Unfortunately, affordable long-range sensors that cover the whole hallway don't exist, and the ones that came close were very expensive. The LIDAR Lite laser range finder could cover up to 50 meters (about half of the hallway) but costs $90.00 each.

Another alternative considered for the navigation system involved Bluetooth beacon technology. The Bluetooth smart beacon we ordered and tested was the nRF51822 Bluetooth Smart Beacon Kit from Nordic Semiconductor. The kit comes with an application, nRF Beacon, downloadable from the Apple iTunes store for iOS and from the Google Play store for Android. The beacon chip included in the kit was attached to the door of the specific place we wanted the robot to identify, and the application let us select the event to trigger after identifying the desired beacon. Each beacon chip has its own identification code, so different events can be triggered for different beacons. Once opened, the application tracks the signal strength of the chip attached to the door and offers three proximity options: far, near, or next to the beacon. We selected the option that triggers when the smartphone is next to the beacon. The only trigger actions provided were playing an alarm or opening an application called Tasker. After researching Tasker we found that it can run SL4A Python scripts, which was the key to setting a specific trigger when the beacon was detected: a Python script was written and included in the Tasker application, and whenever the phone came next to the beacon attached to the door, Tasker opened by itself and ran the script, which played a specific recording based on the information received. Figure 10 shows the beacon technology being tested.

This navigation approach was accurate and had the advantage of reusability, since we would only need to change the positions of the beacons and the Python script to execute different actions. The downside is cost: the final system requires only one sensor that detects changes in ceiling height, whereas the smart beacon approach needs one beacon for each classroom on the tour at $35.00 each. With at least 5 smart beacons required, the total would have been $175.00.
Figure 10: Testing the Smart Beacon Technology Kit

4.7 Structure:
The initial structure design of the robot was an amalgamation of machinable polymers and metal. However, after attempting to procure these materials and have them machined, this was determined to be infeasible (the allocated budget to build this listening, talking, autonomous robot was a mere 300 dollars, and support from the campus machine shop was difficult to obtain). As an alternative that would minimize cost and allow easier construction of the components, wood was used.
Figure 11: Robot Structure
Chapter 5: Results of Hardware and Software Testing

5.1 Speech System, Tour Audio
A tour script has been completed and tested. The tour contains valid information about the tour destinations provided by the public relations department. Each description was converted to a robot voice recording and stored on the Raspberry Pi SD card for playback access.

5.2 Navigation System:
Testing the navigation system was fairly challenging because most of the products we wanted to evaluate were not in our possession from the start; there was a lot of waiting and budget balancing. We purchased the Pololu IR Beacon Transceiver Pair to test, but unfortunately the unit delivered to us was faulty, so the test could not be performed. We later decided to discard the planned test anyway, because the product did not have enough theoretical range to work over the long hallway.

We then tried to construct our own beacon pair using cheap HC-SR04 ultrasonic sensors. We programmed one sensor to transmit constant sonar pulses while disabling the transmitter of the other so it could only receive. Using this pair we simulated a small-scale scenario in which the small robot prototype passed by a "room" fitted with the ultrasonic transmitter, with the ultrasonic receiver pointed toward the general direction of the room. The robot was programmed to turn toward the ultrasonic signal and stop when near enough to the beacon. This simulation worked perfectly at small scale, but we realized all the transmitters would have to be wired together, since they must be synchronized for the receiver to make sense of the pulses. Running wires across most of the rooms in the CTC hallway was not feasible, so this idea was discarded as far-fetched.
Figure 12: Beacon Pairing Concept

We purchased and tested a set of Xbee devices to evaluate their wireless communication capabilities. We wanted to measure the distance between two devices, each carrying an Xbee module, using code that combined the signal's time of flight (ToF) and RSSI (signal strength) to estimate the separation. The two devices were set at opposite ends of the hallway in the CTC building, and we took multiple readings while gradually moving one device closer to the other. The results are recorded in the table below.

Actual Distance (m)    Xbee Method Reading (m)
50                     62
40                     31
30                     27
20                     23
10                     12

Table 2: Xbee Wireless Distance Calculation

The Xbee method had incredible range, but its inaccuracy and unpredictability were hard to compensate for. Rooms two meters apart would be difficult to differentiate with this method of distance measurement, and the readings occasionally produced absurd numbers. The method does not cope well with closed spaces, because the signals bounce around the walls and skew the signal strength at the time of arrival. (A sketch of the standard RSSI-to-distance model follows.)
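The report does not record the exact formula used to convert RSSI into meters. A common choice is the log-distance path-loss model sketched below, which also shows why indoor multipath produces the kind of errors seen in Table 2; the calibration constants here are assumptions.

    // Log-distance path-loss model: with n = 2, distance doubles for roughly
    // every 6 dB of extra loss, so small indoor RSSI swings become large errors.
    #include <math.h>
    #include <stdio.h>

    // rssi: measured strength in dBm; rssiAt1m: calibrated strength at 1 m
    // (assumed -50 dBm here); n: path-loss exponent, ~2 in free space.
    double rssiToMeters(double rssi, double rssiAt1m, double n) {
      return pow(10.0, (rssiAt1m - rssi) / (10.0 * n));
    }

    int main(void) {
      printf("%.1f m\n", rssiToMeters(-80.0, -50.0, 2.0));  // ~31.6 m
      printf("%.1f m\n", rssiToMeters(-83.0, -50.0, 2.0));  // ~44.7 m from 3 dB more loss
      return 0;
    }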
The LIDAR Lite laser range finder was the next device we tested. This expensive rangefinder has a distance range of 0 to 40 meters. If we could use it to measure the robot's distance to one end of the hallway, it would be viable to treat the hallway as a distance topography map in which each room is marked by its distance from the end of the hall. We tested whether it was even possible to measure that maximum distance with the laser sensor. The big problem was that both ends of the hallway are predominantly glass, which the laser sensor has trouble detecting.

We also purchased and tested the Smart Beacon technology from Nordic Semiconductor. The component used was the nRF51822 Bluetooth Smart Beacon, which let us identify a specific place or position by sending a signal to the nRF Beacon application installed on a smartphone. We then wrote a routine that passed parameters through the Tasker application into another application called Arduino Commander after running an SL4A Python script, which sent specific numbers to our Arduino board based on the beacon detected. This number would then have been passed to the Raspberry Pi to play the specific recording for a lab or classroom description.

Figure 13: nRF51822 Bluetooth Smart Beacon Kit

5.3 Drive System
After complete assembly of the robot's power and drive systems, simple functionality tests were performed to investigate the robot's ability to move around. The initial design used a single microcontroller both to poll the navigation sensors and to operate the motors via the H-bridge drivers; however, performance with this configuration suffered because of the clock cycles consumed checking the obstacle avoidance sensors, the localization sensors, and the compass sensor. Robot speed and smoothness of movement both degraded. To compensate, a second microcontroller was added (pictured in Figure 2) to allow consistent, smooth operation of the stepper motors. This second microcontroller receives a three-bit signal from the ATmega controller and then operates the motors accordingly. Testing the configuration with this second
microcontroller confirmed that the robot could move forward, move backward, and turn both left and right depending on signals from the navigation system. During these tests another potential flaw of the system became apparent, though at this time it does not affect performance: there is a perceptible stutter during turns. While turning, the navigation microcontroller constantly checks the compass sensor to determine how many degrees of the turn have been executed, and this communication causes intermittent pauses in the turn command sent to the drive system. The result is a slight but perceptible jerkiness during turn execution.

5.4 Power System
Throughout all testing of the power system, the battery dedicated to the motors worked without fail; it proved capable of running constantly during tests without noticeable performance degradation. The electronics power system did not perform as smoothly. Early integration and testing showed that the initial power plan was adequate for all sensor and microcontroller needs, but problems developed when the Raspberry Pi and powered USB hub were connected: the chosen 12 V to 5 V DC regulators could not supply the current needed by all devices. In further testing the RPi and USB hub were given a single dedicated regulator, which overheated and produced power fluctuations, and even tests with two additional converters (one for the RPi and one for the USB hub) showed that overheating still occurred. The problem was finally solved by fitting the new regulators with heat sinks and a fan, which produced the steady, consistent power source behind the results mentioned in section 5.5.

5.5 Speech and Hearing System
To confirm the performance and dependability of the speech and hearing system, the Raspberry Pi and all of its peripherals were first powered with the commercial outlet transformers supplied with the USB hub and Raspberry Pi. The system was started, allowed to boot fully, tested by performing three different voice commands, and then shut down and unplugged. The same test was then performed with the USB hub and Raspberry Pi running off the DC-converted power from the robot's onboard 12 V batteries. The results are displayed in Table 3; a successful test is one in which the Raspberry Pi fully boots, responds to three commands with the appropriate action or audio playback (such as the dialogue in section 5.1), and then shuts down completely.
Test Conditions               Test Results
12V outlet power              10 successes in 10 attempts
5V converted battery power    10 successes in 10 attempts

Table 3: Speech and Hearing Test Results

5.6 Obstacle Avoidance System
We performed many tests to make sure the obstacle avoidance system runs reliably and efficiently. The sensor range test compared the sensors we had on hand with the ones purchased for the project, in terms of range, accuracy, and cost. We constructed a setup on a flat table with a meter stick lying flat to measure the distance between the sensor and the obstacle, a purple box two feet tall and one foot wide. We measured each sensor's minimum range as its closest accurate reading, and its maximum range as the longest "stable" reading obtainable while maintaining accuracy.

Sensor            Type         Min. Range   Max. Range      Price Each
Sharp GP2D120x    Infrared     1.5"         11.8" (~1 ft)   $12.00
Sharp GP2Y0A02    Infrared     8"           59" (~5 ft)     $14.00
HC-SR04           Ultrasonic   1"           156" (13 ft)    $1.39
LV MaxSonar EZ1   Ultrasonic   6"           254" (21 ft)    $25.00

Table 4: Sensor Range Test Results

We also tested each sensor on the smaller prototype robot we constructed, to gauge each sensor's viability on an actual moving robot. The goal was to find out whether some sensors were slower at delivering important readings than others, or had consistency issues depending on environmental parameters. We wrote a robot behavior program with basic obstacle avoidance and ran it under different ambient lighting conditions, grading each run for speed/latency and consistency.
Sensor            Type         Latency (well-lit/mid-lit/dark)   Consistency (well-lit/mid-lit/dark)
Sharp GP2D120x    Infrared     10 / 10 / 8                       7 / 9 / 9
Sharp GP2Y0A02    Infrared     10 / 10 / 8                       7 / 9 / 10
HC-SR04           Ultrasonic   10 / 10 / 10                      10 / 10 / 10
LV MaxSonar EZ1   Ultrasonic   10 / 10 / 10                      10 / 10 / 10

Table 5: Functional Performance Test Results

We also tested the CMPS03 compass sensor in the actual hallway with a simple test: we walked down the center of the hallway from one end to the other while taking readings of the compass's orientation. In theory the readings should be very close to one another, since the hallway follows one basic direction the whole way. In practice this simple test yielded wildly fluctuating accuracy because of all the electronics present in the hallway: the readings varied about +/- 20 degrees from the true direction. These inaccurate measurements almost made us discard the compass sensor from the design entirely.

Chapter 6: Justifications for Choices of Hardware or Software

6.1 Drive System
As predicted by the research detailed in section 4.3, the NEMA 23 steppers provide the torque needed to drive the robot at approximately 2.8 km per hour, roughly half the average human walking speed of 5 km per hour. Using two stepper motors eliminated any need to monitor rotations to ensure both wheels turn at the same speed, and the initial required torque calculation of 1.4 N-m is exceeded by these motors. The results therefore show that the drive system design is powerful, accurate, and fast enough to meet the project requirements. (A back-of-envelope check of the speed figure follows.)
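As a sanity check on the speed figure, the arithmetic below reproduces roughly 2.8 km/h from the 0.9 degree step angle; the wheel diameter and step rate are assumed values chosen for illustration, since the report does not state them.

    // Back-of-envelope check of the quoted drive speed.
    #include <stdio.h>

    int main(void) {
      const double stepsPerRev = 360.0 / 0.9;   // 400 steps per revolution
      const double wheelDiamM  = 0.15;          // assumed wheel diameter
      const double stepRateHz  = 660.0;         // assumed pulse rate from the MCU
      double mps = (stepRateHz / stepsPerRev) * 3.14159265 * wheelDiamM;
      printf("%.2f km/h\n", mps * 3.6);         // ~2.80 km/h with these numbers
      return 0;
    }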
The TB6560 driver board, shown in Figure 14, was measured to draw a 2.6 A working current. After the initial problems procuring power for the voice recognition and speech system, it was apparent that the temperature of high-current devices must be a concern; as the TB6560 boards have large heat sinks that dissipate heat and prevent power fluctuations, the prudence of this choice became even more apparent as the project moved forward. Additionally, alternatives like the L298N can only drive small stepper motors that draw low current and run at low speeds (as noted in section 4.3): if the motors run too fast, the chip overheats and cuts off power.

Figure 14: Robot Driver System

6.2 Power System
After the initial tests detailed in chapter 5, during which additional voltage regulators were added for the voice recognition and speech system, the original concept (rechargeable 12 V batteries feeding voltage regulators) proved to be a highly effective means of powering the individual robot systems. The battery life seen during testing was adequate to complete more than one tour, and the supplied power was stable enough that no noticeable low-power effects were observed. The final power system uses two LM1085 regulators, one LM7805 regulator, and one LM1117 regulator. The voice recognition and speech system draws the most current through its two regulators: the total flow through those regulators is almost 2 A (10 watts), and although this initially overheated the LM1085s, the addition of heat sinks and a fan has stabilized the regulator temperatures within a safe operating region. The final power system configuration has shown itself to be dependable and robust.
Figure 15: DC to DC Converter Circuit for Voice Recognition and Speech System

Power for the navigation and drive microcontrollers, as well as the navigation sensors, is provided by two DC-to-DC regulators, each able to supply 1 A. As the total current draw of these remaining systems is under 0.5 A, the loading on the LM7805 and LM1117 regulators stays below 50% of their maximum available current. Since the sensors and microcontrollers require different voltages, these regulators have also proven dependable at providing the needed voltage levels.

Figure 16: Low Current 5V and 3.3V Regulator

6.3 Robot Structure
The robot has a total of four wheels: two supporting "free" wheels and two drive wheels that transmit the torque that moves the robot. This configuration was created in response to instability problems that appeared during early tests, and the result is a working solution. One free wheel is located at the very front of the robot and a second at the back; the back one, shown in Figure 17, helps take weight off the driving wheels.
Figure 17: Back Free Wheel and Driving Wheels

An additional feature of the structure is its physical layout and ability to support the logistical needs of the other systems. As the microphone and speaker have specific placement needs, the two-tier platform provided an ideal mounting arrangement to maximize their performance. The bottom platform (pictured in Figure 18) provided a sturdy base for mounting the heavy batteries, placing sensors, and routing wires to the various components. Besides the addition of the fourth wheel, the design has undergone only minimal changes from the original conception. The wood from which it is built has proven to be a forgiving substrate that readily accepts hot glue and duct tape as binding components.

Figure 18: Second Floor of the Robot

6.4 Speech and Voice Recognition System
After fully integrating the voice recognition and speech system into the completed robot, the performance experienced during tests justifies both the hardware and software choices. Although other microcomputers might have accomplished the same tasks, the support and affordability of the Raspberry Pi made it the most cost-effective microcomputer capable of performing them. Despite the availability of similar products such as the BeagleBone and Hummingbird minicomputers, the Raspberry Pi has performed well at minimal cost.
The software design, which includes a parent C++ program to control the running of the third-party Voicecommand software and the audio player, has also proven an effective way to manage the robot's speech and voice recognition capabilities. During testing and troubleshooting it created an easy, one-stop location from which all components of both systems were accessible.

6.5 Obstacle Avoidance System
Finalizing the components for the obstacle avoidance system was relatively simple because of all the tests we had already run on most of our alternative design ideas. For the final design we used the Sharp GP2Y0A02YK0F long range IR sensor. We had heavily favored the HC-SR04 ultrasonic sensor for a long stretch of the design process: the tabulated results in chapter 5, section 6 clearly show its quality, with superior range and accuracy for a very low price, and without the annoying IR weakness whereby ambient factors such as light intensity and object color can heavily skew the readings. The HC-SR04 was the perfect choice for our robot until we took the assembled robot to the CTC hallway to test functionality. There, all the ultrasonic sensors suddenly performed unpredictably: measured values fluctuated over a range of 30 inches, which had never happened before. The sensors we favored so much had become unreliable. We discovered the reason: the building uses ultrasonic signals for motion detection. These motion detectors are scattered along the hallway, spreading stray ultrasonic signals that corrupt our sensors' accuracy. Ultrasonic sensors of any type became unviable for our situation, so we looked to the highest-performing IR sensor on our list, as it was the only reasonably priced option (laser sensors were impossible to consider on our budget). A sketch of the analog conversion this sensor requires follows.
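The GP2Y0A02YK0F outputs an analog voltage that falls nonlinearly with distance, so a conversion step is needed. The sketch below uses a rough power-law fit whose constants are assumptions that would need per-sensor calibration; the wiring is also assumed.

    // Approximate GP2Y0A02YK0F conversion; constants are an uncalibrated fit.
    const int IR_PIN = A0;                     // assumed wiring

    void setup() { Serial.begin(9600); }

    float irDistanceCm() {
      float volts = analogRead(IR_PIN) * 5.0 / 1023.0;
      if (volts < 0.35) return -1.0;           // beyond ~150 cm: report "no target"
                                               // rather than a bogus huge value
                                               // (the failure noted in section 8.5)
      return 65.0 * pow(volts, -1.10);         // rough fit to the datasheet curve
    }

    void loop() {
      Serial.println(irDistanceCm());
      delay(100);
    }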
Figure 19: GP2Y0A02YK0F Sharp Long Range IR Sensor

The Sharp GP2Y0A02YK0F long range IR sensor performed considerably better than the HC-SR04 when used in the CTC hallway. Since the location is indoors, sunlight did not pose a significant problem for the sensor. Although its maximum range of 150 cm is not as long as the HC-SR04's 400 cm, it is long enough for the robot to perform well in the hallway.

We settled on keeping a compass sensor in the final design because the functionality it offered was too important to pass up. A compass allows the robot to execute reasonably accurate "measured" turns: even though the electronics in the hallway skew the absolute orientation readings along its length, a reading taken at one specific point is still reliable enough to veer an accurate number of degrees away from that initial reading. This means accurate 90 degree turns remain possible through the noise; the sketch below shows the wrap-around arithmetic involved. The model we chose is the HMC5883L compass sensor because it was the cheapest one we could find on the market.
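In the measured-turn sketch below, the heading difference is wrapped into -180..+180 degrees so a turn that crosses north does not jump by 360. readHeadingDeg() is a placeholder that merely simulates a slow right turn for demonstration; a real build would read the HMC5883L over I2C and keep issuing the turn command inside the loop.

    float readHeadingDeg() {                    // placeholder: simulates a slow right
      static float h = 350.0;                   // turn starting near north, which
      h += 2.0;                                 // exercises the wrap-around handling
      if (h >= 360.0) h -= 360.0;
      return h;
    }

    float angleTurned(float startDeg, float nowDeg) {
      float d = nowDeg - startDeg;              // wrap into -180..+180 so crossing
      while (d > 180.0)  d -= 360.0;            // north doesn't read as a 360 jump
      while (d < -180.0) d += 360.0;
      return d;
    }

    void setup() {
      Serial.begin(9600);
      float start = readHeadingDeg();           // one local reading stays usable even
                                                // though absolute headings drift +/-20 deg
      while (angleTurned(start, readHeadingDeg()) < 90.0) {
        // a real build keeps sending the turn command to the drive MCU here
      }
      Serial.println("90 degree turn complete");
    }

    void loop() {}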
Figure 20: HMC5883L Compass Sensor

As for the behavioral algorithm, we stuck with the simple yet efficient Bug0 algorithm. Bug0's shining characteristic is its simplicity, which improves the robot's reaction time to obstacles: it takes few lines of code to implement, it speeds up the loop cycle considerably, making the robot more responsive to its environment, and it frees memory and reduces clutter in the microcontroller's memory space. Of course, we don't like it only because it is simple; we like it because it works despite its simplicity. The simple rectangular shape of the hallway and the minimal number of fixed obstacles along the way allowed this algorithm to be used. Moving obstacles might sound like a problem, but because the moving objects are "intelligent" beings with their own obstacle avoidance systems, avoiding collisions with them turns out to be simple.

6.6 Navigation System
It was a grueling process looking for the beacon technology solution that fit our needs. We knew early in the design process that we wanted to implement beacon technology, because it seemed intuitively simpler than using a camera and a computer to process images. We wanted a solution that used simple ideas to achieve a complex goal; this, in our opinion, is the beauty of engineering. The brilliance of a design stems not from how incredibly complex the solution is, but from how something simple can solve something complex. We tested a number of beacon technology solutions, but after each test we realized how far over budget we would be if we adopted that solution. The concept
was viable and doable with some of our alternative designs, but the basic problem always boiled down to cost versus performance: the products we needed were always too expensive for our budget. The Bluetooth solution we tested was very viable, but it required purchasing a Bluetooth chip for each of the room doors along the hallway, which would be expensive to actually implement.

One day, we realized we could use something even simpler than beacon technology to navigate to each room. The basic question of navigation was: "How does the robot know when it is by a room of importance, and how does it know which room it is?" We noticed something simple about the architecture of the hallway that offered a solution: the ceiling elevation is always lower right where the doors are.

Figure 21: CTC Hallway Ceiling Elevation Model

We decided to use these simple architectural details as landmarks for the robot to determine that it is very near a specific room. The only device needed to accomplish this is one cheap range sensor; IR or ultrasonic would do, as long as it is mounted high enough to detect the change in ceiling elevation. One vertically oriented sensor is the solution. For our purposes, since we could not use ultrasonic sensors in this hallway, we chose the cheapest laser range finder we could find; IR sensors were possible, but their range was too short, and we did not want to make the robot too tall because that might affect stability. The final navigation design therefore uses architectural details of the hallway itself to localize the robot within its surroundings.

6.7 Microcontroller Choice
Initially we used the Arduino Uno for demo purposes because it was readily available. Working with it made it easy to determine which type of microcontroller we really needed: since we were using 8 proximity sensors, 2 ranging sensors, and 1 compass sensor, it was clear we needed more than 22 GPIO pins, as these sensors require at least 2 GPIO pins each.
Component                                  GPIO Pins Required
Drive System Microcontroller               3
Raspberry Pi (Voice Recognition Module)    5
8 x Proximity Sensors                      16
2 x Ranging Vertical Sensors               4
Compass Sensor                             3 (I2C)
TOTAL                                      31

Table 6: GPIO Pin Requirements

We also needed I2C functionality for the compass sensor and PWM functionality for the IR sensors, which require 8 PWM pins. The Raspberry Pi and the compass sensor could each use I2C to communicate with the microcontroller. The Arduino board that fits these requirements is the Arduino Mega 2560.

Microcontroller Choice: Arduino Mega 2560
● ATmega2560 microcontroller
● Input voltage: 7-12 V
● 54 digital I/O pins (14 PWM outputs)
● 16 analog inputs
● 256k flash memory
● 16 MHz clock speed

The drive system requires a dedicated microcontroller for continuous operation of the wheel system; 11 GPIO pins are required. We decided to use the cheapest Arduino board with enough pins to accommodate this while remaining open to expansion.

Drive System Dedicated Microcontroller: Arduino Pro Mini 328

Chapter 7: Results of Fully-Assembled System
Figure 22: Fully Assembled Robot in Lab

7.1 Assembled System Test Results
During the final testing period several areas for potential improvement presented themselves. The localization, voice recognition, object detection, and structure all encountered problems that prevented them from functioning smoothly and continuously for an entire tour. The power system provided good power throughout the majority of these tests, and aside from WiFi connectivity issues the voice command system proved capable of a few but important robot controls.

The sensors underwent a drastic change in type because of the sensor test we ran on site. We set the robot in the middle of the hallway at the starting point of the tour. Before anything else, we checked whether each sensor was giving reasonably accurate readings by watching the measurements in the serial window. The sensors were producing wildly fluctuating numbers, from around 20 inches to 40 inches, without the robot moving an inch. Even sensors not pointed at anything solid would at times report something 15 to 20 inches in front of them, which is clearly a false reading. These skewed readings were caused by stray ultrasonic signals coming from the motion detection devices in the hallway, which prompted us to replace the ultrasonic sensors with IR sensors. The exact same calibration test was then run with the GP2Y0A02YK0F IR sensors, and the results were considerably better: the IR accuracy was not fine-tuned to perfection, but readings were within about 3 inches of the correct value almost all the time.

Most of the tests of the fully assembled system were actual tour runs in which we observed where the robot performed badly and tweaked the code to compensate. Examples of these changes were compass sensor turn values (calibration), delay values for dealing with moving obstacles, and orientation auto-correction values.

7.2 Dollar Budget
All parts used in the final CTC tour robot design, with quantities, individual costs, and the total cost, are listed in Table 7.

Item                             Cost Per ($)   # Used   Total Cost ($)
LM1117                           0.427          1        0.43
LM7805                           0.39           1        0.39
LM1085 IT                        1.95           2        3.90
12V 5Ah rechargeable battery     22.95          1        22.95
12V 7.2Ah rechargeable battery   29.36          1        29.36
Fan                              15.00          1        15.00
Heat sink                        0.35           2        0.70
Motor                            24.00          2        48.00
Arduino Pro Mini                 9.95           1        9.95
H-bridge                         19.00          2        38.00
Shaft couplers                   8.00           2        16.00
Wheels & parts                   23.00          2        46.00
Motor mounting                   10.00          2        20.00
Structure                        25.00          1        25.00
Raspberry Pi 2                   43.78          1        43.78
Logitech speaker                 59.99          1        59.99
USB microphone                   25.00          1        25.00
USB hub                          22.75          1        22.75
WiFi dongle                      11.79          1        11.79
8GB SD card                      14.98          1        14.98
Arduino Mega 2560                13.96          1        13.96
Sharp Long Range IR Sensor       7.95           8        63.60
Range finder sensor              15.00          2        30.00
Compass sensor HMC5883L          7.00           1        7.00
Total Cost                                               568.53

Table 7: Parts List and Cost

7.3 Power Budget
The power consumed by each electrical component, the current drawn, and the total power supplied by the robot's onboard batteries are listed in Table 8.

Item                          Current (A)    Power (W)
Ultrasonic sensors x8         0.048          0.24
Microcontroller_1             0.043          0.215
Microcontroller_2             0.035          0.175
Compass sensor                0.007          0.035
Raspberry Pi                  0.5            2.5
Speaker, microphone & WiFi    0.6            3.0
Laser sensor x2               0.184          0.93
Fans                          0.15           1.8
IR sensors x3                 0.105          0.525
Stepper motors x2             5.2            62.4
Total power                                  71.8

Table 8: System Power Budget
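The budget above allows a rough runtime estimate against the 1 hour 40 minute figure quoted in the conclusions below. The division assumes the 7.2 Ah battery feeds the motors and the 5 Ah battery feeds the electronics, and it ignores converter losses and battery sag, so treat it as only a ballpark figure.

    // Crude runtime estimate: capacity (Ah) divided by average draw (A).
    #include <stdio.h>

    int main(void) {
      double driveHours = 7.2 / 5.2;                 // 7.2 Ah battery / 5.2 A stepper draw
      double logicWatts = 71.8 - 62.4;               // electronics share of Table 8 (~9.4 W)
      double logicHours = 5.0 / (logicWatts / 12.0); // 5 Ah at 12 V before conversion
      printf("drive: %.1f h, electronics: %.1f h\n", driveHours, logicHours);
      return 0;  // ~1.4 h for the drive battery, in the ballpark of 1 h 40 min
    }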
Chapter 8: Conclusions
Recap of performance: below is a list highlighting the major issues we had to deal with.

8.1 USB Hub and RPi Power
While integrating the voice recognition and speech system into the completed design, the initial belief that it could be powered from the same DC converter used for the other 5 volt components proved incorrect. Eventually, due to the poor performance of the available DC converters, an independent converter was used for each of the USB hub and the Raspberry Pi. Furthermore, it was found that even when operating well below their maximum current, the converters would overheat and cause system instability. This was solved by adding a fan and large heat sinks to control the converter temperature.

8.2 Microcontroller and RPi Communication
During individual module testing and development it was assumed that I2C communication would let the RPi receive data from the navigation microcontroller to determine which tour audio to play. Upon system integration, however, it was learned that the navigation microcontroller was already set up as an I2C master to communicate with the compass sensor, and when the RPi was connected to the I2C bus the compass system stopped working. Although the problem might have had a software workaround, given the available digital GPIO pins it was easier to run additional data lines from the navigation microcontroller to the RPi, enabling the transmission of data over simple 3-bit data lines.

8.3 Addition of a Fourth Wheel, Fading to the Right
During early testing, the initial structure design was not capable of fulfilling the project requirements: when the robot turned, the wheels were not stable and unexpectedly detached. The problem was that the drive wheels were carrying the weight of the robot in addition to providing the torque to move it, which cost the stepper motors a great deal of usable torque. To solve this, an additional free wheel was added under the base platform to help support the robot's weight between the drive wheels; in that location the free wheel helps the motors share the load. This extended the life of our drive system but created a noticeable fade to the right when the robot was supposed to travel straight ahead.

8.4 Power System
Because of a design error at the beginning, the regulators in the DC-to-DC circuit overheated while running the Raspberry Pi and USB hub, which drew large currents through them. The solution was to use two regulators in parallel to split the current, and to add heat sinks and fans that
cool the DC-to-DC converter circuit. In the end, the system draws around 63 watts in the drive system and about 10 watts in the other electronics, and the robot's running time is around 1 hour and 40 minutes.

8.5 Undependable Sensors
One major obstacle we faced was the discovery that ultrasonic sensors were not going to be viable in the hallway because other devices there emit crosstalk signals. This was emotionally frustrating: it blindsided us, and we had no way of knowing about it until we tried the sensors on site. We then struggled to find sensors that lived up to the impressive functionality of the HC-SR04; nothing affordable could match the ultrasonic sensor's range, and we needed a certain maximum range to detect the changes in hallway architecture used for localization. Even after swapping the ultrasonic sensors for the next best option, the new IR sensors gave us a different kind of problem: the Sharp long range IR sensors occasionally output absurd distance readings (effectively infinite values). These readings completely disrupted our code's behavior, because infinity was treated as an invalid reading for which a comparison with an integer value was impossible, and the code would get stuck.

Chapter 9: Applications in the Real World; Social, Environmental, and Economic Impacts

Today there is almost no industry untouched by the ubiquitous objects we call robots. By some estimates, 50% of the American workforce hold jobs capable of being performed by automated machines, jobs therefore at risk of being lost to robots in the future. Beyond the automated Google car, almost every automaker (and some private universities) has a division of engineers working on automated cars that will eliminate millions of cab and truck driver positions. Likewise, a manufacturing robot from Rethink Robotics has the capability to
perform almost any repetitive manual labor job. The robot, named Baxter, can actively and quickly learn its duties through on-the-job training, simply by observing a supervisor (a capability that until now only human workers had). Even positions that are highly skilled and were previously thought inaccessible to robots are being considered potential areas for growth: the engineering firm Intuitive Surgical and the Raven project have developed robots capable of performing surgery with limited human oversight. Additionally, inroads are constantly being made toward removing the human element from current war robots such as aerial drones. As human lives could someday be in the hands of robots, via the scalpel or the guided missile, we must ask: is this a bad thing?

In answering this question it is important to remember that the history of people being displaced from menial or skilled jobs did not start with the silicon transistor. A thousand years ago the majority of people had one important job: making food so they could live; the practice of farming dominated everyone's time. As with robotics, advances in technology displaced the vast majority of people working the fields. Although farming is the clearest example, consider blacksmiths, horse farriers, elevator operators, copy boys, bowling pin setters, and milk delivery men: all professions that were once abundant but dwindled due to advances in technology and methodology. In light of this we must acknowledge that the end of an archaic job is not necessarily a curse. Consider how much longer the printing press might have taken to appear had Johannes Gutenberg been too busy sowing seeds and plowing fields to pursue the lofty cause of invention, and consequently how many of the developments since, such as lower infant mortality and personal hygiene (generally considered good), would have been delayed or prevented in a world without a practical means of documenting and disseminating information. From a global perspective, the inconvenience of being displaced from a job is well worth the extra time and abilities that modern technologies have provided.

As it was with farming, so it will be with autonomous labor. The freedoms provided by the advanced technologies of the past will again be bolstered as more people are free to consider pursuits more meaningful than driving people across town all day. As robots become more prevalent, a large share of current human activity will be eliminated (who wants to brush their teeth and make their own bed anyway?), but people will adapt; just as today's society has no place for the farmers of the past, who planted seeds individually and time-consumingly by hand, the future society will have no place for anyone as ignorant as today's average person. By necessity, people will set their sights higher. Considering that just a few centuries ago it was extremely unrealistic to think everyone would someday know how to read, a global advancement of human intellect driven by new technology is far from the realm of fantasy. Autonomous robots, like our tour bot, will free humanity from the mundane tasks we are already outgrowing and allow advancements that currently seem as impossible as an 80% global literacy rate once did.
48
9.1 The Global and Environmental Impact
The tour bot is capable of the basic functions, such as hearing, vision, and speech, that would be found in current robots around the world. Robots now fill many different roles in many different fields, such as industrial work, family support, and education. As humans we are realizing that we cannot simply continue to pursue the proliferation of autonomous robots without considering the peripheral consequences. For example, the mass spread of robots could lead to financial problems, environmental problems, and natural-resource scarcity; alternatively, robots could provide great profits in the future. Although this senior project, the tour bot, is very small in scale and not aptly compared to designs currently on the global market, it can be used in small venues such as schools, companies, and other public places. One difficulty in identifying the hazards associated with the tour bot project is that it is not comparable to robots from companies with far more financial support and technical resources.

Currently there are many kinds of robots at the production level. In the industrial realm there is the robotic arm, and many countries have successful robotics enterprises, such as the Swiss firm ABB and FANUC in Japan. Moving away from industrial robots, many small companies create autonomous robots designed to help the family; a common manifestation of these efforts is the intelligent cleaning robot. Beyond family robots, there are also many applications in the medical field, where robots, instead of humans, produce medical products and maintain sanitary, germ-free conditions in clean rooms. In the military, robots take on special tasks, such as mine removal, that send them into dangerous places.

Despite the aforementioned advances, the new technology has many potential negative side effects. Robot construction requires many manufactured parts that are not natural materials. After a robot's service life concludes, these materials do not break down in the environment and remain for a very long time. Electrical power sources such as batteries or nuclear fuels pose severe risks to the environment, as toxic battery waste affects the ecosystem every year. Regardless of the consequences, humans continue to pursue the development of robotics, and recent advancements have created robots that are smarter than ever before. A long-term goal that is closer than ever to being realized is the creation of a robotic brain similar to the human brain. IBM has created a brain-like computer chip that can respond to input and perform logical reasoning much as a human does. However, what will happen in the future if this robot brain exceeds the capabilities of its human counterpart?

The convenience and capabilities offered by robots will make life easier and increase the rate at which the global economy grows. Already, robotic factory production has eclipsed that of
49
a human-operated factory, but at the same time the practice of using robots instead of human labor will cost countless jobs, as the expense of a robot is far less than that of a person.

9.2 Positive and Negative Impact of Robotics
The development of robotics in this day and age is considered major technological progress, and robotics has a long list of positive effects. Even the definition hints at them: robotics seeks to automate tasks, using electronics and machines to substitute for people having to do the work. Robotics makes life easier in general because it can do work in place of people. The great thing about this is not just that we don't have to do the work, but that the work can be done more efficiently than if a human did the same task. Human beings have many limitations because of our physical bodies; robots can eliminate some of these weaknesses, making them ideal for certain tasks. Examples are truly hazardous tasks like bomb disposal and hazmat handling, where robots reduce risk to society because fewer people get injured. Other dangerous, not to mention physically demanding, tasks include retrieving or recovering important objects from wreckages or crash sites. There are also exploration robots that can traverse conditions hazardous to human health, like outer space and distant planets. Having the help of robots makes life easier for us.

But of course, with the positive also comes the negative. Robots, because they are so efficient (they don't get tired, don't take sick days, and rarely make mistakes), tend to put many people out of work. Robots are preferable to humans from a business standpoint because of their efficiency and the fact that they can be programmed to do multiple jobs at a time. Even though robots also open up jobs in robot maintenance, those new jobs fail to compensate for the total jobs lost to robotic efficiency. This slowly introduces the possibility of the "robots taking over the world" idea we so commonly see in sci-fi shows and movies, and it heavily affects the economy, because many people will lose their jobs as robotics flourishes. Another negative is that working with robots is less safe than working with other human beings. Because humans have a higher awareness of danger while robots usually focus only on the task at hand, the potential for danger rises when people and robots share a workplace. Since robots have trouble sensing approaching workers, there have been cases in which workers were severely injured or killed by active machinery. The last item on the list of negative effects robotics has on society is the problem of disposing of the materials used to build robots, which are harmful to nature and our environment. In the long run, if enough robots were made to replace all the human jobs in the world, the earth might face hazardous effects from these unsafely disposed materials, causing health and environmental hazards for the people inhabiting it.
50
9.3 Social Impacts of Further Technological Advancements
Many authors discuss the advantages and disadvantages of today's technological advances. Some people favor the creation of new technology, including advanced cell phones (better known as smartphones), tablets, computers, robots, and more. However, others oppose new technology because it can damage families and society. Nowadays we can see how technology has changed society just by looking around. We do not have to go very far: we see this behavior on campus every day, with students walking and texting at the same time. I have even seen students using their cellphones while riding their skateboards and bikes, which I believe is dangerous. I think we have reached a point where this technology forms a big part of our daily life. However, there are also many advantages to these new technologies. As mentioned above, there are now robots capable of performing surgeries, which benefits society because a higher quality of life can be achieved after these procedures.

Several readings discuss the ethical issues raised by technology and how these play a significant role in people's lives. The authors hold that to achieve progress, we as a society need to believe in the sufficiency of scientific and technological innovation as the basis for general progress; they argue that if we can ensure the advancement of science and its technologies, then the other aspects involved in developing new technology and applied sciences will take care of themselves. However, I do not completely agree with the authors, because I believe the consequences of creating new technology do not take care of themselves. We as a society have to find ways to solve the problems created by new technologies. As the authors note, when developing new technologies we need to analyze the situation and see whether the technology will have a positive or negative impact on our society. I believe our tour guide robot would have a positive impact: by giving tours of the Chambers building to prospective students, it can inspire them to pursue careers involving science and technology.

There are different aspects to consider when deciding whether a new technology will have a positive or negative impact on our society. For example, we need to see whether the technology uses materials that will be disposed of in the near future, causing damage to the environment. These negative aspects may affect society's health if the discarded materials are toxic or contain chemicals that can damage people's health. That situation would contradict the idea mentioned above, where robots were considered a positive influence on human life because they could be programmed to operate on people and cure them of an illness. When considering health, we must weigh not only how this new technology affects us now, but also future problems that may damage the health of our children, since toxic chemicals can end up contaminating the water or the soil where fruits and vegetables are cultivated.
Another downside of making technology a priority is that it downgrades the importance we place on our own lives. Sometimes people work for long periods in order to discover
51
new technology, but in doing so they ignore other important aspects of their lives. The problem is that we can never recover time, and that is one reason time is so valuable. They sometimes forget that they have a family and that there are other, more important aspects of life. Sometimes we give more importance to technological or scientific advances than to family, society, and ethical values. We live in a turbulent and materialistic world, and that makes it hard for us to value and admire the most important things in life. We see children who prefer to play videogames or spend time surfing the internet or texting instead of communicating with their families.

Nowadays smartphones are extremely popular; everybody wants the best technology available. But that means not only that the developers of new technology must work hard and dedicate more time to these projects, but also that members of society may need to work harder or longer in order to afford the new technology. We can now have internet access almost anywhere with a portable device like a tablet, smartphone, or laptop. However, we are paying more attention to these new technologies than to the effects all of these improvements are having on our society. We support the notion that almost everything is good if used in moderation, as Aristotle suggested: moderation is the key word not just for technology but for many other aspects of our lives. Problems start whenever we begin abusing or overusing these new technologies. Nowadays, at school, at a hockey game, or at the movie theater, it is common to see families, friends, and couples using their cellphones most of the time, even when they are next to each other. This is troubling because it puts our priorities in the wrong order.

Overusing new technologies can damage our society. It seems we are paying more attention to the development and use of new technologies and forgetting the ethical values once considered most important, like love, freedom, and family. Technological advances are very important, but we also need to care for our society, pay attention to our lives, and use technology in a correct manner so that we can enjoy freedom, love, and family. As a society, we need to learn to balance technology in order to live healthier lives. Technology is not bad if we know how to use it and when to apply it. For example, the Internet is now integral to everyday life because it helps society maintain a web of communication; however, when we start abusing it, we end up with a problem, and we must learn to balance the situation. The positive or negative outcomes depend solely on the way we use the technology.
52
Chapter 10: Lessons Learned and Future Improvements
Through the application of the skills learned here at the UOP School of Engineering and Computer Science, we now have a deep appreciation for the painstaking research and dedication required to bring a product to market. Likewise, a renewed feeling of confidence has grown in each and every member of our team as we came together, struggled, stumbled, and eventually thrived while executing our design of an autonomous tour robot. The value of the senior project program cannot be overstated.

10.1 Lessons Learned
10.1.1 Don't Trust the inventoried parts
Throughout the development of the voice recognition and hearing system, problems were frequent yet unpredictable and sporadic. While trying to accomplish simple tasks, for example connecting the RPi to the UOP WiFi system, problems would arise and then disappear without any clear change in the system configuration. During initial testing the general assumption was that all problems resulted from design or user errors; however, after swapping the old RPi from the University inventory for one purchased from an electronics dealer, many of the problems disappeared. Future designers should try to use new equipment whenever possible, though with a budget of only 300 dollars this might not be feasible. A similar problem was encountered with the Arduino Uno obtained from the inventory: several pins were not working. Initially we thought the problem was with the code controlling the I/O pins or with other components in the circuit, but after purchasing a new Arduino Uno we confirmed that the board from University of the Pacific's inventory was faulty.

10.1.2 Planning is of paramount importance
The initial design of the robot structure was critical. The motors we selected met the weight requirement, but the initial structure had stability issues: the robot could not run smoothly when starting to move. Once we observed this behavior, we redesigned the structure, and after several changes the stability problems were solved.

10.1.3 Perfectly functional code doesn't mean a perfectly functional system
Writing code for the modules was straightforward at times and challenging at others, but completing functional code for a physical system is just the beginning. Even if the code is perfect, hardware is very prone to
53
malfunctions and faults. The simplest things can go wrong and offset the performance of the whole system. Modular testing with the physical components of your system is very helpful; don't assume everything is fine just because the code compiles and runs smoothly.

10.1.4 Physical systems need to be tested on the ACTUAL site
It is always best to calibrate systems on the actual site, not in the lab, as early as possible. In our case, we had no way of knowing that our chosen sensor type would be impossible to use until we ran on-site tests.

10.2 Design Improvements
10.2.1 Voice Recognition Software
The open-source software Voicecommand used for this project is a starting point: it was relatively easy to use and allowed simple integration with other RPi functions, but it is by no means the best software for speech-to-text conversion. Future iterations of the tour robot should investigate software that responds faster and does not have to verify the command keyword (a sketch of one possible direction follows Section 10.2.3 below).

10.2.2 Future Improvements
If we had more time and resources, we could have worked on enabling "Rosy" to understand multiple languages. Moreover, we would have given the robot a more humanoid appearance, or developed a head for it so that different faces could be displayed based on the environment using LEDs and other mechanical and electronic components.

10.2.3 Microphone and Speaker Quality
Due to the project's painfully small budget, high-quality components that would have greatly improved system performance were unavailable. For example, instead of the available webcam used as a microphone, which has a limited effective range and requires a user to talk directly into the device, a high-quality omnidirectional microphone would allow for better voice recognition. Likewise, the current system uses a USB laptop speaker for speech output; it is clearly heard from only one direction, which undermines an effective tour. Future iterations could benefit from multiple speakers that broadcast sound 360 degrees around the robot.
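As a concrete direction for the improvement suggested in Section 10.2.1, a future team could prototype a faster, keyword-free listener on the RPi. The sketch below is only an illustration of the idea, not the software we used: it assumes the third-party Python SpeechRecognition package (with PyAudio), a working USB microphone, and hypothetical command phrases such as "start tour".

    # Hypothetical keyword-free listener (illustrative; not our Voicecommand setup).
    import speech_recognition as sr

    recognizer = sr.Recognizer()

    def listen_once():
        """Capture one utterance and return it as lowercase text, or None."""
        with sr.Microphone() as source:
            recognizer.adjust_for_ambient_noise(source, duration=0.5)
            audio = recognizer.listen(source, phrase_time_limit=5)
        try:
            # Online recognizer; an offline engine such as PocketSphinx would
            # avoid the network round trip and respond faster.
            return recognizer.recognize_google(audio).lower()
        except (sr.UnknownValueError, sr.RequestError):
            return None

    while True:
        text = listen_once()
        if text is None:
            continue
        # Act on whole phrases directly instead of verifying a keyword first.
        if "start tour" in text:
            print("Starting tour...")  # placeholder for the tour routine
        elif "stop" in text:
            break

Because the loop matches whole phrases, the robot would not need to hear and confirm a wake word before every command, which addresses the responsiveness complaint above.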
54
10.2.4 More Stable Wheels
If we had more time and resources, we could have worked on finding a more stable configuration for the wheels. Predictable and smooth movement is only possible with perfectly stable wheels. Having the robot go straight sounds simple, but it is nearly impossible to implement if the wheels are not stable enough.

10.2.5 Higher Quality Sensors
I cannot emphasize enough how much time was wasted making do with lower-quality sensors. Whole blocks of code had to be written to compensate for faulty sensor readings (the sketch below illustrates the kind of filtering involved). Important parts of the robot's behavior sometimes hinged on these sensors measuring a wall distance correctly. Higher-quality sensors would be a great help both to us as designers and to the people this robot might serve someday.
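To illustrate the compensation code that Sections 8.5 and 10.2.5 describe, here is a minimal sketch of the approach: discard readings outside the sensor's trusted window (which also catches the near-infinite spikes from the Sharp IR sensors) and smooth the survivors with a small median filter. Our actual filtering ran on the microcontroller; this Python version, with an assumed window size and illustrative cutoffs, shows only the logic.

    import statistics
    from collections import deque

    # Trusted range of the Sharp GP2Y0A02YK0F (datasheet: about 20-150 cm);
    # the exact cutoffs used here are illustrative assumptions.
    MIN_CM, MAX_CM = 20, 150

    class FilteredSensor:
        """Reject out-of-range readings, then smooth with a median window."""
        def __init__(self, window=5):
            self.history = deque(maxlen=window)

        def update(self, raw_cm):
            # Drop anything outside the trusted window, including the
            # near-infinite spikes that used to stall our comparison logic.
            if raw_cm is not None and MIN_CM <= raw_cm <= MAX_CM:
                self.history.append(raw_cm)
            return self.read()

        def read(self):
            # The median resists single-sample spikes better than a mean.
            return statistics.median(self.history) if self.history else None

    # Example: the infinite spike is ignored instead of corrupting behavior.
    sensor = FilteredSensor()
    for reading in [80, 82, float('inf'), 81, 79]:
        print(sensor.update(reading))

A filter like this cannot fix a bad sensor, but it keeps one absurd reading from steering the obstacle-avoidance logic, which is exactly the failure mode described in Section 8.5.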
55
References
[1] S. Saha and C. Marsh. (2013). Jasper [Online]. Available: http://jasperproject.github.io/
[2] Algorhythmic, Aon². (2012). Speech Recognition Using the Raspberry Pi [Online]. Available: http://www.aonsquared.co.uk/raspi_voice_control
[3] S. Hickson. (2014). Voicecommand [Online]. Available: http://stevenhickson.blogspot.com/
[4] J. Blum, Exploring Arduino: Tools and Techniques for Engineering Wizardry. Hoboken, NJ: Wiley, 2013.