Scott Lee
Ed Shin
5 December 2015
IDC Final Report
Abstract
In the age we live in, machines have become a ubiquitous part of everyday life. Everything from
our cars to our microwaves requires circuitry and sensing to be as effective as
it can be. Within the scope of the Integrated Design Challenge (IDC), our group was in
charge of constructing a robot that could line follow, detect light, and communicate with the four
other bots to cooperate and line themselves up in the correct orientation. The decision making
our robot had to do hinged on the number of lights sensed, meaning anything less than the
highest level of reliability and effectiveness could derail the entire operation. Using a metal
framed arm to extend a phototransistor over the five lights and the QTI line followers on the
bottom of the bot, we wrote code to sync up light detection with the hashmark present for each
light source. The bot kept two main counters: one for the number of light sources and one for
the number of hashmarks along the path detected. The light count was used to determine the
final parking slot and what number to communicate to the other bots while the hashmark count
was used to determine where to wait before embarking on parking and the final parking
destination. Overall, this design allowed LightBot to sense, navigate, communicate, and park reliably.
Introduction
The objective of our individual robot (LightBot) as a member of Yellow Squadron was to sense
which lights were active, stop at the appropriate spot, and then communicate with the other bots
before heading to the corresponding parking spot. Meanwhile, LightBot had to display the
number of lights turned on on the LCD while traveling to the destination.
The section's overall goal was to park the five robots in the correct spots based on the
number each bot sensed. Aside from the Yellow Squadron, there was Blue Squadron in charge
of using magnets to determine in which position the Death Star was placed, Orange Squadron
in charge of using RFIDs to determine how many responding tags were placed on the path, Red
Squadron in charge of counting the number of black and white blocks present, and Green
Squadron in charge of determining the distance between two blocks. To solve these tasks, we
implemented a variety of techniques learned through previous labs.
Over the course of this project, we explored Arduino coding, Parallax BOEBOT hardware and
wiring, and sensors such as phototransistors, QTI line followers, and XBee communication. The
previous seven labs all contributed to the knowledge needed to successfully understand and
implement the IDC. Lab 1: Microcontrollers and Digital Logic taught us the very basics of
Arduino syntax and how to wire an ATmega2560 microcontroller. Lab 2: Exploring Digital Logic
introduced our first addition to the circuit board: a 7-segment display that, although different
from the LCD we ultimately used in our LightBot, introduced the concept of wiring an element to
the circuit board that we had to feed inputs to. Lab 3: Basic Electrical Measurements and
Temperature Sensing and Lab 4: Basic Electrical Measurements II and Pressure Sensing
further demonstrated wiring schematics and introduced two sensors to us. While we did not use
either the temperature sensor or the pressure sensor, we learned how to code in Arduino to
take measurements from a sensor and display the values recorded onto the serial monitor. Lab
5: Light Tracking Robot introduced us to servomotor hardware and the accompanying Arduino
coding. This lab was vital for the IDC as it not only taught us how to control servomotor speed,
direction, and calibration, but it also demonstrated how to control servomotors based on sensor
inputs. We applied this very concept in the form of line following. Lab 6: Light Controlled Tone
Generator was arguably the most important lab for the Yellow Squadron as it demonstrated the
use of two light sensors: photoresistors and phototransistors. The lab showed the basics of light
sensing Arduino code while also providing a solid background in both sensors’ strengths and
weaknesses. Lab 7: Filtering and Frequency Response helped to further refine some of our
Arduino coding for sensors.
Because our LightBot specializes in detecting light and collecting information about its
surroundings, potential real-world applications include searching
for light sources in hard-to-reach areas such as caves or thick shrubbery and analyzing a lighting
system to determine where there are broken bulbs or shorts.
From the Integrated Design Challenge, students should be able to understand Arduino syntax to
the extent that if given the minimal framework for a sensor or attachment, they could write the
remaining code to incorporate it into a larger code, as well as being able to wire a bot
appropriately to fit the corresponding code. The challenge builds upon the foundation of the
previous labs but also calls upon students to search the web for additional code and help,
promoting very hands-on self-teaching.
Experimental Procedure and Results
Before embarking on the designing and building of the bot, we first conducted a trade
study on which type of sensors we would be using. This study quantified the costs and benefits
of integrating each of the possible sensors and compared the resulting values to determine
which of the sensors would give us the best result. The full trade study can be found in
Appendix B. After gaining a broad sense of which sensors we would use for different
parts of the task, we developed a cost estimate for our bot, which came
to approximately $300. Then, to add more structure to our timeline of when to
complete which task, we created a Gantt chart specifying our schedule. Both the cost
estimate and the Gantt chart can be found in Appendix B.
Week of 10/19–10/23: Communication
The objective of the Communication demo was to create a bot that could send a signal
to the rest of the bots and react when the bot receives a signal from the other bots. The groups
were each given an XBee communication device to integrate within the circuit in order to send
and receive signals wirelessly over the XBee radio. In achieving this, the lab group identified several
components that needed to be put in. These included sending out a signal, receiving a signal,
reacting appropriately to the received signal by turning on the LED light and incorporating the
button to control the system. While exploring possible circuit designs, the lab
group decided to use two separate circuits, one for sending and receiving signals and one for displaying
the light, rather than trying to integrate both features in one extensive circuit. This
allowed us to simplify the circuit drastically compared to a relatively complex version that tried
to connect the LED light to the input signal of the XBee.
As a class, we decided to demonstrate communication via XBees, push buttons, and
LEDs. Although we would not ultimately use buttons or LEDs to show proper
communication, we felt this was the most effective demonstration given our experience at the time.
The XBee is ideal for low-power applications and uses the IEEE 802.15.4 networking protocol for fast
point-to-multipoint communication. Since our bots use a rechargeable battery as their main power
source with a limited supply, it is in our favor that the communication module does not
require much power for its operation. The four main pins used are for ground, a 5V power
source, an input source, and an output source (see Figure A.1 for the wiring diagram). Next,
we used a red LED with a 220 Ω resistor. One thing to note is that when using an
LED, it is important to always connect it to a resistor since the LED is a diode and the current
through a diode increases exponentially rather than linearly. Therefore, in order to prevent the
LED from being damaged, we need a resistor to decrease the amount of current going into the
LED.
First we connected the XBee module to the breadboard of our Arduino Shield following
the circuit diagram shown in Figure A.2. Here, we connected DIN and DOUT to their
respective pins (note that only some of the pins can be used for Rx, which handles
receiving data). Then we powered the XBee by connecting it to the 5V voltage source and
ground. Next we wired the LED and resistor on the breadboard. However, instead of
wiring it to the 5V source, we wired the LED between one of the digital pins and ground.
The circuit diagram for this circuit can be found in Figure A.3. Then we integrated the button
circuit on the breadboard, following the circuit diagram in Figure A.4. The diagram
shows a different Arduino board; however, the general topology
is the same for wiring the button. We connected one end to 5V, the other to the resistor and then
to ground, and one other lead to one of the digital pins, which takes in the button state value
as input.
The biggest challenge in carrying out communication was integrating different parts into
one unified system of codes and hardware that was effective and robust. At first our lab group
tried to devise a circuit that took the DOUT of XBee as an input pin for the LED light since that
would allow the LED to turn on whenever the XBee received a signal. However, this model
turned out to be unrealistic because the LED also needed to turn on when the button
was pressed and the XBee was sending a signal. This required the LED to be
connected to two or three different voltage sources respectively connected to the DIN,
DOUT, and the button. Our group concluded that this approach unnecessarily complicated the
design and decided to separate the circuits. As a result, we wired the LED to a
separate pin set to output and sent HIGH whenever the button was pressed or the
xbee.available() command returned true. In addition, when the button state was HIGH after pressing the
button, the XBee sent a signal out using the Xbee.print() command. As the final step of either
sending or receiving a signal, we turned the LED on for three seconds and then turned it off
by setting the pin connected to the LED back to LOW.
Week of 10/26–10/30: Line Following, Communication
The main challenge with line following was understanding how to utilize the QTI sensor
readings and controlling the servos to react accordingly to stay on the line. We knew that the
middle two sensors had to be on black while the two outside sensors had to detect white (unless
on a hashmark), and when these conditions weren't met, the servos had to adjust accordingly.
The raw line following code was the most difficult part of the challenge as it required meticulous
calibration until the bot ran smoothly.
Line following consisted of using the QTI line sensors (see Figure A.5 and
Figure A.6). Each is a close-proximity infrared emitter and receiver pair mounted on a
printed circuit board. There are two main applications of QTI sensors: they can be used as
analog sensors to differentiate between different levels of infrared reflectivity or they can be
used as digital devices that return 1’s when they detect a black line or 0’s if they detect white.
The infrared receiver picks up on the infrared LED’s emissions, with more light leading to a
greater voltage to charge the QTI’s capacitor. When the LED emits infrared rays over a black
surface/line, the majority of emissions are absorbed while on a white surface, the infrared rays
are reflected at a greater rate. Thus, depending on the color of the surface under each QTI
sensor, a black line will charge the capacitor less than a white surface does. Each QTI sensor’s
capacitor's discharge time is measured and fed to the Arduino. Within the Arduino code, the
decay time is run through a short calculation and compared to a threshold. If above the
threshold, the Arduino determined the sensor was over white space and printed a 0; if under the
threshold, the Arduino determined the sensor was over the black line and printed a 1.
To line follow, we implemented a set of four QTI sensors (setup detailed in Appendix C,
Lines 45–70), coding several if-cases corresponding to whether the four sensors picked up
black or white that then told the servos what to do. The servos came preinstalled on the
Arduino board connected to digital pins 12 and 13 (see Appendix C, Lines 9–11 for servo
setup). We calibrated the servos by running a simple code (see Figure A.7) that, with calibrated
wheels, should result in no wheel rotation. It is important to note that 1500 microseconds acts as
the "zero" for servos, with 1400 equivalent to -100 and 1600 equivalent to +100. If the wheels
did move while running the standstill code, we used a screwdriver to adjust the screw in
the rear of each servomotor until there was no visible movement. Then, we screwed the QTI
sensors onto the bot following the instructions shown in Figure A.8 and wired the
sensors to the Parallax board following the schematics in Figure A.9 and Figure A.10.
Our Arduino code was written to initially write out the four-digit binary combination of
what the sensors were picking up and then transform it into a base-10 number; for example, a
serial print of 0011 corresponded to the two leftmost sensors reading the white surface and the two
rightmost sensors reading the black line. This binary 0011 was then read as base-10 3. Each
four-digit combination, and thus a number from 0 to 15, corresponded to an if-case that dictated
how the servos would turn, either keeping the bot moving straight, tilting left slightly, or tilting
right slightly (see Table 1 for the corresponding if-cases and combinations, and Appendix C, Lines
72–122 for the Arduino code). Servos were adjusted in each if-case by changing the values of the
variables vL and vR, which were used in Appendix C, Lines 118–121, where the servo
rotation speeds were set using servoL.writeMicroseconds(1500 + vL) and
servoR.writeMicroseconds(1500 - vR).
Input   Base-10   vL, vR     Direction
0001    1         0, 100     counterclockwise
0010    2         0, 50      counterclockwise
0011    3         0, 50      counterclockwise
0100    4         50, 0      clockwise
0110    6         100, 100   straight
1000    8         100, 0     clockwise
1100    12        50, 0      clockwise
1111    15        0, 0       stop
else              30, 30     straight
Table 1: Case analysis for the QTI line following
Week of 11/2–11/6: Information Gathering, Line Following, and Communication
When it came to making a decision about what device we would implement to sense
light, we had to choose between a fifth QTI sensor and a phototransistor. We initially ran several
tests using both sensors and found that the QTI sensor was not as reliable or consistent as the
phototransistor. The QTI required precise positioning on top of the light source to correctly
pick up that it was on. The QTI was also far more susceptible to ambient light causing false
positives. See Table 2 for trial results comparing the two sensors. The phototransistor we used
had a far greater margin for error in terms of where the bot stopped and took a light sensing
measurement. The difference between a light source turned on and one turned off was very
apparent in the serial monitor during our tests. This allowed us to set a threshold (1.75 V) that
was well above the standard off-light-source and ambient-light voltage range (0.30–0.45 V) while
well under the on-light-source voltage range (2.90–3.30 V, depending on the battery of the light source).
This led to the chances of a false positive using the phototransistor being extremely low.
Sensor            Light ON correct   Light OFF correct   Total correct trials
Phototransistor   14/15              15/15               29/30
QTI sensor        13/15              11/15               24/30
Table 2: Comparison study of the QTI sensor and phototransistor
A phototransistor is a photonic device that acts as a valve that regulates the amount of
electric current that passes through two of its three terminals. The third terminal regulates how
open the valve is, allowing a greater amount of current to flow through when there is more light
shined on the phototransistor and allowing less current to flow through when there is less light
shined. This current then passes through a fixedvalue resistor. By Ohm’s Law, V = IR, as
current (I) increases for a constant resistance R, voltage (V) increases. Thus, the Arduino can
indirectly read how much light is being shined on the phototransistor by relative voltages across
the resistor.
To wire the phototransistor to the LightBot while letting it hang off the side so as to
pass over the tops of the lights, we built an arm out of metal frames and bolts. We then hooked up
the phototransistor to a three-wire power cable, connecting the positive end of the
phototransistor to 5V and the negative end to a 2 kΩ resistor and ground. Then, we wired an
analog pin to the node between the phototransistor and the resistor to read the voltage there as
input. The schematic for wiring the phototransistor can be found in Figure A.11. For the
LightBot, node VA3 is connected to analog pin 3. The phototransistor, after being wired to the
board, was attached to the metal arm on the bot's left side using a secure rubber band. Within
Arduino (see Appendix C, Lines 132–134), we printed the phototransistor's outputs to the
serial monitor to figure out the best threshold. Eventually, the threshold was set to 1.75 V to
allow for the maximal margin of error between the light’s on and off state. See Table 3 for the
test trials in determining the threshold.
Light On Voltage (V)   Light Off Voltage (V)
3.12                   0.36
3.09                   0.34
3.13                   0.39
3.15                   0.44
3.12                   0.41
3.03                   0.40
3.04                   0.38
3.06                   0.37
3.08                   0.33
3.09                   0.34
Table 3: Results from determining the threshold
Week of 11/9 – 11/13: Integrated Sensing, Processing, Navigation, and Communication
This week required us to make it to the end of our sensing phase, stop, and send out our
sensed number to other bots. Integrating the phototransistor with the QTI sensors was critical to
the success of the challenge. It was not feasible to have the phototransistor constantly sensing,
or else every light source would have been counted multiple times, yielding a meaningless count. Thus,
our code centered around the five hashmarks that accompanied each light source as triggers for
our bot. The if-cases previously mentioned that dealt with line following only applied when the
QTIs didn't output 15 (1111, all black), representing a hashmark. This week, we edited the code
(see Appendix C, Lines 118–122) so that anytime one of the five hashmarks was detected,
several counters were adjusted and the phototransistor was activated.
Before each run, our bot began with several counters (see Appendix C, Lines 1–2 and
16–19) to keep track of certain values. The first counter, lightcounter, started at 0 and was
straightforward. Every time the bot reached a hashmark (read 15), the phototransistor would
take one measurement as the bot stopped for a brief moment. lightcounter would increase by
one for every time the phototransistor took a measurement that was above the 1.75V threshold
and remain unchanged if the value was below the threshold (see Appendix C, Lines
136–138). The second counter, STOPcount, began at 0 and increased by 1 every time the
bot sensed a hashmark. We then kept STOPcount within an if tree that would stop the bot once
STOPcount reached six: five hashmarks plus the line that connects all five lanes before the
parking lane.
The biggest challenge of this week was to successfully integrate the light sensing, line
following, and communication codes together. This required that we put the three different
codes into one code that would successfully loop and carry out a coherent line of actions. One
of the problems our group faced was that as the circuit on the breadboard got more
complicated, the limited voltage available to each element changed, consequently
changing the various threshold values we had determined. For the phototransistor, the threshold
value had decreased significantly (from 1.75 V to 0.75 V) so that we had to adjust the value in
the final code.
We had trouble laying down the specific course of action due to inconsistency in line
following. For some reason, despite the fact that we coded the bot to stop for a second
whenever it hit a hash mark, it failed to sense the hashmark consistently. Since we wanted to
set the bot to sense the light every time it hit the hash mark, not being able to consistently
detect the hash mark became a huge problem. We spent a huge amount of time figuring out what
was causing this bug and eventually figured out that the QTI sensors were not sampling
frequently enough, so the bot could run over a hash mark in between two
QTI readings. To fix this, we reduced the total time it took for the bot to run through
the main loop so that the QTI sensors would be polled more rapidly. In doing
so we reduced various delay values in hopes of significantly improving the consistency.
However, most of these adjustments had only minimal effect. Then, we
reduced the delay time used to recharge the capacitor on each QTI sensor. We decreased the
delay from 200 microseconds to 10 microseconds (see Appendix C, Line 53), which
significantly improved the consistency of line following and hash mark sensing. From this, we
concluded that our main issue was not knowing the RC time of the QTI sensor and having the code
delay for much longer than what was needed.
Another problem that arose after getting line following to be more consistent was
double-counting of turned-on light sources. To combat this problem, we hardcoded a short
"move straight" command after landing on a hash mark. We had to tinker with the timing a little
to make sure that the bot would move off the hashmark without ending up off the line entirely.
See Appendix C, Lines 182–184 for the Arduino code that makes the bot move straight for 0.25
seconds before reentering the line following loop until it reaches the next hash mark. We found
that any longer and our bot ran the risk of falling off the line entirely, and any shorter and the bot
risked double-counting. See Table 4 for the trials dealing with double-counting.
Hardcoded "go" length   Double-counted light?   Bot fell off the line?
0.10 sec                Yes                     No
0.15 sec                Yes                     No
0.20 sec                Yes                     No
0.25 sec                No                      No
0.30 sec                No                      Yes
0.35 sec                No                      Yes
Table 4: Double-counting trial results
Week of 11/16–11/20: Team Sensing, Processing, Navigation, and Communication
This week, our main goal was to successfully finish our individual challenge and then
communicate with the rest of the team to park in the correct orientation. We already had sensing
fully accomplished the week prior and focused mainly on successfully communicating our
sensed number (lightcounter) to the rest of the team, receiving other groups’ sensed numbers,
and parking. When our bot completed its individual task, it stopped when STOPcount equaled
six. This week, we adjusted the code to include a boolean called wait that dictated whether to
begin parking (if the sensed number was 1) or to wait for a signal from the bot with lightcounter
1, at which point the wait boolean switched from its initial true to false (see Appendix C, Line
153). When receiving signals from other groups, we had to convert from the
ASCII characters the XBee signals were transmitted as by subtracting 48. We did not need to
convert any value before sending because the conversion into ASCII happened automatically
via the XBee. See Appendix C, Lines 1–7, 13–14, and 142–168 for the Arduino coding of the XBee.
After receiving the signal to begin parking (or if the sensed number was 1), we wrote
code for the bot to cross over from the end of the initial sensing phase onto the parking lane and
then stop at the appropriate spot. Going from the end of the sensing phase onto the
parking line required lots of trial and error to properly line up the QTI sensors with the new line
after crossing white space. Our final, trial-tested method involved hardcoding the bot to turn
clockwise in place for 0.5 seconds and then run straight for 1.2 seconds. After the 1.2
seconds, the bot returned to the main line follower code and traced the parking line. To park in
the appropriate slot, the bot implemented another counter: stopnum. stopnum took on the value
6 - lightcounter and told the bot how many hashes to count before stopping. For example, if
LightBot sensed four lights, stopnum would take on the value of two, stopping on the second
hashmark of the parking lane. See Appendix C, Lines 169–179.
Another difficulty was making sure that the communication process worked for all of the
bots, since the entire system couldn't work if any of the bots failed to communicate. This
troubleshooting process required teamwork from everyone in our lab group, but we couldn't
coordinate enough shared lab time due to differences in our schedules.
The final part of modifying the bot was attaching an LCD panel to display the detected
value when the bot hits the end of the first lane. Wiring the LCD on the Arduino Shield was
relatively simple and straightforward. We screwed two L-shaped extensions onto both sides of the
Shield board and screwed the LCD board onto the extensions. The configuration for attaching
the LCD can be found in Figure A.12 in the appendix. After attaching the LCD on LightBot, the LCD
needed to be connected to the Arduino Shield using wires. In doing this, we followed the
schematic in Figure A.13, which can be found in Appendix A. The initial setup for the LCD can be
found in Appendix C, Lines 30–42. After wiring the LCD, we wrote a function called callLCD,
which takes a numerical value as input and displays "Turrets detected:" along with that
value. We call callLCD, with lightcounter as the input, inside the if statement if (STOPcount ==
stopnum){}. This allows the bot
to display the number of lights that are on when the bot hits the sixth hash mark, which is the
end of the straight lane. An example of the LCD display at the end of the lane can be
found in Figure A.14. The code for callLCD can be found in Appendix C, Lines
193–203.
Week of 11/30–12/4: Full System
Heading into the final presentation, our bot was fully functioning. It could correctly line
follow and detect lights, display the light count on the LCD at the end of the initial path, wait for
the signal from the bot with lightcounter 1 sensed, and then park in the correct spot. We tested
its capabilities several times without wall power and found all the elements to run smoothly. We
ran more trials with the bot to ensure that no issues would arise.
However, the final demonstration did not go exactly as planned, as several other groups
had problems sensing the right number and thus sending the correct signal via XBee. We did
not build backup code for the case of failed communication between bots because the
condition was that all the bots had to make it to the correct parking spot, and anything short of
that was equivalent to getting none of the bots parked.
Analysis and Discussion
Upon completion of the Integrated Design Challenge, there are a handful of thoughts
that stick out. What is most noticeable about our progress throughout the project was the cycle
of stagnation and rapid improvement. For example, line following proved to be quite a struggle
for the longest time, with jerky movements and not enough measurements to detect the
hashmarks reliably enough to initiate the phototransistor. One change in a delay time fixed all of
our problems and got the ball rolling for integrating light sensing and hash mark sensing. We
progressed so quickly from there until we reached our second major roadblock: communication.
We were able to send our lightcounter value to other bots but were, for some
reason, unable to receive anything. Eventually, after plenty of reading and troubleshooting, we
discovered that the input pin for our XBee was invalid. After a brief conversation with the lab TA,
we realized only some of the pins can be used as an XBee Rx (receiving pin): pins 10, 11, 12,
13, 50, 51, 52, 53, 62, 63, 64, 65, 66, 67, 68, and 69.
In terms of cost (see the cost estimate in Appendix B), the main tradeoff came from
deciding whether to use a QTI sensor (for an extra $29.99) or a phototransistor (for $1.49).
Because our initial tests revealed that the margin of error inherent with using a QTI sensor was
too small to be reliable while the phototransistor was far more effective and didn’t require exact
placement above each light to pick up a noticeable change, the decision to use the cheaper
phototransistor as our primary sensor was simple.
There is also the fact that our LightBot utilized four QTI line sensors when we would
have been fine using just three. This could have reduced our cost while not sacrificing any
efficacy based on some of the other bots that we saw. It is possible that three line following
sensors could have been just as reliable to line follow and detect every hashmark as four
sensors.
One aspect of the design that could have been streamlined was the metal frame arm
used to hold the phototransistor over the light sources: it could have been replaced with a solid bar with fewer
individual pieces capable of shifting and throwing off calibration. The arm was the best we could
do with the available resources and proved to be reliable regardless. Another was the matter of
the rubber tires. On several unlucky runs, the tires came undone and caused movement to be
severely compromised. Had we had a better method to secure the tires to the wheels, it
would have improved the bot's reliability. The more prevalent problem with our line following
was that our bot occasionally jerked instead of maintaining a smooth course because
an unused metal bar stuck in the front of our bot prevented us from attaching the
outer QTI sensors closer to the middle. See Figure A.5 in Appendix A.
Appendix B: Trade Study

Normalized value rubric (10 = best, 2 = worst):
Cost ($): <3 (10), 3–5 (9), 5–7 (8), 7–10 (7), 10–13 (6), 13–17 (5), 17–25 (4), 25–50 (3), 50–100 (2)
Reliability (%): >95 (10), 90–95 (9), 85–90 (8), 80–85 (7), 75–80 (6), 70–75 (5), 60–70 (4), 50–60 (3), 25–50 (2)
Safety: Baby proof (10), Extremely safe (9), Very safe (8), Safe (7), Somewhat safe (6), Risky (5), Somewhat risky (4), Very risky (3), Possible danger (2)
Hassle: None (10), Not noticeable (9), Very little (8), A little (7), Some (6), A good bit (5), A lot (4), Too much (3), Way too much (2)
Technical Difficulty: None (10), Barely any thought (9), Very little thought (8), A little thought needed (7), Some thought needed (6), A good amount of thought (5), A lot of thought (4), Difficult (3), Very difficult (2)
Factor                 Weight   Reason
Cost                   4        Want to spend a modest amount
Reliability            10       Important to consistently get the same working results
Safety                 1        Not much harm can come from these sensors
Hassle                 3        We have time to work with the best equipment if exhausting
Technical Difficulty   5        Want to keep the robot simple if we can
Sensor comparison (Total = Weight × Normalized Value):

Factor        Weight   IR Phototransistor   QTI Line Follower   Temperature Sensor
                       Norm    Total        Norm    Total       Norm    Total
Cost ($)      4        10      40           3       12          7       28
Reliability   10       10      100          6       60          1       10
Safety        1        9       9            9       9           9       9
Hassle        3        9       27           9       27          9       27
Difficulty    5        8       40           6       30          3       15
Grand Total            216                  138                 89

The IR phototransistor ends with the highest overall score, meaning it is the best sensor to use in terms of
our weighted parameters.