Usability Evaluation of Beep-To-The-Box
Young Seok Lee, Santosh Basapur, Harry Zhang, Claudia Guerrero, Noel Massey
Motorola Applied Research Center
1295 E. Algonquin Rd.
Schaumburg IL, 60196, U.S.A.
{younglee, sbasapur, harryzhang, claudiag, noel.massey}@motorola.com
ABSTRACT
Radio Frequency Identification (RFID) provides various
opportunities to increase the productivity of retail businesses. In
this paper, we describe a usability evaluation study of an RFID-
based location tracking application called Beep-To-The-Box
(BTTB). The experiment was conducted in a simulated retail
store to gain an in-depth understanding of the usefulness and
usability of the prototype and to inform the design of its visual
and audio user interface features. We describe the features of
BTTB, report the experimental results, and discuss the insights
gained in order to provide design recommendations for the final
product design.
Categories and Subject Descriptors
H.5.2 [User Interfaces]: Evaluation/methodology, User-centered
design.
General Terms
Design, Experimentation, Human Factors
Keywords
Usability, visual and auditory UI design, RFID, Indoor location
tracking
1. INTRODUCTION
Radio Frequency Identification (RFID) has recently become a
viable replacement for the Universal Product Code (UPC)
technology in many industries, including the retail business. Its fast
growth and huge potential benefits have motivated major moves
independently taken by large retailers such as Wal-Mart, Target,
and Walgreens, to name a few [1]. In response, a number of
major technology companies, including IBM, Intel, and Motorola,
have invested in creating various retail solutions using
RFID that provide benefits such as improved inventory control,
simplified business processes, and reduced labor costs.
An RFID system consists of two primary components: a tag and
a reader. An RFID tag, similar to a UPC label, is usually attached to
an object to be tracked; a reader is then used to track tagged objects. In
the retail space, RFID tags are useful as identification and proximity
sensors: when a reader can access the information on a tag, the
reader can identify the tag (or the object bearing it) as
well as infer that the tag is in close physical proximity. In an effort
to utilize this potential, researchers at the
Motorola Applied Research Center have developed a mobile
application, called Beep-To-The-Box (BTTB), which infers the
physical proximity of a tag and directs users to the location of the
tagged item. In this paper, we describe a usability evaluation
study of BTTB conducted to gain an in-depth understanding of the
usefulness and usability of the prototype and to inform the design
of its visual and audio user interface features. The remainder of
this paper presents the major features of BTTB and the details of
the laboratory-based usability study, and discusses how the results
provided the basis for practical design recommendations for the
final product.
2. Beep-To-The-Box Prototype
BTTB is a mobile application that reads a passive RFID tag and
locates an item that bears the tag. A passive tag has no battery of
its own and uses the incoming radio waves broadcast
by a reader to power its response. The BTTB prototype was built on
a Motorola MC9090 hand-held mobile computer (see Figure 1)
with an ultra-high frequency (UHF) RFID reader operating in the
902-928 MHz frequency range.
After the hardware was assembled, a series of tests was conducted to
measure RFID tag reading performance as a function of antenna
orientation, tag orientation, transmission power, frequency, and
distance. Additional tests checked the effects of
height, angle, and material as well. Next, a software team
developed an RFID range-finding algorithm that determines the
distance from an RFID reader to a target tag. The team decided
that the distance measurement need not be absolute nor include
any directionality, because determining an absolute distance would
require calibrating the reader to many different kinds of
environments, materials, and tag/antenna orientations. Therefore,
the output of the algorithm was the real-time change in distance to
the target, measured using the Received Signal Strength
Indicator (RSSI).

Copyright is held by the author/owner(s). MobileHCI '10, September 7-10, 2010, Lisbon, Portugal. ACM 978-1-60558-835-3/10/09.

Figure 1. BTTB prototype on MC9090 and UI
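As a concrete illustration, the relative-range idea described above (no absolute distance, only the real-time change in smoothed RSSI) could be sketched as follows. The function names and smoothing factor are our own illustrative assumptions, not the prototype's actual implementation:

```python
# Illustrative sketch of a relative-range indicator built on RSSI:
# no absolute distance, only the real-time change in smoothed signal.
# The smoothing factor (alpha) is an assumed parameter for illustration.

def smooth_rssi(readings, alpha=0.3):
    """Exponentially smooth raw RSSI samples to damp oscillation."""
    smoothed, current = [], None
    for r in readings:
        current = r if current is None else alpha * r + (1 - alpha) * current
        smoothed.append(current)
    return smoothed

def relative_change(smoothed):
    """Successive differences: a positive value means the signal is
    strengthening, i.e., the reader is likely moving toward the tag."""
    return [b - a for a, b in zip(smoothed, smoothed[1:])]
```

A larger `alpha` shortens response time at the cost of larger steady-state variation, which mirrors the sensitivity tradeoff the prototype exposed to users.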
In the meantime, the team began designing user interfaces and
decided to experiment with a "Geiger counter" metaphor, in which the
number of sound pulses, together with their tempo, indicates
how close the reader is to the target. If the user carrying a
reader is a long way from the target, few, widely spaced
sound pulses are emitted; if the user is close to the target, many
closely spaced sound pulses are generated. In addition, we
provided a visual indicator of signal strength using a bar-graph
meter (see Figure 1). The prototype allowed users to select a
sensitivity setting that adjusts a tradeoff between response
time (defined as the time required to reach the steady-state
response) and accuracy (defined as the steady-state variation)
of reading performance. For example, if users selected the higher
sensitivity setting, the prototype responded to the tag with a
shorter response time but larger oscillations, which lowered the
accuracy compared to the normal sensitivity setting.
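The Geiger counter mapping and the bar-graph meter described above can be sketched as follows. The interval bounds, bar count, and the normalized 0.0-1.0 signal scale are illustrative assumptions rather than the prototype's actual parameters:

```python
# Sketch of the "Geiger counter" audio mapping and bar-graph meter:
# stronger signal -> shorter interval between pulses and more bars.
# All numeric ranges below are illustrative assumptions.

def beep_interval_ms(signal_strength, min_interval=100, max_interval=1500):
    """Map a normalized signal strength (0.0-1.0) to pulse spacing in ms."""
    s = max(0.0, min(1.0, signal_strength))
    return max_interval - s * (max_interval - min_interval)

def signal_bars(signal_strength, n_bars=8):
    """Map the same normalized signal to the visual bar-graph meter."""
    s = max(0.0, min(1.0, signal_strength))
    return round(s * n_bars)
```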
3. Usability Evaluation
3.1 Objectives
The objective of this evaluation study was to provide a formative
usability evaluation for the BTTB prototype with a specific aim to
assess 1) usefulness and usability of visual and audio alerts, 2)
user preferences between the normal and medium sensitivity
setting, 3) perceived usefulness of the prototype in the retail
environment, and 4) additional features to improve the user
experience. As explained earlier in this paper, the normal setting
was tuned to deliver high accuracy but with some latency, while the
medium setting was more responsive to the same signal
strength, which sometimes resulted in intense auditory feedback.
We aimed to make this assessment across various contexts of use
in the retail store environment and to provide design
recommendations to improve the prototype as part of the iterative
development process.
3.2 Method
3.2.1 Participants and Scenarios
Ten participants were recruited from the Motorola Schaumburg
campus, IL, U.S.A. Their technological backgrounds varied from
sales account managers to industrial designers. Two males and
eight females participated in the study, and the average age was
38.8 years (min = 25; max = 55). Five participants reported previous
experience with RFID. The study was conducted in a simulated
retail store at the Motorola Innovation Center with five scenarios
representing various use cases of BTTB, such as the sales
floor and the inventory backroom (Table 1 and Figure 2).
3.2.2 Procedure
Each participant was given an overview of the experiment and
asked to fill out a brief background questionnaire on their previous
experience with retail stores, scanners, and RFID readers. Next,
we provided a brief introduction to the prototype and asked them
to familiarize themselves with the prototype by performing trial
runs with sample RFID tags. In the experimental session, they
were asked to complete the tasks (e.g., look for an item with RFID
tag #000000000013) in the five scenarios.
Table 1. Description of the five scenarios (task: find a target item)

Scenario 1: 20 clothing items placed on hangers (RFID tags on the back of the items)
Scenario 2: 30 folded shirts piled on a table (RFID tags on the inner side of the shirts)
Scenario 3: 50 boxes/cartons placed on pallets (RFID tags on the side of the boxes)
Scenario 4: 25 unfolded clothes piled in a box (RFID tags on top of price tags)
Scenario 5: anywhere in the entire store (RFID tags on the side or back of the item)
Figure 2. Context of use (Scenario 1)
The order of the scenarios and the sensitivity settings (normal vs.
medium) was randomized to reduce carry-over effects.
Each participant thus performed a total of 10 tasks during
the experimental session (two settings for each of the five scenarios).
Participants were asked to think aloud about what they saw, heard, and
thought during task performance. After completing each
task, participants rated the 'ease of task completion'
on an 8-point Likert-type scale (Q: How easy was it to locate the
item using this device in this setting?), and follow-up questions
were asked as necessary. After completing all tasks,
participants rated the overall usefulness, ease of use, learnability,
and satisfaction of the prototype on the same 8-point Likert-type
scale, then summarized their overall experience and offered
suggestions to improve the prototype. The experiment lasted
about one hour, and each session was recorded with a video
camera for critical incident analysis as well as content
analysis [2].
4. RESULTS
4.1 Auditory and visual cue
4.1.1 Auditory feedback is a primary cue
All participants relied primarily on the auditory feedback during
task completion. They mentioned that searching for an item is a
"visual" task, so using audio feedback felt more natural
and easier. Some participants did not use the visual feedback at
all; they reported difficulty searching for the tag visually while
simultaneously keeping an eye on the screen for visual feedback.
Several participants also remarked that it was difficult to look at
the display when the item was placed very high or low (Scenarios
2, 3, and 4). Exemplar remarks include:
“I think audio works really well. The visual, I did not use it
almost at all because it is hard to look at the box and look at the
indicator and come back and forth. That is very tedious. With
audio, even having eyes closed, I can find a box easily by
sweeping it over the pallet.”
4.1.2 Visual feedback is a secondary cue
Most participants used the visual feedback as secondary
information. They used the audio (beeps) to sense the general
direction of the target, but once they moved into the vicinity of the
target item, they began using the visual feedback for two reasons:
1) to detect subtle changes in signal strength and 2) to confirm
the accuracy of the audio feedback. Participants commented that it was
sometimes difficult to discern subtle signal changes in the
sound, so they referred to the display to check for a subtle
change (e.g., between four and five bars of signal). Below
is a typical remark:
“I look at display to see if there is any difference. I used it to
double check as secondary information.”
4.1.3 Final confirmatory visual cue is necessary
Most participants mentioned that a clear confirmatory visual
cue would be helpful for completing the search task. Since
most participants used the visual cue to confirm that they had found
an item, they wanted a clear cue on the display that delivers
a kind of "you got it" confirmation. Several participants expected
the bar to top out when the signal was strongest, but this was
not the case in the prototype: even when the device was right on
top of the tag, the bar sometimes did not top out, which confused
several users. They suggested that when the item is
extremely close, changing the color of the display or flashing a green
light might be a helpful "yes, this is the one"
indication. Below is a typical remark:
“If I find one item, it should light up like showing an indicator.
Yes this is the one for sure.”
4.1.4 Directional indication would be helpful
The directionality of the RFID tag signal caused some problems in
localizing items in the backroom setting, where boxes were piled
up in multiple layers. At the initial scan, several participants heard
fast beeping when they aimed the device at the target item
from a distance, which misled them into searching a box in the front
layer although the target item was in the second or third
layer. Several participants wanted a directional indication
on the display, similar to a compass.
4.1.5 Other improvements
Some participants mentioned that it was also difficult to see and
differentiate subtle changes in the visual cue (e.g., six bars from
five); a color code or gradient would increase the readability of
the visual cue and thus speed up the search task. A participant
with relatively low vision suggested that an adjustable font size for
the RFID tag number would help with quick confirmation of the
tag number or item name when the prototype detects a target.
Several participants also reported that the fast beep became
somewhat irritating after prolonged exposure, and others commented
that the beeping might interrupt customers' shopping experience in
retail stores. Participants suggested easy access to mute, volume
control for the beep, and a headset so that customers would not have
to hear the beeps. In addition, participants suggested that changing
the tone or timbre of the beep would help convey gradual
changes in the sound.
4.2 Setting Preferences (normal vs. medium)
Participants’ preference between the two sensitivity settings
seemed to be distributed almost evenly. The mean rating of ‘ease
of use’ for each setting also indicated that participants perceived
almost equal ‘ease of use’ (Normal = 5.87 vs. Medium = 5.82).
Two participants did not notice any difference between the
two settings; those who did explained the
reasons for their preference.
Participants who preferred the normal setting remarked that it was
more accurate and sensitive enough to differentiate subtle changes
in signal strength. In particular, when they stood far from the
target, the normal setting provided a "from no beeps to beeps"
transition that helped them decide where to focus their attention
in the general area. Also, when they were very close to the target
item, they could sense subtle tempo changes in the auditory alerts
more easily than with the medium setting.
However, other participants preferred the medium setting because
it beeped faster and, according to participants, louder (in fact, it
beeped at the same loudness level), which helped them feel aware
or confident that they were getting close to the target item. However,
when they were very close to the target item, the beeping was too
intense for some users to differentiate between the subtleties of
higher and lower signals. They also found that the audio in the
medium setting was not synchronized with the visual cue, which
caused confusion and, consequently, mistrust of the audio.
4.3 Perceived Usefulness
All participants agreed that the prototype would be very useful for
a broad search in a large area, for instance, backrooms full of
boxes with small labels, shoe stores, pharmacies, or stores with high-
end items. They also commented, however, that it would not be
useful for searching for a small item in a small area, such as finding a
screw in a bin, because the device would give almost the same
reading anywhere above the bin. This was evident in the quantitative
data (Figure 3). Mean ratings of 'ease of use' were calculated for the
five scenarios, and a repeated-measures analysis revealed a
significant difference among the five scenarios (F(4,36) = 6.10, p
< 0.01). Pair-wise comparisons revealed that the device was
most useful in Scenario 5 (i.e., locating an item in the store)
and least useful in Scenario 3 (i.e., locating a box on a pallet)
and Scenario 4 (locating a piece of clothing in a box) (Figure 3). There
was no significant 'scenario x setting' interaction (p = 0.36).
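For readers who want to reproduce this kind of analysis, the one-way repeated-measures F statistic can be computed from a subjects-by-scenarios rating matrix as below. This is a generic textbook computation, not the authors' analysis script:

```python
# Generic one-way repeated-measures ANOVA F statistic over a
# subjects x conditions rating matrix (pure Python, no libraries).

def repeated_measures_F(ratings):
    """ratings[i][j] = rating of subject i in condition j."""
    n, k = len(ratings), len(ratings[0])        # subjects, conditions
    grand = sum(sum(row) for row in ratings) / (n * k)
    cond_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    subj_means = [sum(row) / k for row in ratings]
    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_error = ss_total - ss_cond - ss_subj      # residual variation
    df_cond, df_error = k - 1, (k - 1) * (n - 1)
    return (ss_cond / df_cond) / (ss_error / df_error)
```

With 10 participants and 5 scenarios, the degrees of freedom are (5-1) = 4 and (5-1)(10-1) = 36, matching the F(4,36) reported above.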
Figure 3. Mean ratings of 'ease of use' across scenarios (x-axis: Scenarios 1 (shirt on rack) to 5 (item in store); y-axis: mean rating, 0-8)
5. DISCUSSION
Based on the experimental data, we confirmed the potential
usefulness of the BTTB prototype in retail environments. We
found that BTTB would be very useful for a broad search in a
large area, such as finding a pair of jeans in an apparel shop.
However, due to the radio signal's propagation characteristics, its
usefulness would decrease when searching for an item in a small area
(e.g., finding a screw in a bin) or in a place where items are piled
up in multiple layers (e.g., inventory rooms with boxes in multiple
rows and columns).
This study also revealed that participants used the auditory
feedback as the primary cue for proximity sensing, while the visual
feedback served as a confirmatory cue once users were
close to the target item. Because the task was a visual search,
involving an active scan of the visual environment for a particular
object (target) among other objects (distracters), it makes sense
that the auditory modality became the more natural way to receive
directional information in the general area. The Geiger counter
metaphor also seemed to be a proper mapping for this type of task,
informing users of real-time changes in distance to the target
while reducing demands on users' visual attention. However,
when users were within close proximity of the target, they
encountered a low-resolution problem (i.e., users could not
distinguish the tempo difference between two sounds and so
could not tell whether a value was going up or down) and tended to
rely on the visual feedback, which offered a much easier reading.
As described above, the auditory and visual information provide
unique but complementary advantages, and the combination of the
two modalities becomes a powerful tool in the target search task.
However, integrating the two modalities in the user interface
design requires an in-depth understanding of users' information
needs over the temporal course of task execution, so we provide
some insights and design recommendations for BTTB based on
the experimental results and previous human factors guidelines for
display design. We believe these suggestions are applicable to
designing UIs for similar target-search devices.
First, the range of beeps should be large enough to deliver high
resolution. Many auditory parameters are not suitable for high-
resolution display of quantitative information [3]; using pulse
rate alone, only a few different values can be
presented unambiguously. As found with the medium setting in
this study, audio alerts tuned to generate rapid beeping even at
the lowest detectable signals made it difficult for users to detect
subtle changes in the already fast beeping. The range of beeps
should be extended to allow a discernible indication of subtle
differences. In addition, previous literature suggests that
additional or redundant cues may help differentiate subtle
changes in the sound [3]; pitch, timbre, or intensity may serve as
supplemental cues. The use of timbre may also reduce the irritation
caused by the fast beeping observed in this study. Also, when
a headset is used with the device, spatial (directional) information
can be provided using stereo sound [4].
Secondly, a confirmatory visual cue should be provided on the
display to let users double-check a find. Participants
tended to confirm their finding with the visual cue but
were confused because the bar never reached the top. We
recommend softening the sensitivity at high signal strength
levels (i.e., when the reader is close to the target) so that the visual
bar can reach the top at a weaker signal strength, offering
a "you got it" confirmation. Also consider flashing, lighting
up, or changing the color of the bar to increase the visibility of the
final confirmation. The tactile modality could duplicate
the confirmatory cue via a vibration actuator inside the
handle. The readability of the visual cues should also be
improved with color or a gradient to clarify subtle
differences (e.g., five bars versus six).
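The recommendation to soften sensitivity near the top of the range and add a saturating "you got it" state could be sketched as follows. The confirmation threshold and bar count are illustrative assumptions:

```python
# Sketch of the confirmatory-cue recommendation: compress the visual
# scale below a confirmation threshold so the bar can top out near the
# tag, and expose an explicit "you got it" flag for flashing or a color
# change. The threshold and bar count are illustrative assumptions.

def display_state(signal_strength, n_bars=8, confirm_threshold=0.85):
    """Return (bars shown, confirmed) for a normalized signal strength."""
    s = max(0.0, min(1.0, signal_strength))
    if s >= confirm_threshold:
        return n_bars, True  # saturate the meter and flash confirmation
    # Spread the sub-threshold range over the full meter.
    return int(s / confirm_threshold * n_bars), False
```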
Lastly, the audio and visual feedback must be synchronized to
indicate the same 'intensity' information. In this study, we found
that audio feedback that was not in line with the visual cue
caused confusion and, consequently, mistrust of one of the two
channels. The two modalities should be synchronized because
users attend to both sources to comprehend the information
presented during dynamic task completion.
6. ACKNOWLEDGMENTS
We thank Tim Collins, Swee Mok, Tom Mathew, Julius Gyorfi,
and Tom Babin for developing the prototype and participating in
the evaluation study design session.
7. REFERENCES
[1] Weier, M. 2008. Walgreens deploying RFID in distribution
centers. InformationWeek. Retrieved from:
http://www.informationweek.com/news/mobility/RFID/showArticle.jhtml?articleID=210601894
[2] Stanton, N., Salmon, P., Walker, G., Baber, C., and Jenkins, D.
2006. Human Factors Methods: A Practical Guide for
Engineering and Design. Ashgate.
[3] Brewster, S. 2002. Nonspeech auditory output. The human-
computer interaction handbook, L. Erlbaum Associates Inc.
[4] Brewster, S.A., Wright, P.C. & Edwards, A.D.N. 1992. A
detailed investigation into the effectiveness of earcons. In
Proceedings of the First International Conference on
Auditory Display, pp. 471-4

More Related Content

Similar to MobileHCI2010 p345-lee

IRJET- Object Detection and Recognition for Blind Assistance
IRJET- Object Detection and Recognition for Blind AssistanceIRJET- Object Detection and Recognition for Blind Assistance
IRJET- Object Detection and Recognition for Blind AssistanceIRJET Journal
 
IEEE augmented reality learning experience model (ARLEM)
IEEE augmented reality learning experience model (ARLEM)IEEE augmented reality learning experience model (ARLEM)
IEEE augmented reality learning experience model (ARLEM)fridolin.wild
 
Apollon - 22/5/12 - 11:30 - Local SME's - Innovating Across borders
Apollon - 22/5/12 - 11:30 - Local SME's - Innovating Across bordersApollon - 22/5/12 - 11:30 - Local SME's - Innovating Across borders
Apollon - 22/5/12 - 11:30 - Local SME's - Innovating Across bordersimec.archive
 
Requirement:HW6 Problem 2 Design a mobile robot capa.docx
Requirement:HW6 Problem 2 Design a mobile robot capa.docxRequirement:HW6 Problem 2 Design a mobile robot capa.docx
Requirement:HW6 Problem 2 Design a mobile robot capa.docxaudeleypearl
 
IRJET- Text Recognization of Product for Blind Person using MATLAB
IRJET- Text Recognization of Product for Blind Person using MATLABIRJET- Text Recognization of Product for Blind Person using MATLAB
IRJET- Text Recognization of Product for Blind Person using MATLABIRJET Journal
 
IRJET- Indoor Shopping System for Visually Impaired People
IRJET- Indoor Shopping System for Visually Impaired PeopleIRJET- Indoor Shopping System for Visually Impaired People
IRJET- Indoor Shopping System for Visually Impaired PeopleIRJET Journal
 
IRJET - Blind Guidance using Smart Cap
IRJET - Blind Guidance using Smart CapIRJET - Blind Guidance using Smart Cap
IRJET - Blind Guidance using Smart CapIRJET Journal
 
IRJET- Review on Text Recognization of Product for Blind Person using MATLAB
IRJET-  Review on Text Recognization of Product for Blind Person using MATLABIRJET-  Review on Text Recognization of Product for Blind Person using MATLAB
IRJET- Review on Text Recognization of Product for Blind Person using MATLABIRJET Journal
 
Development of smart cane for blind people
Development of smart cane for blind peopleDevelopment of smart cane for blind people
Development of smart cane for blind peoplePradeep Thakur
 
Passive Radio Frequency Exteroception in Robot-Assisted Shopping for the Blind
Passive Radio Frequency Exteroception in Robot-Assisted Shopping for the BlindPassive Radio Frequency Exteroception in Robot-Assisted Shopping for the Blind
Passive Radio Frequency Exteroception in Robot-Assisted Shopping for the BlindVladimir Kulyukin
 
Ch02 project selection (pp_tshare)
Ch02 project selection (pp_tshare)Ch02 project selection (pp_tshare)
Ch02 project selection (pp_tshare)Napex Terra
 
Usability testing for qualitative researchers
Usability testing for qualitative researchersUsability testing for qualitative researchers
Usability testing for qualitative researchersKay Corry Aubrey
 
Usability testing for qualitative researchers
Usability testing for qualitative researchersUsability testing for qualitative researchers
Usability testing for qualitative researchersResearchShare
 
IRJET - Visual E-Commerce Application using Deep Learning
IRJET - Visual E-Commerce Application using Deep LearningIRJET - Visual E-Commerce Application using Deep Learning
IRJET - Visual E-Commerce Application using Deep LearningIRJET Journal
 
IRJET - Expiry Date and Cost Tracking in Medicine for Visually Impaired
IRJET - Expiry Date and Cost Tracking in Medicine for Visually ImpairedIRJET - Expiry Date and Cost Tracking in Medicine for Visually Impaired
IRJET - Expiry Date and Cost Tracking in Medicine for Visually ImpairedIRJET Journal
 
Machine Vision On Embedded Platform
Machine Vision On Embedded Platform Machine Vision On Embedded Platform
Machine Vision On Embedded Platform Omkar Rane
 
Machine vision Application
Machine vision ApplicationMachine vision Application
Machine vision ApplicationAbhishek Sainkar
 
The existing and future role of RFID technology in Dairy Supply Chain from Fa...
The existing and future role of RFID technology in Dairy Supply Chain from Fa...The existing and future role of RFID technology in Dairy Supply Chain from Fa...
The existing and future role of RFID technology in Dairy Supply Chain from Fa...Shuhab Tariq
 
IRJET- Oranges Sorting using Arduino Microcontroller
IRJET- Oranges Sorting using Arduino MicrocontrollerIRJET- Oranges Sorting using Arduino Microcontroller
IRJET- Oranges Sorting using Arduino MicrocontrollerIRJET Journal
 

Similar to MobileHCI2010 p345-lee (20)

IRJET- Object Detection and Recognition for Blind Assistance
IRJET- Object Detection and Recognition for Blind AssistanceIRJET- Object Detection and Recognition for Blind Assistance
IRJET- Object Detection and Recognition for Blind Assistance
 
IEEE augmented reality learning experience model (ARLEM)
IEEE augmented reality learning experience model (ARLEM)IEEE augmented reality learning experience model (ARLEM)
IEEE augmented reality learning experience model (ARLEM)
 
Apollon - 22/5/12 - 11:30 - Local SME's - Innovating Across borders
Apollon - 22/5/12 - 11:30 - Local SME's - Innovating Across bordersApollon - 22/5/12 - 11:30 - Local SME's - Innovating Across borders
Apollon - 22/5/12 - 11:30 - Local SME's - Innovating Across borders
 
Requirement:HW6 Problem 2 Design a mobile robot capa.docx
Requirement:HW6 Problem 2 Design a mobile robot capa.docxRequirement:HW6 Problem 2 Design a mobile robot capa.docx
Requirement:HW6 Problem 2 Design a mobile robot capa.docx
 
See-Tag
See-TagSee-Tag
See-Tag
 
IRJET- Text Recognization of Product for Blind Person using MATLAB
IRJET- Text Recognization of Product for Blind Person using MATLABIRJET- Text Recognization of Product for Blind Person using MATLAB
IRJET- Text Recognization of Product for Blind Person using MATLAB
 
IRJET- Indoor Shopping System for Visually Impaired People
IRJET- Indoor Shopping System for Visually Impaired PeopleIRJET- Indoor Shopping System for Visually Impaired People
IRJET- Indoor Shopping System for Visually Impaired People
 
IRJET - Blind Guidance using Smart Cap
IRJET - Blind Guidance using Smart CapIRJET - Blind Guidance using Smart Cap
IRJET - Blind Guidance using Smart Cap
 
IRJET- Review on Text Recognization of Product for Blind Person using MATLAB
IRJET-  Review on Text Recognization of Product for Blind Person using MATLABIRJET-  Review on Text Recognization of Product for Blind Person using MATLAB
IRJET- Review on Text Recognization of Product for Blind Person using MATLAB
 
Development of smart cane for blind people
Development of smart cane for blind peopleDevelopment of smart cane for blind people
Development of smart cane for blind people
 
Passive Radio Frequency Exteroception in Robot-Assisted Shopping for the Blind
Passive Radio Frequency Exteroception in Robot-Assisted Shopping for the BlindPassive Radio Frequency Exteroception in Robot-Assisted Shopping for the Blind
Passive Radio Frequency Exteroception in Robot-Assisted Shopping for the Blind
 
Ch02 project selection (pp_tshare)
Ch02 project selection (pp_tshare)Ch02 project selection (pp_tshare)
Ch02 project selection (pp_tshare)
 
Usability testing for qualitative researchers
Usability testing for qualitative researchersUsability testing for qualitative researchers
Usability testing for qualitative researchers
 
Usability testing for qualitative researchers
Usability testing for qualitative researchersUsability testing for qualitative researchers
Usability testing for qualitative researchers
 
IRJET - Visual E-Commerce Application using Deep Learning
IRJET - Visual E-Commerce Application using Deep LearningIRJET - Visual E-Commerce Application using Deep Learning
IRJET - Visual E-Commerce Application using Deep Learning
 
IRJET - Expiry Date and Cost Tracking in Medicine for Visually Impaired
IRJET - Expiry Date and Cost Tracking in Medicine for Visually ImpairedIRJET - Expiry Date and Cost Tracking in Medicine for Visually Impaired
IRJET - Expiry Date and Cost Tracking in Medicine for Visually Impaired
 
Machine Vision On Embedded Platform
Machine Vision On Embedded Platform Machine Vision On Embedded Platform
Machine Vision On Embedded Platform
 
Machine vision Application
Machine vision ApplicationMachine vision Application
Machine vision Application
 
The existing and future role of RFID technology in Dairy Supply Chain from Fa...
The existing and future role of RFID technology in Dairy Supply Chain from Fa...The existing and future role of RFID technology in Dairy Supply Chain from Fa...
The existing and future role of RFID technology in Dairy Supply Chain from Fa...
 
IRJET- Oranges Sorting using Arduino Microcontroller
IRJET- Oranges Sorting using Arduino MicrocontrollerIRJET- Oranges Sorting using Arduino Microcontroller
IRJET- Oranges Sorting using Arduino Microcontroller
 

MobileHCI2010 p345-lee

  • 1. Usability Evaluation of Beep-To-The-Box Young Seok Lee, Santosh Basapur, Harry Zhang, Claudia Guerrero, Noel Massey Motorola Applied Research Center 1295 E. Algonquin Rd. Schaumburg IL, 60196, U.S.A. {younglee, sbasapur, harryzhang, claudiag, noel.massey}@motorola.com ABSTRACT Radio Frequency Identification (RFID) provides various opportunities to increase the productivity of retail business. In this paper, we describe a usability evaluation study for an RFID- based location tracking application, called Beep-To-The-Box (BTTB). The experiment was conducted in a simulated retail store to gain in-depth understanding of the usefulness and usability of the prototype in determining visual and audio user interface features. We describe the features of the BTTB, report the experimental results, and discuss insights gained to provide design recommendations for the final product design. Categories and Subject Descriptors H.5.2 [User Interfaces]: Evaluation/methodology, User-centered design. General Terms Design, Experimentation, Human Factors Keywords Usability, visual and auditory UI design, RFID, Indoor location tracking 1. INTRODUCTION Radio Frequency Identification (RFID) has recently become a viable replacement for the Universal Product Code (UPC) technology in many industries including retail business. Its fast growth and huge potential benefits have motivated a major move independently taken by large retailers such as Wal-Mart, Target, and Walgreens, to name a few [1]. In response, a number of major technology companies, including IBM, Intel, and Motorola, have made an investment to create various retail solutions using RFID that provide benefits such as improved inventory control, simplified business process, and reduced labor costs. An RFID system consists of two primary components – a tag and a reader. An RFID tag, similar to UPC, is usually attached to a tracking object; a reader is then used to track tagged objects. 
In retail space, RFID tags are useful as identification and proximity sensors: when a reader can access information on a tag, the reader can identify the tag (or the object bearing it) as well as infer that the tag is in close physical proximity. In an effort to utilize this potential of RFID, researchers at the Motorola Applied Research Center have developed a mobile application, called Beep-To-The-Box (BTTB), which infers the physical proximity of a tag and directs users to the location of the tagged item.

In this paper, we describe a usability evaluation study of BTTB, conducted to gain an in-depth understanding of the usefulness and usability of this prototype and to determine its visual and audio user interface features. The remainder of this paper presents the major features of BTTB and the details of the laboratory-based usability study, and discusses how the results provided the basis for practical design recommendations for the final product design.

2. Beep-To-The-Box Prototype
BTTB is a mobile application that reads a passive RFID tag and locates the item bearing the tag. A passive tag has no battery of its own; it uses the incoming radio waves broadcast by a reader to power its response. The BTTB prototype was built on a Motorola MC9090 hand-held mobile computer (see Figure 1) with an ultra-high frequency (UHF) RFID reader operating in the 902-928 MHz range.

Figure 1. BTTB prototype on MC9090 and UI

Copyright is held by the author/owner(s).
MobileHCI'10, September 7-10, 2010, Lisbon, Portugal.
ACM 978-1-60558-835-3/10/09.

After hardware construction, a series of tests was conducted to measure RFID tag reading performance as a function of antenna orientation, RFID tag orientation, transmission power, frequency, and distance. Additional tests checked the effects of height, angle, and material as well. Next, a software team developed an RFID range-finding algorithm that determines the distance from an RFID reader to a target tag. The team decided that the distance measurement need not be absolute nor include
any directionality, because determining an absolute distance would require calibrating the reader for many different environments, materials, and tag/antenna orientations. Therefore, the output of the algorithm was the real-time change in distance to the target, measured using the Received Signal Strength Indicator (RSSI).

In the meantime, the team began designing user interfaces and decided to experiment with a "Geiger counter" metaphor, in which the number of sound pulses, together with their tempo, indicates how close the reader is to the target. If the user carrying the reader is far from the target, few, widely spaced sound pulses are emitted; if the user is close to the target, many, closely spaced sound pulses are generated. In addition, we provided a visual indicator of signal strength using a bar-graph type meter (see Figure 1).

The prototype allowed users to select a sensitivity setting that adjusts the tradeoff between response time (the time required to reach the steady-state response) and accuracy (the steady-state variation) of reading performance. For example, if users selected the high sensitivity setting, the prototype responded to the tag with a shorter response time but larger oscillations, which lowered the accuracy compared to the normal sensitivity setting.

3. Usability Evaluation
3.1 Objectives
The objective of this study was to provide a formative usability evaluation of the BTTB prototype, with the specific aims of assessing 1) the usefulness and usability of the visual and audio alerts, 2) user preferences between the normal and medium sensitivity settings, 3) the perceived usefulness of the prototype in the retail environment, and 4) additional features to improve the user experience.
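As context for the settings compared in this study, the Geiger counter mapping and the sensitivity tradeoff described in Section 2 can be sketched roughly as follows. This is a hypothetical illustration, not the BTTB implementation: the function names, the interpretation of "sensitivity" as an exponential smoothing factor, and all timing constants are our assumptions.

```python
# Hypothetical sketch of the "Geiger counter" mapping and sensitivity
# tradeoff described above. Names and constants are illustrative
# assumptions, not the BTTB prototype's code.

def smooth_rssi(samples, alpha):
    """Exponential moving average over raw RSSI samples.

    A larger alpha reacts faster (the more responsive 'medium/high
    sensitivity' behavior) but passes through more oscillation; a smaller
    alpha is steadier but slower to settle, mirroring the normal
    setting's accuracy-vs-latency tradeoff."""
    level = samples[0]
    for s in samples[1:]:
        level = alpha * s + (1 - alpha) * level
    return level

def beep_interval(proximity, min_ms=80, max_ms=1200):
    """Map proximity in [0, 1] (1 = on top of the tag) to the gap between
    sound pulses: far -> few, widely spaced beeps; close -> many,
    closely spaced beeps."""
    proximity = max(0.0, min(1.0, proximity))
    return max_ms - proximity * (max_ms - min_ms)
```

Under this sketch, a reader far from the tag (proximity 0) would beep every 1200 ms, and a reader on top of the tag (proximity 1) every 80 ms, with the smoothing factor controlling how quickly the proximity estimate, and hence the tempo, tracks the raw signal.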
As explained earlier, the normal setting was tuned to deliver high accuracy but with some latency, whereas the medium setting was tuned to respond more quickly to the same signal strength, which sometimes resulted in intense auditory feedback. We aimed to assess the prototype across various contexts of use in a retail store environment and to provide design recommendations for improving the prototype as part of the iterative development process.

3.2 Method
3.2.1 Participants and Scenarios
Ten participants (two males, eight females) were recruited from the Motorola Schaumburg campus, IL, U.S.A. Their backgrounds varied from sales account management to industrial design. The average age was 38.8 years (min = 25, max = 55). Five participants reported previous experience with RFID. The study was conducted in a simulated retail store at the Motorola Innovation Center with five scenarios representing various use cases of BTTB, such as the sales floor and the inventory backroom (see Table 1 and Figure 2).

Table 1. Description of the five scenarios

Scenario   Description (Task: find a target item in)
1          20 clothing items placed on hangers (RFID tags on the back of the items)
2          30 folded shirts piled on a table (RFID tags on the inner side of the shirts)
3          50 boxes/cartons placed on pallets (RFID tags on the side of the boxes)
4          25 unfolded clothes piled in a box (RFID tags on top of the price tags)
5          anywhere in the entire store (RFID tags on the side or back of the item)

Figure 2. Context of use (Scenario 1)

3.2.2 Procedure
Each participant was given an overview of the experiment and asked to fill out a brief background questionnaire on their previous experience with retail stores, scanners, and RFID readers. Next, we provided a brief introduction to the prototype and asked participants to familiarize themselves with it by performing trial runs with sample RFID tags. In the experimental session, they were asked to complete the tasks (e.g., look for the item with RFID tag #000000000013) in the five scenarios.

The order of the scenarios and the sensitivity settings (normal vs. medium) was randomized to reduce carry-over effects, so each participant performed a total of 10 tasks during the experimental session (two settings for each of the five scenarios). Participants were asked to think aloud, verbalizing what they saw, heard, and thought during task performance. After completing each task, participants rated the ease of task completion on an 8-point Likert-type scale (Q: "How easy was it to locate the item using this device in this setting?"), and follow-up questions were asked as necessary. After completing all tasks, participants rated the overall usefulness, ease of use, learnability, and satisfaction of the prototype on the same 8-point Likert-type scale, and then summarized their overall experience and suggestions for improving the prototype. Each experiment lasted about one hour, and the entire session was recorded on video for critical incident and content analyses [2].

4. RESULTS
4.1 Auditory and Visual Cues
4.1.1 Auditory feedback is a primary cue
All participants relied mostly on the auditory feedback during task completion. They mentioned that searching for an item is a
"visual" task, so they felt that using audio feedback was more natural and easier. Some participants did not use the visual feedback at all; they reported difficulty in visually searching for the tag while simultaneously keeping an eye on the screen for visual feedback. Several participants also remarked that it was difficult to look at the display when the item was placed very high or very low (Scenarios 2, 3, and 4). An exemplar remark:

"I think audio works really well. The visual, I did not use it almost at all because it is hard to look at the box and look at the indicator and come back and forth. That is very tedious. With audio, even having eyes closed, I can find a box easily by sweeping it over the pallet."

4.1.2 Visual feedback is a secondary cue
Most participants used the visual feedback as secondary information. Participants used the audio (beeps) to sense the general area of the target, but once they moved into the vicinity of the target item, they started using the visual feedback for two reasons: 1) to detect subtle changes in signal strength, and 2) to confirm the accuracy of the audio feedback. Participants commented that it was sometimes difficult to distinguish subtle signal changes in the sound, so they referred to the display to see whether there was any subtle change (e.g., between four and five bars of signal). Below is a typical remark:

"I look at display to see if there is any difference. I used it to double check as secondary information."

4.1.3 Final confirmatory visual cue is necessary
Most participants mentioned that a clear confirmatory visual cue would help them complete the search task. Since most participants used the visual cue to confirm that they had found an item, they wanted a clear cue on the display to provide a "you got it" confirmation. Several participants expected the bar to top out when the signal was highest, but this was not the case in the prototype.
Even when the device was directly on top of the tag, the bar sometimes did not top out, which confused several users. They suggested that when the item is extremely close, changing the color of the display or flashing a green light might be a helpful "yes, this is the one" confirmation. Below is a typical remark:

"If I find one item, it should light up like showing an indicator. Yes this is the one for sure."

4.1.4 Directional indication would be helpful
The directionality of the RFID tag signal caused some problems in localizing items in the backroom setting, where boxes were piled up in multiple layers. At the initial scan, several participants heard fast beeping when they aimed the device at the target item from a distance, which misled them into searching boxes in the front layer even though the target item was in the second or third layer. Several participants wanted an on-screen directional indication, similar to a compass.

4.1.5 Other improvements
Some participants mentioned that it was also difficult to see and differentiate subtle changes in the visual cue (e.g., six bars from five), so a color code or gradient would increase the readability of the visual cue and thereby speed up the search task. A participant with relatively low vision suggested that an adjustable font size for the RFID tag number would help with quick confirmation of the tag number or item name when the prototype detects a target. Several participants also reported that the fast beeping became somewhat irritating after prolonged exposure, and others commented that the beeping might interrupt customers' shopping experience in retail stores. Participants suggested easy access to a mute function, volume control for the beeps, and headset support so that customers would not have to hear the beeps.
In addition, participants suggested that varying the tone or timbre of the beeps would help convey gradual changes in the sound.

4.2 Setting Preferences (Normal vs. Medium)
Participants' preferences between the two sensitivity settings were distributed almost evenly, and the mean 'ease of use' ratings for the two settings were nearly equal (normal = 5.87 vs. medium = 5.82). Two participants did not notice any difference between the two settings; those who did explained the reasons for their preference. Participants who preferred the normal setting remarked that it was more accurate yet sensitive enough to convey subtle changes in signal strength. In particular, when they stood far from the target, the normal setting provided a clear "from no beeps to beeps" transition that helped them decide where to focus their attention in the general area. Also, when they were very close to the target item, they could sense subtle tempo changes in the auditory alerts more easily than with the medium setting. Other participants, however, preferred the medium setting because it beeped faster and, according to them, louder (in fact, both settings beeped at the same loudness level), which helped them feel aware of, and confident about, getting close to the target item. However, when they were very close to the target item, the beeping was too intense for some users to differentiate the subtleties of higher and lower signals. They also found that the audio in the medium setting was not synchronized with the visual cue, which caused confusion and, consequently, mistrust of the audio.

4.3 Perceived Usefulness
All participants agreed that the prototype would be very useful for a broad search in a large area, for instance, backrooms full of boxes with small labels, shoe stores, pharmacies, or stores with high-end items.
They also commented, however, that it would not be useful for searching for a small item within a small area, such as finding a screw in a bin, because the device would provide almost the same reading anywhere above the bin. This was also evident in the quantitative data (see Figure 3). Mean ratings of 'ease of use' were calculated for the five scenarios, and a repeated-measures analysis revealed a significant difference among them (F(4, 36) = 6.10, p < 0.01). Pair-wise comparisons revealed that the device was most useful in Scenario 5 (locating an item in the store) and least useful in Scenario 3 (locating a box on a pallet) and Scenario 4 (locating a garment in a box) (Figure 3). There was no significant scenario x setting interaction (p = 0.36).
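For readers who wish to reproduce this style of analysis, the F statistic of a one-way repeated-measures ANOVA can be computed as sketched below. The data in the accompanying example are small made-up ratings for illustration only, not the study's data.

```python
def rm_anova(ratings):
    """One-way repeated-measures ANOVA (illustrative sketch).

    ratings[i][j] = rating given by subject i under condition j
    (here, conditions would be the five scenarios).
    Returns (F, df_conditions, df_error)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    cond_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    subj_means = [sum(row) / k for row in ratings]
    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    # Residual variation after removing condition and subject effects.
    ss_err = ss_total - ss_cond - ss_subj
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    return (ss_cond / df_cond) / (ss_err / df_err), df_cond, df_err
```

With 10 participants and 5 scenarios, the degrees of freedom come out to (k - 1) = 4 and (n - 1)(k - 1) = 36, matching the F(4, 36) form reported above.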
Figure 3. Mean ratings of 'ease of use' across the five scenarios: 1 (shirt on rack), 2 (shirt on table), 3 (box on pallet), 4 (shirt in box), 5 (item in store)

5. DISCUSSION
Based on the experimental data, we confirmed the potential usefulness of the BTTB prototype in retail environments. We found that BTTB would be very useful for a broad search in a large area, such as finding a pair of jeans in an apparel shop. However, due to the propagation characteristics of the radio signal, its usefulness decreases when searching for an item in a small area (e.g., finding a screw in a bin) or in a place where items are piled up in multiple layers (e.g., inventory rooms with boxes in multiple rows and columns).

This study also revealed that participants used the auditory feedback as the primary cue for proximity sensing, while the visual feedback served as a confirmatory cue once users were within close proximity of the target item. Because the task was a visual search, involving an active scan of the visual environment for a particular object (the target) among other objects (distracters), it makes sense that the auditory modality offered a more natural channel for receiving directional information in the general area. The Geiger counter metaphor also seemed to be a proper mapping for this type of task, informing users of the real-time change in distance to the target while reducing demands on their visual attention. However, when users were within close proximity of the target, they encountered a low-resolution problem (they could not distinguish the tempo difference between two sounds and therefore could not tell whether the value was rising or falling) and tended to rely on the visual feedback, which offered a much easier reading.
As described above, the auditory and visual information provide unique but complementary advantages, and the combination of the two modalities is a powerful tool in the target search task. However, integrating the two modalities in a user interface requires an in-depth understanding of users' information needs over the temporal course of task execution, so we provide some insights and design recommendations for BTTB based on the experimental results and previous human factors guidelines for display design. We believe these suggestions are applicable to designing UIs for similar target-search devices.

First, the range of beeps should be large enough to deliver high resolution. Many auditory parameters are not suitable for high-resolution display of quantitative information [3]; using pulse rapidity alone, only a few different values can be presented unambiguously. As found with the medium setting in this study, audio alerts tuned to beep rapidly even at the lowest detectable signal levels made it difficult for users to detect subtle changes in the already fast beeping. The range of beeps should therefore be extended to allow a discernible indication of subtle differences. In addition, previous literature suggests that additional or redundant cues may help differentiate subtle changes in the sound [3]; pitch, timbre, or intensity may be used as supplemental cues. The use of timbre may also reduce the irritation caused by the fast beeping observed in this study. Furthermore, when a headset is used with the device, spatial (directional) information can be provided using stereo sound [4].

Second, we recommend providing a confirmatory visual cue on the display to support users' final confirmation. Participants tended to confirm their finding with the visual cue but were confused because the bar never reached the top.
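One plausible way to implement such a confirmatory cue, while also keeping the two modalities from disagreeing (the synchronization problem noted in Section 4.2), is to derive the beep, the bar graph, and the confirmation from a single shared signal value. The sketch below is a hypothetical illustration under our own assumptions: the bar count, frequencies, timing constants, and the 90% confirmation threshold are not values from the prototype.

```python
# Hypothetical sketch: drive audio and visual feedback from one shared
# signal value so the two channels can never contradict each other.
# All names and thresholds are illustrative assumptions, not BTTB code.

NUM_BARS = 8
CONFIRM_LEVEL = 0.9  # assumed: treat >= 90% strength as "you got it"

def render_feedback(strength):
    """Map a signal strength in [0, 1] to synchronized cues."""
    s = max(0.0, min(1.0, strength))
    return {
        "bars": min(NUM_BARS, int(s * NUM_BARS) + (1 if s > 0 else 0)),
        "beep_interval_ms": int(1200 - s * 1120),  # same value drives the tempo
        "pitch_hz": int(440 + s * 440),            # redundant pitch cue (cf. [3])
        "confirmed": s >= CONFIRM_LEVEL,           # softened top-out confirmation
    }
```

Because the bar count, beep tempo, pitch, and confirmation flag are all computed from the same smoothed strength value, the audio and visual cues rise and fall together, and the confirmation fires slightly before the raw signal peaks, approximating the "softened" top-out behavior recommended here.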
We recommend softening the sensitivity at high signal strength levels (i.e., when the reader is close to the target) so that the visual bar reaches the top at a slightly weaker signal strength, offering a "you got it" confirmation. Also, consider flashing, lighting up, or changing the color of the bar to increase the visibility of this final confirmation. The tactile modality could duplicate the confirmatory cue via a vibration actuator inside the handle. The readability of the visual cues should also be improved by using color or gradient to clarify subtle differences (e.g., going from five bars to six).

Lastly, the audio and visual feedback must be synchronized to indicate the same intensity information. In this study, we found that audio feedback that was out of line with the visual cue caused confusion and, consequently, mistrust of one of the two feedback channels. The information in the two modalities should be synchronized because users attend to both sources to comprehend dynamically presented information during task completion.

6. ACKNOWLEDGMENTS
We thank Tim Collins, Swee Mok, Tom Mathew, Julius Gyorfi, and Tom Babin for developing the prototype and for participating in the evaluation study design sessions.

7. REFERENCES
[1] Weier, M. 2008. Walgreens deploying RFID in distribution centers. InformationWeek. http://www.informationweek.com/news/mobility/RFID/showArticle.jhtml?articleID=210601894
[2] Stanton, N., Salmon, P., Walker, G., Baber, C., and Jenkins, D. 2006. Human Factors Methods: A Practical Guide for Engineering and Design. Ashgate.
[3] Brewster, S. 2002. Nonspeech auditory output. In The Human-Computer Interaction Handbook. L. Erlbaum Associates Inc.
[4] Brewster, S.A., Wright, P.C., and Edwards, A.D.N. 1992. A detailed investigation into the effectiveness of earcons. In Proceedings of the First International Conference on Auditory Display, pp. 471-4.