Ergonomics-for-One in a Robotic Shopping Cart for the Blind


Assessment and design frameworks for human-robot teams
attempt to maximize generality by covering a broad range of
potential applications. In this paper, we argue that, in assistive
robotics, the other side of generality is limited applicability: it is
oftentimes more feasible to custom-design and evolve an
application that alleviates a specific disability than to spend
resources on figuring out how to customize an existing generic
framework. We present a case study that shows how we used a
pure bottom-up learn-through-deployment approach inspired by
the principles of ergonomics-for-one to design, deploy and
iteratively re-design a proof-of-concept robotic shopping cart for
the blind.

Published in: Technology, Health & Medicine


Vladimir A. Kulyukin
Computer Science Assistive Technology Laboratory
Department of Computer Science
Utah State University

Chaitanya Gharpure
Computer Science Assistive Technology Laboratory
Department of Computer Science
Utah State University
cpg@cc.usu.edu

Categories and Subject Descriptors
H.1.2 [Models and Principles]: User/Machine Systems – human factors.

General Terms
Performance, Design, Experimentation, Human Factors.

Keywords
Assistive technology, navigation and wayfinding for the blind, assistive robotics, ergonomics-for-one.

1. INTRODUCTION

Current demographic trends in the U.S. signify a demographic shift from a population where most people are relatively young to a population where most people are relatively old. In 2000, U.S. residents aged 65 and older constituted approximately 12 percent of the population. It is projected that by 2030 people aged 65 and older will make up 22 percent of the U.S. population [11]. In essence, older adults will make up an increasingly larger percent of the population.

The primary concern for aging adults is the decline in their sensory-motor abilities. Surveys show that a great number of U.S. residents would like to maintain their independent status in their homes and communities as long as possible. This situation brings a unique challenge and opportunity to assistive robotics: is it possible to develop robotic devices that will enable older and disabled individuals to maintain their independence and thereby reduce the cost of institutionalized medical care?

Vision is a sensory modality that deteriorates with age. As of now, there are 11.4 million visually impaired individuals living in the U.S. [8]. Grocery shopping is an activity that presents a barrier to independence for many visually impaired people who either do not go grocery shopping at all or rely on sighted guides, e.g., friends, spouses, and partners. Traditional navigation aids, such as guide dogs and white canes, are not adequate in such dynamic and complex environments as modern supermarkets. These aids cannot help their users with macro-navigation, which requires topological knowledge of the environment. Nor can they assist with carrying useful payloads.

In summer 2004, the Computer Science Assistive Technology Laboratory (CSATL) of the Department of Computer Science (CS) of Utah State University (USU) launched a project whose objective is to build a robotic shopping cart for the visually impaired. In our previous publications, we examined several technical aspects of robot-assisted navigation for the blind, such as RFID-based localization, greedy free space selection, and topological knowledge representation [6, 7]. In this paper, we focus on how the ergonomic aspects of the system have evolved through fitting trials in two dynamic and complex environments.

The paper is organized as follows. In Section 2, we review relevant research on human-robot interaction (HRI). In Section 3, we discuss the basic principles of ergonomics-for-one and present an ergonomics-for-one analysis to identify the key elements of the performance gap between the blind individual and the task of independent grocery shopping. In Section 4, we present our initial design aimed at bridging several elements of the performance gap. In Section 5, we present our navigation trials with a small sample of visually impaired participants. We describe our experiment design, analyze the collected data, and present the participants' feedback. In Section 6, we return to bridging the performance gap. We describe how we deployed the robotic shopping cart in a supermarket and discuss what ergonomic modifications we made in the system after several fitting trials in that environment. In Section 7, we focus on two major challenges for the future. In Section 8, we give our conclusions.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. HRI'04, March 2–3, 2006, Salt Lake City, Utah, USA. Copyright 2006 ACM 1-58113-000-0/00/0004…$5.00.

2. RELATED WORK
In recent years, many researchers have asked the question of what it means to design a human-robot team and to measure its performance from the standpoint of human-robot interaction (HRI). Fong et al. [1] identify common metrics for task-oriented HRI through a thorough analysis of existing HRI applications. Several task-specific metrics are proposed and suggested for standardization. It is claimed that the metrics are applicable at any point of the HRI spectrum, starting at pure teleoperation and ending with full autonomy. Howard [5] proposes a systematic approach for assessing the performance of a human-robot team. The approach takes into account the capabilities of both human and robotic agents and integrates the effect of cognitive stress during continuous operation. Goodrich and Olsen [4] propose several principles of efficient HRI based on the lessons from evaluating neglect tolerance and interface efficiency. Each principle, e.g., manipulate the world instead of the robot, is motivated by relevant factors from cognitive information processing. Olsen and Goodrich [10] propose several HRI metrics for leveraging human attention to develop HRI interfaces that enhance the task effectiveness of the human-robot team. Scerri et al. [12] discuss criteria to determine how to change the autonomy level of the robot to enhance the performance of the human-robot team on the basis of decision costs. Yanco and Drury [14] propose qualitative taxonomies and qualitative and quantitative metrics for human-robot performance evaluation. Fong et al. [2] offer a detailed survey of socially interactive robots and a taxonomy of design methods and system components.

Given these approaches, it is natural to ask whether any of them can be readily applied to assess the performance or guide the design of a robotic shopping cart for the blind. We do not believe that, at this point in time, this question has a positive answer. There are several reasons to justify our belief. First, these approaches assume that the operator is capable of maintaining visual contact with the robot, either continuously, when the operator is collocated in the same task space, or part of the time, when the operator is remote and interacts with the robot through an interface intermediary. Second, existing assessment frameworks focus on interfaces, autonomy, and task efficiency and do not take into account the ergonomic interaction between the human and the robot. Third, the scope of many frameworks is simply too broad. The other side of generality is inapplicability: it is oftentimes more feasible to custom-design a new approach than to spend resources on figuring out how to customize an existing generic framework. It was this realization that prompted us to look for inspiration outside of the traditional HRI realms.

3. ERGONOMICS-FOR-ONE

We discovered the field of ergonomics-for-one during a literature search for conceptual frameworks to help guide the design of the robotic shopping cart for the blind and assess its performance with human participants. The term ergonomics-for-one was first coined by McQuistion in 1993 [9]. In brief, ergonomics-for-one is the science of fitting the task to a particular individual who wants to repeatedly accomplish the task in a given environment.

Although the use of the term is recent, the ideas underlying it are not novel: occupational therapists have been devising solutions to find better fits between individuals and their environments for decades [13]. In ergonomics-for-one, a solution to a specific fitting task is referred to as an accommodation system that consists of five components: 1) essential task functions; 2) equipment used to accomplish the task; 3) inputs and outputs; 4) environment in which the task is accomplished; and 5) individual with a disability who desires to accomplish the task.

A performance gap is identified by comparing the essential task functions to the disabled individual's capabilities. The assistive device is designed to bridge the performance gap. It should be noted that ergonomics-for-one does not define the order in which the components must be considered, so long as eventually all of them are taken into account.

At this point one may ask the question of what exactly ergonomics-for-one brings to the table that HRI does not. The answer is a different research methodology. First, ergonomics-for-one does not assume the existence of a common framework that can be used to design and assess every assistive device imaginable. To be sure, there are standard procedures that evaluate the extent of individual disabilities, e.g., standard vision or hearing tests. But these tests are used only as inputs to the design process. Second, ergonomics-for-one is inherently bottom-up in that it places a great deal of emphasis on fitting trials and learning through deployment [13]. The objective of such trials, also known as initial usability tests or walk-throughs, is to ascertain the user's comfort, ease of use, preference, and other psychosocial elements.

We used a group of five visually impaired individuals from the local visually impaired community in Logan, Utah, to help us analyze the components of an accommodation system that could help them do grocery shopping independently. The youngest individual was 13, the oldest was 47. Two participants were white cane users. The other three participants used both guide dogs and white canes. We met with the individuals in an informal setting and asked them about what it would take them to shop independently. To minimize peer pressure, we met with each individual.

Essential task functions: The interviews helped us identify five essential task functions: 1) getting to a supermarket; 2) finding the needed grocery items; 3) getting through a cash register; 4) leaving the store; and 5) getting home. None of the individuals had any problems getting to a supermarket and getting home from a supermarket. Logan has a free bus system with a network of bus stops all over the city and several suburbs. All places where one can buy groceries have bus stops close by.

Function 2 was refined into three sub-functions: 1) navigating to a shelf section with a needed grocery item; 2) finding the needed item on the shelf; and 3) placing the item in a shopping basket.
Function 3 was refined into four sub-functions: 1) navigating to a cash register; 2) placing the items from the basket on the belt; 3) paying for the items; and 4) placing the bagged items back into the shopping basket. Function 4 was refined into three sub-functions: 1) getting to the exit; 2) leaving the basket in a designated place; and 3) exiting the store.

Equipment: None of the participants did any grocery shopping on their own. They either did not do any grocery shopping or used sighted guides: parents, siblings, or partners. The only equipment used by the participants were white canes, guide dogs, and baskets. They could not use shopping carts, because they could not simultaneously handle guide dogs or white canes and push the carts.

Inputs and outputs: When asked how they would prefer to interact with an assistive grocery shopping device, if they had one, the participants suggested speech and keypad as input options and speech and dynamic Braille as output options.

Environment: The target environment was a typical supermarket. There are several features that make this environment particularly challenging. First, there is always some shopper traffic. On certain days, e.g., Saturday, and during certain hours, e.g., between 6 and 8 pm, the shopper traffic is at its highest. Second, there are indigenous processes already in place, e.g., shelf re-stocking, cleaning, product scanning, etc., that cannot be disrupted. Third, the products are periodically re-shuffled and re-arranged, and free open spaces are occupied with temporary displays and stands.

Individual: Two participants were completely blind. Three participants had light perception, i.e., they were able to distinguish between light and dark. All participants were ambulatory and did not have any serious speech impediments, hearing problems, or cognitive disabilities.

After the interviews, we identified the performance gap that had to be addressed by the accommodation system. First, using a guide dog and/or a white cane with a shopping cart is not feasible. Neither guide dogs nor white canes would help avoid front obstacles if the blind shopper has to push the cart in addition to handling a guide dog or using a white cane. Of course, it is possible to use a basket, but the shopper would then be restricted to buying a small number of items.

Second, since the shopper cannot independently navigate, she needs to communicate her intentions to a sighted guide. This would ordinarily be done in natural language if the guide is human.

Third, the visually impaired participant needs assistance with navigating to shelf sections with specific grocery items. In an environment where the end points of routes remain static, many guide dog handlers and cane users can learn routes after several trials. However, this assumption does not hold in supermarkets due to constant re-arrangements and re-shufflings of products.

Fourth, even if it is assumed that the blind shopper can find her way to the correct shelf section, she still needs to pick the right item. For example, suppose that the blind shopper wants to buy a bag of Lay's Classic and finds her way to the correct shelf section with Lay's potato chips. There is always a chance that the shopper will pick a wrong bag, as Lay's Classic bags are typically placed next to other Lay's items or other potato chip brands. This is even more of a problem with smaller items, like small bags of sunflower seeds. Assuming that the individual does not have manual dexterity problems, once the item is found, the individual can place it into a shopping basket.

Fifth, the same navigational challenges apply to the function of getting through a cash register. Additional challenges are knowing when it is time to start placing the items on the conveyor belt, paying for the items, and putting the bagged items back into the baskets.

Sixth, when the shopper is ready to leave the store, she again has to navigate to the exit, thus confronting the navigational challenges identified above, and place the basket in the properly designated place.

4. BRIDGING THE GAP: PART I

4.1 On to a Robotic Shopping Cart

After considering the first performance gap component, independent use of a shopping cart, we concluded that the navigation of the shopping cart had to be automated. Effectively, the robotic shopping cart would act as a supermarket guide for blind shoppers. This is by no means a novel idea, as the field of AI robotics had built robotic guides before [3]. None of the guides, however, were specifically built for blind shoppers in supermarkets. As far as we could see, we had two options: building a new robotic base with a shopping cart mounted on top of it or mounting a shopping cart on top of an existing robotic base. We chose the second option, because the first option, after a preliminary cost analysis, looked prohibitively expensive for a research prototype. In addition, we already had experience mounting equipment on our Pioneer 2DX robotic base from the ActivMedia Corporation when we experimented with our robot-assisted navigation for the blind in indoor environments [6, 7].

Figure 1: RoboCart in Lee's MarketPlace.

Thus, we built a polyvinyl chloride (PVC) pipe structure, securely mounted it on top of the Pioneer 2DX robotic base, and then placed a large shopping basket into that structure. The resulting design, which we called RoboCart, is shown in Figure 1. As one can easily see from Figure 2, the RoboCart design is a modification of RG, our indoor robotic guide for the blind that we built in 2003-2004 on top of another Pioneer 2DX base. It should be noted that this is a proof-of-concept design. The back directional wheel of the base is small, which results in the inherent imbalance of this design. While we have not observed any accidents in which RoboCart tipped over, the future design will be modified to have a four-wheel base so that the device will never tip over and injure the blind shopper.
Figure 2: RG, an indoor robotic guide for the blind.

4.2 How Do We Communicate?

Upon entering the supermarket, the shopper needs to communicate her wishes to RoboCart. The input options that we considered were automatic speech recognition (ASR) and a keypad. When using ASR, the blind shopper would wear a wireless microphone coupled to an over-the-ear headphone and communicate her intentions to the robot through speech. We will not go into details here on why we ruled out speech as an input option, because we have described our reasons in detail in our previous publications [6]. In brief, our ASR experiments, both in noisy and noise-free environments, had recognition rates of below 50 percent even though all of our participants were native speakers of American English. Our decision to rule out ASR as an input option should not be construed as a general argument against ASR as an HRI mode. Rather, we concluded that, given the current state of the art in commercial ASR and the constraints of our problem, we should explore the keypad first.

The input option that we chose was a small 10-key Belkin numeric keypad. The layout of keys on the keypad is the same as the layout of keys on a cell phone. Since many visually impaired people use cell phones, our thinking was that the learning curve would not be steep. In addition, the number 5 key on Belkin keypads has a small plastic protrusion that the visually impaired can sense through touch. Once the number 5 key is found, it is easy to find the other keys.

When compared to ASR, the keypad does reduce input ambiguity. However, even with the keypad the proverbial problem of shared vocabulary does not go away. The user still must know what to type into the robot to make the robot do what the user wants. To overcome this problem, we decided to create a Braille directory. The directory was to be realized as a Braille sheet with instructions that map each destination to a short sequence of numbers. The semantics of each line was to be as follows: if you want to go to destination X, please type this numerical sequence into the keypad.

The next element of the communication gap is output. The options that we considered were synthetic speech and dynamic Braille displays. As we investigated dynamic Braille displays, we found out that they were expensive: the cheapest option we could find was approximately 5K USD. Originally, the cost was the main reason why we decided on synthetic speech. As we discuss below, there is another reason why Braille may not be a viable option for some users.

4.3 How Do We Navigate?

When we started thinking about bridging the navigational component of the performance gap, we realized that we had little knowledge about what aspects of navigation might be important to the blind navigator. We also did not know if our communication choices described in the previous section would be ergonomically acceptable to blind individuals. Finally, we wanted to find out whether the presence or absence of the human navigator behind the robot affects the robot's navigation. To answer these questions, we decided to conduct a series of fitting trials.

We had to find a suitable environment for the trials. We had started negotiations with Lee's MarketPlace, a supermarket in Logan, Utah, about the possibility of testing RoboCart in their supermarket. But the negotiations were still in progress. We ruled out tests in our CS Department, because we had already tested our robotic guide in the CS Department rather extensively and had achieved satisfactory results [6, 7].

We chose to conduct fitting trials at the USU Center for Persons with Disabilities (CPD). The CPD occupies an entire building on the North USU Campus. The building has an area of 40,000 square feet. It has numerous offices, classrooms, laboratories, lounges, and bathrooms. Another challenging aspect of this environment that makes it similar to a supermarket is that numerous activities occur there during its working hours. Thus, other people going about their business, i.e., human traffic, are an integral part of the environment.

5. FITTING TRIALS

5.1 Experiment Design

We used the paired differences strategy to design our pilot experiments. In a paired difference experiment, one is interested in finding the mean difference between two methods of conducting some activity, which, in our case, is navigation. A data point is obtained by numerically measuring the performances of two participants, say X and Y, from two different samples doing a designated activity and computing the difference between the two measurements. When a sample of differences is obtained, one can test two hypotheses: the null hypothesis, H0: µD = 0, against one of the three alternative hypotheses, Ha: µD > 0, Ha: µD < 0, and Ha: µD ≠ 0, where µD is the mean difference. Essentially, H0 suggests that there is no difference in performance, whereas the Ha's suggest that there may be a difference. The test statistic is the one-sample t-statistic t = x̄D / (sD / √nD), where x̄D is the sample mean difference, sD is the standard deviation of the differences, and nD is the number of differences.

We selected a total of 9 routes in the environment. Each route was more than 40 meters in length and had 3 to 5 turns. In our case, our first sample consisted of the robot. Since we focused on navigation and guidance, we used the robotic guide shown in Figure 2.
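The test statistic above can be made concrete with a short sketch. The times below are invented for illustration only; they are not the study's measurements:

```python
import math

def paired_t(differences):
    """One-sample t statistic on a sample of paired differences:
    t = mean(D) / (s_D / sqrt(n)), using the sample standard deviation."""
    n = len(differences)
    mean = sum(differences) / n
    s = math.sqrt(sum((d - mean) ** 2 for d in differences) / (n - 1))
    return mean / (s / math.sqrt(n))

# Invented time-to-completion averages for one route (not the study's data):
robot_avg = 65.8                                    # robot navigating on its own
participant_avgs = [60.2, 61.2, 63.3, 61.9, 66.0]   # participants 1..5
D = [robot_avg - p for p in participant_avgs]       # robot minus participant

t = paired_t(D)
# Two-sided test at alpha = 0.05 with n - 1 = 4 degrees of freedom:
reject_H0 = abs(t) > 2.776                          # t_{0.025, 4} = 2.776
```

With D defined as the robot's time minus a participant's time, a positive t exceeding 2.776 (as it does here) indicates that the robot was slower on its own, matching the sign convention used when interpreting Table 1.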
Route:        10     11     12     13     14     15     16     17     18
t-statistic:  3.64   3.60  -1.96   0.90  -4.07  -4.14   3.24   3.91   1.90

Table 1: T-statistics at α = 0.05 and df = 4.

Route   Part 0            Part 1            Part 2            Part 3            Part 4            Part 5
10      [65.79, 65.84]    [59.87, 60.89]    [62.90, 63.45]    [61.76, 65.08]    [61.94, 63.75]    [65.49, 67.14]
11      [70.83, 72.27]    [55.93, 57.33]    [56.91, 59.27]    [55.45, 59.16]    [56.42, 59.22]    [72.29, 73.67]
12      [70.94, 72.25]    [72.56, 73.68]    [75.78, 79.43]    [71.79, 98.53]    [73.96, 75.47]    [69.96, 72.46]
13      [87.88, 89.93]    [87.06, 87.93]    [89.45, 91.03]    [86.29, 88.78]    [86.29, 88.55]    [87.70, 90.17]
14      [55.76, 56.29]    [82.21, 83.71]    [84.60, 86.46]    [83.22, 84.89]    [83.61, 84.86]    [55.81, 57.12]
15      [57.35, 60.30]    [79.23, 80.15]    [78.85, 81.29]    [79.88, 81.85]    [85.91, 88.43]    [56.11, 64.49]
16      [120.74, 123.34]  [93.11, 97.67]    [95.54, 102.48]   [90.91, 93.09]    [98.70, 101.48]   [122.87, 129.00]
17      [124.72, 123.34]  [83.93, 103.48]   [87.10, 103.16]   [91.17, 94.08]    [90.25, 92.14]    [125.10, 126.82]
18      [129.11, 130.61]  [130.89, 139.79]  [97.35, 100.58]   [84.46, 86.63]    [88.14, 92.67]    [130.89, 139.77]

Table 2: 95% confidence intervals (participant 0 is the robot).

Our second sample consisted of five visually impaired participants. To obtain the measurements, we ran the robot five times on each of the designated routes and recorded the time-to-completion, i.e., the amount of time it took the robot to complete the route. For each route, the average time-to-completion was computed from the five runs.

We then had each participant use the robot to navigate the same routes. The robot would inform the participant through synthetic speech about its present location. We told each participant the keypad codes for all destinations. The participant would type in the destination code through the keypad attached to a pole on the back of the robot. Each route was navigated five times and the time-to-completion measurements were taken for each participant. For each participant we computed the average time-to-completion. The sample of differences that we used to test the hypothesis was obtained by computing the difference between the robot's average times-to-completion and the participants' times-to-completion.

We chose to test the third alternative hypothesis, Ha: µD ≠ 0, at α = 0.05 as the level of significance. The rejection region for this hypothesis is |t| > tα/2 = t0.025 = 2.776, with 4 degrees of freedom.

Table 1 contains the sample t-statistics for each of the 9 routes, numbered 10 through 18. These statistics should not be viewed as definitive. The paired differences design requires that the sample of differences be random. This assumption may not be satisfied in our case, because we did not choose the five individuals randomly. Their names were given to us by referral.

5.2 Results

The results in Table 1 tell us that on routes 10, 11, 14, 15, 16, and 17, H0 is rejected, because the absolute value of the t-statistic is larger than 2.776. In other words, on these routes there appears to be a significant difference between the robot navigating on its own and the robot navigating with a visually impaired human. On the other routes, i.e., 12, 13, and 18, there appears to be insufficient evidence to reject H0. In other words, the presence of the human navigator behind the robot does not appear to affect the robot's performance. Since, in computing µD, we subtracted a participant's time-to-completion from the robot's time-to-completion, the positive t-statistics that exceed 2.776 suggest that the robot was slower without the navigator than with the navigator. Conversely, the negative t-statistics smaller than -2.776 suggest that the robot was slower with the navigator than by itself.

To verify the validity of these observations, we analyzed the data through confidence intervals. We computed 95% time-to-completion confidence intervals for each route and each participant, including the robot. Table 2 gives the confidence intervals for all routes and participants. The robot is listed as participant 0. The interval table verifies the conclusions of the hypothesis tests. For example, both ends of the robot's confidence interval for route 10, given in column 0, are greater than the corresponding ends of participants 1 through 4 and are essentially the same as the ends of participant 5. The same observations can be made on routes 11, 16, and 17. This seems to verify the test-of-hypothesis conclusion that on these routes the robot without the navigator appeared to be slower than the robot with the navigator. The same technique can be applied to routes 14 and 15, on which, according to the test-of-hypothesis conclusion, the robot appeared to be faster without the navigator than with the navigator. The robot's confidence intervals for these routes are to the left of the confidence intervals of participants 1 through 4 and coincide with the ends of participant 5.

To understand what was causing these differences, we looked at the video footage of the runs. The video footage of the robot runs without the navigator on routes 10, 11, 16, and 17 showed that there was quite a bit of human traffic in the hallways. The video footage of the robot runs with the navigator on the same routes showed that, in the cases of participants 1 through 4, the amount of human traffic in the hallways declined.
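The intervals in Table 2 are standard small-sample confidence intervals. As an illustration (with invented run times, not the recorded data), a 95% interval for one participant on one route can be computed from the five runs as follows:

```python
import math

def confidence_interval_95(times):
    """95% confidence interval for the mean time-to-completion from n runs:
    mean +/- t_{0.025, n-1} * s / sqrt(n).  For n = 5 runs, t_{0.025, 4} = 2.776."""
    n = len(times)
    mean = sum(times) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in times) / (n - 1))
    half_width = 2.776 * s / math.sqrt(n)
    return (mean - half_width, mean + half_width)

# Five invented runs on one route:
runs = [65.3, 66.1, 65.8, 66.0, 65.9]
lo, hi = confidence_interval_95(runs)   # lo ≈ 65.43, hi ≈ 66.21
```

An interval lying entirely to the right of another participant's interval indicates slower times, which is how the robot's runs are compared against the participants' runs in the analysis above.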
The exception was participant 5, for whom the amount of human traffic remained essentially the same. Since the robot's speed decreases with the number of obstacles present in front, the robot traveled more slowly in the presence of human traffic.

The situation was reversed on routes 14 and 15. During the robot runs without the navigator the amount of human traffic was minimal. However, when we ran the robot with the human navigators, human traffic picked up considerably. The exception again was participant 5, for whom the amount of human traffic did not change. Our conclusion was that the amount of human traffic, i.e., the number of people on route, is a nuisance variable that may have contributed to the differences in robot performance.

Another interesting observation that we made as we watched the video footage was the effect of the occasional mismatch between the verbalized intent of the robot and the robot's actual actions. At several T-intersections the robot would tell the navigator that it was turning left and then, due to the presence of people, it started drifting to the right before actually making a left turn. When that happened, we observed that several human navigators pulled hard on the robot's handle, sometimes driving the robot to a virtual halt. We conjecture that when a communication mismatch occurs, i.e., when the robot starts doing something other than what it said it would do, the human navigators become apprehensive and try to stop the robot. Since these mismatches happened on the routes where the robot performed better without the navigator than with the navigator, we concluded that the mismatches may have contributed to the performance difference.

While watching the video footage, we also observed a different kind of communication problem that occurred several times during u-turns. The robot would inform the navigator that it had started making a u-turn after it had already started executing the maneuver. Although the robot's message was accurate, it came a bit too late and, as discussed in the next section, caused some discomfort on the part of the participants.

5.3 Participants Speak

After the experiments, we conducted informal verbal interviews with the participants and recorded their responses. The interviews consisted of several questions about navigation safety and user comfort. The objective was to let the participants give us feedback on their experiences. Below we give several comments verbatim.

Comment 1: There was some abruptness in the robot motion. Stops and slows down too suddenly. Sometimes it accelerates too fast.

Comment 2: Sometimes the robot tells you too late when it is about to make a u-turn. This is a problem if you have a guide dog and need to tell him to get out of the robot's way.

Comment 3: A little more user training up front would help. Let me touch the robot and give me some time to get comfortable with the keypad.

Comment 4: The robot slows down at turns and then it kicks into high gear too abruptly. I have a back injury and so such changes in speed were felt a lot.

Comment 5: The communication was clear and helpful. The robot told me when I got to a destination, whether the destination was on the right or left. It also told me when it was turning left or right. I would appreciate voice messages being spoken more loudly. I understand that you cannot make it too loud without making it obnoxious to the people around me. Perhaps, it could be done with one over-ear headphone or a shoulder speaker so that I have my other ear available to me.

Comment 6: Overall, I felt very comfortable navigating with the robot. I felt even more comfortable after I learned on one of the runs that the robot can recover from situations when it gets lost by finding an alternate route. Self-correction is a valuable feature of this device.

Comment 7: Make sure that there is no chance of the robot going off the course.

Figure 3: RoboCart's Handle, Design 1.

6. BRIDGING THE GAP: PART II

In fall 2004, we received permission from Lee's MarketPlace to use their supermarket as a test site for our experiments. We asked two visually impaired individuals to participate in a series of fitting trials in the store. On several occasions we ran RoboCart on its own. The objective was to learn through deployment what modifications in ergonomic design and navigation were required.

6.1 Ergonomic Modifications

As shown in Figure 2, our original design included a guide leash. However, the participants expressed a wish that the dog leash be replaced with a static handle. When asked why, the participants said that the dog leash did not give them sufficient feedback as to what direction the robot was taking them. This wish was expressed both by the cane users and the guide dog handlers. It was quite understandable that cane users expressed this wish, because the cane is firm and does resemble a static handle. We were surprised, however, to hear the same complaint from the guide dog handlers. As we took a closer look at how guide dogs are handled, the explanation presented itself immediately. It turns out that guide dog handlers do not use the leash when their dogs are at work. They use a firm leather handle attached to a special harness on the back of the dog. The handle enables the handler to give directions to the animal as well as to receive immediate haptic feedback about the animal's movement. The leash is used only when the dog is not at work and is being treated as a pet.
The above lesson led to our first modification: the addition of a static handle, shown in Figure 3. The keypad hangs on the right pole of the handle. After several trials in Lee's MarketPlace, we realized that the keypad's position was inconvenient for the user. It is difficult to access the keypad quickly when the robot is moving, because reaching for the keypad requires letting go of the handle. Using the other hand is impossible, as it is occupied with a cane or a leash.

Figure 4: RoboCart's Handle, Design 2.

Figure 4 shows how we modified this design by changing the position of the keypad. We purchased the wireless version of the same keypad, attached it to a small plastic rectangle, and then attached the rectangle to the handle's bar. This position allows the navigator to quickly reach for the keypad during navigation without letting go of the handle.

We also learned that Braille may not be feasible. Of the seven visually impaired people that we informally polled about the possibility of using Braille on the robot, only two were comfortable with the idea. As we investigated the matter further and talked with the assistive technology specialists at the USU Center for Persons with Disabilities, we learned that only a small fraction of visually impaired people use Braille. This fraction consists mostly of people who are blind from birth. People who lose vision later in their lives due to accident, illness, or age either never learn Braille or use it rather slowly.

6.2 Navigation Modifications
Several important modifications were made to our navigation algorithm. The original algorithm was designed for structured indoor environments [7], which was fine for navigating supermarket aisles. The algorithm did not work in large open spaces, such as supermarket lobbies. Besides having a lot of customer traffic, supermarket lobbies constantly change in terms of their layout due to promotion displays, flower stands, product boxes, and similar movable objects being placed into them by the store staff. After investigating the possibility of using Markov localization [3], we decided against it because of safety concerns. Most applications of Markov localization indoors are based on laser range finding, which does not perform well in large open spaces or in environments with large glassy surfaces that absorb laser signals. The performance of Markov localization is not predictable in dynamic environments and degrades in the presence of numerous dynamic obstacles. We also considered extending our RFID-based navigation to open spaces by putting up portable towers with RFID tags. We rejected this idea, too, because it called for a great deal of calibration and instrumentation and could be too disruptive to the indigenous business processes.

We discussed our problem with the supermarket's owner and a senior store manager. They suggested that we put masking tape lines on the floor and use them for navigating large open spaces. In their opinion, if the system were to be deployed in their store permanently, they could easily paint such lines on the floor. As long as the paint was resistant to the floor wax, the lines were not a problem.

Figure 5: RoboCart's Camera.

RoboCart was equipped with a small LogiTech web camera. Figure 5 shows how the camera was added to the robotic base. We put one masking tape line from the lobby up to the aisles. A simple vision-based line following algorithm was written and successfully tested on several runs. Figure 6 shows how RoboCart follows the line to reach an aisle.

Figure 6: RoboCart following a line.

Once in the aisles, our original RFID-based navigation algorithm was used. Each aisle has 5 shelves on both sides. An RFID tag is placed every 3 meters on the 2nd or 3rd shelf on both sides of the aisle so that the robot's RFID antenna can detect it. Thus, every aisle in which we tested RoboCart is equipped with 10 RFID tags: 5 on the left side and 5 on the right side. There is also a designated cash register where RoboCart takes the blind shopper. The cash register is equipped with two RFID tags. The first tag makes RoboCart stop and inform the blind shopper that the products can be unloaded onto the belt on the right. The second tag informs the shopper that she has to wait for the bagger to put the bags into the cart. The store management was comfortable with this instrumentation plan.

7. A GLIMPSE OF THE FUTURE
When we learned that Braille may not be a viable option, we replaced Braille with a voice-based directory based on synthetic speech. Instead of reading Braille, a blind person uses the keypad to scroll up and down the voice menu, in which each line is spoken to the user by the speech synthesis software. Modern grocery stores carry thousands of items.
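The paper does not describe the line following algorithm itself. As an illustration only, a minimal proportional follower of the kind commonly used for tape lines can be sketched as below; the function name, threshold, gain, and region-of-interest choice are all assumptions, not details of RoboCart's implementation.

```python
import numpy as np

def steering_command(frame, threshold=200, gain=0.005):
    """Map a grayscale camera frame (H x W array) to a turn rate.

    Positive = steer right, negative = steer left, None = line lost.
    Hypothetical sketch; not RoboCart's actual code.
    """
    h, w = frame.shape
    roi = frame[int(0.75 * h):, :]   # look only at the bottom quarter of the frame
    mask = roi > threshold           # bright masking tape against a darker floor
    xs = np.nonzero(mask)[1]         # column indices of candidate tape pixels
    if xs.size < 50:                 # too few pixels: report the line as lost
        return None
    error = xs.mean() - w / 2.0      # horizontal offset of the tape from image center
    return gain * error              # proportional steering toward the tape
```

A deployed version would additionally smooth the command over several frames and command a safe stop whenever the line is reported lost.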
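On the software side, the instrumentation plan above reduces to a lookup from detected tag IDs to announcements and motion commands. A minimal sketch of such a lookup follows; the tag IDs, command names, and messages are invented for illustration, as the paper does not describe RoboCart's internal data structures.

```python
# Hypothetical tag map: shelf-section tags trigger announcements,
# cash-register tags trigger a stop and/or a waiting message.
TAG_ACTIONS = {
    "aisle3-left-2": "announce:Canned soup, left shelf",
    "register-1":    "stop_and_announce:Unload products onto the belt on your right",
    "register-2":    "announce:Please wait for the bagger to put the bags into the cart",
}

def handle_tag(tag_id):
    """Translate a detected RFID tag into a (command, message) pair."""
    action = TAG_ACTIONS.get(tag_id)
    if action is None:
        return ("ignore", "")            # unknown tag: not part of the route map
    command, _, message = action.partition(":")
    return (command, message)
```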
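The voice-based directory described above can be sketched as a flat spoken menu driven by three keys. The class and method names are illustrative assumptions, and the flat list is only one possible organization; the paper leaves the directory's structure as an open question.

```python
class VoiceDirectory:
    """Hypothetical keypad-driven voice menu; speech is stubbed with print."""

    def __init__(self, items, speak=print):
        self.items = items       # product names, one per menu line
        self.index = 0
        self.speak = speak       # speech-synthesis hook; print stands in here

    def key(self, k):
        """Handle a keypad press: 'up'/'down' scroll, 'select' confirms."""
        if k == "up":
            self.index = max(0, self.index - 1)
        elif k == "down":
            self.index = min(len(self.items) - 1, self.index + 1)
        elif k == "select":
            return self.items[self.index]    # chosen destination for the robot
        self.speak(self.items[self.index])   # each line is spoken to the user
        return None
```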
One challenge that we are currently investigating is how to organize the directory for easy browsing.

Since the RFID tags must be placed on both sides of each aisle, the approximate layout of the store must be known in advance. This layout must be maintained by the store. Thus, another challenge is how the store maintains the layout so that the robotic shopping cart always guides the blind shopper to the appropriate shelf.

Another ergonomic challenge is access to individual items. RoboCart leads the blind person to shelf sections, not to individual items. For example, it will guide the person to the shelf section with Lay's potato chips, but the person still has to pick up an individual bag and put it into the robot's shopping basket. To address this problem, we have integrated a small portable barcode reader into the system. Grocery stores already use barcode reading technologies to keep track of their price inventories. The scenario that we are currently experimenting with is as follows: RoboCart gets the blind shopper to a shelf section with a number of individual items; the shopper then uses a handheld barcode reader to read the barcodes on the shelf until the barcode of the right item is found. Under this scenario, the shopper has to find the shelf and then slide the barcode reader along the shelf and listen until a speech message tells the user that the proper barcode has been found.

8. CONCLUSIONS
In this paper we showed how the basic principles of ergonomics-for-one were applied to the design and development of a proof-of-concept robotic shopping cart for the blind. We identified the performance gap that must be overcome by an accommodation system that allows the blind to shop independently. We described our initial usability tests and showed how the tests shaped the ergonomic modifications of the system.

9. ACKNOWLEDGMENTS
The first author would like to acknowledge that this research has been supported, in part, through an NSF CAREER grant (IIS-0346880) and two Community University Research Initiative (CURI) grants (CURI-04 and CURI-05) from the State of Utah. We would like to thank Mr. Lee Badger, the owner of Lee's MarketPlace, for allowing us to use his supermarket in Logan, Utah, as a research site. We are grateful to John Nicholson, our research colleague at USU CSATL, for helping us to conduct many fitting trials. We would like to thank Ying Bing, a CS graduate student, for implementing the line following algorithm. Finally, we would like to thank the visually impaired participants in our experiments for their valuable feedback.

10. REFERENCES
[1] Fong, T., Kaber, D., Lewis, M., Scholtz, J., Schultz, A., and Steinfield, A. Common Metrics for Human-Robot Interaction.
[2] Fong, T., Nourbakhsh, I., and Dautenhahn, K. A Survey of Socially Interactive Robots. Robotics and Autonomous Systems, 42:143-166, 2003.
[3] Fox, D., Burgard, W., and Thrun, S. Markov Localization for Mobile Robots in Dynamic Environments. Journal of AI Research, 11:391-427, 1999.
[4] Goodrich, M. and Olsen, D. Seven Principles of Efficient Human-Robot Interaction. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, pp. 3943-3948. IEEE, October 2003.
[5] Howard, A. A Methodology to Assess Performance of Human-Robotic Systems in Achievement of Collective Tasks. In Proceedings of the International Conference on Intelligent Robots and Systems (IROS). IEEE/RSJ, July.
[6] Kulyukin, V., Gharpure, C., De Graw, N., Nicholson, J., and Pavithran, S. A Robotic Wayfinding System for the Visually Impaired. In Proceedings of the Innovative Applications of Artificial Intelligence Conference (IAAI). AAAI, July 2004.
[7] Kulyukin, V., Gharpure, C., Nicholson, J., and Pavithran, S. RFID in Robot-Assisted Indoor Navigation for the Visually Impaired. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS). IEEE/RSJ, October 2004.
[8] LaPlante, M. and Carson, D. Disability in the United States: Prevalence and Causes. U.S. Department of Education, Washington, DC, 2000.
[9] McQuistion, L. Rehabilitation Engineering: Ergonomics for One. Ergonomics in Design, January:9-10, 1993.
[10] Olsen, D. and Goodrich, M. Metrics for Evaluating Human-Robot Interactions. In Performance Metrics for Intelligent Systems (PERMIS). NIST, September 2003.
[11] Pollack, M. Intelligent Technology for the Aging Population. AI Magazine, 26(2):9-24, 2005.
[12] Scerri, P., Pynadath, D., and Tambe, M. Toward Adjustable Autonomy for the Real World. Journal of AI Research, 17:171-228, 2002.
[13] Berg, E. Ergonomics in Health Care and Rehabilitation. Butterworth-Heinemann, Woburn, MA.
[14] Yanco, H. and Drury, J. A Taxonomy for Human-Robot Interaction. In Proceedings of the AAAI Fall Symposium on Human-Robot Interaction, pp. 111-119, 2002.