Passive Radio Frequency Exteroception in Robot-Assisted Shopping for the Blind


In 2004, the Computer Science Assistive Technology Laboratory (CSATL) of Utah State University (USU) started a project whose objective is to develop RoboCart, a robotic shopping assistant for the
visually impaired. RoboCart is a continuation of our previous work on RG, a robotic guide for the visually impaired in structured indoor environments. The determinism provided by exteroception of passive RFID-
enabled surfaces is desirable when dealing with dynamic and uncertain
environments where probabilistic approaches like Monte Carlo Markov
localization (MCL) may fail. We present the results of a pilot feasibility study with two visually impaired shoppers in Lee’s MarketPlace, a
supermarket in Logan, Utah.



Chaitanya Gharpure, Vladimir Kulyukin, Minghui Jiang, and Aliasgar Kutiyanawala
Computer Science Assistive Technology Laboratory, Utah State University, Logan, UT 84321, USA
Webpage: vkulyukin/vkweb/research/sandee.html

1 Introduction

There are 11.4 million visually impaired people living in the U.S. [1]. Grocery shopping is an activity that presents a barrier to independence for many visually impaired people, who either do not go grocery shopping at all or rely on sighted guides, e.g., friends, spouses, and partners [2]. While some visually impaired people currently rely on store personnel to help them shop, they express two common concerns [3]: 1) such personnel may not be immediately available, which results in the shopper having to wait for assistance for a lengthy period of time, and 2) the shopper may not be comfortable shopping for gender-sensitive items, e.g., items related to personal hygiene, with a stranger. In our previous work, we investigated several technical aspects of robot-assisted navigation for the blind, such as RFID-based localization, greedy free space selection, and topological knowledge representation [4, 5, 2].
In this paper, we focus on how passive radio frequency (PRF) surfaces can assist a robotic shopping assistant in a grocery store. Many systems that operate in smart environments utilize proprioception (action is determined relative to an internal frame of reference) or exteroception (action is determined from a stimulus originating in the environment itself). RFID has become an exteroceptive technology of choice due to its low power requirements, low cost, and ease of installation.
This paper is organized as follows. In Section 2, we present related work. In Section 3, we explain the hardware and navigation algorithms of RoboCart. In Section 4, we describe two proof-of-concept experiments that demonstrate the advantages of using RFID mats and the practicality of a smart device like RoboCart. In Section 5, we give our conclusions.

2 Related Work

Smart environments have become a major focus of assistive technology research [6]. Researchers at the Smith-Kettlewell Eye Research Institute developed Talking Signs®, infrared audio signage for the visually impaired that associates audio signals with various signs in the environment [7]. Willis and Helal [8] propose an assisted navigation system in which an RFID reader is embedded in a blind navigator's shoe and passive RFID sensors are placed in the floor. Vorwerk [9], a German company, manufactures carpets with integrated RFID technology for the intelligent navigation of service robots; however, the RFID tags are placed strictly in a rectangular grid.

Several research efforts in mobile robotics are similar to the research described in this paper insofar as they use RFID technology for robot navigation. Kantor and Singh [10] use RFID tags for robot localization and mapping. They utilize time-of-arrival signals from known RFID tags to estimate distances to detected tags and localize the robot. Hähnel et al. [11] propose a probabilistic measurement model for RFID signals to analyze whether RFID can be used to improve the localization of mobile robots in office environments. They demonstrate how RFID can be used to improve the performance of laser-based localization.

3 Robot-Assisted Shopping

3.1 RoboCart's Hardware

The RoboCart hardware design is a modification of RG, our indoor robotic guide for the blind that we built in 2003-2004 on top of another Pioneer 2DX base [12]. RoboCart is built on top of a Pioneer 2DX robotic platform from ActivMedia, Inc.
RoboCart's wayfinding toolkit resides in a polyvinyl chloride (PVC) pipe structure securely attached to the platform (see Figure 1). The wayfinding toolkit consists of a Dell™ Ultralight X300 laptop connected to the platform's microcontroller, a SICK laser range finder, a TI-Series 2000 RFID reader from Texas Instruments, and a Logitech® camera facing vertically down. The RFID reader is attached to a 200mm x 200mm antenna. Unlike RG, which had its RFID antenna on the right side of the PVC structure approximately a meter and a half from the floor, RoboCart, as seen in Figure 1, has its RFID antenna close to the floor in front of the robot, for reasons that will be explained later.
Fig. 1. RoboCart Hardware

Fig. 2. RFID mat

3.2 Navigation

Navigation in RoboCart is based on Kuipers' Spatial Semantic Hierarchy (SSH) [13]. The SSH is a model for representing spatial knowledge. According to the SSH, spatial knowledge can be represented at five levels: sensory, control, causal, topological, and metric. The sensory level is the interface to the robot's sensory system. RoboCart's primary sensors are a laser range finder, a camera, and an RFID reader. The control level represents the environment in terms of control laws that have trigger and termination conditions associated with them. The causal level describes the environment in terms of views and actions. Views specify triggers; actions specify control laws. For example, follow-hall can be a control law triggered by start-of-hall and terminated by end-of-hall. The topological level of the SSH is a higher level of abstraction, consisting of places, paths, and regions, and their connectivity, order, and containment relationships. The metrical level describes a global metric map of the environment within a single frame of reference.

To deal with large open spaces, we decided to use laser-based Monte Carlo Markov localization (MCL) [14], as it was already implemented in ActivMedia's Laser Mapping and Navigation software. After several field tests, we discovered some problems with MCL localization. First, the robot's ability to accurately localize rapidly deteriorated in the presence of heavy shopper traffic. Second, MCL sometimes failed due to wheel slippage on a wet floor or due to the blind shopper inadvertently pulling on the handle. Third, since MCL relies exclusively on odometry to localize itself along a long uniform hallway that lacks unique laser range signatures, it would frequently get lost in an aisle.
Fourth, MCL localization frequently failed in the store lobby, because the lobby constantly changed its layout due to promotional displays, flower stands, and product boxes. Finally, once MCL fails, it either never recovers, or recovers only after a long drift.
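As a concrete illustration, the trigger/termination structure of an SSH control law can be sketched as follows. This is our own minimal Python rendering of the idea, not RoboCart's implementation; all names and the view representation are illustrative.

```python
from dataclasses import dataclass
from typing import Callable, Iterator

# A control law in the SSH sense: an action with an associated
# trigger view and termination view (names are illustrative).
@dataclass
class ControlLaw:
    name: str
    trigger: Callable[[dict], bool]      # view that starts the law
    terminate: Callable[[dict], bool]    # view that ends the law

def run(law: ControlLaw, sensor_stream: Iterator[dict]) -> list[str]:
    """Execute a control law over a stream of sensor views."""
    log, active = [], False
    for view in sensor_stream:
        if not active and law.trigger(view):
            active = True
            log.append(f"start {law.name}")
        elif active and law.terminate(view):
            log.append(f"end {law.name}")
            break
        elif active:
            log.append(f"step {law.name}")
    return log

# Example from the text: follow-hall is triggered by start-of-hall
# and terminated by end-of-hall.
follow_hall = ControlLaw(
    "follow-hall",
    trigger=lambda v: v.get("view") == "start-of-hall",
    terminate=lambda v: v.get("view") == "end-of-hall",
)

views = [{"view": "start-of-hall"}, {"view": "hall"}, {"view": "end-of-hall"}]
print(run(follow_hall, iter(views)))
```

The causal level then pairs such views and actions into schemas, as described in Section 3.4.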
3.3 RFID-based recalibration

We conjectured that MCL was a viable option if the robot could somehow recalibrate, periodically and reliably, its position on the global map. To allow for periodic and reliable MCL recalibration, we decided to turn the floor of the store into an RFID-enabled surface, where each RFID tag has its own 2D coordinates. A literature search showed that some ubiquitous computing researchers had started thinking along the same lines [8]. The concept of the RFID-enabled surface was refined into the concept of recalibration areas, i.e., areas of the floor with embedded RFID tags. In our current implementation, recalibration areas are RFID mats, which are small carpets with embedded RFID tags. The mats are placed at specified locations in the store without causing any disruption to the indigenous business processes.

The literature search also showed that RFID has been used to assist laser-based localization. For example, in [15], the authors demonstrate how RFID can be used to improve the performance of laser-based localization through a probabilistic measurement model for RFID readers. While this is certainly a valid approach, we think that one advantage of recalibration areas is deterministic localization: when the robot reaches a recalibration area, its location is known with certainty. We built several RFID mats with RFID tags embedded in a hexagonal fashion. An RFID mat is shown in Figure 2. Every recalibration area is mapped to a corresponding rectangular region in the store's metric global map constructed using ActivMedia's metric map building software, Mapper3®.
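The deterministic recalibration step reduces to a lookup: a tag read fixes the robot's position to the tag's known map coordinates. A minimal sketch, with hypothetical tag IDs and coordinates:

```python
# Deterministic recalibration from an RFID mat: each tag ID maps
# to known 2D map coordinates, so a tag read fixes the (x, y) of
# the pose estimate. Tag IDs and coordinates are hypothetical.
TAG_MAP = {
    "tag-017": (12.40, 3.05),   # meters in the global map frame
    "tag-018": (12.40, 3.65),
}

def recalibrate(pose_estimate, tag_id, tag_map=TAG_MAP):
    """Replace the (x, y) of an MCL pose estimate with the tag's
    known coordinates; the heading theta is kept from the estimate."""
    if tag_id not in tag_map:
        return pose_estimate            # unknown tag: keep MCL estimate
    x, y = tag_map[tag_id]
    _, _, theta = pose_estimate
    return (x, y, theta)

print(recalibrate((12.1, 3.3, 0.5), "tag-017"))
```

This is what distinguishes the approach from the probabilistic measurement model of [15]: no sensor model is needed, since the tag position is known with certainty.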
In the future, as larger recalibration areas are deployed, every RFID tag may have a unique ID so that a recalibration area may act as a topological region with its own coordinate system and frame of reference.

3.4 Semi-automatic acquisition of topology and causality

A principal limitation of RG was the fact that the topological and causal levels of the SSH had to be manually created for a given environment [5]. In RoboCart, several aspects of acquiring topological and causal knowledge were automated. The problem here is to have the robot itself acquire the connectivity of landmarks and the maneuvers that can be executed at each landmark.

Two types of representations of the environment must be acquired before RoboCart can navigate a grocery store: first, the metrical level representation in the form of an occupancy grid map, which is used by the MCL algorithm, and second, the control/causal level representation, which is an abstraction of the metric map used for path planning.

In RoboCart, the acquisition process has four steps. First, the robot is manually driven through the environment to acquire a global metric map with ActivMedia's Mapper3 laser-based software. Figure 3 shows the metric map for the area of Lee's MarketPlace used in the experiments. Second, dark blue masking tape is placed on the floor. In Figure 3, the tape goes north from the robot's home location and turns west between the cash registers and the grocery aisles.
Third, the robot follows the tape to acquire the topological and causal knowledge. Fourth, the tape is removed.

The robot uses a Logitech® web cam to capture floor images. Four actions are used: follow-tape, turn-left-90, turn-right-90, and turn-180. The two action triggers are tape intersections and ends of turns. An edge detection algorithm is used to follow the tape and to recognize three tape fiducials: straight-tape, intersection, and horizontal-tape. When a tape intersection is detected, the robot stops and presents a confirmation dialogue to the operator. The operator accepts the landmark if it is a true positive and rejects it if it is a false positive. Thus, the causal schemas <View, Action, View> are obtained through user interaction, where Action ∈ {follow-tape, turn-left-90, turn-right-90, turn-180} and View ∈ {tape-intersection, horizontal-tape, end-of-turn}. If a visual landmark is accepted, a fuzzy metric landmark is created. If the global position is <x, y>, the fuzzy landmark is a rectangular region from xi to xj and from yk to ym, where xi = x − δ, xj = x + δ, yk = y − δ, and ym = y + δ, and δ is an integer constant.

It took us 40 minutes to acquire the topological and causal knowledge of the area of Lee's MarketPlace shown in Figure 3: 10 minutes for the metric map acquisition, 10 minutes for deploying the tape, 15 minutes for running the robot, and 5 minutes for removing the tape. The acquired knowledge of the environment consists of three files: the global metric map file, a file with fuzzy metric landmarks, and a file with a fuzzy metric landmark connectivity graph that also contains the actions that can be executed at each landmark, described in the next section.

Fig. 3. Fuzzy areas in the grocery store environment.
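The fuzzy metric landmark construction above can be written out directly; the helper names below are ours, introduced only for illustration.

```python
# A fuzzy metric landmark as defined above: a square region of
# half-width delta centered on the accepted position <x, y>.
def make_fuzzy_landmark(x, y, delta):
    """Return the rectangle (xi, xj, yk, ym) = (x-d, x+d, y-d, y+d)."""
    return (x - delta, x + delta, y - delta, y + delta)

def inside(landmark, x, y):
    """True if position (x, y) falls within the fuzzy landmark region."""
    xi, xj, yk, ym = landmark
    return xi <= x <= xj and yk <= y <= ym

# A landmark accepted at <10, 4> with delta = 1 spans [9, 11] x [3, 5],
# so a slightly off pose still counts as "at the landmark".
lm = make_fuzzy_landmark(10, 4, 1)
print(inside(lm, 10.5, 3.2))
```

The fuzziness is what lets a landmark serve as a reliable action trigger despite small localization errors.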
3.5 Actions

Path planning in RoboCart is done through breadth-first search. A path plan is a sequence of fuzzy metric landmarks connected by actions. Whenever a landmark is reached, the appropriate action is triggered. The path-planning actions are turn-into-left-aisle, turn-into-right-aisle, follow-maximum-empty-space, and track-target. The first three rely on finding and choosing the maximum empty space as the direction of travel. The track-target action relies on the current pose Pc = <x, y, θ> and the destination pose Pd = <x, y, θ> to compute the direction of travel. Other actions, which are not part of the pre-planned path, are stop-and-recalibrate, beep-and-stop, and inform-and-stop. If there is an obstacle in the path, RoboCart emits a beep and stops for 8 seconds before starting to avoid the obstacle. When RoboCart reaches a destination, it informs the user about the destination and stops. The actions and their trigger and termination views are listed in Table 1.

Table 1. Actions and Views

  Trigger       Action                  Termination
  fuzzy-area    track-target            fuzzy-area
  fuzzy-area    turn-into-right-aisle   fuzzy-area
  fuzzy-area    turn-into-left-aisle    fuzzy-area
  fuzzy-area    maximum-empty-space     fuzzy-area
  RFID-tag      stop-and-recalibrate    pose=tag-x-y
  obstacle      beep-and-stop           no-obstacle
  obstacle      beep-and-stop           time-out
  destination   inform-and-stop         none
  cashier       inform-and-stop         resume

3.6 Human-Robot Interaction

The shopper communicates with RoboCart through a 10-key numeric keypad attached to the handle of the cart. A speech-enabled menu allows the shopper to perform tasks such as browsing the hierarchical product database, selecting products, navigating, pausing, and resuming. A wireless IT2020 barcode reader from Hand Held Products, Inc. is attached to the laptop. When the shopper reaches the desired product in the aisle, he/she picks up the barcode reader and scans the barcodes on the edge of the shelf. When a barcode is scanned, the reader beeps.
If the scanned barcode is that of the desired item, the shopper hears the product title in the Bluetooth® headphones. The shopper can then carefully reach for the product above the scanned barcode and place it in the shopping basket installed on RoboCart.
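The scan-and-speak loop above amounts to a lookup against the product database followed by speech feedback. A sketch, with hypothetical barcodes, product titles, and return phrases standing in for the actual database and speech output:

```python
# Map scanned barcodes to product titles and decide what the
# shopper hears. Barcodes and titles are illustrative.
PRODUCTS = {
    "036000291452": "whole wheat bread",
    "012000161155": "orange juice, 64 oz",
}

def on_scan(barcode, target_barcode, products=PRODUCTS):
    """Return the phrase spoken to the shopper after a scan."""
    title = products.get(barcode)
    if title is None:
        return "unknown barcode"
    if barcode == target_barcode:
        return f"found {title}"
    return f"{title}; keep scanning"

print(on_scan("036000291452", "036000291452"))
```

In RoboCart the spoken phrase is rendered through the Bluetooth® headphones rather than returned as a string.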
4 Proof-of-Concept Experiments

4.1 Experiment 1

Localization error samples were collected from two populations: Pnomat and Pmat, where Pnomat is the population of localization errors when no recalibration is done at RFID mats, and Pmat is the population of localization errors when recalibration is done. Localization error in centimeters was calculated from the true and estimated poses as follows. White masking tape was placed on a dark brown office floor, forming a rectangle with a 30-meter perimeter. At 24 selected locations, the tape was crossed by perpendicular stretches of the same white masking tape. The x and y coordinates of each intersection were recorded.

Four new PRF mats were developed. Each mat consisted of a carpet surface, 1.2 meters long and 0.6 meters wide, instrumented with 12 RFID tags. The x-y regions of each mat were supplied to the robot. The mats were placed in the middle of each side of the rectangle. The robot used the vision-based tape following and tape intersection recognition algorithms to determine its true pose (ground truth). Thus, whenever a tape intersection was recognized, the robot recorded two readings: its true pose determined from vision and its estimated pose determined from MCL.

The first 16 runs without recalibration produced 384 (24 landmarks x 16 runs) samples from Pnomat. Another 16 runs with recalibration produced 384 samples from Pmat. Let H0: µ1 − µ2 = 0 be the null hypothesis, where µ1 and µ2 are the means of Pnomat and Pmat, respectively. The t-test at α = 0.001 was used to compute the t-statistic as t = (M1 − M2) / √(σ1²/n1 + σ2²/n2), where M1 and M2 are the sample means. From the data obtained in the experiments, the value of the t-statistic was calculated to be 6.67, which was sufficient to reject H0 at the selected α.

The use of PRF mats as recalibration areas showed a 20.23% reduction in localization error: from a mean localization error of 16.8cm without recalibration to 13.4cm with recalibration.
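The t-statistic above can be computed directly from the sample summaries. The paper reports the means (16.8 cm and 13.4 cm), n1 = n2 = 384, and t = 6.67, but not the sample standard deviations; the standard deviations in the sketch below are therefore hypothetical, chosen only to illustrate the computation.

```python
import math

# Two-sample t-statistic: t = (M1 - M2) / sqrt(s1^2/n1 + s2^2/n2).
def t_statistic(m1, s1, n1, m2, s2, n2):
    return (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)

# Means and sample sizes from the experiment; standard deviations
# of ~7 cm are an assumption for illustration.
t = t_statistic(16.8, 7.0, 384, 13.4, 7.0, 384)
print(round(t, 2))
```

With these assumed standard deviations the statistic comes out near the reported value, but the exact figure depends on the actual sample variances.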
Since the test was conducted in a simple office environment, the errors are small. It is expected that in a larger environment, e.g., a supermarket, the errors will be significantly larger. Negotiations with the store for conducting recalibration experiments are underway as this paper is being written.

4.2 Experiment 2

Upon entering the store, a visually impaired shopper must complete the following tasks: find RoboCart, use RoboCart to navigate to the shelf sections with the needed grocery items, find those items on the shelves, place them into RoboCart, navigate to the cash register, place the items on the conveyor belt, pay for the items, navigate to the exit, remove the shopping bags from RoboCart, and leave the store.

The purpose of the second experiment was to test feasibility with respect to these tasks. In particular, we focused on two questions: 1) Can the shopper
successfully retrieve a given set of products? and 2) Does repeated use of RoboCart result in a reduction of the overall shopping time?

The sample product database consisted of products from aisles 9 and 10, with 8 products on the top shelf, 8 products on the third shelf from the bottom, and 8 products on the bottom shelf. The trials were run with two visually impaired shoppers from the local community over a period of six days. A single shopping trial consisted of the user picking up RoboCart from the docking area, navigating to three pre-selected products, and navigating back to the docking area through the cash register. Before the actual trials, the shopper was given 15 minutes of training on using the barcode reader to scan barcodes on the shelves.

We ran 7 trials for three different sets of products. To make the shopping task realistic, for each trial one product was chosen from the top shelf, one from the third shelf, and one from the bottom shelf. Split timings for each of the ten tasks were recorded and graphed. Figure 3 shows the path taken by RoboCart during each trial. The RFID mats were placed at both ends of the aisles, as shown in Figure 3.

The shoppers successfully retrieved all products. The times for the seven shopping iterations for product sets 1, 2, and 3 were graphed. The time taken by the different navigation tasks remained fairly constant over all shopping trials. From the graph in Figure 4 it can be seen that the time to find a product decreases after a few trials. The initial longer time in finding the product is due to the fact that the user is not aware of the exact location of the product on the shelf. Eventually the user learns where to look for the barcode, and the product retrieval time decreases, stabilizing at an average of 20 to 30 seconds.

Fig. 4. Product Retrieval Performance for Participant 1

Fig. 5. Product Retrieval Performance for Participant 2

After conducting the experiments with the first participant, we felt the need to modify the structure of the barcode reader so that the user could more easily scan barcodes on the shelves. This led to a minor ergonomic modification to the barcode reader that enabled the user to rest it on the shelves and scan barcodes with ease. This modification greatly improved performance, as can be seen from the results in Figure 5.
5 Conclusions

We presented a proof-of-concept prototype of RoboCart, a robotic shopping assistant for the visually impaired. We described our approach to the navigation problem in a grocery store, our approach to the semi-automatic acquisition of two levels of the SSH, and our use of RFID mats to recalibrate MCL. We experimentally found that RFID-based recalibration reduced the MCL localization error by 20.23%.

In the pilot experiments, we observed that the two visually impaired shoppers successfully retrieved all products and that repeated use of RoboCart reduced the overall shopping time. The overall shopping time decreased with the number of shopping trials and eventually stabilized.

While the pilot feasibility study presented in this paper confirmed the practicality of a device like a smart shopping cart and gave valuable insights into the design of future experiments, our approach has limitations. Recalibration areas are currently placed in an ad hoc fashion. The system would greatly benefit if the placement of recalibration areas on the global metric map were done algorithmically. The automatic or semi-automatic construction of the product database, with useful descriptions and handling instructions for all products, has not been attempted.

6 Acknowledgements

This research has been supported, in part, through an NSF CAREER grant (IIS-0346880) and two Community University Research Initiative (CURI) grants (CURI-04 and CURI-05) from the State of Utah awarded to Vladimir Kulyukin.

References

1. LaPlante, M.P., Carlson, D.: Disability in the United States: Prevalence and causes. U.S. Department of Education, National Institute on Disability and Rehabilitation Research, Washington, DC (2000)
2. Kulyukin, V., Gharpure, C., Nicholson, J.: RoboCart: Toward robot-assisted navigation of grocery stores by the visually impaired.
In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE/RSJ (2005)
3. Burrell, A.: Robot lends a seeing eye for blind shoppers. USA Today, July 11, 2005
4. Kulyukin, V., Gharpure, C.P., DeGraw, N.: Human-computer interaction in a robotic guide for the visually impaired. In: Proceedings of the AAAI Spring Symposium, Palo Alto, CA (2004)
5. Kulyukin, V., Gharpure, C.P., Nicholson, J., Pavithran, S.: RFID in robot-assisted indoor navigation for the visually impaired. In: Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004), Sendai, Japan (2004)
6. Zita Haigh, K., Kiff, L., Myers, J., Guralnik, V., Geib, C., Phelps, J., Wagner, T.: The Independent LifeStyle Assistant: AI lessons learned. In: Proceedings of the 2004 IAAI Conference, San Jose, CA, AAAI (2004)
7. Marston, J., Golledge, R.: Towards an accessible city: Removing functional barriers for the blind and visually impaired: A case for auditory signs. Technical Report, Department of Geography, University of California at Santa Barbara (2000)
8. Willis, S., Helal, S.: A passive RFID information grid for location and proximity sensing for the blind user. Technical Report TR04-009, University of Florida (2004)
9. Vorwerk & Co.: Smart Carpet. (2006)
10. Kantor, G., Singh, S.: Preliminary results in range-only localization and mapping. In: IEEE Conference on Robotics and Automation, Washington, D.C. (2002)
11. Hähnel, D., Burgard, W., Fox, D., Fishkin, K., Philipose, M.: Mapping and localization with RFID technology. Technical Report IRS-TR-03-014, Intel Research Seattle, Seattle, WA (2003)
12. Kulyukin, V., Gharpure, C.P., Sute, P., DeGraw, N., Nicholson, J.: A robotic wayfinding system for the visually impaired. In: Proceedings of the Sixteenth Innovative Applications of Artificial Intelligence Conference, San Jose, CA (2004)
13. Kuipers, B.: The Spatial Semantic Hierarchy. Artificial Intelligence 119 (2000) 191-233
14. Fox, D.: Markov Localization: A Probabilistic Framework for Mobile Robot Localization and Navigation. PhD thesis, University of Bonn, Germany (1998)
15. Hähnel, D., Burgard, W., Fox, D., Fishkin, K., Philipose, M.: Mapping and localization with RFID technology. Technical Report IRS-TR-03-014, Intel Research Seattle, Seattle, WA (2003)