A Robotic Shopping Assistant for the Blind




Vladimir Kulyukin, Chaitanya Gharpure
Computer Science Assistive Technology Laboratory
Department of Computer Science
Utah State University
Logan, UT 84322-4205

ABSTRACT

The Computer Science Assistive Technology Laboratory (CSATL) of Utah State University (USU) is currently developing RoboCart, a robotic shopping assistant for the blind. This paper describes a small set of initial experiments with RoboCart at Lee's MarketPlace, a supermarket in Logan, Utah.

KEYWORDS

Visual impairment, robot-assisted navigation, robot-assisted grocery shopping

BACKGROUND

There are 11.4 million visually impaired individuals living in the U.S. [1]. Grocery shopping presents a barrier to independence for many visually impaired people, who either do not go grocery shopping at all or rely on sighted guides, e.g., friends, spouses, and partners. Traditional navigation aids, such as guide dogs and white canes, are not adequate in environments as dynamic and complex as modern supermarkets. These aids cannot help their users with macro-navigation, which requires topological knowledge of the environment. Nor can they assist with carrying useful payloads.

In summer 2004, the Computer Science Assistive Technology Laboratory (CSATL) of the Department of Computer Science (CS) of Utah State University (USU) launched a project whose objective is to build a robotic shopping assistant for the visually impaired. In our previous publications, we examined several technical aspects of robot-assisted navigation for the blind, such as RFID-based localization, greedy free space selection, and topological knowledge representation [2, 3, 4].
In this paper, we briefly describe our robotic shopping assistant for the blind, called RoboCart, and present a small set of initial experiments with RoboCart in Lee's MarketPlace, a supermarket in Logan, Utah.

HYPOTHESIS

It was hypothesized by the investigators that repeated use of RoboCart by a visually impaired shopper leads to a reduction in overall shopping time, which eventually reaches an asymptote.

METHOD
----------------------------------------
Figures 1 & 2 Go Here
----------------------------------------

RoboCart is built on top of a Pioneer 2DX robotic platform from ActivMedia Corporation. RoboCart's navigation system resides in a PVC pipe structure mounted on top of the platform (see Figure 1). The navigation system consists of a Dell Ultralight X300 laptop connected to the platform's microcontroller, a SICK laser range finder, a TI-Series 2000 RFID reader from Texas Instruments, and a Logitech camera facing vertically down. The RFID reader is attached to a 200mm x 200mm antenna, which is mounted close to the floor in front of the robot, as seen in Figure 1. The antenna reads the small RFID tags embedded under carpets placed at the beginning and end of grocery aisles. One such carpet is shown in Figure 2. The antenna is attached in the front because the robot's metallic body and the magnets in its motors disabled the antenna when it was placed under the body of the robot.

Navigation in RoboCart is based on Kuipers' Spatial Semantic Hierarchy (SSH) [5]. The SSH is a model for representing spatial knowledge at five levels: sensory, control, causal, topological, and metric. The sensory level is the interface to the robot's sensory system. RoboCart's navigation combines Markov localization that uses the laser range finder with RFID-based localization that uses the RFID carpets. RoboCart has a topological map of the store that records which product items are contained in which aisles. The shopper interacts with the cart by browsing a voice-based product directory with a 10-key keypad attached to the right of the handle. When a product item is selected, RoboCart takes the shopper to the appropriate shelf.

------------------------------
Figure 3 Goes Here
------------------------------

An IT2020 barcode reader from Hand Held Products Inc. is wirelessly coupled to the onboard laptop.
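As a rough, hypothetical sketch (not the authors' implementation), the voice-browsable product directory and the shelf-barcode confirmation step could be modeled as follows; all product names, aisle numbers, and barcodes here are illustrative:

```python
# Hypothetical sketch of RoboCart's product directory and barcode
# confirmation. The products, aisles, and barcodes are invented for
# illustration; the paper does not publish this code.

from dataclasses import dataclass

@dataclass
class ShelfLocation:
    aisle: int     # aisle in the topological store map
    shelf: str     # e.g. "top", "third", "bottom"
    barcode: str   # shelf-edge barcode for the product

# Topological map fragment: which product items sit in which aisles.
PRODUCT_DIRECTORY = {
    "peanut butter": ShelfLocation(3, "top", "0012345678905"),
    "cereal": ShelfLocation(5, "third", "0076543210907"),
}

def select_product(name: str) -> ShelfLocation:
    """Selecting an item from the directory yields a navigation goal."""
    return PRODUCT_DIRECTORY[name]

def confirm_scan(scanned: str, target: ShelfLocation) -> bool:
    """True if a scanned shelf barcode matches the searched-for item."""
    return scanned == target.barcode

goal = select_product("cereal")
print(f"Navigating to aisle {goal.aisle}, {goal.shelf} shelf")
print(confirm_scan("0076543210907", goal))
```

In RoboCart itself, a successful match triggers a synthesized confirmation message rather than a printed line; the dictionary lookup stands in for browsing the voice-based directory with the keypad.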
When the shopper reaches the desired product in the aisle, he/she picks up the barcode reader and scans the barcode stickers on the edge of the shelf. When a barcode is scanned, the barcode reader beeps. If the scanned barcode is that of the search item, the user hears a synthesized message in a Bluetooth headphone. Figure 3 shows a visually impaired user scanning a barcode on the shelf with the wireless barcode reader.

RESULTS

------------------------------
Figures 4 and 5 Go Here
------------------------------

Preliminary experiments were run with one visually impaired shopper over a period of
three days. A single shopping iteration consisted of the shopper picking up RoboCart from the docking area near the entrance, navigating to three pre-selected products, and navigating back to the docking area through the cash register. Each iteration was divided into 10 tasks: navigating from the docking area to product 1 (N1), finding product 1 (P1), navigating from product 1 to product 2 (N2), finding product 2 (P2), navigating from product 2 to product 3 (N3), finding product 3 (P3), navigating from product 3 to the entry of the cash register (NC1), unloading the products (UL), navigating from the cash register entry to the cash register exit (NC2), and navigating from the cash register to the docking area (NLast).

Before the experiments, the shopper was given 15 minutes of training on using the barcode reader to scan barcodes. Seven shopping runs were completed for three different sets of products. Within each set, one product was chosen from the top shelf, one from the third shelf, and one from the bottom shelf. Time-to-completion numbers for each of the ten tasks were recorded by a human observer.

It can be seen from the graph in Figure 4 that the time taken by the different navigation tasks remained fairly constant over all runs. The graph in Figure 5 shows that the time to find a product decreases after a few iterations. The initially longer product search time is due to the fact that the shopper is not aware of the exact location of the product. However, over time, the shopper learns where to look for the barcode of a specific product item, and the product search time decreases. For the shopper in the experiments, the product search time reached the asymptote at an average of 20 to 30 seconds.

DISCUSSION

This single-subject study gives the investigators hope that visually impaired shoppers can be trained to use a barcode reader in a relatively short period of time.
The experiments conducted with one visually impaired shopper indicate that the overall shopping time decreases with the number of shopping iterations and eventually reaches an asymptote.

REFERENCES

1. LaPlante, M. P. & Carlson, D. (2000). Disability in the United States: Prevalence and Causes. Washington, DC: U.S. Department of Education.
2. Kulyukin, V., Gharpure, C., De Graw, N., Nicholson, J., & Pavithran, S. (2004). A Robotic Wayfinding System for the Visually Impaired. In Proceedings of the Innovative Applications of Artificial Intelligence Conference (IAAI), pp. 864-869. AAAI, July 2004.
3. Kulyukin, V., Gharpure, C., Nicholson, J., & Pavithran, S. (2004). RFID in Robot-Assisted Indoor Navigation for the Visually Impaired. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE/RSJ, October 2004.
4. Kulyukin, V., Gharpure, C., & Nicholson, J. (2005). RoboCart: Toward Robot-Assisted Navigation of Grocery Stores by the Visually Impaired. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE/RSJ, July 2005.
5. Kuipers, B. (2000). The Spatial Semantic Hierarchy. Artificial Intelligence, 119:191-233.
ACKNOWLEDGMENTS

The study was funded, in part, by two Community University Research Initiative (CURI) grants from the State of Utah (2004-05 and 2005-06) and NSF Grant IIS-0346880. The authors would like to thank Sachin Pavithran, a visually impaired training and development specialist at the USU Center for Persons with Disabilities, for his feedback on the localization experiments.

Author Contact Information:

Vladimir Kulyukin, Ph.D., Computer Science Assistive Technology Laboratory, Department of Computer Science, Utah State University, 4205 Old Main Hill, Logan, UT 84322-4205. Office Phone: (435) 797-8163. EMAIL: vladimir.kulyukin@usu.edu.

Chaitanya Gharpure, Computer Science Assistive Technology Laboratory, Department of Computer Science, Utah State University, 4205 Old Main Hill, Logan, UT 84322-4205. Office Phone: (435) 512-4560. EMAIL: cpg@cc.usu.edu.
GRAPHICS AND EQUATIONS

---------------------
Figure 1: RoboCart
---------------------

Alternative Text Description for Figure 1. The figure shows the structure of RoboCart. A PVC pipe structure, which holds the wayfinding toolkit, is mounted on the Pioneer 2DX robotic platform. The wayfinding toolkit consists of the laser range finder, the RFID reader and antenna, a Dell Latitude X300 laptop, a Logitech camera, speakers, and a 10-key keypad. The RFID antenna is placed at the front of the robotic base, close to the floor. It is used to read RFID tags embedded in a carpet which is placed at strategic locations in the store.

-----------------------
Figure 2: RFID carpet
-----------------------

Alternative Text Description for Figure 2. The figure shows a carpet instrumented with RFID tags. This RFID carpet is placed at strategic locations in the store and is used by RoboCart to localize. The RFID tags are placed in the carpet in a hexagonal pattern. The distance between any two neighboring tags is 15 cm.
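The hexagonal tag layout described for Figure 2 can be sketched geometrically: in a hexagonal lattice with 15 cm between neighboring tags, rows are staggered by half the spacing and separated vertically by spacing × √3/2. The carpet dimensions and coordinate frame below are assumptions for illustration, not taken from the paper:

```python
# Sketch of the RFID carpet's tag layout: a hexagonal pattern in
# which any two neighboring tags are 15 cm apart. The number of rows
# and columns is an arbitrary illustrative choice.

import math

SPACING = 0.15  # meters between neighboring tags (from the paper)

def hex_grid(rows: int, cols: int, spacing: float = SPACING):
    """Generate (x, y) tag coordinates for a hexagonal lattice."""
    row_pitch = spacing * math.sqrt(3) / 2   # vertical distance between rows
    tags = []
    for r in range(rows):
        x_offset = spacing / 2 if r % 2 else 0.0  # stagger odd rows
        for c in range(cols):
            tags.append((x_offset + c * spacing, r * row_pitch))
    return tags

carpet = hex_grid(rows=4, cols=6)
print(f"{len(carpet)} tags; first three: {carpet[:3]}")
```

With this layout, both a tag's in-row neighbors and its neighbors in the adjacent rows are exactly 15 cm away, which is what makes the hexagonal pattern attractive for uniform tag density.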
-------------------------------------
Figure 3: User scanning a barcode
-------------------------------------

Alternative Text Description for Figure 3. The figure shows a visually impaired user reading a barcode on the shelf with a wireless barcode reader.

--------------------------------------------
Figure 4: Navigation Timings for RoboCart
--------------------------------------------

Alternative Text Description for Figure 4. The figure shows a graph of navigation timings. The X axis denotes the run number, and the Y axis denotes the time in seconds. Navigation timings for six navigation tasks are graphed. The timings in seconds over the 7 runs are:

N1: 124, 124, 127, 124, 125, 124, 124
N2: 61, 61, 62, 60, 60, 61, 60
N3: 57, 57, 61, 57, 56, 56, 56
NC1: 55, 53, 53, 50, 50, 50, 50
NC2: 15, 16, 15, 15, 16, 16, 16
NLast: 20, 20, 18, 20, 19, 19, 20
-----------------------------------
Figure 5: Product search timings
-----------------------------------

Alternative Text Description for Figure 5. The figure shows a graph of product search timings. The X axis denotes the run number, and the Y axis denotes the time in seconds. Product search timings for three products are graphed. The search timings in seconds over the 7 runs are:

Product1: 44, 28, 21, 19, 19, 18, 13
Product2: 55, 36, 31, 25, 21, 23, 25
Product3: 30, 22, 16, 13, 18, 16, 15
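Averaging the Figure 5 timings per run makes the learning effect reported in the Results concrete; the snippet below simply reuses the numbers listed above:

```python
# Per-run mean product search times from the Figure 5 data. The
# numbers are taken directly from the alt-text description above;
# only the averaging is added here.

search_times = {
    "Product1": [44, 28, 21, 19, 19, 18, 13],
    "Product2": [55, 36, 31, 25, 21, 23, 25],
    "Product3": [30, 22, 16, 13, 18, 16, 15],
}

per_run_means = [
    sum(times[i] for times in search_times.values()) / len(search_times)
    for i in range(7)
]
for run, mean in enumerate(per_run_means, start=1):
    print(f"Run {run}: mean search time {mean:.1f} s")
```

The mean falls from 43.0 s on the first run to under 20 s by the seventh, consistent with the reported asymptote of 20 to 30 seconds per product.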