K. Miesenberger et al. (Eds.): ICCHP 2014, Part II, LNCS 8548, pp. 50–53, 2014.
© Springer International Publishing Switzerland 2014

Narrative Map Augmentation with Automated Landmark Extraction and Path Inference

Vladimir Kulyukin and Thimma Reddy
Computer Science Assistive Technology Laboratory, Department of Computer Science, Utah State University, Logan, UT, USA

Abstract. Various technologies, including GPS, Wi-Fi localization, and infrared beacons, have been proposed to increase travel independence for visually impaired (VI) and blind travelers. Such systems take readings from sensors, localize those readings on a map, and instruct VI travelers where to move next. Unfortunately, sensor readings can be noisy or absent, which decreases the traveler's situational awareness. However, localization technologies can be augmented with solutions that put the traveler's cognition to use. One such solution is narrative maps, i.e., verbal descriptions of environments produced by O&M professionals for blind travelers. The production of narrative maps is costly because O&M professionals must travel to designated environments and describe large numbers of routes. Complete narrative coverage may not be feasible due to the sheer size of many environments. However, the quality of produced narrative maps can be improved by automated landmark extraction and path inference. In this paper, an algorithm is proposed that uses scalable natural language processing (NLP) techniques to extract landmarks and their connectivity from verbal route descriptions. Extracted landmarks can subsequently be annotated with sensor readings, used to find new routes, or used to track the traveler's progress on different routes.

1 Introduction

Various technologies, including GPS, Wi-Fi localization, and infrared beacons [1], to name just a few, have been proposed to increase travel independence for visually impaired (VI) and blind travelers. Such systems take readings from sensors, localize those readings on a map, and instruct the traveler where to move next. Unfortunately, sensor readings can be noisy, absent, or no longer representative of the traveler's location [2], which decreases the traveler's situational awareness and makes it harder for the traveler to use her cognitive abilities en route.

Many VI and blind people receive extensive O&M training. During training, these individuals learn how to navigate indoor and outdoor environments, follow sidewalks, detect obstacles and landmarks, and cross streets [3]. They master techniques to remain oriented as they move inside buildings or on sidewalks and streets. Many individuals improve their O&M skills through independent traveling experiences. Gaunet & Briffault [4] showed that blind travelers can follow verbal directions outdoors.
Nicholson & Kulyukin [2] investigated the utility of verbal instructions indoors in a longitudinal study of blind shopping in supermarkets and showed that independent blind travelers can navigate modern supermarkets given adequate route descriptions.

Indoor and outdoor localization technologies can be augmented to better utilize the traveler's cognition. One approach to maximizing the traveler's cognitive and physical skills is narrative maps (www.clickandgomaps.com), i.e., verbal, egocentric or allocentric, descriptions of specific environments. Narrative maps are written by O&M professionals to take advantage of the perceptual abilities of blind travelers, i.e., transitions from carpet to tile, obstacle detection, localization, shorelining, contextual cues for orientation and re-orientation, etc. The production of narrative maps requires the expertise of O&M professionals who must travel to designated environments and describe large numbers of routes. Complete route coverage is rarely feasible due to the sheer complexity of many environments. However, existing narrative maps can be augmented by automated landmark extraction and path inference. In this paper, we propose an algorithm that uses scalable natural language processing (NLP) to extract landmarks and their connectivity from verbal route descriptions. Extracted landmarks can subsequently be annotated with sensor readings (e.g., Wi-Fi clusters or digital compass readings), used to find new routes, or used to track the traveler's progress en route.

The paper is organized as follows. In Section 2, we outline the algorithm. In Section 3, we present the experiments with the algorithm and discuss the results.

2 Landmark Extraction and Path Inference Algorithm

The conceptual basis of our algorithm is Kuipers' Spatial Semantic Hierarchy (SSH) [6], a hybrid knowledge representation framework for spatial cognition. In our previous study [2], the SSH was shown to be appropriate for the communication of verbal routes to blind supermarket shoppers. The SSH represents environments in terms of four levels: sensory, causal, topological, and metric. Of specific relevance to this paper is the topological level of the SSH, which describes the environment as maps of places, paths, regions, and their connectivity and containment.

Fig. 1. Partial route description from Caribbean Ballroom 6 to the Caribbean foyer at the Caribe Royale Convention Center Orlando, from www.clickandgomaps.com

The input to our algorithm is verbal route descriptions, one of which is shown in Fig. 1. The descriptions are split into sentences and the sentences are tokenized. The tokenized sentences are tagged with parts of speech (POS) and parsed to identify noun phrases (NPs) and verb phrases (VPs). We have used the Stanford Parser (nlp.stanford.edu) for both POS tagging and parsing. Fig. 2 shows a parse tree with POS tags for the sentence "Grand Sierra Ballroom foyer begins 75 feet ahead as the carpet changes to tile."
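To make this preprocessing stage concrete, the listing below is a minimal sketch that obtains such parse trees with the Stanford CoreNLP pipeline, which bundles the Stanford Parser; the annotator properties and class names are the standard CoreNLP ones, while the class RouteParserSketch and the choice of the pipeline API are illustrative assumptions rather than the authors' implementation.

import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.trees.TreeCoreAnnotations;
import edu.stanford.nlp.util.CoreMap;
import java.util.Properties;

// Hypothetical illustration: sentence splitting, tokenization, POS tagging,
// and constituency parsing of a route description with Stanford CoreNLP.
public class RouteParserSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize, ssplit, pos, parse");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        String route = "Grand Sierra Ballroom foyer begins 75 feet ahead "
                     + "as the carpet changes to tile.";
        Annotation doc = new Annotation(route);
        pipeline.annotate(doc);

        for (CoreMap sentence : doc.get(CoreAnnotations.SentencesAnnotation.class)) {
            // Constituency parse tree with POS tags, analogous to Fig. 2.
            Tree tree = sentence.get(TreeCoreAnnotations.TreeAnnotation.class);
            tree.pennPrint();
        }
    }
}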
Landmarks are extracted by finding NP nodes in parse trees and applying regular expressions to the corresponding text segments. Each landmark receives a unique ID and is saved in an SQL database. VP nodes from parse trees and regular expressions are used to extract actions and their parameters as well as landmark connectivity information. For example, from the sub-tree (VP (VB walk) (NP (CD 3) (NNS steps))), the algorithm extracts the action WALK that can be parameterized by the unit STEP quantified by the numeral 3. If at least one action is detected between two landmarks, the landmarks are considered connected in a directed graph that represents the connectivity of the environment. If it cannot be determined which landmarks are connected by an extracted action, two virtual landmarks are generated and stored in the database. New paths are inferred from landmark nodes and action edges by finding landmarks common to a pair of routes, as shown in Fig. 3. The reader may consult [5] for more details.

Fig. 2. Results of POS Tagging and Parsing

Fig. 3. Path Inference
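The extraction and inference steps above can be pictured with a short sketch, given here under stated assumptions rather than as the authors' code: it walks a parse Tree, collects NP yields as landmark candidates and VB heads of VPs as actions, links consecutive landmarks in a directed graph when an action occurs between them, and splices two landmark sequences at a shared landmark to infer a new path; the class RouteGraphSketch and its methods are hypothetical names, and the regular-expression filtering, SQL storage, and virtual-landmark handling are omitted.

import edu.stanford.nlp.trees.Tree;
import java.util.*;

// Hypothetical sketch of landmark/action extraction and path inference.
// Nested NPs are collected as-is; the real algorithm filters candidates with regexes.
public class RouteGraphSketch {

    // Directed connectivity graph: landmark -> landmarks reachable via some action.
    private final Map<String, Set<String>> graph = new HashMap<>();

    // Process one parsed sentence of a route description.
    public void addSentence(Tree root) {
        List<String> landmarks = new ArrayList<>();
        List<String> actions = new ArrayList<>();
        collect(root, landmarks, actions);
        // Connect consecutive landmarks if at least one action was detected.
        if (!actions.isEmpty()) {
            for (int i = 0; i + 1 < landmarks.size(); i++) {
                graph.computeIfAbsent(landmarks.get(i), k -> new HashSet<>())
                     .add(landmarks.get(i + 1));
            }
        }
    }

    private void collect(Tree node, List<String> landmarks, List<String> actions) {
        if (node.isLeaf()) return;
        String label = node.label().value();
        if (label.equals("NP")) {
            landmarks.add(yieldText(node));            // e.g., "Grand Sierra Ballroom foyer"
        } else if (label.equals("VP") && node.numChildren() > 0
                   && node.getChild(0).label().value().startsWith("VB")) {
            actions.add(yieldText(node.getChild(0)));  // e.g., "walk" from (VP (VB walk) ...)
        }
        for (Tree child : node.children()) {
            collect(child, landmarks, actions);
        }
    }

    // Path inference sketch: follow route A to a landmark shared with route B,
    // then continue along the remainder of route B (cf. Fig. 3 and [5]).
    public static List<String> splice(List<String> routeA, List<String> routeB) {
        for (int i = 0; i < routeA.size(); i++) {
            int j = routeB.indexOf(routeA.get(i));
            if (j >= 0) {
                List<String> path = new ArrayList<>(routeA.subList(0, i + 1));
                path.addAll(routeB.subList(j + 1, routeB.size()));
                return path;
            }
        }
        return Collections.emptyList();                // no shared landmark, no inferred path
    }

    private static String yieldText(Tree node) {
        StringBuilder sb = new StringBuilder();
        for (Tree leaf : node.getLeaves()) {
            if (sb.length() > 0) sb.append(' ');
            sb.append(leaf.value());
        }
        return sb.toString();
    }
}

In the full algorithm, extracted landmarks and actions are keyed by unique IDs in an SQL database, and path inference operates over the resulting directed graph of landmark nodes and action edges [5].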
3 Experiments

The algorithm was implemented in Java and tested on 272 verbal route directions for the Caribe Royale Convention Center Orlando from www.clickandgomaps.com. The algorithm extracted 421 landmarks and 884 action edges. Of the 421 landmarks, 361 (86%) were true positives and 60 (14%) were false positives. Of the 884 actions, 873 (98%) were true positives and 11 (2%) were false positives. The algorithm also inferred 2,210 new paths.

References

1. Goldsmith, A.: Wireless Communications. Cambridge University Press (2005)
2. Nicholson, J., Kulyukin, V., Coster, D.: ShopTalk: Independent Blind Shopping Through Verbal Route Directions and Barcode Scans. The Open Rehabilitation Journal 2, 11–23 (2009)
3. Golledge, R.G., Klatzky, R.L., Loomis, J.M.: Cognitive Mapping and Wayfinding by Adults without Vision. In: Portugali, J. (ed.) The Construction of Cognitive Maps. Kluwer Academic Publishers, Dordrecht (1996)
4. Gaunet, F.: Verbal Guidance Rules for a Localized Wayfinding Aid Intended for Blind Pedestrians in Urban Areas. Universal Access in the Information Society 4(4), 338–353 (2006)
5. Kulyukin, V., Nicholson, J.: Toward Blind Travel Support through Verbal Route Directions: A Path Inference Algorithm for Inferring New Route Descriptions from Existing Route Directions. The Open Rehabilitation Journal 5, 22–40 (2012)
6. Kuipers, B.: The Spatial Semantic Hierarchy. Artificial Intelligence 119, 191–233 (2000)