Using Wii Technology to Explore Real Spaces Via Virtual Environments for People Who Are Blind by H. Gedalevitz, O. Lahav, S. Battersby, D. Brown, L. Evett and P. Merritt
1. The document reports on a study that investigated how map users read, interpret, store, and use visual information presented on screen maps.
2. It examined how expertise influences these cognitive processes and how users are affected by deviations in the map image.
3. The study used eye tracking, thinking aloud protocols, sketch maps and questionnaires to analyze visual search patterns, memory of map information and influence of design variations in maps. It found that experts interpret maps more efficiently but novices can store more descriptive information, though neither could derive additional insights beyond what was shown.
The document summarizes information about a school for blind students in Gandhinagar, India. It provides details about the school's establishment, current operations, objectives, activities, sources of funding, and future plans. The school currently has 58 students and 11 teachers, and provides education from grades 1 to 10. It aims to make students self-reliant and comfortable. The school receives limited government grants and depends on donations. Its future plans include starting a college, medical dispensary, and diploma courses.
This document outlines a B.Sc. project to develop a smart blind stick system to help blind people navigate and interact with their environment. It will include several subsystems:
1) An ultrasound sensor and vibration motor system to detect obstacles and alert the user.
2) An indoor and outdoor navigation system using GPS and voice commands to guide the user between locations.
3) A wireless control system for appliances and security system activated by voice commands.
The project will be carried out by a team of students at Mansoura University, supervised by Assistant Professor Mohamed Abdel-Azim. It will involve research into relevant technologies such as speech recognition, image processing, GPS, ultrasound, and microcontrollers.
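The obstacle-detection subsystem in (1) above amounts to two steps: convert an ultrasonic echo round-trip time to a distance, then map that distance to a vibration strength. A minimal sketch of that logic (the thresholds, the 0-255 vibration scale, and the function names are illustrative assumptions, not details taken from the project):

```python
def echo_time_to_cm(echo_time_s, speed_of_sound_m_s=343.0):
    """Convert an ultrasonic echo round-trip time (seconds) to distance in cm.

    The echo travels to the obstacle and back, so the one-way distance is
    half of time * speed of sound.
    """
    return (echo_time_s * speed_of_sound_m_s / 2.0) * 100.0


def vibration_level(distance_cm, max_range_cm=200.0):
    """Map a distance to a 0-255 vibration duty cycle: closer = stronger.

    Anything at or beyond max_range_cm turns the motor off.
    """
    if distance_cm >= max_range_cm:
        return 0  # nothing in range, motor off
    ratio = 1.0 - distance_cm / max_range_cm
    return int(round(255 * ratio))
```

On real hardware the echo time would come from the ultrasound sensor and the returned level would drive the motor's PWM pin; here both ends are left abstract.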
Centre for the Blind and Visually Impaired - Mayur Karodia
This document provides information for designing an educational facility for visually impaired students in Indore, India. It includes the site details, design challenges, case studies of similar existing facilities, and the proposed design concept. The 7.5 acre site will include administrative buildings, classrooms, training workshops, hostels, and landscaped outdoor areas. The design aims to create an accessible environment through careful planning, circulation, textures, colors and sensory stimulation. Case studies of other blind schools provided insights into effective zoning, guidance systems, and creating an understanding environment.
This document provides an overview of design considerations for visually impaired individuals. It discusses lighting, color, texture, acoustics, smell, and legibility as key design elements. It also categorizes types of visual impairment and their symptoms. Design ideas are presented that utilize tactile cues like texture floors and handrails. Case studies of the Anchor Center for Blind Children and Hazelwood School show how their designs incorporate sensory elements to aid navigation.
Technology can help students with impairments maintain or increase their functionality by providing tools like physical equipment, software, and electronic devices. Assistive technologies implemented in the classroom help blind students focus on learning without hindrance. These include talking calculators, speech recognition software, screen reading programs, reading machines, large print books, magnifiers, and tactile learning materials. Talking calculators have big buttons and vocalize numbers/operations to eliminate the need for students to squint. Speech recognition converts spoken words into documents, while screen readers produce audio versions of electronic text.
This document summarizes Mahdi Babaei's master's thesis research on optimizing gesture recognition methods in virtual spaces. The research objectives are to study gesture recognition methods, design a projection-based digital space, study virtual reality interactions, test a gesture recognition prototype, and address its shortcomings. The proposed solution combines Microsoft Kinect skeletal tracking with accelerometer data to track head and eye movements better than Kinect alone. User studies show quantitative improvements, such as 19.41% faster interaction speeds, and qualitative improvements, such as a 14% average gain in usability factors based on questionnaires.
Ooms - Cognitive user evaluation of digital maps: findings and challenges - swenney
This document summarizes Kristien Ooms' research on cognitive user evaluation of digital maps using eye tracking methods. The research aims to understand how map users read, interpret, store and use visual information on screen maps. It involves two parts - the first examines basic map design using simple maps, and the second examines complex realistic maps. The research finds that expert map users have shorter fixations, more fixations per second, and can locate information faster than novices. It also finds that experts can retrieve more information from maps by storing it in larger chunks in working memory.
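The expert/novice comparison above rests on simple fixation statistics. As a hypothetical illustration of the two metrics mentioned (mean fixation duration and fixations per second), computed from a list of fixation intervals such as an eye tracker would produce (the function name and input format are assumptions for illustration):

```python
def fixation_stats(fixations, trial_duration_s):
    """Compute (mean fixation duration, fixations per second).

    fixations: list of (start_s, end_s) tuples from an eye-tracking trial.
    trial_duration_s: total length of the trial in seconds.
    """
    durations = [end - start for start, end in fixations]
    mean_duration = sum(durations) / len(durations)
    per_second = len(fixations) / trial_duration_s
    return mean_duration, per_second
```

Under Ooms' findings, an expert would show a lower mean duration and a higher per-second rate than a novice on the same map.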
This document discusses using convolutional neural networks (CNNs) to classify and segment satellite imagery. It presents a novel approach using a CNN to perform per-pixel classification of multispectral satellite imagery and a digital surface model into five categories (vegetation, ground, roads, buildings, water). The CNN is first pre-trained with unsupervised clustering then fine-tuned for classification and segmentation. Results show the CNN approach outperforms existing methods, achieving 94.49% classification accuracy and improving segmentation by reducing salt-and-pepper effects from per-pixel classification alone.
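The "salt-and-pepper" effect mentioned above refers to isolated, misclassified pixels scattered through a per-pixel label map. The paper uses the CNN itself for segmentation; purely as a sketch of the generic idea of suppressing such isolated labels, a majority-vote (mode) filter over a label grid looks like this:

```python
from collections import Counter


def mode_filter(labels, size=3):
    """Majority-vote smoothing of a 2D grid of class labels.

    Each output cell takes the most common label in its size x size
    neighborhood (clipped at the borders), which removes isolated
    single-pixel misclassifications.
    """
    h, w = len(labels), len(labels[0])
    r = size // 2
    out = [row[:] for row in labels]
    for y in range(h):
        for x in range(w):
            window = [labels[ny][nx]
                      for ny in range(max(0, y - r), min(h, y + r + 1))
                      for nx in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = Counter(window).most_common(1)[0][0]
    return out
```

For example, a lone "road" pixel surrounded by "vegetation" labels would be voted back to "vegetation".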
Evaluating Motion Constraints for 3D Wayfinding in Immersive and Desktop Virt... - Niklas Elmqvist
The document evaluates different motion constraints for aiding wayfinding in 3D virtual environments. It found that (1) free navigation performed better in the immersive CAVE environment, whereas spring-based guidance performed significantly better on the desktop, (2) navigation guidance was overall more efficient than free flight, and (3) navigation guidance had a greater impact on wayfinding for desktop users than for CAVE users. The study suggests that removing some freedom in 3D navigation through guided tours can actually improve cognitive map building and wayfinding.
Supporting situation awareness on the move - the role of technology for spati... - Mirjam-Mona
Presentation by Björn JE Johansson, Charlotte Hellgren, Per-Anders Oskarsson and Jonathan Svensson on the topic "Supporting situation awareness on the move - the role of technology for spatial orientation in the field" at ISCRAM 2013.
Bieg - Eye and Pointer Coordination in Search and Selection Tasks - Kalle
Selecting a graphical item by pointing with a computer mouse is a ubiquitous task in many graphical user interfaces. Several techniques have been suggested to facilitate this task, for instance by reducing the required movement distance. Here we measure the natural coordination of eye and mouse pointer control across several search and selection tasks. We find that users automatically minimize the distance to likely targets in an intelligent, task-dependent way. When target location is highly predictable, top-down knowledge can enable users to initiate pointer movements prior to target fixation. These findings question the utility of existing assistive pointing techniques and suggest that alternative approaches might be more effective.
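One hypothetical way to quantify the eye-pointer coordination studied here is the mean distance between time-aligned gaze and pointer samples over a trial (the function name and the assumption that the two streams are already sampled at the same timestamps are illustrative):

```python
import math


def mean_gaze_pointer_distance(gaze, pointer):
    """Mean Euclidean distance between time-aligned gaze and pointer samples.

    gaze, pointer: equal-length lists of (x, y) screen coordinates,
    one pair per timestamp.
    """
    assert len(gaze) == len(pointer), "streams must be time-aligned"
    total = sum(math.dist(g, p) for g, p in zip(gaze, pointer))
    return total / len(gaze)
```

A small value over a trial would indicate the tight coupling the abstract describes; the study's finding that users "minimize the distance to likely targets" would show up as this metric shrinking near predictable targets.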
The document discusses visual interpretation of hand gestures for human-computer interaction. It proposes using pointing gestures with a depth camera to interact with large displays. The system tracks hand movements using RGB-D cameras and uses the hand position and orientation to control the movement and rotation of virtual objects in a display. It discusses approaches for modeling, recognizing, and analyzing hand gestures as well as applications of gesture-based interaction systems. The methodology presented uses color segmentation and centroid tracking of a user's hand to determine coordinates and control a virtual object similarly to a computer mouse.
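The centroid-tracking step described above, once color segmentation has produced a binary hand mask, reduces to averaging the coordinates of the foreground pixels. A minimal sketch (mask format and function name are assumptions):

```python
def centroid(mask):
    """Centroid (x, y) of True/1 pixels in a binary mask.

    mask: 2D list of 0/1 values, where 1 marks a segmented hand pixel.
    The returned point is what would drive the virtual cursor.
    """
    pts = [(x, y) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)
```

Tracking the centroid frame-to-frame gives the mouse-like motion the methodology describes.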
The human face of AI: how collective and augmented intelligence can help sol... - Elena Simperl
This document summarizes a talk on how collective and augmented intelligence can help solve societal problems. It discusses how AI depends on human input, how collective intelligence benefits AI, and provides examples of using human computation and crowdsourcing to support disaster relief and conduct urban auditing. It also describes challenges in making crowdsourcing sustainable and assessing data quality, and emphasizes the need for iterative design of human-AI systems to bring together human, collective, and computational intelligence.
Satellite and Land Cover Image Classification using Deep Learning - ijtsrd
Satellite imagery is significant for many applications, including disaster response, law enforcement and environmental monitoring. These applications rely on the manual identification of objects and facilities in the imagery. Because the geographic areas to be covered are large and the analysts available to conduct the searches are few, automation is required. Traditional object detection and classification algorithms are too inaccurate, too slow and too unreliable to solve the problem. Deep learning is a family of machine learning algorithms that can be used to automate such tasks, and it has achieved success in image classification using convolutional neural networks. The paper considers the problem of object and facility classification in satellite imagery. The system is developed using TensorFlow, XAMPP, Flask and other deep learning libraries. Roshni Rajendran | Liji Samuel, "Satellite and Land Cover Image Classification using Deep Learning", International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume 4, Issue 5, August 2020. URL: https://www.ijtsrd.com/papers/ijtsrd32912.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/32912/satellite-and-land-cover-image-classification-using-deep-learning/roshni-rajendran
1) Deep learning is being applied to tasks in Earth observation like land cover mapping, vegetation biomass estimation, 3D building reconstruction, anomaly detection, and simulating remote sensing images.
2) There are unique challenges in applying deep learning to Earth observation data including the curved surface of the Earth, different acquisition geometries, sparse and heterogeneous data, and integrating multiple data sources and dimensions.
3) Examples of deep learning applications presented include using convolutional autoencoders to detect anomalies in remote sensing images, incorporating Lidar data to improve biomass estimation from SAR images, and using generative models to simulate SAR images from optical images.
Open Topology: A Toolkit for Brain Isosurface Correction - Kitware
The document discusses Open Topology, a toolkit for correcting brain isosurface meshes extracted from MRI images. It describes how the toolkit uses a half-edge data structure and algorithms for handle detection, embracing handles, holding them tight, and filling handles to correct topological errors in the isosurfaces and generate watertight meshes. The toolkit aims to correctly handle all topological errors in brain isosurfaces rapidly and is open source.
Interactive Exploration of Geospatial Network Visualization - Till Nagel
This document summarizes research into improving interactive visualization of geospatial network data on multi-touch tabletop displays. An initial prototype had usability issues around selecting individual affiliations clustered together and clarity of connection strengths. A second prototype addressed these by allowing exploded clusters and visualizing connection quality. User studies found participants engaged in social discussion and storytelling using the improved visualization. Future work aims to further enhance dense data selection and evaluate weighted edge representation.
The document summarizes a research paper titled "HON4D: Histogram of Oriented 4D Normals for Activity Recognition from Depth Sequences". It proposes a novel descriptor called HON4D that encodes the distribution of surface normal orientations in a 4D space of depth, time, and spatial coordinates for activity recognition from depth image sequences. The 4D space is quantized using the vertices of a polychoron structure to create bins. This allows the HON4D descriptor to capture more complex and articulated motions than existing holistic approaches. Evaluation shows it outperforms these prior methods and can also be adapted for unaligned dataset recognition.
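HON4D bins surface normals in the 4D space of depth, time and image coordinates; the spatial part of such a normal can be estimated from a single depth map by central differences. A rough sketch of that one step only (the 4D extension over time and the polychoron quantization are omitted, and the names are illustrative):

```python
import math


def depth_normals(depth):
    """Per-pixel unit surface normals of a 2D depth map.

    Uses central differences for the depth gradient; border pixels are
    skipped. For a surface z = f(x, y), the (unnormalized) normal is
    n = (-dz/dx, -dz/dy, 1), which is then scaled to unit length.
    Returns a dict mapping (row, col) -> (nx, ny, nz).
    """
    h, w = len(depth), len(depth[0])
    normals = {}
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dzdx = (depth[y][x + 1] - depth[y][x - 1]) / 2.0
            dzdy = (depth[y + 1][x] - depth[y - 1][x]) / 2.0
            n = (-dzdx, -dzdy, 1.0)
            norm = math.sqrt(sum(c * c for c in n))
            normals[(y, x)] = tuple(c / norm for c in n)
    return normals
```

A flat depth map yields normals pointing straight at the camera, (0, 0, 1); a plane tilted along x tips the normal toward -x, which is the orientation information the HON4D histogram accumulates.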
Metric-based Few-shot Classification in Remote Sensing Image
Safety-critical Policy Iteration Algorithm for Control under Model Uncertainty
Elderly Fall Detection by Sensitive Features Based on Image Processing and Machine Learning
Efficient Parallel Processing of k-Nearest Neighbor Queries by Using a Centroid-based and Hierarchical Clustering Algorithm
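A common way to speed up k-nearest-neighbor queries with clustering, as the title above suggests: visit clusters in order of centroid distance and skip any cluster whose centroid distance minus its radius already exceeds the current k-th best distance. This is a sketch of that generic pruning idea under assumed data structures, not the paper's exact algorithm:

```python
import math


def knn_query(query, clusters, k):
    """k nearest points to `query`, pruned by cluster centroids.

    clusters: list of dicts with keys "centroid" (point), "radius"
    (max distance from centroid to any member), and "points".
    A cluster is skipped when even its closest possible member
    (centroid distance minus radius) cannot beat the current k-th best.
    """
    order = sorted(clusters, key=lambda c: math.dist(query, c["centroid"]))
    best = []  # (distance, point), kept sorted, length <= k
    for cluster in order:
        lower_bound = math.dist(query, cluster["centroid"]) - cluster["radius"]
        if len(best) == k and lower_bound > best[-1][0]:
            continue  # cluster cannot improve the answer
        for p in cluster["points"]:
            best.append((math.dist(query, p), p))
        best.sort(key=lambda t: t[0])
        del best[k:]
    return [p for _, p in best]
```

Because each cluster's scan is independent, the unpruned clusters could also be searched in parallel and the partial results merged, which is where the parallelism in the title would come in.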
This document summarizes a survey paper on collaborative work in augmented reality. The paper reviews 65 papers on AR and CSCW systems published between 2008 and 2019. It introduces fundamental concepts of AR and CSCW and provides a taxonomy of AR-CSCW systems based on time, space, roles and the technology used. The survey analyzes examples of both asynchronous and synchronous collaboration along spatial and temporal dimensions, and discusses design considerations and remaining research challenges in collaborative AR systems.
Final presentation for Ordnance Survey sponsored MSc Project - Iris Kramer
MSc Archaeological Computing (GIS and Survey), University of Southampton.
“An archaeological reaction to the remote sensing data explosion. Reviewing the research on semi-automated pattern recognition and assessing the potential to integrate artificial intelligence”
IRJET - Identification of Missing Person in the Crowd using Pretrained Neu... - IRJET Journal
The document describes a proposed system to identify missing persons in crowded areas using pretrained convolutional neural networks. The system would involve collecting images of missing persons from different angles to create a dataset for training. An AlexNet pretrained neural network would then be used to detect faces in live video captured by a drone camera of crowded areas. Detected faces would be cropped, stored in a database, and used to further train the network. During testing, the system could identify missing persons by displaying their images when detected in the crowd. The goal of the system is to help police efficiently locate missing people in crowded public settings like festivals or meetings.
COGNITIVE SPACE IN THE INTERACTIVE MOVIE MAP: AN INVESTIGATION OF SPATIAL LEA... - michelafelici1
The document describes the development and implementation of an interactive movie map system. The system allows users to virtually navigate an unfamiliar urban environment through street-level video footage and aerial photos accessed from a video disc. Users can travel through sequences of photographic footage, view maps and data, and change their viewpoint and route. The system was developed using footage of Aspen, Colorado filmed from streets and helicopters. It is intended as a research tool to study how users acquire spatial knowledge of an unfamiliar place through interaction with the system.
University of Nottingham - NGI Geospatial Science Example Activities - Jeremy Morley
The document discusses research areas and projects at the Nottingham Geospatial Institute including spatial data infrastructures, geospatial analysis, openness and interoperability, location-based services, geosemantics, geoinformatics, 3D GIS, crowdsourcing, planetary, and agricultural applications. It provides examples of PhD projects on topics such as crowd-sourcing 3D building interiors, engaging communities for mapping water supplies, and identifying geological features on Mars using crowd-sourcing. Current activities and projects are also listed relating to areas of crowdsourcing, pervasive computing, semantics, spatial interfaces, 3D modeling, open source software and data, and sensor web technologies.
This document discusses using wavelet domain saliency maps for secret communication in RGB images. It proposes a method to compute saliency maps using both approximation and detail coefficients from discrete wavelet transforms of the color channels. Higher numbers of secret bits would be embedded in less salient regions according to the saliency map. The saliency map approach is compared to other methods and could make steganography more secure by embedding data in less noticeable image regions.
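A toy version of the scheme above, with assumed names, a single grayscale channel instead of RGB, and a Haar-like 2x2 detail energy standing in for the full wavelet-domain saliency map: compute per-block detail energy (high detail = salient), then write secret bits into pixel LSBs of the least-salient blocks first, as the summary describes.

```python
def block_saliency(gray):
    """Haar-style detail energy per 2x2 block of a grayscale image.

    Sums the absolute horizontal, vertical and diagonal differences,
    mimicking the DWT detail coefficients used as a saliency proxy.
    Returns a dict mapping (row, col) of each block's top-left corner
    to its energy.
    """
    h, w = len(gray), len(gray[0])
    sal = {}
    for y in range(0, h - 1, 2):
        for x in range(0, w - 1, 2):
            a, b = gray[y][x], gray[y][x + 1]
            c, d = gray[y + 1][x], gray[y + 1][x + 1]
            sal[(y, x)] = (abs((a + c) - (b + d))    # horizontal detail
                           + abs((a + b) - (c + d))  # vertical detail
                           + abs((a + d) - (b + c))) # diagonal detail
    return sal


def embed_bits(gray, bits):
    """Write secret bits (0/1 ints) into pixel LSBs, least-salient blocks first."""
    img = [row[:] for row in gray]
    sal = block_saliency(img)
    it = iter(bits)
    for (y, x) in sorted(sal, key=sal.get):
        for dy in range(2):
            for dx in range(2):
                try:
                    bit = next(it)
                except StopIteration:
                    return img
                img[y + dy][x + dx] = (img[y + dy][x + dx] & ~1) | bit
    return img
```

Embedding in low-detail regions first keeps changes in smooth areas, which is the opposite of what a casual visual inspection would scrutinize; the document's actual method works on the DWT coefficients of all three color channels.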
COMP lecture 4 given by Bruce Thomas on August 16th 2017 at the University of South Australia about 3D User Interfaces for VR. Slides prepared by Mark Billinghurst.
Robotics and Education – EduRob Project Results Launch
10:45 Introduction to the EDUROB Project (Professor Penny Standen)
11:00 Robotic Learning Demos (Andy Burton, Nick Shopland, Steve Battersby)
11:30 Robots in Schools – initial findings (Joanna Kossewska, Lorenzo Desideri) See also ‘Education of children with disabilities using NAO robot mediation – the Polish experience’ - Joanna Kossewska, Elżbieta Lubińska-Kościółek, Tamara Cierpiałowska, Sylwia Niemiec-Elanany, Piotr Migo, Remigiusz Kijak (Pedagogical University of Krakow, Poland)
12:00 Interactive hands-on sessions with the robots
12:30 Discussion with attendees re: potential impact on educational practice and pedagogy (led by Penny Standen/Tom Hughes Roberts/Andrean Lazarov)
http://edurob.eu/
This project (543577-LLP-1-2013-1-UK-KA3-KA3MP) has been funded with support from the European Commission [Lifelong Learning Programme of the European Union]. This website reflects the views only of the author, and the European Commission cannot be held responsible for any use which may be made of the information contained therein.
Interactive Technologies and Games (ITAG) Conference 2016
Health, Disability and Education. Dates: Wednesday 26 October 2016 - Thursday 27 October 2016. Location: The Council House, NG1 2DT.
Educational Robotics for Students with disabilities (EDUROB) - brochure
http://edurob.eu/
COGNITIVE SPACE IN THE INTERACTIVE MOVIE MAP: AN INVESTIGATION OF SPATIAL LEA...michelafelici1
The document describes the development and implementation of an interactive movie map system. The system allows users to virtually navigate an unfamiliar urban environment through street-level video footage and aerial photos accessed from a video disc. Users can travel through sequences of photographic footage, view maps and data, and change their viewpoint and route. The system was developed using footage of Aspen, Colorado filmed from streets and helicopters. It is intended as a research tool to study how users acquire spatial knowledge of an unfamiliar place through interaction with the system.
University of Nottingham - NGI Geospatial Science Example ActivitiesJeremy Morley
The document discusses research areas and projects at the Nottingham Geospatial Institute including spatial data infrastructures, geospatial analysis, openness and interoperability, location-based services, geosemantics, geoinformatics, 3D GIS, crowdsourcing, planetary, and agricultural applications. It provides examples of PhD projects on topics such as crowd-sourcing 3D building interiors, engaging communities for mapping water supplies, and identifying geological features on Mars using crowd-sourcing. Current activities and projects are also listed relating to areas of crowdsourcing, pervasive computing, semantics, spatial interfaces, 3D modeling, open source software and data, and sensor web technologies.
This document discusses using wavelet domain saliency maps for secret communication in RGB images. It proposes a method to compute saliency maps using both approximation and detail coefficients from discrete wavelet transforms of the color channels. Higher numbers of secret bits would be embedded in less salient regions according to the saliency map. The saliency map approach is compared to other methods and could make steganography more secure by embedding data in less noticeable image regions.
COMP lecture 4 given by Bruce Thomas on August 16th 2017 at the University of South Australia about 3D User Interfaces for VR. Slides prepared by Mark Billinghurst.
Using Wii Technology to Explore Real Spaces Via Virtual Environments for People Who Are Blind
1. Using Wii Technology to Explore Real Spaces Via Virtual Environments for People Who Are Blind
Gedalevitz, Lahav (School of Education, Tel Aviv University, Tel Aviv, Israel)
Battersby, Brown, Evett and Merritt (Computing and Technology Team, Nottingham Trent University, Nottingham, UK)
2. Research Goals
Understand whether blind people can construct a cognitive map by exploring an unknown space using the Virtual-cane (a Wii-based VE), and later apply it in the real space
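The Virtual-cane pairs a Wii-style controller's motion sensing and rumble feedback with a virtual environment, so a user can sweep a virtual space much as they would sweep a white cane. Purely as an illustration of that idea (this is not the authors' implementation; the obstacle positions, cane reach, angular tolerance, and intensity mapping below are all invented), a scan-to-feedback loop might look like:

```python
import math

# Hypothetical sketch of a virtual-cane feedback loop: the user's arm swing
# (as sensed by a Wiimote-like controller) sweeps a ray through the virtual
# environment, and nearby virtual objects trigger vibration feedback.
# All positions, thresholds, and the intensity mapping are illustrative only.

OBSTACLES = [
    {"name": "wall", "x": 2.0, "y": 0.0},
    {"name": "table", "x": 0.4, "y": 1.0},
]

def cane_hit(user_x, user_y, yaw_rad, reach=1.2):
    """Return the nearest obstacle within cane reach along the swing direction."""
    best = None
    for ob in OBSTACLES:
        dx, dy = ob["x"] - user_x, ob["y"] - user_y
        dist = math.hypot(dx, dy)
        if dist > reach:
            continue  # beyond the cane's reach
        # count as a "hit" if the obstacle lies within ~15 deg of the swing direction
        angle = math.atan2(dy, dx)
        if abs((angle - yaw_rad + math.pi) % (2 * math.pi) - math.pi) < math.radians(15):
            if best is None or dist < best[1]:
                best = (ob["name"], dist)
    return best

def rumble_intensity(hit, reach=1.2):
    """Map hit distance to a 0..1 vibration strength: closer objects rumble harder."""
    if hit is None:
        return 0.0
    return round(1.0 - hit[1] / reach, 2)

hit = cane_hit(0.0, 0.0, math.atan2(1.0, 0.4))  # swing toward the virtual table
print(hit, rumble_intensity(hit))
```

A real system would read the controller's accelerometer/gyro to estimate the swing direction and drive the rumble motor directly; the point here is only the mapping from a virtual "tap" to graded haptic feedback.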
3. Research Questions
(1) What exploration strategies and processes do blind people use when working with the Virtual-cane?
(2) Does using the Virtual-cane contribute to the construction of a cognitive map?
(3) How does this cognitive map contribute to the blind person’s orientation performance in real spaces?
4. Last Thing First…
The Virtual-cane changed the way the participants explored VEs:
- More scanning than walking
- More object-to-object than perimeter strategy
- Long pauses
Spatial representations were achieved; the map model was the main representation
The participants’ orientation tasks in the real spaces (simple & complex) were performed correctly using a direct path
6. Participants
The participants (N=10) were adults, men and women, totally blind (both congenitally and late blind)
The participants were divided into two groups:
- Experimental group (n=5)
- Control group (n=5)
7. Variables
Exploration process: duration; exploration mode; orientation strategies; and systematic exploration
Cognitive map construction: space components and their location; spatial strategy; and spatial model
Orientation tasks performance: duration; success; type of path; and aids used
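In principle, the exploration-process variables above can be derived from a time-stamped log of a participant's actions in the VE. A minimal sketch of that idea (the event labels and the one-event-per-second log format are hypothetical, not the study's actual coding scheme):

```python
from collections import Counter

# Hypothetical per-second log of exploration events; the labels are illustrative.
log = ["walk", "walk", "scan", "scan", "scan", "pause", "scan", "walk", "pause", "scan"]

def exploration_profile(events):
    """Summarize total duration and the share of time spent in each exploration mode."""
    counts = Counter(events)
    total = len(events)
    return {
        "duration_s": total,
        "mode_share": {mode: round(n / total, 2) for mode, n in counts.items()},
    }

profile = exploration_profile(log)
print(profile)  # scanning dominates this toy log, echoing the "more scanning than walking" finding
```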
11. Procedure
Both groups: O&M questionnaire & open interview at Meeting #1
Experimental group: Meetings #1-4, training using the Virtual-cane; Meeting #5, simple space & Meeting #6, complex space: exploration task in the VE, description task, and orientation tasks in the real space
Control group: Meeting #1, simple space & Meeting #2, complex space: exploration task in the real space, description task, and orientation tasks in the real space
13. (1) What exploration strategies and processes do blind people use when working with the Virtual-cane?

Exploration duration (seconds):

Participant | Simple VE | Complex VE
1           | 1764      | 1596
2           | 2979      | 3791
3           | 2312      | 2683
4           | 3525      | 5213
5           | 1938      | 2713

[Table: per-participant shares of time in walking vs. scanning mode, spatial strategies used (perimeter, object-to-object), and pauses, for the simple and complex VEs]
14. (2) Does using the Virtual-cane contribute to the construction of a cognitive map?

[Table: for each participant and VE (simple and complex), the percentage of space components recalled, the number of estimated spatial relationships, the spatial strategy used (area, object-to-object, starting point), the spatial representation (map model, route model, or list), and the chronology (structure)]
15. (3) How does this cognitive map contribute to the blind person’s orientation performance in the real space?

Object-oriented tasks:

           Duration (s) | Success | Direct path
Simple VE
1               358     |   67%   |    33%
2               180     |   67%   |    67%
3               117     |   67%   |    67%
4               174     |  100%   |   100%
5               137     |   67%   |    67%
AVG             193     |   73%   |    67%
Complex VE
1               220     |   50%   |     0%
2               373     |   50%   |    50%
3                65     |    0%   |     0%
4               718     |    0%   |     0%
5               226     |   50%   |    50%
AVG             320     |   30%   |    20%
16. (3) How does this cognitive map contribute to the blind person’s orientation performance in the real space?

Perspective-change tasks (walking path) and pointing:

           Duration (s) | Success | Direct path | Pointing success
Simple VE
1               294     |  100%   |   100%      |  100%
2               300     |   67%   |    67%      |   67%
3               249     |   33%   |     0%      |   67%
4               576     |  100%   |    67%      |   83%
5               214     |   67%   |    67%      |   83%
AVG             327     |   73%   |    60%      |   80%
Complex VE
1               604     |   50%   |    50%      |   50%
2               639     |  100%   |    50%      |    0%
3               529     |   50%   |     0%      |   17%
4               665     |  100%   |   100%      |   33%
5               322     |   50%   |    50%      |   50%
AVG             552     |   70%   |    50%      |   30%
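"Direct path" in the orientation-task results refers to whether a participant walked straight to the target. One common way to quantify directness (shown here purely as an illustration; not necessarily the scoring used in this study) is the ratio of the straight-line distance to the distance actually walked:

```python
import math

def path_directness(points):
    """Ratio of straight-line distance (start to end) to total walked distance.
    1.0 means a perfectly direct path; lower values mean more detours."""
    walked = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    straight = math.dist(points[0], points[-1])
    return straight / walked if walked else 1.0

direct = [(0, 0), (3, 4)]            # straight to the target
detour = [(0, 0), (3, 0), (3, 4)]    # right-angle detour to the same target
print(path_directness(direct), round(path_directness(detour), 3))
```

A threshold on this ratio could then turn a walked trajectory into the binary "direct path" judgment reported in the tables.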
17. (3) How does this cognitive map contribute to the blind person’s orientation performance in the real space?

Full table: the combined perspective-change and pointing results (the same values as shown on slide 16).
18. Conclusions
The Virtual-cane changed the way the participants explored the VEs:
- More scanning than walking
- More object-to-object than perimeter strategy
- Long pauses
Spatial representations were achieved; the map model was the main representation
19. Conclusions
As the spaces became more complex, the cognitive map was less detailed
Participants managed to perform well in most of the tasks in the real simple and complex spaces
Most walking paths were direct to the object
20. Future implementation
- R&D on outdoor complex spaces
- Compare the Virtual-cane with different virtual technologies
- Improve the UI for a shorter learning process
21. Thank you for listening
Special thanks
Hadas, Steven, David, Lindsay, and Patrick
The 10 participants who came voluntarily
Itzik and Einat (video)
lahavo@post.tau.ac.il
Exploration task: the experimental group explored the VE with the Wii; the control group explored the real space. Each participant explored in their own way within a limited time (40 minutes for the simple space and 60 minutes for the complex space). Description task: verbal description. Orientation tasks: object-oriented assignments, perspective-change assignments, and point-to-the-location assignments. 90 minutes per meeting.
Although the cognitive map was less detailed as the spaces became more complex, the participants still managed to perform most of the tasks in the corresponding real space.