Empathic Computing and Collaborative Immersive Analytics (Mark Billinghurst)
Short talk by Mark Billinghurst on Empathic Computing and Collaborative Immersive Analytics, presented on July 28, 2022 at the SIGGRAPH 2022 conference.
Neuro-IR is a novel area of research that combines cognitive psychology, neuro-physiological methods (eye tracking, EEG, EOG, and GSR), and machine learning to understand information searchers and to improve the search experience. Neuro-IR is useful for investigating search as a learning process, for employing these sensory data to assess reading and mind-wandering, and for inferring metadata features for machine learning models. In this talk, I will introduce a unification framework for neuro-physiological data; practically, these models provide context for user interactions. I will show how we can take advantage of many existing interactions by combining various sensory platforms (e.g., Pupil Labs, Emotiv, Empatica E4). Information fusion can provide numerous benefits when combining multiple sources of neuro-physiological data; the most obvious among them is the expected performance gain from combining evidence from multiple cues. As a practical matter, the acquisition of physiological metadata remains a research frontier.
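The feature-level information fusion mentioned in the abstract can be sketched in a few lines: each sensor stream is reduced to a small feature vector, and the vectors are concatenated into one fused representation for a downstream model. The sensor names, sample values, and two-feature summary below are illustrative assumptions, not any device's actual API.

```python
# Feature-level fusion of neuro-physiological streams: summarise each sensor
# separately, then concatenate the summaries into one fused feature vector.

def extract_features(samples):
    """Summarise one sensor stream as (mean, peak-to-peak range)."""
    mean = sum(samples) / len(samples)
    return [mean, max(samples) - min(samples)]

def fuse(streams):
    """Concatenate per-sensor feature vectors into one fused vector."""
    fused = []
    for name in sorted(streams):          # stable ordering across calls
        fused.extend(extract_features(streams[name]))
    return fused

streams = {
    "eeg":   [0.1, 0.4, 0.3, 0.2],   # e.g. band-power samples (made up)
    "gsr":   [1.0, 1.2, 1.5, 1.1],   # skin-conductance samples (made up)
    "pupil": [3.2, 3.3, 3.1, 3.4],   # pupil-diameter samples (made up)
}
vector = fuse(streams)
print(len(vector))  # 3 sensors x 2 features = 6
```

A classifier trained on the fused vector can then exploit evidence from all cues at once, which is the performance gain the abstract refers to.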
Augmenting Speech-Language Rehabilitation with Brain Computer Interfaces: An ... (HCI Lab)
Presentation on Aug 7, 2015 at the 17th International Conference on Human-Computer Interaction #HCII2015 in Los Angeles, CA, USA. The paper was presented in the Universal Access in Human-Computer Interaction track, in the "Novel technologies for speech, language, attention and child development" session chaired by Prof. Margherita Antona, Foundation for Research & Technology - Hellas (FORTH), Greece. http://2015.hci.international/friday
Mindset Technologies - diagnostics / telemedicine / digital health use cases (AladarTepeleamatchma)
We leverage neuroscientific algorithms and machine vision (initially in AR/VR and later in telemedical devices) to enable the early detection of:
- neurodegenerative diseases like Alzheimer's and Parkinson's
- neurodevelopmental conditions like autism and ADHD
... by building a "pulse watch for the brain"
Abstract
Human–computer interaction is a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use. The field formally emerged from computer science, cognitive psychology and industrial design through the 1960s, formulating guidelines for the development of interactive computer systems and highlighting usability concerns for improved interfaces. Computing devices are becoming more prevalent and integrated into both our social and work spaces. HCI therefore plays an important role in ensuring that computer systems are not only functional but also respect the needs and capabilities of the humans who use them.
HCI encompasses not only ease of use but also new interaction techniques. It involves input and output devices and the interaction techniques that use them; the presentation of information; control and monitoring of the computer's actions; and the processes that developers follow when creating interfaces. In this seminar, emphasis is placed on the movement of a user's eyes, which can provide a convenient, natural, and high-bandwidth source of additional user input. Some of the human factors and technical considerations that arise in trying to use eye movements as an input medium, along with the first eye movement-based interaction techniques, are discussed in this section.
AYUSHA PATNAIK,
SEM - 6th
TRIDENT ACADEMY OF TECHNOLOGY,
BBSR
Advances in Mixed Reality (MR) technologies are reshaping collaborative practices. The seamless integration of physical and virtual elements enhances the perception of the working environment, providing a richer collaborative task experience. While revealing intriguing potential across various sectors, wearing head-mounted displays (HMDs) can pose challenges for communication and for understanding others' behaviours. This paper analyses the main elements of collaborative augmented practices through the case study of Hololiver, an MR system developed to assist surgeons in planning laparoscopic liver surgeries. The work discusses guidelines for designing interfaces that preserve awareness in MR interactions.
Description of the way in which the Software Sustainability Institute engages the research software community. It covers why, how, the programmes, how people are selected, the activities those selected do, benefits, recommendations, and more.
Emotion Detection Using Noninvasive Low-cost Sensors (Nicole Novielli)
Emotion recognition from biometrics is relevant to a wide range of application domains, including healthcare and software development. Existing approaches usually adopt multi-electrode sensors that can be expensive or uncomfortable to use in real-life situations. We investigate whether we can reliably recognize high vs. low emotional valence and arousal by relying on noninvasive low-cost EEG, EMG, and GSR sensors. We report the results of an empirical study involving 19 subjects in a laboratory setting for emotion elicitation. We achieve state-of-the-art classification performance for both valence and arousal even in a cross-subject classification setting, which eliminates the need for individual training and tuning of classification models. Furthermore, we will discuss our ongoing work on the recognition of affective and cognitive states of software engineers during their daily programming tasks.
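The cross-subject setting mentioned above is typically evaluated with leave-one-subject-out splits: train on every subject but one, test on the held-out subject, so no per-person calibration data leaks into training. A minimal sketch of that splitting logic, with made-up single-feature data and a trivial threshold classifier standing in for a real model:

```python
# Leave-one-subject-out evaluation: hold out one subject at a time so the
# model is never tuned on the person it is tested on.

def leave_one_subject_out(data):
    """Yield (held_out_subject, train_pairs, test_pairs) splits."""
    subjects = sorted(data)
    for held_out in subjects:
        train = [x for s in subjects if s != held_out for x in data[s]]
        yield held_out, train, data[held_out]

# (feature, label) pairs per subject; label 1 = high arousal (illustrative)
data = {
    "s1": [(0.2, 0), (0.9, 1)],
    "s2": [(0.1, 0), (0.8, 1)],
    "s3": [(0.3, 0), (0.7, 1)],
}

for subject, train, test in leave_one_subject_out(data):
    threshold = sum(f for f, _ in train) / len(train)  # "trained" cut-off
    correct = sum((f > threshold) == bool(y) for f, y in test)
    print(subject, correct / len(test))
```

In the study above, a real classifier and real physiological features take the place of the threshold, but the splitting discipline is the same.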
Designing Interactive Visualisations to Solve Analytical Problems in Biology (Cagatay Turkay)
Slides from my talk at the Cambridge Visualization of Biological Information Meetup held in January 2015. I talk about why biology is exciting for visualisation researchers and go through examples where visualisation can help experts understand their data.
The presentation was given at the InfinIT conference SummIT 2013, held on 22 May 2013 at Axelborg in Copenhagen. Read more about the conference here: http://www.infinit.dk/dk/arrangementer/tidligere_arrangementer/summit_2013.htm
Use cases enabled by neuroscientific algorithms in AR/VR:
1) early detection of Alzheimer's and Parkinson's
2) early detection of ADHD and autism
3) medical eLearning
Harnessing WebAssembly for Real-time Stateless Streaming Pipelines (Christina Lin)
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. In this talk, we'll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka-compatible (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
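As a concrete picture of what "stateless" means here: the transform sees one record at a time, keeps nothing between calls, and returns the transformed record. The sketch below illustrates that contract with a masking step in plain Python; an actual Redpanda transform would be compiled to WASM against its transform SDK, whose API is not shown here.

```python
# A stateless streaming transform: pure function of one record, no shared
# state between records. That property is what lets such a step run as a
# small WASM module inside the broker itself.
import json

def mask_record(raw: bytes) -> bytes:
    """Redact the 'email' field of a JSON record."""
    record = json.loads(raw)
    if "email" in record:
        record["email"] = "***"
    return json.dumps(record, sort_keys=True).encode()

out = mask_record(b'{"user": "ada", "email": "ada@example.com"}')
print(out)  # b'{"email": "***", "user": "ada"}'
```

Because the function carries no state, the broker can run many instances in parallel and restart them freely, which is what makes low-latency, high-volume pipelines practical.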
Online aptitude test management system project report.pdf (Kamal Acharya)
The purpose of the online aptitude test system is to conduct tests online in an efficient manner, with no time wasted on checking papers. Its main objective is to evaluate candidates thoroughly through a fully automated system that not only saves a lot of time but also gives fast results. Students can take papers at their own convenience and time, with no need for extras such as paper and pens. The system can be used in educational institutions as well as in the corporate world, anywhere and at any time, since it is a web-based application (the user's location doesn't matter), and there is no requirement that the examiner be present when the candidate takes the test.
Every time lecturers or professors need to conduct examinations, they have to sit down, think about the questions, and create a whole new set of questions for each and every exam. In some cases a professor may want to give an open-book online exam, which the student can take any time, anywhere, but with the questions to be answered within a limited time period; the professor may also want to change the sequence of questions for every student. The problem a student has is that once an exam date is declared, the student has to take it then, with no way to take it at some other time. This project creates an interface for the examiner to create and store questions in a repository, and an interface for the student to take examinations at their convenience, with questions and/or exams optionally timed, thereby creating an application that examiners and examinees can use simultaneously.
The examination system is very useful for teachers and professors, who are responsible for writing question papers. In the conventional method, you write the question paper on paper, keep question papers separate from answers, and store all this information in a locker to avoid unauthorized access. Using the examination system, you can create a question paper and everything will be written to a single exam file in encrypted format. You can set the general and administrator passwords to avoid unauthorized access to your question paper. Every time you start the examination, the program shuffles all the questions and selects them randomly from the database, which reduces the chances of memorizing the questions.
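The shuffle-and-random-selection step described above can be sketched as follows; the question bank, candidate ID, and per-candidate seeding scheme are illustrative choices, not taken from the report.

```python
# Build a per-candidate paper: sample a fixed number of questions from the
# bank without replacement, then shuffle their order. Seeding the generator
# with the candidate ID makes each sitting reproducible while still giving
# different candidates different papers.
import random

def build_paper(question_bank, n, candidate_id):
    rng = random.Random(candidate_id)     # per-candidate seed (assumption)
    paper = rng.sample(question_bank, n)  # n distinct questions
    rng.shuffle(paper)                    # randomise their order
    return paper

bank = [f"Q{i}" for i in range(1, 21)]
paper = build_paper(bank, 5, candidate_id="roll-42")
print(paper)
```

Sampling without replacement guarantees no question repeats within one paper, and the seeded generator lets the examiner regenerate any candidate's exact paper for auditing.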
Literature Review Basics and Understanding Reference Management.pptx (Dr Ramhari Poudyal)
A three-day training on academic research focusing on analytical tools, held at United Technical College with the support of the University Grants Commission, Nepal, 24-26 May 2024.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has received fewer comprehensive studies and sustainability assessments.
1. October 1, 2018
Mark F. Bocko | Professor
Department: Electrical and Computer Engineering
Focus: Spatial audio
Pilot Project: “Development of a quantitative framework for spatial audio characterization”
Project Goals
• Develop quantitative methods to assess spatial audio rendering systems
• Incorporate quantitative binaural hearing models into audio system design tools
• Predict what listeners will report hearing (locations, spatial extent of sources, diffusiveness)
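One ingredient of such a quantitative binaural model is the interaural time difference (ITD): the lag between the left- and right-ear signals, which listeners use to localize sources. The sketch below estimates it by brute-force cross-correlation on synthetic pulses; a real model would operate per frequency band and also weigh interaural level differences. All signals and parameters here are made up for illustration.

```python
# Estimate the interaural time difference (ITD) as the lag (in samples) of
# the right-ear signal relative to the left that maximises their
# cross-correlation.

def itd_lag(left, right, max_lag):
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        # correlate left[i] with right[i + lag], staying in bounds
        score = sum(left[i] * right[i + lag]
                    for i in range(len(left))
                    if 0 <= i + lag < len(right))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

left = [0, 0, 1, 2, 1, 0, 0, 0]
right = [0, 0, 0, 0, 1, 2, 1, 0]   # same pulse, delayed by 2 samples
print(itd_lag(left, right, max_lag=4))  # 2
```

Converting the lag to seconds (lag / sample rate) and mapping it through a head model gives a predicted azimuth, one of the listener reports such a framework aims to predict.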
2. October 2018
Geunyoung Yoon | Professor
Department: Ophthalmology, The Institute of Optics, Center for Visual Science, Biomedical Engineering
Focus: Physiological Optics, Vision Correction, Visual Psychophysics, Optical Imaging, Biomechanics, Eye Diseases
Lab website: http://www.cvs.rochester.edu/yoonlab/
RESEARCH TOPICS:
OCULAR OPTICS & CUSTOMIZED VISION CORRECTION
• Eye's aberration and visual quality
• Ocular wavefront sensing
• Advanced ophthalmic lenses
• Sport vision
• Optical metrology
OCULAR OPTICS and VISION
• Adaptive optics vision simulator
• Adaptation to habitual optics
• Neural processing and perception
• Binocular integration
• Neural plasticity
• Stereopsis
ANTERIOR SEGMENT IMAGING
• Mechanisms of pathologic corneal diseases
• Ocular surface diseases and dry eye
• Corneal biomechanics
• Multimodal high-resolution ocular imaging
• Advanced cataract surgery
ACCOMMODATION & PRESBYOPIA
• Vergence-accommodation conflict under VR/AR environments
• Extended depth of focus technology
• Accommodating intraocular lens
• Peripheral vision and optics
• Binocular accommodation
• Emmetropization / Refractive error
3. October 2018
Zhiyao Duan | Assistant Professor
Department: Electrical and Computer Engineering
Focus: Computer Audition, Music Information Retrieval, Audio-Visual Analysis, Audio for AR/VR
Lab: Audio Information Research (AIR) Lab
RESEARCH TOPICS:
MUSIC INFORMATION RETRIEVAL
• Music transcription
• Audio-score alignment
• Music source separation
• Music generation
AUDIO-VISUAL PROCESSING
• AV analysis of music performances
• Visually informed source separation
• Music performance generation
• Talking face generation from speech
ENVIRONMENTAL SOUND UNDERSTANDING
• Sound search by vocal imitation
• Sound event detection
• Source localization and tracking
SPEECH PROCESSING
• Speech emotion classification
• Speaker recognition and diarization
• Speech enhancement
4. October 2018
Andrew White | Assistant Professor
Department: Chemical Engineering
Focus: Role of AR/VR in higher education, computational chemistry
Courses: CHE 116 – Numerical Methods & Statistics | CHE 477 – Advanced Numerical Methods
Role of AR and VR in Higher Education
• Collaborative project exploring the use of AR and VR in the STEM curriculum
• Emphasis on a tactile, collaborative, interactive replacement for traditional hands-on labs
• Intended for topics that are abstract or for which labs are impossible, e.g. quantum mechanics or solving ODEs
• Provides a new tool for outreach to generate enthusiasm for STEM careers
RESEARCH TOPICS:
Computational Chemistry
• Computer simulation of the dynamics of molecules at the level of atoms
• Provides insight at a length scale inaccessible to experiments
• Requires careful design of scientifically accurate, highly parallel algorithms and code
• Uses techniques like multi-scale modeling to study complex phenomena like protein adsorption
• High accuracy through novel methods to incorporate data derived from experiments into simulations
5. October 2018
Yuhao Zhu | Assistant Professor
Department: Computer Science, Goergen Institute for Data Science
Focus: Architecting next-generation computer hardware for an AR/VR-driven future!
RESEARCH TOPICS: Co-Design Computing Systems with Non-computing Systems for AR/VR
• Optical system + image sensor + imaging + computer vision (w/ Jannick Rolland)
• Does the optimal design of optical system change with the specific vision task?
• Does the optimal design of vision hardware change with optical systems?
• How to design an end-to-end system with specific tasks and quality metrics in mind?
• How to dynamically reconfigure both optical systems and computer systems on the fly?
6. October 2018
Michele Rucci | Professor
Department: Brain & Cognitive Sciences
Focus: Action & perception, visual perception in humans and machines, human behavior, sensory processing
RESEARCH TOPICS:
COMPUTATIONAL MECHANISMS
• Computational goals of visual processing
• Establishment of spatial representations
• Multimodal integration
HUMAN BEHAVIOR
• Action properties
• Identification of visuomotor strategies
• Limits of oculomotor control
• Natural head-eye coordination
VISUAL DEVELOPMENT
• Consequences of eye movements in visual maturation
• Abnormal eye movements
• Myopia
ACTIVE VISION
• Vision as a sensorimotor integrated process
• Dependence of visual functions on eye movements
• Disruption of the oculomotor cycle via gaze-contingent control
7. Building Virtual Concert Halls with Spatial Audio
Ming-Lun Lee, Matthew Brown, Zhiyao Duan, and Steven Philbert
Department of Electrical and Computer Engineering
Eastman School of Music
8. zbai@cs.rochester.edu http://zhenbai.io October 2018
Zhen Bai | Assistant Professor
Department: Computer Science
Focus: Human-Computer Interaction, Augmented Reality, Tangible User Interface, Embodied Conversational Agent, Education Technology, Computer-Supported Collaboration, Design for Diversity
RESEARCH TOPICS: Augmented Reality - Theory of Mind, Symbolic Play, Children with Autism Spectrum Condition
9. UR AR/VR Pilot Project:
Real-time synthesis of a virtual talking face from acoustic speech
• Chenliang Xu, Assistant Professor of Computer Science
• Collaborators:
• Ross Maddox, Assistant Professor of Biomedical Engineering
• Zhiyao Duan, Assistant Professor of Electrical and Computer Engineering
[Chen, Li, Maddox, Duan, and Xu, ECCV 2018]
10. October 2018
Ross Maddox | Assistant Professor
Department: Neuroscience & Biomedical Engineering
Focus: Audio-visual integration, selective attention, sound localization
RESEARCH TOPICS:
AUDIO-VISUAL INTEGRATION
• Multisensory binding and object formation
• Impact of "uninformative" visual stimuli on auditory perception
VISUAL HEARING AID
• Generate an artificial talking face from speech audio in real time
• Improve listening abilities for people with hearing impairments and attention disorders
GAZE EFFECTS ON SPATIAL HEARING
• Interaction between eye movements and auditory spatial acuity
• Benefits of directed eye gaze on speech-in-noise understanding
BRAINSTEM CODING OF SPEECH
• Use electroencephalography and novel signal processing schemes to study how the brainstem codes speech
• Investigate subcortical effects of attention and cognition
11. October 2018
Martina Poletti | Assistant Professor
Department: Neuroscience
Focus: Visual perception, eye movements, attention, eye tracking
RESEARCH TOPICS:
VISUOSPATIAL ATTENTION
• Resolution of attention in the fovea
• Attention contribution to fine spatial vision
• Pre-microsaccadic enhancements of foveal vision
HIGH ACUITY VISION
• Fine control of eye movements during high acuity tasks
• Distribution of high acuity capabilities across the fovea
• Oculomotor strategies in fine spatial vision
SPATIAL REPRESENTATIONS
• Multimodal integration
• Spatial updating across saccades
• Spatial updating during fixation
FOVEAL PRIORITY MAPS
• Driving factors
• Perceptual benefits
• Spatiotemporal dynamics
• Visual exploration at the foveal scale
12. October 2018
River Campus Libraries
Presenter: Lauren Di Monte, Director of Research Initiatives
Focus: Support AR/VR research, teaching, and learning
AR/VR Creation and Exploration Space
Enhance access and support • Provide on-ramps • Grow a community of practice
13. UR Medicine Health Lab (Hasselberg, Mitten, Dasilva)
• Expertise in technology innovation to improve delivery of care
Department of Psychiatry (Cross, Hasselberg)
• Expertise in cognitive behavior therapy, and implementation science
Eastman School of Music (Brown, Winders)
• Expertise in visual and audio therapeutic functions
Art, Science, & Engineering (Luo)
• Expertise in computer science and smartphone sensors
Cognitive Behavior Therapy Mobile App with Embedded Virtual Reality
14. October 2018
Edmund Lalor | Associate Professor
Department: Biomedical Engineering and Neuroscience
Focus: Human neuroscience, sensory processing, perception, cognition
RESEARCH TOPICS:
HUMAN SENSORY PROCESSING
• Hierarchical processing of natural audio and visual stimuli
• The effects of knowledge and prediction on early sensory processing
MULTISENSORY INTEGRATION
• Audiovisual speech processing
• The effect of visual input on auditory scene segregation
NEURAL SIGNAL DECODING
• Methods for decoding multivariate neural data
• Decoding representations of acoustic space in the cortex
• Decoding selective attention in real time
ATTENTION
• Visual spatial attention
• Auditory selective attention, particularly to speech (i.e., the cocktail party problem)
15. October 2018
Ania Busza | Assistant Professor
Department: Neurology (Stroke division)
Focus: Stroke Rehabilitation
KEY ISSUES IN STROKE REHABILITATION AND RECOVERY
• What factors predict stroke recovery?
• What is the best timing/dose for rehabilitation therapies?
• How can we use new technologies to create more effective therapeutics?
RESEARCH TOPICS:
(1) Surface-EMG controlled virtual arm
(2) Using superficial sensors to quantify rehab "dose"
(3) EMG analysis of motor system fatigue during learning
16. October 2018
Michael Jarvis | Associate Professor
Director, Digital Media Studies Program
Departments: History & ATHS
RESEARCH: VR, Visualization & Analysis of Cultural Heritage Sites; VR and Public History
UR/University of Ghana
Digital Archaeology & Structural Analysis
Field Research, 2017-present:
VR applications
17. Jessica M. Keith | Doctoral Candidate
Department: Clinical & Social Sciences in Psychology; Research Lab: Dev. Neuropsychology Lab (PI: Loisa Bennetto, PhD)
Self-regulation processes underlying social motivation in ASD: The influence of social context
Aims 1 & 2: Self-regulation (executive functioning & autonomic regulation) → social motivation
Aim 3: Self-regulation (autonomic & behavioral reactivity/regulation) during an immersive, 360° social film → social motivation
Measuring: autonomic reactivity/regulation & head orientation
• Custom-made video (~6 min)
• Spatial audio
• Progressively more directive social overtures made toward participant
1. Decision Making
2. Inhibitory Control
3. Cognitive Flexibility
Social Motivation:
Executive Functioning Tasks (done with and without social noise): collecting HRV & EDA during tasks
18. October 2018
Benjamin T. Crane | Associate Professor
Department: Otolaryngology, Biomedical Engineering and Neuroscience
Focus: Human visual-vestibular multisensory integration, motion perception, rehabilitation of vestibular lesions
RESEARCH TOPICS:
VISUAL-VESTIBULAR INTEGRATION
• Common causality between visual and inertial stimuli
• Disambiguation of external vs. self-motion
VISUAL-VESTIBULAR ADAPTATION
• Use of visual-inertial heading offsets to adapt heading
• Exposure to a rotating environment for heading adaptation
NEW METHODS OF VESTIBULAR REHABILITATION
• Develop methods for rehabilitation of heading deviation
PATHOLOGICAL MOTION PERCEPTION
• Rotation perception
• Heading offset
• Integration with visual motion
• Effect of unilateral lesions
• Migraine
19. October 2018
Duje Tadin | Professor
Department: Brain & Cognitive Science, Center for Visual Science, Neuroscience, Ophthalmology
Co-Director, Center for Augmented and Virtual Reality (Neuroscience)
Focus: Visual perception, Multisensory processing, Brain plasticity
Conflict of interest declaration: Senior Scientist, NeuroTrainer
VISUAL PERCEPTION & COGNITION
• Motion perception
• Visual attention & awareness
• Binocular vision
• Visual adaptation
• Cognitive assessment in VR
BRAIN PLASTICITY
• Perceptual learning
• Cognitive training
• Brain training in VR
• Used in healthy adults, dementia, stroke, corneal disease, concussion, schizophrenia
METHODS
• Psychophysics
• Computational modeling
• Eye tracking
• Brain stimulation
• Neuroimaging
RESEARCH TOPICS:
MULTISENSORY PROCESSING
• Audiovisual interactions
• Vision & proprioception
• Synesthesia