In this research, we explore novel Augmented Virtual Teleportation (AVT) methods based on a hybrid of Augmented Reality (AR), Virtual Reality (VR), 3D live scene capture, and multimodal interaction. Natural behavioral cues (hand gestures, eye gaze, etc.) used in face-to-face communication play an essential role in effective collaboration. In contrast, most Mixed Reality (MR) remote collaboration systems have mainly investigated computer-generated visual cues rendered as graphical objects or text for delivering instructions. In this research, we first study the natural communication cues that people use in face-to-face collaboration. We then develop a novel remote collaboration system that enables people to communicate remotely as if they were face-to-face. The system will contain two main parts: 1) live scene capture to enable real-time environment reconstruction and sharing of a user's location, and 2) multimodal input such as gaze, gesture, and physiological signals to enhance remote communication. So far we have conducted two experiments studying collaboration between a person using an AR interface and a remote user using a VR interface with multimodal input. We found that the remote collaboration system could provide a significantly stronger sense of co-presence for both the local and remote users by combining gaze and gesture cues than by using the gaze cue alone. The combined cues were also rated significantly higher than gaze cues alone in terms of the ease of conveying spatial actions. We plan to extend this system to study the effect of incorporating physiological signals into communication, especially on co-presence and usability. There are many potential applications of this research in areas such as training, tourism, entertainment, and gaming. In conclusion, this thesis aims to study the effect of incorporating multimodal input and scene capture into remote collaboration systems in terms of presence, engagement, and task efficiency.
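As a concrete illustration of how gaze and gesture cues might be fused into a single shared cue (the abstract does not specify a fusion method, so this is purely a hypothetical sketch with illustrative names), one simple option is to combine the user's gaze ray and hand-pointing ray into one 3D point of interest by taking the midpoint of their closest approach:

```python
import numpy as np

def combined_cue_target(gaze_o, gaze_d, hand_o, hand_d):
    """Midpoint of the closest approach between the gaze ray and the
    hand-pointing ray: one plausible way to fuse both cues into a
    single 3D point of interest for the remote collaborator."""
    d1 = gaze_d / np.linalg.norm(gaze_d)
    d2 = hand_d / np.linalg.norm(hand_d)
    r = gaze_o - hand_o
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2     # a == c == 1 after normalization
    d, e = d1 @ r, d2 @ r
    denom = a * c - b * b
    if abs(denom) < 1e-9:                   # near-parallel rays: fall back to gaze
        t1, t2 = -d, 0.0
    else:
        t1 = (b * e - c * d) / denom
        t2 = (a * e - b * d) / denom
    p1 = gaze_o + t1 * d1                   # closest point on the gaze ray
    p2 = hand_o + t2 * d2                   # closest point on the hand ray
    return (p1 + p2) / 2
```

When the two rays actually cross (e.g., a gaze ray along the x-axis and a hand ray pointing up through the same target), the function returns the intersection point itself; when they merely pass near each other, it returns a compromise between the two cues.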
This research will produce many benefits, such as design guidelines for future AVT systems, software libraries making it easy to create AVT systems, sample data-sets from experiments conducted, research publications, and more.
A lecture on evaluating AR interfaces, from the graduate course on Augmented Reality, taught by Mark Billinghurst from the HIT Lab NZ at the University of Canterbury.
AWE 2019 - Using AR and VR for Brain Synchronization (Mark Billinghurst)
1) The document discusses using augmented and virtual reality technologies to measure and synchronize brain activity between individuals.
2) Several studies have found that people's brain activity patterns can synchronize when they perform tasks together or interact socially.
3) The author proposes using EEG sensors integrated into VR headsets to measure brain activity during collaborative VR tasks and explore how VR environments and cues could enhance inter-brain synchronization.
4) Simulating brain synchronization between a human and virtual agent using EEG and computational models is also discussed as a direction for future research.
Designing Useful and Usable Augmented Reality Experiences (Yan Xu)
Dr. Yan Xu gave a presentation on designing useful and usable augmented reality experiences. Some key points:
- AR design requires considering the material properties of AR like spatial interaction, context awareness, and shared perspectives. Research-through-design and iterative prototyping are important approaches.
- Data-driven and theory-driven design methods can help address challenges like the chicken-and-egg problem of needing data before finalizing a design and dealing with large amounts of probabilistic or uncertain data.
- When designing for AR, it's important to consider why AR - for personal use cases, for different disciplines, and at different levels of engagement. The goal should be creating meaningful experiences that benefit people.
The document summarizes Jean Vanderdonckt's upcoming lecture on gestural interaction. It will cover the psychological, hardware, software, usage, social, and user experience dimensions of gestural interaction. On the psychological dimension, it discusses definitions of gestures and theories of gesture types. On the hardware dimension, it outlines paradigms of contact-based and contact-less gesture interaction. On the software dimension, it provides an overview of gesture recognition algorithms such as Rubine, SiGeR, LVS, and nearest neighbor classification.
2013 Lecture 6: AR User Interface Design Guidelines (Mark Billinghurst)
COSC 426 Lecture 6 on AR User Interface Design Guidelines, taught by Mark Billinghurst from the HIT Lab NZ at the University of Canterbury on August 16th, 2013.
This document provides an overview of gestural interaction and various gesture recognition techniques. It begins with definitions of gestures and how they vary based on factors like the body part used, the number of dimensions, and whether they are contact-based. It then discusses the benefits of gestures, examples of gesture recognizers like xStroke, and techniques like Rubine, SiGeR, LVS, hidden Markov models, and the $-family of recognizers. The document details properties like stroke, direction, and rotation invariance, as well as the training and recognition phases of the different recognizers.
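The $-family recognizers mentioned above are simple enough to sketch. Below is a minimal $1-style template matcher, an illustrative reconstruction rather than code from the document: it resamples a stroke, rotates it to an indicative angle (rotation invariance), scales and translates it, then classifies by nearest-neighbor average point distance against trained templates.

```python
import math

def resample(points, n=32):
    """Resample a stroke to n roughly equidistant points."""
    pts = [tuple(p) for p in points]
    total = sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))
    interval = total / (n - 1)
    out, acc = [pts[0]], 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= interval:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)   # resume measuring from the interpolated point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:        # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(points, size=100.0):
    """Rotate so the start point lies along the centroid axis
    (rotation invariance), then scale to a reference box and
    translate to the origin."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    theta = math.atan2(points[0][1] - cy, points[0][0] - cx)
    cos_t, sin_t = math.cos(-theta), math.sin(-theta)
    rot = [((x - cx) * cos_t - (y - cy) * sin_t,
            (x - cx) * sin_t + (y - cy) * cos_t) for x, y in points]
    xs = [x for x, _ in rot]
    ys = [y for _, y in rot]
    w = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) * size / w, (y - min(ys)) * size / w) for x, y in rot]

def path_distance(a, b):
    """Average point-to-point distance between two normalized strokes."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(stroke, templates):
    """Training phase: templates maps names to normalized strokes.
    Recognition phase: return the nearest template by path distance."""
    candidate = normalize(resample(stroke))
    return min(templates, key=lambda name: path_distance(candidate, templates[name]))
```

Training simply stores `normalize(resample(example))` per gesture name; recognition is a nearest-neighbor lookup, which is why these recognizers need only one or a few examples per class.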
Human Computer Interaction (HCI) is the study of how humans interact with computers and how to design interfaces so that users can interact with systems effectively, efficiently and with satisfaction. HCI aims to make computers more usable by understanding users and designing appropriate input/output devices and interaction styles. The goals of HCI include improving safety, utility, effectiveness and efficiency of computer systems to benefit both users and organizations.
The document summarizes a presentation given by Musstanser Tinauli on their research activities and experiments. It discusses their goals of understanding how interactive environments can be measured and how tools influence user behavior. It describes ongoing case studies of games, e-learning platforms and digital pens. It outlines their methodological approach and provides results from studies on a digital pen and paper system, including lessons learned. Recent publications and collaborations are also mentioned.
Lukasz Jedrzejczyk developed a haptic interface called "Privacy-Shake" that allows users to manage location privacy settings on their mobile phones through shaking and sweeping motions. He conducted a study with 16 participants to evaluate the usability of Privacy-Shake. The study found that while Privacy-Shake allowed some tasks to be completed faster than a graphical user interface, many participants struggled with some gestures and felt the interface was awkward or annoying. Jedrzejczyk plans to continue improving Privacy-Shake by adding calibration, more discreet gestures, and expanded functionality.
This presentation was on Empathic Mixed Reality, in which we applied Mixed Reality technology to Empathic Computing in our studies. We shared an overview of our research and selected findings. The talk was given at ETRI and KAIST in Daejeon, South Korea, on the 24th of May 2017.
This presentation concerns the development and evaluation of a redesign of the African Storybook initiative's online and mobile app services, which support the authoring and reading of openly licensed storybooks for literacy development in Africa. The redesign applies several cultural-historical activity theory principles, including the object of activity, tool mediation, and shared objects that are part of the third-generation activity system.
The document discusses Human Computer Interaction (HCI). It defines HCI as a discipline concerned with designing interactive computing systems for human use and studying phenomena around them. HCI draws from fields like computer science, behavioral sciences, and design. It aims to improve interactions between users and computers by making computers more usable and responsive to human needs. HCI involves methods for designing, implementing, and evaluating interfaces to minimize barriers between what users want to accomplish and how computers support users' tasks.
This document provides an overview of intelligent user interfaces and mixed realities. It discusses virtual reality (VR), augmented reality (AR), and mixed reality (MR), comparing different experiences and defining key concepts like immersion and presence. The document outlines design guidelines for VR communication and different realities, including how to avoid the uncanny valley. It also discusses perceptual models, modalities, and challenges of designing the Internet of Things for mixed realities.
Representing and Evaluating Social Context on Mobile Devices (Kris Mihalic)
The document summarizes research on representing and evaluating social context on mobile devices. It describes related work using context-aware systems and experience sampling methods. It then outlines a study that used ontologies and a mobile prototype to model relationship types, mood, communication channels, and settings based on user-initiated and system-triggered surveys. The results showed that communicated content, relationship types, and mood are impacted by social contexts, and that experience sampling is suitable for studying social contexts, while semantic web ontologies are useful for representing complex contextual models.
The study investigated how walking a virtual dog in augmented reality affects a person's behavior and perception. Participants walked with or without an AR dog that was either aware or unaware of another person. Results showed the AR dog influenced proxemics behavior like distance and gaze. There was no difference based on whether the other person saw the dog. Perceptions like feelings of co-presence were higher when the dog reacted to collisions with the other person. The study explored how virtual pets in augmented reality could impact human-animal interaction and social behaviors.
This document provides an overview of best practices for conducting mobile user experience research. It discusses the challenges of mobile research like varied devices, locations, networks and user experiences. It recommends testing across many devices and simulators, addressing network coverage issues, screening for data plans and experience. Field studies are suggested to understand true context of use. Tips include creating a "hot zone" for devices, addressing reflections, simplifying language and having prototype strategies. The document emphasizes understanding usage across many scenarios to effectively evaluate mobile products and identify opportunities for improvement.
The document discusses human-computer interaction (HCI). It provides an overview of HCI as a discipline concerned with designing interactive computing systems for human use. It also mentions the Association for Computing Machinery's Special Interest Group on Computer-Human Interaction. The document then lists some HCI resources and introduces the main focus of HCI as user interface design. It discusses who typically builds interfaces and why studying user interfaces is important.
This document summarizes a survey paper on collaborative work in augmented reality. The paper reviews 65 papers on AR and CSCW systems published between 2008 and 2019. It introduces fundamental concepts of AR and CSCW and provides a taxonomy of AR-CSCW systems based on time, space, roles, and technology used. The survey analyzes examples of both asynchronous and synchronous collaboration across spatial and temporal dimensions. It also discusses design considerations and remaining research challenges for collaborative AR systems.
This document provides an overview of a course on usability and interaction design. The course investigates how to design software that meets users' needs and goals by including usability throughout the development process. It covers principles of usability like learnability and efficiency. Students will learn how to design and conduct usability tests of a product to identify potential usability issues.
This document introduces design and development principles based on Don Norman's book "The Design of Everyday Things". It discusses key principles like visibility, feedback, affordance, mapping and consistency. It explains how these principles help address problems in user experience and interaction design. The document concludes by noting how design principles are validated through principles of usability like learnability, efficiency and satisfaction.
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of Engineering, Science and Technology, including new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in the journal can be accessed online.
HCI is the study of the interaction between humans and computers. It draws from many disciplines including cognitive psychology, computer science, design, and fine arts. The field has evolved through three waves since the 1980s from a focus on rigid guidelines and usability testing to considering context, emotions, and cultural differences. HCI professionals work in roles like interaction design, user experience design, and information architecture to create intuitive experiences for users when interacting with technology like the iPod.
The document discusses human-computer interaction and its importance in designing user interfaces. It covers key topics like the relationship between humans and computers, practical implications of HCI, disciplines that contribute to HCI, factors to consider in HCI, and models for HCI design like the waterfall model and rapid prototyping. When designing web-based courses, factors like the user (student and instructor), user interface, organization, constraints, and productivity should be considered.
INTERACT 2019 'The Science Behind User Experience Design' Course (Asad Ali Junaid)
Planning and conducting User Experience (UX) activities in a structured and scientific manner has many advantages. It is important that UX professionals understand the scientific basis of UX methods and leverage them to enhance the UX of the application being designed. It is also easier for a UX designer to get buy-in from stakeholders when their design recommendations are grounded in scientific logic and vetted by supporting data. In this course, UX-relevant scientific concepts and methods from the social sciences are presented in a way that is simple to understand and easy to assimilate.
Advances in Mixed Reality (MR) technologies are reshaping collaborative practices. The seamless integration of physical and virtual elements enhances the perception of the working environment, providing a more enriched collaborative task experience. While revealing intriguing potential across various sectors, wearing head-mounted displays (HMDs) can pose challenges in communication and in understanding others' behaviours. This paper analyses the main elements of collaborative augmented practices through the case study of Hololiver, an MR system developed to assist surgeons in planning laparoscopic liver surgeries. The work discusses guidelines for designing interfaces to preserve awareness in MR interactions.
UBIQUITOUS COMPUTING: Its Paradigm, Systems & Middleware (vivatechijri)
This paper offers a survey of ubiquitous computing research, a developing field that integrates communication technologies into routine life activities. The paper presents a typology of studies within the ubiquitous computing paradigm, outlines common architectural principles of ubiquitous systems, and examines important developments in context-aware ubiquitous systems. In addition, it provides a novel architecture for ubiquitous computing systems and an evaluation of the sensors needed for ubiquitous computing applications. The goals of this work are threefold: i) to serve as a guide for researchers who are new to ubiquitous computing and want to contribute to this research area, ii) to provide a unique system architecture for ubiquitous computing, and iii) to suggest further research directions necessary for quality-of-service assurance in ubiquitous computing.
Design and Evaluation Case Study: Evaluating The Kinect Device In The Task of... (Waqas Tariq)
This document describes a study that evaluated the Microsoft Kinect device for natural interaction in an information visualization system called MetricSPlat. The researchers hypothesized that Kinect would enable more efficient interaction than a mouse for tasks like identifying clusters and outliers in multidimensional data projections. They used a participatory design process with users to develop an interaction scheme for controlling MetricSPlat with Kinect gestures. Usability tests were conducted during design to evaluate each iteration. After finalizing the Kinect scheme, comparative usability tests were performed between Kinect and mouse. The results found that while users reported high satisfaction with Kinect, it was less efficient than the mouse in terms of task completion times and precision for the specific visualization tasks in the MetricSPlat system.
Design and Evaluation Case Study: Evaluating The Kinect Device In The Task of...Waqas Tariq
We test the hypothesis that Microsoft's Kinect device enables more efficient interaction than the commodity mouse in the context of information visualization. To this end, we used Kinect during interaction design and evaluation with an information visualization application (over agrometeorological, car, and flower datasets). The devices were tested on a visualization technique based on clouds of points (multidimensional projection) that can be manipulated by rotation, scaling, and translation. The design was carried out according to the Participatory Design technique (ISO 13407), and the evaluation comprised an extensive set of usability tests. In the tests, users reported high satisfaction scores (ease of use and preference) but also recorded low efficiency scores (time and precision). In the specific context of multidimensional-projection visualization, our conclusion is that, with respect to user acceptance, Kinect is an adequate device for natural interaction; but for desktop-based production it still cannot compete with the traditional, long-established mouse.
Design and Evaluation Case Study: Evaluating The Kinect Device In The Task of...Waqas Tariq
We verify the hypothesis that Microsoft’s Kinect device is tailored for defining more efficient interaction compared to the commodity mouse device in the context of information visualization. For this goal, we used Kinect during interaction design and evaluation considering an application on information visualization (over agrometeorological, cars, and flowers datasets). The devices were tested over a visualization technique based on clouds of points (multidimensional projection) that can be manipulated by rotation, scaling, and translation. The design was carried according to technique Participatory Design (ISO 13407) and the evaluation answered to a vast set of Usability Tests. In the tests, the users reported high satisfaction scores (easiness and preference) but, also, they signed out with low efficiency scores (time and precision). In the specific context of a multidimensional-projection visualization, our conclusion is that, in respect to user acceptance, Kinect is a device adequate for natural interaction; but, for desktop-based production, it still cannot compete with the traditional long-term mouse design.
Design and Evaluation Case Study: Evaluating The Kinect Device In The Task of...Waqas Tariq
We verify the hypothesis that Microsoft’s Kinect device is tailored for defining more efficient interaction compared to the commodity mouse device in the context of information visualization. For this goal, we used Kinect during interaction design and evaluation considering an application on information visualization (over agrometeorological, cars, and flowers datasets). The devices were tested over a visualization technique based on clouds of points (multidimensional projection) that can be manipulated by rotation, scaling, and translation. The design was carried according to technique Participatory Design (ISO 13407) and the evaluation answered to a vast set of Usability Tests. In the tests, the users reported high satisfaction scores (easiness and preference) but, also, they signed out with low efficiency scores (time and precision). In the specific context of a multidimensional-projection visualization, our conclusion is that, in respect to user acceptance, Kinect is a device adequate for natural interaction; but, for desktop-based production, it still cannot compete with the traditional long-term mouse design.
Iterative Multi-document Neural Attention for Multiple Answer PredictionClaudio Greco
Iterative Multi-document Neural Attention for Multiple Answer Prediction is a method for conversational recommender systems that can answer questions and provide recommendations. It extends previous work to leverage evidence from multiple documents. The model iteratively performs attention over the query and documents to uncover relationships. It then uses attention weights to generate relevance scores and predict multiple answers. An evaluation on a movie dialog dataset shows it outperforms baselines at question answering and recommendation tasks. Future work includes improving evidence retrieval and incorporating user preferences into the model.
This document describes a virtual supermarket simulator application created by Emil Rosenlund Høeg and Luca Mangano. The application was developed for Aalborg University's Integrated Food Studies program as a tool to study consumer behavior and decision making in a virtual environment. A usability test was conducted with 17 participants to evaluate the application based on learnability, effectiveness, efficiency, errors, and satisfaction. The results showed that participants felt limited by the application's current functionality but provided suggestions for improvements to enhance the experience. The project demonstrated potential for a virtual simulation tool but requires further refinement of the user interface, textures, and interaction techniques to achieve its goals.
A Survey on Different Approaches to Augmented Reality inthe Educational FieldIRJET Journal
This document summarizes research on using augmented reality in education. It discusses how AR can help students learn actively and motivate learning by providing realistic virtual experiences. The document reviews several studies on using AR in education. One study proposed a framework called "inverse augmented reality" that allows virtual agents to observe both virtual and real objects. Another designed an AR game to guide tourists to locations and display historical information. A third explored using blockchain to allow users to contribute AR content. The document concludes AR shows potential for improving skills like spatial ability but also faces challenges from technical issues and complexity.
This document summarizes Katrien Verbert's talk on mixed-initiative recommender systems at the 12th RecSysNL meetup. It discusses how recommender systems can increase user trust and acceptance by explaining recommendations and enabling user interaction with the recommendation process. Examples of Verbert's research include systems like TasteWeights and IntersectionExplorer that provide transparency, user control, and support for exploration in recommender interfaces. Verbert's work also examines how personal characteristics affect user experience with different types and levels of recommender system controllability.
Interacting with Smart Environments - Ph.D. Thesis PresentationLuigi De Russis
This thesis explores approaches to improve interaction between users and smart environments. It presents several contributions that address challenges in key interaction areas and provide tools and applications loosely coupled with underlying intelligent systems. The contributions are validated through user testing and publications, and address challenges like eye-based interaction, interaction with ubiquitous devices, visual programming for end-users, and incentivizing energy consumption behaviors. Future work is proposed in areas like on-body interaction and using existing sensing and actuating devices in environments.
LEARNING OF ROBOT NAVIGATION TASKS BY PROBABILISTIC NEURAL NETWORKcscpconf
This paper reports results of artificial neural network for robot navigation tasks. Machine learning methods have proven usability in many complex problems concerning mobile robots
control. In particular we deal with the well-known strategy of navigating by “wall-following”. In this study, probabilistic neural network (PNN) structure was used for robot navigation tasks.
The PNN result was compared with the results of the Logistic Perceptron, Multilayer Perceptron, Mixture of Experts and Elman neural networks and the results of the previous
studies reported focusing on robot navigation tasks and using same dataset. It was observed the PNN is the best classification accuracy with 99,635% accuracy using same dataset.
Cognitive Science Perspective on User eXperience!Hamed Abdi
How can we use research and knowledge about the brain, the visual system, memory and Emotion to design more effective products, services and systems?
In this presentation i explain how we can synthesize two different disciplinary to reach to more effectiveness in developing services, products and systems.
This document describes a new technique called "Latent Cross" for incorporating contextual data into recurrent neural network (RNN) recommender systems more effectively. The authors first demonstrate that modeling context as direct features in feed-forward neural networks is inefficient at capturing common feature interactions. They then apply this insight to design an improved RNN recommender system that uses Latent Cross. Latent Cross embeds the context and performs an element-wise product of the context embedding with the RNN's hidden states, allowing the model to better understand how context affects recommendations. The authors evaluate their approach on a large-scale RNN recommender system at YouTube and show that Latent Cross improves recommendation performance over conventional techniques.
USABILITY EVALUATION OF A CONTROL AND PROGRAMMING ENVIRONMENT FOR PROGRAMMING...ijseajournal
This paper presents an assessment of usability of Control and Programming Environment (CPE) of a
remote mobile robot. The CPE is an educational environment focused on computer programming education
that integrates a program development online tool with a remote lab. To evaluate system usability,
empirical test was conducted with computer science students in order to identify the views of users on the
system and get directions on how to improve the quality of interface use. The study used questionnaire and
observation of the evaluator. The degree of users’ satisfaction was measured by using a quantitative
approach that establishes the average ranking for each question of the questionnaire. The results indicate
that the system is simple, easy to use and suited to programming practices, however needed changes to
make it more intuitive and efficient. The realization test of usability, even with a small sample user, is
important to provide feedback on the system's user experience and help identify problems.
The Medium of Visualization for Software ComprehensionLeonel Merino
Leonel Merino is defending his PhD thesis on the medium of visualization for software comprehension. The document reviews literature on software visualization and different visualization mediums like standard screens, wall displays, virtual reality, and augmented reality. It presents results from a survey of software visualization tools showing that the majority use standard screens and the medium has not been widely considered as a factor in effectiveness. The thesis argues that the medium is an important factor and different mediums may improve effectiveness, outlining experiments comparing 3D visualizations on standard screens versus augmented reality.
An investigation into the physical build and psychological aspects of an inte...Jessica Navarro
This dissertation investigates creating an interactive information point and examines the psychological effects on users. The student aims to build an animatronic information point that tracks objects and interacts with users. Research covers object tracking hardware/software, human-computer interaction, and effects of anthropomorphism. The student will create a physical animatronic head, programming in LabVIEW and Roborealm, conduct user testing via questionnaire, and analyze the results. The dissertation aims to determine if a more lifelike interactive information point improves the user experience of conveying information.
PhD Defence: Leveraging sensing-based interaction for supporting reflection a...Simone Mora
The document summarizes Simone Mora's PhD thesis on leveraging sensing-based interaction to support reflective learning during crisis training. It outlines the problem domain, research questions, theoretical underpinnings, methodology, technology tools developed, contributions, and conclusions/future work. The research aimed to design sensing interfaces to capture experiences during crisis work and trigger reflection. Prototypes were developed and evaluated to support capturing, recreating, and generating experiences. The work provided knowledge on implementing computer-supported reflective learning models and designing experience-capturing tools for crisis workers using novel sensing interactions.
This dissertation examines supernatural and comfortable user interfaces for basic 3D interaction tasks. It is split into three main parts. The first part evaluates hover interactions and develops an adaptive interface to reduce fatigue. The second part analyzes the effects of comfort on performance and develops a supernatural selection technique. The third part analyzes travel in VR, developing and evaluating flying interfaces as well as walking-based redirection techniques. Throughout the dissertation, the goal was to develop supernatural interactions inspired by natural interactions but without physical limitations of the real world. The dissertation contributes knowledge on 3D user interfaces, selection techniques, and travel methods in virtual environments.
Similar to Using Physiological sensing and scene reconstruction in remote collaboration (20)
Elevate Your Nonprofit's Online Presence_ A Guide to Effective SEO Strategies...TechSoup
Whether you're new to SEO or looking to refine your existing strategies, this webinar will provide you with actionable insights and practical tips to elevate your nonprofit's online presence.
Andreas Schleicher presents PISA 2022 Volume III - Creative Thinking - 18 Jun...EduSkills OECD
Andreas Schleicher, Director of Education and Skills at the OECD presents at the launch of PISA 2022 Volume III - Creative Minds, Creative Schools on 18 June 2024.
Temple of Asclepius in Thrace. Excavation resultsKrassimira Luka
The temple and the sanctuary around were dedicated to Asklepios Zmidrenus. This name has been known since 1875 when an inscription dedicated to him was discovered in Rome. The inscription is dated in 227 AD and was left by soldiers originating from the city of Philippopolis (modern Plovdiv).
Chapter wise All Notes of First year Basic Civil Engineering.pptxDenish Jangid
Chapter wise All Notes of First year Basic Civil Engineering
Syllabus
Chapter-1
Introduction to objective, scope and outcome the subject
Chapter 2
Introduction: Scope and Specialization of Civil Engineering, Role of civil Engineer in Society, Impact of infrastructural development on economy of country.
Chapter 3
Surveying: Object Principles & Types of Surveying; Site Plans, Plans & Maps; Scales & Unit of different Measurements.
Linear Measurements: Instruments used. Linear Measurement by Tape, Ranging out Survey Lines and overcoming Obstructions; Measurements on sloping ground; Tape corrections, conventional symbols. Angular Measurements: Instruments used; Introduction to Compass Surveying, Bearings and Longitude & Latitude of a Line, Introduction to total station.
Levelling: Instrument used Object of levelling, Methods of levelling in brief, and Contour maps.
Chapter 4
Buildings: Selection of site for Buildings, Layout of Building Plan, Types of buildings, Plinth area, carpet area, floor space index, Introduction to building byelaws, concept of sun light & ventilation. Components of Buildings & their functions, Basic concept of R.C.C., Introduction to types of foundation
Chapter 5
Transportation: Introduction to Transportation Engineering; Traffic and Road Safety: Types and Characteristics of Various Modes of Transportation; Various Road Traffic Signs, Causes of Accidents and Road Safety Measures.
Chapter 6
Environmental Engineering: Environmental Pollution, Environmental Acts and Regulations, Functional Concepts of Ecology, Basics of Species, Biodiversity, Ecosystem, Hydrological Cycle; Chemical Cycles: Carbon, Nitrogen & Phosphorus; Energy Flow in Ecosystems.
Water Pollution: Water Quality standards, Introduction to Treatment & Disposal of Waste Water. Reuse and Saving of Water, Rain Water Harvesting. Solid Waste Management: Classification of Solid Waste, Collection, Transportation and Disposal of Solid. Recycling of Solid Waste: Energy Recovery, Sanitary Landfill, On-Site Sanitation. Air & Noise Pollution: Primary and Secondary air pollutants, Harmful effects of Air Pollution, Control of Air Pollution. . Noise Pollution Harmful Effects of noise pollution, control of noise pollution, Global warming & Climate Change, Ozone depletion, Greenhouse effect
Text Books:
1. Palancharmy, Basic Civil Engineering, McGraw Hill publishers.
2. Satheesh Gopi, Basic Civil Engineering, Pearson Publishers.
3. Ketki Rangwala Dalal, Essentials of Civil Engineering, Charotar Publishing House.
4. BCP, Surveying volume 1
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: Regeneration and Repair
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing like infection, hyperpigmentation of scar, contractures, and keloid formation.
Gender and Mental Health - Counselling and Family Therapy Applications and In...PsychoTech Services
A proprietary approach developed by bringing together the best of learning theories from Psychology, design principles from the world of visualization, and pedagogical methods from over a decade of training experience, that enables you to: Learn better, faster!
🔥🔥🔥🔥🔥🔥🔥🔥🔥
إضغ بين إيديكم من أقوى الملازم التي صممتها
ملزمة تشريح الجهاز الهيكلي (نظري 3)
💀💀💀💀💀💀💀💀💀💀
تتميز هذهِ الملزمة بعِدة مُميزات :
1- مُترجمة ترجمة تُناسب جميع المستويات
2- تحتوي على 78 رسم توضيحي لكل كلمة موجودة بالملزمة (لكل كلمة !!!!)
#فهم_ماكو_درخ
3- دقة الكتابة والصور عالية جداً جداً جداً
4- هُنالك بعض المعلومات تم توضيحها بشكل تفصيلي جداً (تُعتبر لدى الطالب أو الطالبة بإنها معلومات مُبهمة ومع ذلك تم توضيح هذهِ المعلومات المُبهمة بشكل تفصيلي جداً
5- الملزمة تشرح نفسها ب نفسها بس تكلك تعال اقراني
6- تحتوي الملزمة في اول سلايد على خارطة تتضمن جميع تفرُعات معلومات الجهاز الهيكلي المذكورة في هذهِ الملزمة
واخيراً هذهِ الملزمة حلالٌ عليكم وإتمنى منكم إن تدعولي بالخير والصحة والعافية فقط
كل التوفيق زملائي وزميلاتي ، زميلكم محمد الذهبي 💊💊
🔥🔥🔥🔥🔥🔥🔥🔥🔥
Using Physiological sensing and scene reconstruction in remote collaboration
1. Augmented Human Lab
Using Multimodal Input in Augmented Virtual Teleportation
04th November 2020
Prasanth Sasikumar. Supervisors: Mark Billinghurst, Suranga Nanayakkara, Huidong Bai
3. Motivation
“To contribute to enhancing the ability to engage as a team from anywhere in the world”
4. Outline
● Overview
● Progress so far:
○ Mixed Reality Remote Collaboration, Multimodal Input and Spatial Audio
○ Mixed Reality Training System and Physiological Sensing
● Future Works:
○ Physiological sensing
○ A framework for remote collaboration and Training
● Publications/Outcomes
● Timeline
○ Schedule and Resources.
6. Overview
"What is the effect of including multimodal
cues in remote collaboration on presence,
engagement, and task efficiency?"
Research Question
7. Overview
"How does capturing and sharing a user’s
surroundings with another person improve
the sense of presence and enhance
collaboration on physical tasks? (RQ1)"
8. Overview
"How does sharing multi-modal inputs
(Physiological Sensing and Non-verbal
Communication) affect Social Presence in
remote collaboration? (RQ2)"
14. User Study 1
● Compare user-centric and device-centric cues.
● Sharing gaze and gesture cues from the remote expert
to the local worker.
● Evaluation: performance, co-presence and user
experience.
● Recruited 10 participants
○ Completion time,
○ Networked Mind Measure of Social Presence Questionnaire
○ MEC Spatial Presence Questionnaire
○ NASA Task Load Index Questionnaire
Wearable remote fusion (Pilot Study)
Remote expert guiding local worker to complete task
15. User Study 1
● Findings:
○ Combining gaze and gesture cues provided a significantly stronger sense of co-presence for both the local and remote users than using the gaze cue alone.
○ The combined cues were also rated significantly higher than gaze alone in terms of the ease of conveying spatial actions.
● P. Sasikumar, L. Gao, H. Bai and M. Billinghurst, "Wearable RemoteFusion: A Mixed Reality Remote Collaboration System with Local Eye Gaze and Remote Hand Gesture Sharing," 2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Beijing, China, 2019, pp. 393-394, doi: 10.1109/ISMAR-Adjunct.2019.000-3.
16. User Study 2
● Live 3D panorama reconstruction for remote expert.
● Hand gesture and eye gaze sharing
● A formal user study with 24 participants
A User Study on MR Remote Collaboration
with Eye Gaze & Hand Gesture Sharing
System Overview
Camera Calibration
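To make the scene-sharing step concrete, here is a minimal sketch of how a 3D point from the reconstructed scene can be mapped onto equirectangular panorama pixel coordinates, the projection a live 3D panorama viewer typically uses. This is an illustration, not the system's actual pipeline; the function name `point_to_equirect` and the y-up, z-forward coordinate convention are assumptions.

```python
import math

def point_to_equirect(x, y, z, width, height):
    """Project a 3D point (camera-centred, y up, z forward) onto an
    equirectangular panorama of `width` x `height` pixels.
    Returns fractional (u, v) pixel coordinates."""
    r = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(x, z)      # [-pi, pi], 0 = straight ahead
    elevation = math.asin(y / r)    # [-pi/2, pi/2], 0 = horizon
    u = (azimuth / (2 * math.pi) + 0.5) * width
    v = (0.5 - elevation / math.pi) * height
    return u, v
```

A point directly in front of the camera maps to the image centre, e.g. (1024, 512) for a 2048x1024 panorama.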
17. User Study 2
● Findings:
○ Combining gaze and gesture cues provided a significantly stronger sense of co-presence for both the local and remote users than using the gaze cue alone.
○ The combined cues were also rated significantly higher than gaze alone in terms of the ease of conveying spatial actions.
● Huidong Bai, Prasanth Sasikumar, Jing Yang, and Mark Billinghurst. 2020. A User Study
on Mixed Reality Remote Collaboration with Eye Gaze and Hand Gesture Sharing. In
Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI
'20). Association for Computing Machinery, New York, NY, USA, 1–13.
DOI:https://doi.org/10.1145/3313831.3376550
18. User Study 3
● The Effect of Spatial Auditory and Visual Cues in Mixed
Reality Remote Collaboration
● MR remote collaboration system that shares both spatial
auditory and visual cues
● Two user studies: (a) Audio cues, (b) Audio + Visual Cues.
Spatial Situation Model
NASA TLX
Spatial Audio
19. User Study 3
● Improves spatial awareness in remote collaborative
tasks.
● Spatialized remote expert’s voice and auditory beacons
enabled local workers to locate occluded objects as
small as 2 cm³ with significantly stronger spatial
perception.
Spatial Audio
View frustum & hand Gesture
NASA TLX
Lego brick layout
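The spatialisation idea can be illustrated with a toy constant-power stereo panner: the azimuth from the listener to the sound source sets the left/right gain balance. This is a sketch only (a full system would presumably use an HRTF-based audio engine); the name `stereo_gains`, the 2D ground-plane coordinates, and the clamping of rear sources to the nearest side are all assumptions for the example.

```python
import math

def stereo_gains(listener_x, listener_z, src_x, src_z, yaw):
    """Constant-power stereo panning gains for a source on the ground
    plane, given the listener position and heading `yaw` (radians).
    Returns (left_gain, right_gain) with left^2 + right^2 == 1."""
    azimuth = math.atan2(src_x - listener_x, src_z - listener_z) - yaw
    # Clamp to the frontal hemisphere, then map to a pan position in [0, 1]
    azimuth = max(-math.pi / 2, min(math.pi / 2, azimuth))
    pan = 0.5 * (1.0 + math.sin(azimuth))   # 0 = hard left, 1 = hard right
    return math.cos(pan * math.pi / 2), math.sin(pan * math.pi / 2)
```

A source dead ahead produces equal gains in both ears; a source to the listener's right drives the right channel only.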
20. Working with Industry
● Engine Maintenance and Training.
● Issue: High training cost.
● Solution: Training system using volumetric playback
Industry Requirement
21. User Study 4
MR Training System
● Study the impact of volumetric playback in a
MR spatial training system.
● Four visual instruction cues were compared;
○ Annotation
○ Hand Gestures
○ Avatar Representation
○ Volumetric Playback
System Overview
Engine
23. User Study 4
MR Training System(Results)
● Volumetric instruction cues exhibit an increase
in co-presence and system usability while
reducing mental workload and frustration.
● To be published.
Mental Load (NASA TLX)
System Usability
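As a sketch of what volumetric playback involves at the data level (an assumed structure, not the training system's actual implementation), captured point-cloud frames can be stored with their timestamps and replayed by nearest-timestamp lookup:

```python
import bisect

class VolumetricRecorder:
    """Toy volumetric playback buffer: timestamped point-cloud frames
    are recorded in order, then replayed by returning the frame whose
    capture time is nearest to the requested playback time."""

    def __init__(self):
        self.timestamps = []  # capture times in seconds, ascending
        self.frames = []      # one point cloud (e.g. xyz tuples) per time

    def record(self, t, frame):
        self.timestamps.append(t)
        self.frames.append(frame)

    def frame_at(self, t):
        i = bisect.bisect_left(self.timestamps, t)
        if i == 0:
            return self.frames[0]
        if i == len(self.timestamps):
            return self.frames[-1]
        # Pick whichever neighbouring frame is closer in time
        if self.timestamps[i] - t < t - self.timestamps[i - 1]:
            return self.frames[i]
        return self.frames[i - 1]
```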
24. Industrial Survey
● Conducted and evaluated two long Zoom interviews and 8 survey responses.
● Summarized the findings and drew design
implications.
● There was no standard tool to support physical
tasks remotely.
● Design implications include: incorporating safety aspects into MR systems, progress monitoring, improving spatial accuracy, and better collaborative visualization of information.
25. Progress so far:
● The experience uses real-time depth sensing technology
and AR/VR displays to enable participants to view and
be part of tabletop conversations with people from
different cultural backgrounds, in a playful, explorative
and powerful way.
● Mairi Gunn, Huidong Bai, and Prasanth Sasikumar. 2019. Come to the Table! Haere mai ki te tēpu! In
SIGGRAPH Asia 2019 XR (SA '19). Association for Computing Machinery, New York, NY, USA, 4–5.
DOI:https://doi.org/10.1145/3355355.3361898
XR Demo (based on the training system)
26. Enhancing Remote Collaboration
● To study the usability of incorporating virtual worlds with
existing telecommunication systems.
● VIP programme.
● Literature review in progress.
Effect of MR in existing teleconferencing
solutions
Prototype MR system.
30. Planned User Study
● Physiological responses are robust indicators of
autonomic nervous system activity that is related to
emotion.
● Evaluating heart rate and skin conductance.
● Adapt virtual environments based on sensory feedback.
● Hypothesis : "Real Time multimodal input provides a more
immersive, personalised, engaging experience in remote
collaboration systems".
● Two planned user studies.
Physiological sensing.
System Overview
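A minimal sketch of the sensing-to-adaptation loop described above (illustrative only; the actual sensors, thresholds, and adaptation logic are still to be designed): heart rate is derived from R-R intervals, and the virtual environment is flagged for adaptation when arousal appears elevated. The names `heart_rate_bpm` and `adapt_environment` and the 1.2x baseline threshold are assumptions.

```python
def heart_rate_bpm(rr_intervals_ms):
    """Mean heart rate in beats per minute from R-R intervals (ms)."""
    mean_rr = sum(rr_intervals_ms) / len(rr_intervals_ms)
    return 60000.0 / mean_rr

def adapt_environment(hr_bpm, baseline_bpm, threshold=1.2):
    """Toy adaptation rule: switch the virtual scene to a calming state
    when heart rate exceeds the user's baseline by `threshold` times."""
    return "calming" if hr_bpm > baseline_bpm * threshold else "normal"
```

For example, steady 1000 ms R-R intervals give 60 bpm; a reading well above a 70 bpm baseline would trigger the calming state.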
31. Physiological sensing.
System Overview
● User Study 1
○ 12 to 15 participants
○ Test physiological cues for emotion-relevant information.
○ Exposed to various collaborative tasks developed with
generally available contextual variables.
○ Tasks will include (a) Pick and place, (b) Mechanical
assembly/disassembly and (c) Collaborative gaming.
● User Study 2
○ Evaluate the efficiency of the system in terms of task
completion time and co-presence.
○ Evaluation: Self-Assessment Manikin (SAM) questionnaire,
System Usability Questionnaire (SUS) and SSM.
● Outcome will answer part of RQ2 and RQ3.
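For reference, the System Usability Scale (SUS) named above has a fixed scoring rule (Brooke's standard formula): odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is scaled by 2.5 onto a 0-100 range. A small helper (the function name is ours) makes this concrete:

```python
def sus_score(responses):
    """System Usability Scale score from ten Likert responses (1-5).
    Odd-numbered items are positively worded, even-numbered negatively."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5
```

Full agreement with the positive items and full disagreement with the negative items yields the maximum score of 100.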
32. Progress so far:
● A dual EEG set-up allowed us to use hyperscanning
to simultaneously record the neural activity of both
participants.
● We found that similar levels of inter-brain synchrony can
be elicited in the real world and in VR for the same task.
● A. Barde, N. Saffaryazdi, P. Withana, N. Patel, P. Sasikumar and M. Billinghurst, "Inter-Brain
Connectivity: Comparisons between Real and Virtual Environments using Hyperscanning," 2019 IEEE
International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Beijing, China,
2019, pp. 338-339, doi: 10.1109/ISMAR-Adjunct.2019.00-17.
HyperScanning
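Inter-brain synchrony in hyperscanning work is commonly quantified with the phase-locking value (PLV) over paired instantaneous-phase series; whether this exact metric was used in the study above is not stated here, so treat this as a generic illustration:

```python
import cmath

def phase_locking_value(phases_a, phases_b):
    """Phase-locking value between two instantaneous-phase series
    (radians): |mean of exp(i * (a - b))|. A value of 1.0 means the
    phase difference is perfectly constant; 0.0 means no phase relation."""
    n = len(phases_a)
    s = sum(cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b))
    return abs(s) / n
```

Two signals with a constant phase offset score 1.0; phase differences spread uniformly around the circle score near 0.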
33. Progress so far:
● Participants wear electroencephalography (EEG) sensors and head-mounted displays to create music together using a
physical drum.
● Visualization reflects the synchronicity level while at the
same time training the participants to create music
together, enriching the experience and performance.
● Ryo Hajika, Kunal Gupta, Prasanth Sasikumar, and Yun Suen Pai. 2019. HyperDrum: Interactive
Synchronous Drumming in Virtual Reality using Everyday Objects. In SIGGRAPH Asia 2019 XR (SA '19).
Association for Computing Machinery, New York, NY, USA, 15–16.
DOI:https://doi.org/10.1145/3355355.3361894
XR Demo (HyperDrum)
36. End Game!
MiSo RC.
● Multimodal Input & Scene Reconstruction based Remote Collaboration system.
● Developing the framework will enable a user study:
○ 24 to 30 participants,
○ exposed to various collaborative tasks developed with generally available contextual variables,
○ tasks including (a) pick and place, (b) mechanical assembly/disassembly and (c) collaborative gaming.
● Hypothesis:
○ "Real-time multimodal input improves user experience in remote collaboration systems compared to a traditional RC
system".
● Outcome will answer RQ3.
38. Provisional goals
1. Complete a literature review of related work in (1) Use of Shared remote collaboration in AR/VR (2) Use of gaze
and gesture input in shared AR/VR (3) Rapid Scene Construction. Compile a report based on this literature review,
to the satisfaction of the advisory committee.
2. Meet end users from a target user group (e.g. utility company) and interview them to understand their needs for
remote collaboration. Write a report about this, to the satisfaction of the advisory committee.
3. Develop a system for shared remote collaboration based on scene capture, to the satisfaction of the advisory
committee.
4. Conduct a pilot test to study the effect of 3D scene sharing in remote collaboration, to the satisfaction of the
advisory committee.
39. Publications
1. P. Sasikumar, L. Gao, H. Bai and M. Billinghurst, "Wearable RemoteFusion: A Mixed Reality Remote Collaboration System with Local Eye Gaze and
Remote Hand Gesture Sharing [Poster]," 2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Beijing,
China, 2019, pp. 393-394, doi: 10.1109/ISMAR-Adjunct.2019.000-3.
2. Huidong Bai, Prasanth Sasikumar, Jing Yang, and Mark Billinghurst. 2020. A User Study on Mixed Reality Remote Collaboration with Eye Gaze and
Hand Gesture Sharing. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). Association for Computing
Machinery, New York, NY, USA, 1–13. DOI:https://doi.org/10.1145/3313831.3376550.
3. Yang, J., Sasikumar, P., Bai, H. et al. The effects of spatial auditory and visual cues on mixed reality remote collaboration. J Multimodal User
Interfaces (2020). https://doi.org/10.1007/s12193-020-00331-1
4. Hajika, R., Gupta, K., Sasikumar, P., and Pai, Y. S. (2019). HyperDrum: Interactive Synchronous Drumming in Virtual Reality using Everyday
Objects[Best XR demo paper runner up]. In SIGGRAPH Asia 2019 XR (pp. 15-16).
5. Mairi Gunn, Huidong Bai, and Prasanth Sasikumar. 2019. Come to the Table! Haere mai ki te tēpu! [Demo] In SIGGRAPH Asia 2019 XR (SA '19).
Association for Computing Machinery, New York, NY, USA, 4–5. DOI:https://doi.org/10.1145/3355355.3361898
6. A. Barde, N. Saffaryazdi, P. Withana, N. Patel, P. Sasikumar and M. Billinghurst, "Inter-Brain Connectivity: Comparisons between Real and Virtual
Environments using Hyperscanning [Poster]," 2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct),
Beijing, China, 2019, pp. 338-339, doi: 10.1109/ISMAR-Adjunct.2019.00-17.
7. Pai, Y. S., Hajika, R., Gupta, K., and Sasikumar, P. NeuralDrum: Perceiving Brain Synchronicity in XR Drumming. Accepted to the SIGGRAPH Asia 2020 Technical Communications program; to be presented on 20th November 2020.
The ability to share intimate information about human states and feelings, derived from various sensors, is an emerging paradigm called empathic computing.
Explain that when we say remote collaboration, we mean - there is a remote expert and a local worker.
How we stitch the cameras and how the world is aligned.
Technical Stuff Here
What is user-centric and what is device-centric.
No. of participants.
Tasks
Say which of the user-centric cues gives better performance.
Face-to-face, as in real-world communication.
Quick Mention about the gaze study.
We conducted this study in light of the increasing importance of the current pandemic era where travel is restricted. We collected feedback on Mixed Reality (MR) concepts for remote collaboration, and especially on the benefits, use cases features that MR systems for remote collaboration should have. We conducted and evaluated two long zoom interviews and 8 survey responses and summarized the findings and drew design implications. We found that teamwork and collaboration are essential for all the participants/organization and there were no standard tools to support physical tasks remotely. We described ideal use cases that augment existing solutions by enhancing safety, mitigating risks, improving accuracy and enabling better coordination.
With the rapid advancement of artificial intelligence technology making consumer products smart, human trust in smart machines is an important factor when designing human-computer interactions.
For users to rely on a system, they must trust the information it provides. If tasks are completed to the users' satisfaction, they will start trusting the system.