This document outlines two experiments that investigate the effect of self-animated avatars in virtual environments. The first pilot study found that the presence of an avatar improved task performance for some users, depending on individual factors like gaming experience. A follow-up experiment aimed to account for individual differences and examine how immersion and task difficulty impact avatar effects. It used an object orientation matching task with variations in avatar presence, immersion level, and task difficulty. Results showed significance in some conditions, answering questions about how avatars influence user performance and experience in virtual worlds. Future work could explore other environments, tasks, and feedback methods.
4. Presentation Outline
1. Introduction
– Motivation
– Research Problem
– Why is this important?
2. Background and Hypothesis
– Avatars
– Embodiment
– Interface
– Hypothesis
3. Experiment 1 – Pilot Study
– Introduction
– Description
– Results and Conclusion
4. Experiment 2 – Follow up Study
– Introduction
– Description
– Results and Conclusion
5. Final Discussion
– Conclusion
– Future Work
9. Motivation
• Gesture recognition
– Kinect
– Leap
– Smartphones with gyroscopes
• Modern graphics cards
• Web-based 3D graphics APIs
• Animated self-avatar based interfaces
10. Problem
• What is the effect of the presence of a self-animated avatar on user performance in a virtual environment?
– Is there an effect?
– Does the strength of the effect depend upon the level of immersion?
13. Background
• Mohler et al., The Effect of Viewing a Self-Avatar on Distance Judgments in an HMD-Based Virtual Environment
• Kadri et al., The Influence of Visual Appearance of User's Avatar on the Manipulation of Objects in Virtual Environments
• Khan et al., ViewCube: A 3D Orientation Indicator and Controller
• Ziemek et al., Evaluating the Effectiveness of Orientation Indicators with an Awareness of Individual Differences
14. Kadri et al., The Influence of Visual Appearance of User's Avatar on the Manipulation of Objects in Virtual Environments
15. Background – Embodiment
• Embodied cognition
– The role of the body in cognition
• Sanchez-Vives: Virtual Hand Illusion Induced by Visuomotor Correlations
• Margaret Wilson, Six Views of Embodied Cognition
– Cognitive work can be offloaded onto the real world
16. Background – Interface
• Lok et al., Effects of Handling Real Objects and Self-Avatar Fidelity on Cognitive Task Performance and Sense of Presence in Virtual Environments
• Ban et al., Magic Pot: Interactive Metamorphosis of the Perceived Shape
– Pseudo-haptics
• Robert J.K. Jacob, Linda E. Sibert, Daniel C. McFarlane, M. Preston Mullen, Jr., Integrality and Separability of Input Devices
17. Effect of Self-Avatars
• A familiar auxiliary orientation indicator during manipulation.
• A better sense of presence and embodiment in the virtual environment.
– Due to synchronicity between visual and proprioceptive cues.
• A more natural interface, closer to real-world performance.
18. Hypothesis
• The presence of an animated self-avatar will result in improved performance in an orientation matching task.
• The effect will be stronger with higher immersion.
19. Plan of Work
• Pilot study
– Kinect-Based 3D Object Manipulation on a Desktop Display. SAP 2012.
• Main experiment
– An improved version of the pilot.
– Two immersiveness conditions, with two visual feedback conditions in each.
21. Pilot Experiment
• Task – object rotation to match a target orientation using two modes, 'swipe' and 'twist'.
• Display conditions – self-avatar present vs. sphere present.
• Performance measure – time to completion.
• Individual difference factors – gaming experience & gender (correlated).
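The rotation task above can be sketched in code. As a minimal illustration (not the study's actual implementation), a 'twist' or 'swipe' gesture ultimately reduces to rotating the object about some axis; Rodrigues' formula applies such an axis-angle rotation to a vector:

```python
import math

def rotate(v, axis, angle):
    """Rotate vector v about a unit-length axis by angle (radians).

    Rodrigues' formula:
    v' = v cos(a) + (axis x v) sin(a) + axis (axis . v)(1 - cos(a))
    """
    ax, ay, az = axis
    vx, vy, vz = v
    c, s = math.cos(angle), math.sin(angle)
    # Cross product: axis x v
    cx, cy, cz = ay * vz - az * vy, az * vx - ax * vz, ax * vy - ay * vx
    # Dot product: axis . v
    d = ax * vx + ay * vy + az * vz
    return (vx * c + cx * s + ax * d * (1 - c),
            vy * c + cy * s + ay * d * (1 - c),
            vz * c + cz * s + az * d * (1 - c))

# A 90-degree twist about the vertical (z) axis carries x onto y.
rotated = rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)
```

Applying this per frame to every vertex (or, in practice, accumulating it in the object's transform) is all the "object rotation" half of the task requires.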
22. Introduction
• Controlled experimental evaluation of a Kinect-based user interface.
• 3D object manipulation in a virtual environment.
• Two variations – with & without self-avatars.
23. Questions
• Does a self-avatar have an observable effect on interfaces for object manipulation in virtual environments on a desktop display?
• Are there strong individual differences in the effect?
– Could make the interface more accessible.
34. Results – Manipulation modes
• Difference in manipulation mode as a function of gender/gaming experience.
35. Pilot study conclusion
• The self-avatar affected the performance of only a subset of users.
– Necessary to check for individual differences in performance data.
37. Key Learning
• There is an effect of the self-avatar display condition.
• Need to look closer at individual differences.
• Missing:
– Effect of immersiveness
– Uncorrelated individual difference factors
– Whether task difficulty interacts with display condition
41. Procedure
• Task
– Object orientation matching
– 15 objects per user trial
• User interface
– Swipe mode
– Twist mode – modified with a restriction on wrist rotation
– Automatic mode switching
– Automatic match detection
– Maximum time limit
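The "automatic match detection" step can be illustrated with quaternions. A minimal sketch, with the tolerance value being an assumption rather than a parameter from the study: the angular difference between two unit quaternions is 2·acos(|q1·q2|), and a match fires once it drops below the tolerance.

```python
import math

def angular_difference(q1, q2):
    """Angle (radians) of the relative rotation between two unit quaternions.

    For unit quaternions cos(theta / 2) = |q1 . q2|; the absolute value
    handles the double cover (q and -q encode the same rotation).
    """
    d = abs(sum(a * b for a, b in zip(q1, q2)))
    return 2.0 * math.acos(min(1.0, d))

def is_match(q_target, q_object, tolerance_deg=10.0):
    """Report a match when the object orientation is within tolerance of the target."""
    return angular_difference(q_target, q_object) <= math.radians(tolerance_deg)

identity = (1.0, 0.0, 0.0, 0.0)  # (w, x, y, z): no rotation
quarter = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))  # 90 deg about z
```

Checking this predicate once per frame, together with the maximum time limit, is enough to end a trial automatically.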
42. Design
• Two immersiveness conditions (between subjects)
– High immersiveness – with stereo mode enabled
– Low immersiveness – without stereo
• Two visual feedback conditions (between subjects)
– With self-avatar
– Without self-avatar
• Axis of rotation and amount of rotation chosen from a uniform distribution (fixed for all trials once selected)
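The last design point, sampling the axis and amount of rotation, can be sketched as follows. One caveat worth noting: drawing each axis component uniformly does not give a uniform direction on the sphere, whereas normalizing a 3D Gaussian sample does. The angle range here is an illustrative assumption, not a value from the study.

```python
import math
import random

def random_rotation(min_deg=30.0, max_deg=150.0, rng=random):
    """Sample a rotation: axis uniform on the unit sphere, angle uniform in a range.

    Normalizing a 3D Gaussian sample yields a direction uniform over the
    sphere, unlike sampling each component uniformly in [-1, 1].
    """
    x, y, z = (rng.gauss(0.0, 1.0) for _ in range(3))
    norm = math.sqrt(x * x + y * y + z * z)
    axis = (x / norm, y / norm, z / norm)
    angle = math.radians(rng.uniform(min_deg, max_deg))
    return axis, angle

# A fixed seed keeps the trial set identical for every participant,
# matching "fixed for all trials once selected".
rng = random.Random(42)
trials = [random_rotation(rng=rng) for _ in range(15)]  # 15 objects per trial
```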
48. Analysis
• One-way ANOVA – average time as the dependent variable and visual display as the independent variable
• 2 (visual display) x 2 (stereo condition) ANOVA
• ANCOVA with spatial abilities as covariates
• 2 (visual display) x 2 (task difficulty) ANOVA
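The first analysis in the list, a one-way ANOVA, can be computed by hand for intuition. A minimal sketch with made-up completion times, not data from the study:

```python
def one_way_anova(groups):
    """Return the F statistic and degrees of freedom for a one-way ANOVA.

    F = (between-group mean square) / (within-group mean square); a large F
    suggests the group means (e.g. avatar vs. no avatar) differ by more
    than chance variation would predict.
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical average completion times (seconds) per participant.
with_avatar = [12.1, 10.8, 11.5, 13.0, 12.4]
without_avatar = [14.2, 13.5, 15.1, 13.9, 14.6]
f_stat, df_b, df_w = one_way_anova([with_avatar, without_avatar])
```

In practice a statistics package (e.g. SciPy's `f_oneway`) would be used; the by-hand version just shows what the 2x2 ANOVAs above generalize.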
52. Anticipated Contribution
• An answer to the question of whether self-avatars in a virtual world have any effect.
• Does the strength vary with the level of immersion, or with
– individual differences?
– task difficulty?
• Guidance for designing new virtual worlds with avatars and designing interfaces in virtual environments.
56. Possible Extensions
• Study the effect in an HMD-based virtual environment.
• Effect on translation tasks.
• Effect on tasks involving both translation and rotation.
• Effect on size perception.
57. Future Research
• Different controller object in hand
• Automatic twist/swipe mode detection
• Replace the orientation sensor with a low-cost mobile device accelerometer
• 6DOF task
• Stereographic displays
• Symmetrical objects
• Tactile feedback
• Comprehension vs. manipulation
Evaluation of effects (due to various causes) in a task.
Mention that 'avatar' means 'self-animated avatar' from now on.
I will explain the question I am trying to answer, the problem that is expected, the idea that would lead to the solution, and the experiments that actually test it. Along the way I'll mention results from research papers that I think are relevant.
When the time comes that we come
Tie self avatar with earlier talk.
What are avatars?
Avatars are the digital representations of humans online or, as in our context, in virtual environments. Self-avatars are the first-person representations of the users themselves.
Earlier avatar studies have shown inconclusive results on spatial cognition tasks.
<break>
2. The self-avatar is used from a variety of viewpoints, and our interface renders an animated representation of the user's arm and hand. Previously, using avatars required expensive motion capture equipment, but now we have the technology to build avatar-based interfaces at a much lower cost.
That raises the question of whether there are benefits to having such a self-avatar extending into the virtual world. Could it help in offloading cognitive work or provide a frame of reference? Could it help the user perform better just by making the interface more natural?
The reason why perception in action is especially interesting now is because… I'd like to start with the motivation behind this research: we are seeing a lot of devices that can be used for gesture recognition coming out at commodity prices.
This hardware, when combined with the power of …
Not only the ability to build a new kind of interface, but also to make it easily available.
That is all the technology we need: easy to build at a low cost, and easy to make widely available.
This has become possible only recently, hence not much work has been done to understand how the presence of self-avatars affects the user's actions in a virtual world.
The effect here is measured by keeping track of the time users take to perform a task in the virtual environment.
In an earlier study here, Mohler….
Mohler – There is an effect of avatars on distance perception.
Kadri – Arrow avatars influence the way users manipulate objects. The cues have an effect; the question is whether those effects can translate into an improvement in performance.
Khan's and Tina's papers don't use avatars, but they do find an effect of having orientation indicators and show that they help in object manipulation. The idea is that the hand can also act as a familiar orientation indicator. Tina's paper additionally discusses individual differences – how different people may be helped differently by the presence of the orientation indicator.
--
Orientation indicator?
According to the theory of embodied cognition, the body plays a role in cognition (what role?).
In the virtual world, it could be possible that the avatar takes the role of the body for purposes of cognition.
Sanchez-Vives – illusions of ownership and proprioceptive displacement could be induced with only visuomotor stimulation. The lesson here is that embodiment is possible even though the avatar is located in front of the user and is not colocated, as long as we have a high-fidelity representation of the user's actions in the form of an avatar.
In Margaret Wilson's study, Six Views of Embodied Cognition, it is suggested that cognitive work can be offloaded to the environment. Here again, the avatar may facilitate this offloading by providing the user with a familiar orientation indicator for object manipulation.
Here we see that avatars essentially help make the interface with the virtual world closer to object manipulation in the real world.
Is the environment part of the cognitive system?
And since we want to make the interface as close to interacting with real world objects as possible, the last set of important background work deals with interfaces.
Lok - Manipulating a real object in hand brings performance closer to that of manipulating the object in the real world.
Magic pot – we see that the real object in hand need not have the exact shape of the virtual object being manipulated for the user to perceive the object’s shape in the VE.
Integrality – The task and the interface are both integral in nature. Match control structure of interface with perceptual structure of the task.
All these results are important to the final interface using self avatars.
Summing up, it is possible that self avatars may provide users with a familiar auxiliary orientation indicator.
How might each of these in turn affect performance?
Embodiment and presence?
The expectation is that all these effects would translate into improved performance and so the hypothesis is that…
Point 2 – especially since a higher immersion could lead to better sense of presence and embodiment.
In order to go about conducting the study, the plan of work involves two experiments-
Pilot study – to test the feasibility of the building a self avatar based object manipulation interface and running user experiments on it.
Main experiment – uses lessons learnt from the pilot study.
I would like to start off with a quick overview of the paper…
1. We have conducted a controlled…
2. The interface itself is a gesture-based interface for manipulating 3D objects in a virtual world using the Kinect sensor.
3. We compared two variations of the interface, one of which used self avatar and the other that did not.
Why did we do this?
More concretely,…
1. Does a self avatar have an observable effect on interfaces for object manipulation in virtual worlds on a desktop display?
Intuitively, having a self avatar does look like a more natural way of interacting, but we wanted to see if it translated into a measurable effect on performance on a desktop display.
<break>
2. Also, in the real world it is also important to ask - are there strong individual differences in the effect?
In other words, does performance of user groups vary significantly when using the interface? This becomes especially important when we are looking to make the interface accessible to the largest possible user group.
To evaluate the interface, we chose the orientation matching task, mainly due to the ease of measuring performance in it, and also because the task is known to be non-trivial for larger rotations.
As seen in the picture here, the screen is split into left and right regions. Objects at different orientations appear in both regions in each successive trial. The object on the right is then rotated by the user to match the orientation of the object on the left.
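Concretely, the matching criterion can be expressed as the residual rotation between the two objects. Below is a minimal NumPy sketch; the tolerance value is an illustrative assumption, not the one used in the study:

```python
import numpy as np

def rotation_angle_between(Ra, Rb):
    """Smallest rotation angle (radians) taking rotation matrix Ra to Rb."""
    R = Ra.T @ Rb
    # For a rotation matrix, trace(R) = 1 + 2*cos(theta)
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_theta)

# Example: error between the reference pose and a 90-degree yaw of the object
reference = np.eye(3)
rotated = np.array([[0.0, -1.0, 0.0],
                    [1.0,  0.0, 0.0],
                    [0.0,  0.0, 1.0]])
error = rotation_angle_between(reference, rotated)

# A trial could be considered matched once the error falls below a tolerance
# (this particular threshold is illustrative, not from the study)
TOLERANCE = np.radians(10.0)
```

The single scalar error makes "time to reach tolerance" straightforward to log per trial.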
The interface has two variations based upon display conditions or what the users see as feedback for their arm motion
The picture on the top shows the 1st variation – also called the ‘Sphere condition’ – in which a sphere of a size similar to that of the hand is rendered at the position of the right hand while interaction occurs.
The only feedback to the user in this condition is a sense of the position of the hand.
Also, in the Sphere condition, the frame of reference is unclear.
The picture below shows the other variation – also called the self avatar condition in which motion of shoulder, elbow and wrist are accurately mapped onto the avatar. Fingers, however, are not animated.
Self avatars can provide the user with an egocentric and anthropomorphic frame of reference as well as a more natural interface
-----
We used a between subjects design for assigning participants to the feedback conditions.
** Also, Male and female participants were evenly distributed for the sphere and the self avatar conditions.
----
The interface provides two modes of rotation to change the orientation of the objects.
First is Rotation along hand motion also referred to as swipe mode. [VIDEO] As you can see, the user rotates the object along an axis on the plane of the display and perpendicular to the direction of hand motion. This is similar to rotating objects using the virtual sphere method, except that we use hand gestures instead of moving the mouse pointer.
The second method of interaction is rotating the object about the axis of their wrist also referred to as twist mode. [VIDEO] This is closer to how we manipulate objects in the real world. Here the user can rotate the object about any axis just by aligning their wrist with that axis and performing a twist gesture.
** Both interaction methods were enabled in each of the display conditions and the user could select either mode at any time during a trial.
** For larger rotations, users were able to employ ratcheting, accomplishing a large rotation as a sum of smaller rotations.
** Also, as seen in the video, the color of the object changed when the avatar’s hand came close to the object, to convey a sense of contact. Users were only able to manipulate the object when the avatar’s hand was close to it.
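The swipe mode and ratcheting can be sketched with quaternion composition: each small gesture produces a rotation about an in-plane axis perpendicular to the hand motion, and successive gestures multiply together. This is a hypothetical illustration, not the study’s actual implementation; the gain value and function names are assumptions:

```python
import numpy as np

def axis_angle_quat(axis, angle):
    """Quaternion (w, x, y, z) for a rotation of `angle` about unit `axis`."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    half = angle / 2.0
    return np.concatenate(([np.cos(half)], np.sin(half) * axis))

def quat_mul(q1, q2):
    """Hamilton product: the rotation q2 followed by q1."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

# Swipe mode: hand moves in the display plane; the rotation axis lies in that
# plane, perpendicular to the motion direction (virtual-sphere style).
def swipe_rotation(hand_delta_xy, gain=0.01):   # gain is an illustrative value
    dx, dy = hand_delta_xy
    axis = np.array([-dy, dx, 0.0])             # in-plane, perpendicular to motion
    angle = gain * np.hypot(dx, dy)
    return axis_angle_quat(axis, angle)

# Ratcheting: a larger rotation accumulated as a product of smaller gestures.
q = np.array([1.0, 0.0, 0.0, 0.0])              # identity orientation
for _ in range(3):
    q = quat_mul(swipe_rotation((30.0, 0.0)), q)  # three small horizontal swipes
```

Twist mode would use the same `axis_angle_quat` with the wrist axis and twist angle taken directly from the tracked hand.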
The user had a chance to practice with a training object before the recorded trials. Each participant got 6 practice trials with a single practice object (the one shown in the picture on the left), after which they were asked to perform the task on 12 distinct trial objects, one of which is shown in the right figure.
----------
***** Completion time and number of times each mode of rotation was used were recorded for each trial for all participants. ********
----------
In the experimental setup,
We used the Kinect to recover the user’s joint orientations for animating the avatar.
In addition to the Kinect, we also used an orientation sensor strapped around the user’s hand to improve the accuracy of the wrist orientation.
And a wireless mouse was used by the user to indicate the mode of rotation for each gesture – whether along the hand motion or about the wrist axis.
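One way the Kinect wrist estimate could be combined with the strapped-on orientation sensor is spherical linear interpolation between the two readings, weighting the more accurate sensor more heavily. This is a hypothetical sketch (the source does not describe the fusion method); the sensor values and blend weight are illustrative assumptions:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    q0 = np.asarray(q0, float)
    q1 = np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:                     # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                  # nearly parallel: lerp and renormalize
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

# Hypothetical readings: Kinect wrist estimate vs. the strapped-on sensor.
kinect_wrist_q = np.array([1.0, 0.0, 0.0, 0.0])                        # identity
sensor_wrist_q = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])  # 90° yaw

# Weight the orientation sensor more heavily; 0.8 is an illustrative weight.
fused = slerp(kinect_wrist_q, sensor_wrist_q, 0.8)
```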
-----
** Geometric and display field of views were matched to improve realism and sense of embodiment of the avatar.
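Matching the geometric field of view to the display field of view amounts to setting the virtual camera’s FOV from the physical screen size and viewing distance. A minimal sketch; the screen width and viewing distance below are illustrative numbers, not the study’s setup:

```python
import math

def geometric_fov_deg(screen_width_m, viewing_distance_m):
    """Horizontal geometric field of view (degrees) of a flat display."""
    return math.degrees(2.0 * math.atan((screen_width_m / 2.0) / viewing_distance_m))

# Hypothetical numbers: a 0.5 m wide display viewed from 0.6 m away.
fov = geometric_fov_deg(0.5, 0.6)
# The virtual camera's horizontal FOV is then set to this value so that
# rendered sizes match real-world visual angles.
```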
Gender and gaming experience were also recorded in order to check for “individual differences”.
We looked at the time taken by each participant to complete all 12 trials.
Here we have plotted the average completion time for the two display conditions. The first and second bar show the average completion time for the self avatar and sphere display conditions respectively.
And we can see that there was no significant difference in performance between the sphere and avatar conditions overall.
However..
There was a difference in performance in the visual display condition as a function of gender and video game experience.
On the graph we again have average completion time for each condition, now split by gender.
It can be seen that although everyone performed similarly in the self avatar condition, female participants took noticeably longer to complete the trials in the sphere condition.
---------------split-----------------
It is important to note that gaming experience and gender in our participant pool were highly overlapping, and we cannot attribute the effect to either gaming experience or gender alone from the available data.
Here we see an effect of “individual differences” in our interface where the display condition had an effect on performance of a subset of participants.
As seen in the earlier graph, and more clearly here, gender and gaming experience significantly overlapped in our participant pool. It would be interesting to see the performance of female gamers and male non-gamers using this interface.
Generic test. Spatial abilities correlation
We also recorded the mode of rotation, indicating which mode the participants used – along the hand motion or about the wrist axis – and again we see an effect of “individual differences”. Male/gamer participants used both modes almost equally, while female/non-gamer participants relied more on rotation along the hand motion.
Ideally, a good sense of object orientation in the virtual world would afford a predisposition toward rotation about the wrist axis, as it is more efficient when used correctly. It is also closer to the natural way of manipulating objects in the real world: if the appropriate axis is known, the object can be matched to the target orientation in a single motion, since the mode supports rotation along the line of sight as well.
We compared user performance in two variations of a gesture-based interface for object manipulation in a virtual world.
1. We found that there was an “effect of self avatar on the performance of only a subset of users”.
2. Hence, when evaluating such interfaces, it is “necessary to check for individual differences in performance data”.
The latest tracking devices, such as the Kinect and the Leap, open up new ways of interacting with the virtual world. We conducted a controlled study of one such possibility, and one variation of the interface showed a significant effect of individual differences while the other did not. The key message here is that when evaluating the performance of such interfaces, it is important to keep a lookout for individual differences, which can be significant, to really understand the strengths and weaknesses of the system.
------------
The presence of a self-animated avatar showed an effect on the performance of only a subset of users.
Male gamer users did not show a significant effect.
Non-gamer female users showed an effect between the conditions.
We also did not test for immersiveness: could running on the desktop have caused the effect not to show up in all individuals? Perhaps a weak effect due to running on a desktop.
Keeping in mind the goal of the research, and the lessons learnt from the earlier experiment, the new experiment is designed as follows.
The interface is made more intuitive and easier for the users to use.
Instead of just asking about gaming experience, participants are also given a test.
The task will be to match the orientations of 15 distinct objects appearing in pairs at different orientations. The axis and the amount of rotation needed to match the object are selected from a uniform distribution, as mentioned.
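Sampling the trial rotations can be sketched as follows. This is a hypothetical illustration of one standard approach: the axis is drawn uniformly on the sphere via an isotropic Gaussian, and the angle uniformly up to a maximum (the maximum and the seed are assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)   # fixed seed so every participant sees the same set

def random_trial_rotation(max_angle=np.pi):
    """Axis uniformly distributed on the sphere; angle uniform on [0, max_angle)."""
    v = rng.normal(size=3)              # isotropic Gaussian -> uniform direction
    axis = v / np.linalg.norm(v)
    angle = rng.uniform(0.0, max_angle)
    return axis, angle

# One target rotation per trial object.
trials = [random_trial_rotation() for _ in range(15)]
```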
The display conditions, rotation modes, and experimental setup are the same as in the pilot study described above.
To analyze the data, I plan to use a one-way ANOVA, with time as the dependent variable and visual display as the independent variable, to see if there is an effect of display condition overall.
Then a multifactor ANOVA with 2 visual display conditions and 2 stereo conditions, to see if there is an effect of visual display as a function of the immersiveness condition.
Lastly, an ANCOVA with spatial abilities as covariates, used to determine whether individual-difference factors account for the visual feedback condition effects.
backed by results from controlled experiments
Interesting ideas to explore further.
Questions –
1. Benefits of a visually faithful avatar?
2. Papers showing that embodiment improves task performance.
3. Find a paper showing that higher immersiveness is better.
Future –
Now there is a way to evaluate interfaces.
Predict the performance on another task
On an HMD? Immersive environment.
Haptics & Mobile
Haptic Interfaces for Embodiment in Virtual Environments
Movement, Action, and Situation: Presence in Virtual Environments – how well the overall situation in the VE meshes with the actions afforded in the VE affects the sense of embodiment.
Relation between immersion and embodiment? Does high immersion lead to higher embodiment? Sanchez paper?