This document summarizes a study that evaluated a Kinect-based user interface for 3D object manipulation in a virtual environment. The study compared two interface conditions: one with a self-avatar representation and one without. Overall, there was no significant difference in completion time between the two conditions; however, differences did emerge when the results were analyzed by factors such as gender and gaming experience. The study concluded that the effect of self-avatars is observed only for a subset of users and that individual performance differences need to be accounted for. It proposes several areas for future research, such as automatic detection of manipulation modes and the use of mobile device sensors.
Kinect Based 3D Object Manipulation on a Desktop Display
1. Kinect Based 3D Object Manipulation on a Desktop Display
Mukund Raj, Sarah H. Creem-Regehr, Kristina M. Rand, Jeanine K. Stefanucci and William B. Thompson
University of Utah
2. Introduction
• Controlled experimental evaluation of a Kinect-based user interface.
• 3D object manipulation in a virtual environment.
• Two variations: with & without self-avatars.
3. Motivation
• Availability of low-cost gesture recognition hardware.
• 3D graphics on the web platform.
[Images of low-cost gesture devices: The Leap, Microsoft Kinect, Nintendo Wii]
5. Questions
• Does a self-avatar have an observable effect on interfaces for object manipulation in virtual environments on a desktop display?
• Are there strong individual differences in the effect?
14. Results – Manipulation modes
• Difference in manipulation mode as a function of gender/gaming experience
15. Conclusion
• Effect of self-avatar on performance of only a subset of users
• Necessary to check for individual differences in performance data
16. Future Research
• Automatic twist/swipe mode detection
• Replace orientation sensor with a low-cost mobile device accelerometer
• 6DOF task
• Stereographic displays
• Symmetrical objects
• Tactile feedback
• Comprehension vs. manipulation
17. This work was supported by the National Science Foundation under Grant No. 1116636
Editor's Notes
I would like to start off with a quick overview of the paper.
1. We have conducted a controlled…
2. The interface itself is a gesture-based interface for manipulating 3D objects in a virtual world using the Kinect sensor.
3. We compared two variations of the interface, one of which used a self-avatar and one that did not.
Why did we do this?
Low-cost tracking devices such as the Microsoft Kinect and the Nintendo Wii are becoming increasingly popular. The Leap is an even cheaper device and is expected to be out by the end of the year.
With the spread of such devices, we can expect to see a large number of gesture-based interfaces built on top of them.
We are also seeing that JavaScript APIs for rendering interactive 3D graphics, such as WebGL, are becoming increasingly powerful. There are now stunningly realistic virtual worlds being rendered entirely within the web browser using this technology.
Both these technologies make gesture-based interfaces and virtual worlds not only easier to build but also more accessible to a larger audience than ever before. We have made an attempt to build and evaluate a gesture-based interface for object manipulation using self-avatars.
What are avatars?
Avatars are the digital representations of humans online or, as in our context, in virtual environments. Self-avatars are the first-person representations of the users themselves.
Earlier avatar studies have shown inconclusive results on spatial cognition tasks.
<break>
2. Our interface renders an animated representation of the user’s arm and hand. Previously, using avatars required expensive motion-capture equipment, but now we have the technology to build interfaces using avatars at a much lower cost.
That brings up the question of whether there are benefits to having such a self-avatar extending into the virtual world. Could it help in offloading cognitive work or provide a frame of reference? Could it help the user perform better just by making the interface more natural?
More concretely,…
1. Does a self-avatar have an observable effect on interfaces for object manipulation in virtual worlds on a desktop display?
Intuitively, having a self-avatar does look like a more natural way of interacting, but we wanted to see if it translated into a measurable effect on performance on a desktop display.
<break>
2. Also, in the real world it is important to ask: are there strong individual differences in the effect?
In other words, does the performance of user groups vary significantly when using the interface?
To evaluate the interface, we chose the orientation-matching task, mainly due to the ease of measuring performance and because the task is known to be non-trivial for larger rotations.
As seen in the picture here, the screen is split into left and right regions. Objects at different orientations appear in both regions in each successive trial. The object on the right is then rotated by the user to match the orientation of the object on the left.
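The orientation-matching task makes performance straightforward to measure: the residual error is the angle between the user-controlled orientation and the target orientation. A minimal sketch of that metric, assuming orientations are stored as unit quaternions (the representation is an assumption, not a detail from the paper):

```python
import math

def orientation_error(q_target, q_current):
    """Angular distance in radians between two unit quaternions (w, x, y, z).

    The absolute dot product handles the quaternion double cover
    (q and -q encode the same rotation); the error is 0 when the
    orientations match exactly.
    """
    dot = abs(sum(a * b for a, b in zip(q_target, q_current)))
    dot = min(1.0, dot)  # guard against floating-point overshoot
    return 2.0 * math.acos(dot)

# Identical orientations give zero error.
print(orientation_error((1, 0, 0, 0), (1, 0, 0, 0)))  # → 0.0
# A 90° rotation about z gives an error of π/2 radians.
half = math.pi / 4
print(orientation_error((1, 0, 0, 0), (math.cos(half), 0, 0, math.sin(half))))
```

A trial would then count as matched once this error falls below some tolerance, with completion time recorded alongside it.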
The interface has two variations based on display conditions, i.e. what the users see as feedback for their arm motion.
The picture on top shows the first variation, also called the sphere condition, in which a sphere of a size similar to that of the hand is rendered based on the position of the right hand while interaction occurs.
The only feedback to the user in this condition is a sense of the position of the hand.
Also, in the sphere condition, the frame of reference is unclear.
The picture below shows the other variation, also called the self-avatar condition, in which the motion of the shoulder, elbow and wrist is accurately mapped onto the avatar. Fingers, however, are not animated.
Self-avatars can provide the user with an egocentric and anthropomorphic frame of reference as well as a more natural interface.
-----
We used a between-subjects design for assigning participants to the feedback conditions.
** Also, male and female participants were evenly distributed across the sphere and self-avatar conditions.
----
The interface provides two modes of rotation to change the orientation of the objects.
First is rotation along the hand motion, also referred to as swipe mode. [VIDEO] As you can see, the user rotates the object about an axis in the plane of the display and perpendicular to the direction of hand motion. This is similar to rotating objects using the virtual sphere method, except that we use hand gestures instead of moving the mouse pointer.
The second method of interaction is rotating the object about the axis of the wrist, also referred to as twist mode. [VIDEO] This is closer to how we manipulate objects in the real world. Here the user can rotate the object about any axis just by aligning their wrist with that axis and performing a twist gesture.
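As a sketch of how the two gestures can map to rotations: in swipe mode the axis is the in-plane perpendicular of the hand’s motion vector, while in twist mode the axis is the sensed wrist axis itself. The (w, x, y, z) quaternion layout and the `gain` factor below are illustrative assumptions, not details from the study:

```python
import math

def axis_angle_to_quat(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` radians about `axis`."""
    n = math.sqrt(sum(c * c for c in axis))
    s = math.sin(angle / 2.0) / n
    return (math.cos(angle / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

def swipe_rotation(dx, dy, gain=0.01):
    """Swipe mode: hand motion (dx, dy) in the display plane rotates the
    object about the in-plane axis perpendicular to the motion
    (virtual-sphere style); the angle grows with the motion magnitude."""
    mag = math.hypot(dx, dy)
    if mag == 0.0:
        return (1.0, 0.0, 0.0, 0.0)  # no motion, identity rotation
    axis = (-dy / mag, dx / mag, 0.0)  # perpendicular; z (toward viewer) stays 0
    return axis_angle_to_quat(axis, gain * mag)

def twist_rotation(wrist_axis, twist_angle):
    """Twist mode: rotate the object about whatever axis the wrist is aligned with."""
    return axis_angle_to_quat(wrist_axis, twist_angle)

# Hand moving right (+x) rotates about +y, the vertical screen axis.
print(swipe_rotation(1.0, 0.0))
```

In this sketch, twist mode can reach any axis directly, while swipe mode is restricted to axes lying in the display plane, which mirrors the trade-off the notes describe.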
** Both interaction methods were enabled in each of the display conditions, and the user could select either mode at any time during a trial.
** For larger rotations, users were able to employ ratcheting and accomplish a large rotation as a sum of smaller rotations.
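Ratcheting works because rotations compose: each gesture’s small rotation is multiplied onto the current orientation, so repeated twists accumulate into one large rotation. A toy illustration using quaternions (a representation I am assuming, not one stated in the study):

```python
import math

def quat_mul(a, b):
    """Hamilton product of (w, x, y, z) quaternions: apply b, then a."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

# Four 45° twists about z ratchet up to a single 180° rotation.
step = (math.cos(math.pi / 8), 0.0, 0.0, math.sin(math.pi / 8))
orientation = (1.0, 0.0, 0.0, 0.0)  # start at the identity
for _ in range(4):
    orientation = quat_mul(step, orientation)
print(orientation)  # ≈ (0, 0, 0, 1), i.e. 180° about z
```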
** Also, as seen in the video, the color of the object changed when the avatar’s hand came close to the object, to convey a sense of contact. Users were only able to manipulate the object when the avatar’s hand was close to it.
The user had a chance to practice with a training object before the recorded trials. Each participant got 6 trials to practice with a single practice object, shown in the picture on the left, after which they were asked to perform the task on 12 distinct trial objects, one of which is shown in the right figure.
----------
** Completion time and the number of times each mode of rotation was used were recorded for each trial for all participants.
----------
In the experimental setup:
We used the Kinect to recover the joint orientations of the user for animating the avatar.
An orientation sensor was strapped around the hand of the user to improve the accuracy of the wrist orientation.
And a wireless mouse was used by the user to indicate the mode of rotation for each gesture: whether along the hand motion or about the wrist axis.
-----
** Geometric and display fields of view were matched to improve realism and the sense of embodiment of the avatar.
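For reference, the display field of view is the visual angle the physical screen subtends at the viewer’s eye; setting the virtual camera’s geometric field of view to the same value keeps rendered sizes consistent with real-world sizes. A sketch of the display-side calculation, with made-up screen dimensions rather than the study’s actual setup:

```python
import math

def display_fov_deg(screen_width_m, viewing_distance_m):
    """Horizontal visual angle (degrees) subtended by a flat display
    centered on the viewer's line of sight."""
    return math.degrees(2.0 * math.atan(screen_width_m / (2.0 * viewing_distance_m)))

# e.g. a 0.6 m wide monitor viewed from 0.7 m away
print(round(display_fov_deg(0.6, 0.7), 1))  # → 46.4
```

The virtual camera would then be configured with this same horizontal field of view (and the matching vertical value computed from the screen height).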
Gender and gaming experience were also recorded in order to check for individual differences.
We looked at the time taken by each participant to complete all 12 trials.
Here we have plotted the average completion time for the two display conditions. The first and second bars show the average completion time for the self-avatar and sphere display conditions respectively.
And we can see that there was no significant difference in performance between the sphere and avatar conditions overall.
However..
There was a difference in performance across the visual display conditions as a function of gender and video game experience.
On the graph we again have the average completion time for each condition, now split by gender.
It can be seen that although everyone performed similarly in the self-avatar condition, female participants took noticeably longer to complete the trials in the sphere condition.
---------------split-----------------
It is important to note that gaming experience and gender in our participant pool were highly overlapping, and we cannot attribute the effect to either gaming experience or gender alone from the available data.
Here we see an effect of “individual differences” in our interface, where the display condition had an effect on the performance of a subset of participants.
As seen in the earlier graph, and more clearly here, gender and gaming experience significantly overlapped in our participant pool. It would be interesting to see the performance of female gamers and male non-gamers using this interface.
We also recorded the mode of rotation, indicating which mode the participants used, whether along the hand motion or about the wrist axis, and again we see an effect of “individual differences”. Male/gamer participants used both modes almost equally, while female/non-gamer participants relied more on rotation along the hand motion.
Ideally, a good sense of object orientation in the virtual world would afford a predisposition toward rotation about the wrist axis, as it would be more efficient if used correctly. It is also closer to the natural way of manipulating objects in the real world, and an object can be matched to the target orientation in a single motion if the appropriate axis is known.
We compared user performance in two variations of a gesture-based interface for object manipulation in a virtual world.
1. We found that there was an effect of the self-avatar on the performance of only a subset of users.
2. More importantly, when evaluating such interfaces, it is necessary to check for individual differences in performance data.
The latest tracking devices, such as the Kinect and the Leap, open up new ways of interacting with the virtual world. We did a controlled study of one such possibility, and we saw one variation of the interface showing a significant effect of individual differences while the other did not. The key message here is that when evaluating the performance of such interfaces, it is important to keep a lookout for individual differences, which can be significant, in order to really understand the strengths and weaknesses of the system.