This document summarizes a study that evaluated a Kinect-based user interface for 3D object manipulation in a virtual environment. The study tested two interface conditions: one with a self-avatar representation and one without. It found no significant difference in completion time between the two conditions overall; however, differences emerged when the results were analyzed by factors such as gender and gaming experience. The study concluded that the effect of self-avatars is observed for only a subset of users and that individual differences in performance need to be accounted for. It proposes several areas for future research, such as automatic detection of manipulation modes and the use of mobile device sensors.
Kinect Based 3D Object Manipulation on a Desktop Display
1. Kinect Based 3D Object Manipulation on a Desktop Display
Mukund Raj, Sarah H. Creem-Regehr, Kristina M. Rand, Jeanine K. Stefanucci and William B. Thompson
University of Utah
2. Introduction
• Controlled experimental evaluation of a Kinect-based user interface.
• 3D object manipulation in a virtual environment.
• Two variations: with and without self-avatars.
3. Motivation
• Availability of low-cost gesture recognition hardware.
• 3D graphics on the web platform.
[Device images: The Leap, Microsoft Kinect, Nintendo Wii]
5. Questions
• Does a self-avatar have an observable effect on interfaces for object manipulation in virtual environments on a desktop display?
• Are there strong individual differences in the effect?
14. Results – Manipulation modes
• Difference in manipulation mode as a function of gender/gaming experience
15. Conclusion
• Effect of the self-avatar on the performance of only a subset of users
• Necessary to check for individual differences in performance data
16. Future Research
• Automatic twist/swipe mode detection
• Replace the orientation sensor with a low-cost mobile device accelerometer
• 6DOF task
• Stereographic displays
• Symmetrical objects
• Tactile feedback
• Comprehension vs. manipulation
17. This work was supported by the National Science Foundation under Grant No. 1116636
Editor's Notes
I would like to start off with a quick overview of the paper.
1. We have conducted a controlled…
2. The interface itself is a gesture-based interface for manipulating 3D objects in a virtual world using the Kinect sensor.
3. We compared two variations of the interface, one of which used a self-avatar and one that did not.
Why did we do this?
Low-cost tracking devices such as the Microsoft Kinect and Nintendo Wii are becoming increasingly popular. The Leap is an even cheaper device and is expected to be out by the end of the year.
With the spread of such devices, we can expect to see a large number of gesture-based interfaces built on them.
We are also seeing that JavaScript APIs for rendering interactive 3D graphics, such as WebGL, are becoming increasingly powerful. Stunningly realistic virtual worlds are now being rendered entirely within the web browser using this technology.
Both of these technologies make gesture-based interfaces and virtual worlds not only easier to build but also more accessible to a larger audience than ever before. We have made an attempt to build and evaluate a gesture-based interface for object manipulation using self-avatars.
What are avatars?
Avatars are digital representations of humans online or, as in our context, in virtual environments. Self-avatars are the first-person representations of the users themselves.
Earlier avatar studies have shown inconclusive results on spatial cognition tasks.
<break>
2. Our interface renders an animated representation of the user's arm and hand. Previously, using avatars required expensive motion-capture equipment, but now we have the technology to build avatar-based interfaces at a much lower cost.
That brings up the question of whether there are benefits to having such a self-avatar extending into the virtual world. Could it help offload cognitive work or provide a frame of reference? Could it help the user perform better simply by making the interface more natural?
More concretely,…
1. Does a self-avatar have an observable effect on interfaces for object manipulation in virtual worlds on a desktop display?
Intuitively, having a self-avatar does look like a more natural way of interacting, but we wanted to see whether it translated into a measurable effect on performance on a desktop display.
<break>
2. Also, in the real world it is important to ask: are there strong individual differences in the effect?
In other words, does the performance of user groups vary significantly when using the interface?
To evaluate the interface, we chose the orientation-matching task, mainly because performance in it is easy to measure and the task is known to be non-trivial for larger rotations.
As seen in the picture here, the screen is split into left and right regions. Objects at different orientations appear in both regions in each successive trial. The object on the right is then rotated by the user to match the orientation of the object on the left.
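An orientation-matching task needs a measure of how far the manipulated object is from the target orientation. A minimal sketch of such a measure, assuming orientations are stored as unit quaternions; the function name and the match tolerance mentioned below are illustrative, not from the paper:

```python
import math

def orientation_error(q1, q2):
    """Smallest rotation angle (radians) taking orientation q1 to q2.

    q1 and q2 are unit quaternions (w, x, y, z). The absolute value of
    the dot product handles the quaternion double cover (q and -q
    represent the same orientation).
    """
    dot = abs(sum(a * b for a, b in zip(q1, q2)))
    dot = min(1.0, dot)  # guard against floating-point overshoot
    return 2.0 * math.acos(dot)
```

A trial could be considered matched once this error drops below a tolerance, e.g. `orientation_error(current, target) < math.radians(5)`.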
The interface has two variations based on display conditions, i.e., what the users see as feedback for their arm motion.
The picture on the top shows the first variation, also called the sphere condition, in which a sphere of a size similar to that of the hand is rendered at the position of the right hand during interaction.
The only feedback to the user in this condition is a sense of the position of the hand.
Also, in the sphere condition, the frame of reference is unclear.
The picture below shows the other variation, also called the self-avatar condition, in which the motion of the shoulder, elbow and wrist is accurately mapped onto the avatar. The fingers, however, are not animated.
Self-avatars can provide the user with an egocentric and anthropomorphic frame of reference as well as a more natural interface.
-----
We used a between-subjects design for assigning participants to the feedback conditions.
** Male and female participants were evenly distributed between the sphere and self-avatar conditions.
----
The interface provides two modes of rotation to change the orientation of the objects.
The first is rotation along hand motion, also referred to as swipe mode. [VIDEO] As you can see, the user rotates the object about an axis that lies in the plane of the display, perpendicular to the direction of hand motion. This is similar to rotating objects using the virtual-sphere method, except that we use hand gestures instead of moving the mouse pointer.
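The swipe-mode mapping described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code, and the sensitivity constant is an assumed parameter:

```python
import math

def swipe_rotation(hand_delta, sensitivity=0.01):
    """Map a 2D hand displacement in the display plane to an axis-angle
    rotation, virtual-sphere style.

    The rotation axis lies in the display plane, perpendicular to the
    hand motion: moving the hand right spins the object about the
    vertical axis, moving it up spins it about the horizontal axis.
    """
    dx, dy = hand_delta
    magnitude = math.hypot(dx, dy)
    if magnitude == 0.0:
        return (0.0, 0.0, 0.0), 0.0  # no motion, no rotation
    axis = (-dy / magnitude, dx / magnitude, 0.0)  # in-plane, perpendicular to motion
    angle = sensitivity * magnitude  # radians, proportional to sweep length
    return axis, angle
```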
The second method of interaction is rotating the object about the axis of the wrist, also referred to as twist mode. [VIDEO] This is closer to how we manipulate objects in the real world. Here the user can rotate the object about any axis simply by aligning the wrist with that axis and performing a twist gesture.
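Twist mode amounts to building a rotation about whatever axis the wrist is aligned with. A minimal sketch, assuming the wrist axis arrives as a unit vector from the orientation sensor and rotations are represented as unit quaternions:

```python
import math

def twist_rotation(wrist_axis, twist_angle):
    """Unit quaternion (w, x, y, z) rotating twist_angle radians about
    wrist_axis, which is assumed to be a unit vector."""
    half = twist_angle / 2.0
    s = math.sin(half)
    return (math.cos(half),
            wrist_axis[0] * s, wrist_axis[1] * s, wrist_axis[2] * s)
```

Because the axis is arbitrary, a single well-aimed twist can reach any target orientation, which is why the notes later describe this mode as potentially more efficient.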
** Both interaction methods were enabled in each of the display conditions and the user could select either mode at any time during a trial.
** For larger rotations, users were able to employ ratcheting, accomplishing a large rotation as the sum of several smaller ones.
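Ratcheting simply composes the small per-gesture rotations into one cumulative rotation. A sketch of that accumulation with quaternions, again an illustrative reconstruction rather than the study's implementation:

```python
import math

def axis_angle_quat(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation about a unit axis."""
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(a, b):
    """Hamilton product: the rotation b followed by the rotation a."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def ratchet(increments):
    """Compose a sequence of small rotations into one total rotation,
    mirroring how users chain several small gestures."""
    total = (1.0, 0.0, 0.0, 0.0)  # identity
    for inc in increments:
        total = quat_mul(inc, total)  # apply each new increment on top
    return total
```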
** Also, as seen in the video, the color of the object changed when the avatar's hand came close to the object, to incorporate a sense of contact. Users were only able to manipulate the object when the avatar's hand was close to it.
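The proximity-based contact cue can be sketched as a simple distance test; the radius value here is an assumption for illustration, not a parameter reported in the paper:

```python
import math

CONTACT_RADIUS = 0.15  # assumed threshold, in scene units

def contact_state(hand_pos, object_pos, radius=CONTACT_RADIUS):
    """Return whether the avatar hand is close enough to the object to
    highlight it and enable manipulation, plus the current distance."""
    d = math.dist(hand_pos, object_pos)
    return {"in_contact": d <= radius, "distance": d}
```

The renderer would switch the object's color whenever `in_contact` flips, and the manipulation code would ignore gestures while it is False.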
The user had a chance to practice with a training object before the recorded trials. Each participant got six trials to practice with a single practice object, shown in the picture on the left, after which they were asked to perform the task on 12 distinct trial objects, one of which is shown in the figure on the right.
----------
** Completion time and the number of times each mode of rotation was used were recorded for each trial for all participants.
----------
In the experimental setup,
We used the Kinect to recover the joint orientations of the user for animating the avatar.
An orientation sensor was strapped around the user's hand to improve the accuracy of the wrist orientation.
A wireless mouse was used by the user to indicate the mode of rotation for each gesture: whether along the hand motion or about the wrist axis.
-----
** Geometric and display fields of view were matched to improve realism and the sense of embodiment of the avatar.
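Matching the geometric field of view (of the virtual camera) to the display field of view (subtended by the physical screen) is a simple geometric computation. A sketch, assuming a viewer centered in front of the screen:

```python
import math

def display_fov_deg(screen_height, viewing_distance):
    """Vertical field of view, in degrees, subtended by a screen of the
    given height seen from the given distance (same units for both).
    Setting the virtual camera's vertical FOV to this value matches the
    geometric FOV to the display FOV."""
    return math.degrees(2.0 * math.atan(screen_height / (2.0 * viewing_distance)))
```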
Gender and gaming experience were also recorded in order to check for individual differences.
We looked at the time taken by each participant to complete all 12 trials.
Here we have plotted the average completion time for the two display conditions. The first and second bars show the average completion time for the self-avatar and sphere display conditions, respectively.
We can see that there was no significant difference in performance between the sphere and avatar conditions overall.
However, there was a difference in performance across visual display conditions as a function of gender and video-game experience.
On the graph we again have the average completion time for each condition, now split by gender.
It can be seen that although everyone performed similarly in the self-avatar condition, female participants took noticeably longer to complete the trials in the sphere condition.
---------------split-----------------
** It is important to note that gaming experience and gender in our participant pool were highly overlapping, and we cannot attribute the effect to either gaming experience or gender alone from the available data.
Here we see an effect of individual differences in our interface, where the display condition had an effect on the performance of a subset of participants.
As seen in the earlier graph, and more clearly here, gender and gaming experience significantly overlapped in our participant pool. It would be interesting to see the performance of female gamers and male non-gamers using this interface.
We also recorded the mode of rotation, indicating which mode the participants used, whether along the hand motion or about the wrist axis, and again we see an effect of individual differences: male/gamer participants used both modes almost equally, while female/non-gamer participants relied more on rotation along the hand motion.
Ideally, a good sense of object orientation in the virtual world would afford a predisposition toward rotation about the wrist axis, as it is more efficient when used correctly. It is also closer to the natural way of manipulating objects in the real world, and the object can be matched to the target orientation in a single motion if the appropriate axis is known.
We compared user performance in two variations of a gesture-based interface for object manipulation in a virtual world.
1. We found an effect of the self-avatar on the performance of only a subset of users.
2. More importantly, when evaluating such interfaces it is necessary to check for individual differences in performance data.
The latest tracking devices, such as the Kinect and Leap, open up new ways of interacting with the virtual world. We conducted a controlled study of one such possibility, and one variation of the interface showed a significant effect of individual differences while the other did not. The key message is that when evaluating the performance of such interfaces, it is important to look out for individual differences, which can be significant, in order to really understand the strengths and weaknesses of the system.