How to integrate the Leap Motion SDK V2 with the Oculus Rift DK2 in Unity for hand skeleton tracking, hand-object interaction, and player movement using the Rift's positional tracker.
Basic VR Development Tutorial Integrating Oculus Rift and Razer Hydra by Chris Zaharia
Basic VR tutorial on re-making the first scene in Super Mario 64 in Unity 3D integrating the Oculus Rift and Razer Hydra to control Mario and interact with objects.
Presentation includes step-by-step instructions, screenshots and source code in C#.
Also I forgot to share the code for the GrabObject script, but you can find it here:
https://github.com/chrisjz/sm64vr/blob/b5c43072d237c12ff0112fc2ef293031f69a9243/Assets/Objects/Scripts/GrabObject.cs
Here's the source code for the game mentioned in this tutorial:
https://github.com/chrisjz/sm64vr
This tutorial was presented at the Sydney Virtual Reality meetup in June 2014.
This document provides instructions for developing a simple virtual reality magic game using the Myo gesture controller and Oculus Rift. It describes integrating the Myo to detect hand gestures and map different gestures to launching magic projectiles from the player's hand. Code snippets are provided to attach the Myo controller to a hand model, setup magic powers by launching projectiles on specific gestures, and additional suggestions for extending the game.
The document discusses the benefits of using augmented reality (AR) and Azure for creating shared AR experiences. It explains that the cloud allows for digital copies of the real world to be securely stored and processed, enabling any user to access shared AR content from any device. A key concept discussed is spatial anchors, which link virtual content to physical locations even when a user's view of that location changes. The document outlines example use cases like collaborative design reviews, training guidance, wayfinding, and persistent multi-user virtual content. It also provides an overview of how Azure Spatial Anchors works, including the use of cloud spatial anchors, anchor creation and location, and considerations for user experience.
This document summarizes a presentation by Ninjaneers on an autonomous hexapod robot capable of object recognition. The objectives were to build a hexapod robot that can overcome limitations of other mobile robots through autonomous movement, visual object recognition using an ultrasonic sensor and camera, and a functional gripper to pick up and move objects. The design and implementation process, programming approach using SimpleCV, tests conducted, and budget are also summarized.
LIDNUG Presentation - Kinect - The How, Where and When of Developing with It by Philip Wheat
These are the slides from my LIDNUG presentation on Developing with the Microsoft Kinect using the Kinect for Windows SDK. You can find the presentation recording at http://www.youtube.com/watch?v=0arzMSlqnHk
This document provides a short introduction to HTML5, including:
- HTML5 is the 5th version of the HTML standard by the W3C and is still under development but supported by many browsers.
- HTML5 introduces new semantic elements, video and audio tags, 2D/3D graphics using <canvas>, and new JavaScript APIs for features like geolocation, offline web apps, and drag and drop.
- The document provides examples of using new HTML5 features like video playback, semantic elements, geolocation API, and drawing on a canvas with JavaScript.
Developing for Leap Motion
DotnetConf session here: http://www.youtube.com/watch?v=YixzSUxyGKU (1 hour)
Video tutorial can be found here:
Developing for Leap Motion in C# Part 1: http://www.youtube.com/watch?v=1Rn3q75mdns
Developing for Leap Motion in C# Part 2: http://www.youtube.com/watch?v=-r_cAtHQzy8
GitHub repository for the Leap Motion demo app: https://github.com/IrisClasson/Leap-Motion/
Slides: http://www.slideshare.net/irisdanielaclasson/developing-for-leap-motion/
This document summarizes the steps to build a simple iOS game called KillTime using the Cocos2D game framework. It discusses setting up the project, adding a background, player character and enemy targets, detecting touches to shoot bullets, checking for collisions, animating sprites, and tracking the body count with labels and sound effects. The document provides code snippets to implement the game's core functionality and gameplay using Cocos2D concepts like scenes, layers, sprites, actions and the touch handling API.
Using the Kinect for Fun and Profit by Tam Hanna (Codemotion)
Very few devices offer as fascinating features as the Microsoft Kinect. This seminar teaches you what the Kinect can do and how you can develop for it.
Attendees are advised to bring a notebook with Visual C# 2010 Express Edition and the latest Kinect SDK so they can get the most out of the talk. A sensor will be available for testing your own applications.
This document provides information on augmented reality (AR), including its definition, history, technologies used, and workflow. It defines AR as a live direct or indirect view of the physical real-world environment whose elements are augmented by computer-generated imagery. The history section outlines some key developments in AR from 1966 to present day. It also discusses the technologies used for tracking such as cameras, sensors and computer vision techniques. Finally, it describes the general workflow for an AR application including retrieving GPS position, orientation, acceleration and camera image.
Augmented Reality on iPhone Applications by Omar Cafini
Augmented reality overlays virtual imagery on the real world by using devices' cameras and sensors. It has been developing since the 1960s and uses technologies like GPS, compasses, and accelerometers to track devices' positions and orientations. Developers retrieve location data from GPS, orientation from compasses, and acceleration from accelerometers. They use the camera to view the real world. Existing libraries help create augmented reality applications, and popular examples on app stores include Layar, Wikitude, and Around Me.
Enhance your world with ARKit (UA Mobile 2017) by UA Mobile
This document provides an introduction to augmented reality and ARKit. It explains what augmented reality and ARKit are, how ARKit works under the hood using technologies like visual inertial odometry, and how to get started with ARKit development. It demonstrates how to detect horizontal planes, place 3D objects on detected planes in response to user input, and take snapshots of the AR scene. The code samples show how to initialize an AR session, add 3D models to the scene, handle plane detection callbacks, and place objects in the world.
Kenn Song (VisionStar): EasyAR 2.0 - New Features and Changes by AugmentedWorldExpo
A talk from the Develop Track at AWE USA 2017 - the largest conference for AR+VR in Santa Clara, California May 31- June 2, 2017.
Kenn Song (VisionStar): EasyAR 2.0 - New Features and Changes
EasyAR is an augmented reality SDK designed to make building AR applications easy and smooth. EasyAR 2.0 is a major update with many new features since the first public release: it improves the experience with SLAM and 3D object tracking, makes AR easier to build with APIs exposed in many programming languages, and makes AR easier to share with cloud recognition and scene recording.
http://AugmentedWorldExpo.com
This document summarizes a presentation about the Unity game development platform. It introduces Unity as a game engine that integrates tools for creating 3D and 2D content across platforms. It describes Unity's main components, including the game engine, editor tools, asset store, and support for multiple platforms. It provides screenshots explaining Unity's interface and gives examples of creating a simple space shooter game in Unity, including importing assets, scripting enemy behavior, using prefabs, and adding collisions and a game over scene.
By now you have heard about Flash Augmented Reality and how it is taking the Flash Development community by storm! Whether you are looking for how to get started, how to improve your own experiments or have a client who desperately needs AR on their site, this session is for you.
In this keynote I cover getting up and running as well as the ideal workflow for testing/deploying your creation. I also cover the basics then quickly move into how to build a FLAR Emulator for easy testing/debugging as well as general usability/performance issues. Finally we will look at my own experiments with AR, how they were built and highlight some of the best uses of Flash AR today.
The goal of this presentation is to teach you how to build a solid, reusable foundation for all of your Flash AR projects, which will allow you to quickly prototype your ideas. All code covered in this session is open source and free to use, and documentation on how it works will be handed out as well.
This document provides an introduction to scripting in Unity, covering the basics of scripting fundamentals like naming conventions, player input, connecting variables, and accessing components between scripts. The tutorial uses JavaScript examples to demonstrate how to write scripts to move a main camera based on player input, create a spotlight that follows the camera, and access components between scripts to make the spotlight look at a cube when the jump button is pressed.
Second chapter of the lecture Unreal Engine Basics taught at SAE Institute Hamburg.
- Getting familiar with the Unreal Level Editor
- Learning how to bind and handle player keyboard and mouse input
- Understanding character movement properties and functions
Day 2 slides, UNO summer 2010 robotics workshop by Raj Dasgupta
Day 2 of the robotics workshop focused on designing autonomous intelligence for robots through controllers. The controller acts as the robot's "brain" by reading sensor input, processing that data, and sending output to actuators. Students learned to code simple controllers and reviewed more complex examples, including obstacle avoidance, line following, and state-based object following with a camera. Plans for Day 3 included programming basic behaviors on the e-puck robot.
Cocos2d is a well-known open-source framework in the game industry. It is a 2D game framework built on the OpenGL ES APIs.
In this session, I will talk about the hierarchical structure of Cocos2d nodes and scenes. The Cocos2d graphical user interface, physics system, audio, particle system, and scene transition techniques will also be shown. Finally, this session will cover various branches of the Cocos2d open source project, including Cocos2d-x, Cocos2d-Swift, Cocos2d-html5, and Cocos2d-xna.
This document describes an "Auto Chasing Turtle" robot project that uses a Kinect sensor to detect human faces and autonomously follow those faces. The robot consists of a KONDO Animal 01 robot base controlled by a BeagleBoard-xM running openFrameworks and ofxDroidKinect software. The Kinect provides RGB and depth camera data to detect faces and calculate the robot's course and distance to the target face. The robot rotates to find faces, calculates its direction to the detected face, and adjusts its distance using Kinect depth data. An iPad can view the robot working in real time.
Philipp Nagele (CTO, Wikitude): An Insider Deep-Dive into the Wikitude SDK by AugmentedWorldExpo
Philipp Nagele (CTO, Wikitude GmbH) gives an Insider Deep-Dive into the Wikitude SDK
An introduction to the many options of the Wikitude SDK, with a deeper look into advanced features like the Plugins API and how to combine third-party libraries with the Wikitude SDK. We will look into the general architecture of the SDK and deep-dive into a few outstanding (and maybe not so well-known) features of the SDK.
Useful Tools for Making Video Games - XNA (2008) by Korhan Bircan
This document provides an overview of tools and techniques for creating 3D video games in XNA, including installing Visual Studio and XNA Game Studio, displaying 3D models by loading them and applying transformations, handling keyboard/mouse input, implementing a basic camera, adding a skybox, and creating animations using curves to interpolate between control points over time. Sample code implementations for many of these techniques can be found in the referenced ZIP files.
The webinar will provide an overview of building immersive worlds in virtual reality using Unity and the XR Interaction Toolkit. It will cover setting up a VR project, understanding locomotion systems, creating grabbable objects, and using VR sockets. The webinar agenda includes an introduction, a 45 minute session on building VR worlds, and a 10 minute Q&A. Additional overtime content will demonstrate visual scripting and creating a jigsaw puzzle tool.
Verold Studio allows users to build and publish interactive 3D content to the web without coding. It provides visual tools for designers and artists to create scenes and define behaviors. Developers can customize projects using HTML/CSS/JavaScript and Three.js. The editor supports collaboration, importing 3D models and textures, animations, and publishing projects to be embedded on websites or downloaded as standalone apps. Components can be created to add interactivity and integrate third party libraries.
- Scene data is converted to DOTS format entities and components during conversion from GameObjects to entities
- Conversion can happen when subscenes are closed or open
- Sample code shows how a rotating cube component is converted, including converting the script to a component data struct and adding a rotation component
- Prefabs are handled by declaring referenced prefabs and getting the primary entity during conversion
- Conversion systems discover GameObjects, create entities, and perform the conversion
This document provides an overview of six programming robotics simulation labs using Webots. Lab 1 introduces how to use Webots and program basic robot controllers. Lab 2 programs a robot to detect colors. Lab 3 adds a camera to a robot to make it aware of product colors. Lab 4 explains how two robots were synchronized in an IPR collaboration. Lab 5 changes where a cube is thrown by a Kuka robot. Lab 6 programs an ABB IRB120 robot to draw a square instead of a circle.
5. Augmented Reality
• Image API
• Infrared & Night Vision
• Enable in Leap Motion Control Panel > Settings > General > Allow Images
• Leap Oculus Passthrough Unity Package
• Next Leap Motion may have colour camera
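A minimal sketch of requesting camera images through the Leap C# API (the image policy must also be allowed in the Control Panel, as above):

using Leap;

Controller controller = new Controller();
// Ask the Leap service for camera images; requires "Allow Images" to be enabled.
controller.SetPolicy(Controller.PolicyFlag.POLICY_IMAGES);
Frame frame = controller.Frame();
if (frame.Images.Count > 0) {
    Image left = frame.Images[0]; // infrared image from one of the two cameras
}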
7. Scope
• Map physical hand and finger movements to a 3D hand model and physics
• Grab physics objects (rigidbodies) by pinching
• Movement and jump using the DK2's positional tracker
11. Attach hands to player
• It is assumed that you've already integrated the Rift with the player
• Compatible with Sixense's Razer Hydra integration via script (see LeapHandExtendController.cs in this tutorial)
12. 1. Create an empty GameObject called “LeapHandController” and place it under OvrCameraController.
2. Either attach the HandController prefab to the game object, or attach the HandController.cs script (see the code sketch after these steps).
3. Fill out the variables with the following:
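For reference, the same setup can be expressed in code (a sketch only; this is normally done in the editor, and ovrCameraController is an assumed reference to your OVR camera rig):

// Create the LeapHandController object under the OVR camera rig
// and attach Leap's HandController script to it.
GameObject leapHandController = new GameObject("LeapHandController");
leapHandController.transform.parent = ovrCameraController.transform;
leapHandController.transform.localPosition = Vector3.zero;
leapHandController.AddComponent<HandController>();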
13. Set up the RigidLeftHand + RigidRightHand prefabs
• Check that both prefabs are set as follows, or change them accordingly.
The MagneticPinch.cs script allows the user to grab rigidbodies by pinching their fingers.
14. For the right hand, choose the [..]Arm3 mesh.
Each finger must have the following variables set, based on the type of finger (e.g. index, pinky) and whether it is left (starts with L) or right (starts with R):
16. Magnetic Pinch Script
• Allows the user to grab the closest object that has a rigidbody
• Force Spring Constant sets the elasticity of the grip on the object
• Magnetic Distance determines how close an object must be to a hand for it to be grabbed
Warning: There is a conflict between Magnetic Pinch and enabling the Left/Right Physics Models. You'd need to fix this by making the magnetic script ignore the hand's physics model.
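One possible fix, sketched under assumptions: when searching for the closest rigidbody, skip any collider that belongs to the hand itself. The loop below mirrors the kind of search MagneticPinch.cs performs, but pinch_position, magneticDistance, and grabbed_ stand in for the script's own fields, so treat it as illustrative rather than the script's actual code:

// Find the closest rigidbody to pinch, ignoring the hand's own physics model.
Collider[] closeThings = Physics.OverlapSphere(pinch_position, magneticDistance);
Vector3 distance = new Vector3(magneticDistance, 0.0f, 0.0f);
for (int j = 0; j < closeThings.Length; ++j) {
    Vector3 newDistance = pinch_position - closeThings[j].transform.position;
    if (closeThings[j].attachedRigidbody != null &&
        newDistance.magnitude < distance.magnitude &&
        !closeThings[j].transform.IsChildOf(transform)) { // skip our own hand
        grabbed_ = closeThings[j];
        distance = newDistance;
    }
}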
17. Grab Hand and Grabbable scripts
• An alternative to the Magnetic Pinch script
• Assign GrabHand.cs to the hand graphics prefab
• Assign Grabbable.cs to the rigidbody object to be grabbed
18. Hand Physics
• Interact with objects in a realistic way
• Grab small objects with one hand
• Grab larger objects with multiple hands
• Push/pull other objects
• Other possibilities
• Grabbable objects must have colliders (a quick check is sketched after this slide)
Currently, grasping objects with hands is still quite jittery. Future SDK or Leap hardware updates should hopefully improve on this.
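Since a grabbable object needs both a rigidbody and a collider, a small runtime check can catch misconfigured objects (a sketch; GrabbableCheck is a hypothetical helper, not part of the Leap assets):

// Hypothetical helper: warn when a grabbable object is missing
// the physics components the grab scripts rely on.
public class GrabbableCheck : MonoBehaviour {
    void Awake () {
        if (GetComponent<Rigidbody>() == null || GetComponent<Collider>() == null)
            Debug.LogWarning(name + " needs a Rigidbody and a Collider to be grabbable.");
    }
}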
20. Handle conflicts with other hand trackers
• This approach will hide/disable the Sixense hands if the Leap Motion detects Leap hands in the scene
• Start by extending Leap Motion's HandController.cs
21. LeapHandExtendController.cs (1)
using UnityEngine;
using System.Collections;
using Leap;

public class LeapHandExtendController : HandController {
    protected Controller leap_controller_; // direct handle to the Leap service

    protected void Awake () {
        leap_controller_ = new Controller();
    }
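The deck jumps from here to slide 24, so the rest of the class is not shown. A minimal sketch of how it could continue, based on the speaker notes below (Update() can't be overridden, so LateUpdate() is used instead; sixenseHands is an assumed reference to the Razer Hydra hand objects):

    public GameObject sixenseHands; // assumed: root object of the Sixense hands

    void LateUpdate () {
        // HandController's Update() can't be overridden, so check here instead:
        // if the Leap sees one or more hands, hide the Sixense hands.
        Frame frame = leap_controller_.Frame();
        if (sixenseHands != null)
            sixenseHands.SetActive(frame.Hands.Count == 0);
    }
}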
24. Movement - DK2 Positional Tracking
• Move/Jump by using DK2’s positional tracking
• Move forward or backward by moving head in those directions
• Either rotate or strafe sideways by moving head left/right
• Jump by popping head upwards
• Could crouch too by popping head downwards
• User should normally be positioned directly in front of DK2’s tracker
25. Logic - Positional Tracking Movement
• Create/modify your player input controller script, attached to the player
• Head position is calculated by subtracting the initial local position of the main camera, captured when the scene is loaded, from its current position
• Create configurable Vector3 variables for:
• Sensitivity – strength multiplier for movement and jump actions
• Minimum – the minimum distance the Rift must be from the centre position for the action to actually be triggered
• Movement will use the X (-left/+right) and Z (+forward/-backward) axes, in the direction of the camera, as a value sent to the Character Motor's inputMoveDirection variable
• Jump will use the Y axis and will be mapped to the Character Motor's inputJump variable
27. Code – Positional Track Movement
[RequireComponent(typeof(CharacterMotor))]
public class FPSInputController : MonoBehaviour {
    …
    public bool ovrMovement = false; // Enable moving the player by moving head on X and Z axes
    public bool ovrJump = false; // Enable player jumps by moving head upwards on Y axis
    public Vector3 ovrControlSensitivity = new Vector3(1, 1, 1); // Multiplier for positional tracking move/jump actions
    public Vector3 ovrControlMinimum = new Vector3(0, 0, 0); // Min distance of head from centre to move/jump
    public enum OvrXAxisAction { Strafe = 0, Rotate = 1 }
    public OvrXAxisAction ovrXAxisAction = OvrXAxisAction.Rotate; // Whether X axis positional tracking strafes or rotates
    private GameObject mainCamera; // Camera where movement orientation is done and audio listener enabled
    private CharacterMotor motor;

    // OVR positional tracking, currently works via tilting head
    private Vector3 initPosTrackDir;
    private Vector3 curPosTrackDir;
    private Vector3 diffPosTrackDir;
28. Start + Update
void Start() {
    …
    initPosTrackDir = mainCamera.transform.localPosition;
}

void Update() {
    // Get the input vector from OVR positional tracking
    if (ovrMovement || ovrJump) {
        curPosTrackDir = mainCamera.transform.localPosition;
        diffPosTrackDir = curPosTrackDir - initPosTrackDir;
    }
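The slide ends here; a sketch of how Update() could continue, assembled from the speaker notes below (the axis mapping and the enabled/disabled jump behaviour are described there, so treat the variable-level details as assumptions):

    // Build the move direction: only move on an axis once the head is further
    // than the configured minimum from the centre. Only the Strafe case for
    // the X axis is shown; the Rotate option would turn the player instead.
    Vector3 directionVector = Vector3.zero;
    if (ovrMovement) {
        if (Mathf.Abs(diffPosTrackDir.x) > ovrControlMinimum.x)
            directionVector.x = diffPosTrackDir.x * ovrControlSensitivity.x;
        if (Mathf.Abs(diffPosTrackDir.z) > ovrControlMinimum.z)
            directionVector.z = diffPosTrackDir.z * ovrControlSensitivity.z;
    }
    // Move in the direction the camera is facing.
    motor.inputMoveDirection = mainCamera.transform.rotation * directionVector;
    // Jump via positional tracking if enabled, otherwise use the jump key.
    if (ovrJump)
        motor.inputJump = diffPosTrackDir.y > ovrControlMinimum.y;
    else
        motor.inputJump = Input.GetButton("Jump");
}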
Normally, you need a Leap mount to attach the Leap to the DK1/DK2.
You can also use Blu-Tack or other methods.
The Leap Oculus Passthrough Unity package allows integration of the infrared image passthrough in Unity. It can be used to blend in or show the real world in VR.
You need Unity Pro rather than the free version because the Leap integration uses external plugins.
Leap Motion V2 Skeletal Assets (beta) – found on the Unity Asset Store.
Pick these files from the Leap Unity Assets package.
Folders are bolded.
You may not need some of the files included here.
Red underlines mark suggested variables where the prefabs may have different values than we want.
Green underlines are suggestions.
The magnetic pinch / hand physics conflict arises because the Magnetic Pinch script is programmed to find the closest rigidbody that isn't the hand's graphics model, which in this case happens to be the physics hand model.
I haven't experimented much with this one yet, so I can't add much detail.
Currently we can't override HandController's Update() function, so we'll use the LateUpdate() function instead.
Check the number of Leap hands visible, and if there are more than zero, disable the Sixense (or any other) hand trackers.
Vector3 mapping to movements:
Left = negative X
Right = positive X
Forward = positive Z
Backward = negative Z
Jump = positive Y
(Additionally) Crouch = negative Y
Note – the entire code for the class is not shown here, only the relevant parts. If you'd like some help with this, feel free to contact me @chrisjz.
mainCamera needs to be declared as either the only camera following the player, or one of the two Oculus Rift cameras.
Set the current tracking position to the camera's local position, then calculate the difference by subtracting the initial camera position from the current one.
If the head position is further than the minimum from the centre, set the movement direction multiplied by sensitivity; otherwise do not move in that direction.
directionVector is then used to set the movement direction and strength.
If jump via positional tracking is:
Enabled – trigger the jump action when the camera's difference position exceeds the set minimum on the Y axis
Disabled – map the jump keyboard key to the jump action