Oculus Insight: Building the Best VR - Aaron Davies / Mary Chan
The rapid growth of the VR industry has brought to light new opportunities. Best practices are emerging to guide developers in creating experiences that will thrill and excite users, while at the same time providing a safe and comfortable VR interaction. Find out what works, why VR is important, and learn the important actions you can take to deliver the highest-quality consumer-grade VR interaction possible.
The Rift is a virtual reality head-mounted display developed by Oculus VR. It was initially proposed in a Kickstarter campaign, during which Oculus VR raised $2.4 million.
The simulation aspect of the Oculus Rift can be put to use as a tool for training. It's one of the more obvious non-gaming applications for the device, mostly because non-VR simulations already exist in many fields, but the quality of simulation is what's important here.
"Improving the VR experience, from the authors to the users"
Creating an immersive virtual reality application is a big challenge: choosing (or creating) the right hardware, choosing (or creating) the right software, and finally crafting the user experience. The hardware is increasingly powerful and accessible, but we don't know how to make the best of it. This is in part because designing a VR experience is a complex software task, and is also due to our limited understanding of the main component of the system: the user.
In this talk we will focus on current trends in system design and on the goals and design of MiddleVR, a generic VR plugin aimed at simplifying the creation of VR applications, and we will discuss how our understanding of human perception can be used to improve the VR experience.
GS-4145, Oxide discusses how Mantle enables game engine performance, by Dan B... - AMD Developer Central
Presentation GS-4145: Oxide discusses how Mantle enables game engine performance, by Dan Baker, Tim Kip and Guennadi Riguer at the AMD Developer Summit (APU13), November 11-13, 2013.
Building Applications with the Microsoft Kinect SDK - DataLeader.io
David Silverlight's PowerPoint presentation on the Kinect for Windows SDK, Feb. 29, 2012.
NUI = Natural User Interface: an invisible interface where the content is the interface; removing the proxy; direct manipulation; gestural interfaces.
Kinect for Windows SDK:
1. Kinect explorer
2. Installing & using the Kinect sensor
3. Setting up your dev environment
4. Skeletal tracking fundamentals
5. Working with depth data
6. Audio fundamentals
7. Camera fundamentals
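Item 5 above ("Working with depth data") comes down to a little bit-twiddling in practice. As a hedged sketch based on the Kinect for Windows SDK 1.x depth format (verify against the SDK documentation for your version), each 16-bit depth pixel packs the player index in the low 3 bits and the depth in millimetres in the bits above them:

```python
def unpack_depth_pixel(raw16):
    """Split a raw 16-bit Kinect depth value into (depth_mm, player_index).

    Assumed layout (Kinect for Windows SDK 1.x, skeletal tracking enabled):
    bits 0-2 hold the player index (0 = no player), bits 3-15 the depth in mm.
    """
    player_index = raw16 & 0x7   # low 3 bits
    depth_mm = raw16 >> 3        # upper 13 bits
    return depth_mm, player_index

# Example: a pixel 1500 mm away, belonging to tracked player 2.
raw = (1500 << 3) | 2
print(unpack_depth_pixel(raw))  # (1500, 2)
```

With player segmentation disabled, the SDK instead reports plain depth values, so check which stream format you opened before unpacking.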
VR is the ultimate reality
- What is VR ?
- What is the current state of the VR market ?
- How to create VR applications ?
- What's the future ?
London - Oculus Rift VR Meetup
12-13 December 2013
This presentation covers the key aspects of creating virtual environments and also gives a short tutorial on how to create AR apps for building custom synthetic environments.
Digital projects in Russian sport 2016 - Ivan Ryndin
Fresh case studies from professional, amateur, and youth sport.
Football: the Granatkin Memorial; bandy: the 2016 World Championship in Ulyanovsk; football: Kuban Spring 2016; ice hockey: Golden Puck 2016.
A matrix of digital projects, with conclusions and recommendations.
Basic VR Development Tutorial: Integrating Oculus Rift and Razer Hydra - Chris Zaharia
Basic VR tutorial on re-making the first scene in Super Mario 64 in Unity 3D integrating the Oculus Rift and Razer Hydra to control Mario and interact with objects.
Presentation includes step-by-step instructions, screenshots and source code in C#.
Also I forgot to share the code for the GrabObject script, but you can find it here:
https://github.com/chrisjz/sm64vr/blob/b5c43072d237c12ff0112fc2ef293031f69a9243/Assets/Objects/Scripts/GrabObject.cs
Here's the source code for the game mentioned in this tutorial:
https://github.com/chrisjz/sm64vr
This tutorial was presented at the Sydney Virtual Reality meetup in June 2014.
These slides use ideas from my class to develop a business model for Oculus Rift’s headset. This headset creates the perception among users that they are in a different reality as they play video games. It has much higher resolution and faster response time than do previous headsets. Although Oculus hopes to develop its own console, these slides recommend that Oculus partner with a console maker such as Sony, Microsoft, or Nintendo in order to reduce development cost and gain access to existing users and game software. Hopefully it can receive a portion of hardware and game software revenues as its headset enables console makers to increase their market share and charge more for game software.
Introduction material for 360 video. This includes the multimedia pipeline and the rendering pipeline for playback, plus a comparison of projections for 360 video rendering.
The next generation of GPU APIs for Game Engines - Pooya Eimandar
A demonstration of the new GPU API pipeline for developing a real-time game engine.
Developing for DirectX 12, Vulkan, or Metal requires a redesign of the game engine, but developers can achieve key benefits such as reduced power consumption, better CPU and GPU utilization, and multi-threading across multiple GPU devices.
Virtual Reality Application Development on Android using Google Cardboard - Apurv Nigam
DroidCon 2015 was an annual developer conference organised by HasGeek.com in Bangalore. Hundreds of Android developers participated in the conference to listen to and learn from speakers and trainers from the tech industry about emerging mobile technologies.
I was one of the invited speakers, talking about virtual reality and giving a hands-on coding session. It was purely intended to raise awareness of the core concepts of virtual reality, the next big thing in the tech industry.
For more details, please visit https://droidconin.talkfunnel.com/2015/68-virtual-reality-an-introduction-to-development-on-
Presentation Video : http://tinyurl.com/pfhz96m
Stage 3D introduction in Adobe Flash Player and Adobe AIR lets you use techniques such as deferred lighting, screen space dynamic shadow, MRT, and more through vertex and fragment shaders. Join Jean-Philippe Doiron, Principal Architect R&D at Frima Studio, and Jean-Philippe Auclair, R&D Architect, for a deep dive into GPU programming with the new Flash Player, and discover how to produce beautiful GPU effects that are reusable in your games and applications.
Virtual Reality: Learn to Maximize Present and Future Creative Possibilities! - Stephan Tanguay
2014 is the year that virtual reality will go from a bad memory from the 90s to the beginning of a new way to experience content. Already the Oculus Rift has started to transform several industries with its ability to transport you to a completely interactive virtual space. As a designer and/or developer, this transformation will change the way you look at your work.
The first time you build something on the screen and then project yourself into your own creation, you quickly feel a sense of wonder and excitement from experiencing content in a way you never expected. You will also realize that many things that work well with a typical mouse, keyboard, and computer screen work considerably differently in the virtual world. Sharing these experiences with Stephan will help you understand how to leverage what works in virtual reality, what future possibilities exist, and what to avoid, so that your virtual experiences will be effective at immersing users in your virtual creations.
Hacking for Salone: Drone Races - Di Saverio; Lippolis - Codemotion Milan 2016 - Codemotion
This year for the Salone del Mobile at frog, we came up with a funky experiment based on drones, Android, and VR. In this talk, your hosts will walk you through our Drone Race experiment, touching on topics like real-time computer vision, reactive programming for mobile, indoor positioning, and (wheeled) drone hacking. The variety and complexity of these topics is matched only by their coolness, though, so you may be puzzled, asking yourself: "Where do I start?" We will share experiences and lots of code, so that you can start right away.
Write retrogames in the web and add something more with Azure - Marco Parenzan
I have loved retrogames ever since I started using computers with my beloved Commodore 64 in the eighties. So in my (scarce) spare time I try to re-implement, not as a professional developer, the simplest and most engaging games I played back then, bringing them to the web with JavaScript and Phaser.io. I want to share that experience.
Then, since I work with Azure and the games become web apps, I try to evolve them by adding some new, modern features, as a way to experiment a bit with the various Azure services, especially those I normally use less.
A talk from the Develop Track at AWE USA 2017 - the largest conference for AR+VR in Santa Clara, California May 31- June 2, 2017.
Philipp Nagele (Wikitude): What's Next with Wikitude
An in-depth look into the recent developments at Wikitude and what the next version of the Wikitude SDK will offer augmented reality developers.
http://AugmentedWorldExpo.com
Using Intel's RealSense to create games with natural user interfaces - Justi... - BeMyApp
As technology advances, more sophisticated ways of interfacing with it are emerging. Even though new tech strives to make our apps more intuitive and easy to use, designing interfaces for those apps is not quite as straightforward. We've learned a few rules and "gotchas" when working with gesture cameras that can help make apps that use them easy and fun to use.
In this talk Justin described:
1. Different data types you can get from Intel® RealSense™ and how to get them
2. Designing an interface for a gesture camera
3. Using your hands, face, and voice as an interface
Kenn Song (VisionStar): EasyAR 2.0 - New Features and Changes - AugmentedWorldExpo
A talk from the Develop Track at AWE USA 2017 - the largest conference for AR+VR in Santa Clara, California May 31- June 2, 2017.
Kenn Song (VisionStar): EasyAR 2.0 - New Features and Changes
EasyAR is an augmented reality SDK designed to make building AR applications easy and smooth. EasyAR 2.0 is a major update with many new features since the first public release: it brings impressive experience improvements with SLAM and 3D object tracking, makes AR easier to build with APIs exported to many different programming languages, and makes AR easier to share through cloud recognition and scene recording.
http://AugmentedWorldExpo.com
Alexey Rybakov (Senior Engineer, Technical Evangelist, DataArt) - Alina Vilk
Alexey Rybakov (Senior Engineer, Technical Evangelist at DataArt) talked about the current state of the VR/AR industry and about the principles and specifics of mobile VR. Attendees looked at the development tools and ran a couple of examples on Google Daydream. In the second part of the talk, they discussed Samsung Gear VR, Google Cardboard, and Google Daydream, their similarities and differences from a programmer's point of view, and the SDKs/tools that can be used with them.
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Kubernetes & AI - Beauty and the Beast!?! @KCD Istanbul 2024 - Tobias Schneck
As AI technology pushes into IT, I wondered, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our beloved cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply AI to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of the infrastructure requirements and technologies that could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality - Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He comes with around 20 years of solution-engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
6. Oculus is...
• Palmer Luckey & John Carmack "homebrew" prototype at E3 2012
• Oculus VR founded mid 2012
• Successful Kickstarter campaign Sept 2012 ($2.4 million)
• First 10k devkits shipped March 2013
• Over 50,000 devkits shipped to date
• Over 60,000 developers on Oculus devportal
• 100s of games in development for the Rift
• 50+ top-tier show awards, best of CES 2013, E3 2013, CES 2014
• Acquired by Facebook
• Started shipping DK2 in August 2014
• Announced mobile VR kit, Gear VR
• Announced prototype of next version, Crescent Bay, at Oculus Connect 2014
10. Positional Tracking
• External camera looking at the user.
• 72° H x 52° V FOV
• 0.5-2.5 m tracking range
• Placed on the desk or looking down at the user
• Includes a tripod screw mount
11. • Relies on IR LEDs built into the Rift on all sides
• Supports swaying side to side, front and back
• Tracking works while looking up and down
• Player can turn more than 110 degrees to each side
Positional Tracking
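The camera numbers on slide 10 pin down the usable tracking volume. As a quick sketch (using the 72° x 52° FOV and 0.5-2.5 m range quoted above), the width and height of the tracked region at a given distance follow from simple trigonometry:

```python
import math

def tracking_extent(distance_m, fov_h_deg=72.0, fov_v_deg=52.0):
    """Width and height (metres) of the camera frustum at a given distance."""
    w = 2.0 * distance_m * math.tan(math.radians(fov_h_deg / 2.0))
    h = 2.0 * distance_m * math.tan(math.radians(fov_v_deg / 2.0))
    return w, h

# At the far end of the quoted 0.5-2.5 m range:
w, h = tracking_extent(2.5)
print(f"{w:.2f} m x {h:.2f} m")  # 3.63 m x 2.44 m
```

So the tracked region narrows to roughly 0.7 m x 0.5 m at the near limit, which is why camera placement on the desk (or looking down at the user) matters.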
13. Features of SDK 0.4
• Support DK1 & DK2
-New C Interface; recompile the game with new SDK
• Positional tracking
-Position origin is currently 1 meter from camera
-SDK reports SensorState with predicted Pose state
-Includes orientation, position and derivatives
-Sets flags when out of tracking range
-Falls back on head model
• Direct-to-Rift and Extended Mode
• OVR Configuration Utility
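Slide 13's "predicted Pose state" means the SDK extrapolates the sensor reading forward to the moment the frame will actually be displayed. A minimal sketch of the idea (not the actual libOVR code; just a first-order quaternion extrapolation from orientation and angular velocity, the "derivatives" the slide mentions):

```python
import math

def predict_orientation(q, angular_velocity, dt):
    """Extrapolate orientation quaternion q = (w, x, y, z) forward by dt
    seconds using body-frame angular velocity (rad/s), first order."""
    wx, wy, wz = angular_velocity
    qw, qx, qy, qz = q
    # Quaternion kinematics: dq/dt = 0.5 * q * (0, wx, wy, wz)
    dqw = -0.5 * (qx * wx + qy * wy + qz * wz)
    dqx = 0.5 * (qw * wx + qy * wz - qz * wy)
    dqy = 0.5 * (qw * wy + qz * wx - qx * wz)
    dqz = 0.5 * (qw * wz + qx * wy - qy * wx)
    p = (qw + dqw * dt, qx + dqx * dt, qy + dqy * dt, qz + dqz * dt)
    n = math.sqrt(sum(c * c for c in p))  # renormalise after the Euler step
    return tuple(c / n for c in p)

# Head turning at 90 deg/s about the vertical axis, predicted 20 ms ahead:
q = predict_orientation((1.0, 0.0, 0.0, 0.0), (0.0, math.pi / 2, 0.0), 0.020)
yaw_deg = 2.0 * math.degrees(math.asin(q[2]))  # ~1.8 degrees of predicted yaw
```

The same scheme extends to position using linear velocity, and when the camera loses the LEDs ("out of tracking range" flag), position falls back to the head model instead.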
14. Where OVR Software Stack is Heading
• C Interface: Easy to bind from other languages
• Driver DLL: Automatically support changes in HW and functionality
• OVR Service: Rift sharing and VR transitions across apps
15. SDK Rendering vs. Game Rendering
•0.2 SDK didn’t do any rendering
•Only provided parameters needed for proper rendering
•New rendering backend in SDK 0.4
•Takes over critical rendering features
•Game (app) layer gives the SDK left & right eye textures to ovrHmd_EndFrame()
16. • General Work Flow
-ovrHmd_CreateDistortionMesh
Transforms the image through UVs
More efficient to render than a pixel shader
Gives Oculus flexibility to change distortion
-ovrHmd_BeginFrame
-ovrHmd_GetEyePoses
-Stereo rendering based on EyeRenderPose(Game Scene Render)
-ovrHmd_EndFrame
SDK Rendering vs. Game Rendering
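The distortion mesh in the workflow above is just a grid whose UVs pre-warp the eye texture so the lens un-warps it. As a hedged illustration (the polynomial form is the usual radial model, but the coefficients here are illustrative, not Oculus's calibrated per-lens data, and the real pipeline also uses separate R/G/B UVs for chromatic aberration):

```python
def barrel_distort_uv(u, v, k=(1.0, 0.22, 0.24)):
    """Radial barrel distortion of a point (u, v) centred on the lens
    centre (0, 0): scale the radius by k0 + k1*r^2 + k2*r^4."""
    r2 = u * u + v * v
    scale = k[0] + k[1] * r2 + k[2] * r2 * r2
    return u * scale, v * scale

# The lens centre is untouched; points near the edge sample further out.
print(barrel_distort_uv(0.0, 0.0))  # (0.0, 0.0)
print(barrel_distort_uv(0.5, 0.0))  # (0.535, 0.0)
```

Baking this function into mesh vertex UVs once is what makes it cheaper than evaluating the polynomial per pixel in a shader, and it leaves Oculus free to change the distortion without touching the game.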
17. Finishes rendering with the following features on SDK 0.4:
•Barrel distortion with chromatic aberration & time warp
•Internal latency testing & dynamic prediction
•Low-latency v-sync and flip (even better with Direct-to-Rift)
•Health & Safety Warning
SDK Rendering vs. Game Rendering
18. • Easy to integrate:
-No shaders, meshes to create
-Pass device/system pointers and eye texture(s)
-Support OpenGL and D3D9/10/11
-Must re-apply rendering state for next frame
• Benefits:
-Better compatibility with future Oculus HW and Features
-Reduces graphics setup bugs
-Support low-latency driver screen access such as front buffer rendering, etc.
-Automatically support overlays: latency testing, camera guides, debug data, pass-through, platform overlays
SDK Rendering vs. Game Rendering
19. Supporting Game Engine
•Unity 3D, Unreal Engine 3 & 4 use SDK rendering
•In some cases, what you think is unsupported by our SDK, might actually be supported, but we still want to know!
Ping us. We will listen.
20. Extended Mode
•Headset shows up as an OS Display
•Application must place a window on Rift monitor
•Icons and windows end up in the wrong place
•Windows compositor handles Present
•There is usually at least one extra frame of latency
•More latency if no CPU & GPU sync is done
21. Direct To Rift
Outputs to Rift without Display being a part of the desktop
•Headset not seen by the OS
•Avoid jumping windows and icons
•Decouples Rift v-sync from OS compositor
•Avoids extra GPU buffering for minimum latency
•Use ovrHmd_AttachToWindow
•Window swap chain output is directed to the Rift
Hope to see Direct Mode become the longer term solution
23. VR Motion to Photon Latency
Input → USB → Game Engine → Write Display → Pixel Switching → New Image
24. VR Motion to Photon Latency
Keeping latency low is crucial for a solid VR experience
Goal is < 20 ms, and hopefully close to 5 ms
We’re almost there! (stay tuned for some numbers)
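The pipeline on slide 23 makes the 20 ms goal concrete: the stage latencies simply add up, so every stage you shorten (or skip, as Direct-to-Rift does with compositor buffering) comes straight off the motion-to-photon total. A toy budget with illustrative numbers (made up for the sketch, not Oculus measurements):

```python
# Illustrative per-stage latencies in milliseconds for the
# motion-to-photon pipeline (hypothetical numbers).
stages = {
    "sensor input": 1.0,
    "USB transfer": 2.0,
    "game engine (1 frame @ 75 Hz)": 13.3,
    "display write": 2.0,
    "pixel switching": 1.5,
}

total = sum(stages.values())
print(f"motion-to-photon: {total:.1f} ms")  # motion-to-photon: 19.8 ms
print("meets < 20 ms goal:", total < 20.0)  # meets < 20 ms goal: True
```

Note how one 75 Hz frame alone eats 13.3 ms of the budget, which is why techniques like TimeWarp, which apply fresher sensor data after rendering, matter so much.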
25. Reducing Latency -TimeWarp
•Re-projects rendering for a later point in time
•Done simultaneously with distortion
•Reduces perceived latency
•Accounts for DK2 rolling shutter
•0.4 SDK handles orientation; positional is possible
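TimeWarp's core operation can be sketched in a few lines: re-sample the head orientation just before scan-out, then rotate each view ray by the delta between the orientation used to render and the newly sampled one. This is a minimal, purely rotational (yaw-only) illustration, not the SDK's distortion-pass shader:

```python
import math

def yaw_matrix(deg):
    """Rotation about the vertical axis, as a 3x3 row-major matrix."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def apply(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def timewarp_ray(view_ray, render_yaw_deg, latest_yaw_deg):
    """Re-project a view-space ray by the head rotation that happened
    after the frame was rendered, so the old image lands where the
    world now is."""
    delta = latest_yaw_deg - render_yaw_deg
    return apply(yaw_matrix(-delta), view_ray)  # inverse rotation on the ray

# Head turned 1 degree between render and scan-out; the forward ray
# (0, 0, -1) is nudged accordingly.
warped = timewarp_ray((0.0, 0.0, -1.0), render_yaw_deg=0.0, latest_yaw_deg=1.0)
```

Because this is only a rotation of already-rendered pixels, it is cheap enough to fold into the distortion pass, which is why the slide pairs the two.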
26. Reducing Latency - TimeWarp
13.3 ms frame time (75 FPS)
Any other way to apply sensor data before the end of the frame?
TimeWarp - projected rendering, pioneered by John Carmack
27. Reducing Latency - TimeWarp
Any other way to apply sensor data before the end of the frame?
TimeWarp - projected rendering, pioneered by John Carmack
30. SDK Roadmap –Short Term
Driver Improvements & Fixes
•Multi-Monitor fixes to v-sync related latency & judder
•Reduce post-present latency
•Handle Optimus and multi-GPU scenarios
Linux Support
Asynchronous/Adaptive TimeWarp
•Reduce Judder by generating extra frames when needed
TimeWarp Layers
•Handle Cockpit and Overlays separately from the scene