3. PROBLEM DEFINITION
• The VR world presents unique problems arising from system latencies
• The most noticeable impact is the latency from a physical event to the display [1]
• Gauging the effect involves measuring the latency between two human sensory events
• Physical measurement involves complicated equipment with precise positioning and orientation [2]
• Hard to automate, and hard to reproduce the precision and positioning [3]
• Hard to define a constant and consistent workload [4]
5. SIMPLIFIED X-TO-PHOTON FRAME JOURNEY: PIPELINE LATENCY CAUSES
Pipeline: Hardware event → Processing blocks → Frame compositor → Display driver

• Hardware event (e.g. motion, a camera sensor): arbitrations/clock stretching, scheduling, non-aligned locks …
• Processing blocks (e.g. sensor fusion, ISP … blocks): post- and pre-processing on frames involves complex, high-volume calculations, often pushed off to the GPU and thus affected by submit, context-switching, shader, … latencies, compounded by CPU latencies like scheduling and fence acquire/release imbalances …
• Frame compositor (typically app + OS render components): typically a consumer of an external event and a producer of a derived display frame, responsible for deciding the display frame elements based on the input event. Typically involves techniques like time warp, aligning frame production with vsync, and front-buffer rendering (see the vsync sketch after this list) … The GPU/CPU factors above affect latencies here as well.
• Display driver: the end point for software. The driver is typically responsible for picking up the composited display frame and scanning it out to the display. Latency beyond this point is the hardware panel response, color/pixel illumination latencies, etc., which are more complex to measure …
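Since the compositor's vsync alignment is named above as a latency technique, here is a minimal hedged sketch of the idea (the function names, refresh period, and render budget are illustrative assumptions, not values from this document): start rendering as late as safely possible so the frame carries the freshest input into the next scan-out.

    // Hypothetical sketch of vsync-aligned frame production. The refresh
    // period and render budget below are illustrative assumptions.
    #include <chrono>
    #include <thread>

    using Clock = std::chrono::steady_clock;
    using std::chrono::nanoseconds;

    // Compute the next vsync strictly after "now", given the last observed
    // vsync and a fixed refresh period.
    Clock::time_point next_vsync(Clock::time_point last_vsync, nanoseconds period) {
        auto periods = (Clock::now() - last_vsync) / period + 1;
        return last_vsync + periods * period;
    }

    // Sleep until just-in-time: the latest start that still leaves enough
    // render budget before the upcoming vsync.
    void wait_for_render_slot(Clock::time_point last_vsync, nanoseconds period,
                              nanoseconds render_budget) {
        std::this_thread::sleep_until(next_vsync(last_vsync, period) - render_budget);
    }

    int main() {
        auto last_vsync = Clock::now();               // stand-in for a real vsync event
        wait_for_render_slot(last_vsync,
                             nanoseconds(16'666'667), // ~60 Hz refresh (assumed)
                             nanoseconds(4'000'000)); // 4 ms render budget (assumed)
        // ... render the frame here, then submit for the next scan-out ...
    }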
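To make the stage decomposition above concrete, a minimal sketch follows (all names are hypothetical, not from this deck): if each stage stamps the frame on entry, the per-stage causes listed above surface as deltas between consecutive stamps.

    // Minimal sketch (all names hypothetical): each stage stamps the frame
    // on entry, so per-stage latency is the delta between consecutive stamps.
    #include <chrono>
    #include <cstdio>

    using Clock = std::chrono::steady_clock;

    struct FrameTrace {
        Clock::time_point event;      // hardware event (motion / camera sensor)
        Clock::time_point processed;  // after sensor fusion / ISP blocks
        Clock::time_point composited; // after app + OS frame composition
        Clock::time_point scanout;    // display driver hands the frame to the panel
    };

    static double ms(Clock::time_point from, Clock::time_point to) {
        return std::chrono::duration<double, std::milli>(to - from).count();
    }

    static void report(const FrameTrace& t) {
        std::printf("processing : %6.2f ms\n", ms(t.event, t.processed));
        std::printf("compositor : %6.2f ms\n", ms(t.processed, t.composited));
        std::printf("scan-out   : %6.2f ms\n", ms(t.composited, t.scanout));
        std::printf("end-to-end : %6.2f ms\n", ms(t.event, t.scanout));
    }

    int main() {
        FrameTrace t;
        t.event      = Clock::now(); // in real code each stamp is taken at a
        t.processed  = Clock::now(); // stage boundary as the frame passes
        t.composited = Clock::now();
        t.scanout    = Clock::now();
        report(t);
    }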
6. APPROACH TO SOLUTION
• Technically, the same data flows across multiple stages [10]
• Generally, physical events get timestamped while being formed into a frame
• This event timestamp then becomes the key for identifying that frame at the various pipeline stages
• current time @ stage – frame timestamp = frame-to-current-stage latency
• The only challenge is then to figure out a way to embed and persist this timestamp across the stages as well as at the end point [11]
• Given tesseract’s dependence on the SDK, the SDK’s render function is used to embed the frame timestamp onto the color buffer (see the sketch after this list)
• The display driver then reads out this embedded timestamp just before scan-out
• With traces at multiple places, it is now possible to use tools to generate visual latency graphs at individual stages in addition to end-to-end [12]
• Strategy adopted successfully for the pass-through camera; WIP for motion-to-photon [13]
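A hedged sketch of the embed-and-read-back approach (buffer layout, pixel choice, and function names are assumptions, not the actual tesseract/SDK or driver code): the SDK render path writes the 64-bit event timestamp into a few pixels of the color buffer, the frame carries it through the pipeline, and the display driver decodes it just before scan-out and applies current time @ stage – frame timestamp.

    // Illustrative only: encode a 64-bit event timestamp into the first 8
    // RGBA pixels of a color buffer (one byte per pixel's red channel), and
    // decode it at the other end of the pipeline. A real implementation must
    // pick pixels that survive composition and avoid lossy transforms.
    #include <chrono>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    using Clock = std::chrono::steady_clock;

    // Called from the SDK render path: stamp the frame's event timestamp.
    void embed_timestamp(std::vector<uint8_t>& rgba, uint64_t event_ns) {
        for (int i = 0; i < 8; ++i)                     // 8 pixels, 4 bytes each
            rgba[i * 4] = uint8_t(event_ns >> (i * 8)); // red channel carries one byte
    }

    // Called from the display driver just before scan-out: recover the stamp.
    uint64_t read_timestamp(const std::vector<uint8_t>& rgba) {
        uint64_t event_ns = 0;
        for (int i = 0; i < 8; ++i)
            event_ns |= uint64_t(rgba[i * 4]) << (i * 8);
        return event_ns;
    }

    static uint64_t now_ns() {
        return uint64_t(std::chrono::duration_cast<std::chrono::nanoseconds>(
            Clock::now().time_since_epoch()).count());
    }

    int main() {
        std::vector<uint8_t> frame(640 * 480 * 4, 0); // toy 640x480 RGBA buffer

        embed_timestamp(frame, now_ns());             // at frame formation

        // ... frame travels through processing, composition, display driver ...

        double latency_ms = double(now_ns() - read_timestamp(frame)) / 1e6;
        std::printf("X-to-scan-out latency: %.3f ms\n", latency_ms);
    }

Note that both sides must read the same clock domain (e.g. one shared monotonic clock), and the stamped pixels must land where composition and any lossy color transforms leave them intact.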
Footnotes:
[1] When the user sees the effect of a physical event ("X to photon"), where X is some physical event
[2] E.g. a high-speed camera or an optical sensor is almost always involved at the display end as the receiver
[3] Gives rise to a high percentage of run-to-run and precision variations; e.g. the photo sensor needs to be in a particular position on the display side
[4] E.g. how does one define a workload that produces constant motion and constant-oriented output on the display?
[5] Reacting to the physical event if required, or simply passing it on
[6] Potentially grouped together by buffers of data (e.g. display and camera)
[7] E.g. the camera keeps producing frames asynchronously to the display pipeline
[8] E.g. a missed vsync might end up with the app showing a relatively older camera frame, increasing latency for that frame with no guarantee of compensation later
[9] Typical bus communication problems while communicating with the hardware sensor
[10] E.g. the same camera frame (or the same sensor frame) finally reaches the display in some form or other, depending on the use case performing the frame transformation
[11] In the case of X to display, the end point is the display driver
[12] https://jirasw.nvidia.com/browse/AV-446 for discussions and required patches
[13] Pass-through camera latency checks added to sanity (https://jirasw.nvidia.com/browse/AV-484) along with motion to app (https://jirasw/browse/AV-430); motion to display is WIP