Temporally Coherent
3D Animation
Akshat Singh
B.Tech IV year
Table of Contents
» Definition
» Method
» Steps Involved
» Acquisition System
» Data Acquisition
» Unified Skeletal Animation Reconstruction
» Temporally Coherent 3D Animation Reconstruction
» SIFT features
» Merging of three cameras
» Bone-length Variation
» Evaluation
Definition
A method for capturing human motion
over 360 degrees by fusing
multi-view RGB-D video data from
Kinect sensors using feature-point
sampling
Method
» Captures real-world objects, with techniques ranging
from shape matching to deformation, toward a
coherent animation
» Uses RGB data and 3D reconstruction for depth
reconstruction
» Background segmentation is done in the RGB color
space
Steps Involved
Acquisition
System
» Each input frame consists of RGB and depth
images
» Each frame is separately resampled into a 3D
point cloud
» The point clouds are merged in a
unified global coordinate system
Data Acquisition
» RGB frames from three cameras
» Frontal and profile faces are
detected in two of the cameras
» Depth data with the overlaid
skeleton from Kinect
» A unified skeleton from the two
cameras towards which the actor's
face is oriented
Merging of three cameras
» Two unified point clouds, shown in black and red
» The corresponding two skeletons after extrinsic calibration
» The unified skeleton, reconstructed
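The merging step above can be sketched in a few lines; this is a minimal NumPy sketch, assuming each Kinect's extrinsic calibration is given as a 4x4 camera-to-global rigid transform (the function and variable names are illustrative, not from the original method):

```python
import numpy as np

def merge_point_clouds(clouds, extrinsics):
    """Transform each camera's point cloud into a unified global
    coordinate frame and concatenate them.

    clouds     : list of (N_i, 3) arrays, one per Kinect camera
    extrinsics : list of 4x4 camera-to-global transforms obtained
                 from extrinsic calibration
    """
    merged = []
    for pts, T in zip(clouds, extrinsics):
        # Lift to homogeneous coordinates, apply the rigid transform,
        # and drop back to 3D.
        homog = np.hstack([pts, np.ones((len(pts), 1))])
        merged.append((homog @ T.T)[:, :3])
    return np.vstack(merged)
```

With identity extrinsics the clouds are simply concatenated; a translation in the last column of `T` shifts that camera's points into the shared frame.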
Unified Skeletal Animation
Reconstruction (Still)
» Two unified point clouds, shown
in black and red
» The corresponding two skeletons
after extrinsic calibration
» The unified skeleton, reconstructed
Unified Skeletal Animation
Reconstruction
Zoomed-out point cloud with feature
points shown in blue and green. The red
point at time-step t is to be matched,
and the green points are its five nearest
feature points.
The zoomed-in point cloud at t shows
motion vectors calculated with
respect to the 5 nearest feature points.
These motion vectors are used to
calculate the matching point at t+1,
which is not centered on any point because
the matching is resolution independent.
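The matching step described above can be sketched roughly as follows; a minimal sketch assuming the feature-point positions at t and t+1 are index-aligned (all names here are illustrative):

```python
import numpy as np

def predict_match(point_t, feats_t, feats_t1, k=5):
    """Predict where a point at time-step t lands at t+1 by averaging
    the motion vectors of its k nearest feature points.

    point_t  : (3,) point to be matched at time t
    feats_t  : (M, 3) feature-point positions at t
    feats_t1 : (M, 3) the same feature points at t+1 (index-aligned)
    """
    # Find the k feature points closest to the query point.
    dists = np.linalg.norm(feats_t - point_t, axis=1)
    nearest = np.argsort(dists)[:k]
    # Their displacements between frames are the motion vectors.
    motion = feats_t1[nearest] - feats_t[nearest]
    # The predicted match is the point moved by the mean motion;
    # it need not coincide with any sampled point, which is what
    # makes the matching resolution independent.
    return point_t + motion.mean(axis=0)
```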
Temporally Coherent 3D Animation
Reconstruction
» Estimating optical feature points
» Estimating geometrical feature points
» Mapping
» Unified skeleton reconstructed from the method above
» Alignment using motion vectors
SIFT features using a simple Euclidean
distance measure D
» Optical feature points are matched
between two RGB images
using SIFT
» Each SIFT feature has a location
q(t) = (u, v, t)
» Optical feature points L(t)
» A mapping between L(t) and
L(t + 1)
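Descriptor matching with the Euclidean distance measure D can be sketched as below; the Lowe ratio test is an added assumption for rejecting ambiguous matches, not something stated on the slide, and the function name is illustrative:

```python
import numpy as np

def match_sift(desc_t, desc_t1, ratio=0.8):
    """Match SIFT descriptors between frames t and t+1 using the
    Euclidean distance D, keeping matches that pass a ratio test.

    desc_t, desc_t1 : (N, d) and (M, d) descriptor arrays
    Returns a list of (i, j) index pairs mapping L(t) -> L(t+1).
    """
    matches = []
    for i, d in enumerate(desc_t):
        # Euclidean distance D from descriptor i to every candidate.
        D = np.linalg.norm(desc_t1 - d, axis=1)
        order = np.argsort(D)
        # Accept only if the best match is clearly better than the
        # second best (Lowe's ratio test).
        if len(order) > 1 and D[order[0]] < ratio * D[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```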
Bounding-box
and Skeleton
Overlap
Estimation
Bone-length
Variation
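A bone-length variation check could be sketched as follows: since a rigid skeleton should keep each bone at a constant length, the per-bone standard deviation across frames measures reconstruction consistency. This is a hypothetical helper, assuming joint positions over time and a parent-child bone list:

```python
import numpy as np

def bone_length_variation(joints_seq, bones):
    """Per-bone standard deviation of bone length across frames.

    joints_seq : (T, J, 3) joint positions over T frames
    bones      : list of (parent, child) joint index pairs
    """
    # Stack per-frame lengths of every bone into a (T, num_bones) array.
    lengths = np.stack([
        np.linalg.norm(joints_seq[:, c] - joints_seq[:, p], axis=1)
        for p, c in bones
    ], axis=1)
    # Near-zero variation means the reconstructed skeleton stayed rigid.
    return lengths.std(axis=0)
```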
Evaluation
» Comparison against direct RGB-D SLAM
» Comparison against feature-based RGB-D
SLAM
» Evaluation of the residual configuration
» Depth vs inverse depth in the geometric
reprojection error
» Computational time
» Failure modes
» Qualitative results
Code Link
» https://github.com/alejocb/rgbdtam
» https://www.youtube.com/watch?v=sc-hqtJtHD4
Thank you!
Any questions?
You can find me at
» LinkedIn: akshat7497
» akshat7497@gmail.com