DynamicFusion is a method for reconstructing and tracking non-rigid scenes in real-time by extending KinectFusion. It uses a volumetric truncated signed distance function (TSDF) to integrate depth maps from multiple viewpoints into a global reconstruction. Live depth frames are aligned to a dense surface prediction generated by raycasting the TSDF. This closes the loop between mapping and localization for tracking dynamic, non-rigid scenes.
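The TSDF integration step described above can be sketched as a per-frame volumetric update: each voxel is projected into the depth image, a truncated signed distance to the observed surface is computed, and the voxel's value is blended in by a running weighted average. The following is a minimal NumPy sketch under assumed conventions (function name, weight-of-one-per-frame averaging, and the simple pinhole projection are illustrative assumptions, not the paper's implementation, which also warps voxels non-rigidly):

```python
import numpy as np

def integrate_depth(tsdf, weights, depth, K, T_wc, voxel_size, origin, trunc):
    """Fuse one depth map into a TSDF volume (KinectFusion-style update).

    This is an illustrative sketch: rigid camera only, no non-rigid warp.
    """
    # World coordinates of all voxel centers (C-order flattening).
    idx = np.indices(tsdf.shape).reshape(3, -1).T
    pts_w = origin + (idx + 0.5) * voxel_size

    # Transform voxel centers into the camera frame and project with K.
    R, t = T_wc[:3, :3], T_wc[:3, 3]
    pts_c = pts_w @ R.T + t
    z = pts_c[:, 2]
    u = np.round(pts_c[:, 0] / np.where(z > 0, z, 1.0) * K[0, 0] + K[0, 2]).astype(int)
    v = np.round(pts_c[:, 1] / np.where(z > 0, z, 1.0) * K[1, 1] + K[1, 2]).astype(int)

    h, w = depth.shape
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0  # ignore pixels with no depth measurement

    # Truncated signed distance: positive in front of the observed surface.
    sdf = np.clip((d - z) / trunc, -1.0, 1.0)
    keep = valid & (sdf > -1.0)  # skip voxels far behind the surface

    # Running weighted average, weight 1 per frame (an assumed simple scheme).
    flat_t, flat_w = tsdf.ravel(), weights.ravel()
    i = np.flatnonzero(keep)
    flat_t[i] = (flat_t[i] * flat_w[i] + sdf[keep]) / (flat_w[i] + 1)
    flat_w[i] += 1
    return tsdf, weights
```

Raycasting the fused volume for its zero crossing then yields the dense surface prediction to which the next live frame is aligned.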
SSII2021 [SS2] Deepfake Generation and Detection – An Overview
This document provides an overview of deepfake generation and detection. It begins with an introduction to the author's background and research interests. The rest of the document is outlined as follows: definitions of deepfakes; deepfake generation techniques, including face synthesis, manipulation, reenactment, and swapping; and an overview of deepfake detection methods, covering commonly used datasets and image-based and video-based detection approaches.
42. References ②
• Separation of direct and global components
– Nayar et al., “Fast separation of direct and global components of a scene using high frequency illumination”, SIGGRAPH 2006.
• DLP projector + high-speed camera
– Narasimhan et al., “Temporal dithering of illumination for fast active vision”, ECCV 2008.
– Han, Sato, Okabe, & Sato, “Fast spectral reflectance recovery using DLP projector”, IJCV (2014).
– Maeda & Okabe, “Acquiring multispectral light transport using multi-primary DLP projector”, IPTA 2016.
– Torii & Okabe, “Separation of direct and global components for each light source color in dynamic scenes”, MIRU 2017. (+ Torii, Okabe, & Amano, MIRU 2018)
71. References ④
• Multiplexed sensing
– Schechner et al., “A theory of multiplexed illumination”, ICCV 2003.
• Lighting simulation
– Paul Debevec Home Page: http://www.pauldebevec.com/
– LeGendre et al., “Practical multispectral lighting reproduction”, TOG (2016).
• Material classification
– Liu & Gu, “Discriminative illumination: per-pixel classification of raw materials based on optimal projections of spectral BRDF”, TPAMI (2014). (+ Gu & Liu, CVPR 2012)
– Wang & Okabe, “Joint optimization of coded illumination and grayscale conversion for one-shot raw material classification”, BMVC 2017.
72. References ⑤
• Separation of reflection components
– Kobayashi & Okabe, “Separating reflection components in images under multispectral and multidirectional light sources”, ICPR 2016.
– Koyamatsu, Hidaka, Okabe, & Lensch, “Separation of reflection and fluorescence components in images under narrow-band light sources”, MIRU 2018.
• Object modeling
– Kitahara, Okabe, Fuchs, & Lensch, “Simultaneous estimation of spectral reflectance and normal from a small number of images”, VISAPP 2015.
– Kitahara & Okabe, “Modeling of fluorescent objects and its application to generating and editing images under arbitrary illumination”, FIT 2017.
– Kitahara, Okabe, & Sato, “Optimization of light source and observation wavelengths for fluorescence-based photometric stereo”, CVIM workshop, March 2018.
82. References ⑥
• 3D displays
– Lanman et al., “Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization”, SIGGRAPH Asia 2010.
– Wetzstein et al., “Tensor displays: compressive light field synthesis using multilayer displays with directional backlighting”, TOG (2012).
• Lighting simulation
– Masselus et al., “Relighting with 4D incident light fields”, SIGGRAPH 2003.
– Zhou et al., “Light field projection for lighting reproduction”, VR 2015.
– Oya & Okabe, “Image-based relighting with 5-D incident light fields”, CPCV 2017.