3. Related Work
Paris, S. et al. “Hair photobooth: geometric and photometric
acquisition of real hairstyles.” SIGGRAPH 2008.
Requires thousands of images for a single reconstruction
4. Related Work
Jakob, W. et al. “Capturing hair assemblies fiber by
fiber.” ACM SIGGRAPH Asia 2009.
Capture individual hair strands using a focal sweep
(Figure: photographed vs. rendered comparison)
5. Contributions
Passive multi-view stereo approach capable of
reconstructing finely detailed hair geometry
Robust matching criterion based on the local
orientation of hair
Aggregation scheme to gather local evidence while
taking hair structure into account
Progressive template fitting procedure to fuse
multiple depth maps
Quantitative evaluation of our acquisition system
9. System Overview
• Partial depth map constructed by minimizing an MRF
energy with graph cuts, combined with structure-aware
aggregation and depth map refinement to improve quality
11. 2. Local Hair Orientation
• Filter bank of many (e.g., 180) oriented Difference-of-
Gaussians (DoG) filters
(Figure: oriented DoG kernels; DoG graph from http://fourier.eng.hmc.edu/e161/lectures/gradient/node11.html)
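Such an oriented filter bank can be sketched as follows. The kernel size, the σ values, and the DoG ratio below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np
from scipy.ndimage import convolve

def oriented_dog_kernel(theta, size=15, sigma_u=1.0, sigma_v=4.0, k=1.6):
    """DoG filter elongated along direction theta: two anisotropic
    Gaussians sharing the long (along-strand) axis, differing across it."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    v = x * np.cos(theta) + y * np.sin(theta)   # along the strand
    u = -x * np.sin(theta) + y * np.cos(theta)  # across the strand
    g1 = np.exp(-(u**2 / (2 * sigma_u**2) + v**2 / (2 * sigma_v**2)))
    g2 = np.exp(-(u**2 / (2 * (k * sigma_u)**2) + v**2 / (2 * sigma_v**2)))
    return g1 / g1.sum() - g2 / g2.sum()

def orientation_field(img, n_angles=180):
    """Per-pixel orientation = angle of the strongest filter response."""
    thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    resp = np.stack([convolve(img, oriented_dog_kernel(t)) for t in thetas])
    return thetas[resp.argmax(axis=0)], resp.max(axis=0)
```

`orientation_field` returns both the winning angle and the peak response, which can double as a per-pixel confidence map.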
13. 2. Local Hair Orientation
• Multi-resolution pyramid of orientation fields
Coarse
Fine
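Such a pyramid can be built by repeated blur-and-halve; the number of levels and the blur σ below are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def image_pyramid(img, n_levels=3):
    """Fine-to-coarse pyramid: blur, then halve the resolution.
    An orientation field is computed at every level, so coarse levels
    provide reliable (if blurry) orientations where fine detail is noisy."""
    levels = [img]
    for _ in range(n_levels - 1):
        levels.append(zoom(gaussian_filter(levels[-1], 1.0), 0.5))
    return levels
```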
14. 3. Partial Geometry Construction
• MRF (Markov Random Field) energy formulation
y_i: noisy image
x_i: denoised image
E(x) = E_d(x) + λ E_s(x)
• Global minimization approximated
by graph cuts with α-expansion
Image from Pattern Recognition and Machine Learning
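The paper minimizes this energy with graph cuts. Purely as a toy illustration of the same E = E_d + λ E_s structure, here is iterated conditional modes (ICM, also from PRML) on a binary denoising MRF: a stand-in solver, not the paper's method.

```python
import numpy as np

def icm_denoise(y, lam=2.0, n_iters=5):
    """Greedy coordinate-wise minimization of
    E(x) = sum_i (x_i - y_i)^2 + lam * sum_{i~j} [x_i != x_j]
    over binary labels x_i in {0, 1}."""
    x = y.copy()
    H, W = y.shape
    for _ in range(n_iters):
        for i in range(H):
            for j in range(W):
                best_label, best_cost = x[i, j], np.inf
                for label in (0.0, 1.0):
                    cost = (label - y[i, j]) ** 2  # data term
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:
                            cost += lam * (label != x[ni, nj])  # smoothness
                    if cost < best_cost:
                        best_cost, best_label = cost, label
                x[i, j] = best_label
    return x
```

An isolated flipped pixel is restored because the smoothness penalty of keeping it (4·λ) outweighs the unit data cost of flipping it back.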
15. 3. Partial Geometry Construction
Approximate global minimization using graph cuts
• Boykov, Y., Veksler, O., & Zabih, R. (2001). Fast
approximate energy minimization via graph cuts. IEEE
Transactions on Pattern Analysis and Machine
Intelligence, 23(11), 1222–1239.
• コンピュータビジョン最先端ガイド1 (Computer Vision: Cutting-Edge Guide 1), Chapter 2: Graph Cuts
16. 3. Partial Geometry Construction
• MRF (Markov Random Field) energy formulation
• Smoothness term
E(D) = E_d(D) + λ E_s(D)
17. 3. Partial Geometry Construction
• MRF (Markov Random Field) energy formulation
• Smoothness term
E(D) = E_d(D) + λ E_s(D)
E_s(D) = Σ_{p∈pixels} Σ_{p′∈N(p)} w_s(p, p′) |D(p) − D(p′)|²
Depth continuity constraint between adjacent pixels p and p′
18. 3. Partial Geometry Construction
• MRF (Markov Random Field) energy formulation
• Smoothness term
E(D) = E_d(D) + λ E_s(D)
w_s(p, p′) = exp(−|θ_ref(p) − θ_ref(p′)|² / (2σ²))
Enforce strong depth continuity via a Gaussian of the orientation distance
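The weight can be written down directly. The σ value below is illustrative, and the wrap-around handling of orientations (0 and π describe the same strand direction) is an assumption about how the distance is measured:

```python
import numpy as np

def orientation_distance(t1, t2):
    """Distance between orientations in [0, pi), with wrap-around
    (0 and pi describe the same strand direction)."""
    d = np.abs(t1 - t2) % np.pi
    return np.minimum(d, np.pi - d)

def smoothness_weight(theta_p, theta_q, sigma=0.3):
    """w_s(p, p') = exp(-d(theta_p, theta_q)^2 / (2 sigma^2)):
    adjacent pixels with similar hair orientation are strongly
    encouraged to take similar depths."""
    d = orientation_distance(theta_p, theta_q)
    return np.exp(-d**2 / (2 * sigma**2))
```

The weight is 1 for identical orientations and decays rapidly across strand boundaries, so depth discontinuities are only cheap where the orientation field changes.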
19. 3. Partial Geometry Construction
• MRF (Markov Random Field) energy formulation
• Data term
E(D) = E_d(D) + λ E_s(D)
20. 3. Partial Geometry Construction
• MRF (Markov Random Field) energy formulation
• Data term
E(D) = E_d(D) + λ E_s(D)
E_d(D) = Σ_{p∈pixels} Σ_{l∈levels} e_d^(l)(p, D)
e_d^(l)(p, D) = Σ_{v∈views} ρ^(l)(θ_ref^(l)(p), θ_v^(l)(Π_v(p, D)))
θ_ref^(l): orientation field at level l of the reference view
θ_v^(l): orientation field at level l of adjacent view v
Π_v(p, D): projection of the 3D point of depth map D at pixel p onto view v
21. 3. Partial Geometry Construction
• MRF (Markov Random Field) energy formulation
• Data term
E(D) = E_d(D) + λ E_s(D)
E_d(D) = Σ_{p∈pixels} Σ_{l∈levels} e_d^(l)(p, D)
e_d^(l)(p, D) = Σ_{v∈views} ρ^(l)(θ_ref^(l)(p), θ_v^(l)(Π_v(p, D)))
ρ^(l): cost function to measure the deviation of the orientation fields
The exp(..) factor accounts for the influence of the camera pair’s different tilting angles
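The structure of the data term can be sketched as a double sum over pyramid levels and adjacent views. Everything here is schematic: `project` is a hypothetical helper standing in for calibrated-camera reprojection, the cost's σ is illustrative, and the paper's per-view tilt weighting (the exp(..) factor) is omitted:

```python
import numpy as np

def orientation_cost(theta_a, theta_b, sigma=0.3):
    """rho: penalty for deviation between two orientation
    measurements, in [0, 1) with 0 = perfect agreement."""
    d = np.abs(theta_a - theta_b) % np.pi
    d = min(d, np.pi - d)  # orientations wrap at pi
    return 1.0 - np.exp(-d**2 / (2 * sigma**2))

def data_term(p, depth, ref_pyr, view_pyrs, project):
    """e_d(p, D): sum the orientation-deviation cost over all pyramid
    levels l and all adjacent views v.  project(v, l, p, depth) maps
    pixel p at the hypothesised depth into view v at level l."""
    cost = 0.0
    for l, ref_field in enumerate(ref_pyr):
        for v, pyr in enumerate(view_pyrs):
            q = project(v, l, p, depth)
            cost += orientation_cost(ref_field[p], pyr[l][q])
    return cost
```

A depth hypothesis that reprojects onto consistently oriented pixels in every view accumulates near-zero cost; a wrong depth lands on differently oriented strands and is penalized.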
22. 3. Partial Geometry Construction
• Structure-Aware Aggregation
• Before summing the data term, guided filtering is applied at each
level l based on the orientation field
W^(l)(p, p′) = (1/|ω|²) Σ_{k:(p,p′)∈ω_k} ( 1 + ℜ{(θ(p) − μ_k)* (θ(p′) − μ_k)} / (σ_k² + ε) )
|ω|: number of pixels in the window
ε: structure awareness
μ_k: average orientation in window ω_k
σ_k: standard deviation of orientation in window ω_k
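For intuition, here is the standard gray-scale guided filter (He et al. 2010) that this aggregation builds on. The paper's variant uses the complex-valued orientation field as the guide (hence the ℜ{·} and conjugate on the slide); this sketch is real-valued, with illustrative radius and ε:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=0.05):
    """Gray-scale guided filter: smooths `src` while following the
    edges of `guide`.  Here `guide` would be the orientation field and
    `src` a per-level data-cost image, so matching evidence is
    aggregated along hair structure rather than across strands."""
    size = 2 * radius + 1
    mean = lambda a: uniform_filter(a, size)  # box-filter average
    mean_I, mean_p = mean(guide), mean(src)
    cov_Ip = mean(guide * src) - mean_I * mean_p
    var_I = mean(guide * guide) - mean_I**2
    a = cov_Ip / (var_I + eps)      # local linear model: out = a*I + b
    b = mean_p - a * mean_I
    return mean(a) * guide + mean(b)
```

Larger ε makes the filter behave more like a plain box blur; smaller ε preserves the guide's structure more aggressively.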
23. 3. Partial Geometry Construction
• Sub-pixel depth map refinement
• Similar to T. Beeler et al. “High-quality single-shot capture of facial
geometry.” ACM ToG, 29(4)
24. 3. Partial Geometry Construction
• Aggregation and refinement results
No refinement
With refinement
With refinement
and aggregation
25. 4. Final Geometry Reconstruction
• Merge depth maps from multiple views using
• Kazhdan, M. et al. “Poisson surface reconstruction.” SGP 2006
• Li, H. et al. “Robust single-view geometry and motion
reconstruction.” SIGGRAPH Asia 2009.
26. 5. Evaluation
• Quantitative evaluation using synthetic data
• (a), (f): Synthetic data
• (b): This method
• (c): (a) overlaid on (b)
• (d): Difference between (a) and (b) is on the order of millimeters
• (g): PMVS + Poisson
• (h): T. Beeler et al. “High-quality single-shot capture of facial geometry.” ACM ToG, 29(4)
• (i): This method
(The slide shows Figure 9 of the paper: on the synthetic dataset, a state-of-the-art multi-view stereo baseline reaches similar numerical accuracy but produces blobs and spurious discontinuities, while the proposed reconstruction stays visually clean.)
27. Dynamic Hair Capture
• Being completely passive,
this method is applicable to
dynamic hair capture
• Capture setup:
• 4 high-speed cameras
• 640 × 480, 100 fps
• Lower quality due to low
resolution of high-speed
cameras
28. Conclusions
• Quantitative evaluation shows that passive, multi-view
reconstruction of hair geometry based on multi-resolution
orientation fields achieves accurate measurements
• Combined with structure-aware aggregation, this method
achieves superior quality compared to other methods
• This method can be applied to capturing hair in motion
29. Latest Related Work
• Chai M. et al. “Single-View Hair Modeling for Portrait
Manipulation.” To appear in ACM TOG 31(4), to be
presented at SIGGRAPH 2012.