2. Motivation: rich geometric features
Genova et al., CVPR 2020
Wang et al., CVPR 2018
Park et al., CVPR 2019
● Reality vs. Practicality
3. Outline
● Point upsampling
[CVPR 2019]
● Differentiable Point Rendering
[SIGGRAPH Asia 2019]
● Representation-agnostic shape deformation
[CVPR 2020]
4. Patch-based Progressive 3D Point Upsampling
Wang Yifan¹, Shihao Wu¹, Hui Huang², Daniel Cohen-Or²,³, Olga Sorkine-Hornung¹
¹ETH Zurich, ²Shenzhen University, ³Tel Aviv University
5. Motivation / Objective
● Previous works cannot recover details.
● Goal: reproduce fine-grained details even for high upsampling ratios and sparse inputs, using prior knowledge harnessed from external data by a deep neural network.
[Figure: 16× upsampling comparison: input, EAR (Huang et al.), PU-Net (Yu et al.), ours, ground truth]
6. Key Idea
1. different levels of detail
2. progressive training [ProSR, CVPRW 2018]
3. adaptive receptive field
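As a minimal PyTorch sketch of such a progressive cascade (the sub-net internals below are hypothetical placeholders, not the paper's architecture): a 16× upsampler is built from four 2× units applied coarse to fine.

import torch
import torch.nn as nn

class UpsampleUnit(nn.Module):
    # One 2x upsampling sub-net: lifts N points to 2N points by
    # predicting two offsets per input point (placeholder internals;
    # the real units use patch extraction and feature expansion).
    def __init__(self, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 2 * 3))

    def forward(self, pts):                      # pts: (N, 3)
        offsets = self.mlp(pts).view(-1, 3)      # (2N, 3)
        return pts.repeat_interleave(2, dim=0) + offsets

# 16x = four cascaded 2x units; in progressive training the units
# would be activated and supervised one by one, coarse to fine.
units = nn.ModuleList(UpsampleUnit() for _ in range(4))
pts = torch.rand(625, 3)
for unit in units:
    pts = unit(pts)   # 625 -> 1250 -> 2500 -> 5000 -> 10000 points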
8. Key Idea
1. Tailor for different levels of detail with a sub-net
[Figure: input; iterative 4×+4× (PU-Net); direct 16×]
9. Key Idea
1. Tailor for different levels of detail with a sub-net
2. Train all levels of detail end-to-end in a progressive fashion.
[Figure: input; iterative 4×+4× (PU-Net); direct 16×]
10. Key Idea
1. Tailor for different levels of detail with a sub-net
2. Train all levels of detail end-to-end in a progressive fashion.
3. Reduce the receptive field progressively: large context for large-scale detail, local context for fine-scale detail.
[Figure: input; iterative 4×+4× (PU-Net); direct 16×]
11. Key Idea
1. Tailor for different levels of detail with a sub-net
2. Train all levels of detail end-to-end in a progressive fashion.
3. Reduce the receptive field progressively: large context for large-scale detail, local context for fine-scale detail.
[Figure: input; iterative 4×+4× (PU-Net); direct 16×; ground truth]
27. Goal: detail-preserving shape deformation
● Given two arbitrary shapes without correspondences, we can deform one to
match the other while preserving its rich geometric details.
[Figure: source shape, target shape, deformed shape]
30. Motivation: detail preservation
Training Loss = shape alignment (point-to-point distance) + detail preservation (geometry regularizer)
● Shape alignment and detail preservation are two competing objectives.
Groueix et al., CGF 2019
[Figure: source and target]
31. Our key idea: dimension reduction using cages
● Constrain the input and output spaces, reducing the dimensionality of the deformation space.
[Figure: translation in the reduced space]
32. Interlude: Cage-based deformations
● Classic shape modeling technique
● Coarse enclosing mesh
● Associated with the shape via differentiable “cage coordinates”
● Deformation by interpolation
Ju et al., SIGGRAPH 2005
Joshi et al., TOG 2007
Lipman et al., TOG 2008
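To make the interpolation step concrete, here is a minimal NumPy sketch; it assumes the cage coordinates (e.g. mean value coordinates from the works cited above) have already been computed, which is the nontrivial part.

import numpy as np

def cage_deform(phi, deformed_cage_vertices):
    # Cage-based deformation by interpolation.
    # phi: (N, C) cage coordinates of N shape points w.r.t. C cage
    #      vertices, precomputed on the source cage (rows sum to 1).
    # deformed_cage_vertices: (C, 3) offset cage vertex positions.
    # Returns the (N, 3) deformed shape points.
    return phi @ deformed_cage_vertices

# Toy example: two shape points inside a 4-vertex (tetrahedral) cage.
phi = np.array([[0.25, 0.25, 0.25, 0.25],
                [0.70, 0.10, 0.10, 0.10]])
cage = np.array([[0., 0., 0.], [1., 0., 0.],
                 [0., 1., 0.], [0., 0., 1.]])
print(cage_deform(phi, cage + 0.5))  # translating the cage translates the shape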
35. Advantage: efficiency and robustness
● Network complexity and computation are constant with respect to the input shape
● Deformation quality is independent of sampling and local noise
[Figure: CageNet and DeformNet predict per cage-vertex offsets; labels: deformed shape, 120k vertices, target shape, 1024 vertices]
39. Conclusion and future work
● A novel representation for detail-preserving deformations
● Future work: bring other classic interactive shape modeling techniques to various applications and inputs
Jacobson et al., SIGGRAPH 2011
Our lab focuses on rich geometric details.
Previous optimization-based methods focus on sharp features.
Tailor for different levels of detail with a sub-net.
Train all levels of detail end-to-end in a progressive fashion.
Reduce the receptive field progressively: large context for large-scale detail, local context for fine-scale detail.
The following slides visually demonstrate how each key idea contributes to a better result. Shown here are the baseline results.
Thanks for the introduction. Today I'm going to talk about a differentiable renderer for point clouds.
One motivation for our project is to leverage advanced neural image processing networks for classical point cloud processing tasks.
As we should know by now, differentiable rendering aims to infer the underlying scene parameters, such as ..., from the rendered images.
It is typically used to update these scene parameters based on changes in the image.
The key is to define a gradient of the rendering function with respect to the scene parameters.
Use our DR to render images of the noisy point cloud, denoise the images with a state-of-the-art image denoising network, then use the denoised outputs as target images and propagate the change through the DR back to the point cloud.
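A self-contained toy sketch of that loop; render and denoiser below are stand-ins for our differentiable renderer and a real image denoising network, chosen only to illustrate the gradient flow.

import torch
import torch.nn.functional as F

def render(points, view):
    # Toy stand-in for the differentiable point renderer: any
    # differentiable map from points to an image illustrates the idea.
    return torch.tanh(points @ view).mean(dim=0, keepdim=True)

def denoiser(image):
    return 0.9 * image      # toy stand-in for the image denoising network

points = torch.rand(1000, 3, requires_grad=True)   # "noisy" point cloud
view = torch.rand(3, 64)                           # toy camera projection
optim = torch.optim.Adam([points], lr=1e-3)

for _ in range(100):
    image = render(points, view)
    with torch.no_grad():
        target = denoiser(image)      # denoised image becomes the target
    loss = F.mse_loss(image, target)
    optim.zero_grad()
    loss.backward()                   # gradients flow through the renderer
    optim.step()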
The advantage compared to traditional denoising techniques is that many contemporary neural networks, such as generative adversarial networks, are able to hallucinate plausible details.
Inspired by previous work, Paparazzi, we can use the DR to propagate image filtering to 3D models, in this case 3D point clouds.
The advantage of our DR is that it is very robust to noisy inputs compared to mesh-based renderers.
Lastly, as demonstrated before, our DR can also be used for image-based point cloud deformation with large topology changes, such as the one shown here.
Hi, my name is Yifan. In the next 5 minutes I'll show you a cool new method for 3D shape deformation.
The primary motivation of this work is detail preservation.
For example, the Iron Throne model is deformed to match an arbitrary sofa model while keeping all the sword embellishments intact.
This technique can be used in many areas, such as design and animation.
First, let me convince you that this is not a trivial task, using a pair of simple chair models. We can see significant distortions in the deformation result generated by the state-of-the-art method.
Secondly, shape alignment and feature preservation are in fact two competing, if not conflicting, objectives.
As the example here shows, if we naively force the result to both match the target and preserve the features of the source shape, the resulting geometry becomes a somewhat smudged version of the two.
Our key idea is to improve the regularity of the deformation by reducing the dimensionality of the deformation space. We do so by representing the input and output with a much coarser mesh, called a cage.
This idea is inspired by classic interactive shape modeling techniques.
Each of the cage vertices is associated with the underlying shape via so-called cage coordinates.
Deformation is driven by offsetting the cage vertices; the deformed shape is obtained by interpolating the cage vertices using these coordinates.
While conventionally cages are created manually by artists, I'll show you how we automatically create them for arbitrary input shapes.
We use a neural network to automatically generate the enclosing cage conditioned on the source shape, from which we can compute the coordinates deterministically. Then another network deforms this cage by offsetting its vertices. Finally, we apply the interpolation to obtain the deformed shape. Because the interpolation is smooth by definition, local geometric details are preserved naturally.
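Sketched as code, that forward pass might look as follows; cage_net, deform_net, and compute_coordinates stand in for CageNet, DeformNet, and the deterministic coordinate computation, whose internals are not shown here.

def deform_forward(source_shape, source_points, target_points,
                   cage_net, deform_net, compute_coordinates):
    # Works with NumPy arrays or torch tensors alike.
    cage = cage_net(source_points)                      # (C, 3) enclosing cage
    # Coordinates are computed deterministically (no learned weights)
    # and are differentiable w.r.t. the cage vertices.
    phi = compute_coordinates(source_shape, cage)       # (N, C)
    offsets = deform_net(source_points, target_points)  # (C, 3) per vertex
    # Smooth interpolation preserves local geometric detail by construction.
    return phi @ (cage + offsets)                       # (N, 3) deformed shape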
Our network can be trained using any shape alignment loss between the source and the target shape.
We can control the rigidness of the cage via the skinning weights by penalizing negative values, as these values lead to extrapolation instead of interpolation.
Additional regularizers can be applied to improve the preservation of higher-level features, such as symmetry.
Since the cage operations are fully differentiable, the entire network can be optimized end-to-end.
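As one concrete instantiation (a sketch, not the paper's exact objective): chamfer distance for alignment plus a hinge penalty on negative cage coordinates for rigidness control; the symmetry regularizer and loss weights are omitted.

import torch

def training_loss(deformed, target_points, phi, w_neg=1.0):
    # Shape alignment: symmetric chamfer distance between point sets.
    d = torch.cdist(deformed, target_points)        # (N, M) pairwise distances
    chamfer = d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
    # Rigidness: negative cage coordinates mean extrapolation rather
    # than interpolation, so they are penalized.
    neg_penalty = torch.relu(-phi).sum(dim=1).mean()
    return chamfer + w_neg * neg_penalty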
Since the deformation is per cage-vertex, our network does not scale with the input resolution and can thus handle very complex shapes, like the chair shown below.
Furthermore, since the network only needs to focus on global information, it is robust to noisy and partial inputs.
We can use our method to generate new variations of shapes, a practical tool for 3D stock amplification and design.
As the cage deformation is not strictly tied to the enclosed shape, we can apply an existing deformation to a dissimilar source shape, a technique often referred to as “deformation transfer”.
In this example, our network is trained to deform a human in rest pose to various other poses.
We can then transfer the predicted deformation to new characters, in this case a skeleton and a robot.
This is achieved by optimizing the source cage for the new character.
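In code, this re-fitting is plain gradient descent over the cage vertices; the concrete fitting objective used in the paper is not spelled out here, so fit_loss is an explicit placeholder.

import torch

def refit_cage(initial_cage, fit_loss, steps=200, lr=1e-3):
    # Treat the source cage vertices as free variables and optimize
    # them so the cage fits the new character.
    cage = initial_cage.clone().requires_grad_(True)
    optim = torch.optim.Adam([cage], lr=lr)
    for _ in range(steps):
        loss = fit_loss(cage)   # placeholder: alignment to the new character
        optim.zero_grad()
        loss.backward()
        optim.step()
    return cage.detach()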
Compared to existing works, our method doesn’t require known correspondences with the target shape at inference time.
To conclude, we proposed a novel representation for shape deformations that is detail preserving by construction.
We envision that more classic interactive shape modeling techniques, such as cage-deformation used in this paper, can be incorporated into neural networks for different types of input and applications.