Data-Driven Facial Animation Based on Feature Points

A short presentation on a MEL program I wrote, based on research with Zhigang Deng.

  1. Data-Driven Facial Animation Based on Feature Points (Pamela Fox)
  2. Goal of Project: Develop a system that generates realistic, real-time facial animation for any given 2D or 3D model from novel text or speech input.
  3. Step 1: Capture data. For realism, motion-capture data of a human speaking must be recorded, covering the entire phonetic alphabet as well as facial gestures and expressions.
  4. Step 2: Choose feature points from the mocap data. A subset of the motion-capture points must be selected to serve as the “feature points.” Only the motion of the feature points is needed to animate the model.
  5. Step 3: Select the same feature points on the model. The user must tell the program which points on the model correspond to the selected mocap points (sketched in MEL after this list).
  6. Step 4: Compute feature-point regions and weights. The program searches outward from each feature point, computing a weight for every vertex in the surrounding neighborhood until the weight falls close to zero (see the falloff sketch after this list).
  7. Step 5: Animate the model. Given the movement vectors of the motion-capture points, each feature point is moved, and then its neighboring vertices are moved according to their weights (see the animation sketch after this list).
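
Since the talk describes a MEL program, here is a minimal MEL sketch of how Step 3 might work: the user selects the model's vertices in the same order as the mocap feature points, and the script records them. The procedure name recordFeaturePoints and this selection-based workflow are my illustration, not the presentation's actual code.

    // Step 3 (hypothetical): record the user's vertex selection as the
    // model's feature points, in the same order as the mocap points.
    global proc string[] recordFeaturePoints()
    {
        // -flatten lists each selected vertex individually.
        string $fps[] = `ls -selection -flatten`;
        if (size($fps) == 0)
            error "Select the model vertices matching the mocap feature points.";
        print ("Recorded " + size($fps) + " feature points.\n");
        return $fps; // e.g. {"faceMesh.vtx[12]", "faceMesh.vtx[87]", ...}
    }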
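For Step 4, a minimal sketch of one way to compute the weights, assuming a linear falloff over Euclidean distance with a user-chosen radius. The slide's "search outward until the weight is close to 0" could equally be a topological, edge-based search; the actual falloff kernel is not specified in the talk, and all names here are illustrative.

    // Step 4 (hypothetical): weight every vertex of $mesh against one
    // feature point $fp, fading linearly from 1 to 0 over $radius.
    global proc float[] computeWeights(string $fp, string $mesh, float $radius)
    {
        float $c[] = `pointPosition -world $fp`;
        int $n[] = `polyEvaluate -vertex $mesh`;
        float $weights[];
        for ($i = 0; $i < $n[0]; $i++)
        {
            float $p[] = `pointPosition -world ($mesh + ".vtx[" + $i + "]")`;
            float $d = sqrt(pow($p[0] - $c[0], 2) + pow($p[1] - $c[1], 2)
                          + pow($p[2] - $c[2], 2));
            // 1 at the feature point itself, ~0 at the edge of the region.
            $weights[$i] = max(0.0, 1.0 - $d / $radius);
        }
        return $weights;
    }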
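And for Step 5, a sketch of the per-frame update for a single feature point, assuming the mocap data supplies one displacement vector per feature point per frame. Driving vertices with move and keying the mesh's per-vertex tweaks via setKeyframe is one plausible Maya mechanism, not necessarily the one the program uses.

    // Step 5 (hypothetical): at $frame, move every vertex in one feature
    // point's region by the mocap displacement $delta scaled by its weight.
    global proc applyMotion(string $mesh, float $weights[], vector $delta, float $frame)
    {
        currentTime $frame;
        for ($i = 0; $i < size($weights); $i++)
        {
            if ($weights[$i] <= 0.0)
                continue; // vertex lies outside this feature point's region
            // Weighted relative move of the vertex in world space.
            move -relative -worldSpace
                 (($delta.x) * $weights[$i])
                 (($delta.y) * $weights[$i])
                 (($delta.z) * $weights[$i])
                 ($mesh + ".vtx[" + $i + "]");
        }
        setKeyframe ($mesh + ".pnts"); // key the per-vertex tweaks at this frame
    }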
