Data-Driven Facial Animation Based on Feature Points

Short presentation on a MEL program I wrote. Based on research with Zhigang Deng.

Transcript

  • 1. Data-Driven Facial Animation Based on Feature Points, Pamela Fox
  • 2. Goal of Project
    • Develop a system that generates realistic, real-time facial animation for any given 2-D or 3-D model from novel text or speech input.
  • 3. Step 1: Capture data
    • For realism, motion-capture data of a human speaking must be recorded, covering the entire phonetic alphabet as well as facial gestures and expressions.
  • 4. Step 2: Choose feature points from mocap data
    • A subset of the mocap points must be selected to serve as the “feature points.”
    • Only the motion of these feature points is needed to animate a model.
  • 5. Step 3: Select same feature points on model
    • The user must tell the program which vertices on the model correspond to the selected mocap points (see the first MEL sketch after the transcript).
  • 6. Step 4: Compute FP regions/weights
    • The program searches outward from each feature point, computing a weight for every vertex in the surrounding neighborhood until the weight falls close to 0 (second sketch below).
  • 7. Step 5: Animate the model
    • Given the per-frame motion vectors of the mocap points, each feature point is moved, and its neighboring vertices are then moved according to their weights (third sketch below).
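
A minimal MEL sketch of Step 3, assuming the corresponding model vertices have already been hand-selected in order; the procedure name is hypothetical, not part of the original program:

    // Run after selecting, in order, the model vertices that
    // correspond to the chosen mocap feature points.
    global proc string[] recordFeaturePoints()
    {
        // -flatten expands ranges like pSphere1.vtx[3:7] into single vertices
        string $featurePoints[] = `ls -selection -flatten`;
        for ($fp in $featurePoints)
            print ("Feature point: " + $fp + "\n");
        return $featurePoints;
    }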
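
For Step 4, a simplified sketch of the weight computation. It scans every vertex of the mesh and uses a linear falloff over Euclidean distance, clamped to 0 outside a chosen radius; the falloff curve and the $radius parameter are assumptions, and the actual program searches outward from each feature point rather than scanning all vertices:

    // Weight each vertex of $mesh by its distance to one feature point:
    // 1.0 at the feature point, falling linearly to 0 at $radius.
    global proc float[] computeWeights(string $mesh, string $featurePoint, float $radius)
    {
        float $weights[];
        float $fpPos[] = `xform -q -ws -t $featurePoint`;
        int $counts[] = `polyEvaluate -vertex $mesh`;
        for ($i = 0; $i < $counts[0]; $i++) {
            float $vPos[] = `xform -q -ws -t ($mesh + ".vtx[" + $i + "]")`;
            float $dist = mag(<<$vPos[0] - $fpPos[0],
                               $vPos[1] - $fpPos[1],
                               $vPos[2] - $fpPos[2]>>);
            $weights[$i] = max(0.0, 1.0 - $dist / $radius);
        }
        return $weights;
    }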
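
And for Step 5, one frame of animation under the same assumptions: the feature point's own weight is 1, so it receives the full mocap displacement, while its neighbors move by a weighted fraction of it:

    // Apply one frame's mocap displacement ($dx, $dy, $dz) of a feature
    // point to the mesh, scaled per vertex by the precomputed weights.
    global proc applyDisplacement(string $mesh, float $weights[],
                                  float $dx, float $dy, float $dz)
    {
        for ($i = 0; $i < size($weights); $i++) {
            if ($weights[$i] > 0.0)
                move -relative ($dx * $weights[$i]) ($dy * $weights[$i])
                               ($dz * $weights[$i]) ($mesh + ".vtx[" + $i + "]");
        }
    }

Repeating this per feature point and per frame, driven by the mocap motion vectors, reproduces the weighted-neighborhood deformation the slides describe.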