Data-Driven Facial Animation Based on Feature Points
Short presentation on a MEL program I wrote. Based on research with Zhigang Deng.

    Presentation Transcript

    • Data-Driven Facial Animation Based on Feature Points Pamela Fox
    • Goal of Project
      • Develop a system for generating facial animation of any given 2D or 3D model from novel text or speech input, which is both realistic and real-time.
    • Step 1: Capture data
      • For realism, motion-capture points of a human speaking must be recorded, covering the entire phonetic alphabet as well as facial gestures and expressions.
    • Step 2: Choose feature points from mocap data
      • A subset of the mocap points must be selected for use as the “feature points.”
      • Only the motion of the feature points is needed to animate a model.
    • Step 3: Select the same feature points on the model
      • The user must tell the program which points on the model correspond to the selected mocap points.
    • Step 4: Compute feature-point regions/weights
      • The program searches outward from each feature point, computing a weight for each vertex in the surrounding neighborhood until the weight falls close to 0.
    • Step 5: Animate the model
      • Given the motion vectors of the motion-capture points, each feature point is moved, and then its neighboring vertices are moved according to their weights.
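Steps 4 and 5 above can be sketched in code. The original program was written in MEL, but as an illustrative, language-neutral sketch, here is one possible version of the weight computation and weighted deformation in Python. The linear falloff function, the fixed influence `radius`, and the data layout (points as xyz tuples) are assumptions for illustration, not details taken from the slides.

```python
import math

def compute_weights(vertices, feature_points, radius):
    """Step 4: search outward from each feature point, assigning a falloff
    weight to every vertex within `radius`. The weight decays toward 0 at
    the edge of the region; near-zero weights are dropped."""
    weights = {}  # (feature_point_index, vertex_index) -> weight
    for fi, fp in enumerate(feature_points):
        for vi, v in enumerate(vertices):
            d = math.dist(fp, v)
            if d < radius:
                w = 1.0 - d / radius  # linear falloff, assumed for this sketch
                if w > 1e-6:
                    weights[(fi, vi)] = w
    return weights

def animate_frame(vertices, feature_points, fp_offsets, weights):
    """Step 5: given each feature point's motion vector for this frame
    (`fp_offsets`), displace every neighboring vertex by that vector
    scaled by the vertex's precomputed weight."""
    new_vertices = [list(v) for v in vertices]
    for (fi, vi), w in weights.items():
        for axis in range(3):
            new_vertices[vi][axis] += w * fp_offsets[fi][axis]
    return new_vertices
```

Because the weights are computed once (Step 4) and only reused per frame (Step 5), the per-frame cost is proportional to the number of influenced vertices, which is what makes the real-time goal plausible.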