Data-Driven Facial Animation Based on Feature Points
Short presentation on a MEL program I wrote. Based on research with Zhigang Deng.


Transcript

  • 1. Data-Driven Facial Animation Based on Feature Points Pamela Fox
  • 2. Goal of Project
    • Develop a system that generates realistic, real-time facial animation for any 2-D or 3-D model given novel text or speech input.
  • 3. Step 1: Capture data
    • For realism, motion-capture points of a human speaker must be recorded, covering the entire phonetic alphabet along with facial gestures and expressions.
  • 4. Step 2: Choose feature points from mocap data
    • A set of points from mocap data must be selected for use as the “feature points.”
    • Only the motion of the feature points is needed to animate a model.
  • 5. Step 3: Select same feature points on model
    • The user must tell the program which points on the model correspond to the selected mocap points.
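The correspondence in this step amounts to a lookup table from mocap marker labels to model vertex IDs. A minimal Python sketch (the original tool was written in MEL); every label and vertex index below is hypothetical, not from the original program:

```python
# Hypothetical mapping chosen by the user: mocap marker label -> model vertex ID.
# These labels and indices are illustrative only.
correspondence = {
    "jaw_center":   412,
    "lip_corner_L": 118,
    "lip_corner_R": 127,
    "brow_inner_L":  64,
}

def model_vertex_for(marker):
    """Return the model vertex the user paired with a mocap marker."""
    return correspondence[marker]
```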
  • 6. Step 4: Compute FP regions/weights
    • The program searches outward from each feature point, computing a weight for every vertex in the surrounding neighborhood until the weight falls close to 0.
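The slides do not specify the falloff function, so the sketch below assumes a Gaussian decay over Euclidean distance, cut off once the weight is near zero; the original MEL implementation may have used a different falloff:

```python
import math

def compute_region_weights(vertices, feature_point, falloff=1.0, cutoff=0.01):
    """Assign each nearby vertex a weight that decays with distance from the
    feature point; vertices whose weight falls below `cutoff` are excluded
    from the region. Gaussian falloff is an assumption, not from the slides."""
    weights = {}
    for idx, pos in vertices.items():
        d = math.dist(pos, feature_point)                   # Euclidean distance
        w = math.exp(-(d * d) / (2.0 * falloff * falloff))  # Gaussian decay
        if w >= cutoff:
            weights[idx] = w
    return weights
```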
  • 7. Step 5: Animate the model
    • Given the motion vectors of the motion-capture points, each feature point is moved, and its neighboring vertices are then moved according to their weights.
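Putting the last two steps together, a per-frame update might look like the following simplified linear blend; how the real program combines overlapping feature-point regions is not stated in the slides:

```python
def animate_frame(vertices, displacements, region_weights):
    """Apply one frame of mocap motion: for each feature point, offset every
    vertex in its region by the feature point's displacement vector scaled
    by that vertex's precomputed weight."""
    moved = dict(vertices)
    for fp, (dx, dy, dz) in displacements.items():
        for v_idx, w in region_weights[fp].items():
            x, y, z = moved[v_idx]
            moved[v_idx] = (x + w * dx, y + w * dy, z + w * dz)
    return moved
```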
