The document describes a 5-step process for generating realistic facial animation in real time from motion capture data and a set of selected feature points:

1) Record full motion capture data of a human speaker.
2) Choose a set of key feature points from the motion capture data.
3) Select the corresponding points on the target 2D or 3D model.
4) Compute a weighted region of influence around each feature point on the model.
5) Animate the model each frame by moving the feature points, and their weighted neighboring vertices, according to the displacement vectors taken from the motion capture data.
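
The following is a minimal sketch of steps 4 and 5, assuming a Gaussian distance falloff for the weighted regions and a fixed influence radius; the document does not specify the weighting function, and the function names, radius value, and toy mesh here are hypothetical illustrations only.

```python
import numpy as np

def compute_feature_weights(vertices, feature_points, radius=0.15):
    """Step 4: weight every mesh vertex relative to each feature point,
    using a Gaussian falloff within the given radius (an assumed choice).
    Vertices far from all feature points receive near-zero weights."""
    # Pairwise distances, shape (num_vertices, num_features).
    diffs = vertices[:, None, :] - feature_points[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    weights = np.exp(-(dists / radius) ** 2)
    # Normalize so each vertex's weights sum to 1 across feature points,
    # guarding against division by zero for very distant vertices.
    totals = weights.sum(axis=1, keepdims=True)
    return np.where(totals > 1e-8, weights / totals, 0.0)

def animate_frame(vertices, weights, feature_displacements):
    """Step 5: move each vertex by the weighted blend of the displacement
    vectors measured at the feature points for the current mocap frame."""
    return vertices + weights @ feature_displacements

# Minimal usage: a toy 3D point set standing in for the target model.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rest_vertices = rng.uniform(-0.5, 0.5, size=(100, 3))  # target model (step 3)
    feature_points = rest_vertices[[10, 42]]                # chosen feature points (step 2)
    weights = compute_feature_weights(rest_vertices, feature_points)
    # Per-frame displacements of the two feature points, as would be read
    # from the motion capture stream (steps 1 and 5).
    frame_displacements = np.array([[0.02, 0.0, 0.0],
                                    [0.0, 0.01, 0.0]])
    deformed = animate_frame(rest_vertices, weights, frame_displacements)
    print(deformed.shape)  # (100, 3)
```

Because the weights are precomputed once for the rest pose, the per-frame work reduces to one small matrix multiply, which is what makes this kind of feature-point-driven deformation suitable for real-time playback.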