For the talk today, I will introduce the concept of model-based coding, in particular facial analysis and the concept of dynamic bandwidths. I will spend several slides on this material because my project has less to do with the particulars of the algorithm than with its appropriate application. Next I will discuss the changes made to NSGA-II in order to make it more real-time for the application; in particular, I did this by combining it with a deterministic search. I will then present the results and conclude the presentation.
Mention the video games on the essential-parameter bullet. Alternative to sending raw video footage: creation of the “essential” parameters needed to reconstruct a scene. Imagine the most recent multiplayer video games: the characters interact with each other and the surroundings seamlessly. This is because the elements are based upon a model, not a video. A real-time analysis nightmare: this is where the similarity ends, however. Video games require no real analysis; it is all built into the game software. For model-based analysis, we need to find out what the user is doing through video processing instead of controller inputs.
Before discussing facial analysis, it is good to get an idea of the power of model-based coding. Take this scene for example: the participants in the conference are being transmitted extremely well. If you have ever participated in a video conference, you know how bad it can be. Other applications include gaming, where the character on the screen mimics your face; man-machine interaction, where a computer can read your emotions from facial expressions; and video telephony. Most model-based coding runs at or below 1 kb/s, and the last time I checked, cellular communication was close to 9600 b/s during transmission and 1200 b/s when idle.
Okay, here is how facial analysis by optimization is done. We need key animation parameters to manipulate the model of the face shown. The model-based coder… But the analysis can be completed using synthesis…
The power of the coding is well illustrated by these example models… Ideally, I would have models as good as the ones seen here, but…
I have no funding for expensive rendering software, and I am a fairly dismal programmer, so this rendering will have to do.
The current research uses stochastic training to learn how to adjust the model based upon the error in the image. Gradient methods can be done in real time but break down quickly with large movements of the head. To combat this, direct methods are used that are not dependent on training. Direct methods are farther from being real-time, but are more robust. To make them more real-time, researchers throw out FAPs. I agree that not all FAPs are needed for realism, but currently only about 2 frames per second can be achieved using 12 parameters. FAP reduction is daunting: which FAPs are most important? And for every facial scenario? Moreover, no one is approaching this from a multiple-objective standpoint.
What do I mean by dynamic bandwidth?
I chose PSNR for its computational convenience: it is fast and easy to implement.
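Since PSNR is the error measure throughout, here is a minimal sketch of the computation (pure Python over flat pixel lists; the function name and signature are illustrative, not from my implementation):

```python
import math

def psnr(reference, rendered, max_val=255.0):
    """Peak signal-to-noise ratio between a camera frame and a synthesized
    frame, both given as flat sequences of pixel intensities of equal length.

    Higher is better; identical frames give infinity.
    """
    # Mean squared error between corresponding pixels
    mse = sum((r - s) ** 2 for r, s in zip(reference, rendered)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    # PSNR in decibels relative to the peak intensity max_val
    return 10.0 * math.log10(max_val ** 2 / mse)
```

In practice one would compute this over the whole frame (or a face region of interest) each time a candidate FAP set is rendered.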
Use NSGA-II for the multiple objective optimization
Assign a premature stopping criterion
Choose bandwidth
Select FAP sets
Use deterministic algorithm
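The steps above can be sketched as a toy control loop. This is an illustrative stand-in, not my implementation: the objectives, the mutation step, and the weighted-sum polish are placeholders for the real problem (rendering distortion versus bit cost over FAP sets), and the selection here is a crude truncation rather than full NSGA-II crowding.

```python
import random

def dominates(a, b):
    """Pareto dominance for minimization: a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective tuples."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def hybrid_search(objectives, pop_size=20, stall_limit=3, max_gens=100, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-5.0, 5.0) for _ in range(pop_size)]
    best_front, stall = None, 0
    for _ in range(max_gens):
        # Gaussian perturbation stands in for crossover/mutation
        children = [x + rng.gauss(0, 0.5) for x in pop]
        merged = pop + children
        scores = {x: objectives(x) for x in merged}
        front = pareto_front(list(scores.values()))
        # Truncation selection: keep individuals closest (L1) to the front
        merged.sort(key=lambda x: min(sum(abs(a - b) for a, b in zip(scores[x], f))
                                      for f in front))
        pop = merged[:pop_size]
        # Premature stopping: quit early once the front stops changing
        if best_front is not None and sorted(front) == sorted(best_front):
            stall += 1
            if stall >= stall_limit:
                break
        else:
            best_front, stall = front, 0
    # Deterministic refinement of one compromise solution (equal-weight sum)
    x = min(pop, key=lambda v: sum(objectives(v)))
    for step in (0.1, 0.01):
        moved = True
        while moved:
            moved = False
            for cand in (x + step, x - step):
                if sum(objectives(cand)) < sum(objectives(x)):
                    x, moved = cand, True
    return x
```

The point of the sketch is the shape of the hybrid: a few multi-objective generations, an early exit when the front stalls, then a cheap deterministic pass on a chosen compromise point.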
Have a two-dimensional picture here, and explain the discrete line search.
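A minimal sketch of what I mean by a discrete line search, on a toy two-parameter error surface (the quadratic objective, integer grid, and bounds are illustrative; the real search steps through FAP values and evaluates rendering error):

```python
def discrete_line_search(f, start, step=1, bounds=(-10, 10)):
    """Coordinate-wise discrete line search on an integer grid.

    Along each dimension in turn, step in whichever direction lowers f
    until no single step improves, then move to the next dimension.
    Repeat sweeps until a full pass makes no move.
    """
    x = list(start)
    improved = True
    while improved:
        improved = False
        for d in range(len(x)):
            for direction in (+1, -1):
                while True:
                    cand = list(x)
                    cand[d] += direction * step
                    if not (bounds[0] <= cand[d] <= bounds[1]):
                        break  # hit the edge of the search range
                    if f(cand) < f(x):
                        x = cand
                        improved = True
                    else:
                        break  # no further gain along this direction
    return x
```

On the two-dimensional picture, this is exactly walking along one axis until the error stops dropping, then turning onto the other axis.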
Eric Larson December 2007 Image Coding and Analysis Laboratory, Oklahoma State University
D. Pearson, “Developments in model-based image coding,” Proceedings of the IEEE, Vol. 83, No. 6, June 1995.
I. S. Pandzic, J. Ahlberg, M. Wzorek, P. Rudol, and M. Mosmondor, “Faces Everywhere: Towards Ubiquitous Production and Delivery of Face Animation,” Proceedings of the 2nd International Conference on Mobile and Ubiquitous Media, 2003.
P. Eisert, “MPEG-4 facial animation in video analysis and synthesis,” International Journal of Imaging Systems and Technology , June 2003.
P. Eisert, “Very Low Bit Rate Coding,” Doctoral Thesis, November 2000.
J. D. Schaffer, “Multiple objective optimization with vector evaluated genetic algorithms,” 1st International Conference on Genetic Algorithms, 1985.
K. Deb, “Multi-objective genetic algorithms: problems, difficulties, and construction of test problems,” Evolutionary Computation , 1999.
K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, 2002.
F. I. Parke, “Parameterized Models for Facial Animation,” IEEE Computer Graphics and Applications, 1982.
R. Forchheimer and T. Kronander, “Image coding – from waveforms to animation,” IEEE Transactions on Acoustics, Speech, and Signal Processing , 37:1212, 1989.
C. S. Choi, K. Aizawa, H. Harashima, and T. Takebe, “Analysis and synthesis of facial image sequences in model-based image coding,” IEEE Transactions on Circuits and Systems for Video Technology , June 1994.
M. Buck, “Model based image sequence coding,” Motion Analysis and Image Sequence Coding, Ch. 10, Kluwer Academic Publishing, 1993, pp. 285-315.
N. Diehl, “Object motion estimation and segmentation on image sequences,” Signal Processing: Image Communications , Vol. 3, No. 1, February 1991, pp. 23-56.
K. Aizawa, H. Harashima and T. Saito, “Model-based analysis-synthesis image coding (MBASIC) system for a person’s face,” Signal Processing: Image Communication, vol. 1, pp. 139-152, 1989.
I. S. Pandzic and R. Forchheimer, MPEG-4 Facial Animation: The Standard, Implementation and Applications, 1st ed., John Wiley and Sons, 2002, pp. 3-41.
J. Ahlberg and R. Forchheimer, “Face Tracking for model-based coding and face animation,” International Journal on Imaging Systems Technology , Wiley Periodicals, Vol. 13, pp. 8-22, 2003.
F. Dornaika and J. Ahlberg, “Fast and Reliable Active Appearance Model Search for 3D Face Tracking,” Proceedings of Mirage 2003, March 2003.
F. Dornaika and J. Ahlberg, “Fitting 3D Face Models for Tracking and Active Appearance Model Training,” Image and Vision Computing, Vol. 24, 2006.
E. F. Carter, “The Generation and Application of Random Numbers,” Forth Dimensions, Vol. XVI, Nos. 1 & 2, Forth Interest Group, Oakland, CA, 1994.
S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, “Optimization by simulated annealing,” Science, Vol. 220, No. 4598, pp. 671-680, 1983.
T. Edgar, D. Himmelblau, and L. Lasdon, Optimization of Chemical Processes, 2nd ed., McGraw-Hill, New York, NY, 2001.
G. Reklaitis, A. Ravindran, and K. Ragsdell, Engineering Optimization: Methods and Applications, 2nd ed., John Wiley and Sons, New York, NY, 2006.
ISTface, Program from Instituto Superior Technico, standard FAP animation sequence, “wow25.fap”.
J. Jiang, A. Alwan, P. A. Keating, and T. A. Edward Jr., “On the relationship between face movements, tongue movements, and speech acoustics,” EURASIP Journal on Applied Signal Processing , 2002.
Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image Quality Assessment: From Error Visibility to Structural Similarity,” IEEE Transactions on Image Processing, Vol. 13, pp. 600-612, 2004.