Facial Feature Analysis For Model Based Coding
A genetic algorithm I presented at the conference on evolutionary computation.

  • For the talk today, I will introduce the concept of model-based coding. In particular, I will discuss facial analysis and the concept of dynamic bandwidths. I will spend several slides on this material, as my project has less to do with the particulars of the algorithm than with the appropriate application of the algorithm. Next, I will discuss the changes made to NSGA-II in order to make it closer to real time for this application; in particular, I did this by combining it with a deterministic search. I will then present the results and conclude the presentation.
  • Mention the video games on the “essential parameters” bullet. Model-based coding is an alternative to sending raw video footage: it creates the “essential” parameters needed to reconstruct a scene. Imagine the most recent multiplayer video games. The characters interact with each other and the surroundings seamlessly because the elements are based upon a model, not a video. It is also a real-time analysis nightmare. This is where the similarity ends, however. Video games take no real analysis; it is all built into the game software. For model-based analysis, we need to find out what the user is doing by video processing instead of controller inputs.
  • Before discussing facial analysis, it is good to get an idea of the power of model-based coding. Take this scene, for example: the participants in the conference are being transmitted extremely well. If you have ever participated in a video conference, you know how bad it usually is. Other applications include gaming, where the character on the screen mimics your face; man-machine interaction, where a computer can get an idea of your emotions from facial expressions; and video telephony. Most model-based coding operates at or below 1 kb/s, and the last time I checked cellular communication, it was close to 9600 b/s during transmission and 1200 b/s when idle.
  • Okay, here is how facial analysis by optimization is done. We need key animation parameters to manipulate the model of the face shown. The model-based coder… But the analysis can be completed using synthesis…
  • The power of the coding is well illustrated by these example models… Ideally, I would have models as good as the ones seen here, but…
  • I have no funding for expensive rendering software, and I am a fairly dismal programmer, so this rendering will have to do.
  • The current research uses stochastic training to learn when to adjust based upon the error in the image. Gradient methods can run in real time but break down quickly with large movements of the head. To combat this, direct methods are used that are not dependent on training. Direct methods are farther from being real time, but are more robust. To make them closer to real time, researchers throw out FAPs. I agree that not all FAPs are needed for realism, but currently only about 2 frames per second can be processed using 12 parameters. FAP reduction is daunting: which FAPs are most important? For every facial scenario? And no one is approaching this from a multiple-objective standpoint.
  • What do I mean by dynamic bandwidth?
  • I chose to use PSNR for its computational convenience: it is fast and easy to implement.
  • Use NSGA-II for the multiple-objective optimization; assign a premature stopping criterion; choose the bandwidth; select FAP sets; use the deterministic algorithm.
  • Have a two-dimensional picture here, and explain the discrete line search.
  • Transcript

    • 1. Eric Larson December 2007 Image Coding and Analysis Laboratory, Oklahoma State University
    • 2.
      • What is model-based coding?
        • Facial Analysis
        • Dealing with Dynamic Bandwidths
      • Solving a MOP quickly
        • An application specific NSGA-II, with a deterministic search
      • Results
      • Conclusion
    • 3.
      • Alternative to sending raw video footage
      • Creation of “essential” parameters needed to reconstruct a scene
      • A real-time analysis nightmare
      Copyright by Microsoft
    • 4.
      • Very Low Bit Rate Teleconferencing
      • Gaming
      • Man-Machine Interaction
      • Video Telephony
        • Telephony for the deaf
      Image Courtesy of Dr. Peter Eisert [3]
    • 5.
      • Analysis (by Synthesis)
      Image Courtesy of Dr. Peter Eisert [3]
    • 6. Images Courtesy of Dr. Peter Eisert [4]
    • 7.
      • Model generously provided by Instituto Superior Technico
      ISTface [22]
    • 8.
      • Gradient-based approximation is not robust
      • Complication of direct optimization
        • Handled by reducing FAPs
      • Do not address the problem of dynamic bandwidth
      Image Courtesy of J. Ahlberg [17]
    • 9.  
    • 10.
      • Quality Objective Function:
      • FAP Number Objective Function:
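A minimal sketch of the two objectives, assuming an 8-bit reference frame and a boolean FAP-selection mask (both assumptions; the synthesis step that renders a frame from the candidate FAPs is outside the sketch):

```python
import numpy as np

def psnr(reference, synthesized, peak=255.0):
    """Quality objective: peak signal-to-noise ratio between the camera
    frame and the frame synthesized from the candidate FAP vector."""
    mse = np.mean((reference.astype(float) - synthesized.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

def fap_count(active_mask):
    """Bandwidth objective: the number of FAPs actually transmitted.
    active_mask holds one boolean per candidate FAP."""
    return int(np.sum(active_mask))

# Toy check: a synthesized frame 5 gray levels off an 8-bit reference
ref = np.full((64, 64), 128, dtype=np.uint8)
syn = np.full((64, 64), 133, dtype=np.uint8)
print(round(psnr(ref, syn), 2))  # MSE = 25, so 10*log10(255^2/25) ≈ 34.15 dB
print(fap_count(np.array([True, False, True, True])))  # 3
```

The optimizer then minimizes the FAP count while maximizing PSNR (e.g., by minimizing its negative).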
    • 11.
      • Use NSGA-II for the multiple objective optimization
      • Assign a premature stopping criterion
      • Choose bandwidth
      • Select FAP sets
      • Use deterministic algorithm
    • 12.
      • Tournament selection used for crossover
      • Parents and children combined, sorted according to
        • Domination
        • Nearest Neighbor
      • Repeat
      From [7], NSGA-II
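The domination ranking above can be sketched as follows. For clarity this is the plain quadratic-time version of non-dominated sorting rather than the bookkeeping-optimized procedure from the NSGA-II paper, and the sample objective values are made up:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Split objective vectors into successive Pareto fronts -- the
    ranking NSGA-II applies to the combined parent+child population."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Objectives per individual: (-PSNR in dB, FAP count), both minimized
pop = [(-30.5, 12), (-28.0, 8), (-35.1, 18), (-27.0, 14)]
print(non_dominated_sort(pop))  # → [[0, 1, 2], [3]]
```

Within a front, NSGA-II breaks ties by the nearest-neighbor (crowding) measure so the retained front stays spread out.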
    • 13.
      • while {a search direction of improvement can be found}
        • for {each dimension, step 20 units}
          • if the step is favorable, another step is made
          • else, choose the next dimension
      • find the direction of steepest descent from the original point and the improved point
      • while {step size scaling constant > 0.0001}
        • take a step in the steepest-descent direction
          • if the new point is favorable, increase the step size by a factor of two
          • else, decrease the step size by a factor of ten
        • update the starting individual with the new individual
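The pseudocode above can be sketched in Python. Here `f` stands in for the scalar error being minimized, and probing each dimension in both directions is my assumption (the slide does not specify sign handling), so treat this as a sketch of the idea rather than the implementation used in the talk:

```python
import numpy as np

def pattern_search(f, x0, probe=20.0, tol=1e-4):
    """Deterministic polish step: probe each dimension with a fixed step,
    then line-search along the improving direction, doubling the step
    scale on success and shrinking it tenfold on failure until the
    scale falls below tol."""
    x = np.asarray(x0, dtype=float)
    while True:
        base = x.copy()
        # Exploratory phase: step each dimension while the step is favorable
        for d in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[d] += sign * probe
                while f(trial) < f(x):
                    x = trial.copy()
                    trial[d] += sign * probe
        direction = x - base  # improving direction found this sweep
        if not np.any(direction):
            return x  # no search direction of improvement remains
        # Pattern phase: march along the direction with an adaptive scale
        scale = 1.0
        while scale > tol:
            trial = x + scale * direction
            if f(trial) < f(x):
                x = trial         # favorable: double the step size
                scale *= 2.0
            else:
                scale /= 10.0     # unfavorable: shrink it by a factor of ten

# Toy check on a quadratic bowl; converges to the origin
result = pattern_search(lambda v: float(np.sum(v ** 2)), [100.0, -60.0])
print(result)
```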
    • 14.
      • Pareto fronts
    • 15.

      | Max Bandwidth (Uncompressed) | Frame No. | Selected FAP Sets | Best PSNR | Mean PSNR (Over 3 runs) | Mean Function Evaluations |
      |---|---|---|---|---|---|
      | Medium (~4.8 Kbits/s at 25 fps) | 0 | 0 (3), 1 (2), 2, 4, 5 (2), 6, 9, 10, 11 (2), 12, 13, 15 (3), 16 (3) | 30.57 dB | 30.36 dB | 779 |
      | | 1 | 0 (3), 1 (2), 2, 4, 5 (3), 6 (2), 8, 9, 10, 11, 12, 13, 14 (2), 15 (3), 16 (3), 17 (3) | 35.14 dB | 32.54 dB | 690 |
      | | 2 | 0 (2), 1, 2, 4, 5 (2), 6 (2), 7, 8, 10, 11 (2), 12 (2), 13 (2), 14 (3), 15 (3), 16 (3), 17 (2) | 32.09 dB | 29.50 dB | 392 |
      | | 3 | 0 (2), 1, 2 (2), 5, 6 (2), 7 (2), 8, 9 (2), 11 (2), 12, 13 (2), 14 (2), 15 (3), 16 (3), 17 | 33.20 dB | 29.99 dB | 561 |
      | | 4 | 0 (2), 1, 2 (2), 3, 5 (2), 6 (2), 7, 8, 10 (2), 11 (2), 13 (2), 14 (2), 15 (3), 16 (2), 17 (2) | 32.98 dB | 28.14 dB | 415 |
      | | 5 | 0 (2), 1 (2), 2 (2), 3, 6 (2), 7 (2), 8, 9 (2), 10 (2), 11, 12 (3), 13, 14 (3), 15 (3), 16 (2), 17 (2) | 32.90 dB | 28.73 dB | 299 |
      | | 6 | 0 (2), 1 (3), 2, 5, 7 (2), 8 (3), 9, 10 (2), 11, 12, 14, 15 (3), 16 (3), 17 | 32.13 dB | 30.89 dB | 748 |
      | | 7 | 0 (3), 1 (2), 4, 5, 6, 7 (3), 8 (3), 11 (2), 12 (2), 13, 14, 15 (3), 16 (3), 17 (2) | 31.91 dB | 29.51 dB | 445 |
      | | 8 | 0 (3), 2, 4, 5 (2), 6 (2), 8, 9, 11 (2), 12 (2), 13 (2), 14 (2), 15 (3), 16 (3), 17 (2) | 30.97 dB | 29.53 dB | 726 |
      | | 9 | 0 (3), 1 (2), 3, 5 (2), 6 (2), 7, 8, 9 (2), 10 (2), 11 (2), 12 (2), 14, 15 (3), 16 (3), 17 | 30.96 dB | 28.99 dB | 451 |
      | | 10 | 0 (3), 2, 3, 5, 6, 7, 8, 9, 10 (2), 11 (2), 12 (2), 13 (2), 14 (2), 15 (3), 16 (3), 17 (2) | 30.21 dB | 28.80 dB | 527 |
      | Low (~2.4 Kbits/s at 25 fps) | 0 | 0, 7, 8 (2), 11 (2), 14 (2), 15 (3), 16 (2) | 29.95 dB | 27.13 dB | 573 |
      | | 1 | 0, 5, 8, 11 (2), 12, 14, 15 (3), 16 (2), 17 (3) | 33.23 dB | 29.46 dB | 595 |
      | | 2 | 8, 10, 11, 12 (2), 13 (2), 14, 15 (3), 16 (2), 17 (3) | 32.02 dB | 27.21 dB | 773 |
      | | 3 | 2, 5, 6, 8, 9, 12 (2), 14, 15 (3), 16, 17 | 28.77 dB | 24.34 dB | 808 |
      | | 4 | 1, 9 (2), 10, 11, 12 (2), 14 (2), 15 (3), 17 (3) | 22.99 dB | 22.80 dB | 745 |
      | | 5 | 1, 2, 4, 5, 6, 9, 11, 12, 14, 15 (3), 16 (2), 17 | 29.25 dB | 26.93 dB | 446 |
      | | 6 | 2, 5, 6, 9 (2), 10, 11 (2), 12, 14 (2), 15 (2), 16 (3), 17 | 29.67 dB | 25.75 dB | 376 |
      | | 7 | 1, 2, 7, 8, 9, 10, 12, 14, 15 (3), 16 (3), 17 | 29.01 dB | 28.41 dB | 386 |
      | | 8 | 1, 3, 9, 12, 13, 15, 16 (3) | 28.97 dB | 23.98 dB | 529 |
      | | 9 | 0, 5, 9, 10, 11, 12, 15 (2), 16 (3), 17 | 28.79 dB | 25.93 dB | 694 |
      | | 10 | 3, 5 (2), 6 (2), 9, 10 (2), 12, 15, 16 (3) | 27.56 dB | 24.25 dB | 226 |
    • 16.
      • Histogram of all resultant individuals
    • 17.
      • Video Sequence
      Frame 90 Low Medium
    • 18. Frame 93 Low Medium
    • 19. Frame 96 Low Medium
    • 20. Frame 99 Low Medium
    • 21. Frame 102 Low Medium
    • 22. Frame 105 Low Medium
    • 23. Frame 108 Low Medium
    • 24. Frame 111 Low Medium
    • 25. Frame 114 Low Medium
    • 26. Frame 117 Low Medium
    • 27. Frame 120 Low Medium
    • 28.
      • Deficiencies can be traced back to selection of PSNR
      • Future work should include error functions like SSIM or Eigen-faces
      • Algorithm works
        • Accentuates the details of PSNR
    • 29.
      • D. Pearson, “Developments in model-based image coding,” Proceedings of the IEEE, Vol. 83, No. 6, June 1995.
      • I. Pandzic, J. Ahlberg, M. Wzorek, P. Rudol, and M. Mosmondor, “Faces Everywhere: Towards Ubiquitous Production and Delivery of Face Animation,” Proceedings of the 2nd International Conference on Mobile and Ubiquitous Media, 2003.
      • P. Eisert, “MPEG-4 facial animation in video analysis and synthesis,” International Journal of Imaging Systems and Technology, June 2003.
      • P. Eisert, “Very Low Bit Rate Coding,” Doctoral Thesis, November 2000.
      • J. D. Schaffer, “Multiple objective optimization with vector evaluated genetic algorithms,” 1st International Conference on Genetic Algorithms, 1985.
      • K. Deb, “Multi-objective genetic algorithms: problems, difficulties, and construction of test problems,” Evolutionary Computation, 1999.
      • K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, 2002.
      • F. I. Parke, “Parameterized Models for Facial Animation,” IEEE Computer Graphics and Applications, 1982.
      • R. Forchheimer and T. Kronander, “Image coding – from waveforms to animation,” IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 37, p. 1212, 1989.
      • C. S. Choi, K. Aizawa, H. Harashima, and T. Takebe, “Analysis and synthesis of facial image sequences in model-based image coding,” IEEE Transactions on Circuits and Systems for Video Technology, June 1994.
      • M. Buck, “Model based image sequence coding,” in Motion Analysis and Image Sequence Coding, Ch. 10, Kluwer Academic Publishing, 1993, pp. 285-315.
      • N. Diehl, “Object motion estimation and segmentation on image sequences,” Signal Processing: Image Communications, Vol. 3, No. 1, February 1991, pp. 23-56.
      • K. Aizawa, H. Harashima, and T. Saito, “Model-based analysis-synthesis image coding (MBASIC) system for a person’s face,” Signal Processing: Image Communication, Vol. 1, pp. 139-152, 1989.
      • I. S. Pandzic and R. Forchheimer, “MPEG-4 Facial Animation: The Standard, Implementation, and Applications,” 1st Ed., John Wiley and Sons, 2002, pp. 3-41.
      • J. Ahlberg and R. Forchheimer, “Face tracking for model-based coding and face animation,” International Journal of Imaging Systems and Technology, Wiley Periodicals, Vol. 13, pp. 8-22, 2003.
      • F. Dornaika and J. Ahlberg, “Fast and Reliable Active Appearance Model Search for 3D Face Tracking,” Proceedings of Mirage 2003, March 2003.
      • F. Dornaika and J. Ahlberg, “Fitting 3D Face Models for Tracking and Active Appearance Model Training,” Image and Vision Computing, Vol. 24, 2006.
      • E. F. Carter, “The Generation and Application of Random Numbers,” Forth Dimensions, Vol. XVI, Nos. 1 & 2, Forth Interest Group, Oakland, California, 1994.
      • S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, “Optimization by simulated annealing,” Science, Vol. 220, No. 4598, pp. 671-680, 1983.
      • T. Edgar, D. Himmelblau, and L. Lasdon, Optimization of Chemical Processes, 2nd Edition, McGraw-Hill, New York, NY, 2001.
      • G. Reklaitis, A. Ravindran, and K. Ragsdell, Engineering Optimization: Methods and Applications, 2nd Edition, John Wiley and Sons, New York, NY, 2006.
      • ISTface, program from Instituto Superior Technico, standard FAP animation sequence, “wow25.fap”.
      • J. Jiang, A. Alwan, P. A. Keating, and T. A. Edward Jr., “On the relationship between face movements, tongue movements, and speech acoustics,” EURASIP Journal on Applied Signal Processing, 2002.
      • Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image Quality Assessment: From Error Visibility to Structural Similarity,” IEEE Transactions on Image Processing, Vol. 13, pp. 600-612, 2004.