Talk accompanying the paper:
Lihua You, Richard Southern, and Jian J. Zhang. Motion in Games, Lecture Notes in Computer Science, vol. 5884, pp. 207-218, 2009. doi:10.1007/978-3-642-10347-6_19
This document is a chapter from an introductory mathematical analysis textbook. It covers curve sketching, including how to find relative and absolute extrema, determine concavity, use the second derivative test, identify asymptotes, and apply concepts of maxima and minima. The chapter contains learning objectives, an outline of topics, examples of applying techniques to sketch curves and solve optimization problems, and instructional content to introduce these curve sketching concepts.
This chapter discusses exponential and logarithmic functions. It begins by introducing exponential functions and their properties, including examples of exponential growth and decay. Logarithmic functions are then introduced as the inverse of exponential functions. The chapter covers the graphs and properties of logarithmic functions, including logarithmic identities and techniques for solving logarithmic and exponential equations. Examples are provided to illustrate key concepts like compound interest, radioactive decay, and modeling population growth over time.
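As a minimal sketch of the compound-interest idea mentioned above (the principal, rate, and term below are hypothetical example values, not figures from the chapter):

```python
import math

def compound_amount(principal, rate, periods_per_year, years):
    """Discrete compounding: A = P * (1 + r/n)^(n*t)."""
    return principal * (1 + rate / periods_per_year) ** (periods_per_year * years)

def continuous_amount(principal, rate, years):
    """Continuous compounding: A = P * e^(r*t)."""
    return principal * math.exp(rate * years)

# $1000 at 5% annual interest for 10 years, compounded monthly vs. continuously
monthly = compound_amount(1000, 0.05, 12, 10)
continuous = continuous_amount(1000, 0.05, 10)
```

Continuous compounding always yields slightly more than any finite compounding frequency at the same nominal rate, which is the limiting behavior such chapters typically derive.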
This document provides an outline of Chapter 8 from the textbook "Introductory Mathematical Analysis" which covers introduction to probability and statistics. The chapter objectives are to develop basic counting principles, understand combinations and permutations, define sample spaces and events, and introduce probability, conditional probability, independent events, and Bayes' formula. The chapter is divided into sections that cover these topics, including examples applying concepts like the counting principle, permutations, combinations, sample spaces, events, and calculating probabilities.
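A quick sketch of the counting ideas listed above, using Python's standard library (the committee/officer scenario is a hypothetical example):

```python
import math

# Combinations: choose a 2-person committee from 5 people (order irrelevant)
committees = math.comb(5, 2)   # C(5,2) = 5! / (2! * 3!)

# Permutations: pick a president and a secretary from 5 people (order matters)
officers = math.perm(5, 2)     # P(5,2) = 5! / 3!

# Basic counting principle: 3 shirts x 4 pairs of pants = 12 outfits
outfits = 3 * 4
```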
This document summarizes Chapter 9 from a textbook on introductory mathematical analysis. Section 9.1 discusses discrete random variables and expected value. Section 9.2 covers the binomial distribution and how it relates to the binomial theorem. Section 9.3 introduces Markov chains and their associated transition matrices. Examples are provided for each topic to illustrate key concepts like calculating expected values, applying the binomial distribution formula, and determining probabilities using Markov chains.
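A minimal illustration of the three topics (the die, coin, and two-state chain below are standard textbook-style examples, not taken from the chapter itself):

```python
from math import comb

# 9.1 Expected value of a discrete random variable: a fair six-sided die
expected = sum(x * (1 / 6) for x in [1, 2, 3, 4, 5, 6])   # 3.5

# 9.2 Binomial distribution: P(exactly 3 heads in 5 fair coin flips)
p_three_heads = comb(5, 3) * 0.5**3 * 0.5**2

# 9.3 One step of a two-state Markov chain: pi_next = pi * P
P = [[0.9, 0.1],    # row i gives transition probabilities out of state i
     [0.5, 0.5]]
pi = [0.2, 0.8]     # current state distribution
pi_next = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]
```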
This document summarizes Chapter 10 from a mathematics textbook. The chapter covers limits and continuity. It introduces limits, such as one-sided limits and limits at infinity. It defines continuity as a function being continuous at a point if the limit exists and is equal to the function value. Discontinuities can occur if a limit does not exist or is infinite. The chapter applies limits and continuity to solve inequalities involving polynomials and rational functions. Examples are provided to illustrate key concepts like evaluating limits, identifying discontinuities, and using continuity to solve nonlinear inequalities.
The document discusses functions and graphs in chapter 2 of an introductory mathematical analysis textbook. It introduces key concepts such as functions, domains, ranges, combinations of functions, inverse functions, and graphs in rectangular coordinates. It provides examples of determining equality of functions, finding function values, combining functions, and finding inverses. It also discusses special functions, graphs, symmetry, and intercepts. The chapter aims to define functions and domains, introduce different types of functions and their operations, and familiarize students with graphing equations and basic function shapes.
This chapter discusses additional topics in differentiation including:
- Derivatives of logarithmic and exponential functions
- Elasticity of demand
- Implicit differentiation
- Logarithmic differentiation
- Newton's method for approximating roots
- Finding higher-order derivatives directly and implicitly.
Examples are provided for each topic to illustrate the differentiation techniques.
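One of the listed techniques, Newton's method, can be sketched as follows (the tolerance and the sqrt(2) example are illustrative choices, not from the chapter):

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Approximate a root of f by iterating x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Approximate sqrt(2) as the positive root of f(x) = x^2 - 2
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
```

Each iteration roughly doubles the number of correct digits near a simple root, which is why a handful of steps suffices here.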
This chapter discusses linear programming problems and methods for solving them. It introduces linear inequalities in two variables and how to represent the feasible region geometrically. The chapter describes how to formulate a linear programming problem by defining the objective function and constraints. It presents the simplex method for solving linear programming problems and discusses concepts like degeneracy, unbounded solutions, and multiple optima. The chapter also covers topics like artificial variables, minimization problems, and finding the dual of a linear programming problem.
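The geometric fact underlying the simplex method (an optimum of a bounded feasible LP occurs at a vertex of the feasible region) can be illustrated by brute-force vertex enumeration; the small problem below is hypothetical, and this is not the simplex method itself:

```python
from itertools import combinations

# Maximize z = 3x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0
# Each constraint is stored as (a, b, c) meaning a*x + b*y <= c
constraints = [(1, 1, 4), (1, 0, 2), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Corner point where two constraint boundary lines meet (None if parallel)."""
    a1, b1, d1 = c1
    a2, b2, d2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

def feasible(pt):
    return all(a * pt[0] + b * pt[1] <= c + 1e-9 for a, b, c in constraints)

vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
```

The simplex method reaches the same vertex by walking along edges of the feasible region instead of enumerating every corner.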
This chapter discusses multivariable calculus topics including functions of several variables, partial derivatives, applications of partial derivatives, implicit partial differentiation, higher-order partial derivatives, the chain rule, and finding maxima and minima for functions of two variables. It provides examples of computing partial derivatives, finding marginal costs and productivity, implicit partial differentiation, and using the chain rule. The objectives are to develop concepts and techniques for multivariable calculus including computing derivatives of functions with multiple variables.
This chapter discusses integration, including defining indefinite integrals, evaluating definite integrals, and techniques for integration. The key topics covered include:
- Defining antiderivatives and indefinite integrals, and using properties like linearity to evaluate integrals.
- Applying integration to solve problems involving rates of change, such as calculating total cost from a marginal cost function.
- Evaluating definite integrals to find the area under a curve over a specified interval.
- Covering techniques for integrating common functions like polynomials, exponentials, and logarithms using rules like power, substitution and integration by parts.
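The "area under a curve" idea above can be sketched numerically with a trapezoid rule (the integrand and interval are illustrative choices):

```python
def trapezoid(f, a, b, n=1000):
    """Approximate the definite integral of f on [a, b] with n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# Area under y = x^2 from 0 to 1; the exact value is 1/3
area = trapezoid(lambda x: x**2, 0.0, 1.0)
```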
This chapter discusses various methods of integration including integration by parts, integration by partial fractions, and integration using tables of integrals. It also covers applications of integration such as finding the average value of a function, solving differential equations using separation of variables, and modeling population growth with logistic functions. Examples are provided to illustrate how to use these various integration methods and applications to evaluate definite and indefinite integrals.
This chapter discusses differentiation, including:
- Defining the derivative using the limit definition of the slope of a tangent line.
- Basic differentiation rules for constants, polynomials, sums and differences.
- Interpreting the derivative as an instantaneous rate of change.
- Applying the product rule and quotient rule to differentiate products and quotients.
- Using differentiation to find equations of tangent lines, velocities, marginal costs, and other rates of change.
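The limit definition and the tangent-line application above can be sketched with a difference quotient (a central difference is used here for accuracy; the function x^3 and the point x = 2 are illustrative):

```python
def derivative(f, x, h=1e-5):
    """Central-difference approximation to the limit definition of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# f(x) = x^3 has f'(x) = 3x^2, so the tangent slope at x = 2 is 12
slope = derivative(lambda x: x**3, 2.0)

# Tangent line at (2, 8): y = slope * x + intercept
intercept = 2.0**3 - slope * 2.0
```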
This document provides an overview of Chapter 3 from the textbook "Introductory Mathematical Analysis" which covers topics related to lines, parabolas, and systems of equations. The chapter objectives are to develop concepts of slope, demand/supply curves, quadratic functions, and solving systems of linear and nonlinear equations. Examples are provided for finding equations of lines from points, graphing linear and quadratic functions, and applying systems of equations to equilibrium points and break-even analysis. Key concepts explained include slope, forms of linear equations, parallel/perpendicular lines, demand/supply curves, and solving systems using elimination or substitution.
This document provides an overview and outline of topics covered in a textbook on introductory mathematical analysis. The first chapter focuses on applications of algebra, including using equations and inequalities to model real-world scenarios, solving linear and absolute value equations/inequalities, and introducing summation notation. Sample problems are provided with step-by-step solutions to illustrate key concepts like using variables to represent unknown amounts, setting up equations to determine profit levels, and evaluating finite sums.
1. The orthogonal decomposition theorem states that any vector y in Rn can be written uniquely as the sum of a vector ŷ in a subspace W and a vector z orthogonal to W.
2. The vector ŷ is called the orthogonal projection of y onto W. It is the closest vector to y that lies in W.
3. The best approximation theorem states that the orthogonal projection ŷ provides the best or closest approximation of y using only vectors that lie in the subspace W. The distance from y to ŷ is less than the distance from y to any other vector in W.
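A small numerical sketch of the decomposition y = ŷ + z (the basis and vector below are hypothetical; the formula assumes an orthonormal basis for W):

```python
# Orthonormal basis for a plane W in R^3, and a vector y to decompose
u1 = (2**-0.5, 2**-0.5, 0.0)
u2 = (0.0, 0.0, 1.0)
y = (1.0, 2.0, 3.0)

def dot(a, b):
    return sum(x * z for x, z in zip(a, b))

# Orthogonal projection onto W: y_hat = (y . u1) u1 + (y . u2) u2
y_hat = tuple(dot(y, u1) * a + dot(y, u2) * b for a, b in zip(u1, u2))

# The residual z = y - y_hat is orthogonal to W
z = tuple(yi - hi for yi, hi in zip(y, y_hat))
```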
This document provides an overview and outline of topics covered in a textbook on introductory mathematical analysis. The first chapter focuses on applications of algebra, including using equations and inequalities to model real-world scenarios, solving linear and quadratic equations, using summation notation to write sums, and working with absolute values. Sample problems are provided with step-by-step solutions to illustrate key concepts like setting up equations to determine optimal pricing or amounts of materials needed.
This chapter discusses continuous random variables and their probability density functions. It introduces the normal and exponential distributions and how to calculate probabilities and descriptive statistics for continuous random variables. It also shows how to approximate the binomial distribution using the normal distribution. The chapter objectives are to introduce continuous random variables, discuss the normal distribution and standard normal table, and demonstrate the normal approximation to the binomial distribution.
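The normal approximation to the binomial can be sketched as follows (the parameters n = 100, p = 0.5 are an illustrative example; a continuity correction of 0.5 is applied, as such chapters typically recommend):

```python
import math

def phi(z):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Approximate P(X <= 55) for X ~ Binomial(n=100, p=0.5)
n, p, k = 100, 0.5, 55
mu, sigma = n * p, math.sqrt(n * p * (1 - p))
approx = phi((k + 0.5 - mu) / sigma)   # continuity correction: k + 0.5

# Exact binomial probability for comparison
exact = sum(math.comb(n, i) for i in range(k + 1)) / 2**n
```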
This document summarizes results on analyzing stochastic gradient descent (SGD) algorithms for minimizing convex functions. It shows that a continuous-time version of SGD (SGD-c) can strongly approximate the discrete-time version (SGD-d) under certain conditions. It also establishes that SGD achieves the minimax optimal convergence rate of O(t^-1/2) for α=1/2 by using an "averaging from the past" procedure, closing the gap between previous lower and upper bound results.
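A toy illustration of iterate averaging for SGD on a one-dimensional quadratic with simulated gradient noise; this is a generic Polyak-Ruppert-style sketch under made-up parameters, not the paper's "averaging from the past" procedure:

```python
import random

random.seed(0)

# Minimize f(x) = (x - 3)^2 given only noisy gradients g = 2(x - 3) + noise
x, running_sum = 0.0, 0.0
T = 5000
for t in range(1, T + 1):
    g = 2 * (x - 3) + random.gauss(0, 1)
    x -= 0.5 / t**0.5 * g        # step size proportional to t^(-1/2)
    running_sum += x
x_avg = running_sum / T          # averaged iterate: smooths out gradient noise
```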
A Conjecture on Strongly Consistent Learning (Joe Suzuki)
1. The document presents a conjecture about the error probability of overestimating the true order k* when learning autoregressive moving average (ARMA) models from samples.
2. The conjecture states that if the estimated order k is greater than the true order k*, the error probability is equal to the probability that a chi-squared distributed random variable with k - k* degrees of freedom is greater than (k - k*)dn, where dn is related to the sample size n.
3. The author provides evidence that a sum of squared estimated ARMA coefficients could be chi-squared distributed, lending credibility to the conjecture.
Maximum likelihood estimation of regularisation parameters in inverse problem... (Valentin De Bortoli)
This document discusses an empirical Bayesian approach for estimating regularization parameters in inverse problems using maximum likelihood estimation. It proposes the Stochastic Optimization with Unadjusted Langevin (SOUL) algorithm, which uses Markov chain sampling to approximate gradients in a stochastic projected gradient descent scheme for optimizing the regularization parameter. The algorithm is shown to converge to the maximum likelihood estimate under certain conditions on the log-likelihood and prior distributions.
(α-ψ)-Construction with Q-function for coupled fixed point (Alexander Decker)
This document presents a theorem to prove the existence of coupled fixed points for contractive mappings in partially ordered quasi-metric spaces. It begins with definitions of key concepts such as mixed monotone mappings, coupled fixed points, quasi-metric spaces, and Q-functions. It then states and proves a coupled fixed point theorem for mappings that satisfy an (α-Ψ)-contractive condition in a partially ordered, complete quasi-metric space with a Q-function. The theorem shows that if such a mapping F has the mixed monotone property and satisfies the contractive inequality, then F has at least one coupled fixed point.
Animashree Anandkumar, Electrical Engineering and CS Dept, UC Irvine at MLcon... (MLconf)
Anima Anandkumar has been a faculty member in the EECS Dept. at UC Irvine since August 2010. Her research interests are in large-scale machine learning and high-dimensional statistics. She received her B.Tech in Electrical Engineering from IIT Madras in 2004 and her PhD from Cornell University in 2009. She was a visiting faculty member at Microsoft Research New England in 2012 and a postdoctoral researcher in the Stochastic Systems Group at MIT from 2009 to 2010. She is the recipient of the Microsoft Faculty Fellowship, the ARO Young Investigator Award, the NSF CAREER Award, and the IBM Fran Allen PhD Fellowship.
This document summarizes the use of the Ritz method to approximate the critical frequencies of a tapered hollow beam. It begins by introducing the governing equations and describing the uniform beam solution. It then outlines the Ritz method, which uses the uniform beam eigenfunctions as a basis to approximate the tapered beam solution. The method is applied numerically to predict the first three critical frequencies of the tapered beam, which are found to match well with finite element analysis results. The Ritz method is concluded to be an effective way to approximate critical frequencies for more complex beam geometries.
Hecke Operators on Jacobi Forms of Lattice Index and the Relation to Elliptic... (Ali Ajouz)
Jacobi forms of lattice index, whose theory can be viewed as an extension of the theory of classical Jacobi forms, play an important role in various theories, such as the theory of orthogonal modular forms or the theory of vertex operator algebras. Every Jacobi form of lattice index has a theta expansion which implies, for index of odd rank, a connection to half-integral weight modular forms and then, via the Shimura lifting, to modular forms of integral weight, and implies a direct connection to modular forms of integral weight if the rank is even. The aim of this thesis is to develop a Hecke theory for Jacobi forms of lattice index extending the Hecke theory for the classical Jacobi forms, and to study how the indicated relations to elliptic modular forms behave under Hecke operators. After defining Hecke operators as double coset operators, we determine their action on the Fourier coefficients of Jacobi forms, and we determine the multiplicative relations satisfied by the Hecke operators, i.e. we study the structure constants of the algebra generated by the Hecke operators. As a consequence, we show that the vector space of Jacobi forms of lattice index has a basis consisting of simultaneous eigenforms for our Hecke operators, and we discover the precise relation between our Hecke algebras and the Hecke algebras for modular forms of integral weight. The latter supports the expectation that there exist equivariant isomorphisms between spaces of Jacobi forms of lattice index and spaces of integral weight modular forms. We make this precise and prove the existence of such liftings in certain cases. Moreover, we give further evidence for the existence of such liftings in general by studying numerical examples.
This document summarizes Ja-Keoung Koo's presentation on structure from motion. It discusses image formation, the structure from motion pipeline with calibrated cameras, and the 8-point algorithm. The key points are:
1. Image formation maps 3D world points to 2D image points using a camera's intrinsic and extrinsic parameters.
2. Structure from motion with calibrated cameras recovers 3D structure and camera motion from 2D correspondences using the essential matrix and 8-point algorithm.
3. The 8-point algorithm estimates the essential matrix from point correspondences and decomposes it to recover the rotation and translation between views.
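The image-formation step in point 1 can be sketched for the simplest case (identity rotation, zero translation, and a hypothetical camera with focal length 800 px and principal point (320, 240); real pipelines use full K[R|t] matrices):

```python
# Pinhole projection of a 3D point already expressed in camera coordinates:
# perspective division by depth Z, then mapping through the intrinsics.
f, cx, cy = 800.0, 320.0, 240.0    # focal length and principal point (pixels)
X, Y, Z = 1.0, 0.5, 4.0            # 3D point in camera coordinates

u = f * X / Z + cx                 # pixel column
v = f * Y / Z + cy                 # pixel row
```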
Special Plenary Lecture at the International Conference on VIBRATION ENGINEERING AND TECHNOLOGY OF MACHINERY (VETOMAC), Lisbon, Portugal, September 10 - 13, 2018
http://www.conf.pt/index.php/v-speakers
Propagation of uncertainties in complex engineering dynamical systems is receiving increasing attention. When uncertainties are taken into account, the equations of motion of discretised dynamical systems can be expressed by coupled ordinary differential equations with stochastic coefficients. The computational cost for the solution of such a system mainly depends on the number of degrees of freedom and number of random variables. Among various numerical methods developed for such systems, the polynomial chaos based Galerkin projection approach shows significant promise because it is more accurate compared to the classical perturbation based methods and computationally more efficient compared to the Monte Carlo simulation based methods. However, the computational cost increases significantly with the number of random variables and the results tend to become less accurate for a longer length of time. In this talk novel approaches will be discussed to address these issues. Reduced-order Galerkin projection schemes in the frequency domain will be discussed to address the problem of a large number of random variables. Practical examples will be given to illustrate the application of the proposed Galerkin projection techniques.
We approach the screening problem - i.e. detecting which inputs of a computer model significantly impact the output - from a formal Bayesian model selection point of view. That is, we place a Gaussian process prior on the computer model and consider the $2^p$ models that result from assuming that each of the subsets of the $p$ inputs affect the response. The goal is to obtain the posterior probabilities of each of these models. In this talk, we focus on the specification of objective priors on the model-specific parameters and on convenient ways to compute the associated marginal likelihoods. These two problems that normally are seen as unrelated, have challenging connections since the priors proposed in the literature are specifically designed to have posterior modes in the boundary of the parameter space, hence precluding the application of approximate integration techniques based on e.g. Laplace approximations. We explore several ways of circumventing this difficulty, comparing different methodologies with synthetic examples taken from the literature.
Authors: Gonzalo Garcia-Donato (Universidad de Castilla-La Mancha) and Rui Paulo (Universidade de Lisboa)
Jacobi forms of lattice index, whose theory can be viewed as extension of the theory of classical Jacobi forms, play an important role in various theories, like the theory of orthogonal modular forms or the theory of vertex operator
algebras. Every Jacobi form of lattice index has a theta expansion which implies, for index of odd rank, a connection to half integral weight modular forms and then via Shimura lifting to modular forms of integral weight, and implies a direct connection to modular forms of integral weight if the rank is
even. The aim of this thesis is to develop a Hecke theory for Jacobi forms of lattice index extending the Hecke theory for the classical Jacobi forms, and to study how the indicated relations to elliptic modular forms behave under Hecke operators. After defining Hecke operators as double coset operators,
we determine their action on the Fourier coefficients of Jacobi forms, and we determine the multiplicative relations satisfied by the Hecke operators, i.e. we study the structural constants of the algebra generated by the Hecke operators. As a consequence we show that the vector space of Jacobi forms
of lattice index has a basis consisting of simultaneous eigenforms for our Hecke operators, and we discover the precise relation between our Hecke algebras and the Hecke algebras for modular forms of integral weight. The
latter supports the expectation that there exist equivariant isomorphisms between spaces of Jacobi forms of lattice index and spaces of integral weight modular forms. We make this precise and prove the existence of such liftings
in certain cases. Moreover, we give further evidence for the existence of such liftings in general by studying numerical examples.
This document summarizes Ja-Keoung Koo's presentation on structure from motion. It discusses image formation, the structure from motion pipeline with calibrated cameras, and the 8-point algorithm. The key points are:
1. Image formation maps 3D world points to 2D image points using a camera's intrinsic and extrinsic parameters.
2. Structure from motion with calibrated cameras recovers 3D structure and camera motion from 2D correspondences using the essential matrix and 8-point algorithm.
3. The 8-point algorithm finds the essential matrix from point correspondences, decomposes it to recover the rotation and translation between views.
Special Plenary Lecture at the International Conference on VIBRATION ENGINEERING AND TECHNOLOGY OF MACHINERY (VETOMAC), Lisbon, Portugal, September 10 - 13, 2018
http://www.conf.pt/index.php/v-speakers
Propagation of uncertainties in complex engineering dynamical systems is receiving increasing attention. When uncertainties are taken into account, the equations of motion of discretised dynamical systems can be expressed by coupled ordinary differential equations with stochastic coefficients. The computational cost for the solution of such a system mainly depends on the number of degrees of freedom and number of random variables. Among various numerical methods developed for such systems, the polynomial chaos based Galerkin projection approach shows significant promise because it is more accurate compared to the classical perturbation based methods and computationally more efficient compared to the Monte Carlo simulation based methods. However, the computational cost increases significantly with the number of random variables and the results tend to become less accurate for a longer length of time. In this talk novel approaches will be discussed to address these issues. Reduced-order Galerkin projection schemes in the frequency domain will be discussed to address the problem of a large number of random variables. Practical examples will be given to illustrate the application of the proposed Galerkin projection techniques.
We approach the screening problem - i.e. detecting which inputs of a computer model significantly impact the output - from a formal Bayesian model selection point of view. That is, we place a Gaussian process prior on the computer model and consider the $2^p$ models that result from assuming that each of the subsets of the $p$ inputs affect the response. The goal is to obtain the posterior probabilities of each of these models. In this talk, we focus on the specification of objective priors on the model-specific parameters and on convenient ways to compute the associated marginal likelihoods. These two problems that normally are seen as unrelated, have challenging connections since the priors proposed in the literature are specifically designed to have posterior modes in the boundary of the parameter space, hence precluding the application of approximate integration techniques based on e.g. Laplace approximations. We explore several ways of circumventing this difficulty, comparing different methodologies with synthetic examples taken from the literature.
Authors: Gonzalo Garcia-Donato (Universidad de Castilla-La Mancha) and Rui Paulo (Universidade de Lisboa)
This chapter discusses continuous random variables and their probability density functions. It introduces the normal and exponential distributions and how to calculate probabilities and descriptive statistics for continuous random variables. It also shows how to approximate the binomial distribution using the normal distribution. The chapter objectives are to introduce continuous random variables, discuss the normal distribution and standard normal table, and demonstrate the normal approximation to the binomial distribution.
This chapter discusses continuous random variables and their probability density functions. It introduces the normal and exponential distributions and how to calculate probabilities and descriptive statistics for continuous random variables. It also shows how to approximate the binomial distribution using the normal distribution. The key topics covered are continuous random variables, the normal distribution, finding mean and standard deviation, and the normal approximation to the binomial distribution.
Theories and Engineering Technics of 2D-to-3D Back-Projection ProblemSeongcheol Baek
The slides introduce mathematical basics of 3d-to-2d image projection, 2d-to-3d back-projection problem, and its engineering technics, such as convex optimization problem, principal component analysis (PCV), singular value decomposition (SVD), etc.
The document discusses solving the Schrodinger equation to obtain bound state solutions and scattering phase shifts for a modified trigonometric Scarf type potential. It presents the asymptotic iteration method used to find the approximate bound state energies. The scattering phase shift is then calculated by expressing the radial wavefunction as a hypergeometric function and analyzing its asymptotic behavior. The potential's effect on the eigenvalues and scattering is studied numerically.
A Family Of Extragradient Methods For Solving Equilibrium ProblemsYasmine Anino
The document discusses using variational inequalities and bilevel programming models to analyze the optimal pollution emission price problem. Specifically, it presents a continuous-time central planning model where the government chooses the optimal price of pollution emissions considering how manufacturers in a supply chain will respond to the price. The lower-level problem involves the manufacturers determining their optimal production levels given the emission price, while the upper-level problem involves the government selecting the price to maximize social welfare. Existence of solutions is analyzed using variational inequality theory.
This document is a chapter from an introductory mathematical analysis textbook. It covers curve sketching, including how to find relative and absolute extrema, determine concavity, use the second derivative test, identify asymptotes, and apply concepts of maxima and minima. The chapter contains learning objectives, an outline of topics, examples of applying techniques to sketch curves and solve optimization problems, and instructional content to introduce these curve sketching concepts.
This chapter discusses techniques for sketching graphs of functions based on analyzing critical points, extrema, concavity, asymptotes, and applied optimization problems. The key topics covered are: using the first derivative test to find relative extrema; applying the extreme value theorem to find absolute extrema over closed intervals; determining concavity based on the second derivative; using the second derivative test to classify critical points; identifying vertical, horizontal and oblique asymptotes of rational functions; and solving applied problems to minimize or maximize quantities like total cost.
Formulas for Surface Weighted Numbers on Graphijtsrd
The boundary value problem differential operator on the graph of a specific structure is discussed in this article. The graph has degree 1 vertices and edges that are linked at one common vertex. The differential operator expression with real valued potentials, the Dirichlet boundary conditions, and the conventional matching requirements define the boundary value issue. There are a finite number of eig nv lu s in this problem.The residues of the diagonal elements of the Weyl matrix in the eigenvalues are referred to as weight numbers. The ig nv lu s are monomorphic functions with simple poles.The weight numbers under consideration generalize the weight numbers of differential operators on a finite interval, which are equal to the reciprocals of the squared norms of eigenfunctions. These numbers, along with the eig nv lu s, serve as spectral data for unique operator reconstruction. The contour integration is used to obtain formulas for surfacethe weight numbers, as well as formulas for the sums in the case of superficial near ig nv lu s. On the graphs, the formulas can be utilized to analyze inverse spectral problems. Ghulam Hazrat Aimal Rasa "Formulas for Surface Weighted Numbers on Graph" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6 | Issue-3 , April 2022, URL: https://www.ijtsrd.com/papers/ijtsrd49573.pdf Paper URL: https://www.ijtsrd.com/mathemetics/calculus/49573/formulas-for-surface-weighted-numbers-on-graph/ghulam-hazrat-aimal-rasa
Similar to Adaptive physics–inspired facial animation (20)
The technology uses reclaimed CO₂ as the dyeing medium in a closed loop process. When pressurized, CO₂ becomes supercritical (SC-CO₂). In this state CO₂ has a very high solvent power, allowing the dye to dissolve easily.
Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the
‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor
collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the
MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space,
because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia
DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations
at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based
on a simple phase-mixing model, the observed number of caustics are consistent with a merger that occurred 1–2 Gyr ago.
We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative
measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data
1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’
did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within
the last few Gyr, consistent with the body of work surrounding the VRM.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Exposé invité Journées Nationales du GDR GPL 2024
Immersive Learning That Works: Research Grounding and Paths ForwardLeonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and ‘Immersion Cube’ frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences and spotlighting research frontiers, along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...Sérgio Sacani
Context. With a mass exceeding several 104 M⊙ and a rich and dense population of massive stars, supermassive young star clusters
represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions
among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate
the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars.
The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically,
the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec.
Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within
and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation
were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a
photon flux threshold of approximately 2 × 10−8 photons cm−2
s
−1
. The X-ray sources exhibit a highly concentrated spatial distribution,
with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known
massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field
equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational
field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin
spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling
concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect
light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is
mitigated, at least in part.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub...Leonel Morgado
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
hematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills
3. Outline
Introduction
Mathematical Model
Reconstruction
Applications
Goals
◮ Facial animation is complex — simple linear blending between facial models provides insufficient detail.
◮ The skin is a visco-elastic, anisotropic material and is difficult to simulate.
◮ We want to derive a fast parametric model for skin deformation of the human face.
Lihua You, Richard Southern and Jian Jun Zhang Adaptive physics–inspired facial animation
11. Derivation
◮ From before:
∂²Mu/∂u² + 2 ∂²Muv/∂u∂v + ∂²Mv/∂v² + Fx = 0 (4)
◮ From the theory of bending of isotropic elastic plates:
Mu = −D(∂²x/∂u² + µ ∂²x/∂v²)
Mv = −D(∂²x/∂v² + µ ∂²x/∂u²)
Muv = −(1 − µ) D ∂²x/∂u∂v (5)
◮ Substituting Eq. 5 into Eq. 4:
S1x ∂⁴x/∂u⁴ + S2x ∂⁴x/∂u²∂v² + S3x ∂⁴x/∂v⁴ = Fx (6)
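For the record, expanding the substitution of Eq. 5 into Eq. 4 (a step the slide performs silently) collects the mixed derivatives as:

```latex
\frac{\partial^2 M_u}{\partial u^2}
+ 2\frac{\partial^2 M_{uv}}{\partial u\,\partial v}
+ \frac{\partial^2 M_v}{\partial v^2}
= -D\frac{\partial^4 x}{\partial u^4}
  - D\bigl[\mu + 2(1-\mu) + \mu\bigr]\frac{\partial^4 x}{\partial u^2\,\partial v^2}
  - D\frac{\partial^4 x}{\partial v^4}
```

Since µ + 2(1 − µ) + µ = 2, the isotropic plate corresponds to S1x = S3x = D and S2x = 2D; presumably the coefficients are kept independent in Eq. 6 so the material response can be tuned anisotropically.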
12. Derivation
◮ From before, in vector form, x = [x, y, z]ᵀ, f = [fx, fy, fz]ᵀ:
s1 ∂⁴x/∂u⁴ + s2 ∂⁴x/∂u²∂v² + s3 ∂⁴x/∂v⁴ = f (7)
◮ Boundary conditions:
u = 0: x = g0(v), ∂x/∂u = g1(v)
u = 1: x = g2(v), ∂x/∂u = g3(v)
v = 0: x = g4(u), ∂x/∂v = g5(u)
v = 1: x = g6(u), ∂x/∂v = g7(u) (8)
14. Derivation
◮ Solving this 4th order PDE is very difficult.
◮ We approximate the solution using Finite Differencing.
17. Finite differencing
◮ We solve this numerically with central differencing:
2(3s1 + 2s2 + 3s3)x0 − 2(2s1 + s2)(x1 + x3) − 2(s2 + 2s3)(x2 + x4) + s2(x5 + x6 + x7 + x8) + s1(x9 + x10 + x11 + x12) = δ⁴ f0
◮ Along with the restated boundary conditions, this can be posed in matrix form: KX = F.
◮ K is sparse and square, so the system can be solved quickly with the conjugate gradient method.
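The assembly and solve can be sketched as follows. This is a minimal, hypothetical example, not the authors' implementation: the grid size, the coefficients s1–s3, and the point load are assumptions, and off-grid stencil entries are simply treated as zero Dirichlet data (the slides instead restate the boundary conditions of Eq. 8). The distance-two neighbours are weighted by s1 along u and s3 along v, as the standard central-difference biharmonic stencil prescribes.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import cg

n = 10                          # interior nodes per direction (assumed)
s1, s2, s3 = 1.0, 2.0, 1.0      # stiffness coefficients (assumed)
delta = 1.0 / (n + 1)           # grid spacing

# Off-centre entries of the 13-point central-difference stencil of Eq. 7:
# axis neighbours, diagonals, and distance-two neighbours.
stencil = [(1, 0, -2 * (2 * s1 + s2)), (-1, 0, -2 * (2 * s1 + s2)),
           (0, 1, -2 * (s2 + 2 * s3)), (0, -1, -2 * (s2 + 2 * s3)),
           (1, 1, s2), (1, -1, s2), (-1, 1, s2), (-1, -1, s2),
           (2, 0, s1), (-2, 0, s1), (0, 2, s3), (0, -2, s3)]

idx = lambda i, j: i * n + j
K = lil_matrix((n * n, n * n))
F = np.zeros(n * n)
for i in range(n):
    for j in range(n):
        K[idx(i, j), idx(i, j)] = 2 * (3 * s1 + 2 * s2 + 3 * s3)
        for di, dj, w in stencil:
            if 0 <= i + di < n and 0 <= j + dj < n:   # clamped boundary
                K[idx(i, j), idx(i + di, j + dj)] = w

F[idx(n // 2, n // 2)] = delta**4   # a single point force, for illustration
X, info = cg(K.tocsr(), F)          # K is sparse and symmetric positive definite
```

In practice each of the x, y, z components of Eq. 7 would be solved this way (or stacked into one system), with X holding the 3D node positions.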
18. Finite difference mesh sampling
◮ For our mathematical model, the surface must be a regular quadrilateral grid.
◮ Our input is a triangle mesh.
◮ We develop a simple automatic method to find this surface from an input face mesh.
23. General sampling approach
◮ The input face must have an open boundary.
◮ The face is flattened using some parametrization.
◮ Nodes are sampled in the parametric domain.
◮ Barycentric coordinates of nodes in faces of the parametric domain are determined.
◮ 3D node locations are determined using barycentric coordinates on the original mesh.
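The per-node step can be sketched as follows: locate a parametric sample in a flattened triangle, compute its barycentric coordinates, and map it back to 3D on the original mesh. The triangle coordinates and the helper `barycentric_2d` are illustrative assumptions, not the authors' code.

```python
import numpy as np

def barycentric_2d(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    w1 = (d11 * d20 - d01 * d21) / denom
    w2 = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - w1 - w2, w1, w2])

# A flattened (parametric) triangle and its 3D counterpart, chosen arbitrarily.
uv = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
xyz = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 2.0, 1.0]])

node_uv = np.array([0.25, 0.25])      # a grid node sampled in parameter space
w = barycentric_2d(node_uv, *uv)      # weights sum to 1 inside the triangle
node_xyz = w @ xyz                    # 3D node location on the original mesh
```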
31. Reconstruction
◮ The grid mesh is triangulated.
◮ For each vertex pi in the original mesh, compute barycentric coordinates bi and displacement from the face di.
◮ The entire mesh can be reconstructed using the matrix form: P̂ = BX + D.
◮ di can be rotated based on the orientation of the face in the grid mesh to improve results.
34. Force transfer
◮ Shape deformation can be transferred from one arbitrary input surface to another.
◮ Both input meshes must have corresponding nodes, especially around key deforming features.
◮ Each node force vector must be rotated and scaled for the target model.
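One plausible reading of the last bullet, not the paper's stated formulation: rotate each node force by the rotation that aligns the source face normal with the target face normal, then scale it by a local size ratio. The normals and the scale factor below are assumptions for illustration.

```python
import numpy as np

def rotation_between(a, b):
    """Rotation matrix taking unit vector a onto unit vector b (Rodrigues)."""
    v = np.cross(a, b)
    c = float(a @ b)
    if np.isclose(c, -1.0):
        # Antiparallel normals: a 180-degree axis must be chosen explicitly.
        raise ValueError("antiparallel normals not handled in this sketch")
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

n_src = np.array([0.0, 0.0, 1.0])   # source face normal (assumed)
n_tgt = np.array([0.0, 1.0, 0.0])   # target face normal (assumed)
scale = 1.5                          # local size ratio (assumed)

f_src = np.array([1.0, 0.0, 2.0])   # force at a source node
f_tgt = scale * rotation_between(n_src, n_tgt) @ f_src
```

The rotation preserves the force magnitude, so the transferred force's length is exactly the source length times the size ratio.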