The document discusses projective geometry and its applications in computer vision. It begins by introducing planar geometry and algebraic geometry. It then describes the 2D projective plane and how points and lines can be represented using homogeneous coordinates. Ideal points and the line at infinity are discussed. Projective transformations including homographies are explained. Conic sections and how they transform under projectivities are covered. The key concepts of duality and various subgroups of projective transformations are summarized. Examples of projective transformations and corrections are provided.
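The homogeneous-coordinate machinery summarized above can be made concrete with a short NumPy sketch (the specific points and lines are illustrative): the line through two points is their cross product, incidence is a dot product, and parallel lines meet in an ideal point on the line at infinity.

```python
import numpy as np

# Points and lines of P^2 in homogeneous coordinates: the line through
# two points is their cross product, and (dually) the intersection of
# two lines is the cross product of the line vectors.
p = np.array([1.0, 2.0, 1.0])    # the Euclidean point (1, 2)
q = np.array([3.0, 0.0, 1.0])    # the Euclidean point (3, 0)
line = np.cross(p, q)

# Incidence: a point x lies on a line l exactly when l . x = 0.
assert abs(np.dot(line, p)) < 1e-9
assert abs(np.dot(line, q)) < 1e-9

# Parallel lines meet in an ideal point whose last coordinate is 0;
# all such points lie on the line at infinity (0, 0, 1).
l1 = np.array([1.0, 1.0, -1.0])  # x + y = 1
l2 = np.array([1.0, 1.0, -2.0])  # x + y = 2, parallel to l1
ideal = np.cross(l1, l2)
assert ideal[2] == 0.0
```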
The document discusses projective geometry in 3D space (P3). It defines how points, planes, and lines are represented using homogeneous coordinates. Under projective transformations, incidence relations between points and planes are preserved. Three points in general position (not collinear) uniquely define a plane, and three planes in general position intersect in a point. The hierarchy of transformations from projective to Euclidean is described, along with the invariants each preserves. The plane at infinity π∞ and absolute conic Ω∞ allow measurement of affine and metric properties within a projective frame.
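The point-plane incidence relation can be sketched numerically; assuming NumPy, the plane through three points is recovered as the null space of the matrix that stacks the three homogeneous points (the points chosen here are illustrative).

```python
import numpy as np

# Three points of P^3 in homogeneous coordinates (x, y, z, w).
X1 = np.array([0.0, 0.0, 0.0, 1.0])
X2 = np.array([1.0, 0.0, 0.0, 1.0])
X3 = np.array([0.0, 1.0, 0.0, 1.0])

# A plane pi satisfies pi . X = 0 for every point X on it, so pi spans
# the null space of the 3x4 matrix stacking the three points.
A = np.vstack([X1, X2, X3])
_, _, Vt = np.linalg.svd(A)
pi = Vt[-1]            # right singular vector for the smallest singular value

# All three points are incident with the recovered plane (here z = 0).
for X in (X1, X2, X3):
    assert abs(np.dot(pi, X)) < 1e-9
```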
DLT stands for Direct Linear Transformation. It is an algorithm that estimates the camera matrix P by minimizing the algebraic error between measured image points xi and projected 3D points PXi. Specifically, DLT finds P by solving the equation Ap=0, where A is constructed from point correspondences and p contains the entries of P. This minimizes the sum of squared algebraic distances between the points. For affine cameras, the algebraic and geometric distances are equivalent. DLT provides an initial estimate of P that can be refined using nonlinear optimization techniques.
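A minimal sketch of the DLT idea described above, using noiseless synthetic correspondences generated from a known camera matrix (P_true and the 3D points are illustrative, not from the document):

```python
import numpy as np

# Minimal DLT sketch: recover a 3x4 camera matrix P from six synthetic,
# noiseless correspondences x_i <-> X_i by solving A p = 0 with the SVD.
P_true = np.array([[1.0, 0.2, 0.1, 0.5],
                   [0.0, 1.1, 0.3, -0.2],
                   [0.1, 0.0, 0.2, 1.0]])
pts3d = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                  [0, 0, 1], [1, 1, 1], [2, 1, 3]], dtype=float)
X = np.hstack([pts3d, np.ones((6, 1))])        # homogeneous 3D points
x = (P_true @ X.T).T
x /= x[:, 2:3]                                 # normalised image points

# Each correspondence contributes two rows of A, from x_i x (P X_i) = 0.
rows = []
for Xi, (u, v, _) in zip(X, x):
    rows.append(np.concatenate([np.zeros(4), -Xi, v * Xi]))
    rows.append(np.concatenate([Xi, np.zeros(4), -u * Xi]))
A = np.asarray(rows)

# p is the right singular vector of A for the smallest singular value.
_, _, Vt = np.linalg.svd(A)
P_est = Vt[-1].reshape(3, 4)

# P is recovered only up to scale (and sign), so compare normalised forms.
P_est /= np.linalg.norm(P_est)
P_ref = P_true / np.linalg.norm(P_true)
err = min(np.abs(P_est - P_ref).max(), np.abs(P_est + P_ref).max())
```

In practice the recovered P would then seed a nonlinear refinement of the geometric error, as the summary notes.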
Lines and curves can be detected using techniques like Hough transform and ellipse fitting. Color can be represented in models like RGB or HSI and analyzed using histograms. Texture is described using features such as edgeness, co-occurrence matrices, and statistics like energy, entropy, and contrast computed from the matrices.
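As a sketch of the co-occurrence statistics mentioned above, the following builds a gray-level co-occurrence matrix for a tiny image (the "one pixel to the right" offset and the image itself are illustrative assumptions) and computes the energy, entropy, and contrast derived from it.

```python
import numpy as np

# Gray-level co-occurrence matrix (GLCM) for a horizontal offset of one
# pixel, normalised to joint probabilities, plus three texture statistics.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
levels = 4

glcm = np.zeros((levels, levels))
for i in range(img.shape[0]):
    for j in range(img.shape[1] - 1):
        glcm[img[i, j], img[i, j + 1]] += 1
glcm /= glcm.sum()

energy = np.sum(glcm ** 2)                       # uniformity of the GLCM
entropy = -np.sum(glcm[glcm > 0] * np.log2(glcm[glcm > 0]))
i_idx, j_idx = np.indices(glcm.shape)
contrast = np.sum(((i_idx - j_idx) ** 2) * glcm) # local intensity variation
```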
The document provides a math review covering topics in algebra, geometry, trigonometry, and statistics. It defines concepts like negative numbers, exponents, square roots, order of operations, lines, angles, trigonometric functions, and averages. Formulas are presented for topics like quadratic equations, the Pythagorean theorem, laws of sines and cosines, percentages, and standard deviation. Examples are included to illustrate key ideas.
This document discusses polynomial interpolation and outlines the key goals and topics that will be covered in Chapter 10. The goals are to motivate the need for interpolation of both data and functions, derive three methods for computing a polynomial interpolant suitable for different circumstances, derive error expressions, discuss Chebyshev interpolation, and consider interpolating derivative values. The outline lists the topics as monomial basis, Lagrange basis, Newton basis and divided differences, interpolation error, Chebyshev interpolation, and interpolating derivative values. Motivation is provided for interpolating both discrete data samples and continuous functions, with a wish list of properties for a reasonable interpolant. Polynomial interpolation is discussed as a basic and important form of interpolation.
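The Lagrange basis mentioned in the outline can be sketched directly; this is a minimal illustrative implementation, not the chapter's own code. Each basis polynomial L_i equals 1 at node x_i and 0 at the other nodes, so the interpolant p(x) = Σ y_i L_i(x) reproduces the data exactly.

```python
# Evaluate the Lagrange-form interpolant through (xs[i], ys[i]) at x.
def lagrange_interp(xs, ys, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        Li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (x - xj) / (xi - xj)   # L_i vanishes at the other nodes
        total += yi * Li
    return total

# With 4 nodes the interpolant reproduces any cubic everywhere,
# not just at the nodes.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [t**3 - t - 1 for t in xs]
assert abs(lagrange_interp(xs, ys, 1.5) - (1.5**3 - 1.5 - 1)) < 1e-12
```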
The document discusses approximation algorithms for NP-complete problems. It introduces the concept of approximation ratios, which measure how close an approximate solution from a polynomial-time algorithm is to the optimal solution. The document then provides examples of approximation algorithms with a ratio of 2 for the vertex cover and traveling salesman problems. It also discusses using backtracking to find all possible solutions to the subset sum problem.
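The ratio-2 vertex cover algorithm referred to above is commonly realised as follows (a sketch; the edge list is illustrative): repeatedly pick any uncovered edge and add both of its endpoints. Any optimal cover must contain at least one endpoint of each picked edge, and the picked edges are vertex-disjoint, so the returned cover is at most twice the optimum.

```python
# Greedy 2-approximation for vertex cover via maximal matching.
def approx_vertex_cover(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge still uncovered
            cover.update((u, v))                # take BOTH endpoints
    return cover

edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]   # a 5-cycle
cover = approx_vertex_cover(edges)
assert all(u in cover or v in cover for u, v in edges)
# The optimum for a 5-cycle is 3 vertices; the algorithm returns 4 <= 2*3.
```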
This document discusses probabilistic segmentation using mixture models. It explains that a mixture model represents the probability of generating a pixel measurement vector as a weighted sum of component densities. The likelihood for all observations is calculated as the product of probabilities for each data point. Missing data problems are also discussed, where the incomplete data likelihood is calculated as the product of probabilities for each incomplete data observation.
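A minimal sketch of the mixture likelihood described above, assuming a two-component 1-D Gaussian mixture with illustrative parameters; the product over observations is computed as a sum of logs for numerical stability.

```python
import math

# Density of a single Gaussian component.
def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Mixture: a weighted sum of component densities (weights sum to 1).
weights = [0.3, 0.7]
mus = [0.0, 4.0]
sigmas = [1.0, 1.5]

def mixture_pdf(x):
    return sum(w * normal_pdf(x, m, s)
               for w, m, s in zip(weights, mus, sigmas))

# The likelihood of the data set is the product over observations;
# in log space that product becomes a sum.
data = [-0.2, 0.1, 3.8, 4.5, 5.0]
log_likelihood = sum(math.log(mixture_pdf(x)) for x in data)
```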
The document discusses functions and their graphical representations. It defines key terms like domain, range, and one-to-one and many-to-one mappings. It then focuses on quadratic functions, showing that their graphs take characteristic U-shaped or inverted U-shaped forms. The document also examines inequalities involving quadratic expressions and how to determine the range of values satisfying such inequalities by analyzing the graph of the quadratic function.
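The graphical argument for quadratic inequalities can be made concrete: an upward-opening parabola is negative strictly between its two real roots. A small sketch (the example inequality is illustrative):

```python
import math

# Real roots of ax^2 + bx + c, in increasing order, or None if complex.
def quadratic_roots(a, b, c):
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    r1 = (-b - math.sqrt(disc)) / (2 * a)
    r2 = (-b + math.sqrt(disc)) / (2 * a)
    return (r1, r2)

# x^2 - 5x + 6 < 0 holds exactly for 2 < x < 3: the parabola opens
# upward, so the expression is negative between its roots.
roots = quadratic_roots(1, -5, 6)
assert roots == (2.0, 3.0)
assert (2.5**2 - 5 * 2.5 + 6) < 0      # a point inside (2, 3)
assert (4.0**2 - 5 * 4.0 + 6) > 0      # a point outside
```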
This document discusses composite functions and the order of operations when combining functions.
It provides an example of a mother converting the temperature of her baby's bath water from Celsius to Fahrenheit using two separate functions. The first function converts the Celsius reading to Fahrenheit, and the second maps the Fahrenheit reading to whether the water is too cold, alright, or too hot. Together these functions form a composite function.
Algebraically, the composite function (f∘g)(x) = f(g(x)) is formed by applying the inner function g to the input x first, and then applying the outer function f to the output of g. For the composite to be defined, the range of the inner function must be contained within the domain of the outer function. The order of composition matters: in general, f∘g and g∘f are different functions.
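The bath-water example can be sketched in code; the Fahrenheit thresholds below are illustrative assumptions, not values from the document.

```python
# Inner function: convert the Celsius reading to Fahrenheit.
def g(celsius):
    return celsius * 9 / 5 + 32

# Outer function: map the Fahrenheit reading to a verdict.
# (The 90/100 degree thresholds are made up for illustration.)
def f(fahrenheit):
    if fahrenheit < 90:
        return "too cold"
    if fahrenheit <= 100:
        return "alright"
    return "too hot"

# The composite (f o g)(c) = f(g(c)): the inner function runs first.
def f_after_g(celsius):
    return f(g(celsius))

assert f_after_g(35) == "alright"      # 35 C = 95 F
```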
5. Linear Algebra for Machine Learning: Singular Value Decomposition and Principal Component Analysis (Ceni Babaoglu, PhD)
The seminar series will focus on the mathematical background needed for machine learning. The first set of the seminars will be on "Linear Algebra for Machine Learning". Here are the slides of the fifth part which is discussing singular value decomposition and principal component analysis.
Here are the slides of the first part which was discussing linear systems: https://www.slideshare.net/CeniBabaogluPhDinMat/linear-algebra-for-machine-learning-linear-systems/1
Here are the slides of the second part which was discussing basis and dimension:
https://www.slideshare.net/CeniBabaogluPhDinMat/2-linear-algebra-for-machine-learning-basis-and-dimension
Here are the slides of the third part which is discussing factorization and linear transformations.
https://www.slideshare.net/CeniBabaogluPhDinMat/3-linear-algebra-for-machine-learning-factorization-and-linear-transformations-130813437
Here are the slides of the fourth part which is discussing eigenvalues and eigenvectors.
https://www.slideshare.net/CeniBabaogluPhDinMat/4-linear-algebra-for-machine-learning-eigenvalues-eigenvectors-and-diagonalization
The document defines and describes key concepts related to rectangular coordinate systems and functions. It introduces the x-axis, y-axis, and origin that make up the rectangular coordinate system. It then defines various types of functions like linear, quadratic, absolute value and their graphs. Key characteristics of functions like domain, range, and intercepts are also summarized.
Mathematics (from Greek μάθημα máthēma, “knowledge, study, learning”) is the study of topics such as quantity (numbers), structure, space, and change. There is a range of views among mathematicians and philosophers as to the exact scope and definition of mathematics.
Inference for stochastic differential equations via approximate Bayesian computation (Umberto Picchini)
Despite the title, the methods are appropriate for more general dynamical models (including state-space models). Presentation given at Nordstat 2012, Umeå. Relevant research paper at http://arxiv.org/abs/1204.5459 and software code at https://sourceforge.net/projects/abc-sde/
3. Linear Algebra for Machine Learning: Factorization and Linear Transformations (Ceni Babaoglu, PhD)
The seminar series will focus on the mathematical background needed for machine learning. The first set of the seminars will be on "Linear Algebra for Machine Learning". Here are the slides of the third part which is discussing factorization and linear transformations.
Here is the link of the first part which was discussing linear systems: https://www.slideshare.net/CeniBabaogluPhDinMat/linear-algebra-for-machine-learning-linear-systems/1
Here are the slides of the second part which was discussing basis and dimension:
https://www.slideshare.net/CeniBabaogluPhDinMat/2-linear-algebra-for-machine-learning-basis-and-dimension
The document provides an overview of the Foundation Course for the Actuarial Common Entrance Test (ACET). It covers 8 chapters on mathematical topics including notation, numerical methods, functions, algebra, calculus, and vectors/matrices. It recommends reviewing areas of weakness and provides additional practice questions. When studying core technical subjects, the Foundation Course can be used as a reference for mathematical concepts requiring review.
Beginning direct3d gameprogramming math04_calculus_20160324_jintaeks (JinTaek Seo)
This document provides an overview of calculus concepts including derivatives, integrals, and their applications in physics. It defines key terms like singularity, differentiation, integration, and discusses notation for derivatives. It also covers derivatives and integrals of basic functions, applications to physics concepts like velocity, acceleration, and Newton's Second Law, as well as examples of calculating derivatives and integrals.
The document discusses numerical methods for finding roots of functions. It introduces the bisection method for finding a root of a continuous function f(x) within a given interval [a,b] where f(a) and f(b) have opposite signs. The method bisects the interval into two subintervals and recursively narrows in on the root by testing the sign of f(x) at the midpoint of each subinterval. An example applies the bisection method to find a root of the function f(x)=x^3-x-1 between 1 and 2.
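The bisection method as described can be sketched as follows, applied to the same example f(x) = x^3 - x - 1 on [1, 2], where f(1) = -1 and f(2) = 5 have opposite signs:

```python
# Bisection: repeatedly halve the bracketing interval, keeping the half
# on which f changes sign, until the interval is shorter than tol.
def bisect(f, a, b, tol=1e-10):
    assert f(a) * f(b) < 0, "f must change sign on [a, b]"
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m                    # the root lies in the left half
        else:
            a = m                    # the root lies in the right half
    return (a + b) / 2

f = lambda x: x**3 - x - 1
root = bisect(f, 1.0, 2.0)
assert abs(f(root)) < 1e-8           # root is approximately 1.32472
```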
2. Linear Algebra for Machine Learning: Basis and Dimension (Ceni Babaoglu, PhD)
The seminar series will focus on the mathematical background needed for machine learning. The first set of the seminars will be on "Linear Algebra for Machine Learning". Here are the slides of the second part which is discussing basis and dimension.
Here is the link of the first part which was discussing linear systems: https://www.slideshare.net/CeniBabaogluPhDinMat/linear-algebra-for-machine-learning-linear-systems/1
Beginning direct3d gameprogramming math03_vectors_20160328_jintaeks (JinTaek Seo)
This document provides an overview of vectors and matrices in 3D graphics. It defines vectors as n-tuples of real numbers that can be added and multiplied by scalars. Vectors have a length and direction, and can be normalized. Matrices are arrays of numbers that can be added, multiplied by scalars, and transposed. The cross product of vectors returns a perpendicular vector, while the dot product measures similarity of direction. Matrices are used to represent transformations in 3D graphics.
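A short NumPy illustration of the operations named above: the dot product measures alignment of directions, the cross product returns a vector perpendicular to both inputs, and normalisation rescales a vector to unit length.

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])

dot = np.dot(u, v)          # 0.0: u and v point in perpendicular directions
cross = np.cross(u, v)      # (0, 0, 1): perpendicular to both u and v

# Normalising keeps the direction but scales the length to 1.
w = np.array([3.0, 4.0, 0.0])
w_hat = w / np.linalg.norm(w)

assert abs(np.linalg.norm(w_hat) - 1.0) < 1e-12
assert abs(np.dot(cross, u)) < 1e-12 and abs(np.dot(cross, v)) < 1e-12
```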
1. Linear Algebra for Machine Learning: Linear Systems (Ceni Babaoglu, PhD)
The seminar series will focus on the mathematical background needed for machine learning. The first set of the seminars will be on "Linear Algebra for Machine Learning". Here are the slides of the first part which is giving a short overview of matrices and discussing linear systems.
ABC with data cloning for MLE in state space models (Umberto Picchini)
An application of the "data cloning" method for parameter estimation via MLE aided by Approximate Bayesian Computation. The relevant paper is http://arxiv.org/abs/1505.06318
The document presents an algorithm to find the visible region of a polygon in O(n^2) time and O(n) space. It first finds the visible vertices region by checking each vertex for intersections with edges in O(n^2) time. It then extends this region to the full visible region by considering gaps between visible vertices and finding intersection points of extreme lines in O(n) time. The algorithm avoids issues with existing approaches and produces the exact visible region without unnecessary points.
The document outlines the aims, objectives, and syllabus for the Mathematics HL (1st exams 2014) course. It includes:
- 10 aims of the course focused on developing mathematical skills, understanding, problem solving, and appreciation of mathematics.
- 6 objectives centered around demonstrating knowledge and understanding of mathematical concepts, problem solving, communication, use of technology, reasoning, and inquiry approaches.
- The syllabus is divided into core topics (Algebra; Functions and equations; Circular functions and trigonometry; Vectors; Statistics and probability; Calculus) and optional topics (Statistics and probability; Sets, relations and groups), with 48 hours of instruction each.
This document provides an overview of key calculus concepts and formulas taught in a Calculus I course at Miami Dade College - Hialeah Campus. The topics covered include limits and derivatives, integration, optimization techniques, and applications of calculus to economics, business, physics, and other fields. The document is intended as a study guide for students in the Calculus I class taught by Professor Mohammad Shakil.
This document summarizes Andrew Ng's lecture notes on supervised learning and linear regression. It begins with examples of supervised learning problems like predicting housing prices from living area size. It introduces key concepts like training examples, features, hypotheses, and cost functions. It then describes using linear regression to predict prices from area and bedrooms. Gradient descent and stochastic gradient descent are introduced as algorithms to minimize the cost function. Finally, it discusses an alternative approach using the normal equations to explicitly minimize the cost function without iteration.
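Both approaches mentioned (gradient descent and the normal equations) can be sketched on synthetic, noiseless housing-style data; the learning rate, feature range, and true coefficients below are illustrative assumptions, not values from the notes.

```python
import numpy as np

# Least-squares linear regression two ways: batch gradient descent on the
# cost J(theta) = (1/2m) ||X theta - y||^2, and the closed-form normal
# equations, which need no iteration.
rng = np.random.default_rng(1)
m = 50
area = rng.uniform(50, 200, m)
X = np.column_stack([np.ones(m), area])   # intercept term + one feature
y = 30.0 + 2.0 * area                     # noiseless synthetic prices

def cost(theta):
    r = X @ theta - y
    return (r @ r) / (2 * m)

# Normal equations: solve X^T X theta = X^T y directly.
theta_ne = np.linalg.solve(X.T @ X, X.T @ y)
err_ne = np.max(np.abs(theta_ne - np.array([30.0, 2.0])))

# Batch gradient descent on the same cost (small learning rate because
# the feature is not rescaled).
theta = np.zeros(2)
lr = 5e-5
c0 = cost(theta)
for _ in range(2000):
    theta -= lr * (X.T @ (X @ theta - y)) / m
c1 = cost(theta)                           # strictly below c0
```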
Math 1300: Section 4-5 Inverse of a Square Matrix (Jason Aubrey)
This document is a lecture on identity matrices given by Jason Aubrey of the University of Missouri Department of Mathematics. It defines an identity matrix as an n×n matrix with 1s on the main diagonal and 0s elsewhere. Examples of 2×2 and 3×3 identity matrices are given. The key property that the product of a matrix and the identity matrix equals the original matrix is also described. An example calculation of multiplying two matrices is shown step-by-step to illustrate the use of the identity matrix.
This document discusses matrix addition and subtraction. It states that two matrices are equal if they are the same size and have equal corresponding elements. The sum of two matrices is a matrix with elements that are the sums of the corresponding elements. Addition is commutative and associative for matrices of the same size. A zero matrix has all elements equal to zero. The negative of a matrix has elements that are the negatives of the original matrix's elements.
This document introduces homography and projective geometry concepts. It explains that a homography is a projective transformation, represented by a 3x3 matrix, that maps points from one projective plane to another. The document outlines key homography topics such as the line at infinity, mapping between image planes using a homography, and the Direct Linear Transform (DLT) algorithm for estimating a homography from point correspondences between images. It also provides an overview of homography functions in the OpenCV library.
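Applying a homography to points can be sketched directly in NumPy (the matrix entries below are illustrative): points are lifted to homogeneous coordinates, multiplied by H, and de-homogenised by dividing by the last entry. In OpenCV, estimation and mapping are typically done with cv2.findHomography and cv2.perspectiveTransform.

```python
import numpy as np

# An illustrative 3x3 homography matrix.
H = np.array([[1.2, 0.1, 5.0],
              [0.0, 0.9, -3.0],
              [0.001, 0.0, 1.0]])

def apply_homography(H, pts):
    # Lift (x, y) to (x, y, 1), transform, then divide by the last entry.
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    mapped = (H @ pts_h.T).T
    return mapped[:, :2] / mapped[:, 2:3]

pts = np.array([[0.0, 0.0], [100.0, 50.0]])
out = apply_homography(H, pts)
```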
The document contains 15 multiple choice questions related to circles (circunferências) in the plane. Some key details assessed include:
1) Equations of lines parallel to a given line that intersect a circle and form equal chord lengths.
2) Finding which values satisfy an equation relating a line intersecting a circle.
3) Analyzing statements about triangles, lines, and circles.
4) Calculating the product of symmetrics and conjugates of complex numbers within a given set.
5) Determining the range of a chord length measured by a line intersecting a circle.
The questions cover topics such as parallel lines, secants, tangents, intersections, and radii.
The document is a final exam for an engineering course covering several topics in probability and statistics. It contains 7 multi-part questions testing concepts such as Benford's Law, probability distributions including normal, Poisson, and Rayleigh distributions, sampling and descriptive statistics. Students are allowed basic calculators and materials but no outside resources to solve the problems and show their work.
APEX INSTITUTE has been established with sincere and positive resolve to do something rewarding for ENGG. / PRE-MEDICAL aspirants. For this the APEX INSTITUTE has been instituted to provide a relentlessly motivating and competitive atmosphere.
This document describes the Witch of Agnesi curve. It begins by introducing the curve and how it is constructed geometrically based on a circle. Vector-valued functions are derived to describe the curve parametrically, and the rectangular equation is obtained by eliminating the parameter. Specifically:
1) Vector-valued functions rA(θ) and rB(θ) are derived to describe points A and B on the curve parametrically.
2) These are combined to obtain the vector-valued function r(θ) for the overall Witch of Agnesi curve.
3) The rectangular equation y=(8a^3)/(x^2+4a^2) is then derived by eliminating
The document provides information about a test for candidates applying for an M.Tech in Computer Science. It describes:
1) The test will have two parts - a morning objective test (Test MIII) and an afternoon short answer test (Test CS).
2) The CS test booklet will have two groups - Group A covering analytical ability and mathematics at the B.Sc. pass level, and Group B covering advanced topics in mathematics, statistics, physics, computer science, and engineering at the B.Sc. Hons. and B.Tech. levels.
3) Sample questions are provided for both Group A (mathematical reasoning and basic concepts) and Group B (advanced topics in real analysis
The document describes a test for candidates applying for an M.Tech. in Computer Science. [The test consists of two parts - an objective test in the morning and a short answer test in the afternoon. The short answer test has two groups - Group A covers analytical ability and mathematics at the B.Sc. level, while Group B covers additional topics in mathematics, statistics, physics, computer science, or engineering depending on the candidate's choice.] The document provides sample questions testing concepts in mathematics including algebra, calculus, number theory, and logic.
The document provides instructions for a mathematics scholarship test consisting of 3 sections (Algebra, Analysis, Geometry) with 10 questions each. It defines key terms and notations used in the test, such as types of matrices, function notation, and interval notation. It also specifies rules for the test, including that calculators are not allowed and that points will only be awarded if all choices in a question are correct.
I am Nicholas L. I am a Maths Assignment Expert at mathsassignmenthelp.com. I hold a Master's in Mathematics from the University of California. I have been helping students with their assignments for the past 10 years. I solve assignments related to Maths.
Visit mathsassignmenthelp.com or email info@mathsassignmenthelp.com.
You can also call +1 678 648 4277 for any assistance with Maths Assignments.
This document provides an overview of vectors, tensors, and coordinate systems in fluid mechanics. It defines scalars, vectors, and tensors, and describes how they can be represented using basis vectors. It introduces common coordinate systems like Cartesian, cylindrical, and spherical coordinates. It explains how to transform between these systems and decompose vectors into their scalar components. The document also defines tensor operations like addition, multiplication, and transpose. It describes how tensors can be represented by matrices and defines important tensors like the identity tensor.
I am Cage T. I am a Maths Assignment Solver at mathhomeworksolver.com. I hold a Master's in Mathematics, from Los Angeles, USA. I have been helping students with their assignments for the past 9 years. I solved assignments related to Maths .
Visit mathhomeworksolver.com or email support@mathhomeworksolver.com. You can also call on +1 678 648 4277 for any assistance with Maths Assignment.
This document provides an introduction to matrix algebra and random vectors. It defines key concepts such as vectors, matrices, matrix operations, and properties of positive definite matrices. Vectors are defined as arrays of real numbers that can be added or multiplied by scalars. Matrices are rectangular arrays of numbers that can be added or multiplied. Positive definite matrices are matrices where the quadratic form is always nonnegative. The eigenvalues and eigenvectors of a symmetric positive definite matrix allow geometric interpretation of distances defined by the matrix.
This document provides an overview of graphing linear equations. It defines key terms like solutions, intercepts, and linear models. Examples are given to show how to graph equations by finding intercepts or using a table of points. Horizontal and vertical lines are discussed as special cases of linear equations. The document concludes with an example of using a linear equation to model a real-world situation involving monthly phone costs.
This document contains instructions for 5 assignment questions involving numerical integration and solving differential equations. Question 1 involves using the quad function to evaluate several integrals. Question 2 involves using quad to evaluate Fresnel integrals and plot the results. Question 3 involves using Monte Carlo methods to estimate volumes and double integrals. Question 4 involves using Euler's method to solve an initial value problem and analyze errors. Question 5 involves using lsode to solve a system of differential equations modeling atmospheric circulation and experimenting with initial conditions.
Here are the steps to find the line of intersection of the two planes:
1) Write the equations of the planes in standard form:
Plane 1: x + 2y - z = 4
Plane 2: 2x - y + z = 1
2) Set the equations equal to each other and solve as a system of equations:
x + 2y - z = 4
2x - y + z = 1
3) Eliminate one variable:
Subtract the second equation from the first:
(x + 2y - z) - (2x - y + z) = 4 - 1
-x + y = 3
4) Substitute back into one of the
This document discusses beam deflections and summarizes a method for calculating beam deflection using multiple integration. It provides an example of using this method to calculate the deflection of a beam under three-point bending. The maximum deflection occurs at the beam's midpoint and is given by the equation P L3/48EI. It also discusses analyzing statically indeterminate beams by writing slope and deflection equations with unknown reaction forces and solving for the forces using boundary conditions. An example is provided of calculating the deflection of a beam supported at three points.
Application of matrix algebra to multivariate data using standardize scoresAlexander Decker
This document discusses applying matrix algebra to estimate parameters in a regression equation using standardized scores. It presents a methodology for standardizing multivariate data measured in different units. The methodology is demonstrated by applying it to sample data to estimate the regression plane. The results using standardized scores match those obtained in previous studies using original and mean-corrected scores. Standardizing converts data to unit-less, approximately normal scores, allowing comparison across different measurement units.
11.application of matrix algebra to multivariate data using standardize scoresAlexander Decker
This document discusses applying matrix algebra to estimate parameters in a regression equation using standardized scores. It presents a methodology for standardizing multivariate data measured in different units. The methodology is demonstrated by applying it to sample data to estimate the regression plane. The results using standardized scores match those obtained in previous studies using original and mean-corrected scores. Standardizing converts data to approximately normal, unit-less scores, addressing issues that arise when data is measured in different units.
The document contains a set of 45 multiple choice questions related to mathematical sciences topics like machine language, computer hardware, programming languages, matrices, probability, statistics, and linear algebra. The questions cover concepts such as eigenvectors, probability density functions, integration techniques, random variables, estimators, and congruences.
The document discusses dimensionality reduction techniques for reducing high-dimensional data to fewer dimensions. It categorizes dimensionality reduction into feature extraction and feature selection. Feature extraction transforms features to generate new ones, while feature selection selects the best original features. The document then discusses several feature selection algorithms from different categories (filter, wrapper, hybrid) and evaluates their performance on cancer datasets. It finds that linear support vector machines using mRMR feature selection provided the best results.
Cervical cancer rates have dramatically declined in the United States due to widespread Pap smear screening and the ability to treat precancerous lesions before they develop into cancer. The introduction of the Pap test in the 1940s allowed early detection and helped reduce cervical cancer incidence and mortality rates by over 60% between 1955 and 1992. New automated screening systems using digital imaging and computational analysis now further aid in screening and may help expand screening to rural areas through remote image analysis.
The document summarizes linear dynamical models and tracking using the Kalman filter. It discusses prediction using the previous state estimate, correction using the new measurement, and modeling the system and measurements as Gaussian processes. The key steps of prediction using the dynamic model and correction by updating the state estimate based on the new measurement are derived for a linear system with a one-dimensional state vector.
The document discusses camera models used in computer vision. It begins by defining a camera as a mapping from the 3D world to a 2D image. The basic pinhole camera model is then described, including the camera center, image plane, principal axis, and principal point. Central projection using homogeneous coordinates is shown. The camera calibration matrix K is introduced, which relates the camera coordinate system to pixel coordinates. Finally, the full camera matrix P is defined, which combines camera intrinsics K, rotation R, and translation -C to map 3D world points to 2D image points.
This document discusses singular value decomposition (SVD) and its applications. SVD decomposes a matrix into three component matrices that reveal useful properties about the matrix's structure and rank. SVD can be used to find the best-fitting line to a set of points by minimizing the sum of squared distances between points and the line. The solution involves computing the SVD of a transformed matrix and taking the right singular vector corresponding to the second largest singular value.
The document discusses estimating 2D homography from point correspondences between two images using the Direct Linear Transformation algorithm. It describes how each point correspondence provides two linear equations relating the entries of the homography matrix. At least four point correspondences are needed to compute the homography using DLT. The document also discusses issues like degenerate configurations, data normalization, robust estimation techniques like RANSAC to deal with outlier correspondences.
The document summarizes linear dynamical models and tracking using the Kalman filter. It discusses prediction using the previous state estimate, correction using the new measurement, and representing the state as a Gaussian distribution. Key steps include predicting the next state using the dynamic model, then correcting the prediction using the new measurement via Bayes' rule to get an updated state estimate. Calculations involve multiplying and summing Gaussian probability densities.
The document discusses probabilistic segmentation using mixture models and the expectation-maximization (EM) algorithm. It addresses image segmentation and line fitting applications.
For image segmentation, the missing data is an (n x g) matrix of indicator variables showing which pixel belongs to which segment. The E-step computes the probability each pixel belongs to each segment. The M-step re-estimates the mixture model parameters to maximize the complete data log-likelihood.
For line fitting, the missing data is similarly an (n x g) matrix showing which point belongs to which line. The E-step computes the probability each point was drawn from each line. The M-step then re-estimates the line parameters.
The document discusses segmentation and is from the Computer Science and Engineering department at the Indian Institute of Technology in Kharagpur. It contains 29 pages of content about segmentation but provides no other context or summaries of the information within.
The trifocal tensor encapsulates the projective geometry relations between three views. It depends only on the relative pose between the three cameras and their internal parameters. The trifocal tensor can uniquely determine point and line correspondences between the three views and can be used to transfer points from a correspondence in two views to the corresponding point in the third view. It consists of three 3x3 matrices that relate image lines between the views and can induce homographies between views from lines in one of the images.
The document discusses two-view geometry and epipolar geometry in computer vision. It contains the following key points in 3 sentences:
Epipolar geometry describes the intrinsic projective geometry between two views of a scene and is defined by the fundamental matrix F, which is a 3x3 matrix that maps a point in one image to an epipolar line in the other image. The epipolar line is the intersection of the epipolar plane containing the baseline between cameras and the second image plane. Special motions like pure translation result in all epipolar lines intersecting at the epipole, which is the image of the camera center from the other view.
Camera calibration involves determining the internal camera parameters like focal length, image center, distortion, and scaling factors that affect the imaging process. These parameters are important for applications like 3D reconstruction and robotics that require understanding the relationship between 3D world points and their 2D projections in an image. The document describes estimating internal parameters by taking images of a calibration target with known 3D positions and solving for the camera projection matrix P that relates 3D scene points to their 2D image coordinates.
The document discusses segmentation and is from the Computer Science and Engineering department at the Indian Institute of Technology in Kharagpur. It contains 29 pages of content about segmentation but provides no other context or summaries of the information within.
The document discusses least squares minimization and solving systems of linear equations. It begins by introducing overdetermined systems with more equations than unknowns and describes finding the least squares solution that minimizes the residual. It then presents the algorithm which uses the singular value decomposition to solve the normal equations and find the pseudo-inverse. It also covers solving homogeneous systems of equations by minimizing the residual subject to the constraint that the solution vector has unit length.
1. Computer Vision: Projective Geometry
IIT Kharagpur
Computer Science and Engineering,
Indian Institute of Technology
Kharagpur.
(IIT Kharagpur) Projective Geometry Jan ’10 1 / 40
2. Planar Geometry
Geometry is the study of points and lines and their relationships.
Geometry can be studied in terms of properties of geometric
primitives.
An algebraic approach to studying geometry involves establishing
a coordinate system.
Algebraic geometry
Points and lines are represented as vectors.
A conic section is represented by a symmetric matrix.
Results derived using algebraic geometry are very useful for
developing practical computation methods.
3. The 2D projective plane: Notation
A point (x, y ) can be considered as a vector in the vector space
IR2
Geometric entities can be represented by a column vector.
Generally x represents a column vector and xT represents a row
vector.
A point x gets represented by a column vector:
x = (x, y)T
4. Homogeneous representation of lines
Equation of a line: ax + by + c = 0
Line as a vector: (a, b, c)T
Vectors (a, b, c)T and k (a, b, c)T represent the same line.
For different values of scalar k , we get an equivalence class of
vectors.
Any particular vector (a, b, c)T is a representative of the
equivalence class.
Projective space
The set of equivalence classes of vectors in IR3 forms the
projective space IP2 .
The vector (0, 0, 0)T is excluded from the projective space since it
does not correspond to any line.
5. Homogeneous representation of points
A point x = (x, y)T lies on the line l = (a, b, c)T if
(x, y, 1)(a, b, c)T = ax + by + c = 0
The point (x, y )T in IR2 is represented as a 3-vector by adding a
final coordinate of 1.
Since (kx, ky, k)l = 0, the set of vectors (kx, ky, k)T for varying
values of k represents the same point (x, y)T in IR2.
A homogeneous vector of general form x = (x1 , x2 , x3 )T
represents the point (x1 /x3 , x2 /x3 )T in IR2 .
Points as homogeneous vectors are also elements of IP2 .
6. Homogeneous coordinate
Point x lies on line l if and only if xT l = 0.
xT l = lT x = x · l
Homogeneous coordinate (3-vector): x = (x1, x2, x3)T.
Inhomogeneous coordinate (2-vector): x = (x, y)T.
Two lines l = (a, b, c)T and l′ = (a′, b′, c′)T intersect at the point
x = l × l′
The line through two points x and x′ is
l = x × x′
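These cross-product formulas are easy to check numerically. A minimal sketch, assuming NumPy (the particular lines and points are illustrative, not from the slides):

```python
import numpy as np

# Lines in homogeneous form (a, b, c) for ax + by + c = 0.
l1 = np.array([1.0, 0.0, -1.0])   # the line x = 1
l2 = np.array([0.0, 1.0, -1.0])   # the line y = 1

# Intersection of two lines: x = l1 x l2.
x = np.cross(l1, l2)
x_inhom = x[:2] / x[2]            # dehomogenize: (x1/x3, x2/x3)
assert np.allclose(x_inhom, [1.0, 1.0])

# Dually, the line through two points: l = p1 x p2.
p1 = np.array([0.0, 0.0, 1.0])    # the point (0, 0)
p2 = np.array([1.0, 1.0, 1.0])    # the point (1, 1)
l = np.cross(p1, p2)
# Both points satisfy the incidence relation l . p = 0.
assert abs(l @ p1) < 1e-12 and abs(l @ p2) < 1e-12
```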
7. Ideal Points: Line at ∞
Consider two parallel lines l = (a, b, c)T and l′ = (a, b, c′)T.
The intersection of l and l′ is given by l × l′:
l × l′ = (c′ − c)(b, −a, 0)T
The inhomogeneous representation of the point of intersection,
(b/0, −a/0)T, is undefined.
Parallel lines meet at infinity.
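The ideal-point result can be checked the same way; a NumPy sketch with two illustrative parallel lines:

```python
import numpy as np

# Two parallel lines: x + y = 0 and x + y - 5 = 0 (same a, b; different c).
l  = np.array([1.0, 1.0,  0.0])
lp = np.array([1.0, 1.0, -5.0])

p = np.cross(l, lp)
# The last coordinate is 0: the intersection is an ideal point.
assert abs(p[2]) < 1e-12
# Up to scale it is (b, -a, 0), the common direction of the lines.
assert np.allclose(p / p[0], [1.0, -1.0, 0.0])
```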
8. Ideal points and the line at infinity
All homogeneous 3-vectors form the projective space IP2 .
The points for which the last coordinate x3 = 0 are the ideal points
(x1, x2, 0)T.
The set of all ideal points (x1, x2, 0)T lies on a single line, the Line
at Infinity, denoted l∞ = (0, 0, 1)T.
A line l = (a, b, c)T intersects l∞ in the ideal point (b, −a, 0)T .
The vector (b, −a)T is tangent to the line and orthogonal to the
line normal (a, b) and so represents the line’s direction.
9. Advantage of projective geometry
Projective plane IP2
In IP2 , two distinct lines meet in a single point and two points lie on
a single line.
In the standard Euclidean geometry of IR2 , parallel lines form a
special case.
The study of the geometry of IP2 is known as projective geometry.
In the purely geometric study of projective geometry, one does not
make any distinction between points at infinity (ideal points) and
ordinary points.
10. A model for projective plane
Points in IP2 correspond to rays in IR3 .
The set of all vectors k (x1 , x2 , x3 )T as k varies forms a ray through
origin.
The lines in IP2 are planes passing through the origin in IR3.
Getting the inhomogeneous representation: points and lines may be
obtained by intersecting this set of rays and planes with the
plane x3 = 1.
11. Duality
The role of points and lines can be interchanged in statements
concerning the properties of lines and points.
E.g. lT x = 0 also implies xT l = 0
To any theorem of 2-dimensional projective geometry there
corresponds a dual theorem, which may be derived by interchanging
the roles of points and lines in the original theorem.
A line through 2 points is dual to the point of intersection of the
two lines.
12. Conics and Dual Conics
A conic is a curve described by a second-degree equation in the
plane.
E.g. hyperbola, ellipse, parabola.
Inhomogeneous coordinates → equation of a conic:
ax² + bxy + cy² + dx + ey + f = 0
Homogenizing this by the replacements x → x1/x3, y → x2/x3:
ax1² + bx1x2 + cx2² + dx1x3 + ex2x3 + fx3² = 0
13. Conic in matrix form
xT Cx = 0, where
    [ a    b/2  d/2 ]
C = [ b/2  c    e/2 ]
    [ d/2  e/2  f   ]
The matrix C is a homogeneous representation of the conic.
The conic has 5 degrees of freedom, i.e. the ratios {a : b : c : d : e : f}.
Five points are required to define a conic.
Tangent to the conic
The line l tangent to the conic C at a point x on C is given by l = Cx.
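Both the matrix form and the tangent formula l = Cx can be verified on a concrete conic; a NumPy sketch using the unit circle (my example, not from the slides):

```python
import numpy as np

# Unit circle x^2 + y^2 - 1 = 0: a=1, b=0, c=1, d=0, e=0, f=-1.
a, b, c, d, e, f = 1.0, 0.0, 1.0, 0.0, 0.0, -1.0
C = np.array([[a,     b / 2, d / 2],
              [b / 2, c,     e / 2],
              [d / 2, e / 2, f    ]])

x = np.array([1.0, 0.0, 1.0])    # the point (1, 0), homogeneous
assert abs(x @ C @ x) < 1e-12    # it lies on the conic

# Tangent at x: l = Cx. At (1, 0) this is the vertical line x = 1,
# i.e. (1, 0, -1) up to scale.
l = C @ x
assert np.allclose(l, [1.0, 0.0, -1.0])
```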
15. Projective Transformations
2D projective geometry is the study of properties of the projective
plane IP2 that are invariant under a group of transformations
known as projectivities.
A projectivity is an invertible mapping from points in IP2 to points in
IP2 .
A projectivity is an invertible mapping h from IP2 to itself such that
three points x1 , x2 and x3 lie on the same line if and only if h(x1 ),
h(x2 ) and h(x3 ) do.
Also called a collineation, a projective transformation, or a
homography.
16. Homography (Projective Transformation)
Algebraic definition:
A mapping h : IP2 → IP2 is a projectivity if and only if there exists a
non-singular 3 × 3 matrix H such that for any point in IP2 represented by
vector x it is true that h(x) = Hx.
H is a linear transformation:
             [ x1′ ]   [ h11  h12  h13 ] [ x1 ]
x′ = Hx:     [ x2′ ] = [ h21  h22  h23 ] [ x2 ]
             [ x3′ ]   [ h31  h32  h33 ] [ x3 ]
H is a homogeneous matrix:
Only the ratios of the matrix elements are significant.
There are 8 degrees of freedom.
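A minimal sketch of applying such an H with NumPy, also illustrating that only the ratios of its elements matter (the particular H and point are illustrative):

```python
import numpy as np

H = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [0.5, 0.0, 2.0]])   # any non-singular 3x3 matrix

def apply_homography(H, pt):
    """Map an inhomogeneous 2D point through H and dehomogenize."""
    x = np.array([pt[0], pt[1], 1.0])
    xp = H @ x
    return xp[:2] / xp[2]

p = apply_homography(H, (2.0, 1.0))
# Scaling H by any non-zero k gives the same mapping.
p_scaled = apply_homography(5.0 * H, (2.0, 1.0))
assert np.allclose(p, p_scaled)
```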
17. Projective Transformation
A projective transformation leaves the projective properties
invariant.
A projective transformation in IP2 is simply a linear transformation
of IR3.
18. Transformation of Lines
Points xi are transformed as xi′ = Hxi.
If these points xi lie on a line l, then lT xi = 0.
The transformed points xi′ then lie on the line l′ given by
l′ = H−T l
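The rule l′ = H−T l can be checked by verifying that incidence survives the transformation; a NumPy sketch with an arbitrary invertible H:

```python
import numpy as np

H = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])   # invertible (det = 1)

x = np.array([1.0, 2.0, 1.0])     # a point
l = np.array([2.0, -1.0, 0.0])    # a line through it: l.x = 0
assert abs(l @ x) < 1e-12

xp = H @ x                        # transformed point x' = Hx
lp = np.linalg.inv(H).T @ l       # transformed line  l' = H^{-T} l
assert abs(lp @ xp) < 1e-12       # incidence is preserved
```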
19. Transformation of Conics
Points x are transformed as x′ = Hx.
If the point x lies on a conic C, then xT Cx = 0.
xT Cx = x′T [H−1]T C H−1 x′ = x′T H−T C H−1 x′ = 0
Under a point transformation x′ = Hx, a conic C transforms to
C′ = H−T C H−1
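Similarly for conics: a point of C maps to a point of C′ = H−T C H−1. A NumPy sketch using the unit circle and an arbitrary H:

```python
import numpy as np

H = np.array([[1.0, 0.2, 0.0],
              [0.0, 1.0, 0.5],
              [0.1, 0.0, 1.0]])   # an arbitrary non-singular matrix

C = np.diag([1.0, 1.0, -1.0])     # unit circle x^2 + y^2 = 1

x = np.array([0.6, 0.8, 1.0])     # a point on the circle
assert abs(x @ C @ x) < 1e-12

Hinv = np.linalg.inv(H)
Cp = Hinv.T @ C @ Hinv            # C' = H^{-T} C H^{-1}
xp = H @ x
assert abs(xp @ Cp @ xp) < 1e-10  # Hx lies on the transformed conic
```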
20. Hierarchy of Transformations
General linear group: GL(n) −→ Group of invertible n × n matrices
with real elements.
Projective linear group: PL(n) −→ Matrices are related by a scalar
multiplier. Quotient group of GL(n).
Subgroups of the Projective Linear Group
Affine group: −→ Matrices for which the last row is (0, 0, 1).
Euclidean group: −→ Additionally, the upper left hand 2 × 2 matrix
is orthogonal.
Oriented Euclidean group: −→ Additionally, the upper left hand
2 × 2 matrix has determinant 1.
21. Invariants
A transformation can be described in terms of those elements or
quantities that are preserved or invariant.
A (scalar) invariant of a geometric configuration is a function of the
configuration whose value is unchanged by a particular
transformation.
Euclidean invariants: distance between two points; angle between two lines.
Similarity invariants: ratio of distances between points; angle between two lines.
22.–23. Examples of Projective transformations (figure slides)
24. Example of Projective Correction (figure slide)
25. Isometries
[ x′ ]   [ ε cos θ   −sin θ   tx ] [ x ]
[ y′ ] = [ ε sin θ    cos θ   ty ] [ y ]      where ε = ±1
[ 1  ]   [ 0          0       1  ] [ 1 ]
Isometries are transformations of the plane IR2 that preserve
Euclidean distance.
If ε = 1, the isometry is orientation-preserving and is a Euclidean
transformation. A Euclidean transformation is a composition of a
translation and a rotation.
If ε = −1, the isometry reverses orientation.
26. Isometries: In short form
x′ = HE x = [ R   t ] x
            [ 0T  1 ]
R is a 2 × 2 rotation matrix: RT R = RRT = I.
t is a translation 2-vector.
0 is a null 2-vector.
It has 3 degrees of freedom: 1 for rotation, 2 for translation.
Invariants of an isometry
Length
Angle
Area
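Length invariance under an isometry can be confirmed directly; a NumPy sketch with an illustrative rotation and translation:

```python
import numpy as np

theta, tx, ty = np.pi / 6, 3.0, -2.0
HE = np.array([[np.cos(theta), -np.sin(theta), tx],
               [np.sin(theta),  np.cos(theta), ty],
               [0.0,            0.0,           1.0]])

def transform(Hm, p):
    q = Hm @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

p1, p2 = (0.0, 0.0), (3.0, 4.0)
d_before = 5.0                         # |p2 - p1| for the 3-4-5 pair
q1, q2 = transform(HE, p1), transform(HE, p2)
d_after = np.linalg.norm(q2 - q1)
assert np.isclose(d_before, d_after)   # length is an isometry invariant
```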
27. Similarity Transformation
[ x′ ]   [ s cos θ   −s sin θ   tx ] [ x ]
[ y′ ] = [ s sin θ    s cos θ   ty ] [ y ]      where s is the isotropic scaling
[ 1  ]   [ 0          0         1  ] [ 1 ]
It is an isometry composed with an isotropic scaling.
It preserves shape.
Has 4 degrees of freedom −→ scaling (1), rotation (1), translation (2).
28. Similarity Transformation: In short form
x′ = HS x = [ sR  t ] x
            [ 0T  1 ]
R is a 2 × 2 rotation matrix: RT R = RRT = I.
t is a translation 2-vector.
0 is a null 2-vector.
Invariants of a similarity
Angle
Parallel lines remain parallel.
Length: the ratio of two lengths is preserved.
Area: the ratio of two areas is preserved.
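Under a similarity, absolute lengths change but their ratios do not; a NumPy sketch with an illustrative s, θ, and t:

```python
import numpy as np

s, theta = 3.0, np.pi / 4
HS = np.array([[s * np.cos(theta), -s * np.sin(theta), 1.0],
               [s * np.sin(theta),  s * np.cos(theta), 2.0],
               [0.0,                0.0,               1.0]])

def seg_length(Hm, p, q):
    def m(pt):
        v = Hm @ np.array([pt[0], pt[1], 1.0])
        return v[:2] / v[2]
    return np.linalg.norm(m(p) - m(q))

d1 = seg_length(HS, (0, 0), (1, 0))    # unit segment -> length s
d2 = seg_length(HS, (0, 0), (0, 2))    # length-2 segment -> length 2s
assert not np.isclose(d1, 1.0)         # lengths are not preserved...
assert np.isclose(d1 / d2, 0.5)        # ...but their ratio is
```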
29. Metric Structure
Metric Structure implies that the structure is defined up to a similarity.
30. Affine Transformation (Affinity)
[ x′ ]   [ a11  a12  tx ] [ x ]
[ y′ ] = [ a21  a22  ty ] [ y ]
[ 1  ]   [ 0    0    1  ] [ 1 ]
x′ = HA x = [ A   t ] x
            [ 0T  1 ]
A is a 2 × 2 non-singular matrix.
Has 6 degrees of freedom −→ 6 matrix elements.
The transformation can be computed from 3 point correspondences.
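Computing an affinity from 3 correspondences amounts to solving a 6 × 6 linear system, since each correspondence gives two equations. A sketch assuming NumPy; the helper name affine_from_points is mine:

```python
import numpy as np

def affine_from_points(src, dst):
    """Solve for (a11, a12, tx, a21, a22, ty) from 3 correspondences."""
    A = np.zeros((6, 6))
    b = np.zeros(6)
    for i, ((x, y), (xp, yp)) in enumerate(zip(src, dst)):
        A[2 * i]     = [x, y, 1, 0, 0, 0]
        A[2 * i + 1] = [0, 0, 0, x, y, 1]
        b[2 * i], b[2 * i + 1] = xp, yp
    a11, a12, tx, a21, a22, ty = np.linalg.solve(A, b)
    return np.array([[a11, a12, tx], [a21, a22, ty], [0, 0, 1]])

src = [(0, 0), (1, 0), (0, 1)]       # the 3 points must not be collinear
dst = [(2, 3), (4, 3), (2, 6)]       # scale x by 2, y by 3, shift by (2, 3)
HA = affine_from_points(src, dst)
assert np.allclose(HA, [[2, 0, 2], [0, 3, 3], [0, 0, 1]])
```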
31. Decomposition of an Affine transform
A = R(θ) R(−φ) D R(φ)
D is a diagonal matrix: D = [ λ1  0  ]
                            [ 0   λ2 ]
R(θ) and R(φ) are rotations by θ and φ respectively.
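This decomposition can be obtained from the SVD A = U D V^T by taking R(φ) = V^T and R(θ) = U V^T; a NumPy sketch (when det A > 0 and the SVD signs are chosen consistently, both factors are rotations):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.5, 1.5]])        # det A = 2.5 > 0

U, d, Vt = np.linalg.svd(A)       # A = U D V^T
D = np.diag(d)                    # diag(lambda1, lambda2)
R_theta = U @ Vt                  # plays the role of R(theta)
R_phi = Vt                        # plays the role of R(phi)

# R(theta) R(-phi) D R(phi) = U V^T V D V^T = U D V^T = A
reconstructed = R_theta @ R_phi.T @ D @ R_phi
assert np.allclose(reconstructed, A)
```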
32. Affine transform: Non-isotropic scaling
Non-isotropic scaling means there is a scaling direction (angle φ),
and a ratio of scaling parameters λ1 : λ2 in orthogonal directions.
It has 2 extra degrees of freedom compared to a similarity
transform.
Invariants of an affine transform
Parallel lines remain parallel.
Length: the ratio of two lengths is preserved for parallel line segments.
Area: the ratio of two areas is preserved. In fact, areas are scaled by the
factor λ1 λ2.
There can be orientation-preserving and orientation-reversing affinities,
depending on the sign of det A.
33. Projective Transformation
x′ = HP x = [ A   t ] x      where v = (v1, v2)T
            [ vT  v ]
Has 8 degrees of freedom −→ 9 elements with only their ratios
significant.
The transformation can be computed from 4 point correspondences,
with no 3 collinear on either plane.
Invariants
A ratio of ratios (the cross ratio) of lengths on a line is a projective
invariant.
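Cross-ratio invariance is easy to check for four collinear points; a NumPy sketch (the projectivity and points are illustrative):

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio of four collinear points given as coordinates on the line."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

# A projectivity with v != 0, restricted to the x-axis.
H = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [0.5, 0.0, 1.0]])

def map_x(H, x):
    v = H @ np.array([x, 0.0, 1.0])
    return v[0] / v[2]

xs = [0.0, 1.0, 2.0, 4.0]
ys = [map_x(H, x) for x in xs]
# Individual lengths and their ratios change, but the cross ratio does not.
assert np.isclose(cross_ratio(*xs), cross_ratio(*ys))
```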
34. Similarity (4 dof)
↓
Affinity (6 dof)
↓
Projectivity (8 dof)
Affinity: scaling of area is the same all over the plane. The orientation
of a transformed line does not depend on its position on the plane.
Projectivity: area scaling varies with position. The orientation of a
transformed line depends on its initial orientation and position.
The vector v is responsible for the non-linear effects. Acting on an
ideal point (x1, x2, 0)T:
[ A   t ] (x1, x2, 0)T = ( A(x1, x2)T, 0 )T
[ 0T  1 ]
[ A   t ] (x1, x2, 0)T = ( A(x1, x2)T, v1x1 + v2x2 )T
[ vT  v ]
An affinity maps ideal points to ideal points, while a projectivity can
map them to finite points.
35. Decomposition of a Projective Transform
H = HS HA HP = [ sR  t ] [ K   0 ] [ I   0 ]  =  [ A   t ]
               [ 0T  1 ] [ 0T  1 ] [ vT  v ]     [ vT  v ]
where A = sRK + tvT.
K is an upper-triangular matrix normalized as det K = 1.
The decomposition is valid if v ≠ 0, and is unique if s is chosen positive.
Example:
[ 1.707  0.586  1.0 ]   [ 2cos45°  −2sin45°  1 ] [ 0.5  1  0 ] [ 1  0  0 ]
[ 2.707  8.242  2.0 ] = [ 2sin45°   2cos45°  2 ] [ 0    2  0 ] [ 0  1  0 ]
[ 1.0    2.0    1.0 ]   [ 0         0        1 ] [ 0    0  1 ] [ 1  2  1 ]
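The numeric example on this slide can be verified by multiplying the three factors; a NumPy sketch:

```python
import numpy as np

c, s = 2 * np.cos(np.pi / 4), 2 * np.sin(np.pi / 4)
HS = np.array([[c,   -s,   1.0],
               [s,    c,   2.0],
               [0.0,  0.0, 1.0]])
HA = np.array([[0.5, 1.0, 0.0],
               [0.0, 2.0, 0.0],
               [0.0, 0.0, 1.0]])
HP = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [1.0, 2.0, 1.0]])

H = HS @ HA @ HP
target = np.array([[1.707, 0.586, 1.0],
                   [2.707, 8.242, 2.0],
                   [1.0,   2.0,   1.0]])
assert np.allclose(H, target, atol=1e-3)
# K (upper-left 2x2 of HA) is upper-triangular with det K = 1.
assert np.isclose(np.linalg.det(HA[:2, :2]), 1.0)
```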
37. Number of Invariants
The number of functionally independent invariants
≥
the number of degrees of freedom of the configuration
−
the number of degrees of freedom of the transformation
38. Projective (8 dof)
[ h11  h12  h13 ]
[ h21  h22  h23 ]      Invariants: concurrency, collinearity,
[ h31  h32  h33 ]      order of contact, cross ratios.
Affine (6 dof)
[ a11  a12  tx ]
[ a21  a22  ty ]       Invariants: parallelism, ratios of areas,
[ 0    0    1  ]       ratios of lengths on parallel lines, the line at ∞.
39. Similarity (4 dof)
[ sr11  sr12  tx ]
[ sr21  sr22  ty ]     Invariants: ratios of lengths, angles,
[ 0     0     1  ]     the circular points.
Euclidean (3 dof)
[ r11  r12  tx ]
[ r21  r22  ty ]       Invariants: lengths, area.
[ 0    0    1  ]