The document provides an overview of basic math concepts for computer graphics, including:
- Sets, mappings, and Cartesian coordinates are introduced to represent vectors and points in 2D and 3D space.
- Linear interpolation is described as a fundamental operation in graphics used to connect data points.
- Parametric and implicit equations are discussed for representing common 2D curves and lines.
- Concepts like the dot product, cross product, and gradient are covered, which are important for calculations involving vectors.
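As a concrete illustration of the linear interpolation mentioned above, a minimal `lerp` helper (a hypothetical name and signature, not taken from the document) might look like:

```python
def lerp(a, b, t):
    """Return the value a fraction t of the way from a to b (t in [0, 1])."""
    return (1.0 - t) * a + t * b

# Interpolating componentwise between the 2D points (0, 2) and (4, 6):
p = (lerp(0.0, 4.0, 0.25), lerp(2.0, 6.0, 0.25))  # a quarter of the way along
```

Applying the same helper per component is exactly how graphics code blends points, colors, or normals between two keyframes.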
This document discusses various concepts related to vectors and 3D geometry including dot products, cross products, planes, lines, and their relationships. Dot products can be used to find the angle between vectors and determine whether vectors are perpendicular. Cross products give a vector perpendicular to both input vectors. Plane equations can be defined using a point and normal vector, three points, or two vectors in the plane. Lines are defined by two points or by a point and a direction vector. The intersection of planes and lines, parallelism, and the distances from points to lines and planes are also covered.
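The dot- and cross-product facts summarized above can be sketched in a few lines of Python (illustrative code, not from the document):

```python
import math

def dot(u, v):
    """Dot product; zero exactly when u and v are perpendicular."""
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    """Cross product of two 3D vectors; the result is perpendicular to both."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def angle(u, v):
    """Angle between u and v from the dot product: cos(theta) = u.v / (|u||v|)."""
    return math.acos(dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v))))

u, v = (1, 0, 0), (0, 1, 0)
n = cross(u, v)       # (0, 0, 1): a normal to the plane spanned by u and v
theta = angle(u, v)   # pi/2: the vectors are perpendicular
```

The normal `n` is exactly the kind of vector used in the point-and-normal plane equation mentioned above.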
The document outlines the aims, objectives, and syllabus for the Mathematics HL (1st exams 2014) course. It includes:
- 10 aims of the course focused on developing mathematical skills, understanding, problem solving, and appreciation of mathematics.
- 6 objectives centered around demonstrating knowledge and understanding of mathematical concepts, problem solving, communication, use of technology, reasoning, and inquiry approaches.
- The syllabus is divided into core topics (Algebra; Functions and equations; Circular functions and trigonometry; Vectors; Statistics and probability; Calculus) and 2 optional topics (Statistics and probability; Sets, relations and groups), providing 48 hours of instruction each.
2. Linear Algebra for Machine Learning: Basis and Dimension — Ceni Babaoglu, PhD
The seminar series will focus on the mathematical background needed for machine learning. The first set of seminars will be on "Linear Algebra for Machine Learning". Here are the slides of the second part, which discusses basis and dimension.
Here is the link to the first part, which discussed linear systems: https://www.slideshare.net/CeniBabaogluPhDinMat/linear-algebra-for-machine-learning-linear-systems/1
This document summarizes an article published in the journal Mathematical Theory and Modeling that discusses extensions of *-algebras. It begins by providing definitions of key terms such as linear space, normed linear space, algebra, Banach space, Banach algebra, involution, and *-algebra. It then gives concrete examples of *-algebras. Next, it describes how an extension of a *-algebra can be represented by a commutative diagram or a short exact sequence. The article concludes by restating its purpose and providing references.
This document provides an introduction to graph theory concepts. It defines graphs as mathematical objects consisting of nodes and edges. Both directed and undirected graphs are discussed. Key graph properties like paths, cycles, degrees, and connectivity are defined. Classic graph problems introduced include Eulerian circuits, Hamiltonian circuits, spanning trees, and graph coloring. Graph theory is a fundamental area of mathematics with applications in artificial intelligence.
This document provides an overview of concepts from linear algebra that are necessary for understanding quantum mechanics. It reviews vectors, vector spaces, linear independence, bases, linear operators, and complex numbers. It then introduces key concepts for quantum mechanics, including Dirac notation, inner products, outer products, eigenvalues and eigenvectors, unitary and Hermitian operators, and tensor products. The goal is to cover the necessary mathematical foundations and notations systematically to enable the study of quantum mechanics postulates.
1. Linear Algebra for Machine Learning: Linear Systems — Ceni Babaoglu, PhD
The seminar series will focus on the mathematical background needed for machine learning. The first set of seminars will be on "Linear Algebra for Machine Learning". Here are the slides of the first part, which gives a short overview of matrices and discusses linear systems.
The document discusses the dimension of vector spaces. It defines the dimension of a vector space V as the number of vectors in a basis for V. It states that any set containing more vectors than the dimension must be linearly dependent, and that every basis of V must consist of exactly the dimension number of vectors. Examples of finding the dimension of various vector spaces and subspaces are provided.
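The dimension facts summarized above can be illustrated numerically: the dimension of the span of a set of vectors equals the rank of the matrix whose rows are those vectors (a NumPy sketch with made-up vectors, not from the document):

```python
import numpy as np

vectors = np.array([[1, 0, 2],
                    [0, 1, 1],
                    [1, 1, 3]])   # third row = first + second: linearly dependent

# dim(span) = rank; three vectors in a 2-dimensional span must be dependent.
dim = np.linalg.matrix_rank(vectors)
```

Any basis of this span must consist of exactly `dim` vectors, matching the statement above.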
The document summarizes key concepts from chapter 2 of the lecture slides on linear algebra for deep learning. It defines scalars as single numbers and vectors as 1-D arrays of numbers that can be indexed. Matrices are 2-D arrays of numbers that are indexed with two numbers. Tensors generalize this to arrays with more dimensions. The document also discusses matrix operations like transpose, dot product, and inversion which are important for solving systems of linear equations. It introduces norms as functions to measure the size of vectors.
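The scalar/vector/matrix/tensor hierarchy and the operations listed above might look like this in NumPy (the values are chosen purely for illustration):

```python
import numpy as np

s = 3.0                                # scalar: a single number
v = np.array([1.0, 2.0, 2.0])          # vector: 1-D array, one index
M = np.array([[2.0, 0.0],
              [1.0, 3.0]])             # matrix: 2-D array, two indices
T = np.zeros((2, 3, 4))                # tensor: array with more than two axes

Mt = M.T                               # transpose
x = np.linalg.solve(M, np.array([2.0, 7.0]))  # solve M @ x = b (avoids explicit inversion)
norm_v = np.linalg.norm(v)             # Euclidean norm: sqrt(1 + 4 + 4) = 3
```

`np.linalg.solve` is preferred over forming the inverse explicitly, but both relate to the same system-solving role of inversion described above.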
The document discusses different types of functions including linear, quadratic, absolute value, and square root functions. It provides the definitions and key properties of each function such as domain, range, intercepts, vertex, and transformations that modify the graph. Examples are worked through demonstrating how to find specific characteristics of each function and graph transformations.
The document discusses properties of determinants, row operations on matrices, and how they affect determinants. It then covers Cramer's rule, vector spaces, subspaces, and the null space and column space of matrices. Specifically, it provides theorems showing that row replacements and scalings of rows do not change the determinant, while row interchanges negate the determinant. Cramer's rule is introduced for solving systems of linear equations. Key concepts for vector spaces and subspaces are defined, and the null space and column space of a matrix are shown to be subspaces.
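Cramer's rule and the row-interchange property can both be checked with a small NumPy sketch (the 2×2 system is made up for illustration):

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with column i replaced by b."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = cramer(A, b)                     # same answer as np.linalg.solve(A, b)

d_swapped = np.linalg.det(A[::-1])   # interchanging rows negates the determinant
```

Cramer's rule is mainly of theoretical interest; elimination-based solvers are far cheaper for large systems.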
3. Linear Algebra for Machine Learning: Factorization and Linear Transformations — Ceni Babaoglu, PhD
The seminar series will focus on the mathematical background needed for machine learning. The first set of seminars will be on "Linear Algebra for Machine Learning". Here are the slides of the third part, which discusses factorization and linear transformations.
Here is the link to the first part, which discussed linear systems: https://www.slideshare.net/CeniBabaogluPhDinMat/linear-algebra-for-machine-learning-linear-systems/1
Here are the slides of the second part, which discussed basis and dimension:
https://www.slideshare.net/CeniBabaogluPhDinMat/2-linear-algebra-for-machine-learning-basis-and-dimension
5. Linear Algebra for Machine Learning: Singular Value Decomposition and Principal Component Analysis — Ceni Babaoglu, PhD
The seminar series will focus on the mathematical background needed for machine learning. The first set of seminars will be on "Linear Algebra for Machine Learning". Here are the slides of the fifth part, which discusses singular value decomposition and principal component analysis.
Here are the slides of the first part, which discussed linear systems: https://www.slideshare.net/CeniBabaogluPhDinMat/linear-algebra-for-machine-learning-linear-systems/1
Here are the slides of the second part, which discussed basis and dimension:
https://www.slideshare.net/CeniBabaogluPhDinMat/2-linear-algebra-for-machine-learning-basis-and-dimension
Here are the slides of the third part, which discussed factorization and linear transformations:
https://www.slideshare.net/CeniBabaogluPhDinMat/3-linear-algebra-for-machine-learning-factorization-and-linear-transformations-130813437
Here are the slides of the fourth part, which discussed eigenvalues and eigenvectors:
https://www.slideshare.net/CeniBabaogluPhDinMat/4-linear-algebra-for-machine-learning-eigenvalues-eigenvectors-and-diagonalization
The document discusses three geometry problems involving vectors and their solutions:
1. It shows that the lines through the midpoints of a parallelogram's sides trisect its diagonals.
2. It proves that if two pairs of opposite edges of a tetrahedron are perpendicular, then the third pair is also perpendicular.
3. It demonstrates that the segment joining the midpoints of two sides of a triangle is parallel to the third side and half its length.
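The midline result can be checked numerically for a sample triangle (a check, not a proof; the vertices below are arbitrary illustration values):

```python
def midpoint(p, q):
    """Componentwise midpoint of two points."""
    return tuple((a + b) / 2 for a, b in zip(p, q))

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
M = midpoint(A, C)                     # midpoint of side AC
N = midpoint(B, C)                     # midpoint of side BC
MN = (N[0] - M[0], N[1] - M[1])        # the midline vector
AB = (B[0] - A[0], B[1] - A[1])        # the third side
# MN equals AB / 2: parallel to AB and half its length.
```

The vector proof follows the same algebra: MN = (C + B)/2 - (C + A)/2 = (B - A)/2.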
The document discusses various topics related to vectors including:
- Definitions of vectors, scalars, magnitude and direction
- Equality of vectors and types of vectors
- Addition and subtraction of vectors using triangle law and parallelogram law
- Multiplication of a vector by a scalar
- Scalar (dot) product and properties
- Vector (cross) product and properties
- Applications to work done, moments and areas
The document provides explanations, properties, examples and formulas for key vector algebra concepts.
This document summarizes three applications of linear algebra:
1) Fast integer multiplication, which can be done in roughly O(n log n) time by using Fourier transforms to represent integers as polynomials and multiply those polynomials.
2) Data structures like databases and graphs can be represented using matrices and vectors from linear algebra.
3) Multimedia like images, sound, and video can be stored as vectors and matrices, with images as pixel arrays, sound as amplitude arrays, and video as arrays of images.
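The first application can be sketched with NumPy's FFT: two polynomials are multiplied by pointwise-multiplying their transforms (an illustrative sketch of the idea, not the document's implementation):

```python
import numpy as np

def poly_multiply(a, b):
    """Multiply two polynomials (coefficient lists, lowest degree first)
    via the FFT; the same idea underlies fast integer multiplication."""
    n = len(a) + len(b) - 1          # degree of the product, plus one
    size = 1
    while size < n:                  # round up to a power of two for the FFT
        size *= 2
    fa = np.fft.rfft(a, size)
    fb = np.fft.rfft(b, size)
    prod = np.fft.irfft(fa * fb, size)[:n]
    return np.round(prod).astype(int)

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
coeffs = poly_multiply([1, 2], [3, 4])
```

Reading the inputs as base-10 digit lists, this is 21 × 43: evaluating 3 + 10·10 + 8·100 gives 903, the integer product after carrying.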
This document provides an introduction to vectors using a geometric approach. It begins by defining vectors as oriented line segments representing displacements, velocities, and forces. Key concepts introduced include vector addition and scalar multiplication. These operations are used to define a vector space, which has properties like closure under addition and scalar multiplication. Specific vector spaces discussed include Rn, the set of n-tuples of real numbers, and Cn, the set of n-tuples of complex numbers. The document also covers bases, linear independence, components of vectors with respect to a basis, and the dimension of a vector space. Several exercises are provided to reinforce these concepts.
Enumeration methods are very important in a variety of settings, both mathematical and applied. For many problems there is no real hope of completing the enumeration in reasonable time, because the number of solutions is so large. This talk is about how to push such computations to the limit.
The talk is divided into:
(a) A regular enumeration procedure using computerized case distinction.
(b) The use of symmetry groups for isomorphism checks.
(c) An augmentation scheme that enumerates objects up to isomorphism without keeping the full list in memory.
(d) The homomorphism principle, which maps a complex problem to a simpler one.
This document provides an introduction to graph theory concepts. It defines what a graph is consisting of vertices and edges. It discusses different types of graphs like simple graphs, multigraphs, digraphs and their properties. It introduces concepts like degrees of vertices, handshaking lemma, planar graphs, Euler's formula, bipartite graphs and graph coloring. It provides examples of special graphs like complete graphs, cycles, wheels and hypercubes. It discusses applications of graphs in areas like job assignments and local area networks. The document also summarizes theorems regarding planar graphs like Kuratowski's theorem stating conditions for a graph to be non-planar.
This document provides an overview of graph theory and applications. It begins with a brief history of graph theory and examples of early applications. It then covers basic graph theory concepts like paths, trees, connectivity, and graph representations. The document discusses representing graphs with adjacency matrices and incidence matrices. It also covers algorithms for determining connectivity in graphs and searching graphs using depth-first search. The document aims to provide an introduction to fundamental graph theory topics and applications in large graphs.
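The depth-first-search connectivity test mentioned above can be sketched as follows (the adjacency-list dict is an assumed representation; the document itself discusses adjacency and incidence matrices):

```python
def is_connected(graph):
    """Depth-first search from an arbitrary start vertex; the graph is
    connected iff the search reaches every vertex."""
    start = next(iter(graph))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for w in graph[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(graph)

g1 = {0: [1], 1: [0, 2], 2: [1]}   # a path on three vertices: connected
g2 = {0: [1], 1: [0], 2: []}       # vertex 2 is isolated: not connected
```

An iterative stack replaces recursion here; the traversal order differs from recursive DFS but the reachable set is the same.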
Math 1300: Section 7-3 Basic Counting Principles — Jason Aubrey
The document discusses basic counting principles for sets, including the addition principle and Venn diagrams. It provides examples of using the addition principle to count the total number of students in a class when given the numbers of students in different categories, like males and females or different majors. It defines the addition principle formula as the total being equal to the sum of the individual set counts minus any overlap between sets.
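The addition principle can be checked directly with Python sets (the student names and class composition below are made-up illustration values):

```python
# Addition principle: |A ∪ B| = |A| + |B| - |A ∩ B|.
math_majors = {"Ann", "Ben", "Carla", "Dev"}
cs_majors = {"Carla", "Dev", "Ed"}          # Carla and Dev double-major

total = len(math_majors) + len(cs_majors) - len(math_majors & cs_majors)
# Subtracting the overlap avoids counting the double majors twice.
```

This is the two-set case of inclusion-exclusion, exactly the formula stated in the summary above.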
This document provides an introduction to fundamental concepts in graph theory. It defines what a graph is composed of and different graph types including simple graphs, directed graphs, bipartite graphs, and complete graphs. It discusses graph terminology such as vertices, edges, paths, cycles, components, and subgraphs. It also covers graph properties like connectivity, degrees, isomorphism, and graph coloring. Examples are provided to illustrate key graph concepts and theorems are stated about properties of graphs like the Petersen graph and graph components.
This document discusses category theory and algebraic semantics for programming languages. It defines categories, functors, natural transformations, Ω-algebras, (Ω,E)-algebras, and monads. Ω-algebras allow modeling algebraic structures like groups. (Ω,E)-algebras satisfy equations E and have a free algebra. Monads correspond to algebraic theories and define Kleisli categories of algebras. Algebraic semantics uses axioms and rules to prove equalities in (Ω,E)-algebras.
Dynamic programming is an algorithm design technique that solves problems by breaking them down into smaller subproblems. It is useful for problems that exhibit optimal substructure and overlapping subproblems. Some examples of problems solved using dynamic programming include the optimal binary search tree problem, the 0/1 knapsack problem, and matrix chain multiplication. Dynamic programming works by storing the results of subproblems to avoid recomputing them, building up to the overall optimal solution.
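The 0/1 knapsack problem mentioned above illustrates both ingredients (optimal substructure and stored subproblem results); a standard table-based sketch, with illustrative weights and values:

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack via dynamic programming: dp[c] is the best value
    achievable with capacity c using the items considered so far."""
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

best = knapsack([1, 3, 4], [15, 20, 30], 4)   # optimal: items of weight 1 and 3
```

Each `dp[c]` is computed once and reused, which is precisely the avoided recomputation the summary describes.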
Lecture 8: Nul, Col, Bases, Dim & Rank — Sections 4-2, 4-3, 4-5 & 4-6 — njit-ronbrown
The document discusses null spaces, column spaces, and bases of matrices. It begins by defining the null space of a matrix A as the set of all solutions to the homogeneous equation Ax = 0. It then proves that the null space of any matrix is a subspace. Similarly, it defines the column space of A as the set of all linear combinations of A's columns, and proves the column space is always a subspace. The document contrasts the properties of null spaces and column spaces. It also discusses finding bases for null spaces and column spaces. Finally, it covers linear independence, spanning sets, and using pivots to determine bases.
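The null space and column space of a matrix can be explored with NumPy's SVD (an illustrative sketch; the rank-1 matrix below is made up):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])     # rank 1: the rows are dependent

rank = np.linalg.matrix_rank(A)     # dimension of the column space Col A
nullity = A.shape[1] - rank         # dimension of Nul A, by rank-nullity

# A basis for Nul A: right singular vectors beyond the rank.
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[rank:]              # each row n satisfies A @ n ≈ 0
```

In hand computation one instead reads a null-space basis off the free variables of the reduced echelon form and a column-space basis off the pivot columns, as the lecture describes; the SVD route is the numerically robust equivalent.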
This document discusses matrix addition and subtraction. It states that two matrices are equal if they are the same size and have equal corresponding elements. The sum of two matrices is a matrix with elements that are the sums of the corresponding elements. Addition is commutative and associative for matrices of the same size. A zero matrix has all elements equal to zero. The negative of a matrix has elements that are the negatives of the original matrix's elements.
- The document provides an introduction to linear algebra and MATLAB. It discusses various linear algebra concepts like vectors, matrices, tensors, and operations on them.
- It then covers key MATLAB topics - basic data types, vector and matrix operations, control flow, plotting, and writing efficient code.
- The document emphasizes how linear algebra and MATLAB are closely related and commonly used together in applications like image and signal processing.
- Dimensionality reduction techniques assign instances to vectors in a lower-dimensional space while approximately preserving similarity relationships. Principal component analysis (PCA) is a common linear dimensionality reduction technique.
- Kernel PCA performs PCA in a higher-dimensional feature space implicitly defined by a kernel function. This allows PCA to find nonlinear structure in data. Kernel PCA computes the principal components by finding the eigenvectors of the normalized kernel matrix.
- For a new data point, its representation in the lower-dimensional space is given by projecting it onto the principal components in feature space using the kernel trick, without explicitly computing features.
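A NumPy-only sketch of kernel PCA as described above (the polynomial kernel and the sample points are assumptions chosen for illustration, not taken from the document):

```python
import numpy as np

def kernel_pca(X, k, kernel):
    """Kernel PCA sketch: eigendecompose the double-centered kernel matrix
    and return k-dimensional projections of the training points."""
    n = len(X)
    K = np.array([[kernel(x, y) for y in X] for x in X])
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                      # center the data in feature space
    vals, vecs = np.linalg.eigh(Kc)     # eigenvalues in ascending order
    vals, vecs = vals[::-1][:k], vecs[:, ::-1][:, :k]   # keep the top k
    return vecs * np.sqrt(np.maximum(vals, 0))          # projected coordinates

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Z = kernel_pca(X, 2, kernel=lambda x, y: (x @ y + 1) ** 2)  # polynomial kernel
```

The features in the kernel-induced space are never materialized; only kernel evaluations between training points are used, which is the kernel trick the summary refers to.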
Support Vector Machines is the the the the the the the the thesanjaibalajeessn
This document provides an overview of support vector machines (SVMs) and how they can be used for both linear and non-linear classification problems. It explains that SVMs find the optimal separating hyperplane that maximizes the margin between classes. For non-linearly separable data, the document introduces kernel functions, which map the data into a higher-dimensional feature space to allow for nonlinear decision boundaries through the "kernel trick" of computing inner products without explicitly performing the mapping.
This document provides an overview of graphing linear equations. It defines key terms like solutions, intercepts, and linear models. Examples are given to show how to graph equations by finding intercepts or using a table of points. Horizontal and vertical lines are discussed as special cases of linear equations. The document concludes with an example of using a linear equation to model a real-world situation involving monthly phone costs.
Support vector machine in data mining.pdfRubhithaA
1. Support vector machines (SVMs) are a type of machine learning algorithm that learn nonlinear decision boundaries using kernel functions to transform data into higher dimensions.
2. SVMs find the optimal separating hyperplane that maximizes the margin between positive and negative examples. This hyperplane is determined by the support vectors, which are the data points closest to the decision boundary.
3. The SVM optimization problem involves minimizing a loss function subject to constraints. This can be solved using Lagrangian duality, which transforms the problem into an equivalent maximization problem over dual variables instead of the original weights and biases.
The document discusses quadratic equations and functions. It provides objectives of solving quadratic equations by factoring and graphing. It defines the zero of a function as where the graph crosses the x-axis. Examples are given of solving quadratic equations by factoring using the zero product property. Another example solves a quadratic equation graphically. Homework problems from the text are assigned.
5HBC: How to Graph Implicit Relations Intro Packet!A Jorge Garcia
This document discusses five methods for graphing implicit functions on a TI-83 graphing calculator:
1. Using function mode, programming, and Euler's method to graph solutions to a differential equation defined by the implicit function.
2. Using parametric mode and the quadratic formula to solve the implicit function for x as a parametric function of t.
3. Using function mode, solving for x as a function of y, and using DrawInv to graph the inverse relation.
4. Using function mode and the Solve() command to numerically solve the implicit equation for y as a function of x.
5. Using polar mode by rewriting the implicit equation in terms of r and θ and graphing r
S is a ring. It satisfies the properties of a ring:
- It is closed under addition and multiplication.
- Addition and multiplication are both associative.
- Addition has an identity element and inverse elements.
- Addition is commutative.
- Multiplication distributes over addition.
7.curves Further Mathematics Zimbabwe Zimsec Cambridgealproelearning
The document discusses properties of curves defined by functions. It begins by listing objectives for understanding important points on graphs like maxima, minima, and inflection points. It emphasizes using graphing technology to experiment but not substitute for analytical work. Examples are provided to demonstrate finding maximums, minimums, intersections, and asymptotes of various functions. The key points are determining features of a curve from its defining function.
This document provides an agenda and notes for a math class that is reviewing quadratic functions and equations. It includes warm-up problems to solve quadratic equations algebraically and graphically, identifies an upcoming test on sections 10.1-10.3, and provides class work where students must show the steps to graph quadratic functions using a table of values.
APLICACIONES DE ESPACIOS Y SUBESPACIOS VECTORIALES EN LA CARRERA DE ELECTRÓNI...GersonMendoza15
1) The document discusses the applications of vector spaces and subspaces in the field of electronics and automation. It provides examples of how vector spaces are used in areas like engineering modeling, physics, fluid applications, and structural analysis.
2) Vector spaces are the basic objects of linear algebra and are applied in science and engineering. Examples given include electric and electromagnetic fields, modeling fluids as continuous media, and modeling stresses in materials.
3) The theory of vector spaces is fundamental to linear algebra and encompasses other areas like module theory, functional analysis, representation theory, and algebraic geometry. Linear algebra originated in the study of systems of linear equations and evolved to studying matrices and geometric vectors.
Support Vector Machines aim to find an optimal decision boundary that maximizes the margin between different classes of data points. This is achieved by formulating the problem as a constrained optimization problem that seeks to minimize training error while maximizing the margin. The dual formulation results in a quadratic programming problem that can be solved using algorithms like sequential minimal optimization. Kernels allow the data to be implicitly mapped to a higher dimensional feature space, enabling non-linear decision boundaries to be learned. This "kernel trick" avoids explicitly computing coordinates in the higher dimensional space.
This document provides an overview of linear models and matrix algebra concepts that are important for economics. It discusses the objectives of using mathematics for economics, including understanding problems by stating the unknown and known variables. The document then covers key topics in linear algebra like the history of matrices, what matrices are, basic matrix operations, and properties of matrix addition and multiplication. It also introduces concepts like the inverse and transpose of a matrix. Finally, it provides an example of how matrices and vectors can represent systems of linear equations used in economic models.
This document contains a student assignment submission for a course on Perspective in Informatics. It includes the student's responses to 3 questions:
1) Analyzing different functions to determine if they satisfy the properties of a distance measure, including max(x,y), diff(x,y), and sum(x,y).
2) Computing sketches of vectors using different random vectors and analyzing the estimated vs. true angles between the vectors.
3) Calculating the expected Jaccard similarity of two randomly selected subsets R and S of a universe U with n elements and size m.
IRJET- Solving Quadratic Equations using C++ Application ProgramIRJET Journal
1) The document describes a C++ application program developed to solve quadratic equations. The program uses methods like factoring, completing the square, and the quadratic formula to find the solutions.
2) Field testing of the program showed students using it had an average score of 82.8% on a quadratic equations assessment, demonstrating the program's effectiveness.
3) Advantages of using such an application include reducing errors, supporting problem-solving processes, and creating awareness of mathematical concepts. It allows students to easily test conjectures and replay problem-solving steps.
This lesson plan teaches students how to graph linear functions using x-intercepts and y-intercepts. It includes the following:
1) An activity where students name local products on a graph and connect points to form lines representing stores.
2) An explanation of how two points determine a line and how linear equations can be graphed using intercepts. Students practice finding the intercepts of an example equation.
3) An application where students graph equations using given intercepts and an assessment where they graph additional equations and find intercepts of other equations.
Seismic data processing introductory lectureAmin khalil
This document provides a syllabus for a course on seismic data processing. The syllabus outlines topics that will be covered, including the mathematical foundations of Fourier transforms, sampling considerations for seismic time series, basic processing sequences, velocity analysis, filtering and migration techniques, acquisition of seismic data both on land and at sea, 3D seismic data processing, and other advanced topics such as Radon transforms and AVO analysis. References for the course include books on seismic data processing and digital signal processing. The document explains that seismic data processing is important to remove unwanted signals and noise and enhance signal-to-noise ratios, as reflection seismic signals may be obscured by other seismic arrivals like ground roll and direct waves.
1. The document provides information about linear programming including definitions of key terms, steps to solve linear programming problems, examples worked out in detail, and exercises.
2. Linear programming involves optimizing (maximizing or minimizing) an objective function subject to certain constraints. It was first introduced by a Russian mathematician in the 1930s-1940s to optimize resources like manpower and materials during war time.
3. Examples worked out in the document show how to set up the constraints and objective function mathematically based on word problems, sketch the feasible region, find the corner points, and determine the optimal solution that maximizes or minimizes the objective function.
This document contains information about a math class that is reviewing quadratic functions. It includes:
1. An outline of the class agenda which focuses on reviewing key concepts like how the b-value affects the parabola and completing classwork.
2. Details about grading which includes assignments, homework, tests, the final exam, and notebook checks.
3. Sample problems and class notes focused on quadratic functions, including the axis of symmetry, vertex, graphing techniques, and how changing a, b, and c values impacts the parabola.
4. Examples of completing the steps to graph quadratic functions like plotting points and reflecting over the axis of symmetry.
1) The document discusses the past, current, and future of smartphone technology.
2) In the past, "Pen on Projection" technology allowed writing on any surface using a Bluetooth pen and projected screen.
3) Currently, Qualcomm uses fingerprint sensor technology for authentication and security.
4) In the future, Qualcomm will introduce ultrasonic fingerprint sensors that can scan fingerprints through OLED displays of various thicknesses.
The document analyzes electricity consumption at home through K-means clustering and evaluates different cluster validity indices, including the Silhouette score, to determine the optimal number of clusters in the dataset. It performs K-means clustering on a household electricity consumption dataset and compares the results of the Silhouette score and other indices at different values of K to identify the best number of clusters. The analysis aims to help optimize home electricity usage through machine learning clustering techniques.
This document summarizes a master's dissertation that analyzes electricity consumption at home through K-means clustering and silhouette scoring. It contains two papers. Paper 1 analyzes a household electricity usage dataset using K-means clustering to identify the optimal number of clusters, as determined by the Calinski-Harabasz Index, Davis-Boulden index, and silhouette score. Paper 2 performs a similar analysis but with a reduced 1/8 size dataset to compare results. The dissertation concludes that both analyses produce similar silhouette scores even with a smaller dataset.
The document analyzes electricity consumption data from homes using K-means clustering to determine optimal clusters in the data. It evaluates different cluster validity indices like the Calinski-Harabasz Index, Davis-Boulden index, and Silhouette score to find the optimal number of clusters. The analysis is also performed on a reduced 1/8th dataset to see if the results are similar when using less data.
Hyun wong thesis 2019 06_22_rev40_final_grammerlyHyun Wong Choi
The document analyzes electricity consumption at home through K-means clustering and evaluates different cluster validity indices, including the Silhouette score, to determine the optimal number of clusters in the dataset. It performs K-means clustering on a household electricity consumption dataset and compares the results of the Silhouette score and other indices at different values of K to identify the best clustering. The analysis aims to optimize home electricity usage through unsupervised machine learning clustering techniques.
Hyun wong thesis 2019 06_22_rev40_final_Submitted_onlineHyun Wong Choi
The document summarizes a master's dissertation that analyzes electricity consumption at home through K-means clustering and silhouette scoring. It introduces machine learning and clustering techniques. It then describes the experimental environment, dataset used, previous work on related topics, and the proposed approach of applying K-means clustering to analyze the electricity consumption dataset. The key aspects analyzed are the optimal number of clusters determined by indices like Calinski-Harabasz, Davis-Boulden, and silhouette score. Results are compared between the full and 1/8 reduced datasets.
Hyun wong thesis 2019 06_22_rev40_final_printedHyun Wong Choi
This document summarizes a master's dissertation that analyzes electricity consumption at home through k-means clustering. The dissertation contains two papers:
1. The first paper analyzes electricity usage data from homes using k-means clustering to identify optimal clusters of usage patterns. It evaluates different metrics like silhouette score and clustering indices to determine the optimal number of clusters in the data.
2. The second paper performs a comparative analysis using a reduced 1/8th dataset to validate that the silhouette score and optimal number of clusters is similar even with smaller data.
The dissertation applies machine learning clustering techniques to analyze electricity consumption data from homes with the goal of optimizing costs and identifying factors for overcharging.
This master's dissertation analyzes electricity consumption at home through a K-means clustering algorithm and silhouette score. The document contains two papers that analyze a household electricity consumption dataset from the University of California, Irvine using K-means clustering. Paper 1 uses the Calinski-Harabasz Index, Davis-Boulden index, and silhouette score to determine the optimal number of clusters. Paper 2 performs a comparative analysis using a 1/8 subset of the full dataset and finds that the silhouette scores are similar even when using a smaller dataset. The dissertation aims to optimize household electricity usage and costs through machine learning clustering techniques.
This document summarizes a master's dissertation that analyzes electricity consumption at home through K-means clustering and silhouette scoring. The dissertation contains two papers. Paper 1 analyzes household electricity consumption data from UC Irvine using K-means clustering to determine the optimal number of clusters based on silhouette scoring and other indices. The analysis finds seven clusters to be optimal. Paper 2 performs a comparative analysis using a 1/8 subset of the full dataset, finding that silhouette scores are approximately half of the full dataset but the optimal number of clusters is similar. The dissertation concludes that machine learning clustering can effectively analyze electricity consumption patterns and predict optimal clustering even with smaller datasets.
This document appears to be a master's dissertation that analyzes electricity consumption in homes using k-means clustering. It contains chapters that introduce the topic, provide an overview and motivation, describe two papers analyzing electricity consumption data through k-means clustering with silhouette scores to determine optimal cluster numbers, present results, and conclude. The dissertation applies machine learning techniques to optimize home electricity usage by reducing costs and overcharging through clustering and prediction.
This document appears to be a master's dissertation that analyzes electricity consumption in homes using k-means clustering. It contains chapters that introduce the topic, provide an overview and motivation, describe two papers analyzing electricity consumption data through k-means clustering with silhouette scores to determine optimal cluster numbers, present results of experiments on datasets, and conclude with findings. The dissertation aims to optimize home electricity usage through machine learning clustering techniques by reducing costs and overcharging factors while enabling prediction of consumption. It applies k-means clustering to electricity usage data from homes to predict consumption patterns and determine the optimal number of clusters using silhouette scores.
This document appears to be a master's dissertation that analyzes electricity consumption in homes using k-means clustering. It contains chapters that introduce the topic, provide an overview and motivation, describe two papers analyzing electricity consumption data through k-means clustering with silhouette scores to determine optimal cluster numbers, present results of clustering a full and 1/8 sized dataset, and conclude. The dissertation aims to optimize home electricity usage through k-means clustering and determine factors influencing overcharges or costs by analyzing household consumption data.
Andreas Schleicher presents PISA 2022 Volume III - Creative Thinking - 18 Jun...EduSkills OECD
Andreas Schleicher, Director of Education and Skills at the OECD presents at the launch of PISA 2022 Volume III - Creative Minds, Creative Schools on 18 June 2024.
Philippine Edukasyong Pantahanan at Pangkabuhayan (EPP) CurriculumMJDuyan
(𝐓𝐋𝐄 𝟏𝟎𝟎) (𝐋𝐞𝐬𝐬𝐨𝐧 𝟏)-𝐏𝐫𝐞𝐥𝐢𝐦𝐬
𝐃𝐢𝐬𝐜𝐮𝐬𝐬 𝐭𝐡𝐞 𝐄𝐏𝐏 𝐂𝐮𝐫𝐫𝐢𝐜𝐮𝐥𝐮𝐦 𝐢𝐧 𝐭𝐡𝐞 𝐏𝐡𝐢𝐥𝐢𝐩𝐩𝐢𝐧𝐞𝐬:
- Understand the goals and objectives of the Edukasyong Pantahanan at Pangkabuhayan (EPP) curriculum, recognizing its importance in fostering practical life skills and values among students. Students will also be able to identify the key components and subjects covered, such as agriculture, home economics, industrial arts, and information and communication technology.
𝐄𝐱𝐩𝐥𝐚𝐢𝐧 𝐭𝐡𝐞 𝐍𝐚𝐭𝐮𝐫𝐞 𝐚𝐧𝐝 𝐒𝐜𝐨𝐩𝐞 𝐨𝐟 𝐚𝐧 𝐄𝐧𝐭𝐫𝐞𝐩𝐫𝐞𝐧𝐞𝐮𝐫:
-Define entrepreneurship, distinguishing it from general business activities by emphasizing its focus on innovation, risk-taking, and value creation. Students will describe the characteristics and traits of successful entrepreneurs, including their roles and responsibilities, and discuss the broader economic and social impacts of entrepreneurial activities on both local and global scales.
Gender and Mental Health - Counselling and Family Therapy Applications and In...PsychoTech Services
A proprietary approach developed by bringing together the best of learning theories from Psychology, design principles from the world of visualization, and pedagogical methods from over a decade of training experience, that enables you to: Learn better, faster!
THE SACRIFICE HOW PRO-PALESTINE PROTESTS STUDENTS ARE SACRIFICING TO CHANGE T...indexPub
The recent surge in pro-Palestine student activism has prompted significant responses from universities, ranging from negotiations and divestment commitments to increased transparency about investments in companies supporting the war on Gaza. This activism has led to the cessation of student encampments but also highlighted the substantial sacrifices made by students, including academic disruptions and personal risks. The primary drivers of these protests are poor university administration, lack of transparency, and inadequate communication between officials and students. This study examines the profound emotional, psychological, and professional impacts on students engaged in pro-Palestine protests, focusing on Generation Z's (Gen-Z) activism dynamics. This paper explores the significant sacrifices made by these students and even the professors supporting the pro-Palestine movement, with a focus on recent global movements. Through an in-depth analysis of printed and electronic media, the study examines the impacts of these sacrifices on the academic and personal lives of those involved. The paper highlights examples from various universities, demonstrating student activism's long-term and short-term effects, including disciplinary actions, social backlash, and career implications. The researchers also explore the broader implications of student sacrifices. The findings reveal that these sacrifices are driven by a profound commitment to justice and human rights, and are influenced by the increasing availability of information, peer interactions, and personal convictions. The study also discusses the broader implications of this activism, comparing it to historical precedents and assessing its potential to influence policy and public opinion. The emotional and psychological toll on student activists is significant, but their sense of purpose and community support mitigates some of these challenges. 
However, the researchers call for acknowledging the broader Impact of these sacrifices on the future global movement of FreePalestine.
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: Regeneration and Repair
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing like infection, hyperpigmentation of scar, contractures, and keloid formation.
How to Download & Install Module From the Odoo App Store in Odoo 17Celine George
Custom modules offer the flexibility to extend Odoo's capabilities, address unique requirements, and optimize workflows to align seamlessly with your organization's processes. By leveraging custom modules, businesses can unlock greater efficiency, productivity, and innovation, empowering them to stay competitive in today's dynamic market landscape. In this tutorial, we'll guide you step by step on how to easily download and install modules from the Odoo App Store.
How to Download & Install Module From the Odoo App Store in Odoo 17
Cg 04-math
1. Basic Math for Computer Graphics
Sungkil Lee
Sungkyunkwan University
2. Objectives
• Much of graphics is just translating math directly into code.
– The cleaner the math, the cleaner the resulting code.
– Also, clean code often results in better performance.
• In this lecture, we review various tools from high school and college
mathematics.
• This chapter is not intended to be a rigorous treatment of the material;
instead, intuition and geometric interpretation are emphasized.
Basic Math for Computer Graphics. Sungkil Lee. 1/44
3. Sets and Mappings
• Sets
– a is a member of set S
a ∈ S
– Cartesian product of two sets: given any two sets A and B,
A × B = {(a, b)|a ∈ A and b ∈ B}
∗ As a shorthand, we use the notation A² to denote A × A.
4. Sets and Mappings
• Common sets of interest include:
– R: the real numbers
– R⁺: the non-negative real numbers (includes zero)
– R²: the ordered pairs in the real 2D plane
– Rⁿ: the points in n-dimensional Cartesian space
– Z: the integers
5. Sets and Mappings
• Mappings (also called functions)
f : R → Z
– “There is a function called f that takes a real number as input and
maps it to an integer.”
– equivalent to the common programming notation:
integer f(real) ↔ f : R → Z
“There is a function called f which has one real argument and returns
an integer.”
6. Intervals
• Three common ways to represent an interval:
– inequality notation: a < x ≤ b
– interval notation: (a, b]
– a number-line diagram with an open endpoint at a and a filled endpoint at b
Each denotes the interval from a to b that includes b but not a.
8. Logarithms
• Every logarithm has a base a.
“log base a” of x is written
loga x
• The exponent to which a must be raised to get x
y = loga x ⇔ a^y = x
9. Logarithms
• Several consequences:
– a^(loga x) = x
– loga a^x = x loga a = x
– loga xy = loga x + loga y
– loga x/y = loga x − loga y
– loga x = loga b logb x
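These identities can be spot-checked numerically; a small Python sketch (not part of the slides; the sample values a, b, x, y are arbitrary):

```python
import math

a, b, x, y = 2.0, 10.0, 8.0, 4.0

assert math.isclose(a ** math.log(x, a), x)           # a^(loga x) = x
assert math.isclose(math.log(a ** x, a), x)           # loga a^x = x
assert math.isclose(math.log(x * y, a),
                    math.log(x, a) + math.log(y, a))  # loga xy = loga x + loga y
assert math.isclose(math.log(x / y, a),
                    math.log(x, a) - math.log(y, a))  # loga x/y = loga x - loga y
assert math.isclose(math.log(x, a),
                    math.log(b, a) * math.log(x, b))  # change of base
```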
10. Logarithms
• Natural logarithm
– The special number e = 2.718...
– The logarithm with base e is called the natural logarithm
ln x ≡ loge x
∗ Note that the “≡” symbol can be read “is equivalent by definition.”
11. Logarithms
• The derivatives of logarithms and exponents illuminate why the natural
logarithm is “natural”:
d/dx (loga x) = 1/(x ln a)
d/dx (a^x) = a^x ln a
The constant multipliers above are unity only for a = e.
d/dx (ln x) = 1/(x ln e) = 1/x
d/dx (e^x) = e^x ln e = e^x
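The claim that the constant multipliers become unity only for a = e can be checked with finite differences; a Python sketch (the helper numderiv and the sample points are ours, not the slides'):

```python
import math

def numderiv(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

a, x = 2.0, 3.0
# d/dx loga x = 1/(x ln a)
assert math.isclose(numderiv(lambda t: math.log(t, a), x),
                    1.0 / (x * math.log(a)), rel_tol=1e-5)
# d/dx a^x = a^x ln a
assert math.isclose(numderiv(lambda t: a ** t, x),
                    a ** x * math.log(a), rel_tol=1e-5)
# for a = e both multipliers are 1: d/dx e^x = e^x
assert math.isclose(numderiv(math.exp, x), math.exp(x), rel_tol=1e-5)
```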
12. Trigonometry
• The conversion between degrees and radians:
degrees = (180/π) · radians
radians = (π/180) · degrees
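The two conversions in code (a sketch; the function names are ours, and Python's standard library offers the same conversions as math.degrees and math.radians):

```python
import math

def to_degrees(radians):
    # degrees = (180/pi) * radians
    return radians * 180.0 / math.pi

def to_radians(degrees):
    # radians = (pi/180) * degrees
    return degrees * math.pi / 180.0

assert math.isclose(to_degrees(math.pi), 180.0)
assert math.isclose(to_radians(90.0), math.pi / 2)
assert math.isclose(to_degrees(1.0), math.degrees(1.0))
```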
13. Trigonometry
• Trigonometric functions
– Pythagorean theorem: a² + o² = h²
sin φ ≡ o/h csc φ ≡ h/o
cos φ ≡ a/h sec φ ≡ h/a
tan φ ≡ o/a cot φ ≡ a/o
14. Trigonometry
• The functions are not invertible when considered with the domain R.
This problem can be avoided by restricting the range of standard inverse
functions, and this is done in a standard way in almost all modern math
libraries.
• The domains and ranges are:
arcsin (asin) : [−1, 1] → [−π/2, π/2]
arccos (acos) : [−1, 1] → [0, π]
arctan (atan) : R → (−π/2, π/2)
arctan2 (atan2) : R² → [−π, π]
15. Trigonometry
• atan2(s, c) is often very useful in graphics: it takes an s value
proportional to sin A and a c value that scales cos A by the same factor,
and returns A.
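A minimal sketch of that property (the angle A and scale r are arbitrary test values of ours):

```python
import math

A = 2.5                      # an angle in (-pi, pi]
r = 3.0                      # a common scale factor
s, c = r * math.sin(A), r * math.cos(A)

# atan2 recovers A from any equally scaled (sin, cos) pair
assert math.isclose(math.atan2(s, c), A)

# plain atan cannot: its range is only (-pi/2, pi/2)
assert not math.isclose(math.atan(s / c), A)
```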
16. Vector Spaces (in Algebraic Math)
• A vector space over a field F is a set V together with addition and multiplication that satisfy the eight
axioms. Elements of V and F are called vectors and scalars.
Axiom Meaning
Associativity of addition u + (v + w) = (u + v) + w
Commutativity of addition u + v = v + u
Identity element of addition There exists an element 0 ∈ V such that v + 0 = v for all v ∈ V .
Inverse elements of addition For every v ∈ V , there exists an element −v ∈ V such that v + (−v) = 0.
Distributivity of scalar multiplication a(u + v) = au + av
with respect to vector addition
Distributivity of scalar multiplication (a + b)v = av + bv
with respect to field addition
Compatibility of scalar multiplication a(bv) = (ab)v
with field multiplication
Identity element of scalar multiplication 1v = v, where 1 denotes the multiplicative identity in F .
17. Vector Spaces (in Algebraic Math)
• A vector space over a field F is a set V together with addition and multiplication that satisfy the eight
axioms. Elements of V and F are called vectors and scalars.
• Mathematical structures related to the concept of a field can be tracked as follows:
- A field is a ring whose nonzero elements form an abelian group under multiplication.
- A ring is an abelian group under addition and a semigroup under multiplication; addition is commutative,
addition and multiplication are associative, multiplication distributes over addition, each element in the
set has an additive inverse, and there exists an additive identity.
- An abelian group (commutative group) is a group in which commutativity (a · b = b · a) is satisfied.
- A semigroup is a set A in which a · b satisfies associativity for any two elements a and b and operator ·.
- A group is a set A in which a · b satisfies closure, associativity, identity element, and inverse element for
any two elements a and b and operator ·.
18. Vectors (Simply)
• A quantity that encompasses a length and a direction.
• Represented by an arrow and not as coordinates or numbers
• Length: written ‖a‖
• A unit vector: Any vector whose length is one.
• The zero vector: the vector of zero length. The direction of the zero
vector is undefined.
19. Vectors (Simply)
• Two vectors are added by arranging them head to tail. This can be done
in either order.
a + b = b + a
• Vectors can be used to store an offset, also called a displacement, and a
location (or position) that is a displacement from the origin.
20. Cartesian Coordinates of a Vector
• Vectors v1, v2, . . . , vn in an n-D vector space are said to be linearly
independent iff
a1v1 + a2v2 + · · · + anvn = 0
has only the trivial solution (a1 = a2 = · · · = an = 0).
– The vectors are thus referred to as basis vectors.
– For example, a 2D vector c may be expressed as a combination of two
basis vectors a and b:
c = ac a + bc b,
where ac and bc are the Cartesian coordinates of the vector c with
respect to the basis vectors {a, b}.
21. Cartesian Coordinates of a Vector
• Assuming vectors, x and y, are orthonormal,
a = xa x + ya y
the length of a is
‖a‖ = √(xa² + ya²)
By convention we write the coordinates of a either as an ordered pair
(xa, ya) or as a column matrix:
a = [xa ya]ᵀ
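The length formula translates directly to code; a sketch using a tuple of Cartesian coordinates as the vector representation (our convention, not the slides'):

```python
import math

def length(v):
    # Euclidean length: sqrt of the sum of squared coordinates
    return math.sqrt(sum(c * c for c in v))

a = (3.0, 4.0)
assert math.isclose(length(a), 5.0)  # sqrt(3^2 + 4^2)
```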
22. Dot Product
• The simplest way to multiply two vectors (also called the scalar product).
a · b = ‖a‖ ‖b‖ cos φ
• The projection of one vector onto another: the length a→b of the
projection of a onto b is
a→b = ‖a‖ cos φ = (a · b) / ‖b‖
23. Dot Product
• If 2D vectors a and b are expressed in Cartesian coordinates,
a · b = (xa x + ya y) · (xb x + yb y)
= xa xb (x · x) + xa yb (x · y) + ya xb (y · x) + ya yb (y · y)
= xa xb + ya yb
• Similarly, in 3D,
a · b = xa xb + ya yb + za zb
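A sketch combining both dot-product slides: the coordinate formula for a · b, and recovering the angle φ from a · b = ‖a‖‖b‖ cos φ (the helper names are ours):

```python
import math

def dot(a, b):
    # coordinate form: sum of products of corresponding components
    return sum(x * y for x, y in zip(a, b))

def length(v):
    # |v| = sqrt(v . v)
    return math.sqrt(dot(v, v))

a, b = (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)
assert dot(a, b) == 1.0

# a . b = |a||b| cos(phi)  =>  phi = acos(a.b / (|a||b|))
phi = math.acos(dot(a, b) / (length(a) * length(b)))
assert math.isclose(phi, math.pi / 4)
```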
24. Cross Product
‖a × b‖ = ‖a‖ ‖b‖ sin φ
• By definition the unit vectors in the positive direction of the x−, y− and
z−axes are given by
x = (1, 0, 0),
y = (0, 1, 0),
z = (0, 0, 1),
and the cross product x × y must be in the plus or minus z direction. We
set as a (right-handed) convention that
z = x × y
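A short Python sketch (an illustration, not from the slides) of the 3D cross product in Cartesian coordinates; by the convention above, x × y comes out as z.

```python
# Cross product of two 3D vectors given as (x, y, z) tuples.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

x = (1.0, 0.0, 0.0)
y = (0.0, 1.0, 0.0)
print(cross(x, y))  # (0.0, 0.0, 1.0), i.e. z
print(cross(y, x))  # (0.0, 0.0, -1.0): the cross product anti-commutes
```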
25. Cross Product
• The “right-hand rule”:
Imagine placing the base of your right palm where a and b join at their
tails, and pushing the arrow of a toward b. Your extended right thumb
should point toward a × b.
26. 2D Implicit Curves
• Curve: A set of points that can be drawn on a piece of paper without
lifting the pen.
• A common way to describe a curve is using an implicit equation.
f(x, y) = 0
f(x, y) = (x − xc)² + (y − yc)² − r²
• The points (x, y) where f(x, y) = 0 lie on the circle with center
(xc, yc) and radius r.
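As a small sketch (not from the slides), evaluating the implicit function classifies a point relative to the circle: f < 0 inside, f = 0 on the curve, f > 0 outside.

```python
# Implicit circle: f(x, y) = (x - xc)^2 + (y - yc)^2 - r^2
def circle_f(x, y, xc, yc, r):
    return (x - xc) ** 2 + (y - yc) ** 2 - r ** 2

# Unit circle at the origin:
print(circle_f(1.0, 0.0, 0.0, 0.0, 1.0))  # 0.0   -> on the circle
print(circle_f(0.5, 0.0, 0.0, 0.0, 1.0))  # -0.75 -> inside (f < 0)
print(circle_f(2.0, 0.0, 0.0, 0.0, 1.0))  # 3.0   -> outside (f > 0)
```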
27. 2D Implicit Curves
• “Implicit” equation: the points (x, y) on the curve cannot be read off
directly from the equation; instead, a point must be tested by plugging
it into f and checking whether the result is zero, or found by solving
the equation.
• The curve partitions space into regions where f > 0, f < 0, and f = 0.
28. The 2D Gradient
• If the function f(x, y) is a height field with height = f(x, y), the gradient
vector points in the direction of maximum upslope, i.e., straight uphill.
The gradient vector ∇f(x, y) is given by
∇f(x, y) = (∂f/∂x, ∂f/∂y)
• The gradient vector evaluated at a point on the implicit curve f(x, y) = 0
is perpendicular to the tangent vector of the curve at that point. This
perpendicular vector is usually called the normal vector to the curve.
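A small sketch (an illustration, not from the slides) using the circle from the previous slide: for f(x, y) = x² + y² − r², the gradient ∇f = (2x, 2y) points radially outward, and its dot product with the tangent direction (−y, x) at a point on the curve is zero.

```python
# Gradient of f(x, y) = x^2 + y^2 - r^2 (the r^2 term has zero gradient).
def grad_circle(x, y):
    return (2.0 * x, 2.0 * y)

px, py = 0.6, 0.8                 # a point on the unit circle (0.36 + 0.64 = 1)
g = grad_circle(px, py)           # normal direction at (px, py)
t = (-py, px)                     # tangent direction at (px, py)
print(g[0] * t[0] + g[1] * t[1])  # 0.0: the normal is perpendicular to the tangent
```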
29. The 2D Gradient
• The derivative of a 1D function measures the slope of the line tangent
to the curve.
• If we hold y constant, we can define an analog of the derivative, called
the partial derivative:
∂f/∂x ≡ lim_{∆x→0} [f(x + ∆x, y) − f(x, y)] / ∆x
30. Implicit 2D Lines
• The familiar “slope-intercept” form of the line is
y = mx + b
This can be converted easily to implicit form:
y − mx − b = 0,
which matches the general implicit line
Ax + By + C = 0
with A = −m, B = 1, and C = −b.
31. Implicit 2D Lines
• The gradient vector (A, B) is perpendicular to the implicit line Ax +
By + C = 0
32. 2D Parametric Curves
• A parametric curve: controlled by a single parameter, t,
(x, y) = (g(t), h(t))
Vector form:
p = f(t)
where f is a vector-valued function f : R → R²
33. 2D Parametric Lines
• A parametric line in 2D that passes through points p0 = (x0, y0) and
p1 = (x1, y1) can be written
(x, y) = (x0 + t(x1 − x0), y0 + t(y1 − y0))
Because the formulas for x and y have such similar structure, we can use
the vector form for p = (x, y):
p(t) = p0 + t(p1 − p0)
Parametric lines can also be described by just a point o and a direction
vector d:
p(t) = o + td
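A minimal sketch (not from the slides) of the parametric line p(t) = p0 + t(p1 − p0): at t = 0 it returns p0, at t = 1 it returns p1, and t ∈ [0, 1] traces the segment between them.

```python
# Evaluate the parametric line through p0 and p1 at parameter t.
def line_point(p0, p1, t):
    return (p0[0] + t * (p1[0] - p0[0]),
            p0[1] + t * (p1[1] - p0[1]))

p0, p1 = (0.0, 0.0), (4.0, 2.0)
print(line_point(p0, p1, 0.0))  # (0.0, 0.0): the start point p0
print(line_point(p0, p1, 0.5))  # (2.0, 1.0): the midpoint
print(line_point(p0, p1, 1.0))  # (4.0, 2.0): the end point p1
```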
34. 2D Parametric Lines
• A 2D parametric line through p0 and p1. The line segment defined by
t ∈ [0, 1] is shown in bold.
35. Linear Interpolation
• Linear interpolation is perhaps the most common mathematical operation in graphics.
• Example of linear interpolation: Position to form line segments in 2D
and 3D
p = (1 − t)a + tb
– interpolation: p goes through a and b exactly at t = 0 and t = 1
– linear interpolation: the weighting terms t and 1 − t are linear
polynomials of t
36. Linear Interpolation
• Example of linear interpolation: A set of positions on the x-axis:
x0, x1, ..., xn and for each xi we have an associated height, yi.
– We seek a continuous function y = f(x) that interpolates these positions,
so that f goes through every data point, i.e., f(xi) = yi.
– For linear interpolation, the points (xi, yi) are connected by straight
line segments.
– It is natural to use parametric line equations for these segments. The
parameter t is just the fractional distance between xi and xi+1:
f(x) = yi + ((x − xi) / (xi+1 − xi)) (yi+1 − yi)
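A sketch of this piecewise linear interpolation in Python (an illustration, not from the slides); it assumes xs is sorted and x lies within the data range.

```python
# Piecewise linear interpolation through data points (xs[i], ys[i]) using
# f(x) = y_i + (x - x_i)/(x_{i+1} - x_i) * (y_{i+1} - y_i) on each segment.
def interp(xs, ys, x):
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])  # fractional distance in [0, 1]
            return ys[i] + t * (ys[i + 1] - ys[i])
    raise ValueError("x outside the data range")

xs = [0.0, 1.0, 3.0]
ys = [0.0, 2.0, 0.0]
print(interp(xs, ys, 0.5))  # 1.0: halfway up the first segment
print(interp(xs, ys, 2.0))  # 1.0: halfway down the second segment
print(interp(xs, ys, 1.0))  # 2.0: goes through the data point exactly
```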
37. Linear Interpolation
• In the common form of linear interpolation, we create a variable t that
varies from 0 to 1 as we move from data item A to data item B; intermediate
values are given by the function (1 − t)A + tB. For the piecewise-linear f
above,
t = (x − xi) / (xi+1 − xi)
38. 2D Triangles
• With an origin and basis vectors, any point p can be written:
p = (1 − β − γ)a + βb + γc
α ≡ 1 − β − γ
p(α, β, γ) = αa + βb + γc
with the constraint that
α + β + γ = 1
• A particularly nice feature of barycentric coordinates is that a point p is
inside the triangle formed by a, b, and c if and only if
0 < α < 1,
0 < β < 1,
0 < γ < 1.
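As an illustrative sketch (not from the slides), the barycentric coordinates (α, β, γ) of p can be computed by solving the 2×2 system p − a = β(b − a) + γ(c − a), and the inside test is then the condition above.

```python
# Barycentric coordinates of p in the triangle (a, b, c) via Cramer's rule
# on p - a = beta*(b - a) + gamma*(c - a).
def barycentric(a, b, c, p):
    u = (b[0] - a[0], b[1] - a[1])
    v = (c[0] - a[0], c[1] - a[1])
    w = (p[0] - a[0], p[1] - a[1])
    det = u[0] * v[1] - u[1] * v[0]  # nonzero for a non-degenerate triangle
    beta = (w[0] * v[1] - w[1] * v[0]) / det
    gamma = (u[0] * w[1] - u[1] * w[0]) / det
    return 1.0 - beta - gamma, beta, gamma  # (alpha, beta, gamma)

def inside(a, b, c, p):
    alpha, beta, gamma = barycentric(a, b, c, p)
    return 0.0 < alpha < 1.0 and 0.0 < beta < 1.0 and 0.0 < gamma < 1.0

a, b, c = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
print(barycentric(a, b, c, (0.25, 0.25)))  # (0.5, 0.25, 0.25)
print(inside(a, b, c, (0.25, 0.25)))       # True: strictly inside
print(inside(a, b, c, (1.0, 1.0)))         # False: outside the triangle
```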
39. 2D Triangles
• A 2D triangle (vertices a, b, c) can be used to set up a non-orthogonal
coordinate system with origin a and basis vectors (b − a) and (c − a).
A point is then represented by an ordered pair (β, γ). For example, the
point p = (2.0, 0.5), i.e., p = a + 2.0(b − a) + 0.5(c − a).
40. 2D Triangles
• A 2D triangle is defined by 2D points a, b, and c. Its area is

area = (1/2) |(b − a) × (c − a)|

     = (1/2) | xb − xa   xc − xa |
             | yb − ya   yc − ya |

     = (1/2) ((xb − xa)(yc − ya) − (xc − xa)(yb − ya))

     = (1/2) (xa yb + xb yc + xc ya − xa yc − xb ya − xc yb)
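A minimal Python sketch (not from the slides) of the determinant form above; the 2×2 determinant gives a signed area (positive for counterclockwise vertex order), so we take its absolute value.

```python
# Area of the 2D triangle (a, b, c) from the 2x2 determinant of edge vectors.
def triangle_area(a, b, c):
    det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    return 0.5 * abs(det)  # abs(): the determinant is a signed area

print(triangle_area((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)))  # 0.5
print(triangle_area((0.0, 0.0), (4.0, 0.0), (0.0, 3.0)))  # 6.0
```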
41. 3D Triangles
• Barycentric coordinates extend almost transparently to 3D. If we assume
the points a, b, and c are 3D,
p = (1 − β − γ)a + βb + γc
• The normal vector can be found by taking the cross product of two edge
vectors:
n = (b − a) × (c − a)
42. 3D Triangles
This normal vector is not necessarily of unit length, and it obeys the
right-hand rule of cross products.
• The area of the triangle is half the length of the cross product:
area = (1/2) ‖(b − a) × (c − a)‖
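As a sketch (not from the slides), the following Python computes both the unnormalized normal n = (b − a) × (c − a) and the 3D triangle area as half its length.

```python
import math

def sub(u, v):
    return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def triangle_normal(a, b, c):
    # Unnormalized normal, following the right-hand rule of cross products.
    return cross(sub(b, a), sub(c, a))

def triangle_area3d(a, b, c):
    # Area = half the length of the cross product of two edge vectors.
    n = triangle_normal(a, b, c)
    return 0.5 * math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)

a, b, c = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
print(triangle_normal(a, b, c))  # (0.0, 0.0, 1.0): +z by the right-hand rule
print(triangle_area3d(a, b, c))  # 0.5
```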
43. Vector Operations in Matrix Form
• We can use matrix formalism to encode vector operations for vectors in
Cartesian coordinates; if we consider the result of the dot product to be
a one-by-one matrix, it can be written:
a · b = aᵀb
• If we take two 3D vectors we get:
           [xb]
[xa ya za] [yb] = [xa xb + ya yb + za zb]
           [zb]
44. Matrices and Determinants
• The determinant in 2D is the signed area of the parallelogram formed by
the vectors. We can use matrices to handle the mechanics of computing
determinants.
| a  A |   | a  b |
| b  B | = | A  B | = aB − Ab

• For example, the determinant of a particular 3×3 matrix:

| 0  1  2 |       | 4  5 |       | 5  3 |       | 3  4 |
| 3  4  5 | = 0 · | 7  8 | + 1 · | 8  6 | + 2 · | 6  7 |
| 6  7  8 |

= 0(32 − 35) + 1(30 − 24) + 2(21 − 24)
= 0
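A short Python sketch (not from the slides) of the cofactor expansion along the first row, reproducing the example above.

```python
# 2x2 determinant: | a  b |
#                  | c  d | = ad - bc
def det2(a, b, c, d):
    return a * d - b * c

# 3x3 determinant by cofactor expansion along the first row.
def det3(m):
    return (m[0][0] * det2(m[1][1], m[1][2], m[2][1], m[2][2])
          - m[0][1] * det2(m[1][0], m[1][2], m[2][0], m[2][2])
          + m[0][2] * det2(m[1][0], m[1][1], m[2][0], m[2][1]))

m = [[0, 1, 2],
     [3, 4, 5],
     [6, 7, 8]]
print(det3(m))  # 0: the rows are linearly dependent
```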