1. R Matrix Math Quick Reference
Mark Niemann-Ross
2021-05-14
2. About
About This Document
This document provides brief explanations of the matrix functions available in Base
R and the Matrix package.
Related Material
This Quick Reference is supplemental to courses on LinkedIn Learning. A quick
reference to plotting functions in R is here. A quick reference to clustering functions
in R is here. An index to all R functions covered at LinkedIn Learning is found here.
The latest version of this quick reference is found here. The source to this document
is found on github/mnr. This document is available as Free Software under the
terms of the Free Software Foundation’s GNU General Public License.
About Mark Niemann-Ross
Mark is an author for LinkedIn Learning and writes science fiction.
3. Quick Reference to Matrix Math Functions for the R
programming language
This Quick Reference is supplemental to courses on LinkedIn Learning.
The latest version of this quick reference is found here
A quick reference to plotting functions in base R is here
A quick reference to clustering functions in R is here
An index to all R functions covered at LinkedIn Learning is found here
The Matrix package is recommended for advanced operations.
9. Multiplication - “dot product”
Instructional video about %*%
A %*% B
## [,1] [,2] [,3]
## [1,] 150 186 222
## [2,] 186 231 276
## [3,] 222 276 330
10. Division
HA - just kidding. There is no matrix division. Instead, use the inverse.
$AX = B$ is solved for $X$ with $X = A^{-1}B$
…lots more on inverse below…
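In code, the same idea looks like this. The matrices below are hypothetical, chosen for this sketch because the A and B used elsewhere in this document are singular and have no inverse:

```r
# Hypothetical invertible matrix for illustration
A <- matrix(c(2, 1, 1, 3), nrow = 2)
B <- matrix(c(5, 10, 6, 12), nrow = 2)

X1 <- solve(A) %*% B # X = A^-1 B, via the explicit inverse
X2 <- solve(A, B)    # same answer in one step

all.equal(X1, X2)    # TRUE
```

solve(A, B) avoids forming the inverse explicitly, which is both faster and more numerically stable than solve(A) %*% B.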
11. The determinant of a matrix
Instructional video on det() and determinant()
The determinant of $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is $ad - bc$
sampleMatrix <- matrix(c(10,12,5,30), nrow = 2)
sampleMatrix
## [,1] [,2]
## [1,] 10 5
## [2,] 12 30
det(sampleMatrix)
## [1] 240
…which is equivalent to 𝑎𝑑 − 𝑏𝑐 … which is written as…
sampleMatrix[1,1]*sampleMatrix[2,2] - sampleMatrix[1,2]*sampleMatrix[2,1]
13. determinant() vs det()
Instructional video on det() and determinant()
determinant() produces a list with $modulus and $sign
determinant(sampleMatrix)
## $modulus
## [1] 5.480639
## attr(,"logarithm")
## [1] TRUE
##
## $sign
## [1] 1
##
## attr(,"class")
## [1] "det"
determinant(sampleMatrix, logarithm = FALSE)
## [2,] TRUE FALSE FALSE
## [3,] TRUE TRUE FALSE
myMatrix <- A
myMatrix[upper.tri(myMatrix)] <- NA
myMatrix
## [,1] [,2] [,3]
## [1,] 1 NA NA
## [2,] 2 5 NA
## [3,] 3 6 9
20. Matrix Comparison
There are two ways to compare matrices. First, create two matrices to compare.
partiallyA <- A
partiallyA[upper.tri(partiallyA)] <- 1 # upper triangle becomes different
Simple matrix comparison, using equality
A == partiallyA # returns logical value for each element
## [,1] [,2] [,3]
## [1,] TRUE FALSE FALSE
## [2,] TRUE TRUE FALSE
## [3,] TRUE TRUE TRUE
Object comparison for exact equality
identical(partiallyA, A) # returns a single logical value for the entire matrix
22. Matrix transposition
Instructional video on t()
NonSymmetricMatrix <- matrix(c(1:10), nrow = 5)
NonSymmetricMatrix # show what it looks like
## [,1] [,2]
## [1,] 1 6
## [2,] 2 7
## [3,] 3 8
## [4,] 4 9
## [5,] 5 10
t(NonSymmetricMatrix) # t() for transpose...show what transpose looks like
## [,1] [,2] [,3] [,4] [,5]
## [1,] 1 2 3 4 5
## [2,] 6 7 8 9 10
23. Distance between points
dist() has options to choose how distance is calculated and what type of matrix is
produced. See the instructional video for details.
mymatrix <- matrix( c(1:6), nrow = 3)
mymatrix
## [,1] [,2]
## [1,] 1 4
## [2,] 2 5
## [3,] 3 6
dist(mymatrix, method = "euclidean" )
## 1 2
## 2 1.414214
## 3 2.828427 1.414214
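Each entry of that dist() output is the ordinary Euclidean formula applied to a pair of rows. A minimal check for rows 1 and 2:

```r
mymatrix <- matrix(c(1:6), nrow = 3)
# Euclidean distance between rows 1 and 2, computed by hand
d12 <- sqrt(sum((mymatrix[1, ] - mymatrix[2, ])^2))
d12 # 1.414214, matching the corresponding entry of dist(mymatrix)
```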
24. Build a symmetric matrix
There are packages with tools for this. Here’s how to do it with base R.
A + t(A)
## [,1] [,2] [,3]
## [1,] 2 6 10
## [2,] 6 10 14
## [3,] 10 14 18
26. Test for symmetric matrix
Instructional video on isSymmetric()
isSymmetric(A) # not symmetric
## [1] FALSE
isSymmetric(A + t(A)) # symmetric
## [1] TRUE
isSymmetric(A - t(A)) # skew symmetric
## [1] FALSE
27. Inner product of two vectors
Simple dot product … aka $vec_1^T vec_2$
vec1 <- c(1:3)
vec2 <- c(1:3)
vec1 # what do these look like?
## [1] 1 2 3
vec2
## [1] 1 2 3
t(vec1) %*% vec2 # inner product
## [,1]
## [1,] 14
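Since both arguments are plain vectors, the same inner product can be computed with element-wise multiplication and sum(), which returns a scalar rather than a 1 x 1 matrix:

```r
vec1 <- c(1:3)
vec2 <- c(1:3)
sum(vec1 * vec2) # 14 -- same value as t(vec1) %*% vec2, but a plain number
```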
28. Outer product of two vectors
Instructional video on outer() and %o%
Transpose the second vector … $vec_1 vec_2^T$
These three versions produce the same result…
vec1 %*% t(vec2) # the actual formula
## [,1] [,2] [,3]
## [1,] 1 2 3
## [2,] 2 4 6
## [3,] 3 6 9
outer(vec1, vec2) # the R function
## [,1] [,2] [,3]
## [1,] 1 2 3
## [2,] 2 4 6
## [3,] 3 6 9
vec1 %o% vec2 # %o% is a wrapper for outer()
## [,1] [,2] [,3]
## [1,] 1 2 3
## [2,] 2 4 6
## [3,] 3 6 9
30. Outer product of two matrices
First…a reminder of the contents of matrix A and B
A
## [,1] [,2] [,3]
## [1,] 1 4 7
## [2,] 2 5 8
## [3,] 3 6 9
B
## [,1] [,2] [,3]
## [1,] 11 14 17
## [2,] 12 15 18
## [3,] 13 16 19
Then…here’s the outer product. This multiplies the first matrix by individual values
from the second matrix.
Think of this as…
A * B[1,1]
A * B[2,1]
…and so on
outer(A,B) # ... A %o% B will produce the same result
## , , 1, 1
##
## [,1] [,2] [,3]
## [1,] 11 44 77
## [2,] 22 55 88
## [3,] 33 66 99
##
## , , 2, 1
##
## [,1] [,2] [,3]
## [1,] 12 48 84
## [2,] 24 60 96
## [3,] 36 72 108
##
## , , 3, 1
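A quick way to confirm the slice interpretation above: slice [, , i, j] of the outer product equals A scaled by B[i, j]. A minimal check:

```r
A <- matrix(c(1:9), nrow = 3)
B <- matrix(c(11:19), nrow = 3)
o <- outer(A, B)
all.equal(o[, , 1, 1], A * B[1, 1]) # TRUE
all.equal(o[, , 2, 1], A * B[2, 1]) # TRUE
```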
37. Inverse matrix
Instructional video about solving for an inverse matrix
$AA^{-1} = I$ … $A^{-1}$ is the inverse of matrix $A$; $I$ is the identity matrix
solve(partiallyA) # solve() finds the inverse of a matrix
## [,1] [,2] [,3]
## [1,] 1.8571429 -0.1428571 -0.19047619
## [2,] -0.7142857 0.2857143 0.04761905
## [3,] -0.1428571 -0.1428571 0.14285714
Note: Some matrices are singular and return an error: Lapack routine dgesv: system is
exactly singular: U[3,3] = 0
Also note: solve() can be used in other ways. Refer to documentation.
38. Permutations
n <- 3 # for an n x n matrix
factorial(n) # provides the number of permutations
## [1] 6
Base R doesn’t have a function to generate all possible permutations, but there are
packages that will do this.
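For small n, a recursive generator can be sketched in base R. This perms() helper is written for this note only, not a base R or package function; for real work, packages such as combinat or gtools provide tested versions:

```r
# Sketch (assumed helper, not base R): all permutations of 1:n, one per row
perms <- function(n) {
  if (n == 1) return(matrix(1L, nrow = 1))
  sub <- perms(n - 1)                 # permutations of 1:(n-1)
  do.call(rbind, lapply(1:n, function(k) {
    rest <- (1:n)[-k]                 # the values other than k
    cbind(k, matrix(rest[sub], nrow = nrow(sub)))
  }))
}
nrow(perms(3)) # 6, matching factorial(3)
```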
39. backsolve
Instructional video about forward and backward solve for a matrix
backsolve() solves a triangular system of linear equations, such as one produced by
Gaussian elimination.
Start with…
$$\begin{aligned} -3x_1 + 2x_2 - x_3 &= -1 \\ -2x_2 + 5x_3 &= -9 \\ -2x_3 &= 2 \end{aligned}$$
# r holds a matrix of coefficients for the system to be solved
r <- matrix(c(-3, 2, -1,
0, -2, 5,
0, 0, -2),
nrow = 3, byrow = TRUE)
# x is a column matrix with the solutions
x <- matrix(c(-1, -9, 2), ncol = 1)
# backsolve produces the values of x.
backsolve(r, x)
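To sanity-check the result, multiply r by the solution that backsolve() returns; this should reproduce the right-hand side:

```r
r <- matrix(c(-3, 2, -1,
              0, -2, 5,
              0, 0, -2),
            nrow = 3, byrow = TRUE)
x <- matrix(c(-1, -9, 2), ncol = 1)
y <- backsolve(r, x) # solves r %*% y = x
r %*% y              # reproduces x: -1, -9, 2
```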
41. forwardsolve
Instructional video about forward and backward solve for a matrix
forwardsolve() uses the lower triangular matrix (backsolve() uses the upper
triangular matrix). For example…
$$\begin{aligned} 2 &= -2x_3 \\ -9 &= 5x_3 - 2x_2 \\ -1 &= -x_3 + 2x_2 - 3x_1 \end{aligned}$$
# r holds a matrix of coefficients for the system to be solved
l <- matrix(c(-2, 0, 0,
5, -2, 0,
-1, 2, -3),
nrow = 3, byrow = TRUE)
# x is a column matrix with the solutions
x <- matrix(c(2, -9, -1), ncol = 1)
# forwardsolve produces the values of x.
forwardsolve(l, x)
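The same sanity check works here: multiply l by the solution that forwardsolve() returns, which should reproduce the right-hand side:

```r
l <- matrix(c(-2, 0, 0,
              5, -2, 0,
              -1, 2, -3),
            nrow = 3, byrow = TRUE)
x <- matrix(c(2, -9, -1), ncol = 1)
y <- forwardsolve(l, x) # solves l %*% y = x
l %*% y                 # reproduces x: 2, -9, -1
```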
46. crossprod
Instructional video about crossprod()
crossprod(A, B) computes t(A) %*% B in one step; don’t confuse it with a plain
matrix product via %*%. Regarding when to use crossprod() vs t(A) %*% B: one may
be faster than the other depending on whether the matrix is sparse, but this will
only make a difference on massive matrices.
crossprod(A,B)
## [,1] [,2] [,3]
## [1,] 74 92 110
## [2,] 182 227 272
## [3,] 290 362 434
…which is equivalent to…
t(A) %*% B
## [,1] [,2] [,3]
## [1,] 74 92 110
49. eigenvalues, eigenvectors
Instructional video about eigenvalues and eigenvectors
eigen() returns both eigenvalues and eigenvectors
Here’s a matrix…
A
## [,1] [,2] [,3]
## [1,] 1 4 7
## [2,] 2 5 8
## [3,] 3 6 9
Here are the eigenvalues and eigenvectors
eigen(A)
## eigen() decomposition
## $values
## [1] 1.611684e+01 -1.116844e+00 -5.700691e-16