Comparing Least Squares Calculations
Douglas Bates
R Development Core Team
Douglas.Bates@R-project.org
September 3, 2012
Abstract
Many statistical methods require one or more least squares problems to be
solved. There are several ways to perform this calculation, using objects from
the base R system and using objects in the classes defined in the Matrix
package.
We compare the speed of some of these methods on a very small example and on
an example for which the model matrix is large and sparse.
1 Linear least squares calculations
Many statistical techniques require least squares solutions

    \hat\beta = \arg\min_\beta \| y - X\beta \|^2                          (1)
where X is an n × p model matrix (p ≤ n), y is n-dimensional and β is
p-dimensional. Most statistics texts state that the solution to (1) is

    \hat\beta = (X^T X)^{-1} X^T y                                         (2)

when X has full column rank (i.e. the columns of X are linearly independent),
and all too frequently it is calculated in exactly this way.
1.1 A small example
As an example, let’s create a model matrix, m, and corresponding response
vector, yo, for a simple linear regression model using the Formaldehyde data.
> data(Formaldehyde)
> str(Formaldehyde)
'data.frame': 6 obs. of 2 variables:
$ carb : num 0.1 0.3 0.5 0.6 0.7 0.9
$ optden: num 0.086 0.269 0.446 0.538 0.626 0.782
> (m <- cbind(1, Formaldehyde$carb))
[,1] [,2]
[1,] 1 0.1
[2,] 1 0.3
[3,] 1 0.5
[4,] 1 0.6
[5,] 1 0.7
[6,] 1 0.9
> (yo <- Formaldehyde$optden)
[1] 0.086 0.269 0.446 0.538 0.626 0.782
Using t to evaluate the transpose, solve to take an inverse, and the %*%
operator for matrix multiplication, we can translate (2) into the S language as
> solve(t(m) %*% m) %*% t(m) %*% yo
[,1]
[1,] 0.005085714
[2,] 0.876285714
On modern computers this calculation is performed so quickly that it cannot
be timed accurately in R.¹
> system.time(solve(t(m) %*% m) %*% t(m) %*% yo)
user system elapsed
0 0 0
and it provides essentially the same results as the standard lm.fit function that
is called by lm.
> dput(c(solve(t(m) %*% m) %*% t(m) %*% yo))
c(0.00508571428571428, 0.876285714285715)
> dput(unname(lm.fit(m, yo)$coefficients))
c(0.00508571428571408, 0.876285714285715)
¹ From R version 2.2.0, system.time() has the default argument gcFirst = TRUE,
which is assumed and relevant for all subsequent timings.
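For comparison, the same coefficients are produced by the formula interface of
lm itself. A minimal sketch using the Formaldehyde data loaded above (lm calls
the QR-based lm.fit internally):

## same simple linear regression fit, via the formula interface
coef(lm(optden ~ carb, data = Formaldehyde))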
1.2 A large example
For a large, ill-conditioned least squares problem, such as that described in
Koenker and Ng (2003), the literal translation of (2) does not perform well.
> library(Matrix)
> data(KNex, package = "Matrix")
> y <- KNex$y
> mm <- as(KNex$mm, "matrix") # full traditional matrix
> dim(mm)
[1] 1850 712
> system.time(naive.sol <- solve(t(mm) %*% mm) %*% t(mm) %*% y)
user system elapsed
3.682 0.014 3.718
Because the calculation of a “cross-product” matrix, such as X^T X or X^T y,
is a common operation in statistics, the crossprod function has been provided
to do this efficiently. In the single-argument form crossprod(mm) calculates
X^T X, taking advantage of the symmetry of the product. That is, instead of
calculating the 712^2 = 506944 elements of X^T X separately, it only calculates
the (712 · 713)/2 = 253828 elements in the upper triangle and replicates them
in the lower triangle. Furthermore, there is no need to calculate the inverse
of a matrix explicitly when solving a linear system of equations. When the
two-argument form of the solve function is used, the linear system

    X^T X \beta = X^T y                                                    (3)

is solved directly.
Combining these optimizations we obtain
> system.time(cpod.sol <- solve(crossprod(mm), crossprod(mm,y)))
user system elapsed
0.989 0.007 1.002
> all.equal(naive.sol, cpod.sol)
[1] TRUE
On this computer (2.0 GHz Pentium-4, 1 GB Memory, Goto’s BLAS, in Spring
2004) the crossprod form of the calculation is about four times as fast as
the naive calculation. In fact, the entire crossprod solution is faster than
simply calculating X^T X the naive way.
> system.time(t(mm) %*% mm)
user system elapsed
1.840 0.001 1.854
Note that in newer versions of R and the BLAS library (as of summer 2007),
R’s %*% is able to detect the many zeros in mm and shortcut many operations,
and is hence much faster for such a sparse matrix than crossprod, which
currently does not make use of such optimizations. This is not the case when R
is linked against an optimized BLAS library such as GOTO or ATLAS. Also, for
fully dense matrices, crossprod() indeed remains faster (typically by a factor
of two) independently of the BLAS library:
> fm <- mm
> set.seed(11)
> fm[] <- rnorm(length(fm))
> system.time(c1 <- t(fm) %*% fm)
user system elapsed
1.922 0.002 1.937
> system.time(c2 <- crossprod(fm))
user system elapsed
0.890 0.000 0.896
> stopifnot(all.equal(c1, c2, tol = 1e-12))
1.3 Least squares calculations with Matrix classes
The crossprod function applied to a single matrix takes advantage of symmetry
when calculating the product but does not retain the information that the
product is symmetric (and positive semidefinite). As a result, the solution of
(3) is performed using a general linear system solver based on an LU
decomposition, when it would be faster, and more stable numerically, to use a
Cholesky decomposition. The Cholesky decomposition could be used, but it is
rather awkward:
> system.time(ch <- chol(crossprod(mm)))
user system elapsed
0.965 0.000 0.972
> system.time(chol.sol <-
+ backsolve(ch, forwardsolve(ch, crossprod(mm, y),
+ upper = TRUE, trans = TRUE)))
user system elapsed
0.012 0.000 0.012
> stopifnot(all.equal(chol.sol, naive.sol))
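The pair of triangular solves can also be written with backsolve alone, using
its transpose argument; a minimal equivalent sketch, assuming ch and mm are
still the objects defined above:

## solve R'z = X'y and then R b = z, where R = ch is the upper-triangular
## Cholesky factor of X'X; equivalent to the forwardsolve/backsolve pair above
backsolve(ch, backsolve(ch, crossprod(mm, y), transpose = TRUE))

Either way one has to keep track of which triangle ch represents, which is the
awkwardness that the Matrix classes described next remove.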
The Matrix package uses the S4 class system (Chambers, 1998) to retain
information on the structure of matrices from the intermediate calculations.
A general matrix in dense storage, created by the Matrix function, has class
"dgeMatrix" but its cross-product has class "dpoMatrix". The solve methods
for the "dpoMatrix" class use the Cholesky decomposition.
> mm <- as(KNex$mm, "dgeMatrix")
> class(crossprod(mm))
[1] "dpoMatrix"
attr(,"package")
[1] "Matrix"
> system.time(Mat.sol <- solve(crossprod(mm), crossprod(mm, y)))
user system elapsed
0.962 0.000 0.967
> stopifnot(all.equal(naive.sol, unname(as(Mat.sol,"matrix"))))
Furthermore, any method that calculates a decomposition or factorization
stores the resulting factorization with the original object so that it can be reused
without recalculation.
> xpx <- crossprod(mm)
> xpy <- crossprod(mm, y)
> system.time(solve(xpx, xpy))
user system elapsed
0.096 0.000 0.097
> system.time(solve(xpx, xpy)) # reusing factorization
user system elapsed
0.001 0.000 0.001
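The cached decomposition is kept in the factors slot of the object; a minimal
sketch for inspecting it (the slot is an internal detail of the Matrix package,
so its exact contents may differ between versions):

## which factorizations have been cached with xpx after the first solve()
names(xpx@factors)
## str(xpx@factors, max.level = 1)   # a closer look, if desired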
The model matrix mm is sparse; that is, most of the elements of mm are zero.
The Matrix package incorporates special methods for sparse matrices, which
produce the fastest results of all.
> mm <- KNex$mm
> class(mm)
[1] "dgCMatrix"
attr(,"package")
[1] "Matrix"
> system.time(sparse.sol <- solve(crossprod(mm), crossprod(mm, y)))
user system elapsed
0.006 0.000 0.005
> stopifnot(all.equal(naive.sol, unname(as(sparse.sol, "matrix"))))
As with other classes in the Matrix package, the "dsCMatrix" class retains any
factorization that has been calculated, although in this case the decomposition
is so fast that it is difficult to determine the difference in the solution
times.
> xpx <- crossprod(mm)
> xpy <- crossprod(mm, y)
> system.time(solve(xpx, xpy))
user system elapsed
0.002 0.000 0.002
> system.time(solve(xpx, xpy))
user system elapsed
0.001 0.000 0.000
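To summarize the comparison, the three main approaches can be re-run back to
back. A minimal sketch, not part of the timings above, that uses fresh copies
of the model matrix so that no cached factorization is reused:

## fresh copies: a dense base-R matrix and the sparse "dgCMatrix"
Xd <- as(KNex$mm, "matrix")
Xs <- KNex$mm
y  <- KNex$y
## each row gives user.self, sys.self and elapsed times for one approach
rbind(naive  = system.time(solve(t(Xd) %*% Xd) %*% t(Xd) %*% y),
      cross  = system.time(solve(crossprod(Xd), crossprod(Xd, y))),
      sparse = system.time(solve(crossprod(Xs), crossprod(Xs, y))))[, 1:3]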
Session Info
> toLatex(sessionInfo())
• R version 2.15.1 Patched (2012-09-01 r60539),
x86_64-unknown-linux-gnu
• Locale: LC_CTYPE=de_CH.UTF-8, LC_NUMERIC=C, LC_TIME=en_US.UTF-8,
LC_COLLATE=C, LC_MONETARY=en_US.UTF-8, LC_MESSAGES=de_CH.UTF-8,
LC_PAPER=C, LC_NAME=C, LC_ADDRESS=C, LC_TELEPHONE=C,
LC_MEASUREMENT=de_CH.UTF-8, LC_IDENTIFICATION=C
• Base packages: base, datasets, grDevices, graphics, methods, stats, tools,
utils
• Other packages: Matrix 1.0-9, lattice 0.20-10
• Loaded via a namespace (and not attached): grid 2.15.1
> if(identical(1L, grep("linux", R.version[["os"]]))) { ## Linux - only ---
+ Scpu <- sfsmisc::Sys.procinfo("/proc/cpuinfo")
+ Smem <- sfsmisc::Sys.procinfo("/proc/meminfo")
+ print(Scpu[c("model name", "cpu MHz", "cache size", "bogomips")])
+ print(Smem[c("MemTotal", "SwapTotal")])
+ }
_
model name AMD Phenom(tm) II X4 925 Processor
cpu MHz 800.000
cache size 512 KB
bogomips 5599.95
_
MemTotal 7920288 kB
SwapTotal 16777212 kB
References
John M. Chambers. Programming with Data. Springer, New York, 1998. ISBN
0-387-98503-4.
Roger Koenker and Pin Ng. SparseM: A sparse matrix package for R. Journal of
Statistical Software, 8(6), 2003.