Linear Algebra and its Applications 357 (2002) 163–171
www.elsevier.com/locate/laa
A PLU-factorization of rectangular matrices by the Neville elimination
M. Gassó, Juan R. Torregrosa∗
Departamento de Matemática Aplicada, Universidad Politécnica de Valencia, 46071 Valencia, Spain
Received 13 December 2001; accepted 2 April 2002
Submitted by D. Hershkowitz
Abstract
In this paper we prove that Neville elimination can be matricially described by elementary matrices. A PLU-factorization is obtained for any n × m matrix, where P is a permutation matrix, L is a lower triangular matrix (product of bidiagonal factors) and U is an upper triangular matrix. This result generalizes the Neville factorization usually applied to characterize the totally positive matrices. We prove that this elimination procedure is an alternative to Gaussian elimination and sometimes provides a lower computational cost.
© 2002 Elsevier Science Inc. All rights reserved.
Keywords: Totally positive matrix; Neville elimination; Factorization LU
1. Introduction
A real matrix is called totally positive if all its minors are nonnegative. These matrices have become increasingly important in approximation theory and other fields.
For a comprehensive survey of this subject from an algebraic point of view, complete
with historical references, see [1].
Fiedler and Markham obtained, in [2,3], a factorization for totally nonsingular matrices that satisfy the consecutive-column (CC) and consecutive-row (CR) properties, using the Neville elimination. From this elimination process Gasca and Peña obtained an LU factorization for matrices satisfying the without-row-exchange (WR) condition. In both cases the Neville elimination can be performed without row exchanges.
Research supported by Spanish DGI grant number BFM2001-0081-C03-02.
∗ Corresponding author. Tel.: +34-96-3877668; fax: +34-96-3877199.
E-mail addresses: mgasso@mat.upv.es (M. Gassó), jrtorre@math.upv.es (J.R. Torregrosa).
In this paper, taking into account that there exist totally positive matrices that do not satisfy condition WR, we generalize some of these results to matrices that do not satisfy conditions CC, CR or WR and are not necessarily regular matrices.
In Section 2 we recall the Neville elimination process and in Section 3 we obtain a matricial description of this process for matrices of size n × m. We obtain a PLU factorization of a matrix of size n × m, where P is a permutation matrix of size n × n, L is a lower triangular matrix (product of bidiagonal factors) of size n × n and U is an upper triangular matrix of size n × m.
Finally, from the remarks of Gasca and Peña in [5], we define a class of matrices for which Neville elimination has a lower computational cost than Gaussian elimination.
2. Neville elimination
The essence of Neville elimination is to produce zeros in a column of a matrix by adding to each row an appropriate multiple of the previous one (instead of using a fixed row with a fixed pivot as in Gaussian elimination). Eventual reorderings of the rows of the matrix may be necessary.
More precisely, we recall the Neville elimination process [4] for any n × m matrix $A = (a_{ij})$. Let $\bar{A}_1 := (\bar{a}^1_{ij})$ be such that $\bar{a}^1_{ij} = a_{ij}$, $1 \le i \le n$ and $1 \le j \le m$. If there are zeros in the first column of $\bar{A}_1$, we carry the corresponding rows down to the bottom in such a way that the relative order among them is the same as in $\bar{A}_1$. We denote the new matrix by $A_1$. If we have not carried any row down to the bottom, then $A_1 := \bar{A}_1$. In both cases, let $i_1 := 1$. The method consists of constructing a finite sequence of matrices $A_k$ such that, for each $A_k$, the submatrix formed by its $k - 1$ initial columns is an upper echelon form matrix. In fact, if $A_t = (a^t_{ij})$ then we introduce zeros in its $t$th column below the position $(i_t, t)$, thus forming the n × m matrix
$$\bar{A}_{t+1} = (\bar{a}^{t+1}_{ij}),$$
where, for any $j$ such that $1 \le j \le m$, we have
$$\bar{a}^{t+1}_{ij} := a^t_{ij}, \quad i = 1, 2, \ldots, i_t,$$
$$\bar{a}^{t+1}_{ij} := a^t_{ij} - \frac{a^t_{i,t}}{a^t_{i-1,t}}\, a^t_{i-1,j} \quad \text{if } a^t_{i-1,t} \ne 0,\ i_t < i \le n,$$
$$\bar{a}^{t+1}_{ij} := a^t_{ij} \quad \text{if } a^t_{i-1,t} = 0,\ i_t < i \le n.$$
Observe that with our assumptions, $a^t_{i-1,t} = 0$ implies $a^t_{i,t} = 0$. Then we define
$$i_{t+1} := \begin{cases} i_t & \text{if } a^t_{i_t,t}\ (= \bar{a}^{t+1}_{i_t,t}) = 0, \\ i_t + 1 & \text{if } a^t_{i_t,t}\ (= \bar{a}^{t+1}_{i_t,t}) \ne 0. \end{cases}$$
If $\bar{A}_{t+1}$ has zeros in the $(t+1)$th column in the row $i_{t+1}$ or below it, we will carry these rows down as we have done with $\bar{A}_1$. The matrix obtained in this way will be denoted by $A_{t+1} = (a^{t+1}_{ij})$. Of course, if there is no row that has been carried down, then $A_{t+1} := \bar{A}_{t+1}$. After a finite number of steps we get $\bar{A}_{\bar{t}-1}$, $A_{\bar{t}-1}$, and
$$\bar{A}_{\bar{t}} = U \quad (\bar{t} \le m + 1),$$
where U is an upper echelon form matrix. In this process the element
$$p_{ij} := a^j_{ij}, \quad 1 \le j \le m,\ i_j \le i \le n,$$
is called the $(i, j)$ pivot of the Neville elimination of A and the number
$$m_{ij} := \begin{cases} a^j_{ij}/a^j_{i-1,j} & \text{if } a^j_{i-1,j} \ne 0, \\ 0 & \text{if } a^j_{i-1,j} = 0, \end{cases} \qquad 1 \le j \le m,\ i_j < i \le n,$$
the $(i, j)$ multiplier of the Neville elimination of A. We observe that $m_{ij} = 0$ if and only if $a^j_{ij} = 0$.
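The procedure just described translates directly into a short numerical routine. The sketch below is our own illustration, not part of the paper; the function name `neville_elimination`, the NumPy implementation and the tolerance parameter are assumptions of ours. It performs the row carrying and the previous-row eliminations of this section and records the multipliers $m_{ij}$:

```python
import numpy as np

def neville_elimination(A, tol=1e-12):
    """Sketch of Neville elimination for an n x m matrix (0-based indices).

    Returns the upper echelon form U and the multipliers mult[i, j].
    Rows whose entry in the current column is zero are carried to the
    bottom, preserving their relative order, as described above.
    """
    U = np.array(A, dtype=float)
    n, m = U.shape
    mult = np.zeros((n, m))              # multipliers m_{ij}
    it = 0                               # row of the current pivot
    for t in range(m):                   # column being processed
        if it >= n - 1:
            break
        # carry the rows of U[it:] with a zero entry in column t to the bottom
        block = U[it:, :]
        zero = np.abs(block[:, t]) < tol
        U[it:, :] = np.vstack([block[~zero], block[zero]])
        # produce zeros below (it, t); going bottom-up uses the *old* previous row
        for i in range(n - 1, it, -1):
            if abs(U[i - 1, t]) > tol:
                mult[i, t] = U[i, t] / U[i - 1, t]
                U[i, :] -= mult[i, t] * U[i - 1, :]
            # if U[i-1, t] == 0 then U[i, t] == 0 too and row i is left unchanged
        if abs(U[it, t]) > tol:          # i_{t+1} = i_t + 1 when the pivot is nonzero
            it += 1
    return U, mult
```

For instance, for the 3 × 3 matrix of Example 4.3 below, `neville_elimination([[1, 1, 1], [1, 1, 1], [0, 1, 1]])` reproduces the upper echelon form shown there, the single row exchange being realized by the carrying step.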
3. Matricial description of Neville elimination
The Neville elimination process, for an n × m matrix, can be matricially described by elementary and permutation matrices. We shall use notations similar to [5]. Let $E_{ij}(\alpha)$, $1 \le i \ne j \le n$, be the elementary triangular matrix whose $(r, s)$ entry, $1 \le r, s \le n$, is given by
$$\begin{cases} 1 & \text{if } r = s, \\ \alpha & \text{if } (r, s) = (i, j), \\ 0 & \text{elsewhere.} \end{cases}$$
Note that if i > j the matrix $E_{ij}(\alpha)$ is a lower triangular matrix. We are interested in the matrices $E_{i+1,i}(\alpha)$, which for simplicity will be denoted, as in [5], by $E_{i+1}(\alpha)$. They are bidiagonal and lower triangular, and given by
$$E_{i+1}(\alpha) = \begin{pmatrix}
1 & & & & & \\
 & \ddots & & & & \\
 & & 1 & & & \\
 & & \alpha & 1 & & \\
 & & & & \ddots & \\
 & & & & & 1
\end{pmatrix},$$
with the entry $\alpha$ in position $(i+1, i)$.
We denote by $P_{ij}$ the elementary permutation matrix whose $(r, s)$ entry, $1 \le r, s \le n$, is given by
$$\begin{cases} 1 & \text{if } r = s,\ r \ne i, j, \\ 1 & \text{if } (r, s) = (i, j), \\ 1 & \text{if } (r, s) = (j, i), \\ 0 & \text{otherwise.} \end{cases}$$
So they are given explicitly by
$$P_{ij} = \begin{pmatrix}
1 & & & & & & \\
 & \ddots & & & & & \\
 & & 0 & & 1 & & \\
 & & & \ddots & & & \\
 & & 1 & & 0 & & \\
 & & & & & \ddots & \\
 & & & & & & 1
\end{pmatrix},$$
that is, $P_{ij}$ is the identity matrix with rows i and j interchanged.
To obtain a matricial description of Neville elimination, we define the following matrix.

Definition 3.1. From the matrices $P_{ij}$ we denote by $\Pi_j$ the following permutation matrix:
$$\Pi_j = P_{n-1,n-2} \cdots P_{j+1,j} P_{j,n}. \qquad (1)$$
Note that the product $\Pi_j A$ carries row j down to the last position, leaving the relative order of the remaining rows of the matrix A unchanged.
The following lemma describes elementary properties of the matrices $\Pi_j$.

Lemma 3.1. Every matrix $\Pi_j$ satisfies the following properties:
(i) $\Pi_j$ is a nonsingular matrix and $\Pi_j^{-1} = P_{j,n} P_{j+1,j} \cdots P_{n-1,n-2}$.
(ii) Let A be a matrix of size n × m. If $\bar{f}_i$ denotes its ith row, $i = 1, 2, \ldots, n$, then
$$\Pi_j A = \begin{pmatrix} \bar{f}_1 \\ \vdots \\ \bar{f}_{j-1} \\ \bar{f}_{j+1} \\ \vdots \\ \bar{f}_n \\ \bar{f}_j \end{pmatrix}.$$
(iii) $\Pi_{n-1} = P_{n,n-1}$.
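The matrices $E_{ij}(\alpha)$, $P_{ij}$ and $\Pi_j$ are straightforward to generate explicitly. The sketch below uses our own helper names (`E`, `P`, `Pi`, with 1-based indices to match the paper's notation) and checks the behaviour described in Lemma 3.1 on a small example:

```python
import numpy as np

def E(n, i, j, alpha):
    """Elementary matrix E_ij(alpha): identity plus alpha in position (i, j), 1-based."""
    M = np.eye(n)
    M[i - 1, j - 1] = alpha
    return M

def P(n, i, j):
    """Elementary permutation matrix P_ij interchanging rows i and j, 1-based."""
    M = np.eye(n)
    M[[i - 1, j - 1]] = M[[j - 1, i - 1]]
    return M

def Pi(n, j):
    """Permutation matrix Pi_j = P_{n-1,n-2} ... P_{j+1,j} P_{j,n} of Definition 3.1."""
    M = P(n, j, n)
    for k in range(j + 1, n):        # left-multiply P_{j+1,j}, ..., P_{n-1,n-2} in turn
        M = P(n, k, k - 1) @ M
    return M

# Pi_j A carries row j to the last position, keeping the order of the other rows (Lemma 3.1(ii)):
A = np.arange(20, dtype=float).reshape(5, 4)
assert np.array_equal(Pi(5, 2) @ A, np.vstack([A[[0]], A[2:], A[[1]]]))
# and Pi_{n-1} = P_{n,n-1}  (Lemma 3.1(iii)):
assert np.array_equal(Pi(5, 4), P(5, 5, 4))
```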
For a matrix A, of size n × m, the Neville elimination process can be written in
the following way:
If there are zeros in the first column of $\bar{A}_1 = A$, the corresponding rows are carried down to the bottom in such a way that the relative order among them is the same as in $\bar{A}_1$. The new matrix is denoted by $A_1$, that is,
$$A_1 = \Pi^1_{n-1} \cdots \Pi^1_1 \bar{A}_1,$$
where
$$\Pi^1_j = I_n \ \text{if } \bar{a}^1_{j1} \ne 0, \qquad \Pi^1_j = \Pi_j \ \text{if } \bar{a}^1_{j1} = 0.$$
In the next step we make zeros in the first column, and we obtain
$$\bar{A}_2 = E_n(-m_{n1}) \cdots E_2(-m_{21}) A_1.$$
Again, if there are zeros in the second column of $\bar{A}_2$, the corresponding rows are carried down to the bottom so that the relative order among them is the same as in $\bar{A}_2$. The new matrix is denoted by $A_2$, that is,
$$A_2 = \Pi^2_{n-1} \cdots \Pi^2_1 \bar{A}_2 = \Pi^2_{n-1} \cdots \Pi^2_1\, E_n(-m_{n1}) \cdots E_2(-m_{21})\, \Pi^1_{n-1} \cdots \Pi^1_1\, A.$$
In general, the method consists of constructing a finite sequence of matrices as follows:
$$A = \bar{A}_1 \longrightarrow A_1 \longrightarrow \bar{A}_2 \longrightarrow A_2 \longrightarrow \cdots \longrightarrow \bar{A}_n \longrightarrow U,$$
where U is an upper echelon form matrix. We observe that $\bar{A}_i = A_i$ when there are no row exchanges, and
$$\Pi^i_j = I_n \ \text{if } \bar{a}^i_{ji} \ne 0, \qquad \Pi^i_j = \Pi_j \ \text{if } \bar{a}^i_{ji} = 0. \qquad (2)$$
As in [5], we denote by $F_i$ the following lower triangular matrix:
$$F_i = \begin{pmatrix}
1 & & & & & \\
0 & 1 & & & & \\
\vdots & & \ddots & & & \\
0 & \cdots & & 1 & & \\
 & & & -m_{i+1,i} & 1 & \\
 & & & & \ddots & \ddots \\
 & & & & & -m_{ni} & 1
\end{pmatrix} = E_{i+1,i}(-m_{i+1,i}) \cdots E_{n,n-1}(-m_{ni})$$
and
$$K_i = \Pi^i_{n-1} \cdots \Pi^i_i,$$
where $\Pi^i_j$ is defined in (2). So we can establish the following result.
Theorem 3.1. Let A be a matrix of size n × m. The Neville elimination process for A can be described as
$$F_{n-1} K_{n-1} \cdots F_2 K_2 F_1 K_1 A = U,$$
where $K_i$, $i = 1, 2, \ldots, n - 1$, is a permutation matrix and $F_i$, $i = 1, 2, \ldots, n - 1$, is a lower triangular matrix.
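A small numerical sketch can make the structure of Theorem 3.1 concrete. The code below is our own illustration (the function name `neville_factors`, the tolerance and the assumption n − 1 ≤ m, so that exactly n − 1 steps are taken, are ours): it records at each step the permutation $K_t$ that carries the zero rows down and the bidiagonal factor $F_t$ holding the multipliers, and then checks the product form on the matrix of Example 4.1 below.

```python
import numpy as np

def neville_factors(A, tol=1e-12):
    """Record the factors K_t and F_t of Theorem 3.1 (0-based, n - 1 <= m assumed)."""
    W = np.array(A, dtype=float)
    n, m = W.shape
    Ks, Fs = [], []
    it = 0                                        # current pivot row
    for t in range(n - 1):
        # K_t carries the rows of W[it:] with a zero entry in column t to the bottom
        idx = (list(range(it))
               + [i for i in range(it, n) if abs(W[i, t]) > tol]
               + [i for i in range(it, n) if abs(W[i, t]) <= tol])
        K = np.eye(n)[idx]
        W = K @ W
        # F_t is bidiagonal: -m_{i,t} in position (i, i-1) for the rows below the pivot
        F = np.eye(n)
        for i in range(n - 1, it, -1):
            if abs(W[i - 1, t]) > tol:
                F[i, i - 1] = -W[i, t] / W[i - 1, t]
        W = F @ W
        Ks.append(K)
        Fs.append(F)
        if abs(W[it, t]) > tol:
            it += 1
    return Ks, Fs, W                              # W is the upper echelon form U

A = np.array([[1, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0], [1, 0, 0, 1]], dtype=float)
Ks, Fs, U = neville_factors(A)
T = np.eye(4)
for K, F in zip(Ks, Fs):                          # accumulate F_{n-1} K_{n-1} ... F_1 K_1
    T = F @ K @ T
assert np.allclose(T @ A, U)
```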
4. PLU decomposition of a matrix A

If A is a matrix of size n × m, we are interested in rearranging, in an adequate way, the matrices $F_i$ and $K_i$ in order to obtain a PLU decomposition of A, where P is a permutation matrix of size n × n, L is a lower triangular matrix of size n × n and U is an upper echelon form matrix of size n × m.
We can observe that the elementary matrices $P_{ij}$ and $E_{lm}(\alpha)$ do not commute in general, but it is easy to prove the following properties.
Lemma 4.1. For any matrices $E_{ij}(\alpha)$ and $\Pi_k$ we have the following statements:
(a) If k > i, then $\Pi_k E_{ij}(\alpha) = E_{ij}(\alpha) \Pi_k$.
(b) If k = i, then $\Pi_i E_{ij}(\alpha) = E_{nj}(\alpha) \Pi_i$.
(c) If k < i, we distinguish:
(c1) If k = j, then $\Pi_j E_{ij}(\alpha) = E_{i-1,n}(\alpha) \Pi_j$.
(c2) If k ≠ j, we have
(∗) for j < k < i, $\Pi_k E_{ij}(\alpha) = E_{i-1,j}(\alpha) \Pi_k$,
(∗) for j > k, $\Pi_k E_{ij}(\alpha) = E_{i-1,j-1}(\alpha) \Pi_k$.

Note that in practice we do not need the case k = j, because we apply the matrix $E_{ij}(\alpha)$ when the (i, j) position is a pivot of the Neville elimination and therefore we never have to use the matrix $\Pi_j$. So, for i > j, $\Pi_k E_{ij}(\alpha) = E_{lm}(\alpha) \Pi_k$ with l > m.
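The commutation rules of Lemma 4.1 can be checked numerically with the helpers `E`, `P` and `Pi` sketched after Lemma 3.1 (with `Pi(n, k)` playing the role of $\Pi_k$); the indices and the value of $\alpha$ below are arbitrary choices of ours.

```python
import numpy as np
# assumes the helpers E(n, i, j, alpha), P(n, i, j) and Pi(n, j) defined earlier
n, a = 6, 2.5
# (a) k > i:      Pi_k E_ij = E_ij Pi_k
assert np.allclose(Pi(n, 5) @ E(n, 4, 2, a), E(n, 4, 2, a) @ Pi(n, 5))
# (b) k = i:      Pi_i E_ij = E_nj Pi_i
assert np.allclose(Pi(n, 4) @ E(n, 4, 2, a), E(n, n, 2, a) @ Pi(n, 4))
# (c1) k = j:     Pi_j E_ij = E_{i-1,n} Pi_j
assert np.allclose(Pi(n, 2) @ E(n, 4, 2, a), E(n, 3, n, a) @ Pi(n, 2))
# (c2) j < k < i: Pi_k E_ij = E_{i-1,j} Pi_k
assert np.allclose(Pi(n, 3) @ E(n, 4, 2, a), E(n, 3, 2, a) @ Pi(n, 3))
# (c2) k < j:     Pi_k E_ij = E_{i-1,j-1} Pi_k
assert np.allclose(Pi(n, 1) @ E(n, 4, 2, a), E(n, 3, 1, a) @ Pi(n, 1))
```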
Therefore we can rearrange the matrices $\Pi_k$ and $E_{i,i-1}(\alpha)$ in order to obtain the desired PLU decomposition. We obtain the following result.

Theorem 4.1. For a matrix A of size n × m, the Neville elimination process can be described in the following way:
$$E_{i_1 j_1}(\alpha_1) \cdots E_{i_n j_n}(\alpha_n) K_{n-1} \cdots K_1 A = U,$$
where $E_{i_p j_p}(\cdot)$, $p = 1, 2, \ldots, n$, $i_p > j_p$, is a lower triangular matrix, $K_i$, $i = 1, 2, \ldots, n - 1$, is a permutation matrix and U is an n × m upper echelon form matrix.
Proof. If we apply Lemma 4.1 to Theorem 3.1 we obtain this result.
Taking into account that $E_{ij}(\alpha)^{-1} = E_{ij}(-\alpha)$ and that $K_i^{-1}$ is also a permutation matrix, it is easy to prove the following result.
Corollary 4.1. A matrix A of size n × m can be factorized as A = PLU, where P is a permutation matrix of size n × n, L is a lower triangular matrix of size n × n and U is an upper echelon form matrix of size n × m.
Proof. According to Theorem 4.1,
$$A = (K_{n-1} \cdots K_1)^{-1} (E_{i_1 j_1}(\alpha_1) \cdots E_{i_n j_n}(\alpha_n))^{-1} U = K_1^{-1} \cdots K_{n-1}^{-1} E_{i_n j_n}(-\alpha_n) \cdots E_{i_1 j_1}(-\alpha_1) U,$$
and
$$L = E_{i_n j_n}(-\alpha_n) \cdots E_{i_1 j_1}(-\alpha_1), \qquad P = K_1^{-1} \cdots K_{n-1}^{-1},$$
where L is a lower triangular matrix and P is a permutation matrix.
Example 4.1. Consider the following 4 × 4 matrix
$$A = \begin{pmatrix} 1 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix}.$$
By applying, in this order, the elementary matrices $\Pi_2$, $\Pi_2$, $E_{21}(-1)$, $E_{32}(1)$ and $\Pi_3$ we obtain
$$U = \begin{pmatrix} 1 & 1 & 1 & 0 \\ 0 & -1 & -1 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
So,
$$\Pi_3 E_{32}(1) E_{21}(-1) \Pi_2 \Pi_2 A = U,$$
and by applying Lemma 4.1,
$$E_{42}(1) E_{21}(-1) \Pi_3 \Pi_2 \Pi_2 A = U,$$
and
$$A = \Pi_2^{-1} \Pi_2^{-1} \Pi_3^{-1} E_{21}(1) E_{42}(-1) U = PLU.$$
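With the helpers `E`, `P` and `Pi` sketched after Lemma 3.1 (our own 1-based constructors), the computations of this example can be replayed and the resulting PLU factorization checked numerically:

```python
import numpy as np
# assumes E(n, i, j, alpha), P(n, i, j) and Pi(n, j) from the sketch after Lemma 3.1
n = 4
A = np.array([[1, 1, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1]], dtype=float)
U = Pi(n, 3) @ E(n, 3, 2, 1) @ E(n, 2, 1, -1) @ Pi(n, 2) @ Pi(n, 2) @ A
# realigned form given by Lemma 4.1: E_42(1) E_21(-1) Pi_3 Pi_2 Pi_2 A = U
assert np.allclose(E(n, 4, 2, 1) @ E(n, 2, 1, -1) @ Pi(n, 3) @ Pi(n, 2) @ Pi(n, 2) @ A, U)
# A = P L U with P = Pi_2^{-1} Pi_2^{-1} Pi_3^{-1} and L = E_21(1) E_42(-1)
Pm = np.linalg.inv(Pi(n, 2)) @ np.linalg.inv(Pi(n, 2)) @ np.linalg.inv(Pi(n, 3))
L = E(n, 2, 1, 1) @ E(n, 4, 2, -1)
assert np.allclose(A, Pm @ L @ U)
```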
Note that when A is a nonsingular matrix and satisfies the WR condition defined in [4,5], the Neville elimination can be performed without row exchanges and we have the LU factorization obtained by Gasca and Peña in [5].
In [5], a class of regular matrices is defined for which the Neville elimination process has a lower computational cost than Gaussian elimination. Now, we extend this result to a class of matrices that are not necessarily regular. So, if
$$A = \begin{pmatrix} L & 0 \\ 0 & 0 \end{pmatrix},$$
where $L = E_{i+k}(\alpha_{i+k}) \cdots E_i(\alpha_i)$ is of size k × k, the Gaussian elimination process requires k(k − 1)/2 steps whereas Neville elimination requires k − 1.
Example 4.2. Consider the following matrix of size 5 × 7
$$A = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
-1 & 1 & 0 & 0 & 0 & 0 & 0 \\
-4 & 4 & 1 & 0 & 0 & 0 & 0 \\
-8 & 8 & 2 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}.$$
By the Gaussian elimination process we need the elementary matrices $E_{21}(1)$, $E_{31}(4)$, $E_{41}(8)$, $E_{32}(-4)$, $E_{42}(-8)$ and $E_{43}(-2)$ to obtain
$$U = \begin{pmatrix} I_4 & 0 \\ 0 & 0 \end{pmatrix}.$$
By Neville elimination we need only $E_{43}(-2)$, $E_{32}(-4)$ and $E_{21}(1)$ to obtain the same matrix U.
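A rough way to quantify the saving is to count the nontrivial elementary factors (nonzero multipliers) each method uses. The counting criterion, the helper name `count_factors` and the assumption that no row exchanges are needed are our simplifications, not part of the paper; Gaussian elimination updates every row with the fixed pivot row, Neville elimination with the previous row.

```python
import numpy as np

def count_factors(A, neville, tol=1e-12):
    """Count nontrivial elementary matrices (nonzero multipliers) used to reduce A,
    with Neville (previous-row) or Gaussian (pivot-row) updates; no row exchanges."""
    W = np.array(A, dtype=float)
    n, m = W.shape
    count = 0
    for t in range(min(n - 1, m)):
        for i in range(n - 1, t, -1):
            ref = i - 1 if neville else t          # row used to eliminate W[i, t]
            if abs(W[ref, t]) > tol and abs(W[i, t]) > tol:
                count += 1
                W[i, :] -= (W[i, t] / W[ref, t]) * W[ref, :]
    return count

A = np.array([[ 1, 0, 0, 0, 0, 0, 0],
              [-1, 1, 0, 0, 0, 0, 0],
              [-4, 4, 1, 0, 0, 0, 0],
              [-8, 8, 2, 1, 0, 0, 0],
              [ 0, 0, 0, 0, 0, 0, 0]], dtype=float)
print(count_factors(A, neville=False))   # 6 factors for Gaussian elimination
print(count_factors(A, neville=True))    # 3 factors for Neville elimination
```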
In general, for matrices whose structure is
$$A = \begin{pmatrix} L & 0 \\ 0 & X \end{pmatrix},$$
where $X = I_p$, $X = 0$, $X = L$, etc., the Neville elimination needs fewer elementary matrices than Gaussian elimination.
Example 4.3. Consider the following totally positive matrix:
$$A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 0 & 1 & 1 \end{pmatrix},$$
which does not satisfy the WR condition. By applying, in this order, the elementary matrices $E_{21}(-1)$ and $P_{32}$, we obtain
$$U = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix}.$$
The PLU decomposition of A is
$$P = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \qquad L = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}, \qquad U = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix}.$$
We can observe that U is a totally positive matrix but L is not.
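This observation can be verified by brute force over all minors. The helper below is our own and is adequate only for small matrices; it simply checks nonnegativity of every minor.

```python
import numpy as np
from itertools import combinations

def is_totally_positive(A, tol=1e-12):
    """Check that every minor of A is nonnegative (brute force, small matrices only)."""
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    for k in range(1, min(n, m) + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(m), k):
                if np.linalg.det(A[np.ix_(rows, cols)]) < -tol:
                    return False
    return True

U = [[1, 1, 1], [0, 1, 1], [0, 0, 0]]
L = [[1, 0, 0], [0, 1, 0], [1, 0, 1]]
print(is_totally_positive(U))   # True
print(is_totally_positive(L))   # False: the minor with rows {2,3}, columns {1,2} is -1
```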
The authors intend to apply the results of this paper to generalized totally positive
matrices.
References
[1] T. Ando, Totally positive matrices, Linear Algebra Appl. 90 (1987) 165–219.
[2] M. Fiedler, T.L. Markham, Consecutive-column and row properties of matrices and the Loewner–Neville factorization, Linear Algebra Appl. 266 (1997) 243–259.
[3] M. Fiedler, T.L. Markham, A factorization of totally nonsingular matrices over a ring with identity, Linear Algebra Appl. 304 (2000) 161–171.
[4] M. Gasca, J.M. Peña, Total positivity and Neville elimination, Linear Algebra Appl. 165 (1992) 25–44.
[5] M. Gasca, J.M. Peña, A matricial description of Neville elimination with applications to total positivity, Linear Algebra Appl. 202 (1994) 33–53.
