2.3 Triangular Factorization
This programming technique is necessary to solve a simultaneous set of n
algebraic linear equations, which can be written in matrix form as
A x = b
where A is an n x n square matrix, and x and b are n x 1 vectors.
There are several triangular factorization techniques for different
applications. Such techniques are easy to use with sparsity programming.
Two of these factorization techniques are presented below.
2.3.1) LU Factorization
Let A be factored into two matrices,
A = LU1 or A = L1U
where
L is lower left triangular,
L1 is lower left triangular with a unit diagonal,
U is upper right triangular, and
U1 is upper right triangular with a unit diagonal.
Consider the L1U factorization (similar steps can be applied for LU1). Then
L1U x = b
Let
U x = w
where w is an n-vector. Then
L1 w = b
Solving L1 w = b for w is known as forward substitution; solving U x = w
for x afterwards is known as backward substitution.
The factorized entries of L1U are stored in the same matrix (say J), i.e.,
no new storage area is needed. Techniques that destroy the original data
and replace it with the factorized data are known as "in situ" methods.
The new factorized entries are found from the following two equations,
$$l_{rc} = \frac{j_{rc} - \sum_{q=1}^{c-1} l_{rq}\, u_{qc}}{u_{cc}}, \qquad c = 1, 2, 3, \ldots, r-1 \qquad (2)$$

$$u_{rc} = j_{rc} - \sum_{q=1}^{r-1} l_{rq}\, u_{qc}, \qquad c = r, r+1, \ldots, n \qquad (3)$$
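As a minimal illustration, Eqs. (2) and (3) can be coded in the same style
as the INVERT subroutine of Section 2.3.2. The sketch below is an addition
of this text, not part of the original notes: the subroutine name FACTOR
and the full (non-sparse) storage of A are illustrative assumptions. The
table of factors overwrites A in situ, row by row.

      SUBROUTINE FACTOR (A,N)
C     ILLUSTRATIVE SKETCH (ASSUMED NAME): IN-SITU L1U FACTORIZATION.
C     A IS REPLACED BY ITS TABLE OF FACTORS (L1 AND U SUPERIMPOSED).
C     NO PIVOTING IS DONE, SO EVERY A(IC,IC) MUST REMAIN NONZERO.
      DIMENSION A(N,N)
      DO 1 IR=1,N
C     EQ. (2): COLUMNS 1,...,IR-1 RECEIVE THE ENTRIES OF L1
      DO 2 IC=1,IR-1
      S=A(IR,IC)
      DO 3 IQ=1,IC-1
    3 S=S-A(IR,IQ)*A(IQ,IC)
    2 A(IR,IC)=S/A(IC,IC)
C     EQ. (3): COLUMNS IR,...,N RECEIVE THE ENTRIES OF U
      DO 4 IC=IR,N
      S=A(IR,IC)
      DO 5 IQ=1,IR-1
    5 S=S-A(IR,IQ)*A(IQ,IC)
    4 A(IR,IC)=S
    1 CONTINUE
      RETURN
      END

The empty summations of Eqs. (2) and (3) simply become DO loops with zero
passes, matching the summation convention noted after Eq. (5).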
The forward substitution equation is
$$w_r = b_r - \sum_{q=1}^{r-1} l_{rq}\, w_q, \qquad r = 1, 2, 3, \ldots, n \qquad (4)$$
The backward substitution equation is
$$x_r = \frac{w_r - \sum_{q=r+1}^{n} u_{rq}\, x_q}{u_{rr}}, \qquad r = n, n-1, n-2, \ldots, 1 \qquad (5)$$
Notice that the summations in Eqs. (2-5) are equal to zero when the lower
index of the summation is greater than the upper index.
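A matching sketch for Eqs. (4) and (5), under the same assumptions (the
subroutine name SOLVE is illustrative; w overwrites b in situ):

      SUBROUTINE SOLVE (A,B,X,N)
C     ILLUSTRATIVE SKETCH (ASSUMED NAME): SOLVES A*X = B, WHERE A
C     HOLDS THE TABLE OF FACTORS PRODUCED BY FACTOR ABOVE.
      DIMENSION A(N,N),B(N),X(N)
C     FORWARD SUBSTITUTION, EQ. (4): W OVERWRITES B IN SITU
      DO 1 IR=2,N
      DO 1 IQ=1,IR-1
    1 B(IR)=B(IR)-A(IR,IQ)*B(IQ)
C     BACKWARD SUBSTITUTION, EQ. (5): THE DIAGONAL OF A HOLDS U(R,R)
      DO 2 IR=N,1,-1
      S=B(IR)
      DO 3 IQ=IR+1,N
    3 S=S-A(IR,IQ)*X(IQ)
    2 X(IR)=S/A(IR,IR)
      RETURN
      END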
A drawback of this method is that if the vector b is changed, the
forward/backward substitution process must be repeated (unlike the direct
inversion of the matrix).
In most power system applications the inversion of a matrix is not
needed; the LU factorization is therefore favored for such applications.
Example (6):
Use L1U-factorization to solve the following linear equations,
$$\begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 5 \\ 2 & 0 & 1 & 0 \\ 0 & 0 & 0 & 4 \end{bmatrix} x = \begin{bmatrix} 1 \\ 3 \\ 0 \\ 1 \end{bmatrix}$$
Solution:
This problem requires three steps,
1) Replacing the entries of the matrix by the table of factors (TOF).
The first row new entries, using Equation (3), are
u11 = j11 - 0 = 1, u12 = j12 - 0 = 0, u13 = j13 - 0 = 0, u14 = j14 - 0 = 1
(Note that the sums are zero because the upper index of summation is
zero).
The second row new entries, using Eqs. (2) & (3), are
l21 = (j21 - 0)/u11 = (0 - 0)/1 = 0, u22 = j22 - l21u12 = 1 - 0 = 1,
u23 = j23 - l21u13 = 0 - 0 = 0, u24 = j24 - l21u14 = 5 - 0 = 5
The third row new entries, using Eqs. (2) & (3), are
l31 = (j31 - 0)/u11 = (2 - 0)/1 = 2, l32 = (j32 - l31u12)/u22 = (0 - 0)/1 = 0,
u33 = j33 - l31u13 - l32u23 = 1 - 0 - 0 = 1,
u34 = j34 - l31u14 - l32u24 = 0 - 2 - 0 = -2
The fourth row new entries, using Eqs. (2) & (3), are
l41 = (j41 - 0)/u11 = (0 - 0)/1 = 0, l42 = (j42 - l41u12)/u22 = (0 - 0)/1 = 0,
l43 = (j43 - l41u13 - l42u23)/u33 = (0 - 0 - 0)/1 = 0,
u44 = j44 - l41u14 - l42u24 - l43u34 = 4 - 0 - 0 - 0 = 4
Therefore, the completed table of factors (TOF), which contains both L1
and U superimposed, is
$$\begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 5 \\ 2 & 0 & 1 & -2 \\ 0 & 0 & 0 & 4 \end{bmatrix}$$
where

$$L_1 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 2 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad \text{and} \qquad U = \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 5 \\ 0 & 0 & 1 & -2 \\ 0 & 0 & 0 & 4 \end{bmatrix}$$
2) Using forward substitution to find the vector w according to Eq. (4).
Then
w1 = b1 - 0 = 1 - 0 = 1,
w2 = b2 - l21w1 = 3 - 0 = 3,
w3 = b3 - l31w1 - l32w2 = 0 - 2 x 1 - 0 = -2, and
w4 = b4 - l41w1 - l42w2 – l43w3 = 1 - 0 - 0 - 0 = 1
Therefore, the w vector is
$$w = \begin{bmatrix} 1 \\ 3 \\ -2 \\ 1 \end{bmatrix}$$
3) Using backward substitution to find the vector x according to Eq. (5).
Then
x4 = (w4 - 0)/u44 = (1 - 0)/4 = 0.25,
x3 = (w3 - u34x4)/u33 = (-2 - (-2)(0.25))/1 = -1.5,
x2 = (w2 - u23x3 - u24x4)/u22 = (3 - 0 - 5(0.25))/1 = 1.75, and
x1 = (w1 - u12x2 - u13x3 - u14x4)/u11 = (1 - 0 - 0 - 1(0.25))/1 = 0.75
Therefore, the x vector (which is the final solution) is
$$x = \begin{bmatrix} 0.75 \\ 1.75 \\ -1.5 \\ 0.25 \end{bmatrix}$$
Checking the results,

$$\begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 5 \\ 2 & 0 & 1 & 0 \\ 0 & 0 & 0 & 4 \end{bmatrix} \begin{bmatrix} 0.75 \\ 1.75 \\ -1.5 \\ 0.25 \end{bmatrix} = \begin{bmatrix} 1(0.75) + 0 + 0 + 1(0.25) \\ 0 + 1(1.75) + 0 + 5(0.25) \\ 2(0.75) + 0 + 1(-1.5) + 0 \\ 0 + 0 + 0 + 4(0.25) \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \\ 0 \\ 1 \end{bmatrix}$$
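For reference, a hypothetical driver (assuming the FACTOR and SOLVE
sketches of Section 2.3.1 are compiled with it) reproduces this example.
Note that FORTRAN DATA statements fill arrays column by column.

      PROGRAM EXMPL6
C     ILLUSTRATIVE DRIVER (ASSUMED NAME) FOR EXAMPLE (6)
      DIMENSION A(4,4),B(4),X(4)
C     THE MATRIX IS LISTED COLUMN BY COLUMN
      DATA A/1.,0.,2.,0., 0.,1.,0.,0., 0.,0.,1.,0., 1.,5.,0.,4./
      DATA B/1.,3.,0.,1./
      CALL FACTOR(A,4)
      CALL SOLVE(A,B,X,4)
      WRITE(*,*) X
      END

Running it should print 0.75, 1.75, -1.5, 0.25, the x vector found above.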
2.3.2) Chipley-Coleman Matrix Inversion
This method allows inversion of a nonsingular square matrix in situ (i.e.,
the inverted matrix replaces the original matrix). The advantage of this
method is that if the b vector is changed, the inverted matrix never needs
to be recalculated. Another advantage is its ease of programming and its
connection to power engineering problems.
The algorithm to invert, say, an n x n matrix is:
a- Let the pivot axis, p, be axis 1. (A pivot is defined as a nonzero
entry of the matrix that is located at, or will be assigned to, a diagonal
position.)
b- Kron-reduce all elements outside the pivot axis,

$$A_{ij}^{new} = A_{ij}^{old} - \frac{A_{ip}^{old} A_{pj}^{old}}{A_{pp}^{old}}, \qquad i \neq p, \; j \neq p \qquad (6)$$
c- Replace the pivot position by its negative inverse.
$$A_{pp}^{new} = \frac{-1}{A_{pp}^{old}} \qquad (7)$$
d- Reduce the elements in the pivot axis, but outside the p, p position.
$$A_{ip}^{new} = A_{ip}^{old} A_{pp}^{new}, \qquad i \neq p \qquad (8)$$

$$A_{pj}^{new} = A_{pj}^{old} A_{pp}^{new}, \qquad j \neq p \qquad (9)$$
e- Repeat steps (b) through (d) for p = 2, 3, …, n. The result is
$-A^{-1}$; reversing the sign then yields $A^{-1}$.
Example (7):
Invert A using the Chipley-Coleman method.
$$A = \begin{bmatrix} 4.0 & 1.0 & 0.5 \\ 1.0 & 3.0 & 0.0 \\ 0.5 & 0.0 & 3.0 \end{bmatrix}$$
Solution:
Let p = 1, according to Eq. (6)
$$A_{22}^{new} = A_{22}^{old} - \frac{A_{21}^{old} A_{12}^{old}}{A_{11}^{old}} = 3.0 - \frac{1.0 \times 1.0}{4.0} = 2.75$$

Similarly, $A_{23}^{new} = A_{32}^{new} = -0.125$ and $A_{33}^{new} = 2.9375$.
AAA
According to Eqs. (7-9),
,25.0
0.4
11
11
11 −=
−
=
−
= old
new
A
A
25.0)25.0(0.111122112 −=−=== xAAAA newoldnewnew
, and
125.0)25.0(5.011133113 −=−=== xAAAA newoldnewnew
Therefore, for p = 1, the modified matrix is
$$\begin{bmatrix} -0.25 & -0.25 & -0.125 \\ -0.25 & 2.75 & -0.125 \\ -0.125 & -0.125 & 2.9375 \end{bmatrix}$$
Repeating this process for p = 2, 3, the resulting $-A^{-1}$ is
$$\begin{bmatrix} -0.27906 & 0.09302 & 0.04651 \\ 0.09302 & -0.36434 & -0.01550 \\ 0.04651 & -0.01550 & -0.34109 \end{bmatrix}$$
A subroutine INVERT for inverting a symmetric matrix is given below.
      SUBROUTINE INVERT (A,N)
C     IN-SITU CHIPLEY-COLEMAN INVERSION OF THE N X N MATRIX A
      DIMENSION A(N,N)
      DO 1 IP=1,N
C     STEP (B): KRON-REDUCE OUTSIDE THE PIVOT AXIS, EQ. (6)
      DO 2 IR=1,N
      IF(IR.EQ.IP) GOTO 2
      DO 3 IC=1,N
      IF(IC.EQ.IP) GOTO 3
      A(IR,IC)=A(IR,IC)-(A(IR,IP)*A(IP,IC)/A(IP,IP))
    3 CONTINUE
    2 CONTINUE
C     STEP (C): REPLACE THE PIVOT BY ITS NEGATIVE INVERSE, EQ. (7)
      A(IP,IP)=-1.0/A(IP,IP)
C     STEP (D): SCALE THE PIVOT COLUMN AND ROW, EQS. (8) AND (9)
      DO 4 I=1,N
      IF(I.EQ.IP) GOTO 4
      A(I,IP)=A(I,IP)*A(IP,IP)
      A(IP,I)=A(IP,I)*A(IP,IP)
    4 CONTINUE
    1 CONTINUE
C     STEP (E) DONE: A HOLDS -A-INVERSE, SO REVERSE THE SIGN
      DO 5 IR=1,N
      DO 5 IC=1,N
    5 A(IR,IC)=-A(IR,IC)
      RETURN
      END
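As a usage sketch (the driver name EXMPL7 is an assumption of this text),
the matrix of Example (7) can be inverted by:

      PROGRAM EXMPL7
C     ILLUSTRATIVE DRIVER (ASSUMED NAME) FOR EXAMPLE (7)
      DIMENSION A(3,3)
      DATA A/4.0,1.0,0.5, 1.0,3.0,0.0, 0.5,0.0,3.0/
      CALL INVERT(A,3)
C     PRINT A-INVERSE ROW BY ROW
      DO 1 IR=1,3
    1 WRITE(*,*) (A(IR,IC),IC=1,3)
      END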
Note that this subroutine stores the A matrix in full. Considerable savings
of memory and time are possible using sparsity programming and optimal
ordering techniques.
2.4 Optimal Ordering
Optimal ordering is a procedure by which the 'fill-ins' (new nonzeros
generated) during the factorization or inversion process are minimized;
it refers to renumbering the matrix axes so that as few fill-ins as
possible are created.
The process of optimal ordering results in a table of factors (and
inverted matrices) that remain sparse, with only slightly more fill-ins
than the original sparse matrix. However, if the original matrix is dense
(not sparse), then the application of optimal ordering becomes
ineffective.
To illustrate the objective of optimal ordering, consider the following
matrix with 'x' representing the nonzeros.
$$\begin{bmatrix}
x & x & x & x & x & x & x & x & x \\
x & x &   &   &   &   &   &   &   \\
x &   & x &   &   &   &   &   &   \\
x &   &   & x &   &   &   &   &   \\
x &   &   &   & x &   &   &   &   \\
x &   &   &   &   & x &   &   &   \\
x &   &   &   &   &   & x &   &   \\
x &   &   &   &   &   &   & x &   \\
x &   &   &   &   &   &   &   & x
\end{bmatrix}$$
If this matrix is factorized without ordering the axes of the matrix, the
table of factors (TOF) will be dense (a few entries may still turn out to
be zero, but only because of cancellation between some of the numbers
during the factorization process),
$$\begin{bmatrix}
x & x & x & x & x & x & x & x & x \\
x & x & x & x & x & x & x & x & x \\
x & x & x & x & x & x & x & x & x \\
x & x & x & x & x & x & x & x & x \\
x & x & x & x & x & x & x & x & x \\
x & x & x & x & x & x & x & x & x \\
x & x & x & x & x & x & x & x & x \\
x & x & x & x & x & x & x & x & x \\
x & x & x & x & x & x & x & x & x
\end{bmatrix}$$
If the first pivot is moved to the last axis, i.e.,
$$\begin{bmatrix}
x &   &   &   &   &   &   &   & x \\
  & x &   &   &   &   &   &   & x \\
  &   & x &   &   &   &   &   & x \\
  &   &   & x &   &   &   &   & x \\
  &   &   &   & x &   &   &   & x \\
  &   &   &   &   & x &   &   & x \\
  &   &   &   &   &   & x &   & x \\
  &   &   &   &   &   &   & x & x \\
x & x & x & x & x & x & x & x & x
\end{bmatrix}$$
then the table of factors (TOF) will be sparse; for this particular
matrix structure, the TOF has exactly the same sparse structure as the
original matrix (no fill-ins are generated at all), only with different
nonzero values.
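This behavior can be checked with a small symbolic-factorization sketch.
The program below is an illustration added here, not part of the original
notes: its name and the LOGICAL-array representation of the nonzero
pattern are assumptions. It eliminates the pivots in natural order and
marks a fill-in wherever a pivot couples two axes that were not
previously connected.

      PROGRAM ORDER
C     ILLUSTRATIVE SKETCH (ASSUMED NAME): COUNTS FILL-INS FOR THE
C     9X9 ARROW MATRIX ABOVE, WITH THE FULL AXIS FIRST AND LAST
      LOGICAL NZ(9,9)
      DO 1 KF=1,9,8
      NFILL=0
C     NONZERO PATTERN: DIAGONAL PLUS THE FULL AXIS KF
      DO 2 I=1,9
      DO 2 J=1,9
    2 NZ(I,J)=(I.EQ.J).OR.(I.EQ.KF).OR.(J.EQ.KF)
C     SYMBOLIC ELIMINATION: A FILL-IN APPEARS AT (I,J) WHEN BOTH
C     (I,IP) AND (IP,J) ARE NONZERO BUT (I,J) IS NOT
      DO 3 IP=1,9
      DO 3 I=IP+1,9
      DO 3 J=IP+1,9
      IF(NZ(I,IP).AND.NZ(IP,J).AND..NOT.NZ(I,J)) THEN
        NZ(I,J)=.TRUE.
        NFILL=NFILL+1
      ENDIF
    3 CONTINUE
    1 WRITE(*,*) 'FULL AXIS',KF,'  FILL-INS',NFILL
      END

For this matrix it should report 56 fill-ins when the full axis is
numbered first and none when it is numbered last.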