Triangular factorization method for a power network problem (in matrix form). A direct solution can be found without calculating the inverse matrix, which is usually considered an exhaustive method, especially for large-scale networks.
Direct solution of sparse network equations by optimally ordered triangular factorization
1. Direct Solution of Sparse Network
Equations by Optimally Ordered
Triangular Factorization
By: Dimas Ruliandi
2. Background
• Usually, the objective in matrix analysis is to obtain the inverse
of the matrix to solve [A][X] = [B]
• Many network problems involve large, sparse systems
• Inverse matrix calculation is very inefficient for such systems
• Appropriately ordered triangular decomposition provides
advantages in computational speed, storage, and reduction of
round-off error
3. Background
The method consists of two parts:
1. A scheme for recording the operations of triangular decomposition of a
matrix, such that repeated direct solutions based on the matrix can be
obtained without repeating the triangularization
Applicable to any matrix
2. A scheme for ordering the operations that tends to conserve the sparsity of the
original system
Ordering to conserve sparsity is limited to sparse matrices in which
the pattern of nonzero elements is symmetric and for which an arbitrary
order of decomposition doesn't adversely affect numerical accuracy
4. Implementation Example
• Power Flow
• Short Circuit
• Transient Stability
• Network Reduction
• Switching Transients
• Reactive optimization
• Tower design
5. Triangular Decomposition
• Similar to the methods associated with the names of Gauss,
Doolittle, Choleski, Banachiewicz, and others
• A computational variation of the basic process of
triangularizing a matrix by equivalence
transformations
• The scheme is applicable to any nonsingular matrix,
real or complex, sparse or full, symmetric or nonsymmetric
6. Triangular Decomposition
• Ax = b, also written [A][X] = [B]
• A is a nonsingular matrix, x is a column vector of unknowns, and b is a
known vector with at least one nonzero element.
• GOAL: transform the augmented system into an upper unit-triangular one,

$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} & b_n \end{bmatrix} \;\longrightarrow\; \begin{bmatrix} 1 & a_{12}^{(1)} & \cdots & a_{1n}^{(1)} & b_1^{(1)} \\ & 1 & \cdots & a_{2n}^{(2)} & b_2^{(2)} \\ & & \ddots & \vdots & \vdots \\ & & & 1 & b_n^{(n)} \end{bmatrix}$$

where the parenthesized superscript indicates the order of the derived system
9. Triangular Decomposition
• 3rd step (cont'd): after the first, second, and third operations of the step, the augmented matrix takes the successive forms

$$\begin{bmatrix} 1 & a_{12}^{(1)} & a_{13}^{(1)} & \cdots & b_1^{(1)} \\ 0 & 1 & a_{23}^{(2)} & \cdots & b_2^{(2)} \\ 0 & a_{32}^{(1)} & a_{33}^{(1)} & \cdots & b_3^{(1)} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \cdots & b_n \end{bmatrix} \rightarrow \begin{bmatrix} 1 & a_{12}^{(1)} & a_{13}^{(1)} & \cdots & b_1^{(1)} \\ 0 & 1 & a_{23}^{(2)} & \cdots & b_2^{(2)} \\ 0 & 0 & a_{33}^{(2)} & \cdots & b_3^{(2)} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \cdots & b_n \end{bmatrix} \rightarrow \begin{bmatrix} 1 & a_{12}^{(1)} & a_{13}^{(1)} & \cdots & b_1^{(1)} \\ 0 & 1 & a_{23}^{(2)} & \cdots & b_2^{(2)} \\ 0 & 0 & 1 & \cdots & b_3^{(3)} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \cdots & b_n \end{bmatrix}$$

• nth step: continuing through row n yields the final derived system

$$\begin{bmatrix} 1 & a_{12}^{(1)} & a_{13}^{(1)} & \cdots & a_{1n}^{(1)} & b_1^{(1)} \\ & 1 & a_{23}^{(2)} & \cdots & a_{2n}^{(2)} & b_2^{(2)} \\ & & 1 & \cdots & a_{3n}^{(3)} & b_3^{(3)} \\ & & & \ddots & \vdots & \vdots \\ & & & & 1 & b_n^{(n)} \end{bmatrix}$$

At the end of the kth step, work on rows 1 to k has been completed
and rows k + 1 to n haven't yet entered the process
10. Triangular Decomposition
• After the whole process is complete (nth step), the solution can be obtained by back
substitution:

$$x_n = b_n^{(n)}, \qquad x_{n-1} = b_{n-1}^{(n-1)} - a_{n-1,n}^{(n-1)}\, x_n, \qquad x_i = b_i^{(i)} - \sum_{j=i+1}^{n} a_{ij}^{(i)}\, x_j$$

• Triangularization in the same order by columns instead of rows would have produced
identically the same result.
• When A is full and n is large, it can be shown that the number of multiplication-addition
operations for triangular decomposition is approximately $\tfrac{1}{3}n^3$, compared with $n^3$ for
inversion
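As an illustration, the elimination and back-substitution procedure above can be sketched in a few lines of Python (a minimal dense-matrix sketch of my own, not the paper's sparse implementation; it omits pivoting, so it assumes every derived diagonal element is nonzero):

```python
def solve_triangular(A, b):
    """Solve A x = b: reduce the augmented matrix [A | b] to an upper
    unit-triangular system, then back-substitute.  Dense sketch,
    no pivoting (assumes nonzero derived diagonal elements)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augment A with b
    for k in range(n):
        pivot = M[k][k]                 # derived diagonal a_kk^(k-1)
        for j in range(k, n + 1):
            M[k][j] /= pivot            # normalize row k: 1 on the diagonal
        for i in range(k + 1, n):
            f = M[i][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]  # eliminate column k below the diagonal
    x = [0.0] * n
    for i in range(n - 1, -1, -1):      # x_i = b_i^(i) - sum_{j>i} a_ij^(i) x_j
        x[i] = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
    return x
```

Applied to the 3×3 example used later in the deck, `solve_triangular([[2, 1, 3], [2, 3, 4], [3, 4, 7]], [6, 9, 14])` returns `[1.0, 1.0, 1.0]`.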
11. Recording the Operations
• The rules for recording the forward operations of triangularization are:
1) When the term $1/a_{ii}^{(i-1)}$ is computed, store it in location $ii$
2) Leave every derived term $a_{ij}^{(j-1)}$, $i > j$, in the lower triangle
• The final result of triangularizing A and recording the forward operations is symbolized
as (also called the table of factors):

$$\begin{bmatrix} d_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\ l_{21} & d_{22} & u_{23} & \cdots & u_{2n} \\ l_{31} & l_{32} & d_{33} & \cdots & u_{3n} \\ \vdots & & & \ddots & \vdots \\ l_{n1} & l_{n2} & l_{n3} & \cdots & d_{nn} \end{bmatrix}$$

where u = upper, l = lower, d = diagonal:

$$d_{ii} = \frac{1}{a_{ii}^{(i-1)}}, \qquad u_{ij} = a_{ij}^{(i)} \;\;(i < j), \qquad l_{ij} = a_{ij}^{(j-1)} \;\;(i > j)$$
12. Example – Triangularization & Recording
$$A = \begin{bmatrix} 2 & 1 & 3 \\ 2 & 3 & 4 \\ 3 & 4 & 7 \end{bmatrix}$$
Find the table of factors for A.
Ans:
1st step: first make $a_{11}$ into 1, using $a_{1j}^{(1)} = \frac{1}{a_{11}}\, a_{1j}$:

$$a_{11}^{(1)} = \tfrac{1}{2} \cdot 2 = 1, \qquad a_{12}^{(1)} = \tfrac{1}{2} \cdot 1 = \tfrac{1}{2}, \qquad a_{13}^{(1)} = \tfrac{1}{2} \cdot 3 = \tfrac{3}{2}$$

Result (record $\tfrac{1}{2}$ as $d_{11}$, and $\tfrac{1}{2}$, $\tfrac{3}{2}$ as $u_{12}$, $u_{13}$):

$$\begin{bmatrix} 1 & \tfrac{1}{2} & \tfrac{3}{2} \\ 2 & 3 & 4 \\ 3 & 4 & 7 \end{bmatrix} \qquad \text{Table of factors: } \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{2} & \tfrac{3}{2} \\ ? & ? & ? \\ ? & ? & ? \end{bmatrix}$$
13. Example – Triangularization & Recording
2nd step: first make $a_{21}$ into 0, using $a_{2j}^{(1)} = a_{2j} - a_{21}\, a_{1j}^{(1)}$:

$$a_{21}^{(1)} = 2 - (2 \times 1) = 0, \qquad a_{22}^{(1)} = 3 - \left(2 \times \tfrac{1}{2}\right) = 2, \qquad a_{23}^{(1)} = 4 - \left(2 \times \tfrac{3}{2}\right) = 1$$

$$\begin{bmatrix} 1 & \tfrac{1}{2} & \tfrac{3}{2} \\ 0 & 2 & 1 \\ 3 & 4 & 7 \end{bmatrix}$$

then make $a_{22}^{(1)}$ into 1, using $a_{2j}^{(2)} = \frac{1}{a_{22}^{(1)}}\, a_{2j}^{(1)}$:

$$a_{21}^{(2)} = 0, \qquad a_{22}^{(2)} = \tfrac{1}{2} \cdot 2 = 1, \qquad a_{23}^{(2)} = \tfrac{1}{2} \cdot 1 = \tfrac{1}{2}$$

Result (record 2 as $l_{21}$, $\tfrac{1}{2}$ as $d_{22}$, $\tfrac{1}{2}$ as $u_{23}$):

$$\begin{bmatrix} 1 & \tfrac{1}{2} & \tfrac{3}{2} \\ 0 & 1 & \tfrac{1}{2} \\ 3 & 4 & 7 \end{bmatrix} \qquad \text{Table of factors: } \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{2} & \tfrac{3}{2} \\ 2 & \tfrac{1}{2} & \tfrac{1}{2} \\ ? & ? & ? \end{bmatrix}$$
15. Example – Triangularization & Recording
3rd step (cont'd): finally make $a_{33}^{(2)} = \tfrac{5}{4}$ into 1, using $a_{3j}^{(3)} = \frac{1}{a_{33}^{(2)}}\, a_{3j}^{(2)}$:

$$a_{31}^{(3)} = 0, \qquad a_{32}^{(3)} = 0, \qquad a_{33}^{(3)} = \tfrac{4}{5} \cdot \tfrac{5}{4} = 1$$

Result (record $\tfrac{4}{5}$ as $d_{33}$):

$$\begin{bmatrix} 1 & \tfrac{1}{2} & \tfrac{3}{2} \\ 0 & 1 & \tfrac{1}{2} \\ 0 & 0 & 1 \end{bmatrix} \qquad \text{Table of factors: } \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{2} & \tfrac{3}{2} \\ 2 & \tfrac{1}{2} & \tfrac{1}{2} \\ 3 & \tfrac{5}{2} & \tfrac{4}{5} \end{bmatrix}$$
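The recording rules from slide 11 can be sketched as a small Python routine (a dense, no-pivoting sketch of my own; the function name is not from the paper) that overwrites a copy of A with its table of factors. On the example matrix it reproduces the table above:

```python
def table_of_factors(A):
    """Triangularize A while recording the forward operations:
    d_ii = 1/a_ii^(i-1) on the diagonal, u_ij = a_ij^(i) above it,
    and l_ij = a_ij^(j-1) left untouched below it."""
    n = len(A)
    T = [row[:] for row in A]
    for k in range(n):
        T[k][k] = 1.0 / T[k][k]            # record d_kk
        for j in range(k + 1, n):
            T[k][j] *= T[k][k]             # record u_kj (normalized row k)
        for i in range(k + 1, n):          # update derived terms below row k;
            for j in range(k + 1, n):      # column k itself is left as l_ik
                T[i][j] -= T[i][k] * T[k][j]
    return T
```

`table_of_factors([[2, 1, 3], [2, 3, 4], [3, 4, 7]])` yields `[[0.5, 0.5, 1.5], [2, 0.5, 0.5], [3, 2.5, 0.8]]`, i.e. $d_{11} = \tfrac{1}{2}$ through $d_{33} = \tfrac{4}{5}$ as on the slide.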
16. Computing Direct Solutions
• It is convenient, in symbolizing the operations for obtaining direct solutions, to define
some special matrices in terms of the elements of the table of factors:

$$D_i: \;\text{row } i = (0, 0, \ldots, 0, d_{ii}, 0, \ldots, 0, 0)$$
$$L_i: \;\text{col } i = (0, 0, \ldots, 0, 1, -l_{i+1,i}, -l_{i+2,i}, \ldots, -l_{n-1,i}, -l_{n,i})^t$$
$$L_i^*: \;\text{row } i = (-l_{i,1}, -l_{i,2}, \ldots, -l_{i,i-1}, 1, 0, \ldots, 0, 0)$$
$$U_i: \;\text{row } i = (0, 0, \ldots, 0, 1, -u_{i,i+1}, -u_{i,i+2}, \ldots, -u_{i,n-1}, -u_{i,n})$$
$$U_i^*: \;\text{col } i = (-u_{1,i}, -u_{2,i}, \ldots, -u_{i-1,i}, 1, 0, \ldots, 0, 0)^t$$

D, L, U are nonsingular matrices that differ from the unit matrix only in the
row or column indicated
• The inverses of these matrices are trivial. The inverse of $D_i$ involves only the reciprocal of
the element $d_{ii}$, and the inverses of $L_i$, $L_i^*$, $U_i$, $U_i^*$ involve only a reversal of the algebraic
signs of the off-diagonal elements
17. Computing Direct Solutions
• The forward and backward substitution operations on the column vector b that transform
it into x can be expressed as premultiplications by the matrices $D_i$, $L_i$ or $L_i^*$, and $U_i$ or $U_i^*$.
• The solution of $Ax = b$ can be expressed as indicated:

a) $U_1 U_2 \cdots U_{n-2} U_{n-1}\, D_n L_{n-1} D_{n-1} L_{n-2} \cdots L_2 D_2 L_1 D_1\, b = A^{-1} b = x$
b) $U_1 U_2 \cdots U_{n-2} U_{n-1}\, D_n L_n^* D_{n-1} L_{n-1}^* \cdots L_3^* D_2 L_2^* D_1\, b = A^{-1} b = x$
c) $U_2^* U_3^* \cdots U_{n-1}^* U_n^*\, D_n L_{n-1} D_{n-1} L_{n-2} \cdots L_2 D_2 L_1 D_1\, b = A^{-1} b = x$
d) $U_2^* U_3^* \cdots U_{n-1}^* U_n^*\, D_n L_n^* D_{n-1} L_{n-1}^* \cdots L_3^* D_2 L_2^* D_1\, b = A^{-1} b = x$

Depending on programming techniques, one of these will prove to be the most convenient.
• (a) describes the forward and backward substitution operations that would be performed
on b if it augmented A during triangularization by columns, while (b) describes the same
result for triangularization by rows
• (c) and (d) describe other sequences of the same operations giving the same result
18. Computing Direct Solutions
• Example:
$$A = \begin{bmatrix} 2 & 1 & 3 \\ 2 & 3 & 4 \\ 3 & 4 & 7 \end{bmatrix}, \qquad \text{table of factors for } A \text{ (from the previous slides)} = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{2} & \tfrac{3}{2} \\ 2 & \tfrac{1}{2} & \tfrac{1}{2} \\ 3 & \tfrac{5}{2} & \tfrac{4}{5} \end{bmatrix}$$
• With $b = (6, 9, 14)^t$ given, solve for $x = A^{-1} b$
• Using the direct-solution formula from the previous slide, equation (a):

$$\underbrace{\begin{bmatrix} 1 & -\tfrac{1}{2} & -\tfrac{3}{2} \\ & 1 & \\ & & 1 \end{bmatrix}}_{U_1} \underbrace{\begin{bmatrix} 1 & & \\ & 1 & -\tfrac{1}{2} \\ & & 1 \end{bmatrix}}_{U_2} \underbrace{\begin{bmatrix} 1 & & \\ & 1 & \\ & & \tfrac{4}{5} \end{bmatrix}}_{D_3} \underbrace{\begin{bmatrix} 1 & & \\ & 1 & \\ & -\tfrac{5}{2} & 1 \end{bmatrix}}_{L_2} \underbrace{\begin{bmatrix} 1 & & \\ & \tfrac{1}{2} & \\ & & 1 \end{bmatrix}}_{D_2} \underbrace{\begin{bmatrix} 1 & & \\ -2 & 1 & \\ -3 & & 1 \end{bmatrix}}_{L_1} \underbrace{\begin{bmatrix} \tfrac{1}{2} & & \\ & 1 & \\ & & 1 \end{bmatrix}}_{D_1} \underbrace{\begin{bmatrix} 6 \\ 9 \\ 14 \end{bmatrix}}_{b} = \underbrace{\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}}_{x}$$
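The repeated direct solution that the table of factors enables can be sketched as follows (my own Python rendering of the forward/backward sequence of form (a); the function name is an assumption, and the original matrix A is never needed):

```python
def direct_solve(T, b):
    """Solve A x = b given only the table of factors T of A:
    forward operations (each D_i, then L_i) followed by back
    substitution (the U_i operations), without re-triangularizing."""
    n = len(T)
    w = list(b)
    for i in range(n):                 # forward pass, column by column
        w[i] *= T[i][i]                # D_i: multiply by d_ii
        for j in range(i + 1, n):
            w[j] -= T[j][i] * w[i]     # L_i: subtract l_ji * w_i
    for i in range(n - 1, -1, -1):     # backward pass
        for j in range(i + 1, n):
            w[i] -= T[i][j] * w[j]     # U_i: subtract u_ij * w_j
    return w
```

With the table of factors from the earlier slides, `direct_solve([[0.5, 0.5, 1.5], [2, 0.5, 0.5], [3, 2.5, 0.8]], [6, 9, 14])` returns `[1.0, 1.0, 1.0]`, matching the slide. New right-hand sides reuse the same table at no extra factorization cost.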
19. Computing Direct Solutions
• Given $x$, the vector $b$ can be obtained as:

a) $D_1^{-1} L_1^{-1} D_2^{-1} L_2^{-1} \cdots L_{n-1}^{-1} D_n^{-1}\, U_{n-1}^{-1} U_{n-2}^{-1} \cdots U_2^{-1} U_1^{-1}\, x = A x = b$
b) $D_1^{-1} (L_2^*)^{-1} D_2^{-1} (L_3^*)^{-1} \cdots (L_n^*)^{-1} D_n^{-1}\, U_{n-1}^{-1} U_{n-2}^{-1} \cdots U_2^{-1} U_1^{-1}\, x = A x = b$

• Example (matrix A the same as in the previous example): with $x = (1, 1, 1)^t$ given, solve for $b = A x$. Using the table of factors for A,

$$\underbrace{\begin{bmatrix} 2 & & \\ & 1 & \\ & & 1 \end{bmatrix}}_{D_1^{-1}} \underbrace{\begin{bmatrix} 1 & & \\ 2 & 1 & \\ 3 & & 1 \end{bmatrix}}_{L_1^{-1}} \underbrace{\begin{bmatrix} 1 & & \\ & 2 & \\ & & 1 \end{bmatrix}}_{D_2^{-1}} \underbrace{\begin{bmatrix} 1 & & \\ & 1 & \\ & \tfrac{5}{2} & 1 \end{bmatrix}}_{L_2^{-1}} \underbrace{\begin{bmatrix} 1 & & \\ & 1 & \\ & & \tfrac{5}{4} \end{bmatrix}}_{D_3^{-1}} \underbrace{\begin{bmatrix} 1 & & \\ & 1 & \tfrac{1}{2} \\ & & 1 \end{bmatrix}}_{U_2^{-1}} \underbrace{\begin{bmatrix} 1 & \tfrac{1}{2} & \tfrac{3}{2} \\ & 1 & \\ & & 1 \end{bmatrix}}_{U_1^{-1}} \underbrace{\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}}_{x} = \underbrace{\begin{bmatrix} 6 \\ 9 \\ 14 \end{bmatrix}}_{b}$$
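This reverse direction can also be sketched in Python (function name mine): since the inverses of the factor matrices only flip the off-diagonal signs or take the reciprocal of $d_{ii}$, recovering $b = Ax$ from the table of factors needs no extra storage:

```python
def recover_b(T, x):
    """Given x and the table of factors T of A, recover b = A x by
    applying the trivially inverted factors in reverse order:
    D_1^-1 L_1^-1 ... D_n^-1 U_{n-1}^-1 ... U_1^-1 x = b."""
    n = len(T)
    w = list(x)
    for i in range(n):                 # U_i^-1: add back u_ij * w_j
        for j in range(i + 1, n):
            w[i] += T[i][j] * w[j]
    for i in range(n - 1, -1, -1):     # then L_i^-1 followed by D_i^-1
        for j in range(i + 1, n):
            w[j] += T[j][i] * w[i]     # L_i^-1: add back l_ji * w_i
        w[i] /= T[i][i]                # D_i^-1: divide by d_ii
    return w
```

For the example, `recover_b([[0.5, 0.5, 1.5], [2, 0.5, 0.5], [3, 2.5, 0.8]], [1, 1, 1])` gives `[6.0, 9.0, 14.0]`, the original b.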
20. Computing Direct Solutions
• Given $A^t y = c$, the vector $y$ can be obtained as (a); inversely, $c$ can be recovered from $y$ as (b):

a) $D_1 L_1^t D_2 L_2^t \cdots L_{n-1}^t D_n\, U_{n-1}^t U_{n-2}^t \cdots U_2^t U_1^t\, c = (A^t)^{-1} c = y$
b) $(U_1^t)^{-1} (U_2^t)^{-1} \cdots (U_{n-2}^t)^{-1} (U_{n-1}^t)^{-1} D_n^{-1} (L_{n-1}^t)^{-1} D_{n-1}^{-1} \cdots (L_2^t)^{-1} D_2^{-1} (L_1^t)^{-1} D_1^{-1}\, y = A^t y = c$

• Example (matrix A the same as in the previous example): with $c = (9, 9, 17)^t$ given, solve for $y$ with $A^t y = c$. Using the table of factors for A,

$$\underbrace{\begin{bmatrix} \tfrac{1}{2} & & \\ & 1 & \\ & & 1 \end{bmatrix}}_{D_1} \underbrace{\begin{bmatrix} 1 & -2 & -3 \\ & 1 & \\ & & 1 \end{bmatrix}}_{L_1^t} \underbrace{\begin{bmatrix} 1 & & \\ & \tfrac{1}{2} & \\ & & 1 \end{bmatrix}}_{D_2} \underbrace{\begin{bmatrix} 1 & & \\ & 1 & -\tfrac{5}{2} \\ & & 1 \end{bmatrix}}_{L_2^t} \underbrace{\begin{bmatrix} 1 & & \\ & 1 & \\ & & \tfrac{4}{5} \end{bmatrix}}_{D_3} \underbrace{\begin{bmatrix} 1 & & \\ & 1 & \\ & -\tfrac{1}{2} & 1 \end{bmatrix}}_{U_2^t} \underbrace{\begin{bmatrix} 1 & & \\ -\tfrac{1}{2} & 1 & \\ -\tfrac{3}{2} & & 1 \end{bmatrix}}_{U_1^t} \underbrace{\begin{bmatrix} 9 \\ 9 \\ 17 \end{bmatrix}}_{c} = \underbrace{\begin{bmatrix} 2 \\ 1 \\ 1 \end{bmatrix}}_{y}$$
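The transpose solution can be sketched in Python as well (function name mine): the same table of factors serves $A^t$, with the transposed factors applied in the mirrored order of form (a) above:

```python
def transpose_solve(T, c):
    """Solve A^t y = c using the table of factors T of A:
    apply U_1^t ... U_{n-1}^t forward, then each L_i^t and D_i backward."""
    n = len(T)
    w = list(c)
    for i in range(n):                 # U_i^t: subtract u_ij * w_i from row j
        for j in range(i + 1, n):
            w[j] -= T[i][j] * w[i]
    for i in range(n - 1, -1, -1):     # L_i^t then D_i
        for j in range(i + 1, n):
            w[i] -= T[j][i] * w[j]     # L_i^t: subtract l_ji * w_j
        w[i] *= T[i][i]                # D_i: multiply by d_ii
    return w
```

For the example, `transpose_solve([[0.5, 0.5, 1.5], [2, 0.5, 0.5], [3, 2.5, 0.8]], [9, 9, 17])` returns `[2.0, 1.0, 1.0]`, matching the slide.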
21. Computing Direct Solutions
• The operations can be extended to include certain two-way hybrid solutions with the matrix
partitioned at any desired point
• Let the hybrid column vector g be defined as: $g^t = (b_1, b_2, \ldots, b_k, x_{k+1}, x_{k+2}, \ldots, x_n)$
• If g is given, the unknown first $k$ elements of $x$ and the $(k+1)$th to $n$th elements of $b$ can be
obtained directly:
1st: Compute the intermediate vector $z$: $\; U_{n-1}^{-1} U_{n-2}^{-1} \cdots U_{k+1}^{-1}\, D_k L_k^* \cdots L_3^* D_2 L_2^* D_1\, g = z$
2nd: Using elements from $z$ and $g$, form the composite vector $h^t = (z_1, z_2, \ldots, z_k, x_{k+1}, x_{k+2}, \ldots, x_n)$
3rd: Using $h$, the first $k$ unknown elements of $x$ are obtained: $\; U_1 U_2 \cdots U_{k-1} U_k\, h = x$. This step defines the back substitution from $k$ to 1.
4th: The $(k+1)$th to $n$th unknown elements of $b$ are obtained from:
$$(L_{k+1}^*)^{-1} D_{k+1}^{-1} \cdots (L_{n-1}^*)^{-1} D_{n-1}^{-1} (L_n^*)^{-1} D_n^{-1}\, z = b'$$
23. Sparsity and Optimal Ordering
• When the matrix being triangularized is sparse, the order in which rows are
processed affects the number of nonzero terms in the resultant upper
triangle
• If a programming scheme is used that processes and stores only nonzero
terms, a very great saving in operations and memory can be achieved
• An efficient algorithm for determining the absolute optimal order hasn't
been developed, and it appears to be a practical impossibility
• Schemes for near-optimal ordering are introduced instead
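One standard near-optimal strategy is the minimum-degree heuristic: always eliminate next the node (row) with the fewest remaining nonzero neighbours. The Python sketch below is my own greedy stand-in in that spirit, working on the adjacency pattern of a structurally symmetric matrix; it is not the paper's exact ordering schemes:

```python
def minimum_degree_order(adj):
    """Greedy minimum-degree elimination ordering.  `adj` maps each
    node to the set of its neighbours (the nonzero pattern of a
    structurally symmetric matrix).  Eliminating a node connects its
    remaining neighbours pairwise, modelling fill-in."""
    adj = {v: set(ns) for v, ns in adj.items()}       # private copy
    order = []
    while adj:
        v = min(adj, key=lambda u: (len(adj[u]), u))  # fewest neighbours
        nbrs = adj.pop(v)
        for a in nbrs:
            adj[a].discard(v)
            adj[a] |= nbrs - {a}       # fill-in among v's neighbours
        order.append(v)
    return order
```

For a star network `{0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}`, the low-degree leaves are eliminated before the hub, so no fill-in edges are created; eliminating the hub first would have connected all leaves pairwise.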
24. Iterative vs. direct methods
• Direct solution methods produce an exact solution in a finite
number of steps (in exact arithmetic)
• Iterative methods begin with an initial guess for the solution and
successively improve it until the desired accuracy is attained
• In theory, it might take an infinite number of iterations to
converge to the exact solution, but in practice iterations are
terminated when the residual is as small as desired
• For some types of problems, iterative methods have
significant advantages over direct methods
25. Comparative advantages for a Sparse Matrix
1. The table of factors can be obtained in a small fraction of
the time required for the inverse
2. The storage requirement is small, permitting much larger
systems to be solved
3. Direct solutions can be obtained much faster unless the
independent vector is extremely sparse
4. Round-off error is reduced
5. Modifications due to changes in the matrix can be made
much faster