H-matrix based preconditioner for the skin problem
B.N.Khoromskij, A.Litvinenko
{bokh, litvinen}@mis.mpg.de
Max Planck Institute for Mathematics in the Sciences
Leipzig. 18/08/2006
Abstract
In this paper we propose and analyze a new H-Cholesky based preconditioner for the so-called
skin problem [5]. After a special reordering of the indices and omitting the coupling, we obtain a
block-diagonal matrix which is very well suited to the hierarchical Cholesky (H-Cholesky) factor-
ization. We perform the H-Cholesky factorization of this matrix and use it as a preconditioner
for the cg method. We show that the new preconditioner requires less memory and computational
time than the standard H-Cholesky preconditioner, which is itself already cheap and fast.
Key words: skin problem, H-matrix approximation, hierarchical Cholesky, jumping
coefficients, domain decomposition.
1 Introduction
In the series of papers [7], [9], [10] the authors successfully apply iterative methods (cg,
gmres, bicgstab) with H-matrix based preconditioners to different types of second order
elliptic problems. In this paper we continue the research in this direction.
Under certain conditions H-matrices can even be used as a direct solver. There are also
results (see, e.g., [11] and the references therein) where the authors apply additive Schwarz
domain decomposition preconditioners. It is known that for problems with jumping coefficients
(see (1)) the condition number satisfies

    cond(A) ~ h^{-d} sup_{x,y ∈ Ω} α(x)/α(y),

where α(x) denotes the jumping coefficient, d the spatial dimension and h the grid step size.
This is why a good preconditioner W is needed, so that cond(W^{-1}A) ≃ 1.
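To make this concrete, the following tiny 1D finite-difference sketch (our own illustration with arbitrary parameters, not the 3D FEM setting used below) shows how the coefficient jump enters the conditioning of A:

    import numpy as np

    # 1D diffusion matrix for -(alpha u')' on (0,1) with a coefficient jump at x = 0.5;
    # cond(A) grows both with 1/h and with the jump ratio sup(alpha)/inf(alpha).
    def fd_matrix(n, eps):
        h = 1.0 / (n + 1)
        x = np.linspace(0.0, 1.0, n + 2)
        mid = (x[:-1] + x[1:]) / 2.0              # midpoints of the n+1 intervals
        alpha = np.where(mid < 0.5, eps, 1.0)     # alpha = eps left of the jump, 1 right of it
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = (alpha[i] + alpha[i + 1]) / h**2
            if i > 0:
                A[i, i - 1] = -alpha[i] / h**2
            if i < n - 1:
                A[i, i + 1] = -alpha[i + 1] / h**2
        return A

    for eps in (1.0, 1e-2, 1e-4):
        print(eps, np.linalg.cond(fd_matrix(100, eps)))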
In this paper we consider a diffusion process (see (1)) through the domain shown in
Fig. 1 (left). The figure shows the cells and the lipid layer between them. In this problem
the Dirichlet boundary condition models the presence of a drug on the part γ of the boundary
of the skin fragment. The right-hand side represents external forces. The zero Neumann
condition on Γ \ γ means that there is no penetration through the surface Γ \ γ. Typical for
the skin problem are the strongly jumping coefficients: the penetration coefficient inside the
cells is very low, ~ 10^{-5}–10^{-3}, but it is large between the cells.
The diffusion equation has the form

    div(α(x)∇u) = f,    x ∈ Ω,
    u = 0,              x ∈ γ,
    ∂u/∂n = g,          x ∈ Γ \ γ,        (1)

where Γ = ∂Ω, α(x) = ε ≪ 1 inside the cells and α(x) = β = 1 in between.
Figure 1: (left) A skin fragment consists of cells and of the lipid layer between them. The penetration
through the cells goes very slowly and very fast through the lipid layer. (right) The
simplified model of a skin fragment contains 8 cells with the lipid layer between them;
Ω = [−1, 1]^3, α(x) = ε inside the cells and α(x) = β = 1 in the lipid layer.
The rest of this paper is structured as follows. In Section 2 we describe the discretisation, which is done
by FEM. We recall the main idea of the H-matrix technique in Section 3. Section 4 is
devoted to the new preconditioner and estimates of its complexity. Numerical tests and
comparisons of different preconditioners are provided in Section 5. Finally, some remarks
conclude the paper.
2 Discretisation (FEM)
Let us choose a triangulation τ_h which is compatible with the lipid layer, i.e., τ_h :=
τ_h^1 ∪ τ_h^2, where τ_h^1 is a triangulation of the lipid layer and τ_h^2 a triangulation of the cells. Let
b_j, j = 1, ..., n, be piecewise linear basis functions and

    V_h ⊂ H^1(Ω),    V_h := span{b_1, ..., b_n}.        (2)

Then the variational formulation of the initial problem is:

    find u_h ∈ V_h, so that a(u_h, v) = c(v) for all v ∈ V_h.        (3)

Using (2), we obtain the equivalent algebraic problem

    Au = c,    where A_{ij} = a(b_j, b_i) and c_i := c(b_i),  i, j = 1, ..., n.        (4)

Here

    a(b_j, b_i) = ∫_Ω α ∇b_j · ∇b_i dx,    c_j := ∫_Ω f b_j dx + ∫_{Γ\γ} g b_j dΓ.        (5)
The lipid layer between the cells defines a natural decomposition of Ω. The width of
this layer is proportional to the grid step size h. Note that after a reordering of the indices,
we can represent the global stiffness matrix in the following form:

    A = ( A11    εA12 )
        ( εA21   εA22 ).        (6)

Here A11 and A22 are the stiffness matrices which correspond to the lipid layer and to the
rest of the domain, respectively; A12 and A21 are coupling matrices. To simplify the model we will
consider Ω as in Fig. 1 (right).
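As an illustration, the following sketch (Python/NumPy; the assembled matrix A and the list of lipid-layer degrees of freedom are assumed to be given, and the function name is ours) performs exactly this reordering and extracts the blocks of (6):

    import numpy as np

    def reorder_to_block_form(A, lipid_dofs):
        """Permute the assembled stiffness matrix so that the lipid-layer dofs
        come first, exposing the 2x2 block structure of equation (6).
        A may be a dense array or a SciPy sparse matrix supporting fancy indexing."""
        n = A.shape[0]
        lipid = np.asarray(lipid_dofs)
        cells = np.setdiff1d(np.arange(n), lipid)
        perm = np.concatenate([lipid, cells])
        A_perm = A[perm, :][:, perm]        # symmetric permutation P A P^T
        n_I = len(lipid)
        A11 = A_perm[:n_I, :n_I]            # lipid layer (coefficient beta = 1)
        A12 = A_perm[:n_I, n_I:]            # coupling blocks (carry the factor eps)
        A21 = A_perm[n_I:, :n_I]
        A22 = A_perm[n_I:, n_I:]            # cells (coefficient eps)
        return A11, A12, A21, A22, perm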
3 Hierarchical Matrices
Hierarchical matrices (H-matrices) were introduced in 1998 by Hackbusch [2] and
have since been applied in a wide range of applications. They provide a
format for the data-sparse representation of fully populated matrices. Suppose there are
two matrices A ∈ R^{n×k} and B ∈ R^{m×k}, k ≪ min(n, m), such that AB^T = R ∈ R^{n×m}. We
then say that R is a rank-k matrix. The main idea of H-matrices is to approximate
certain subblocks of a given matrix by rank-k matrices. The admissible partitioning
indicates which blocks can be approximated by rank-k matrices. The storage requirement
for the factors A and B is k(n + m) instead of n · m for the matrix R. One of the biggest
advantages of H-matrices is that the complexity of the H-matrix addition, multiplication
and inversion is not larger than C k n log^q n, q = 1, 2 (see [2], [13]). A drawback is that the
constant C is large; in the 3D case, for example, it can exceed 120.
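A minimal sketch of the rank-k format just described (only the storage and the matrix-vector product; truncation by SVD is one possible way to obtain the factors):

    import numpy as np

    class RkMatrix:
        """Rank-k block R = A @ B.T stored by its factors: k(n+m) numbers instead of n*m."""
        def __init__(self, A, B):
            self.A, self.B = A, B                  # A: n x k, B: m x k
        def matvec(self, x):
            return self.A @ (self.B.T @ x)         # cost O(k(n+m)) instead of O(n*m)

    def truncate(M, k):
        """Best rank-k approximation of a dense block via the SVD."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return RkMatrix(U[:, :k] * s[:k], Vt[:k, :].T)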
To build an H-matrix one needs an admissible block partitioning (see Fig. 2). To build
this partitioning one needs an admissibility condition and a block cluster tree. To build
the block cluster tree a cluster tree is necessary. The cluster tree requires grid data. For
more details see [2] or [13].
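The admissibility condition itself is an implementation choice; a common geometric variant (a sketch under this assumption, with η a user parameter, cf. [2], [13]) compares the cluster diameters with the cluster distance:

    import numpy as np

    def admissible(t_pts, s_pts, eta=2.0):
        """min(diam(t), diam(s)) <= eta * dist(t, s) for two clusters given as point sets."""
        def diam(p):
            return np.max(np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1))
        dist = np.min(np.linalg.norm(t_pts[:, None, :] - s_pts[None, :, :], axis=-1))
        return min(diam(t_pts), diam(s_pts)) <= eta * dist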
Figure 2: The schema of building an H-matrix and its H-Cholesky factorisation: vertices of the
finite elements → cluster tree → block cluster tree, which together with the admissibility
condition yields the admissible partitioning → H-matrix → H-Cholesky factorization.
Definition 3.1 We define the set of H-matrices with maximal rank k as follows:

    H(T_{I×J}, k) := {M ∈ R^{I×J} | rank(M|_{t×s}) ≤ k for all admissible leaves t × s of T_{I×J}}.
Algorithm of the H-Cholesky factorization
Our aim is to compute the H-Cholesky factorization of the stiffness matrix which appears
after discretisation of the Laplace operator. Suppose that
    A = ( A11  A12 ) = ( L11   0  ) ( U11  U12 )
        ( A21  A22 )   ( L21  L22 ) (  0   U22 ),

then the algorithm is as follows:
1. compute L11 and U11 as H-Cholesky decomposition of A11.
2. compute U12 from L11U12 = A12 (use a recursive block forward substitution).
3. compute L21 from L21U11 = A21 (use a recursive block backward substitution).
4. compute L22 and U22 as H-Cholesky decomposition of L22U22 = A22 ⊖ L21 ⊙ U12.
All the steps are executed in the class of H-matrices.
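A dense-arithmetic sketch of these four steps (NumPy/SciPy stands in for the truncated H-operations ⊖ and ⊙; the function name is ours, not an HLIB routine):

    import numpy as np
    from scipy.linalg import cholesky, solve_triangular

    def block_cholesky(A11, A12, A21, A22):
        """Steps 1-4 of the algorithm above on one 2x2 block level."""
        L11 = cholesky(A11, lower=True)                       # step 1: A11 = L11 U11
        U11 = L11.T
        U12 = solve_triangular(L11, A12, lower=True)          # step 2: L11 U12 = A12
        L21 = solve_triangular(U11.T, A21.T, lower=True).T    # step 3: L21 U11 = A21
        S = A22 - L21 @ U12                                   # step 4: Schur complement
        L22 = cholesky(S, lower=True)
        U22 = L22.T
        L = np.block([[L11, np.zeros_like(A12)], [L21, L22]])
        U = np.block([[U11, U12], [np.zeros_like(A21), U22]])
        return L, U

In the H-version each of these steps is applied recursively to the subblocks, and every intermediate result is truncated back to rank k.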
4 New Preconditioner
The H-Cholesky factorization of the stiffness matrix produces an H-matrix as shown in Fig.
3 (left). After reordering the index set I(Ω) and omitting the coupling between the cells and
the lipid layer, we obtain an H-matrix as shown in Fig. 3 (right). As the new preconditioner
we use the H-Cholesky decomposition of

    W = ( A11    0   )
        (  0   εA22  ).        (7)
Remark 4.1 Note that W^{-1}A = (LL^T)^{-1}A = L^{-T}L^{-1}A is similar to L^{-1}AL^{-T}, which is
symmetric and positive definite. Thus, for solving the initial problem (4) we apply the pcg
method with the H-Cholesky preconditioner.
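A minimal sketch of this pcg setup (dense Cholesky factors of the two diagonal blocks stand in for the H-Cholesky factors; the function and its arguments are our own illustration, not the HLIB interface):

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve
    from scipy.sparse.linalg import LinearOperator, cg

    def pcg_with_block_preconditioner(A, A11, A22, eps, b):
        """Solve A u = b by cg, preconditioned with W = diag(A11, eps*A22) from (7)."""
        n1 = A11.shape[0]
        F1 = cho_factor(A11)                    # stand-in for the H-Cholesky factor of A11
        F2 = cho_factor(eps * A22)              # stand-in for the H-Cholesky factor of eps*A22
        def apply_Winv(r):                      # z = W^{-1} r, applied block by block
            z = np.empty_like(r)
            z[:n1] = cho_solve(F1, r[:n1])
            z[n1:] = cho_solve(F2, r[n1:])
            return z
        M = LinearOperator(A.shape, matvec=apply_Winv)
        u, info = cg(A, b, M=M)                 # info == 0 on convergence
        return u, info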
Below we prove that omitting the coupling is admissible for small ε.
Lemma 4.1 For a symmetric positive definite matrix A = ( A11 A12; A21 A22 ) and any
vector v = (v1, v2)^T it holds that

    (A12 v2, v1) ≤ ||A11^{1/2} v1|| · ||A22^{1/2} v2||.

Proof: From the Cauchy inequality it follows for any vectors u, v that

    u^T A v = (u, v)_A ≤ ||u||_A · ||v||_A.

Take the two vectors u = (v1, 0)^T and v = (0, v2)^T; then u^T A v = (A12 v2, v1). It follows that

    (A12 v2, v1) ≤ ||v1||_{A11} · ||v2||_{A22} = ||A11^{1/2} v1|| · ||A22^{1/2} v2||.
Lemma 4.2 For a symmetric positive definite matrix A = ( A11 A12; A21 A22 ) and any
vectors u1, u2 it holds that

    2(A12 u2, u1) ≤ (A11 u1, u1) + (A22 u2, u2),

and consequently

    (A12 u2, u1) ≤ 1/2 ( ||A11^{1/2} u1||^2 + ||A22^{1/2} u2||^2 ).

Proof: Consider the vector u = (u1, −u2)^T. From the positive definiteness of A it follows that

    0 ≤ (Au, u) = (A11 u1, u1) − (A12 u2, u1) − (A21 u1, u2) + (A22 u2, u2).

Moving the negative terms to the left-hand side, we obtain

    (A12 u2, u1) + (A21 u1, u2) ≤ (A11 u1, u1) + (A22 u2, u2).

Recalling that A is symmetric, so that (A21 u1, u2) = (A12 u2, u1), we obtain
2(A12 u2, u1) ≤ (A11 u1, u1) + (A22 u2, u2) and hence

    (A12 u2, u1) ≤ 1/2 ( (A11^{1/2} u1, A11^{1/2} u1) + (A22^{1/2} u2, A22^{1/2} u2) )
                 = 1/2 ( ||A11^{1/2} u1||^2 + ||A22^{1/2} u2||^2 ).
Lemma 4.3 Let A be the stiffness matrix (6) with ε ≤ 1 and let W = ( A11 0; 0 εA22 ) be the
preconditioner (7). Then for any vector u = (u1, u2)^T

    (Au, u) ≤ 2(Wu, u).        (8)

Proof: Compute both scalar products:

    (Wu, u) = ( ( A11 0; 0 εA22 ) (u1, u2)^T, (u1, u2)^T ) = (A11 u1, u1) + ε(A22 u2, u2),

    (Au, u) = ( ( A11 εA12; εA21 εA22 ) (u1, u2)^T, (u1, u2)^T )
            = (A11 u1, u1) + 2ε(A12 u2, u1) + ε(A22 u2, u2) = (Wu, u) + 2ε(A12 u2, u1).

From Lemma 4.2 it follows that 2ε(A12 u2, u1) ≤ ε(A11 u1, u1) + ε(A22 u2, u2) ≤ (A11 u1, u1) + ε(A22 u2, u2) = (Wu, u),
since ε ≤ 1. Hence (Au, u) ≤ (Wu, u) + (Wu, u) = 2(Wu, u).
Remark 4.2 Recall that A and W are spectrally equivalent if c1(Wu, u) ≤ (Au, u) ≤ c2(Wu, u)
for all u ∈ R^n, written c1 · I ≤ W^{-1}A ≤ c2 · I.

Lemma 4.4 The matrices A and W are spectrally equivalent with I ≤ W^{-1}A ≤ 2 · I.

Proof: We write A ≥ B if A − B is positive semi-definite. From Lemma 4.3 it follows that
(Au, u) ≤ 2(Wu, u) for all u ∈ R^n. Moving everything to the left-hand side, we obtain ((A − 2W)u, u) ≤ 0.
Since this holds for all u, we have A − 2W ≤ 0, i.e. W^{-1}A ≤ 2 · I.
From the construction of W it is clear that A − W ≥ 0, i.e. W^{-1}A ≥ I.
Thus, I ≤ W^{-1}A ≤ 2 · I.
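The bound (8) of Lemma 4.3 can be checked numerically on a random SPD matrix with the block structure (6); the following sketch uses arbitrary block sizes and ε:

    import numpy as np

    rng = np.random.default_rng(0)
    n1, n2, eps = 30, 70, 1e-3
    M = rng.standard_normal((n1 + n2, n1 + n2))
    A = M @ M.T + (n1 + n2) * np.eye(n1 + n2)       # SPD test matrix
    A[:n1, n1:] *= eps                              # impose the block scaling of (6):
    A[n1:, :n1] *= eps                              #   (A11, eps*A12; eps*A21, eps*A22)
    A[n1:, n1:] *= eps
    W = np.zeros_like(A)                            # preconditioner (7): diag(A11, eps*A22)
    W[:n1, :n1] = A[:n1, :n1]
    W[n1:, n1:] = A[n1:, n1:]
    for _ in range(1000):
        u = rng.standard_normal(n1 + n2)
        assert u @ A @ u <= 2.0 * (u @ W @ u) * (1.0 + 1e-12)   # (Au, u) <= 2 (Wu, u)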
Figure 3: H-Cholesky factorizations of the standard stiffness matrix (left) and of the stiffness
matrix without coupling between the lipid layer and the cells (right). The dark blocks ∈ R^{36×36}
are dense matrices and the light blocks are low-rank matrices. The steps in the
grey blocks show the decay of the singular values on a logarithmic scale.
5 Numerical tests
Table 1 gives the theoretical estimates of the sequential and parallel complexities of the
H-Cholesky factorizations of W1 and W2.

    Preconditioner                                        Sequential complexity               Parallel complexity
    W1 := H-Cholesky decomp. of ( A11 A12; A21 A22 )      O(k n log^2 n)                      O(k n log^2 n)
    W2 := H-Cholesky decomp. of ( A11 0; 0 A22 )          O(k n_I log^2 n_I)                  max{ O(k n_I log^2 n_I),
                                                            + O(k (n − n_I) log^2 (n − n_I))         O(k n_0 log^2 n_0) }

Table 1: Complexities of the preconditioners W1 and W2. Here p is the number of processors,
n_I is the number of degrees of freedom in the lipid layer, and n_0 := (n − n_I)/(p − 1).
Remark 5.1 The sparsity constant C_sp is an important H-matrix characteristic and is present
in all H-matrix complexity estimates. This constant depends on the size of the H-matrix.
Since the new preconditioner is simpler, its sparsity constant is also smaller: in the
numerical experiments for Table 2 we have C_sp(W1) = 64 and C_sp(W2) = 26.
Figure 4: Decay of the singular values of A for ε = 1, ε = 10^{-2} and ε = 10^{-4}.
For the model domain with a larger number of cells the difference between the sparsity constants
will be even more significant.
Table 2 shows the resource requirements of the preconditioners W1 and W2. We see
that W2 requires fewer resources than W1: less memory (S(W1) > S(W2)) and less
time (t(W1) > t(W2)) for its construction. Columns 2 and 5 contain the times for computing
the Cholesky factorisations and for the cg iterations. In Table 3 we compare the solutions ũ and
u_cg, obtained with the preconditioners W2 and W1, respectively; the solution u_cg, obtained with the
preconditioner W1, is considered as 'exact'.

    k    t(W1), sec    S(W1), MB    iter(W1)    t(W2), sec     S(W2), MB    iter(W2)
    1    24 + 10.6     2 · 10^2     69          8.7 + 10       10^2         99
    2    70 + 11.3     3.8 · 10^2   46          21.6 + 13.3    1.8 · 10^2   91
    4    208 + 12.5    7.5 · 10^2   17          68 + 13.5      3.5 · 10^2   60
    6    483.7 + 82    1.1 · 10^3   11          123 + 26       5.1 · 10^2   74

Table 2: Comparison of the preconditioners W1 and W2. 40^3 dofs, ||Ax − b|| = 10^{-8}, α = 10^{-5}.

    k    |u_cg − ũ| / |ũ|    |u_cg − ũ|_∞
    1    5.3 · 10^{-10}      4.5 · 10^{-6}
    2    5.1 · 10^{-9}       3.5 · 10^{-8}
    4    5.8 · 10^{-10}      4.6 · 10^{-6}
    6    7.2 · 10^{-10}      2.5 · 10^{-5}

Table 3: Comparison of the solutions u_cg and ũ. 40^3 dofs, ||Ax − b|| = 10^{-8}, α = 10^{-5}.
6 Conclusion
The matrix W2 can be successfully used as a preconditioner. The simple structure of
W2 is the reason why it is well parallelisable: the parallel computational complexity is
max{O(n_I log^2 n_I), O(n_D log^2 n_D)}, where n_D := (n − n_I)/(p − 1) and n_I is the number of
degrees of freedom in the lipid layer. The sequential version of the preconditioner W2 also requires
less memory. Note that the more cells the domain Ω contains, the bigger the advantages in storage
and computational resources will be (see Table 2). A disadvantage is the relatively large number of
pcg iterations, but these iterations require less resources than with the standard H-Cholesky
preconditioner W1. Within HLIB (see [1]) the proposed preconditioner is quite easy to implement.
Acknowledgment: The authors wish to thank Prof. Dr. Hackbusch for his corrections,
as well as Dr. Börm and Dr. Grasedyck for HLIB.
References
[1] Hierarchical matrix library: www.hlib.org
[2] W. Hackbusch: A sparse matrix arithmetic based on H-matrices. Part I: Introduction to H-matrices. Computing, 62:89-108, 1999.
[3] W. Hackbusch: Direct Domain Decomposition using the Hierarchical Matrix Technique. In: Domain Decomposition Methods in Science and Engineering, pp. 39-50, Cocoyoc, Mexico, 2003.
[4] W. Hackbusch, B.N. Khoromskij and R. Kriemann: Direct Schur Complement Method by Hierarchical Matrix Techniques. Computing and Visualization in Science, 8:179-188, 2005.
[5] B.N. Khoromskij and G. Wittum: Numerical Solution of Elliptic Differential Equations by Reduction to the Interface. LNCSE 36, Springer, 2004.
[6] M. Bebendorf and W. Hackbusch: Existence of H-matrix approximants to the inverse FE-matrix of elliptic operators with L∞-coefficients. Numerische Mathematik, 95:1-28, 2003.
[7] M. Bebendorf: Hierarchical LU decomposition-based preconditioners for BEM. Computing, 74:225-247, 2005.
[8] S. Le Borne, R. Kriemann and L. Grasedyck: Parallel Black Box Domain Decomposition Based H-LU Preconditioning. Preprint 115, Max-Planck-Institut MIS, Leipzig, 2005.
[9] S. Le Borne and L. Grasedyck: H-matrix preconditioners in convection-dominated problems. SIAM J. Matrix Anal. Appl., 27(4):1172-1183.
[10] S. Le Borne: H-matrices for convection-diffusion problems with constant convection. Computing, 70:261-274, 2003.
[11] I.G. Graham, P. Lechner and R. Scheichl: Domain Decomposition for Multiscale PDEs. Bath Institute for Complex Systems, Preprint 11/06, 2006, available at www.bath.ac.uk/math-sci/BICS
[12] A. Litvinenko: Application of Hierarchical Matrices for Solving Multiscale Problems. PhD Dissertation, Leipzig University, submitted, April 2006.
[13] L. Grasedyck and W. Hackbusch: Construction and Arithmetics of H-Matrices. Computing, 70:295-334, 2003.
[14] M. Lintner: The eigenvalue problem for the Laplacian in H-matrix arithmetic and application to the heat and wave equation. Computing, 72:293-323, 2004.
9

More Related Content

What's hot

Higher order ODE with applications
Higher order ODE with applicationsHigher order ODE with applications
Higher order ODE with applications
Pratik Gadhiya
 
01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues
Andres Mendez-Vazquez
 
FPDE presentation
FPDE presentationFPDE presentation
FPDE presentation
Divyansh Verma
 
Inner product
Inner productInner product
Inner product
Dhrupal Patel
 
Solve ODE - BVP through the Least Squares Method
Solve ODE - BVP through the Least Squares MethodSolve ODE - BVP through the Least Squares Method
Solve ODE - BVP through the Least Squares Method
Suddhasheel GHOSH, PhD
 
Dag in mmhc
Dag in mmhcDag in mmhc
Dag in mmhc
KyusonLim
 
U unit3 vm
U unit3 vmU unit3 vm
U unit3 vm
Akhilesh Deshpande
 
Engineering Mathematics 2 questions & answers
Engineering Mathematics 2 questions & answersEngineering Mathematics 2 questions & answers
Engineering Mathematics 2 questions & answers
Mzr Zia
 
01.02 linear equations
01.02 linear equations01.02 linear equations
01.02 linear equations
Andres Mendez-Vazquez
 
Jacobi iterative method
Jacobi iterative methodJacobi iterative method
Jacobi iterative method
Luckshay Batra
 
Improper integral
Improper integralImproper integral
Differential in several variables
Differential in several variables Differential in several variables
Differential in several variables
Kum Visal
 
How to Solve a Partial Differential Equation on a surface
How to Solve a Partial Differential Equation on a surfaceHow to Solve a Partial Differential Equation on a surface
How to Solve a Partial Differential Equation on a surface
tr1987
 
Ordinary differential equations
Ordinary differential equationsOrdinary differential equations
Ordinary differential equationsAhmed Haider
 
Engineering Mathematics-IV_B.Tech_Semester-IV_Unit-IV
Engineering Mathematics-IV_B.Tech_Semester-IV_Unit-IVEngineering Mathematics-IV_B.Tech_Semester-IV_Unit-IV
Engineering Mathematics-IV_B.Tech_Semester-IV_Unit-IV
Rai University
 
least squares approach in finite element method
least squares approach in finite element methodleast squares approach in finite element method
least squares approach in finite element method
sabiha khathun
 
Notes on eigenvalues
Notes on eigenvaluesNotes on eigenvalues
Notes on eigenvalues
AmanSaeed11
 

What's hot (20)

Higher order ODE with applications
Higher order ODE with applicationsHigher order ODE with applications
Higher order ODE with applications
 
01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues
 
FPDE presentation
FPDE presentationFPDE presentation
FPDE presentation
 
Inner product
Inner productInner product
Inner product
 
Solve ODE - BVP through the Least Squares Method
Solve ODE - BVP through the Least Squares MethodSolve ODE - BVP through the Least Squares Method
Solve ODE - BVP through the Least Squares Method
 
Dag in mmhc
Dag in mmhcDag in mmhc
Dag in mmhc
 
U unit3 vm
U unit3 vmU unit3 vm
U unit3 vm
 
Engineering Mathematics 2 questions & answers
Engineering Mathematics 2 questions & answersEngineering Mathematics 2 questions & answers
Engineering Mathematics 2 questions & answers
 
01.02 linear equations
01.02 linear equations01.02 linear equations
01.02 linear equations
 
Jacobi iterative method
Jacobi iterative methodJacobi iterative method
Jacobi iterative method
 
1500403828
15004038281500403828
1500403828
 
Higher order differential equations
Higher order differential equationsHigher order differential equations
Higher order differential equations
 
Improper integral
Improper integralImproper integral
Improper integral
 
Differential in several variables
Differential in several variables Differential in several variables
Differential in several variables
 
How to Solve a Partial Differential Equation on a surface
How to Solve a Partial Differential Equation on a surfaceHow to Solve a Partial Differential Equation on a surface
How to Solve a Partial Differential Equation on a surface
 
Differential equations
Differential equationsDifferential equations
Differential equations
 
Ordinary differential equations
Ordinary differential equationsOrdinary differential equations
Ordinary differential equations
 
Engineering Mathematics-IV_B.Tech_Semester-IV_Unit-IV
Engineering Mathematics-IV_B.Tech_Semester-IV_Unit-IVEngineering Mathematics-IV_B.Tech_Semester-IV_Unit-IV
Engineering Mathematics-IV_B.Tech_Semester-IV_Unit-IV
 
least squares approach in finite element method
least squares approach in finite element methodleast squares approach in finite element method
least squares approach in finite element method
 
Notes on eigenvalues
Notes on eigenvaluesNotes on eigenvalues
Notes on eigenvalues
 

Viewers also liked

Application H-matrices for solving PDEs with multi-scale coefficients, jumpin...
Application H-matrices for solving PDEs with multi-scale coefficients, jumpin...Application H-matrices for solving PDEs with multi-scale coefficients, jumpin...
Application H-matrices for solving PDEs with multi-scale coefficients, jumpin...
Alexander Litvinenko
 
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...
Alexander Litvinenko
 
My PhD on 4 pages
My PhD on 4 pagesMy PhD on 4 pages
My PhD on 4 pages
Alexander Litvinenko
 
Data sparse approximation of the Karhunen-Loeve expansion
Data sparse approximation of the Karhunen-Loeve expansionData sparse approximation of the Karhunen-Loeve expansion
Data sparse approximation of the Karhunen-Loeve expansion
Alexander Litvinenko
 
Low-rank methods for analysis of high-dimensional data (SIAM CSE talk 2017)
Low-rank methods for analysis of high-dimensional data (SIAM CSE talk 2017) Low-rank methods for analysis of high-dimensional data (SIAM CSE talk 2017)
Low-rank methods for analysis of high-dimensional data (SIAM CSE talk 2017)
Alexander Litvinenko
 
Minimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian updateMinimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian update
Alexander Litvinenko
 
Likelihood approximation with parallel hierarchical matrices for large spatia...
Likelihood approximation with parallel hierarchical matrices for large spatia...Likelihood approximation with parallel hierarchical matrices for large spatia...
Likelihood approximation with parallel hierarchical matrices for large spatia...
Alexander Litvinenko
 
Connection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problemsConnection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problems
Alexander Litvinenko
 
Multi-linear algebra and different tensor formats with applications
Multi-linear algebra and different tensor formats with applications Multi-linear algebra and different tensor formats with applications
Multi-linear algebra and different tensor formats with applications
Alexander Litvinenko
 
Application of hierarchical matrices for partial inverse
Application of hierarchical matrices for partial inverseApplication of hierarchical matrices for partial inverse
Application of hierarchical matrices for partial inverse
Alexander Litvinenko
 
Tensor train to solve stochastic PDEs
Tensor train to solve stochastic PDEsTensor train to solve stochastic PDEs
Tensor train to solve stochastic PDEs
Alexander Litvinenko
 
A small introduction into H-matrices which I gave for my colleagues
A small introduction into H-matrices which I gave for my colleaguesA small introduction into H-matrices which I gave for my colleagues
A small introduction into H-matrices which I gave for my colleagues
Alexander Litvinenko
 
Litvinenko low-rank kriging +FFT poster
Litvinenko low-rank kriging +FFT  posterLitvinenko low-rank kriging +FFT  poster
Litvinenko low-rank kriging +FFT poster
Alexander Litvinenko
 
Litvinenko nlbu2016
Litvinenko nlbu2016Litvinenko nlbu2016
Litvinenko nlbu2016
Alexander Litvinenko
 
My PhD talk "Application of H-matrices for computing partial inverse"
My PhD talk "Application of H-matrices for computing partial inverse"My PhD talk "Application of H-matrices for computing partial inverse"
My PhD talk "Application of H-matrices for computing partial inverse"
Alexander Litvinenko
 
Low-rank tensor methods for stochastic forward and inverse problems
Low-rank tensor methods for stochastic forward and inverse problemsLow-rank tensor methods for stochastic forward and inverse problems
Low-rank tensor methods for stochastic forward and inverse problems
Alexander Litvinenko
 
Response Surface in Tensor Train format for Uncertainty Quantification
Response Surface in Tensor Train format for Uncertainty QuantificationResponse Surface in Tensor Train format for Uncertainty Quantification
Response Surface in Tensor Train format for Uncertainty Quantification
Alexander Litvinenko
 
Hierarchical matrix approximation of large covariance matrices
Hierarchical matrix approximation of large covariance matricesHierarchical matrix approximation of large covariance matrices
Hierarchical matrix approximation of large covariance matrices
Alexander Litvinenko
 

Viewers also liked (20)

Application H-matrices for solving PDEs with multi-scale coefficients, jumpin...
Application H-matrices for solving PDEs with multi-scale coefficients, jumpin...Application H-matrices for solving PDEs with multi-scale coefficients, jumpin...
Application H-matrices for solving PDEs with multi-scale coefficients, jumpin...
 
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...
 
RS
RSRS
RS
 
My PhD on 4 pages
My PhD on 4 pagesMy PhD on 4 pages
My PhD on 4 pages
 
Data sparse approximation of the Karhunen-Loeve expansion
Data sparse approximation of the Karhunen-Loeve expansionData sparse approximation of the Karhunen-Loeve expansion
Data sparse approximation of the Karhunen-Loeve expansion
 
Low-rank methods for analysis of high-dimensional data (SIAM CSE talk 2017)
Low-rank methods for analysis of high-dimensional data (SIAM CSE talk 2017) Low-rank methods for analysis of high-dimensional data (SIAM CSE talk 2017)
Low-rank methods for analysis of high-dimensional data (SIAM CSE talk 2017)
 
Minimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian updateMinimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian update
 
Likelihood approximation with parallel hierarchical matrices for large spatia...
Likelihood approximation with parallel hierarchical matrices for large spatia...Likelihood approximation with parallel hierarchical matrices for large spatia...
Likelihood approximation with parallel hierarchical matrices for large spatia...
 
Connection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problemsConnection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problems
 
Multi-linear algebra and different tensor formats with applications
Multi-linear algebra and different tensor formats with applications Multi-linear algebra and different tensor formats with applications
Multi-linear algebra and different tensor formats with applications
 
Application of hierarchical matrices for partial inverse
Application of hierarchical matrices for partial inverseApplication of hierarchical matrices for partial inverse
Application of hierarchical matrices for partial inverse
 
Tensor train to solve stochastic PDEs
Tensor train to solve stochastic PDEsTensor train to solve stochastic PDEs
Tensor train to solve stochastic PDEs
 
A small introduction into H-matrices which I gave for my colleagues
A small introduction into H-matrices which I gave for my colleaguesA small introduction into H-matrices which I gave for my colleagues
A small introduction into H-matrices which I gave for my colleagues
 
Litvinenko low-rank kriging +FFT poster
Litvinenko low-rank kriging +FFT  posterLitvinenko low-rank kriging +FFT  poster
Litvinenko low-rank kriging +FFT poster
 
Litvinenko nlbu2016
Litvinenko nlbu2016Litvinenko nlbu2016
Litvinenko nlbu2016
 
My PhD talk "Application of H-matrices for computing partial inverse"
My PhD talk "Application of H-matrices for computing partial inverse"My PhD talk "Application of H-matrices for computing partial inverse"
My PhD talk "Application of H-matrices for computing partial inverse"
 
add_2_diplom_main
add_2_diplom_mainadd_2_diplom_main
add_2_diplom_main
 
Low-rank tensor methods for stochastic forward and inverse problems
Low-rank tensor methods for stochastic forward and inverse problemsLow-rank tensor methods for stochastic forward and inverse problems
Low-rank tensor methods for stochastic forward and inverse problems
 
Response Surface in Tensor Train format for Uncertainty Quantification
Response Surface in Tensor Train format for Uncertainty QuantificationResponse Surface in Tensor Train format for Uncertainty Quantification
Response Surface in Tensor Train format for Uncertainty Quantification
 
Hierarchical matrix approximation of large covariance matrices
Hierarchical matrix approximation of large covariance matricesHierarchical matrix approximation of large covariance matrices
Hierarchical matrix approximation of large covariance matrices
 

Similar to My paper for Domain Decomposition Conference in Strobl, Austria, 2005

Numerical Analysis Assignment Help
Numerical Analysis Assignment HelpNumerical Analysis Assignment Help
Numerical Analysis Assignment Help
Math Homework Solver
 
Chemistry Assignment Help
Chemistry Assignment Help Chemistry Assignment Help
Chemistry Assignment Help
Edu Assignment Help
 
Anomalous Diffusion Through Homopolar Membrane: One-Dimensional Model_ Crimso...
Anomalous Diffusion Through Homopolar Membrane: One-Dimensional Model_ Crimso...Anomalous Diffusion Through Homopolar Membrane: One-Dimensional Model_ Crimso...
Anomalous Diffusion Through Homopolar Membrane: One-Dimensional Model_ Crimso...
Crimsonpublishers-Mechanicalengineering
 
physics430_lecture11.ppt
physics430_lecture11.pptphysics430_lecture11.ppt
physics430_lecture11.ppt
manjarigupta43
 
What are free particles in quantum mechanics
What are free particles in quantum mechanicsWhat are free particles in quantum mechanics
What are free particles in quantum mechanics
bhaskar chatterjee
 
Numerical Analysis Assignment Help
Numerical Analysis Assignment HelpNumerical Analysis Assignment Help
Numerical Analysis Assignment Help
Math Homework Solver
 
Hierarchical matrices for approximating large covariance matries and computin...
Hierarchical matrices for approximating large covariance matries and computin...Hierarchical matrices for approximating large covariance matries and computin...
Hierarchical matrices for approximating large covariance matries and computin...
Alexander Litvinenko
 
Berans qm overview
Berans qm overviewBerans qm overview
Berans qm overview
Leonardo Nosce
 
Numerical Solution of Linear algebraic Equation
Numerical Solution of Linear algebraic EquationNumerical Solution of Linear algebraic Equation
Numerical Solution of Linear algebraic Equation
payalpriyadarshinisa1
 
sublabel accurate convex relaxation of vectorial multilabel energies
sublabel accurate convex relaxation of vectorial multilabel energiessublabel accurate convex relaxation of vectorial multilabel energies
sublabel accurate convex relaxation of vectorial multilabel energies
Fujimoto Keisuke
 
On the Seidel’s Method, a Stronger Contraction Fixed Point Iterative Method o...
On the Seidel’s Method, a Stronger Contraction Fixed Point Iterative Method o...On the Seidel’s Method, a Stronger Contraction Fixed Point Iterative Method o...
On the Seidel’s Method, a Stronger Contraction Fixed Point Iterative Method o...
BRNSS Publication Hub
 
Computer Network Homework Help
Computer Network Homework HelpComputer Network Homework Help
Computer Network Homework Help
Computer Network Assignment Help
 
Online Signals and Systems Assignment Help
Online Signals and Systems Assignment HelpOnline Signals and Systems Assignment Help
Online Signals and Systems Assignment Help
Matlab Assignment Experts
 
Reachability Analysis Control of Non-Linear Dynamical Systems
Reachability Analysis Control of Non-Linear Dynamical SystemsReachability Analysis Control of Non-Linear Dynamical Systems
Reachability Analysis Control of Non-Linear Dynamical SystemsM Reza Rahmati
 

Similar to My paper for Domain Decomposition Conference in Strobl, Austria, 2005 (20)

magnt
magntmagnt
magnt
 
Numerical Analysis Assignment Help
Numerical Analysis Assignment HelpNumerical Analysis Assignment Help
Numerical Analysis Assignment Help
 
Chemistry Assignment Help
Chemistry Assignment Help Chemistry Assignment Help
Chemistry Assignment Help
 
Anomalous Diffusion Through Homopolar Membrane: One-Dimensional Model_ Crimso...
Anomalous Diffusion Through Homopolar Membrane: One-Dimensional Model_ Crimso...Anomalous Diffusion Through Homopolar Membrane: One-Dimensional Model_ Crimso...
Anomalous Diffusion Through Homopolar Membrane: One-Dimensional Model_ Crimso...
 
physics430_lecture11.ppt
physics430_lecture11.pptphysics430_lecture11.ppt
physics430_lecture11.ppt
 
What are free particles in quantum mechanics
What are free particles in quantum mechanicsWhat are free particles in quantum mechanics
What are free particles in quantum mechanics
 
project final
project finalproject final
project final
 
Numerical Analysis Assignment Help
Numerical Analysis Assignment HelpNumerical Analysis Assignment Help
Numerical Analysis Assignment Help
 
Hierarchical matrices for approximating large covariance matries and computin...
Hierarchical matrices for approximating large covariance matries and computin...Hierarchical matrices for approximating large covariance matries and computin...
Hierarchical matrices for approximating large covariance matries and computin...
 
Berans qm overview
Berans qm overviewBerans qm overview
Berans qm overview
 
Numerical Solution of Linear algebraic Equation
Numerical Solution of Linear algebraic EquationNumerical Solution of Linear algebraic Equation
Numerical Solution of Linear algebraic Equation
 
sublabel accurate convex relaxation of vectorial multilabel energies
sublabel accurate convex relaxation of vectorial multilabel energiessublabel accurate convex relaxation of vectorial multilabel energies
sublabel accurate convex relaxation of vectorial multilabel energies
 
Cs jog
Cs jogCs jog
Cs jog
 
On the Seidel’s Method, a Stronger Contraction Fixed Point Iterative Method o...
On the Seidel’s Method, a Stronger Contraction Fixed Point Iterative Method o...On the Seidel’s Method, a Stronger Contraction Fixed Point Iterative Method o...
On the Seidel’s Method, a Stronger Contraction Fixed Point Iterative Method o...
 
03_AJMS_166_18_RA.pdf
03_AJMS_166_18_RA.pdf03_AJMS_166_18_RA.pdf
03_AJMS_166_18_RA.pdf
 
03_AJMS_166_18_RA.pdf
03_AJMS_166_18_RA.pdf03_AJMS_166_18_RA.pdf
03_AJMS_166_18_RA.pdf
 
Coueete project
Coueete projectCoueete project
Coueete project
 
Computer Network Homework Help
Computer Network Homework HelpComputer Network Homework Help
Computer Network Homework Help
 
Online Signals and Systems Assignment Help
Online Signals and Systems Assignment HelpOnline Signals and Systems Assignment Help
Online Signals and Systems Assignment Help
 
Reachability Analysis Control of Non-Linear Dynamical Systems
Reachability Analysis Control of Non-Linear Dynamical SystemsReachability Analysis Control of Non-Linear Dynamical Systems
Reachability Analysis Control of Non-Linear Dynamical Systems
 

More from Alexander Litvinenko

Poster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdfPoster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdf
Alexander Litvinenko
 
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdflitvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
Alexander Litvinenko
 
litvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdflitvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdf
Alexander Litvinenko
 
Density Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and PermeabilityDensity Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and Permeability
Alexander Litvinenko
 
litvinenko_Gamm2023.pdf
litvinenko_Gamm2023.pdflitvinenko_Gamm2023.pdf
litvinenko_Gamm2023.pdf
Alexander Litvinenko
 
Litvinenko_Poster_Henry_22May.pdf
Litvinenko_Poster_Henry_22May.pdfLitvinenko_Poster_Henry_22May.pdf
Litvinenko_Poster_Henry_22May.pdf
Alexander Litvinenko
 
Uncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdfUncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdf
Alexander Litvinenko
 
Litvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdfLitvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdf
Alexander Litvinenko
 
Litv_Denmark_Weak_Supervised_Learning.pdf
Litv_Denmark_Weak_Supervised_Learning.pdfLitv_Denmark_Weak_Supervised_Learning.pdf
Litv_Denmark_Weak_Supervised_Learning.pdf
Alexander Litvinenko
 
Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...
Alexander Litvinenko
 
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Alexander Litvinenko
 
Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...
Alexander Litvinenko
 
Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...
Alexander Litvinenko
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
Alexander Litvinenko
 
Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...
Alexander Litvinenko
 
Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)
Alexander Litvinenko
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
Alexander Litvinenko
 
Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...
Alexander Litvinenko
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
Alexander Litvinenko
 
Propagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater FlowPropagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater Flow
Alexander Litvinenko
 

More from Alexander Litvinenko (20)

Poster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdfPoster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdf
 
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdflitvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
 
litvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdflitvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdf
 
Density Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and PermeabilityDensity Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and Permeability
 
litvinenko_Gamm2023.pdf
litvinenko_Gamm2023.pdflitvinenko_Gamm2023.pdf
litvinenko_Gamm2023.pdf
 
Litvinenko_Poster_Henry_22May.pdf
Litvinenko_Poster_Henry_22May.pdfLitvinenko_Poster_Henry_22May.pdf
Litvinenko_Poster_Henry_22May.pdf
 
Uncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdfUncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdf
 
Litvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdfLitvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdf
 
Litv_Denmark_Weak_Supervised_Learning.pdf
Litv_Denmark_Weak_Supervised_Learning.pdfLitv_Denmark_Weak_Supervised_Learning.pdf
Litv_Denmark_Weak_Supervised_Learning.pdf
 
Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...
 
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
 
Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...
 
Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
 
Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...
 
Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
 
Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
 
Propagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater FlowPropagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater Flow
 

Recently uploaded

Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXXPhrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
MIRIAMSALINAS13
 
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
MysoreMuleSoftMeetup
 
Polish students' mobility in the Czech Republic
Polish students' mobility in the Czech RepublicPolish students' mobility in the Czech Republic
Polish students' mobility in the Czech Republic
Anna Sz.
 
Introduction to AI for Nonprofits with Tapp Network
Introduction to AI for Nonprofits with Tapp NetworkIntroduction to AI for Nonprofits with Tapp Network
Introduction to AI for Nonprofits with Tapp Network
TechSoup
 
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCECLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
BhavyaRajput3
 
Additional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdfAdditional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdf
joachimlavalley1
 
Model Attribute Check Company Auto Property
Model Attribute  Check Company Auto PropertyModel Attribute  Check Company Auto Property
Model Attribute Check Company Auto Property
Celine George
 
Thesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.pptThesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.ppt
EverAndrsGuerraGuerr
 
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
EugeneSaldivar
 
Unit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdfUnit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdf
Thiyagu K
 
678020731-Sumas-y-Restas-Para-Colorear.pdf
678020731-Sumas-y-Restas-Para-Colorear.pdf678020731-Sumas-y-Restas-Para-Colorear.pdf
678020731-Sumas-y-Restas-Para-Colorear.pdf
CarlosHernanMontoyab2
 
Home assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdfHome assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdf
Tamralipta Mahavidyalaya
 
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup   New Member Orientation and Q&A (May 2024).pdfWelcome to TechSoup   New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
TechSoup
 
"Protectable subject matters, Protection in biotechnology, Protection of othe...
"Protectable subject matters, Protection in biotechnology, Protection of othe..."Protectable subject matters, Protection in biotechnology, Protection of othe...
"Protectable subject matters, Protection in biotechnology, Protection of othe...
SACHIN R KONDAGURI
 
Synthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptxSynthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptx
Pavel ( NSTU)
 
How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...
Jisc
 
Unit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdfUnit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdf
Thiyagu K
 
The geography of Taylor Swift - some ideas
The geography of Taylor Swift - some ideasThe geography of Taylor Swift - some ideas
The geography of Taylor Swift - some ideas
GeoBlogs
 
Instructions for Submissions thorugh G- Classroom.pptx
Instructions for Submissions thorugh G- Classroom.pptxInstructions for Submissions thorugh G- Classroom.pptx
Instructions for Submissions thorugh G- Classroom.pptx
Jheel Barad
 
The basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptxThe basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptx
heathfieldcps1
 

Recently uploaded (20)

Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXXPhrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
 
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
 
Polish students' mobility in the Czech Republic
Polish students' mobility in the Czech RepublicPolish students' mobility in the Czech Republic
Polish students' mobility in the Czech Republic
 
Introduction to AI for Nonprofits with Tapp Network
Introduction to AI for Nonprofits with Tapp NetworkIntroduction to AI for Nonprofits with Tapp Network
Introduction to AI for Nonprofits with Tapp Network
 
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCECLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
 
Additional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdfAdditional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdf
 
Model Attribute Check Company Auto Property
Model Attribute  Check Company Auto PropertyModel Attribute  Check Company Auto Property
Model Attribute Check Company Auto Property
 
Thesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.pptThesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.ppt
 
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
 
Unit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdfUnit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdf
 
678020731-Sumas-y-Restas-Para-Colorear.pdf
678020731-Sumas-y-Restas-Para-Colorear.pdf678020731-Sumas-y-Restas-Para-Colorear.pdf
678020731-Sumas-y-Restas-Para-Colorear.pdf
 
Home assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdfHome assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdf
 
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup   New Member Orientation and Q&A (May 2024).pdfWelcome to TechSoup   New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
 
"Protectable subject matters, Protection in biotechnology, Protection of othe...
"Protectable subject matters, Protection in biotechnology, Protection of othe..."Protectable subject matters, Protection in biotechnology, Protection of othe...
"Protectable subject matters, Protection in biotechnology, Protection of othe...
 
Synthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptxSynthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptx
 
How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...
 
Unit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdfUnit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdf
 
The geography of Taylor Swift - some ideas
The geography of Taylor Swift - some ideasThe geography of Taylor Swift - some ideas
The geography of Taylor Swift - some ideas
 
Instructions for Submissions thorugh G- Classroom.pptx
Instructions for Submissions thorugh G- Classroom.pptxInstructions for Submissions thorugh G- Classroom.pptx
Instructions for Submissions thorugh G- Classroom.pptx
 
The basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptxThe basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptx
 

My paper for Domain Decomposition Conference in Strobl, Austria, 2005

  • 1. H-matrix based preconditioner for the skin problem B.N.Khoromskij, A.Litvinenko bokh, litvinen@mis.mpg.de Max Planck Institute for Mathematics in the Sciences Leipzig. 18/08/2006 Abstract In this paper we propose and analyze the new H-Cholesky based preconditioner for the so-called skin problem [5]. After a special reordering of indices and omitting the coupling, we obtain a block diagonal matrix which is very suitable for the hierarchical Cholesky (H-Cholesky) factor- ization. We perform the H-Cholesky factorization of this matrix and use it as a preconditioner for the cg method. We will show that the new preconditioner requires less memory and com- putational time than the standard H-Cholesky preconditioner, which is also very cheap and fast. Key words: skin problem, H-matrix approximation, hierarchical Cholesky, jumping coefficients, domain decomposition. 1 Introduction In the series of papers [7], [9], [10] the authors successfully apply the iteration method (cg, gmres, bicgstab) with H-matrices based preconditioners to different types of second order elliptic differential problems. In this paper we continue the research in this direction. Under some definite conditions H-matrices can be used even as a direct solver. There are results (see, e.g., [11] and references therein) where authors apply additive Schwarz domain decomposition preconditioners. It is known that for problems with jumping coefficients (see (1)) the condition number cond(A) is proportional to h−d sup x,y∈Ω α(x) α(y) , where α(x) denotes the jumping coefficient, d the spatial dimension and h the grid step size. This is why a good preconditioner W is needed so that cond(W−1 A) ≃ 1. In this paper we consider a diffusion process (see (1)) through the domain as shown in Fig. 1 (left). This figure shows cells and the lipid layer between them. In this problem the Dirichlet boundary condition means the presence of some drugs on the boundary γ of the skin fragment. The right-hand side presents external forces. The zero Neumann condition on Γγ shows that there is no penetration through the surface Γγ. Typical for the skin problem are the high jumping coefficients. The penetration coefficient inside the cells is very low ∼ 10−5 − 10−3 , but it is large between cells. The diffusion equation has the form: div(α(x)∇u) = f x ∈ Ω u = 0 x ∈ γ ∂u ∂n = g x ∈ Γ γ (1) where Γ = ∂Ω, α(x, y) = ε ≪ 1 in cells and α(x, y) = β = 1 in between. The rest of this 1
find uh ∈ Vh, so that a(uh, v) = c(v) for all v ∈ Vh.        (3)

Assuming (2), we obtain the equivalent problem

    Au = c,  where  Aij = a(bj, bi)  and  ci := c(bi),  i, j = 1, ..., n.        (4)

Here

    a(bj, bi) = ∫Ω α ∇bj · ∇bi dx   and   cj := ∫Ω f bj dx + ∫Γ\γ g bj dΓ.        (5)
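To make the discretisation step concrete, the following is a minimal sketch (not taken from the paper) of how a Galerkin system of the form (4)-(5) is assembled for a 1D analogue of problem (1) with a piecewise constant jumping coefficient. The mesh, the coefficient values and all function names are illustrative assumptions.

```python
# Illustrative sketch: P1 FEM assembly in 1D with a piecewise constant jumping
# coefficient alpha, as a stand-in for the Galerkin system (4)-(5).
import numpy as np

def assemble_1d(n_elems, alpha, f=lambda x: 1.0):
    """Stiffness matrix and load vector on [0, 1] with u(0) = u(1) = 0."""
    x = np.linspace(0.0, 1.0, n_elems + 1)
    n = n_elems - 1                      # interior degrees of freedom
    A = np.zeros((n, n))
    c = np.zeros(n)
    for e in range(n_elems):             # loop over elements [x_e, x_{e+1}]
        h = x[e + 1] - x[e]
        ke = alpha[e] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])   # element stiffness
        fe = f(0.5 * (x[e] + x[e + 1])) * h / 2.0 * np.ones(2)     # midpoint-rule load
        for a_loc, a_glob in enumerate((e - 1, e)):                # local -> interior dof
            if not 0 <= a_glob < n:
                continue                                           # Dirichlet boundary node
            c[a_glob] += fe[a_loc]
            for b_loc, b_glob in enumerate((e - 1, e)):
                if 0 <= b_glob < n:
                    A[a_glob, b_glob] += ke[a_loc, b_loc]
    return A, c

# "cells" with alpha = eps and one "lipid" element with alpha = 1 in between
eps = 1e-4
alpha = np.array([eps] * 4 + [1.0] + [eps] * 4)
A, c = assemble_1d(len(alpha), alpha)
u = np.linalg.solve(A, c)
```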
The lipid layer between the cells defines a natural decomposition of Ω. The width of this layer is proportional to the grid step size h. Note that after a reordering of the indices we can represent the global stiffness matrix in the form

    ( A11   εA12 )
    ( εA21  εA22 ).        (6)

Here A11 and A22 are the stiffness matrices corresponding to the lipid layer and to the rest of the domain, respectively; A12 and A21 are coupling matrices. To simplify the model we will consider Ω as in Fig. 1 (right).

3 Hierarchical Matrices

Hierarchical matrices (H-matrices) were introduced in 1998 by Hackbusch [2] and have since been applied in a wide range of applications. They provide a format for the data-sparse representation of fully populated matrices. Suppose there are two matrices A ∈ R^{n×k} and B ∈ R^{m×k}, k ≪ min(n, m), and set R := AB^T ∈ R^{n×m}. We then say that R is a rank-k matrix. The main idea of H-matrices is to approximate certain subblocks of a given matrix by rank-k matrices; the admissible partitioning indicates which blocks can be approximated in this way. The storage requirement for the factors A and B is k(n + m) instead of n·m for the full matrix R. One of the biggest advantages of H-matrices is that the complexity of H-matrix addition, multiplication and inversion is not larger than C k n log^q n, q = 1, 2 (see [2], [13]). The drawback is that the constant C is large; in the 3D case, for example, it can exceed 120. To build an H-matrix one needs an admissible block partitioning (see Fig. 2). To build this partitioning one needs an admissibility condition and a block cluster tree; the block cluster tree requires a cluster tree, and the cluster tree requires the grid data. For more details see [2] or [13].

Figure 2: The schema of building an H-matrix and its H-Cholesky factorisation (finite elements / vertices → cluster tree → block cluster tree together with the admissibility condition → admissible partitioning → H-matrix → H-Cholesky factorization).

Definition 3.1 We define the set of H-matrices with maximal rank k as follows:

    H(T_{I×J}, k) := {M ∈ R^{I×J} | rank(M|_{t×s}) ≤ k for all admissible leaves t × s of T_{I×J}}.
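As an illustration of the rank-k representation described above, the following sketch (my own, not taken from the paper or from HLIB) approximates a matrix block by two factors with R ≈ AB^T via a truncated SVD and compares the storage k(n + m) against n·m. The example block and all parameter values are assumptions.

```python
# Illustrative sketch: rank-k approximation of a matrix block, the basic
# building block of the H-matrix format.
import numpy as np

def rank_k_factors(R, k):
    """Return A (n x k) and B (m x k) with A @ B.T the best rank-k approximation of R."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    A = U[:, :k] * s[:k]          # absorb the singular values into the left factor
    B = Vt[:k, :].T
    return A, B

n, m, k = 200, 150, 8
# a smooth (hence numerically low-rank) interaction block, as it typically
# appears in admissible off-diagonal subblocks
x = np.linspace(1.0, 2.0, n)[:, None]
y = np.linspace(4.0, 5.0, m)[None, :]
R = 1.0 / (x + y)

A, B = rank_k_factors(R, k)
rel_err = np.linalg.norm(R - A @ B.T) / np.linalg.norm(R)
print(rel_err, n * m, k * (n + m))    # error, full storage, low-rank storage k(n+m)
```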
Algorithm of the H-Cholesky factorization

Our aim is to compute the H-Cholesky factorization of the stiffness matrix which appears after the discretisation of the Laplace operator. Suppose that

    A = ( A11  A12 )  =  ( L11   0  ) ( U11  U12 )
        ( A21  A22 )     ( L21  L22 ) (  0   U22 );

then the algorithm is as follows:
1. compute L11 and U11 as the H-Cholesky decomposition of A11;
2. compute U12 from L11 U12 = A12 (a recursive block forward substitution);
3. compute L21 from L21 U11 = A21 (a recursive block backward substitution);
4. compute L22 and U22 as the H-Cholesky decomposition of L22 U22 = A22 ⊖ L21 ⊙ U12.
All steps are executed in the class of H-matrices (⊖ and ⊙ denote the formatted H-matrix subtraction and multiplication).

4 New Preconditioner

The H-Cholesky factorization of the stiffness matrix produces the H-matrix shown in Fig. 3 (left). After a reordering of the index set I(Ω) and omitting the coupling between the cells and the lipid layer we obtain the H-matrix shown in Fig. 3 (right). As a new preconditioner we use the H-Cholesky decomposition of

    ( A11    0  )
    (  0   εA22 ).        (7)

Remark 4.1 Let W = LL^T be the (H-)Cholesky factorization of the matrix (7). Then W^{-1}A = (LL^T)^{-1}A = L^{-T}L^{-1}A is similar to L^{-1}AL^{-T}, which is symmetric and positive definite. Thus, for solving the initial problem (4) we apply the pcg method with the H-Cholesky preconditioner. Below we prove that omitting the coupling is justified for small ε.

Lemma 4.1 For a symmetric and positive definite matrix A = (A11 A12; A21 A22) and any vectors v1, v2 it holds that

    (A12 v2, v1) ≤ ‖A11^{1/2} v1‖ · ‖A22^{1/2} v2‖.

Proof: From the Cauchy–Schwarz inequality in the A-inner product, u^T A v = (u, v)_A ≤ ‖u‖_A · ‖v‖_A for any vectors u, v. Take u = (v1, 0)^T and v = (0, v2)^T; then u^T A v = (A12 v2, v1), ‖u‖_A = ‖A11^{1/2} v1‖ and ‖v‖_A = ‖A22^{1/2} v2‖, which proves the claim.
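Before turning to the spectral-equivalence estimates, here is a minimal dense-matrix sketch of the four factorization steps listed above. It is not the paper's H-Cholesky implementation: the recursion and the Schur complement are the same, but every operation is carried out in exact dense arithmetic instead of formatted H-matrix arithmetic, and the leaf size is an arbitrary choice.

```python
# Dense stand-in for the recursive 2x2 block (H-)Cholesky factorization A = L L^T.
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def block_cholesky_sketch(A, leaf_size=32):
    """Recursive block Cholesky of an SPD matrix, mirroring steps 1-4 above."""
    n = A.shape[0]
    if n <= leaf_size:
        return cholesky(A, lower=True)                      # dense leaf factorization
    m = n // 2
    A11, A12 = A[:m, :m], A[:m, m:]
    A22 = A[m:, m:]
    L11 = block_cholesky_sketch(A11, leaf_size)             # step 1
    U12 = solve_triangular(L11, A12, lower=True)            # step 2: L11 U12 = A12
    L21 = U12.T                                             # step 3: by symmetry L21 = U12^T
    S = A22 - L21 @ U12                                     # step 4: Schur complement
    L22 = block_cholesky_sketch(S, leaf_size)
    L = np.zeros_like(A)
    L[:m, :m], L[m:, :m], L[m:, m:] = L11, L21, L22
    return L

# quick check on a random SPD matrix
rng = np.random.default_rng(0)
B = rng.standard_normal((200, 200))
A = B @ B.T + 200 * np.eye(200)
L = block_cholesky_sketch(A)
print(np.linalg.norm(A - L @ L.T) / np.linalg.norm(A))
```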
Lemma 4.2 For a symmetric and positive definite matrix A = (A11 A12; A21 A22) and any vectors u1, u2 it holds that

    2(A12 u2, u1) ≤ (A11 u1, u1) + (A22 u2, u2),

and hence

    (A12 u2, u1) ≤ (1/2) (‖A11^{1/2} u1‖^2 + ‖A22^{1/2} u2‖^2).

Proof: For arbitrary u1, u2 set u := (u1, −u2)^T. From the positive definiteness of A it follows that

    0 ≤ (Au, u) = (A11 u1, u1) − (A12 u2, u1) − (A21 u1, u2) + (A22 u2, u2).

Moving the negative terms to the left-hand side, we obtain

    (A12 u2, u1) + (A21 u1, u2) ≤ (A11 u1, u1) + (A22 u2, u2).

Recalling that A is symmetric, so that (A21 u1, u2) = (A12 u2, u1), we obtain

    2(A12 u2, u1) ≤ (A11 u1, u1) + (A22 u2, u2) = ‖A11^{1/2} u1‖^2 + ‖A22^{1/2} u2‖^2.

Lemma 4.3 Let A be the reordered stiffness matrix (6) and let W be the block diagonal matrix (7), i.e. W = (A11 0; 0 εA22). Then for every vector u = (u1, u2)^T

    (Au, u) ≤ 2(Wu, u).        (8)

Proof: Computing both scalar products gives

    (Wu, u) = (A11 u1, u1) + ε(A22 u2, u2),
    (Au, u) = (A11 u1, u1) + 2ε(A12 u2, u1) + ε(A22 u2, u2) = (Wu, u) + 2ε(A12 u2, u1).

By the previous Lemma, applied to the unscaled matrix (A11 A12; A21 A22), and since ε ≤ 1,

    2ε(A12 u2, u1) ≤ ε(A11 u1, u1) + ε(A22 u2, u2) ≤ (A11 u1, u1) + ε(A22 u2, u2) = (Wu, u),

and therefore (Au, u) ≤ (Wu, u) + (Wu, u) = 2(Wu, u).

Remark 4.2 Recall that A and W are spectrally equivalent if there exist constants c1, c2 > 0 with c1(Wu, u) ≤ (Au, u) ≤ c2(Wu, u) for all u ∈ R^n, i.e. c1 · I ≤ W^{-1}A ≤ c2 · I.

Lemma 4.4 The matrices A and W are spectrally equivalent with I ≤ W^{-1}A ≤ 2 · I.

Proof: We write A ≥ B if A − B is positive semi-definite. From Lemma 4.3 it follows that (Au, u) ≤ 2(Wu, u) for all u ∈ R^n. Moving everything to the left-hand side we obtain ((A − 2W)u, u) ≤ 0. Since this holds for all u, we have A − 2W ≤ 0, i.e. W^{-1}A ≤ 2 · I. From the construction of W it is clear that A − W ≥ 0, i.e. W^{-1}A ≥ I. Thus, I ≤ W^{-1}A ≤ 2 · I.
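The bound (8) can also be checked numerically. The sketch below (my own illustration, using an artificially constructed SPD test matrix rather than the skin-problem stiffness matrix) builds a matrix of the form (6), the block diagonal preconditioner (7), and prints the extreme generalized eigenvalues of (A, W), i.e. the spectrum of W^{-1}A.

```python
# Numerical sanity check of the bound (8) on a synthetic matrix with the
# block structure (6) and the preconditioner (7).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n1, n2, eps = 40, 60, 1e-4

# an SPD "unscaled" matrix M = [[A11, A12], [A21, A22]]
B = rng.standard_normal((n1 + n2, n1 + n2))
M = B @ B.T + (n1 + n2) * np.eye(n1 + n2)
A11, A12 = M[:n1, :n1], M[:n1, n1:]
A21, A22 = M[n1:, :n1], M[n1:, n1:]

# reordered stiffness-like matrix (6) and block diagonal preconditioner (7)
A = np.block([[A11, eps * A12], [eps * A21, eps * A22]])
W = np.block([[A11, np.zeros((n1, n2))], [np.zeros((n2, n1)), eps * A22]])

lam = eigh(A, W, eigvals_only=True)     # eigenvalues of W^{-1} A
print(lam.min(), lam.max())             # the largest eigenvalue stays below 2, cf. (8)
```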
Figure 3: H-Cholesky factorizations of the standard stiffness matrix (left) and of the stiffness matrix without the coupling between the lipid layer and the cells (right). The dark blocks ∈ R^{36×36} are dense matrices and the light blocks are low-rank matrices. The steps in the grey blocks show the decay of the singular values on a logarithmic scale.

5 Numerical tests

Table 1 gives the theoretical estimates of the sequential and parallel complexities of the H-Cholesky factorizations of W1 and W2.

    Preconditioner                                     Computational complexity                        Parallel complexity
    W1 := H-Cholesky decomp. of (A11 A12; A21 A22)     O(k n log^2 n)                                  O(k n log^2 n)
    W2 := H-Cholesky decomp. of (A11 0; 0 A22)         O(k nI log^2 nI) + O(k (n−nI) log^2 (n−nI))     max{O(k nI log^2 nI), O(k n0 log^2 n0)}

Table 1: Complexities of the preconditioners W1 and W2. Here p is the number of processors, nI is the number of degrees of freedom in the lipid layer and n0 := (n − nI)/(p − 1).

Remark 5.1 The sparsity constant Csp is an important H-matrix characteristic and appears in all H-matrix complexity estimates. This constant depends on the block structure of the H-matrix. Since the new preconditioner has the simpler structure, its sparsity constant is also smaller: in the numerical experiments of Table 2 we obtained Csp(W1) = 64 and Csp(W2) = 26. For a model domain containing a larger number of cells the difference between the sparsity constants will be even more significant.
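For a rough feeling of the estimates in Table 1, one can evaluate the two complexity formulas for the experiment size n = 40^3. The numbers below are only indicative: the lipid-layer fraction nI/n and the processor count p are assumed values, and the hidden constants (in particular the sparsity constant Csp from Remark 5.1) are ignored.

```python
# Back-of-the-envelope evaluation of the complexity formulas from Table 1.
import math

k = 4
n = 40 ** 3
nI = n // 4                      # assumption: a quarter of the dofs lie in the lipid layer
p = 8                            # assumed number of processors
n0 = (n - nI) // (p - 1)

cost_W1     = k * n * math.log2(n) ** 2
cost_W2     = k * nI * math.log2(nI) ** 2 + k * (n - nI) * math.log2(n - nI) ** 2
cost_W2_par = max(k * nI * math.log2(nI) ** 2, k * n0 * math.log2(n0) ** 2)

print(f"W1 sequential ~ {cost_W1:.3g}")
print(f"W2 sequential ~ {cost_W2:.3g}")
print(f"W2 parallel   ~ {cost_W2_par:.3g}")
```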
Figure 4: Decay of the singular values of A for ε = 1, ε = 10^{-2} and ε = 10^{-4}.

Table 2 shows the resource requirements of the preconditioners W1 and W2. We see that W2 requires fewer resources than W1: less memory (S(W1) > S(W2)) and less time (t(W1) > t(W2)) for its construction. Columns 2 and 5 contain the time for computing the Cholesky factorisation plus the time for the cg iterations.

    k   t(W1), sec   S(W1), MB   iter(W1)   t(W2), sec    S(W2), MB   iter(W2)
    1   24 + 10.6    2·10^2      69         8.7 + 10      10^2        99
    2   70 + 11.3    3.8·10^2    46         21.6 + 13.3   1.8·10^2    91
    4   208 + 12.5   7.5·10^2    17         68 + 13.5     3.5·10^2    60
    6   483.7 + 82   1.1·10^3    11         123 + 26      5.1·10^2    74

Table 2: Comparison of the preconditioners W1 and W2; 40^3 dofs, ‖Ax − b‖ ≤ 10^{-8}, α = 10^{-5}.
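The pcg iterations counted in Table 2 correspond to a standard preconditioned cg setup. The sketch below (an illustration, not the paper's HLIB code) replaces the H-Cholesky factors by exact Cholesky factors of the two diagonal blocks of W2 = blockdiag(A11, εA22) and uses them as the preconditioner inside SciPy's cg; the test matrix is the same artificial construction as in the eigenvalue check above.

```python
# pcg with a block diagonal Cholesky preconditioner as a stand-in for the
# H-Cholesky preconditioner W2.
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(2)
n1, n2, eps = 40, 60, 1e-4
B = rng.standard_normal((n1 + n2, n1 + n2))
M = B @ B.T + (n1 + n2) * np.eye(n1 + n2)
A11, A12, A21, A22 = M[:n1, :n1], M[:n1, n1:], M[n1:, :n1], M[n1:, n1:]
A = np.block([[A11, eps * A12], [eps * A21, eps * A22]])
b = rng.standard_normal(n1 + n2)

# Cholesky factors of the two diagonal blocks of W2 (stand-in for H-Cholesky)
c11 = cho_factor(A11)
c22 = cho_factor(eps * A22)

def apply_W2_inv(r):
    """Apply W2^{-1} blockwise: the preconditioning step of pcg."""
    return np.concatenate([cho_solve(c11, r[:n1]), cho_solve(c22, r[n1:])])

W2_inv = LinearOperator((n1 + n2, n1 + n2), matvec=apply_W2_inv)
x, info = cg(A, b, M=W2_inv)
print(info, np.linalg.norm(A @ x - b))
```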
In Table 3 we compare the solutions ũ and ucg, obtained with the preconditioners W2 and W1, respectively. The solution ucg, obtained with the preconditioner W1, is considered as the 'exact' one.

    k   |ucg − ũ| / |ũ|   |ucg − ũ|∞
    1   5.3·10^{-10}      4.5·10^{-6}
    2   5.1·10^{-9}       3.5·10^{-8}
    4   5.8·10^{-10}      4.6·10^{-6}
    6   7.2·10^{-10}      2.5·10^{-5}

Table 3: Comparison of the solutions ucg and ũ; 40^3 dofs, ‖Ax − b‖ ≤ 10^{-8}, α = 10^{-5}.

6 Conclusion

The matrix W2 can be successfully used as a preconditioner. Its simple structure is the reason why it parallelises well. The parallel computational complexity is max{O(nI log^2 nI), O(nD log^2 nD)}, where nD := (n − nI)/(p − 1) and nI is the number of degrees of freedom in the lipid layer. The sequential version of the preconditioner W2 requires less memory. Note that the more cells the domain Ω contains, the bigger the advantages in storage and computational resources will be (see Table 2 and Remark 5.1). The disadvantage is the relatively large number of pcg iterations, but these iterations require fewer resources than with the standard H-Cholesky preconditioner W1. Within HLIB (see [1]) the proposed preconditioner is easy to implement.

Acknowledgment: The authors wish to thank Prof. Dr. Hackbusch for his corrections as well as Dr. Börm and Dr. Grasedyck for HLIB.
References

[1] Hierarchical matrix library (HLIB): www.hlib.org
[2] W. Hackbusch: A sparse matrix arithmetic based on H-matrices. Part I: Introduction to H-matrices. Computing, 62:89-108, 1999.
[3] W. Hackbusch: Direct Domain Decomposition using the Hierarchical Matrix Technique. In: Domain Decomposition Methods in Science and Engineering, pp. 39-50, Cocoyoc, Mexico, 2003.
[4] W. Hackbusch, B.N. Khoromskij and R. Kriemann: Direct Schur Complement Method by Hierarchical Matrix Techniques. Computing and Visualization in Science, 8:179-188, 2005.
[5] B.N. Khoromskij and G. Wittum: Numerical Solution of Elliptic Differential Equations by Reduction to the Interface. LNCSE 36, Springer, 2004.
[6] M. Bebendorf and W. Hackbusch: Existence of H-matrix approximants to the inverse FE-matrix of elliptic operators with L∞-coefficients. Numerische Mathematik, 95:1-28, 2003.
[7] M. Bebendorf: Hierarchical LU decomposition-based preconditioners for BEM. Computing, 74:225-247, 2005.
[8] S. Le Borne, R. Kriemann and L. Grasedyck: Parallel Black Box Domain Decomposition Based H-LU Preconditioning. Preprint 115, Max-Planck-Institut MIS, Leipzig, 2005.
[9] S. Le Borne and L. Grasedyck: H-matrix preconditioners in convection-dominated problems. SIAM J. Matrix Anal. Appl., 27(4):1172-1183.
[10] S. Le Borne: H-matrices for convection-diffusion problems with constant convection. Computing, 70:261-274, 2003.
[11] I.G. Graham, P. Lechner and R. Scheichl: Domain Decomposition for Multiscale PDEs. Bath Institute for Complex Systems, Preprint 11/06, 2006; available at www.bath.ac.uk/math-sci/BICS
[12] A. Litvinenko: Application of Hierarchical Matrices for Solving Multiscale Problems. PhD Dissertation, Leipzig University, submitted April 2006.
[13] L. Grasedyck and W. Hackbusch: Construction and Arithmetics of H-Matrices. Computing, 70:295-334, 2003.
[14] M. Lintner: The eigenvalue problem for the Laplacian in H-matrix arithmetic and application to the heat and wave equation. Computing, 72:293-323, 2004.