A Condensation-Projection Method for the
Generalized Eigenvalue Problem
Thomas Hitziger, Wolfgang Mackens and Heinrich Voss†
Abstract
Since the early sixties condensation methods have been known as a means to economize the computation of selected groups of eigenvalues of large eigenvalue problems. These methods choose from the degrees of freedom of the problem a small number of (master-) variables which appear to be representative. In a Gaussian elimination type procedure the rest of the variables (the slaves) is eliminated, leaving a similar but much smaller problem for the master variables only.

Choosing the masters to contain all variables from the interface of a partitioning of the problem into substructures leads to data structures and formulae which are well suited to be implemented on distributed memory MIMD parallel computers.

In this paper we develop such a condensation algorithm which we endow with the additional features of a projective refinement of the results, which greatly improves their quality, and rigorous error bounds for the refined values. The algorithm, together with a short statement of the solved problem, is contained in Section 6. On the way to this algorithm we review some recent developments in condensation methods.
1 Introduction
In the numerical analysis of structures, as well as in any other scientific issue to be tackled by discretization of originally continuous equations (using finite elements or finite differences, e.g., cf. [1], [2]), very often the
This paper appeared in H. Power and C.A. Brebbia (eds.), High Performance Computing 1, pp. 239-282, Elsevier Applied Science, London 1995.
† Hamburg University of Technology, Section of Mathematics, Kasernenstrasse 12, D-20173 Hamburg, FR Germany, {mackens, voss}@tu-harburg.d400.de
unpleasant situation occurs that a sufficiently accurate representation of the desired data in the discrete model requires the use of prohibitively many degrees of freedom, such that a standard treatment of the resulting large set of discrete equations is far too expensive.
For such situations several reduction techniques have been developed in different disciplines to incorporate specific parts of the (global) good approximation behaviour of large size models into much smaller systems derived from the larger ones (cf. the forthcoming survey paper [17] of Ahmed K. Noor in Applied Mechanical Surveys on reduction methods).

The aim of these reduction methods is not just to construct a model of smaller size, since this could be easily done by using the discretization method with a large discretization stepsize parameter. Such a sort of reduction of size would have to be paid with an overall loss of accuracy. The very challenge of reduction approaches is to reduce the size of the discrete set while its approximation quality is kept (to a certain degree at least) with respect to some relatively small subset of the solution data, say for specific functional values of the solution. This can in fact often be done by taking into consideration (parts of) the fine grained model during the design of the coarse grained reduction, where even partial solutions of portions of the equations to be reduced may be invoked.
In the study of structural vibrations the large discrete sets of equations are large algebraic eigenvalue problems of the form

$$K x = \lambda M x, \qquad (1)$$

where the stiffness matrix $K \in \mathbb{R}^{(n,n)}$ and the mass matrix $M \in \mathbb{R}^{(n,n)}$ are real symmetric and positive definite, $x$ is the vector of modal displacements and $\lambda$ is the square of the natural frequencies.

To solve the eigenproblem (1) one has to determine a set of $n$ linearly independent eigenvectors $x^1,\dots,x^n$ and corresponding eigenvalues

$$0 \le \lambda_1 \le \lambda_2 \le \dots \le \lambda_n,$$

with

$$K x^i = \lambda_i M x^i, \qquad i = 1,\dots,n.$$

Even if the problem is well behaved, its full solution is normally out of the scope of today's computing facilities. Dimensions $n$ between $10^4$ and $10^6$ are not unrealistic.
Fortunately, engineers are usually not interested in all of the eigenvalues and corresponding modal vectors. They want to know some few (hundreds) of eigenvalues from the lower part of the spectrum, the eigenvalues below or near a prespecified value, or those eigenvalues from a given interval. Therefore, several (iterative) methods have been developed to approximate just such eigenpairs. Among these are subspace iteration and Krylov subspace methods, cf. Saad [24] or Bathe [1].
The projection of the original problem to a low dimensional subspace (as is a partial step within the subspace iteration method) can be viewed as a reduction method. Given linearly independent approximations $\hat{x}^1,\dots,\hat{x}^m$ to some few ($m \ll n$) of the eigenvectors of problem (1) (within the above cited methods these approximations are constructed and corrected in the course of the iteration) and putting

$$X := \left[\hat{x}^1,\dots,\hat{x}^m\right],$$

this system is replaced by the $m$-dimensional projected eigenvalue problem

$$K_X y = \mu M_X y, \qquad (2)$$

with the projected stiffness and mass matrices

$$K_X := X^T K X \quad\text{and}\quad M_X := X^T M X. \qquad (3)$$

If the vectors $\hat{x}^i$, $i = 1,\dots,m$, are moderate approximations of the eigenvectors $x^i$, $i = 1,\dots,m$, then the eigenvalues $\mu_i$ of (2) are reasonable approximations of $\lambda_1,\dots,\lambda_m$ (cf. [2], [19], [24]). Approximations to the corresponding eigenvectors are found in the form $\tilde{x}^i := X y^i$ with $y^i$ denoting the eigenvectors of problem (2).

If $X$ contains only one column, i.e. there is an approximation to one eigenvector $x^1$ only, then (2) is a scalar equation, which can be elementarily solved for the eigenvalue approximation

$$\mu_1 := R(\hat{x}^1) := \frac{(\hat{x}^1)^T K \hat{x}^1}{(\hat{x}^1)^T M \hat{x}^1}, \qquad (4)$$

called the Rayleigh quotient at $\hat{x}^1$.
In order to apply reduction by projection one has to have preinformation in form of the eigenvector approximations $\hat{x}^1,\dots,\hat{x}^m$ at hand (these can result from one of the above cited iterative methods or they may be eigenmodes of an already solved similar problem, e.g.). Actually, the $i$-th column of $X$ does not have to be an approximation of an eigenvector of (1). One easily sees that replacing $X$ in (3) by

$$Z := XN, \quad\text{with } N \in \mathbb{R}^{(m,m)} \text{ regular},$$

will not change the resulting eigenvalue approximations $\mu_i$ and will produce the same eigenvector approximations $Z y^i$.
To obtain good approximations to the first eigenvalues by the projection approach it suffices that $\mathrm{span}\{X\}$ approximates reasonably well the span of the eigenvectors corresponding to the desired portion of the spectrum. Thus it makes sense to see the projection approach (2), (3) as a subspace projection method.
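The computational pattern behind (2), (3) is compact enough to state directly. The following sketch (Python with NumPy/SciPy; the function and variable names are ours, not from the paper) assembles the projected matrices and lifts the eigenvectors of the small problem back to the full space:

```python
import numpy as np
from scipy.linalg import eigh

def rayleigh_ritz(K, M, X):
    """Subspace projection (2), (3): project K x = lambda M x onto span{X}.

    K, M : symmetric positive definite (n, n) arrays
    X    : (n, m) array of linearly independent trial vectors
    """
    K_X = X.T @ K @ X            # projected stiffness matrix, eq. (3)
    M_X = X.T @ M @ X            # projected mass matrix, eq. (3)
    mu, Y = eigh(K_X, M_X)       # small dense generalized problem, eq. (2)
    return mu, X @ Y             # eigenvalue approximations and lifted vectors
```

For $m = 1$ this returns exactly the Rayleigh quotient (4); replacing $X$ by $XN$ with regular $N$ leaves the returned eigenvalue approximations unchanged, as noted above.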
Notice that if $\mathrm{span}\{X\}$ contains an exact eigenvector $x^k = Xy$, $y \in \mathbb{R}^m$, and $\lambda_k$ is the corresponding eigenvalue, then the projection (3) will find $\lambda_k$ as (at least) one of the eigenvalues $\mu_i$ of the projected problem, and the eigenvector will be rediscovered as $Xy^i$ with $y^i$ a corresponding eigenvector of the projected system (at least as a member of the corresponding eigenspace).

Condensation methods for large eigenvalue problems are subspace projection methods together with a specific Gaussian elimination flavoured approach to construct reasonable approximations of projection spaces $\mathrm{span}\{X\}$ from scratch.
To this end some (relatively few) components of the vector $x$ are selected to be masters and to form the master part $x_m \in \mathbb{R}^m$ of $x$. The aim is then to construct an eigenproblem

$$K_0 x_m = \lambda M_0 x_m \qquad (5)$$

for these master-vectors and the eigenparameter $\lambda$ such that the eigenvectors of (5) are good approximations to the masterparts of selected eigenvectors of (1), with similarly good approximation behaviour for the accompanying eigenvalues.

Notice that the masterparts of eigenvectors do not determine the eigenvectors uniquely, in general. If, e.g., the masterpart consists of a single component, then a nonzero mastervector can be the masterpart of every eigenvector with nonvanishing masterpart. So, if we really find a condensed problem (5) whose eigenpairs are the masterparts of eigenvectors of (1) together with their eigenvalues, can we reconstruct the original eigenvectors from these data?
Actually we can, if one additional condition is fulfilled. To see this we decompose eqn. (1) into block form

$$\begin{bmatrix} K_{mm} & K_{ms} \\ K_{sm} & K_{ss} \end{bmatrix} \begin{Bmatrix} x_m \\ x_s \end{Bmatrix} = \lambda \begin{bmatrix} M_{mm} & M_{ms} \\ M_{sm} & M_{ss} \end{bmatrix} \begin{Bmatrix} x_m \\ x_s \end{Bmatrix}, \qquad (6)$$

where $x_m \in \mathbb{R}^m$ contains the master variables, $x_s \in \mathbb{R}^s$ collects the remaining variables, the slaves, and where the permutation of $x$ leading to the new order $x_m, x_s$ of the variables has been applied likewise to the rows as to the columns of $K$ and $M$. Then these matrices are still symmetric and positive definite in their permuted form.
Now we see that if the master part $\hat{x}_m$ is given together with the corresponding eigenvalue $\hat\lambda$, then the slave part $\hat{x}_s$ can be computed from the second row of (6) through the master-slave-extension

$$\hat{x}_s = P(\hat\lambda)\hat{x}_m := -(K_{ss} - \hat\lambda M_{ss})^{-1}(K_{sm} - \hat\lambda M_{sm})\hat{x}_m \qquad (7)$$

as long as the matrix

$$(K_{ss} - \hat\lambda M_{ss}) \text{ is regular.} \qquad (8)$$

Noticing that

$$K_{ss} x_s = \lambda M_{ss} x_s \qquad (9)$$

is the eigenvalue problem corresponding to the vibration of the slave portion of the system with the master degrees of freedom restricted to be zero, the regularity condition (8) can be expressed as the condition that $\hat\lambda$ is not in the spectrum of the slave eigenproblem (9), or the slave spectrum for short.
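As a minimal illustration (Python/SciPy; the names are ours), the extension (7) is one linear solve with the shifted slave block; the caller must guarantee condition (8):

```python
import numpy as np
from scipy.linalg import solve

def master_slave_extension(Kss, Ksm, Mss, Msm, lam_hat, x_m):
    """Eq. (7): x_s = -(Kss - lam*Mss)^{-1} (Ksm - lam*Msm) x_m.

    Valid only if Kss - lam_hat*Mss is regular, i.e. lam_hat is not
    an eigenvalue of the slave problem (9).
    """
    A = Kss - lam_hat * Mss
    rhs = (Ksm - lam_hat * Msm) @ x_m
    return -solve(A, rhs, assume_a='sym')
```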
While the knowledge of the eigenvalue corresponding to the masterpart of an eigenvector allows the reconstruction of that vector through the master-slave-extension (7), this formula is at the same time the key equation to reduce the original full problem to a condensed form (5): If the eigenvalue $\hat\lambda$ ($\neq$ slave eigenvalue) of the original system was known, then all corresponding eigenvectors with nonvanishing masterpart would be linear combinations of the columns of

$$\begin{bmatrix} I_m \\ P(\hat\lambda) \end{bmatrix} = \begin{bmatrix} I_m \\ -(K_{ss} - \hat\lambda M_{ss})^{-1}(K_{sm} - \hat\lambda M_{sm}) \end{bmatrix},$$

where the first part of the $j$-th column is the $j$-th unit master-vector and the second part is its master-slave extension. Hence subspace projection of the original problem onto the column space of this matrix would lead to a small problem for the masterparts $x_m$ which would contain $\hat\lambda$ as a member of its spectrum and whose corresponding eigenvectors would reconstruct to eigenvectors of the original system by master-slave-extension. Thus this reduction would retain the approximation quality of the large system with respect to the eigeninformation connected with $\hat\lambda$.
Of course an exact eigenvalue $\hat\lambda$ of the original system is not available. Hence one has to use suitable substitutes:

The choice $\hat\lambda = 0$ leads to what is known as static condensation. In this case the master-slave-extension calculates the slaves to satisfy the static equilibrium equation $K_{ss}x_s + K_{sm}x_m = 0$, ignoring the (unknown) inertia forces modelled by the $\lambda$-depending part of the second row of (6). This leads to a condensed system (5) with condensed matrices

$$\begin{aligned} K_0 &:= K_{mm} - K_{ms}K_{ss}^{-1}K_{sm}, \\ M_0 &:= M_{mm} - K_{ms}K_{ss}^{-1}M_{sm} - M_{ms}K_{ss}^{-1}K_{sm} + K_{ms}K_{ss}^{-1}M_{ss}K_{ss}^{-1}K_{sm}, \end{aligned} \qquad (10)$$

where the projected matrix $K_0$ is at the same time the matrix that results from Gaussian elimination of the slaves and is known as the Schur complement of $K_{ss}$ in $K$ (cf. Cottle [6]). The effect of truncating the mass matrices here is certainly less severe for the lower frequencies and, in fact, the lower eigenvalues and (masterparts of the) corresponding eigenmodes are approximated well by this condensation.
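A direct transcription of (10) reuses one factorization of $K_{ss}$ for all solves. The sketch below (our naming; a Cholesky factorization is admissible since $K_{ss}$ is symmetric positive definite) computes both condensed matrices:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def static_condensation(Kmm, Kms, Kss, Mmm, Mms, Mss):
    """Condensed matrices of eq. (10)."""
    c = cho_factor(Kss)
    X = cho_solve(c, Kms.T)             # X = Kss^{-1} Ksm
    K0 = Kmm - Kms @ X                  # Schur complement of Kss in K
    Y = Mms @ X                         # Mms Kss^{-1} Ksm
    M0 = Mmm - Y - Y.T + X.T @ Mss @ X  # eq. (10)
    return K0, M0
```

Note that $X = K_{ss}^{-1}K_{sm}$ is exactly $-P(0)$, the static master-slave extension of the unit master vectors.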
Choosing $\hat\lambda = \sigma$ for some fixed $\sigma \ge 0$ is known as dynamic condensation. The master-slave-extension computes the slave values as if they were modes of the dynamic response $x_s \exp(\sqrt{\sigma}\,i t)$ to dynamically driving masters of the form $x_m \exp(\sqrt{\sigma}\,i t)$. It is expected that dynamic condensation will produce good approximations to eigenmodes with eigenvalues close to $\sigma$. Remember that $\sigma$ will be reproduced as an eigenvalue of the condensed problem if it is an eigenvalue of the original problem (with nonvanishing masterpart of at least one accompanying eigenvector). Formally, dynamic condensation at $\sigma$ can be interpreted as static condensation of the shifted problem $(K - \sigma M)x = (\lambda - \sigma)Mx$. With this knowledge the formulae for $K_0$ and $M_0$ are most comfortably derived from (10).
Finally, letting $\hat\lambda = \lambda$ we find exact condensation. Dynamic condensation is done (implicitly) at the (still unknown) solution $\lambda$. Thereby master-slave-extension will be correct at the unknown solution $\lambda$ (if this is not a slave eigenvalue at the same time). Thus the idea suggests itself that the exactly condensed eigenvalue problem, for which one computes the expression

$$T(\lambda)x_m = 0, \qquad (11)$$

with

$$T(\lambda) = -K_{mm} + \lambda M_{mm} + (K_{ms} - \lambda M_{ms})(K_{ss} - \lambda M_{ss})^{-1}(K_{sm} - \lambda M_{sm}),$$

should be equivalent to the original problem. The following identity holds for all $\lambda$ not in the slave spectrum:

$$\begin{pmatrix} I_m & P^T \\ O & I_s \end{pmatrix} \begin{pmatrix} K_{mm} - \lambda M_{mm} & K_{ms} - \lambda M_{ms} \\ K_{sm} - \lambda M_{sm} & K_{ss} - \lambda M_{ss} \end{pmatrix} \begin{pmatrix} I_m & O \\ P & I_s \end{pmatrix} = \begin{pmatrix} -T(\lambda) & O \\ O & K_{ss} - \lambda M_{ss} \end{pmatrix} \qquad (12)$$

with $P := P(\lambda)$ denoting the prolongation matrix from (7). Thus an eigenvalue of the original equation which is not a slave eigenvalue is an eigenvalue of the exactly condensed problem, and vice versa (since $T(\lambda)$ is only defined for $\lambda$-values not in the slave spectrum).
Notice that the price to be paid for this nice property of the exactly condensed problem is its nonlinearity. The static condensation and the dynamic condensation are approximations of the exact condensation in that they are linearizations of (11) with respect to $\lambda$ at $\lambda = 0$ or $\lambda = \sigma$, respectively. From this we rediscover that dynamic condensation at an eigenvalue $\sigma$ will reproduce this value as an eigenvalue of the condensed problem.

It is interesting to note that the exact condensation can be derived as well by simple insertion of the master-slave-extension into the first (master) row of (6). Conforming to this view, the identity (12) can be interpreted as a block $LDL^T$-decomposition of the dynamical stiffness matrix $K - \lambda M$.
The focus of the present paper is to derive an efficient algorithm for the implementation of the static condensation approach, endowed with some additional features to improve the eigenvalue approximations and to provide rigorous computable bounds of their errors. The latter features will be realized solely through the simple algorithmic ingredients projection (2), (3), master-slave-extension (7) and Rayleigh quotient (4). On the way to this result, given in Section 6, the preliminary sections are devoted to the following issues.
In Section 2 we report on strict error bounds for the eigenvalue approximations from static condensation (5), (10). These can be derived with the aid of a generalization of the Rayleigh quotient (4) for nonlinear eigenvalue problems, the Rayleigh functional ([28], [27], [21]).

Section 3 reports on master-slave-partitioning of the degrees of freedom of a problem. We review several approaches to provide a master-slave-partitioning giving good approximations of the lower eigenvalues with a small number of masters. Most of these try to maximize the minimal slave eigenvalue $\omega$ by an adequate partitioning. This is in agreement with the error bounds of Section 2, which become smaller with increasing $\omega$.

Our primary interest in Section 3 is to comment on the well known fact that choosing the master variables from interfaces of substructures will split the problem into independent subproblems (one for each substructure) and a coupling interface problem. This can be used in parallel computing, and we shall briefly discuss the consequences of such a substructuring for the organization of parallel computations.

Section 4 is devoted to increasing the approximation quality of the eigenvalue estimates from condensation by means of master-slave-extension together with Rayleigh quotient improvement. In Section 5 we review strict error bounds of Krylov-Bogoliubov and of Kato-Temple type.

Section 6 contains the final algorithm. Its application is demonstrated in Section 7 with some numerical examples.
2 A priori error bounds for static condensation
In this section we consider the exactly condensed eigenvalue problem (11). We recall that this problem

$$T(\lambda)x_m = 0, \qquad (13)$$

with

$$T(\lambda) = -K_{mm} + \lambda M_{mm} + (K_{ms} - \lambda M_{ms})(K_{ss} - \lambda M_{ss})^{-1}(K_{sm} - \lambda M_{sm}),$$

is a nonlinear matrix eigenvalue problem of the same dimension $m$ as the statically condensed (linear) eigenvalue problem

$$K_0 u = \mu M_0 u. \qquad (14)$$

Remember that (13) is equivalent to the original linear problem (1) if (1) and the slave eigenvalue problem

$$K_{ss}\phi = \omega M_{ss}\phi \qquad (15)$$

do not have eigenvalues in common.
It is well known that the eigenvalues $\mu_1 \le \mu_2 \le \dots \le \mu_m$ of the statically condensed problem (14) satisfy the minmax principle of Ritz and Poincaré

$$\mu_j = \min_{\dim V = j}\ \max_{u \in V\setminus\{0\}} \frac{u^T K_0 u}{u^T M_0 u}, \qquad (16)$$

where $V$ denotes a subspace of $\mathbb{R}^m$ (cf. [24], e.g.).
Actually, the eigenvalues in the lower part of the spectrum of the nonlinear problem (13) can also be characterized by a minmax principle with a Rayleigh functional, which is the generalization of the Rayleigh quotient for nonlinear eigenvalue problems. In $\mathbb{R}^m$ the values of this functional can be compared with values of the Rayleigh quotient for the statically condensed system. This is the basis for obtaining error bounds for eigenvalues of the latter problem using similar techniques as in the proof of comparison theorems for linear eigenvalue problems.

To be more specific, we denote by $\Phi \in \mathbb{R}^{(s,s)}$ and $\Omega := \mathrm{diag}[\omega_i] \in \mathbb{R}^{(s,s)}$ the modal matrix and the spectral matrix of the slave eigenvalue problem (15), respectively, where $\Phi$ is normalized such that

$$\Phi^T M_{ss}\Phi = I \quad\text{and}\quad \Phi^T K_{ss}\Phi = \Omega. \qquad (17)$$
Then $T(\lambda)$ can be rewritten (cf. Leung [14], Petersmann [20]) as

$$T(\lambda) = -K_0 + \lambda M_0 + S D(\lambda) S^T, \qquad (18)$$

where

$$\begin{aligned} K_0 &:= K_{mm} - K_{ms}K_{ss}^{-1}K_{sm}, \\ M_0 &:= M_{mm} - K_{ms}K_{ss}^{-1}M_{sm} - M_{ms}K_{ss}^{-1}K_{sm} + K_{ms}K_{ss}^{-1}M_{ss}K_{ss}^{-1}K_{sm}, \\ S &:= M_{ms}\Phi - K_{ms}\Phi\,\Omega^{-1}, \\ D(\lambda) &:= \mathrm{diag}\left[\lambda^2/(\omega_i - \lambda)\right]. \end{aligned}$$
Let

$$\omega := \min_{i=1,\dots,s} \omega_i$$

denote the smallest eigenvalue of the slave eigenvalue problem (15) and let $J$ be the open interval $J := (0, \omega)$.

For any fixed vector $u \in \mathbb{R}^m\setminus\{0\}$ we consider the real valued function

$$f(\cdot, u) : J \to \mathbb{R},\quad \lambda \mapsto f(\lambda, u) := u^T T(\lambda) u, \qquad (19)$$

i.e.

$$f(\lambda, u) = -u^T K_0 u + \lambda\, u^T M_0 u + \sum_{i=1}^{s} \frac{\lambda^2}{\omega_i - \lambda}\left(\phi_i^T M_{sm} u - \frac{1}{\omega_i}\phi_i^T K_{sm} u\right)^2. \qquad (20)$$
Since $M_0$ is positive definite, the mapping $\lambda \mapsto f(\lambda, u)$ increases monotonically on $J$ and the nonlinear equation $f(\lambda, u) = 0$ has at most one solution in $J$. Therefore $f(\lambda, u) = 0$ implicitly defines a functional

$$p : \mathbb{R}^m \supset D(p) \to J,\qquad f(p(u), u) = 0,$$

where the domain of definition is given by

$$D(p) := \{u \in \mathbb{R}^m\setminus\{0\} : f(\lambda, u) = 0 \text{ is solvable in } J\}.$$

For linear eigenvalue problems $T(\lambda)u := (K - \lambda M)u = 0$ the functional which is defined by the equation $u^T T(\lambda)u = 0$ is nothing else but the Rayleigh quotient. We therefore call $p$ the Rayleigh functional of the nonlinear eigenvalue problem (13).

Notice that in (20) the function $f(\lambda, u)$ consists of the analogous function $f_0(\lambda, u) := -u^T K_0 u + \lambda\, u^T M_0 u$ for the statically condensed problem plus an additive term, which is always nonnegative for $\lambda$ from the interval $J$. Thus $f_0(\lambda, u) \le f(\lambda, u)$ for all $\lambda$ in $J$. Hence the zero $p(u)$ of $f(\cdot, u)$ (if it exists) is less than or equal to the zero $u^T K_0 u / u^T M_0 u$ of $f_0(\cdot, u)$, which is the Rayleigh quotient at $u$ for the statically condensed eigenproblem.
In accordance with the linear theory, for every eigenvector $u$ of (13) the Rayleigh functional $p(u)$ is the corresponding eigenvalue and the eigenvectors are the stationary vectors of $p$. Moreover, and more important, it is possible to characterize the lower part of the eigenvalues of (13) as minmax values of $p$. In view of the above comparison of $p(u)$ and the Rayleigh quotient, one can infer from such a result that in $J$ the eigenvalues of the statically condensed problem are upper bounds of the eigenvalues of the exactly condensed problem, which in turn are the eigenvalues of the original system (see, in addition, the remark after Theorem 2). The precise statement of the minmax characterization is as follows and is a direct consequence of Theorem 2.1 and Theorem 2.9 in [28]:
Theorem 1
Let $\lambda_1 \le \lambda_2 \le \dots \le \lambda_n$ be the eigenvalues of the linear eigenvalue problem (1) and choose $k \in \mathbb{N}$ such that $\lambda_k \le \omega \le \lambda_{k+1}$.
Let $H_j$ be the set of all $j$-dimensional subspaces of $\mathbb{R}^m$ and denote by $\tilde H_j := \{V \in H_j : V\setminus\{0\} \subset D(p)\}$ the set of $j$-dimensional subspaces of $\mathbb{R}^m$ which are contained in the domain of definition of the Rayleigh functional $p$.
Then $\tilde H_k \neq \emptyset$ and for $j = 1,\dots,k$ the following minmax characterization holds:

$$\lambda_j = \min_{V \in \tilde H_j}\ \max_{u \in V\setminus\{0\}} p(u). \qquad (21)$$
The next theorem from [27] derives error estimates for the eigenvalues of the statically condensed system. The left-hand side inequality there is a direct consequence of the previously noted comparison of the functions $f_0(\lambda, u)$ and $f(\lambda, u)$. For the second inequality one has to make use of the special form $S D(\lambda) S^T$ of the nonlinearity of $T(\lambda)$ in (18).

Theorem 2
Let $\lambda_1 \le \lambda_2 \le \dots \le \lambda_k$ be the eigenvalues of problem (1) contained in $J$ and let $\mu_1 \le \mu_2 \le \dots \le \mu_m$ be the eigenvalues of the reduced problem (14). Then for $j = 1,\dots,k$ one has

$$0 \le \frac{\mu_j - \lambda_j}{\lambda_j} \le \frac{\lambda_j}{\omega - \lambda_j}. \qquad (22)$$

The error bound in (22) demonstrates that we can expect good approximations of eigenvalues $\lambda_j$ by the eigenvalues $\mu_j$ of the condensed problem if the distance of $\lambda_j$ to the spectrum of the slave problem is sufficiently large.
For $\lambda_j \ll \omega$ we get from (22)

$$\frac{\lambda_j}{\omega - \lambda_j} = \sum_{n=1}^{\infty}\left(\frac{\lambda_j}{\omega}\right)^n \approx \frac{\lambda_j}{\omega}.$$

This is the asymptotic error bound which was proved by Thomas [26] using a quadratic approximation of $T(\lambda)$.
It should be noted at the end of this section that the relevance of the interval $J$ has been recognized in lots of earlier papers on the basis of similar and different observations. $J$ has been called the interval in which the Guyan reduction process is valid. This notion seems to result from a Neumann series derivation of the statically condensed problem from the exact condensation (see [30], e.g.). Therein the inverse matrix $(K_{ss} - \hat\lambda M_{ss})^{-1}$ is approximated by the first terms of its series expansion

$$(K_{ss} - \hat\lambda M_{ss})^{-1} = K_{ss}^{-1}\sum_{k=0}^{\infty}\left(\hat\lambda M_{ss}K_{ss}^{-1}\right)^k,$$

which, indeed, is valid for $\hat\lambda \in J$ and diverges for $\hat\lambda$ above its upper boundary $\omega$.
It has additionally been stated that $J$ is of importance because within $J$ the eigenvalues of the statically condensed problem are upper bounds of their corresponding exact values. We followed these arguments in the reasoning for the left-hand side inequality of (22). One should notice, however, that this inequality is as well a consequence of two simpler facts: first, the statically condensed problem is a projection of the original problem, and second, the eigenvalues of projections are always upper bounds for the corresponding original eigenvalues (cf. [24], e.g.).
3 Choice of masters and slaves
We will now discuss ways to partition the components of the vector $x$ into complementary slave- and masterparts $x_s$ and $x_m$, respectively. The size of $x_m$ determines the size of the condensed eigenproblem and hence the final work to be carried out for its eigenanalysis.

Actually, there are two opposite approaches to finding a partitioning with a small number of masters but good approximation properties for the lower eigenvalues.
The first class of methods chooses an increasing set of slaves which, after their successive elimination, produce a sequence of condensed problems of decreasing size. Elimination is stopped when a master problem of a manageable size has been reached. Within each step of the procedure precisely one slave is chosen and eliminated from the previous system. The new slave is found by use of a heuristic that tries to retain the modelling potential of the previous problem with respect to the lower eigenpairs (Henshell and Ong [12], Shaw and Raymund [25]). The criterion to determine its index $j$ is to maximize the quotient of the corresponding diagonal elements $k_{jj}/m_{jj}$. Since this quotient is the minimal (since in fact only) eigenvalue of the one-dimensional slave system, the approximation qualities as measured by the error bounds of Theorem 2 are optimized for each elimination step (though Theorem 2 has not been known to the authors of [12] and [25]). It is then hoped that stepwise optimization for each elimination step will give a good final condensed system (which may be seen, by the way, to be identical to the system which arises by the simultaneous elimination of all slave variables that are chosen in the course of the algorithm). Actually, there may arise difficulties with this approach. One can easily construct (nonacademic) problems where minor changes would lead to totally different partitionings. This can produce instabilities.
Contrary to collecting slaves, the second class of methods forms an increasing sequence of sets of masters with the aim to maximally raise the minimal eigenvalue $\omega$ of the attached slave eigenvalue problem step by step (Bouhaddi and Fillod [3], [4]). Thereby, as Theorem 2 tells us, the approximation potential of the condensed problem will be increased. As is known to every guitar player, the minimum of the lowest eigenfrequencies of the two parts of a vibrating string which is held down by (only) one finger is highest if the finger is put at the node of the second mode of vibration. This is (in case of a homogeneous guitar string) at the same time the middle of the string as well as the point of largest deflection of the first mode. Bouhaddi and Fillod use a generalization of this observation (for the case of discrete and higher dimensioned eigenfunctions, which may well have no node lines within their discrete set of definition) to place the next master at a point where the first mode is large and the second (and some of the further ones) are comparably small.
If within a slave problem with spectrum

$$\omega_1 \le \omega_2 \le \dots \le \omega_s$$

$k \ll s$ additional degrees of freedom are chosen to be masters (and hence are clamped down), the theoretically best result would be to increase $\omega$ from $\omega_1$ to $\omega_{k+1}$. Though the method of Bouhaddi and Fillod seems to work quite well, it certainly will not reach this optimum. This follows easily using the guitar string example again. Having placed one finger at the middle of the string, the optimal position of two fingers (dividing the string into three equal portions) cannot be reached without first removing that finger.
Actually, it is possible to sequentially impose conditions which gain the optimal increase of $\omega$ at every single step. Such conditions can, for example, be realized by clamping down the (generalized) components in the direction of the eigenvectors themselves. The necessity to compute the eigenvectors does not devaluate this method of modal masters as compared to Bouhaddi's and Fillod's approach, since they need these vectors too. More than this, they have to recompute the vectors after the introduction of a new master. Though they try to save computing time by approximately combining the new vectors from the old ones, the modal master method is better since the clamping of a modal master does not change the following modes. Of course, this approach is not without its own difficulties. It already poses some problems to define the condensed eigenvalue problems with these masters. We hope to report on such approaches in a subsequent paper soon.
For the rest of this section we shall concentrate on master-slave partitions which are defined by substructures of the problem. Such partitionings are not new in the study of condensation methods for eigenvalue problems. They have already been treated in Leung [15], Zehn [31], [32], and Bouhaddi and Fillod [3], [4], e.g., and of course substructuring is well known in static structure analysis. In the above cited paper [4] of Bouhaddi and Fillod the reported choice of masters has actually been performed for every substructure of a substructured problem.

Even though substructuring is thus well known, we shall briefly comment on its computational merits here, since we shall need the notation and the formulae later.
Suppose that the structure is decomposed into $r$ substructures and that the vibration problem is discretized (by finite elements or finite differences) in correspondence to the substructure decomposition (i.e. $k_{ij} = 0$ and $m_{ij} = 0$ whenever $i$ and $j$ denote indices of interior nodes of different substructures). We choose the degrees of freedom which are on the boundaries of the substructures and some of the interior degrees of freedom as masters, and the remaining ones as slaves (cf. Section 7).
If the variables are numbered in an appropriate way, the stiffness and mass matrices have the following block form

$$K = \begin{bmatrix} K_{mm} & K_{ms_1} & K_{ms_2} & \dots & K_{ms_r} \\ K_{sm_1} & K_{ss_1} & O & \dots & O \\ K_{sm_2} & O & K_{ss_2} & \dots & O \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ K_{sm_r} & O & O & \dots & K_{ss_r} \end{bmatrix}, \qquad (23)$$

$$M = \begin{bmatrix} M_{mm} & M_{ms_1} & M_{ms_2} & \dots & M_{ms_r} \\ M_{sm_1} & M_{ss_1} & O & \dots & O \\ M_{sm_2} & O & M_{ss_2} & \dots & O \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ M_{sm_r} & O & O & \dots & M_{ss_r} \end{bmatrix}, \qquad (24)$$

where the coupling of the slave variables and the masters of the $j$-th substructure is given by the matrices $K_{sm_j} = K_{ms_j}^T$ and $M_{sm_j} = M_{ms_j}^T$, respectively. Note that only those columns of $K_{sm_j}$ and $M_{sm_j}$ which correspond to master variables of the $j$-th substructure can be different from the null vector.
From the block structure of the matrices $K$ and $M$ in (23) and (24) we obtain the reduced matrices

$$K_0 = K_{mm} - \sum_{j=1}^{r} K_{mm_j}, \qquad M_0 = M_{mm} - \sum_{j=1}^{r} M_{mm_j}, \qquad (25)$$

where

$$K_{mm_j} := K_{ms_j}K_{ss_j}^{-1}K_{sm_j} \qquad (26)$$

and

$$M_{mm_j} := K_{ms_j}K_{ss_j}^{-1}M_{sm_j} + M_{ms_j}K_{ss_j}^{-1}K_{sm_j} - K_{ms_j}K_{ss_j}^{-1}M_{ss_j}K_{ss_j}^{-1}K_{sm_j}. \qquad (27)$$

Obviously the matrices $K_{mm_j}$ and $M_{mm_j}$ can be evaluated substructurewise (and hence completely in parallel) in the following way: Solve simultaneously the systems of linear equations

$$K_{ss_j}X_j = K_{sm_j}, \qquad (28)$$

where only those columns of $K_{sm_j}$ have to be taken into account which belong to master variables of the $j$-th substructure, and determine

$$\begin{aligned} K_{mm_j} &= K_{ms_j}X_j, \\ Y_j &:= M_{ms_j}X_j, \\ M_{mm_j} &= Y_j + Y_j^T - X_j^T M_{ss_j}X_j. \end{aligned} \qquad (29)$$
The above formulae give an idea how to compile the matrices $K_0$ and $M_0$ efficiently in parallel if the number of processors is equal to or less than the number $r$ of substructures. If there are many more processors, then several of them have to share in the work connected with a substructure.

On the other hand, for small numbers of processors and for rather coarse discretizations, where the size of the condensed problem is approximately the size of the subproblems, one has to parallelize the solution of the condensed eigenproblem. As is shown by an example in Rothe and Voss [22], the total computing time can otherwise be considerably enlarged. A parallel algorithm to solve the condensed eigenproblem is given in Rothe and Voss [23].
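The substructure-wise evaluation (26)-(29) is embarrassingly parallel. The following sketch (Python/SciPy, written sequentially for clarity; the data layout is our assumption) shows one worker's task and the final assembly of (25):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def substructure_contribution(Kss_j, Ksm_j, Mss_j, Msm_j):
    """Contributions K_mm,j and M_mm,j of eqs. (26)-(29) for one substructure."""
    Xj = cho_solve(cho_factor(Kss_j), Ksm_j)  # eq. (28)
    Kmm_j = Ksm_j.T @ Xj                      # = Kms_j X_j, eq. (29)
    Yj = Msm_j.T @ Xj                         # = Mms_j X_j
    Mmm_j = Yj + Yj.T - Xj.T @ Mss_j @ Xj     # eq. (29)
    return Kmm_j, Mmm_j

def condensed_matrices(Kmm, Mmm, subs):
    """Assemble K0 and M0 of eq. (25) from per-substructure contributions."""
    K0, M0 = Kmm.copy(), Mmm.copy()
    for Kss_j, Ksm_j, Mss_j, Msm_j in subs:
        Kmm_j, Mmm_j = substructure_contribution(Kss_j, Ksm_j, Mss_j, Msm_j)
        K0 -= Kmm_j
        M0 -= Mmm_j
    return K0, M0
```

Each tuple in `subs` holds the blocks of one substructure; in a distributed setting each call of `substructure_contribution` would run on the processor owning that substructure.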
It is worth noting that with the above substructuring the slave eigenvalue problem decouples into $r$ independent eigenvalue problems

$$K_{ss_j}\phi_j = \omega M_{ss_j}\phi_j, \qquad j = 1,\dots,r. \qquad (30)$$

The minimal slave eigenvalue $\omega$ is the minimum of the minimal slave eigenvalues of the $r$ substructures:

$$\omega = \min_{j=1,\dots,r}\omega_j.$$

Hence an increase of $\omega$ can again be achieved in parallel by raising the individual $\omega_j$, choosing additional masters from those substructures with small $\omega_j$ using the algorithm of Bouhaddi and Fillod, e.g. The corresponding rows and columns of the stiffness and mass matrices (23) and (24), respectively, are then transferred by simultaneous interchange of rows and columns from the substructure parts to the master rows and columns.
For the above we assumed that a substructuring of the structure is given from the beginning. For most problems of structural analysis, as well as for domain decomposition approaches in partial differential equations, it makes sense to assume this. If this is not the case, then the construction of a useful substructuring can be quite cumbersome. Several approaches have been described, for example, in Heath, Ng and Peyton [11]. It is interesting that some of these need the solution of eigenvalue problems with matrices of a similar structure as that of the given problem.
4 Projective improvement of eigenvalues
It is well known that the eigenvectors of problem (1) are stationary points of the Rayleigh quotient

$$R(x) = \frac{x^T K x}{x^T M x}, \qquad (31)$$

and the corresponding eigenvalues are the critical values of $R$ at the eigenvectors. This means that the derivative of $R$ with respect to $x$ at an eigenvector $x^i$ is zero.

More precisely, from the Ritz-Poincaré characterization (16) one sees that the smallest and largest eigenvalues are the minimum and maximum of $R$, respectively, and the intermediate eigenvalues are values of $R$ at its saddle points.

Since the graph of the Rayleigh quotient has a horizontal tangential plane at an eigenvector, it is reasonable that evaluation of the quotient at a point not too far from an eigenvector will lead to a reasonable approximation of the accompanying eigenvalue. In fact a small error of size $\varepsilon$ for the eigenvector approximation will lead to an error of size $\varepsilon^2$ for the Rayleigh approximation of the corresponding eigenvalue.
Assume that we have found from the statically condensed problem (5), (10) a set of $m$ eigenpairs

$$(\mu_1, x_m^1),\dots,(\mu_m, x_m^m).$$

(We shall call them the c-eigenvalues and c-eigenvectors, respectively, for short.) Then, if both $\mu_j$ and $x_m^j$ are not too bad approximations to an eigenvalue $\lambda_j$ and the masterpart of a corresponding eigenvector $x^j$, respectively, it may be expected that master-slave extension (7) with these data should lead to not too bad an approximation

$$x(\mu_j, x_m^j) := \begin{bmatrix} x_m^j \\ P(\mu_j)x_m^j \end{bmatrix} \qquad (32)$$

of the eigenvector $x^j$. Insertion of this approximation into the Rayleigh quotient is expected to deliver an acceptable approximation

$$\rho(\mu_j, x_m^j) := R(x(\mu_j, x_m^j)) = \frac{x(\mu_j, x_m^j)^T K\, x(\mu_j, x_m^j)}{x(\mu_j, x_m^j)^T M\, x(\mu_j, x_m^j)} \qquad (33)$$

to $\lambda_j$. Actually, this procedure works well, and a similar enhancement has already been suggested by Wright and Miles in [30].
Master-slave extension of $x_m^j$, using for $\hat\lambda$ in (7) the eigenvalue approximation $\mu_j$ instead of the value $\hat\lambda = 0$, increases the quality of the gained (full space) eigenvector approximation and hence the quality of the corresponding Rayleigh approximation of the eigenvalue. Knowing this, it is natural to iterate this procedure, i.e. to execute

$$\mu_j^{k+1} := \rho(\mu_j^k, x_m^j), \qquad k \ge 0, \qquad \mu_j^0 := \mu_j. \qquad (34)$$

Alas, a first look at the errors of the resulting sequence of approximations is disappointing. There is no (relevant) further increase in the approximation quality after the first step.

However, the disappointment vanishes on analyzing the sequence and its limit more carefully. It then turns out that, for those pairs with c-eigenvalue below $\omega$, the limit is the Rayleigh functional $p(x_m^j)$, which is approximated by the sequence $\{\mu_j^k\}_{k\in\mathbb{N}}$ with a quadratic order of convergence. To be more precise, one has the following lemma.
Lemma 3
For $u \in \mathbb{R}^m$, $u \neq 0$, let $x(\sigma, u)$ and $\rho(\sigma, u)$ be defined in $\mathbb{R}\setminus\{\omega_1,\dots,\omega_s\}$ by (32) and (33). Furthermore, in consistence with (19), let

$$f(\sigma, u) := x(\sigma, u)^T(\sigma M - K)\,x(\sigma, u) \quad\text{for } \sigma \in \mathbb{R}\setminus\{\omega_1,\dots,\omega_s\}.$$

Then the following assertions hold.

(i) $\sigma^* \in \mathbb{R}\setminus\{\omega_1,\dots,\omega_s\}$ is a fixed point of $\rho(\cdot, u)$ (i.e. $\sigma^* = \rho(\sigma^*, u)$) if and only if it is a zero of $f(\cdot, u)$ (i.e. $f(\sigma^*, u) = 0$).

(ii) If $\sigma^* \in \mathbb{R}\setminus\{\omega_1,\dots,\omega_s\}$ is a fixed point of $\rho(\cdot, u)$, then

$$\left.\frac{\partial}{\partial\sigma}\rho(\sigma, u)\right|_{\sigma=\sigma^*} = 0, \qquad (35)$$

such that the iteration

$$\sigma_{k+1} := \rho(\sigma_k, u) \qquad (36)$$

is locally convergent to $\sigma^*$ with (at least) quadratic order of convergence.

(iii) The Rayleigh functionals $p(u)$ from Theorem 1 are zeroes of $f(\cdot, u)$ and can hence be approximated by use of (36) with local and quadratic convergence.
Proof:
If $u \neq 0$ then $x(\sigma, u) \neq 0$ and hence $x(\sigma, u)^T M x(\sigma, u) \neq 0$. Thus

$$\begin{aligned}
\sigma = \rho(\sigma, u) &\iff \sigma = \frac{x(\sigma,u)^T K\, x(\sigma,u)}{x(\sigma,u)^T M\, x(\sigma,u)} \\
&\iff \frac{x(\sigma,u)^T[K - \sigma M]\,x(\sigma,u)}{x(\sigma,u)^T M\, x(\sigma,u)} = 0 \\
&\iff \frac{u^T T(\sigma)u}{x(\sigma,u)^T M\, x(\sigma,u)} = 0 \\
&\iff \frac{f(\sigma,u)}{x(\sigma,u)^T M\, x(\sigma,u)} = 0 \\
&\iff f(\sigma,u) = 0,
\end{aligned}$$

which proves (i).

To see (35), differentiate the identity

$$x(\sigma,u)^T M\, x(\sigma,u)\,\rho(\sigma,u) = x(\sigma,u)^T K\, x(\sigma,u)$$

with respect to $\sigma$, where $x_\sigma$ denotes the derivative of $x(\cdot,u)$. This gives

$$2x_\sigma(\sigma,u)^T M\, x(\sigma,u)\,\rho(\sigma,u) + x(\sigma,u)^T M\, x(\sigma,u)\,\rho_\sigma(\sigma,u) = 2x_\sigma(\sigma,u)^T K\, x(\sigma,u).$$

Inserting $\sigma = \sigma^*$ and using $\rho(\sigma^*,u) = \sigma^*$, we get

$$\rho_\sigma(\sigma^*,u) = \frac{2}{x(\sigma^*,u)^T M\, x(\sigma^*,u)}\; x_\sigma(\sigma^*,u)^T(K - \sigma^* M)\,x(\sigma^*,u). \qquad (37)$$

Since

$$(\sigma^* M - K)\,x(\sigma^*,u) = \begin{pmatrix} T(\sigma^*)u \\ 0 \end{pmatrix}$$

and

$$x_\sigma(\sigma,u) = \frac{\partial}{\partial\sigma}\begin{pmatrix} u \\ P(\sigma)u \end{pmatrix} = \begin{pmatrix} 0 \\ P'(\sigma)u \end{pmatrix},$$

one has

$$x_\sigma(\sigma^*,u)^T(K - \sigma^* M)\,x(\sigma^*,u) = 0.$$

Hence eqn. (35) follows from eqn. (37).
The statement about local and quadratic convergence is a consequence of $\rho(\cdot,u)$ being $C^1$ near $\sigma^*$, eqn. (35) and Ostrowski's local convergence theory (cf. Ortega and Rheinboldt [18], e.g.).
Part (iii) of the lemma is trivial.
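A minimal sketch of the iteration (36) (Python/NumPy; the partitioning argument `n_m` and the function names are ours) makes the quadratic convergence easy to observe experimentally:

```python
import numpy as np
from scipy.linalg import solve

def extend(K, M, n_m, sigma, u):
    """Full-space vector x(sigma, u) of eq. (32) via the extension (7)."""
    Kss, Ksm = K[n_m:, n_m:], K[n_m:, :n_m]
    Mss, Msm = M[n_m:, n_m:], M[n_m:, :n_m]
    xs = -solve(Kss - sigma * Mss, (Ksm - sigma * Msm) @ u, assume_a='sym')
    return np.concatenate([u, xs])

def rho(K, M, n_m, sigma, u):
    """The map of eq. (33): Rayleigh quotient of the extended vector."""
    x = extend(K, M, n_m, sigma, u)
    return (x @ (K @ x)) / (x @ (M @ x))

def refine(K, M, n_m, mu, u, steps=3):
    """Iteration (36); by Lemma 3 it converges locally and quadratically."""
    sigma = mu
    for _ in range(steps):
        sigma = rho(K, M, n_m, sigma, u)
    return sigma
```

By Lemma 3, two or three sweeps started at a c-eigenvalue below $\omega$ already reproduce the Rayleigh functional $p(u)$ to working accuracy.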
The last part of Lemma 3 explains why the iterative refinement (36) does not reduce the error of the values from eqn. (33) significantly. Due to the quadratic convergence of the iteration to the corresponding Rayleigh functional, the distance between the first iterate and the Rayleigh functional is below the distance between the Rayleigh functional and the approximated eigenvalue of the original problem. Thus the value $\rho(\mu_j, x_m^j)$ normally behaves already like the Rayleigh functional. Hence most of the observations from Rothe and Voss [21] for enhancements of c-eigenvalues by the use of the Rayleigh functional carry over to these approximations. Among these is the empirical

'1%-rule':
For those eigenpairs $(\mu_j, x_m^j)$ of the statically condensed eigenvalue problem with $\mu_j \in (0,\ 0.5\,\omega)$, the improved values $\rho(\mu_j, x_m^j)$ have relative errors less than 1%.
Remarks: Some remarks are in order:
1. It should be noted that care has to be taken with the application of condensational approximation of eigenvalues and their Rayleigh quotient enhancement. Though the smallest slave eigenvalue $\omega$ is not explicitly needed within the computation of the values, one has to have an idea of its size for an interpretation of the computed approximations.
2. By the above 1%-rule, only the improvements of the c-eigenvalues from $(0,\ 0.5\,\omega)$ will give 1%-accurate approximations to the smallest eigenvalues of the original problem.
3. Further, only the set of c-eigenvalues below $\omega$ (or their $\rho$-improvements) will approximate the set of lowest eigenvalues of the original problem. We encountered simple examples where all the condensed values $\mu_j$, $j = 1,\dots,m$, were larger than $\omega$. In these cases the iteration (34) converged to a fixed point of the function $\rho(\cdot, x_m^j)$. This very often turned out to be a moderately good approximation to a higher eigenvalue. However, none converged to an approximation less than $\omega$. From the interlacing property of eigenvalues of eigenproblems and their constrained versions (cf. [9], [19]) one knows that there is always at least one eigenvalue of the original problem which is less than or equal to $\omega$. Hence this value can be missed by condensation if one accepts the (improved) c-eigenvalues as approximations to the lowest eigenvalues of the original problem without paying attention to the relative size of $\omega$.
4. For the Rayleigh functionals one has (cf. [21])

$$p(x_m^j) \le \mu_j \quad\text{for all } \mu_j \le \omega.$$

The same holds for the $\rho$-improvements:

$$\rho(\mu_j, x_m^j) \le \mu_j \quad\text{for all } \mu_j \le \omega.$$

If the calculation of $\omega$ appears to be too expensive, then, the other way round, an indication for $\omega$ to be less than $\mu_k$ is the occurrence of the inequality

$$\mu_k \le \rho(\mu_k, x_m^k). \qquad (38)$$

For the eigenvalues $\mu_j$, $j = 1,\dots,m$, of the statically (as well as of the dynamically) condensed problem one knows that these are upper bounds of the corresponding eigenvalues $\lambda_j$, $j = 1,\dots,m$, of the original problem (1):

$$\mu_j \ge \lambda_j, \qquad j = 1,\dots,m.$$

In a similar way as for the values $p(x_m^j)$ of the Rayleigh functional, this useful property is sometimes lost by $\rho$-improvement (for $j > 1$).

It is, however, an easy undertaking to reinstall such inequalities. One only has to ensure that the approximations result as the eigenvalues of a projection of problem (1). Recalling from Section 1 that the $\rho$-improvement (33), as the Rayleigh quotient at $x(\mu_j, x_m^j)$, can be interpreted as the eigenvalue of the one-dimensional projection of (1) onto the space spanned by $x(\mu_j, x_m^j)$, it is natural to generalize this approach to the projection of problem (1) onto the space spanned by

$$x(\mu_1, x_m^1),\dots,x(\mu_k, x_m^k), \qquad k \le m,$$

if approximations to the first $r \le k$ eigenvalues are desired.
Notice that the resulting projected problem

$$(K_p - \theta M_p)x_p = 0 \qquad (39)$$

with

$$K_p = \left(x(\mu_i, x_m^i)^T K\, x(\mu_j, x_m^j)\right)_{i,j=1,\dots,k}, \qquad M_p = \left(x(\mu_i, x_m^i)^T M\, x(\mu_j, x_m^j)\right)_{i,j=1,\dots,k} \qquad (40)$$

is neither a static nor a dynamic condensation, since the $\hat\lambda$-values which are used for the extensions change with the masters to be prolongated. It is, however, a projection of the original problem, which we call the condensation-projection of problem (1). Due to its projection property, its eigenvalues $\theta_1,\dots,\theta_k$ are upper bounds of $\lambda_1,\dots,\lambda_k$.

Notice further that the extra computational effort for this calculation is not very high. The diagonal entries of the matrices $K_p$ and $M_p$ are just the numerators and denominators of the Rayleigh quotients computed for the $\rho$-improvements. For their computation all the vectors $Kx(\mu_j, x_m^j)$, $Mx(\mu_j, x_m^j)$, $j = 1,\dots,k$, have already to be calculated, and thus the setup of the rest of the matrices $K_p$ and $M_p$ requires just $k(k-1)$ extra inner products. Assuming that the eigenvalue analysis of the condensed problems of size $k$ requires an amount of work proportional to $k^3$, the additional eigenanalysis of (39) does not count.
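In matrix form the setup of (40) is two block inner products. The sketch below (our notation: the columns of `Xext` are the extended vectors $x(\mu_j, x_m^j)$) also returns the embedded eigenvector approximations needed later in Section 5:

```python
import numpy as np
from scipy.linalg import eigh

def condensation_projection(K, M, Xext):
    """Projected problem (39), (40)."""
    KX, MX = K @ Xext, M @ Xext        # already needed for the Rayleigh quotients
    Kp, Mp = Xext.T @ KX, Xext.T @ MX  # eq. (40)
    theta, Phi = eigh(Kp, Mp)          # eigenpairs of eq. (39)
    return theta, Xext @ Phi           # upper bounds and embedded eigenvectors
```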
In Section 6 we will incorporate this projective refinement of $\mu_j$, $j = 1,\dots,k$, into our final algorithm. We expect then the approximations to be upper bounds of the original values and to have 1% accuracy. While the first expectation is known to be always true, the second one is up to now just a rule of thumb. Though it is a good one, one has to use rigorous error bounds if absolutely secure information is needed. One can use the a posteriori bounds of the following section.
5 A posteriori error bounds
In this section we describe rigorous a posteriori error bounds which can be computed at negligible additional cost using the substructuring and the information from the condensation process. The final estimation procedure is a combination of generalized Krylov-Bogoliubov bounds, Kato-Temple bounds, the minmax characterization of eigenvalues and a generalization of Sylvester's law of inertia by Wittrick and Williams [29].
Krylov-Bogoliubov and Kato-Temple bounds for eigenvalues have been considered for the generalized eigenvalue problem already in a paper of Matthies [16], where they are applied to bounding the errors of approximations obtained by subspace iteration or by the Lanczos method. Geradin [7] and Geradin and Carnoy [8] introduced these bounds for statically condensed eigenvalue problems.

A generalization of the Krylov-Bogoliubov bounds to clustered eigenvalues of the symmetric special eigenvalue problem

$$Ax = \lambda x, \qquad A \in \mathbb{R}^{(n,n)},\ A^T = A, \qquad (41)$$

due to Kahan can be found in [19], page 219, and reads as follows.
Theorem 4
Let $Q \in \mathbb{R}^{(n,m)}$ have orthonormal columns. Associated with $Q$ and the system matrix $A$ from the eigenproblem (41) are the projected matrix

$$H := Q^T A Q$$

and its residual matrix

$$R := AQ - QH.$$

Then there are $m$ of $A$'s eigenvalues $\{\lambda_{j'},\ j = 1,\dots,m\}$ which can be put in one-to-one correspondence with the eigenvalues $\theta_j$, $j = 1,\dots,m$, of $H$ in such a way that

$$|\theta_j - \lambda_{j'}| \le \|R\|_2, \qquad j = 1,\dots,m.$$
We need a special case of this result, which we formulate as

Corollary 5
If the columns of $Q$ from Theorem 4 are eigenvector approximations from a projection approximation of problem (41) by projection onto $\mathrm{span}\{Q\}$ such that

$$H = Q^T A Q = \mathrm{diag}[\theta_1,\dots,\theta_m]$$

and

$$R = (r^1,\dots,r^m) \quad\text{with}\quad r^k := Aq^k - \theta_k q^k,\qquad k = 1,\dots,m,$$

then

$$|\theta_j - \lambda_{j'}| \le \|R\|_2 \le \|R\|_F := \sqrt{\sum_{i=1}^{m}\|r^i\|_2^2}. \qquad (42)$$
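For orthonormal Ritz data the bound (42) is one residual evaluation; a minimal sketch (Python/NumPy, our naming, assuming the columns of `Q` are orthonormal):

```python
import numpy as np

def krylov_bogoliubov_bound(A, Q, theta):
    """Frobenius bound of Corollary 5: ||R||_F with r^k = A q^k - theta_k q^k."""
    R = A @ Q - Q * theta            # theta broadcasts over the columns of Q
    return np.linalg.norm(R, 'fro')  # >= ||R||_2, eq. (42)
```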
A generalized eigenvalue problem

$$Ax = \lambda Bx, \qquad A, B \in \mathbb{R}^{(n,n)} \text{ symmetric},\ B \text{ positive definite}, \qquad (43)$$

may be transformed via

$$B^{-1/2}AB^{-1/2}\,B^{1/2}x = \lambda B^{1/2}x,$$

with $\tilde A := B^{-1/2}AB^{-1/2}$ and $y := B^{1/2}x$, to the equivalent special eigenvalue problem

$$\tilde A y = \lambda y. \qquad (44)$$
Applying Corollary 5 to this problem one arrives at

Corollary 6
Let the columns of $Y \in \mathbb{R}^{(n,m)}$ be $B$-orthonormal projective approximations of eigenvectors of the generalized problem (43) with corresponding eigenvalue approximations $\theta_1,\dots,\theta_m$, i.e. let

$$Y^T B Y = I_m \quad\text{and}\quad Y^T A Y = \mathrm{diag}[\theta_1,\dots,\theta_m].$$

Then there are $m$ eigenvalues $\{\lambda_{j'},\ j = 1,\dots,m\}$ of problem (43) which can be put in one-to-one correspondence with $\theta_j$, $j = 1,\dots,m$, in such a way that

$$|\theta_j - \lambda_{j'}| \le \sqrt{\sum_{i=1}^{m}\left\|B^{-1/2}\left(Ay^i - \theta_i By^i\right)\right\|_2^2}.$$
We could apply the last corollary directly to our problem (1), putting $A := K$ and $B := M$. The special case $m = 1$ would then lead to the bounds given in [16]. In accordance with [7] and [8] we prefer, however, to apply the corollary to the equivalent problem

$$Mx = \frac{1}{\lambda}Kx. \qquad (45)$$

In the first case we would have to use the discrete differential operator $M^{-1}K$ in the course of the calculations. In the latter case this operator is replaced by the discrete integral operator $K^{-1}M$, which, in contrast to $M^{-1}K$, is a smoothing operator and therefore yields better numerical results for the modes corresponding to low frequencies. Moreover, $K^{-1}M$ can be calculated at negligible cost using the information from the condensation process.
Having solved the eigenvalue problem (39) and having embedded the $M_p$-orthogonal eigenvectors of that system into the original space $\mathbb{R}^n$, one encounters at the end of the condensation-projection method from Section 4 the following situation:

There are $m$ vectors $\tilde x^j$, $j = 1,\dots,m$, which approximate eigenvectors of (1). These vectors are $M$- and $K$-orthogonal:

$$\begin{aligned} \tilde X^T M \tilde X &= \mathrm{diag}\left[(\tilde x^1)^T M \tilde x^1,\dots,(\tilde x^m)^T M \tilde x^m\right] =: \mathrm{diag}[m_1,\dots,m_m], \\ \tilde X^T K \tilde X &= \mathrm{diag}\left[(\tilde x^1)^T K \tilde x^1,\dots,(\tilde x^m)^T K \tilde x^m\right] =: \mathrm{diag}[k_1,\dots,k_m]. \end{aligned} \qquad (46)$$

Furthermore, the condensation-projection approximations $\theta_j$ of the corresponding eigenvalues are the Rayleigh quotients of the $\tilde x^j$:

$$\theta_j = \frac{(\tilde x^j)^T K \tilde x^j}{(\tilde x^j)^T M \tilde x^j} = \frac{k_j}{m_j}, \qquad j = 1,\dots,m.$$
Thus Corollary 6 applies with

$$A := M, \qquad B := K, \qquad y^i := k_i^{-1/2}\,\tilde x^i,$$

the eigenvalue approximations $\theta_i^{-1}$ and the eigenvalues $\lambda_i^{-1}$, and gives

$$\left|\frac{1}{\theta_j} - \frac{1}{\lambda_{j'}}\right| \le \sqrt{\sum_{i\in I}\frac{1}{k_i}\left\|K^{-1/2}\left(M\tilde x^i - \theta_i^{-1}K\tilde x^i\right)\right\|_2^2}, \qquad j \in I, \qquad (47)$$

where $I$ is any subset of $\{1,\dots,m\}$.

Putting

$$\tau_i := \frac{(\tilde x^i)^T M K^{-1} M \tilde x^i}{(\tilde x^i)^T M \tilde x^i} \qquad (48)$$

one finds that

$$\frac{1}{k_i}\left\|K^{-1/2}\left(M\tilde x^i - \theta_i^{-1}K\tilde x^i\right)\right\|_2^2 = (\theta_i\tau_i - 1)/\theta_i^2.$$

If we finally transform the bounds for the $\lambda_j^{-1}$ into bounds for the desired eigenvalues themselves, we arrive at the following
Lemma 7
Let $\tilde x^1,\dots,\tilde x^m$ be the eigenvector approximations for the eigenvectors of problem (1) derived by the condensation-projection method and let $\theta_i = (\tilde x^i)^T K \tilde x^i/(\tilde x^i)^T M \tilde x^i$, $i = 1,\dots,m$, be the corresponding eigenvalue approximations.
Then for any subset $I$ of the index set $\{1,\dots,m\}$ there are eigenvalues $\lambda_{j'}$, $j \in I$, of problem (1) that can be put in one-to-one correspondence with $\theta_j$, $j \in I$, such that

$$\frac{\theta_j}{1 + \theta_j E(I)} \le \lambda_{j'} \le \frac{\theta_j}{1 - \theta_j E(I)}, \qquad j \in I, \qquad (49)$$

where

$$E(I) := \sqrt{\sum_{i\in I}(\theta_i\tau_i - 1)/\theta_i^2}$$

and $\tau_i$ is defined in (48).
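Computationally, Lemma 7 costs one $K$-solve per index (for $\tau_i$ from (48)) plus a few inner products. A sketch of the interval computation for one cluster $I$ (Python/SciPy; the names and the dense Cholesky factorization are our assumptions):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def tau_of(K_cho, M, x):
    """Quantity (48): tau = (x^T M K^{-1} M x) / (x^T M x)."""
    v = M @ x
    return (v @ cho_solve(K_cho, v)) / (v @ x)

def cluster_bounds(K, M, Xtil, theta, I):
    """Lower/upper bounds (49) for the cluster of indices I."""
    K_cho = cho_factor(K)
    taus = np.array([tau_of(K_cho, M, Xtil[:, i]) for i in I])
    th = theta[I]
    E = np.sqrt(np.sum((th * taus - 1.0) / th**2))  # E(I) from Lemma 7
    lower = th / (1.0 + th * E)
    upper = th / (1.0 - th * E)                     # meaningful only if th*E < 1
    return lower, upper
```

The upper bounds are only meaningful while $\theta_j E(I) < 1$; as remarked below, they are replaced by the better bound $\theta_j$ itself.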
Remark: For the last inequality in (49) one has to assume that $1 > \theta_j E(I)$. It is not necessary to discuss this condition at length, since we will replace this upper bound by a better one derived from minmax theory.

Let $\tilde x^1,\dots,\tilde x^k$ be approximate eigenvectors which are obtained by the condensation-projection method introduced in Section 4 and let $\theta_j := R(\tilde x^j)$, $j = 1,\dots,k$, be the eigenvalue approximations from problem (39). Then, since (39) is a projected problem of the original eigenvalue problem (1), one has

$$\theta_j \ge \lambda_j, \qquad j = 1,\dots,k. \qquad (50)$$
If the index set $I$ in Lemma 7 contains just one index, $I = \{j\}$, then the Krylov-Bogoliubov bound (49) yields that each interval

$$\left[\frac{\theta_j}{1 + \sqrt{\theta_j\tau_j - 1}},\ \frac{\theta_j}{1 - \sqrt{\theta_j\tau_j - 1}}\right] \qquad (51)$$

contains at least one eigenvalue of problem (1).

If these intervals are disjoint and if exactly $k$ eigenvalues of (1) are contained in $(0, \theta_k]$, then there is exactly one eigenvalue in each of these intervals, and from (50) one gets that even each of the intervals

$$\left[\frac{\theta_j}{1 + \sqrt{\theta_j\tau_j - 1}},\ \theta_j\right] \qquad (52)$$

contains exactly one eigenvalue $\lambda_j$.
The number of eigenvalues of eqn. (1) which are less than a given parameter $\tilde\lambda$, with the eigenvalues counted by multiplicity, can be obtained by the following result of Wittrick and Williams [29]:

Theorem 8
Suppose that $\tilde\lambda$ is not an eigenvalue of the slave problem (9).
Let $N(\tilde\lambda)$ and $N_0(\tilde\lambda)$ be the number of eigenvalues of eqn. (1) and of the slave eigenvalue problem (9), respectively, which are less than $\tilde\lambda$, and denote by $s(-T(\tilde\lambda))$ the number of negative eigenvalues of the matrix $-T(\tilde\lambda)$, with $T$ from (11).
Then

$$N(\tilde\lambda) = N_0(\tilde\lambda) + s(-T(\tilde\lambda)). \qquad (53)$$
The proof follows immediately from the fact that by eqn. (12) the matrices

$$K - \tilde\lambda M \quad\text{and}\quad \begin{bmatrix} -T(\tilde\lambda) & O \\ O & K_{ss} - \tilde\lambda M_{ss} \end{bmatrix}$$

are congruent and thus by Sylvester's law of inertia (cf. [9]) have the same number of negative eigenvalues.
Remark: Notice that we are only interested in the case $\tilde\lambda \le \omega$ (i.e. $N_0(\tilde\lambda) = 0$) and that the number of negative eigenvalues of $-T(\tilde\lambda)$ can be computed easily as the number of negative diagonal entries in the LR-decomposition of $-T(\tilde\lambda)$, which is obtained without pivoting. The major cost of the application of Theorem 8 therefore consists in the evaluation of $T(\tilde\lambda)$.
Now assume that we want to find approximations to the first $Q\ (\le m)$ eigenvalues of (1) together with rigorous bounds of their errors. If we assure, with the aid of the previous remark, that there are precisely $Q$ eigenvalues less than $\theta_Q$ and that the intervals (52) are disjoint, then each of these intervals for $j = 1,\dots,Q$ contains exactly one of these eigenvalues.

If these intervals are not disjoint, one has to use Lemma 7 in its full generality to form $Q$ possibly intersecting intervals

$$[\beta_1, \gamma_1], [\beta_2, \gamma_2],\dots,[\beta_Q, \gamma_Q]$$

in a one-to-one correspondence with the eigenvalues

$$\lambda_1 \le \lambda_2 \le \dots \le \lambda_Q$$

such that

$$\lambda_j \in [\beta_j, \gamma_j], \qquad j = 1,\dots,Q.$$
To achieve this, one starts with the application of Lemma 7 to the pair $(\theta_Q, \tilde x^Q)$ with $I := \{Q\}$ to find the interval $[\theta_Q/(1 + \sqrt{\theta_Q\tau_Q - 1}),\ \theta_Q]$. If the lower bound of this interval is larger than $\theta_{Q-1}$, then this interval will be disjoint from all the intervals to come and will contain the eigenvalue $\lambda_Q$. If the interval contains $\theta_{Q-1}$, then Lemma 7 is reinvoked with the enlarged set $I := I \cup \{Q-1\}$ to find two intervals $[\theta_{Q-1}/(1 + \theta_{Q-1}E(I)),\ \theta_{Q-1}]$ and $[\theta_Q/(1 + \theta_Q E(I)),\ \theta_Q]$.
If $\theta_{Q-2} < \theta_{Q-1}/(1 + \theta_{Q-1}E(I))$, then the union of these intervals is disjoint from all intervals to come, one has

$$\lambda_j \in \left[\frac{\theta_j}{1 + \theta_j E(I)},\ \theta_j\right], \qquad j = Q-1,\ Q,$$

and one can start the process again with $\theta_{Q-2}$ instead of $\theta_Q$, analyzing now the pair $(\theta_{Q-2}, \tilde x^{Q-2})$ with the one-element index set $I := \{Q-2\}$ in Lemma 7 again.
If this is not the case, then the index set $I$ is further increased. The computational details will be given in the next section. The algorithm for error estimation presented there incorporates as an additional device the application of Kato-Temple-type bounds (cf. [16], [7], [8]).
Assume that (by the above procedure, e.g.) the following situation is given, with a computable lower bound $\beta_{j+1}$ of $\lambda_{j+1}$:

$$\lambda_j \le \theta_j < \beta_{j+1} \le \lambda_{j+1}. \qquad (54)$$

Let $\tilde x^j$ be the eigenvector approximation associated with $\theta_j = R(\tilde x^j)$ and let

$$\tilde x^j = \sum_{k=1}^{n}\alpha_k x^k$$

be the representation of $\tilde x^j$ by the $M$-orthogonal eigenvectors $x^k$, $k = 1,\dots,n$.

Using $(K - \lambda M)x^j = (\lambda_j - \lambda)Mx^j$, the positivity of the eigenvalue $\lambda_j$, and $K^{-1}Mx^j = \lambda_j^{-1}x^j$, for $j = 1,\dots,n$, one easily sees that

$$\left[(K - \beta_{j+1}M)\tilde x^j\right]^T K^{-1}\left[(K - \lambda_j M)\tilde x^j\right] = \sum_{k=1}^{n}(\lambda_k - \beta_{j+1})(\lambda_k - \lambda_j)\frac{\alpha_k^2}{\lambda_k}(x^k)^T M x^k \ge 0. \qquad (55)$$

Evaluation of the left hand side and division by $(\tilde x^j)^T M \tilde x^j$ leads to

$$\theta_j - (\lambda_j + \beta_{j+1}) + \lambda_j\beta_{j+1}\tau_j \ge 0,$$

which in turn produces the (lower) Kato-Temple bound

$$\lambda_j \ge \frac{\beta_{j+1} - \theta_j}{\beta_{j+1}\tau_j - 1}. \qquad (56)$$

A similar upper Kato-Temple bound could likewise be computed if, in analogy to (54), a gap between $\lambda_j$ and $\lambda_{j-1}$ could be specified. Since only the given estimate (56) will be applied, we do not go into the details of that estimate.
6 A prototype condensation-projection algorithm

In this section we collect the results of the previous ones to form a condensation algorithm for generalized eigenvalue problems with projective refinement and an a posteriori error estimation. In order to ease implementation we write down the full problem a second time.
The problem is to determine a group of smallest eigenvalues of the generalized eigenvalue problem

$$(K - \lambda M)x = 0$$

with the symmetric positive definite matrices $K, M \in \mathbb{R}^{(n,n)}$ by condensation. The group should contain approximations to all eigenvalues less than a prespecified bound $\bar\lambda > 0$ with relative errors of approximately one percent, or, if these eigenvalues cannot all be approximated to such a precision, approximations to those first eigenvalues for which such a precision can be reached.
Let the stiffness matrix $K$ and the mass matrix $M$ have the following block structure

$$K = \begin{bmatrix} K_{mm} & K_{ms_1} & K_{ms_2} & \dots & K_{ms_r} \\ K_{sm_1} & K_{ss_1} & O & \dots & O \\ K_{sm_2} & O & K_{ss_2} & \dots & O \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ K_{sm_r} & O & O & \dots & K_{ss_r} \end{bmatrix} \qquad (57)$$

and

$$M = \begin{bmatrix} M_{mm} & M_{ms_1} & M_{ms_2} & \dots & M_{ms_r} \\ M_{sm_1} & M_{ss_1} & O & \dots & O \\ M_{sm_2} & O & M_{ss_2} & \dots & O \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ M_{sm_r} & O & O & \dots & M_{ss_r} \end{bmatrix}, \qquad (58)$$

respectively, where for the nonzero blocks

$$\begin{aligned} &K_{mm}, M_{mm} \in \mathbb{R}^{(m,m)}, \qquad K_{ss_j}, M_{ss_j} \in \mathbb{R}^{(s_j,s_j)}, \quad j = 1,\dots,r, \\ &K_{sm_j}, M_{sm_j} \in \mathbb{R}^{(s_j,m)}, \qquad K_{ms_j}, M_{ms_j} \in \mathbb{R}^{(m,s_j)}, \quad j = 1,\dots,r, \end{aligned}$$

with

$$n = m + (s_1 + \dots + s_r).$$

This form of the matrices arises, e.g., by dividing the original problem into $r$ substructures with one coupling interface structure.

Then $K_{ss_j}$ and $M_{ss_j}$ are the stiffness and mass matrices, respectively, of the $j$-th substructure, $K_{mm}$ and $M_{mm}$ are the corresponding matrices of the coupling interface system, and the matrices $K_{sm_j}$, $M_{sm_j}$ collect the influence of the interface system on the $j$-th substructure, while $K_{ms_j}$, $M_{ms_j}$ describe the influence of this substructure on the coupling system.
The block structure of the matrices $K$ and $M$ is conserved if additionally interior degrees of freedom of the substructures are chosen as master variables. By a proper choice of interior masters the minimal slave eigenvalue is raised substantially, thus improving the eigenvalue and eigenvector approximations and enlarging that part of the spectrum that is approximated well (cf. Section 7).

We will write down the algorithm making explicit use of the structures (57) and (58) to show that most of the computations can be done in parallel.
CONDENSATION-PROJECTION ALGORITHM:

Input: The nonzero submatrices of $K$ and $M$ from (57) and (58). A threshold $\bar\lambda$ for the eigenvalues of the problem to be approximated.

Output: Number $q$ of eigenvalues from the lower part of the spectrum (equal to the number of eigenvalues below $\bar\lambda$ or less), that can (most probably) be approximated with a relative error less than 1%. Approximate eigenvalues $\theta_1 \le \dots \le \theta_q$ and approximations $\tilde x^1,\dots,\tilde x^q$ of corresponding eigenvectors. Optional lower and upper bounds $\beta_j \le \lambda_j \le \gamma_j$ for the first $q$ eigenvalues.
1. Static condensation of the problem.
Determine the LU-decompositions of the stiffness matrices $K_{ss_j}$:

$$K_{ss_j} =: L_j R_j, \qquad j = 1,\dots,r. \qquad (59)$$

Using these decompositions, determine $X_j$ from

$$K_{ss_j}X_j = K_{sm_j}, \qquad j = 1,\dots,r.$$

Compute

$$\begin{aligned} K_{mm_j} &:= K_{ms_j}X_j, & j = 1,\dots,r, \\ Y_j &:= M_{ms_j}X_j, & j = 1,\dots,r, \\ M_{mm_j} &:= Y_j + Y_j^T - X_j^T M_{ss_j}X_j, & j = 1,\dots,r, \end{aligned}$$

and collect

$$K_0 := K_{mm} - \sum_{j=1}^{r}K_{mm_j}, \qquad M_0 := M_{mm} - \sum_{j=1}^{r}M_{mm_j}.$$
2. Estimate minimal slave eigenvalue.
For $j := 1$ to $r$ estimate the smallest eigenvalue $\omega_j$ of the eigenvalue problem

$$(K_{ss_j} - \omega M_{ss_j})z_j = 0.$$

(Apply, e.g., inverse iteration, making use of the LU-decomposition (59).)
Put

$$\omega := \min_{j=1,\dots,r}\omega_j.$$

Let

$$\delta := \begin{cases} \bar\lambda\,\omega/(\omega - \bar\lambda) & \text{if } \bar\lambda \le \omega/3, \\ \omega/2 & \text{otherwise.} \end{cases}$$

Remarks: (i) From the a priori bound in Theorem 2 one obtains that in the case $\bar\lambda \le \omega/3$ all eigenvalues less than $\bar\lambda$ have c-eigenvalues less than $\delta$. So those c-eigenvalues will be used for projective enhancement. Otherwise we use all c-eigenvalues which can be improved to 1% accuracy by the 1%-rule, i.e. those c-eigenvalues below $\omega/2$.
(ii) If in the second case the maximal final eigenvalue approximation does not exceed $\bar\lambda$, a message to the user is given.
3. Determine appropriate number of eigenvalues of condensed problem.
Let $\mu_1 \le \dots \le \mu_m$ denote the eigenvalues of the condensed problem

$$K_0 u = \mu M_0 u \qquad (60)$$

and put $\mu_0 := 0$ and $\mu_{m+1} := +\infty$.
Let $q$ denote the nonnegative integer which satisfies

$$\mu_0 \le \dots \le \mu_q \le \delta < \mu_{q+1}.$$

If $q = 0$ then STOP.
Otherwise put $q := \min\{q, m-1\}$ and $Q := q + \kappa$ with a small nonnegative integer $\kappa$, such that $Q \le m$.
Compute the $Q$ smallest eigenvalues $\mu_1,\dots,\mu_Q$ of (60) together with the corresponding eigenvectors $u^j$, $j = 1,\dots,Q$.

Remarks: (i) $\mu_1,\dots,\mu_q$ will be those eigenvalues less than the prescribed bound $\delta$ for which, hopefully, approximations $\theta_1,\dots,\theta_q$ with a relative error of about 1% and less can be computed by condensation.
The additional approximations $\mu_{q+1},\dots,\mu_Q$ will be computed to increase the quality of the final error estimates, if such are desired. If one wants to save computing time, one can trust the 1%-rule, put $\kappa = 0$ and skip most of the error estimation subprocedure from item 8.
(ii) The determination of $q$ and $Q$ will probably have to be done simultaneously with the computation of the eigenpairs. One can think of subspace iterations, e.g., with increasing subspace dimensions.
4. Prolongate the eigenvectors of the condensation.
Compute

$$v_j^k := -(K_{ss_j} - \mu_k M_{ss_j})^{-1}(K_{sm_j} - \mu_k M_{sm_j})u^k, \qquad j = 1,\dots,r,\quad k = 1,\dots,Q.$$

Notice that the prolongation $x(\mu_k, u^k)$ of $u^k$ to the full space is

$$x(\mu_k, u^k) = \left[(u^k)^T, (v_1^k)^T,\dots,(v_r^k)^T\right]^T,$$

wherein $v_j^k$ is the part of that vector associated with the $j$-th substructure.
5. Project problem (1) to $\mathrm{span}\{x(\mu_j, u^j) : j = 1,\dots,Q\}$.
Using the partition of the involved data, one gets for $L \in \{K, M\}$ the projected matrices

$$L_p := \left((u^i)^T L_{mm}u^j + \sum_{k=1}^{r}\left[(u^i)^T L_{ms_k}v_k^j + (v_k^i)^T L_{sm_k}u^j + (v_k^i)^T L_{ss_k}v_k^j\right]\right)_{i,j=1}^{Q}.$$
6. Solve the projected problem

$$K_p\phi^j = \theta_j M_p\phi^j, \qquad j = 1,\dots,Q. \qquad (61)$$

If $\theta_q < \bar\lambda$, print a message that possibly not all eigenvalues below $\bar\lambda$ are approximated by the computed $\theta_1,\dots,\theta_q$.
7. Embed the eigenvectors $\phi^j$ into the full space.
For $j := 1$ to $Q$ do

$$\tilde x^j := \sum_{k=1}^{Q}\left[(u^k)^T,(v_1^k)^T,\dots,(v_r^k)^T\right]^T\phi_k^j, \qquad j = 1,\dots,Q, \qquad (62)$$

and let

$$\tilde x^j =: \left[(x_m^j)^T,(x_{s_1}^j)^T,\dots,(x_{s_r}^j)^T\right]^T, \qquad j = 1,\dots,Q.$$

The final approximate eigenpairs are now given by

$$(\theta_j, \tilde x^j), \qquad j = 1,\dots,q.$$

Remark: The additional pairs $(\theta_j, \tilde x^j)$, $j = q+1,\dots,Q$, are used within the error estimates.
8. Error estimation.
If one decides not to compute rigorous error bounds but instead to trust the 1%-rule to save computing time, one should at least perform the following first step of this procedure:
Calculate with Theorem 8 the number $N$ of eigenvalues less than $\theta_Q$.
If $N$ is larger than $Q$: Write a message saying that there are eigenvalues below $\theta_Q$ which are not approximated by the condensation method. STOP.
Detection of $N > Q$ is bad, since the calculated information is incomplete. This bad case cannot be excluded if no special routine for choosing the masters is included. An eigenvector cannot be detected by condensation if, e.g., its master part vanishes identically. In this case the error estimation procedure will in general not give trustworthy results and should hence not be run.
If $N = Q$, however, it will produce intervals $[\beta_j, \gamma_j]$, $j = 1,\dots,Q$, such that definitively

$$\beta_j \le \lambda_j \le \gamma_j, \qquad j = 1,\dots,Q.$$
For the estimates of the errors the quantities
j := (~
xj)tMK?1M~
xj
(~
xj)tM~
xj
as de ned in (48) will be needed. We start with a description of
how to compute these entities if the sti ness and mass matrices
have the block form given in (57) and (58) and the approximate
eigenvector ~
x is given in its partitioned form
~
x= (~
xt
m; ~
xt
s1;:::; ~
xt
sr)t
where ~
xm denotes the master portion of ~
xand ~
xsj the slave portion
corresponding to the j-th substructure.
Function TAU(x̃): Let

    v := M x̃ = ( M_{mm} x̃_m + Σ_{j=1}^{r} M_{ms,j} x̃_{sj} ;
                 ... ;
                 M_{sm,j} x̃_m + M_{ss,j} x̃_{sj} ;
                 ... )

and let w := K^{−1} M x̃ = K^{−1} v, where

    w = ( w_m^t, w_{s1}^t, ..., w_{sr}^t )^t .

Then block Gaussian elimination yields

    K_0 w_m = v_m − Σ_{j=1}^{r} K_{ms,j} K_{ss,j}^{−1} v_{sj} ,
    K_{ss,j} w_{sj} = v_{sj} − K_{sm,j} w_m ,   j = 1,...,r ,

where K_0 denotes the reduced matrix given in eqn. (25). Put

    τ := v^t w / v^t x̃ .

Remark: Notice that in the condensation process systems of equations with the system matrices K_0 and K_{ss,j} have already been solved, and therefore LR-decompositions of these matrices are at hand. Hence τ can be calculated at negligible cost.
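In the dense setting of the previous sketches the function TAU reads, e.g., as follows (lu_K0 and lu_Kss are the factorizations of K_0 and of the K_ss,j that are at hand from the condensation; all names are illustrative):

    import numpy as np
    from scipy.linalg import lu_solve

    def tau(x_m, x_s, Mmm, Msm, Mss, Ksm, lu_K0, lu_Kss):
        """tau = (x~^t M K^{-1} M x~) / (x~^t M x~) via the block elimination
        described above; x_s, Msm, Mss, Ksm, lu_Kss are per-substructure lists."""
        # v := M x~, assembled blockwise
        v_m = Mmm @ x_m + sum(Msm_j.T @ x_sj for Msm_j, x_sj in zip(Msm, x_s))
        v_s = [Msm_j @ x_m + Mss_j @ x_sj
               for Msm_j, Mss_j, x_sj in zip(Msm, Mss, x_s)]
        # w := K^{-1} v by block Gaussian elimination
        rhs = v_m - sum(Ksm_j.T @ lu_solve(lu_j, v_sj)
                        for Ksm_j, lu_j, v_sj in zip(Ksm, lu_Kss, v_s))
        w_m = lu_solve(lu_K0, rhs)
        w_s = [lu_solve(lu_j, v_sj - Ksm_j @ w_m)
               for Ksm_j, lu_j, v_sj in zip(Ksm, lu_Kss, v_s)]
        # tau = v^t w / v^t x~
        vtw = v_m @ w_m + sum(v_sj @ w_sj for v_sj, w_sj in zip(v_s, w_s))
        vtx = v_m @ x_m + sum(v_sj @ x_sj for v_sj, x_sj in zip(v_s, x_s))
        return vtw / vtx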
With the FUNCTION-subroutine TAU(·) available we can now write down the error estimation procedure.

    ν_0 := −1;  z_0 := 0;  B := 0;
    for i := 1 to Q do z_i := 1/ν_i;
    for j := Q downto 0 do
    begin
      if (B < z_j or j = 0) then
      begin  {final bound for cluster j+1,...,I}
        if j < Q then
        begin
          for i := j+1 to I−1 do
            ℓ_i := ν_i / (1 + ν_i W);
          ℓ_I := max{ ℓ_I , ν_I / (1 + ν_I W) };
        end;
        if j > 0 then
        begin  {start a new cluster}
          I := j;
          τ := TAU(x̃_j);
          R := (τ ν_j − 1) / ν_j^2;
          W := sqrt(R);
          B := z_j + W;
        end;
        if j > 0 and j < Q then
        begin  {compute Kato-Temple bounds}
          ℓ_j := (ℓ_{j+1} − ν_j) / (ℓ_{j+1} τ − 1);
          B := min{ B , 1/ℓ_j };
        end;
      end
      else
      begin  {update Kahan-Krylov-Bogoliubov bounds}
        τ := TAU(x̃_j);
        R := R + (τ ν_j − 1) / ν_j^2;
        W := sqrt(R);
        B := z_j + W;
      end;
    end;
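The following Python transcription of the procedure may help to see the cluster logic; nu[1..Q] are the Ritz values of (61) in ascending order, tau_of(j) evaluates TAU(x̃_j), and the returned values are the lower bounds ℓ_j (a sketch kept close to the pseudocode rather than optimized):

    import math

    def lower_bounds(nu, tau_of):
        """Kahan-Krylov-Bogoliubov and Kato-Temple lower bounds ell_j <= lambda_j.
        nu is 1-based: nu[0] is a dummy, nu[1..Q] are the Ritz values."""
        Q = len(nu) - 1
        z = [0.0] + [1.0 / nu[j] for j in range(1, Q + 1)]
        ell = [0.0] * (Q + 2)
        B = R = W = t = 0.0
        I = 0
        for j in range(Q, -1, -1):
            if B < z[j] or j == 0:
                if j < Q:                      # final bounds for cluster j+1,...,I
                    for i in range(j + 1, I):
                        ell[i] = nu[i] / (1.0 + nu[i] * W)
                    ell[I] = max(ell[I], nu[I] / (1.0 + nu[I] * W))
                if j > 0:                      # start a new cluster
                    I = j
                    t = tau_of(j)
                    R = (t * nu[j] - 1.0) / nu[j] ** 2
                    W = math.sqrt(R)
                    B = z[j] + W
                if 0 < j < Q:                  # Kato-Temple bound for index j
                    ell[j] = (ell[j + 1] - nu[j]) / (ell[j + 1] * t - 1.0)
                    B = min(B, 1.0 / ell[j])
            else:                              # grow the cluster: Kahan's update
                t = tau_of(j)
                R += (t * nu[j] - 1.0) / nu[j] ** 2
                W = math.sqrt(R)
                B = z[j] + W
        return ell[1:Q + 1]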
7 Numerical examples

7.1 Tapered cantilever beam

We consider the transversal vibrations of a tapered cantilever beam of length 1 with area of cross section A(x) := A_0 (1 − 0.5x)^2, 0 ≤ x ≤ 1. The problem is described by the eigenvalue problem

    ((1 − 0.5x)^4 y'')'' = λ (1 − 0.5x)^2 y ,
    y(0) = y'(0) = y''(1) = y'''(1) = 0 ,                                (63)

where λ = ω^2 ρ A_0 / (E I_0), A_0 and I_0 are the area of the cross section and the moment of inertia at x = 0, respectively, ρ is the mass per unit volume, E is the modulus of elasticity and ω denotes the natural circular frequencies of the beam.

Fig. 1. Substructuring of the tapered cantilever beam

We discretized eqn. (63) by finite elements with cubic Hermite splines (beam elements), divided the beam into 6 substructures of identical length and subdivided each substructure into 5 elements of identical length. Thus, problem (1) has dimension n = 60 and is condensed to dimension m = 12. The 6 substructure eigenproblems (15) are of dimension s/r = 8 and the minimal slave eigenvalue is ω ≈ 190200.
By the 1%-rule 6 eigenvalues can be obtained with an error less than 1%. Table 1 contains the eigenvalue approximations ν_j, j = 1,...,6, from the condensation-projection method and their relative errors. We added the relative errors of the eigenvalue approximations that are obtained by projection to the 12-dimensional space spanned by the prolongations of all eigenvectors of the condensed problem. The improvement of the approximations using the higher dimension is insignificant. Hence, differently from the subspace iteration, one does not have to use higher dimensional subspaces to obtain satisfactory eigenvalue approximations (for the subspace iteration it is recommended to use subspaces of dimension min(2k, k+8) to obtain good approximations of the k smallest eigenvalues, cf. Bathe [1]). A similar behaviour was observed in all examples that we treated.
Tab. 1. Tapered cantilever beam; dimensions 6 and 12

  j   ν_j                    (ν_j − λ_j)/λ_j (dim = 6)   (ν_j − λ_j)/λ_j (dim = 12)
  1   2.13920157650317E+01   3.52E−13                    9.17E−14
  2   3.82109582756324E+02   1.38E−09                    6.55E−10
  3   2.35992737167125E+03   2.51E−07                    2.34E−07
  4   8.42988817779263E+03   8.58E−06                    7.78E−06
  5   2.23209130540396E+04   8.56E−05                    4.05E−05
  6   4.91603912747807E+04   3.39E−03                    2.83E−03
Table 2 contains the lower bounds ℓ_j of the smallest 6 eigenvalues obtained from the algorithm in Section 5 (which are all Kato-Temple bounds) using the condensed-projected problem of dimension 7, as well as the relative distance (ν_j − ℓ_j)/ν_j of these bounds to the upper bounds ν_j. The last column contains the relative distance of the bounds if the problem is projected to a 12-dimensional space, demonstrating that the gain of accuracy is not significant. Moreover, a comparison with Table 1 demonstrates that the Kato-Temple bounds are realistic.
Tab. 2. Tapered cantilever beam; lower bounds

  j   ℓ_j                    (ν_j − ℓ_j)/ν_j (dim = 6)   (ν_j − ℓ_j)/ν_j (dim = 12)
  1   2.13920157650236E+01   3.76E−13                    1.05E−13
  2   3.82109582126386E+02   1.65E−09                    7.81E−10
  3   2.35992655024818E+03   3.47E−07                    3.25E−07
  4   8.42977535282817E+03   1.28E−05                    1.24E−05
  5   2.23177154291946E+04   1.29E−04                    7.14E−04
  6   4.87866533772251E+04   7.35E−03                    5.76E−03
7.2 Clamped plate; only interface masters

Free vibrations of a uniform thin elastic plate covering the region Ω := [0,2] × [0,2] and clamped at its boundary ∂Ω are governed by the eigenvalue problem

    Δ^2 u = λ u   in Ω ,
    u = ∂u/∂n = 0   on ∂Ω ,                                              (64)

where λ = ω^2 ρ h / D, ρ is the mass density of the plate material, h the thickness and D the flexural rigidity of the plate, and ω denotes the circular frequency of a free vibration. The region Ω is divided into r = 9 quadratic substructures of edge length 2/3 (cf. Fig. 2).
Each substructure is discretized by 16 quadratic Bogner-Fox-Schmidt elements (node variables u, u_x, u_y, u_{xy}) and the master variables are chosen to be the degrees of freedom on the boundaries of the substructures. Thus problem (1) is of dimension n = 484 and after condensation we retain m = 160 master variables. Each substructure has s/r = 36 slave variables and the minimal slave eigenvalue is ω ≈ 6581.9.
Fig. 2. Substructuring of the clamped plate on [0,2] × [0,2] into 9 substructures (filled nodes mark the master nodes on the substructure interfaces, open nodes the interior slave nodes)
10 eigenvalues of the condensed problem are contained in (0, 0.5ω). Tab. 3 contains the approximate eigenvalues ν_j, their relative errors and their relative distances to the lower bounds ℓ_j from Section 6, where the dimension of the projected problem is 16. The last column contains the relative distances of the improved bounds ℓ'_j discussed below.

Tab. 3. Clamped plate; only interface masters

  j    ν_j           (ν_j − λ_j)/λ_j   (ν_j − ℓ_j)/ν_j   (ν_j − ℓ'_j)/ν_j
  1    80.938897     8.70E−08          4.25E−07          1.14E−07
  2    336.753546    6.14E−06          3.51E−03          1.14E−05
  3    336.753546    6.14E−06          1.14E−05          1.14E−05
  4    732.281575    1.20E−04          4.03E−04          4.03E−04
  5    1083.377527   2.80E−04          1.61E−02          1.61E−02
  6    1095.397716   1.87E−03          5.69E−03          5.11E−03
  7    1703.859234   4.83E−04          7.77E−02          8.93E−03
  8    1708.859234   4.83E−04          9.68E−03          8.93E−03
  9    2805.335439   1.03E−02          1.31E−01          3.93E−02
 10    2805.335439   1.03E−02          3.93E−02          3.93E−02
The algorithm from Section 6 yielding lower bounds detects the double eigenvalues λ_2 = λ_3, λ_7 = λ_8 and λ_9 = λ_{10} as clusters of two eigenvalues. The relative distances (ν_j − ℓ_j)/ν_j of the upper and lower bounds are much better for j = 3, 8, 10 than those for j = 2, 7, 9, respectively, since in these cases the Kato-Temple bounds apply. If it is known in advance that λ_2 = λ_3, λ_7 = λ_8 and λ_9 = λ_{10} are double eigenvalues, then the Kato-Temple bounds can be computed for these pairs and can be used in the calculation of the Kato-Temple bounds of λ_1 and λ_6. The relative distances of these improved bounds ℓ'_j are contained in the last column of Table 3.
Again the Kato-Temple bounds are realistic, with the exception of the fifth eigenvalue, which is quite close to the sixth one.
7.3 Clamped plate; interface and interior masters

Again we consider the clamped plate of Example 7.2 with the same discretization as before. As master variables we choose the degrees of freedom on the boundaries of the substructures and additionally the displacement in the center of each substructure, raising the dimension of the condensed problem to m = 169 and the minimal slave eigenvalue to ω ≈ 27747. In this case 22 eigenvalues of the condensed problem are contained in the interval (0, 0.5ω).
Tab. 4 contains the approximate eigenvalues ν_j, their relative errors and their relative distances to the lower Kato-Temple bounds ℓ_j, which were obtained taking advantage of the knowledge that the eigenvalues λ_2, λ_7, λ_9, λ_{14} and λ_{18} are double eigenvalues. The dimension of the projected problem is 23.

Tab. 4. Clamped plate; interface and interior masters

  j       ν_j            (ν_j − λ_j)/λ_j   (ν_j − ℓ_j)/ν_j
  1       80.938890      4.93E−09          6.48E−09
  2/3     336.751582     3.03E−07          5.62E−07
  4       732.195843     2.66E−06          8.16E−06
  5       1083.091322    1.60E−05          1.67E−03
  6       1093.373316    1.66E−05          4.58E−05
  7/8     1703.134131    5.76E−05          1.51E−04
  9/10    2777.391326    2.39E−04          2.85E−03
  11      3030.929515    5.27E−04          3.30E−03
  12      3674.593069    4.38E−04          2.04E−02
  13      3704.839141    5.37E−04          1.79E−03
  14/15   5515.901661    2.88E−03          5.23E−02
  16      6002.292169    1.01E−03          3.04E−02
  17      6016.217086    1.66E−03          1.15E−02
  18/19   7303.330418    2.53E−03          2.57E−02
  20      8720.547076    9.14E−03          9.01E−02
  21      9773.861784    9.26E−03          9.40E−02
  22      9821.369621    8.47E−03          8.90E−02
References

[1] Bathe, K. J., 'Finite Element Procedures in Engineering Analysis', Prentice-Hall, Englewood Cliffs 1982

[2] Bathe, K. J. and Wilson, E. L., 'Numerical Methods in Finite Element Analysis', Prentice-Hall, Englewood Cliffs 1976
[3] Bouhaddi, N. and Fillod, R., 'Substructuring Using a Linearized Dynamic Condensation Method', Computers & Structures 45, pp. 679 – 683, 1992
[4] Bouhaddi, N. and Fillod, R., 'A Method for Selecting Master DOF in Dynamic Substructuring Using the Guyan Condensation Method', Computers & Structures 45, pp. 941 – 946, 1992

[5] Chen, S.-H. and Pan, H. H., 'Guyan Reduction', Comm. Appl. Numer. Meth. 4, pp. 549 – 556, 1988

[6] Cottle, R. W., 'Manifestations of the Schur Complement', Lin. Alg. Appl. 8, pp. 189 – 211, 1974

[7] Geradin, M., 'Error Bounds for Eigenvalue Analysis by Elimination of Variables', J. of Sound and Vibration 19, pp. 111 – 132, 1971

[8] Geradin, M. and Carnoy, E., 'On the Practical Use of Eigenvalue Bracketing in Finite Element Applications to Vibration and Stability Problems', in EUROMECH 112, Hungarian Academy of Sciences, Budapest 1979, pp. 151 – 172

[9] Golub, G. H. and van Loan, C. F., 'Matrix Computations', 2nd edition, The Johns Hopkins University Press, Baltimore 1989

[10] Guyan, R. J., 'Reduction of Stiffness and Mass Matrices', AIAA J. 3, p. 380, 1965

[11] Heath, M., Ng, E. and Peyton, B., 'Parallel Algorithms for Sparse Linear Systems', SIAM Review 33, pp. 420 – 460, 1991

[12] Henshell, R. D. and Ong, J. H., 'Automatic Masters for Eigenvalue Economization', Earthquake Engineering and Structural Dynamics 3, pp. 375 – 383, 1975

[13] Irons, B., 'Structural Eigenvalue Problems: Elimination of Unwanted Variables', AIAA J. 3, pp. 961 – 962, 1965

[14] Leung, Y. T., 'An Accurate Method of Dynamic Condensation in Structural Analysis', Internat. J. Numer. Meth. Engrg. 12, pp. 1705 – 1715, 1978

[15] Leung, Y. T., 'An Accurate Method of Dynamic Substructuring with Simplified Computation', Internat. J. Numer. Meth. Engrg. 14, pp. 1241 – 1256, 1979

[16] Matthies, H. G., 'Computable Error Bounds for the Generalized Symmetric Eigenvalue Problem', Comm. Appl. Numer. Meth. 1, pp. 33 – 38, 1985

[17] Noor, A. K., 'Recent Advances and Applications of Reduction Methods', to appear in Applied Mechanics Reviews

[18] Ortega, J. M. and Rheinboldt, W. C., 'Iterative Solution of Nonlinear Equations in Several Variables', Academic Press, New York, 1970

[19] Parlett, B. N., 'The Symmetric Eigenvalue Problem', Prentice-Hall, Englewood Cliffs, N.J., 1980

[20] Petersmann, N., 'Substrukturtechnik und Kondensation bei der Schwingungsanalyse', Fortschrittberichte VDI, Reihe 11: Schwingungstechnik, Nr. 76, VDI Verlag, Düsseldorf, 1986

[21] Rothe, K. and Voss, H., 'Improving Condensation Methods for Eigenvalue Problems via Rayleigh Functional', Comp. Meth. Appl. Mech. Engrg. 111, pp. 169 – 183, 1994

[22] Rothe, K. and Voss, H., 'Improved Dynamic Substructuring on Distributed Memory MIMD-Computers', in: Application of Supercomputers in Engineering III (Ed. Brebbia, C. A. and Power, H.), Computational Mechanics Publications, pp. 339 – 352, Elsevier, London, 1993

[23] Rothe, K. and Voss, H., 'A Fully Parallel Condensation Method for Generalized Eigenvalue Problems on Distributed Memory Computers', submitted to Parallel Computing

[24] Saad, Y., 'Numerical Methods for Large Eigenvalue Problems', Manchester University Press, Manchester, 1992

[25] Shah, V. N. and Raymund, M., 'Analytic Selection of Masters for the Reduced Eigenvalue Problem', Internat. J. Numer. Meth. Engrg. 18, pp. 89 – 98, 1982

[26] Thomas, D. L., 'Errors in Natural Frequency Calculations Using Eigenvalue Economization', Internat. J. Numer. Meth. Engrg. 18, pp. 1521 – 1527, 1982

[27] Voss, H., 'An Error Bound for Eigenvalue Analysis by Nodal Condensation', in: Numerical Treatment of Eigenvalue Problems 3 (Ed. Albrecht, J., Collatz, L. and Velte, W.), Internat. Series Numer. Math. 69, pp. 205 – 214, Birkhäuser, Stuttgart, 1983

[28] Voss, H. and Werner, B., 'A Minimax Principle for Nonlinear Eigenvalue Problems with Applications to Nonoverdamped Systems', Math. Meth. Appl. Sci. 4, pp. 415 – 422, 1982

[29] Wittrick, W. H. and Williams, F. W., 'A General Algorithm for Computing Natural Frequencies of Elastic Structures', Quart. J. Mech. Appl. Math. 24, pp. 263 – 284, 1971

[30] Wright, G. C. and Miles, G. A., 'An Economical Method for Determining the Smallest Eigenvalues of Large Linear Systems', Internat. J. Numer. Meth. Engrg. 3, pp. 25 – 33, 1971

[31] Zehn, M., 'Substruktur-/Superelementtechnik für die Eigenschwingungsberechnung dreidimensionaler Modelle', Technische Mechanik 4, pp. 56 – 63, 1983

[32] Zehn, M., 'Dynamische FEM-Strukturanalyse mit Substrukturtechnik', Technische Mechanik 9, pp. 245 – 253, 1988
44

More Related Content

Similar to A Condensation-Projection Method For The Generalized Eigenvalue Problem

Quantum algorithm for solving linear systems of equations
 Quantum algorithm for solving linear systems of equations Quantum algorithm for solving linear systems of equations
Quantum algorithm for solving linear systems of equationsXequeMateShannon
 
Exact Matrix Completion via Convex Optimization Slide (PPT)
Exact Matrix Completion via Convex Optimization Slide (PPT)Exact Matrix Completion via Convex Optimization Slide (PPT)
Exact Matrix Completion via Convex Optimization Slide (PPT)Joonyoung Yi
 
Application of the Monte-Carlo Method to Nonlinear Stochastic Optimization wi...
Application of the Monte-Carlo Method to Nonlinear Stochastic Optimization wi...Application of the Monte-Carlo Method to Nonlinear Stochastic Optimization wi...
Application of the Monte-Carlo Method to Nonlinear Stochastic Optimization wi...SSA KPI
 
Approximation in Stochastic Integer Programming
Approximation in Stochastic Integer ProgrammingApproximation in Stochastic Integer Programming
Approximation in Stochastic Integer ProgrammingSSA KPI
 
HOME ASSIGNMENT omar ali.pptx
HOME ASSIGNMENT omar ali.pptxHOME ASSIGNMENT omar ali.pptx
HOME ASSIGNMENT omar ali.pptxSayedulHassan1
 
Illustration Clamor Echelon Evaluation via Prime Piece Psychotherapy
Illustration Clamor Echelon Evaluation via Prime Piece PsychotherapyIllustration Clamor Echelon Evaluation via Prime Piece Psychotherapy
Illustration Clamor Echelon Evaluation via Prime Piece PsychotherapyIJMER
 
The Sample Average Approximation Method for Stochastic Programs with Integer ...
The Sample Average Approximation Method for Stochastic Programs with Integer ...The Sample Average Approximation Method for Stochastic Programs with Integer ...
The Sample Average Approximation Method for Stochastic Programs with Integer ...SSA KPI
 
Regularized Compression of A Noisy Blurred Image
Regularized Compression of A Noisy Blurred Image Regularized Compression of A Noisy Blurred Image
Regularized Compression of A Noisy Blurred Image ijcsa
 
HOME ASSIGNMENT (0).pptx
HOME ASSIGNMENT (0).pptxHOME ASSIGNMENT (0).pptx
HOME ASSIGNMENT (0).pptxSayedulHassan1
 
Propagation of Error Bounds due to Active Subspace Reduction
Propagation of Error Bounds due to Active Subspace ReductionPropagation of Error Bounds due to Active Subspace Reduction
Propagation of Error Bounds due to Active Subspace ReductionMohammad
 
Amelioration of Modeling and Solving the Weighted Constraint Satisfaction Pro...
Amelioration of Modeling and Solving the Weighted Constraint Satisfaction Pro...Amelioration of Modeling and Solving the Weighted Constraint Satisfaction Pro...
Amelioration of Modeling and Solving the Weighted Constraint Satisfaction Pro...IJCSIS Research Publications
 
CHN and Swap Heuristic to Solve the Maximum Independent Set Problem
CHN and Swap Heuristic to Solve the Maximum Independent Set ProblemCHN and Swap Heuristic to Solve the Maximum Independent Set Problem
CHN and Swap Heuristic to Solve the Maximum Independent Set ProblemIJECEIAES
 
Nonlinear Algebraic Systems with Three Unknown Variables
Nonlinear Algebraic Systems with Three Unknown VariablesNonlinear Algebraic Systems with Three Unknown Variables
Nonlinear Algebraic Systems with Three Unknown VariablesIJRES Journal
 
Sparse data formats and efficient numerical methods for uncertainties in nume...
Sparse data formats and efficient numerical methods for uncertainties in nume...Sparse data formats and efficient numerical methods for uncertainties in nume...
Sparse data formats and efficient numerical methods for uncertainties in nume...Alexander Litvinenko
 

Similar to A Condensation-Projection Method For The Generalized Eigenvalue Problem (20)

Quantum algorithm for solving linear systems of equations
 Quantum algorithm for solving linear systems of equations Quantum algorithm for solving linear systems of equations
Quantum algorithm for solving linear systems of equations
 
Exact Matrix Completion via Convex Optimization Slide (PPT)
Exact Matrix Completion via Convex Optimization Slide (PPT)Exact Matrix Completion via Convex Optimization Slide (PPT)
Exact Matrix Completion via Convex Optimization Slide (PPT)
 
Application of the Monte-Carlo Method to Nonlinear Stochastic Optimization wi...
Application of the Monte-Carlo Method to Nonlinear Stochastic Optimization wi...Application of the Monte-Carlo Method to Nonlinear Stochastic Optimization wi...
Application of the Monte-Carlo Method to Nonlinear Stochastic Optimization wi...
 
Approximation in Stochastic Integer Programming
Approximation in Stochastic Integer ProgrammingApproximation in Stochastic Integer Programming
Approximation in Stochastic Integer Programming
 
HOME ASSIGNMENT omar ali.pptx
HOME ASSIGNMENT omar ali.pptxHOME ASSIGNMENT omar ali.pptx
HOME ASSIGNMENT omar ali.pptx
 
Illustration Clamor Echelon Evaluation via Prime Piece Psychotherapy
Illustration Clamor Echelon Evaluation via Prime Piece PsychotherapyIllustration Clamor Echelon Evaluation via Prime Piece Psychotherapy
Illustration Clamor Echelon Evaluation via Prime Piece Psychotherapy
 
The Sample Average Approximation Method for Stochastic Programs with Integer ...
The Sample Average Approximation Method for Stochastic Programs with Integer ...The Sample Average Approximation Method for Stochastic Programs with Integer ...
The Sample Average Approximation Method for Stochastic Programs with Integer ...
 
Regularized Compression of A Noisy Blurred Image
Regularized Compression of A Noisy Blurred Image Regularized Compression of A Noisy Blurred Image
Regularized Compression of A Noisy Blurred Image
 
HOME ASSIGNMENT (0).pptx
HOME ASSIGNMENT (0).pptxHOME ASSIGNMENT (0).pptx
HOME ASSIGNMENT (0).pptx
 
Propagation of Error Bounds due to Active Subspace Reduction
Propagation of Error Bounds due to Active Subspace ReductionPropagation of Error Bounds due to Active Subspace Reduction
Propagation of Error Bounds due to Active Subspace Reduction
 
project final
project finalproject final
project final
 
D034017022
D034017022D034017022
D034017022
 
Modeling the dynamics of molecular concentration during the diffusion procedure
Modeling the dynamics of molecular concentration during the  diffusion procedureModeling the dynamics of molecular concentration during the  diffusion procedure
Modeling the dynamics of molecular concentration during the diffusion procedure
 
Amelioration of Modeling and Solving the Weighted Constraint Satisfaction Pro...
Amelioration of Modeling and Solving the Weighted Constraint Satisfaction Pro...Amelioration of Modeling and Solving the Weighted Constraint Satisfaction Pro...
Amelioration of Modeling and Solving the Weighted Constraint Satisfaction Pro...
 
overviewPCA
overviewPCAoverviewPCA
overviewPCA
 
CHN and Swap Heuristic to Solve the Maximum Independent Set Problem
CHN and Swap Heuristic to Solve the Maximum Independent Set ProblemCHN and Swap Heuristic to Solve the Maximum Independent Set Problem
CHN and Swap Heuristic to Solve the Maximum Independent Set Problem
 
01.02 linear equations
01.02 linear equations01.02 linear equations
01.02 linear equations
 
Nonlinear Algebraic Systems with Three Unknown Variables
Nonlinear Algebraic Systems with Three Unknown VariablesNonlinear Algebraic Systems with Three Unknown Variables
Nonlinear Algebraic Systems with Three Unknown Variables
 
1641
16411641
1641
 
Sparse data formats and efficient numerical methods for uncertainties in nume...
Sparse data formats and efficient numerical methods for uncertainties in nume...Sparse data formats and efficient numerical methods for uncertainties in nume...
Sparse data formats and efficient numerical methods for uncertainties in nume...
 

More from Scott Donald

Humorous Eulogy - How To Create A Humorous Eulogy
Humorous Eulogy - How To Create A Humorous EulogyHumorous Eulogy - How To Create A Humorous Eulogy
Humorous Eulogy - How To Create A Humorous EulogyScott Donald
 
Literacy Worksheets, Teaching Activities, Teachi
Literacy Worksheets, Teaching Activities, TeachiLiteracy Worksheets, Teaching Activities, Teachi
Literacy Worksheets, Teaching Activities, TeachiScott Donald
 
Cause And Effect Essay Conclusion Telegraph
Cause And Effect Essay Conclusion TelegraphCause And Effect Essay Conclusion Telegraph
Cause And Effect Essay Conclusion TelegraphScott Donald
 
Argumentative Introduction Example. Argu
Argumentative Introduction Example. ArguArgumentative Introduction Example. Argu
Argumentative Introduction Example. ArguScott Donald
 
College Sample Scholarship Essays Master Of Template Document
College Sample Scholarship Essays Master Of Template DocumentCollege Sample Scholarship Essays Master Of Template Document
College Sample Scholarship Essays Master Of Template DocumentScott Donald
 
Sample Informative Essay Outline
Sample Informative Essay OutlineSample Informative Essay Outline
Sample Informative Essay OutlineScott Donald
 
Causes And Effect Essay Example Plosbasmi5
Causes And Effect Essay Example Plosbasmi5Causes And Effect Essay Example Plosbasmi5
Causes And Effect Essay Example Plosbasmi5Scott Donald
 
016 Short Essay Grading Rubrics Gcisdk12Webfc2Com Rubric For L Example
016 Short Essay Grading Rubrics Gcisdk12Webfc2Com Rubric For L Example016 Short Essay Grading Rubrics Gcisdk12Webfc2Com Rubric For L Example
016 Short Essay Grading Rubrics Gcisdk12Webfc2Com Rubric For L ExampleScott Donald
 
Pharmacology Essay Pharmacology Pharmace
Pharmacology Essay Pharmacology PharmacePharmacology Essay Pharmacology Pharmace
Pharmacology Essay Pharmacology PharmaceScott Donald
 
Tips To Write An Essay RInfograp
Tips To Write An Essay RInfograpTips To Write An Essay RInfograp
Tips To Write An Essay RInfograpScott Donald
 
Foolscap Paper Pdf - SusanropConner
Foolscap Paper Pdf - SusanropConnerFoolscap Paper Pdf - SusanropConner
Foolscap Paper Pdf - SusanropConnerScott Donald
 
Jungle Safari Writing Paper - 3 Styles - Black And White
Jungle Safari Writing Paper - 3 Styles - Black And WhiteJungle Safari Writing Paper - 3 Styles - Black And White
Jungle Safari Writing Paper - 3 Styles - Black And WhiteScott Donald
 
8 Best Smart Pens And Tablets WeVe Tried So F
8 Best Smart Pens And Tablets WeVe Tried So F8 Best Smart Pens And Tablets WeVe Tried So F
8 Best Smart Pens And Tablets WeVe Tried So FScott Donald
 
High Quality Writing Paper. Write My Paper. 2019-02-07
High Quality Writing Paper. Write My Paper. 2019-02-07High Quality Writing Paper. Write My Paper. 2019-02-07
High Quality Writing Paper. Write My Paper. 2019-02-07Scott Donald
 
8 Easy Ways To Motivate Yourself To Write Papers In College Sh
8 Easy Ways To Motivate Yourself To Write Papers In College  Sh8 Easy Ways To Motivate Yourself To Write Papers In College  Sh
8 Easy Ways To Motivate Yourself To Write Papers In College ShScott Donald
 
Educational Autobiography Essay
Educational Autobiography EssayEducational Autobiography Essay
Educational Autobiography EssayScott Donald
 
Writing Paper Templates Professional Word Templat
Writing Paper Templates  Professional Word TemplatWriting Paper Templates  Professional Word Templat
Writing Paper Templates Professional Word TemplatScott Donald
 
IELTS Writing Tips Practice IELTS Tips Www.Eagetutor.Com ...
IELTS Writing Tips Practice IELTS Tips Www.Eagetutor.Com ...IELTS Writing Tips Practice IELTS Tips Www.Eagetutor.Com ...
IELTS Writing Tips Practice IELTS Tips Www.Eagetutor.Com ...Scott Donald
 
Descriptive Essay Examples College
Descriptive Essay Examples CollegeDescriptive Essay Examples College
Descriptive Essay Examples CollegeScott Donald
 
Poverty Essay 3 Poverty Pover
Poverty Essay 3  Poverty  PoverPoverty Essay 3  Poverty  Pover
Poverty Essay 3 Poverty PoverScott Donald
 

More from Scott Donald (20)

Humorous Eulogy - How To Create A Humorous Eulogy
Humorous Eulogy - How To Create A Humorous EulogyHumorous Eulogy - How To Create A Humorous Eulogy
Humorous Eulogy - How To Create A Humorous Eulogy
 
Literacy Worksheets, Teaching Activities, Teachi
Literacy Worksheets, Teaching Activities, TeachiLiteracy Worksheets, Teaching Activities, Teachi
Literacy Worksheets, Teaching Activities, Teachi
 
Cause And Effect Essay Conclusion Telegraph
Cause And Effect Essay Conclusion TelegraphCause And Effect Essay Conclusion Telegraph
Cause And Effect Essay Conclusion Telegraph
 
Argumentative Introduction Example. Argu
Argumentative Introduction Example. ArguArgumentative Introduction Example. Argu
Argumentative Introduction Example. Argu
 
College Sample Scholarship Essays Master Of Template Document
College Sample Scholarship Essays Master Of Template DocumentCollege Sample Scholarship Essays Master Of Template Document
College Sample Scholarship Essays Master Of Template Document
 
Sample Informative Essay Outline
Sample Informative Essay OutlineSample Informative Essay Outline
Sample Informative Essay Outline
 
Causes And Effect Essay Example Plosbasmi5
Causes And Effect Essay Example Plosbasmi5Causes And Effect Essay Example Plosbasmi5
Causes And Effect Essay Example Plosbasmi5
 
016 Short Essay Grading Rubrics Gcisdk12Webfc2Com Rubric For L Example
016 Short Essay Grading Rubrics Gcisdk12Webfc2Com Rubric For L Example016 Short Essay Grading Rubrics Gcisdk12Webfc2Com Rubric For L Example
016 Short Essay Grading Rubrics Gcisdk12Webfc2Com Rubric For L Example
 
Pharmacology Essay Pharmacology Pharmace
Pharmacology Essay Pharmacology PharmacePharmacology Essay Pharmacology Pharmace
Pharmacology Essay Pharmacology Pharmace
 
Tips To Write An Essay RInfograp
Tips To Write An Essay RInfograpTips To Write An Essay RInfograp
Tips To Write An Essay RInfograp
 
Foolscap Paper Pdf - SusanropConner
Foolscap Paper Pdf - SusanropConnerFoolscap Paper Pdf - SusanropConner
Foolscap Paper Pdf - SusanropConner
 
Jungle Safari Writing Paper - 3 Styles - Black And White
Jungle Safari Writing Paper - 3 Styles - Black And WhiteJungle Safari Writing Paper - 3 Styles - Black And White
Jungle Safari Writing Paper - 3 Styles - Black And White
 
8 Best Smart Pens And Tablets WeVe Tried So F
8 Best Smart Pens And Tablets WeVe Tried So F8 Best Smart Pens And Tablets WeVe Tried So F
8 Best Smart Pens And Tablets WeVe Tried So F
 
High Quality Writing Paper. Write My Paper. 2019-02-07
High Quality Writing Paper. Write My Paper. 2019-02-07High Quality Writing Paper. Write My Paper. 2019-02-07
High Quality Writing Paper. Write My Paper. 2019-02-07
 
8 Easy Ways To Motivate Yourself To Write Papers In College Sh
8 Easy Ways To Motivate Yourself To Write Papers In College  Sh8 Easy Ways To Motivate Yourself To Write Papers In College  Sh
8 Easy Ways To Motivate Yourself To Write Papers In College Sh
 
Educational Autobiography Essay
Educational Autobiography EssayEducational Autobiography Essay
Educational Autobiography Essay
 
Writing Paper Templates Professional Word Templat
Writing Paper Templates  Professional Word TemplatWriting Paper Templates  Professional Word Templat
Writing Paper Templates Professional Word Templat
 
IELTS Writing Tips Practice IELTS Tips Www.Eagetutor.Com ...
IELTS Writing Tips Practice IELTS Tips Www.Eagetutor.Com ...IELTS Writing Tips Practice IELTS Tips Www.Eagetutor.Com ...
IELTS Writing Tips Practice IELTS Tips Www.Eagetutor.Com ...
 
Descriptive Essay Examples College
Descriptive Essay Examples CollegeDescriptive Essay Examples College
Descriptive Essay Examples College
 
Poverty Essay 3 Poverty Pover
Poverty Essay 3  Poverty  PoverPoverty Essay 3  Poverty  Pover
Poverty Essay 3 Poverty Pover
 

Recently uploaded

call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
CELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptxCELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptxJiesonDelaCerna
 
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...JhezDiaz1
 
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdfFraming an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdfUjwalaBharambe
 
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfEnzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfSumit Tiwari
 
Hierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementHierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementmkooblal
 
How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17Celine George
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsanshu789521
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxSayali Powar
 
Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Celine George
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon AUnboundStockton
 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxOH TEIK BIN
 
Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for BeginnersSabitha Banu
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxNirmalaLoungPoorunde1
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentInMediaRes1
 

Recently uploaded (20)

call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
 
ESSENTIAL of (CS/IT/IS) class 06 (database)
ESSENTIAL of (CS/IT/IS) class 06 (database)ESSENTIAL of (CS/IT/IS) class 06 (database)
ESSENTIAL of (CS/IT/IS) class 06 (database)
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
9953330565 Low Rate Call Girls In Rohini Delhi NCR
9953330565 Low Rate Call Girls In Rohini  Delhi NCR9953330565 Low Rate Call Girls In Rohini  Delhi NCR
9953330565 Low Rate Call Girls In Rohini Delhi NCR
 
CELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptxCELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptx
 
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
 
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdfFraming an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
 
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfEnzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
 
Hierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementHierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of management
 
How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha elections
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 
Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptx
 
Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for Beginners
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptx
 
OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...
 
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media Component
 

A Condensation-Projection Method For The Generalized Eigenvalue Problem

  • 1. A Condensation-Projection Method for the Generalized Eigenvalue Problem Thomas Hitziger, Wolfgang Mackens and Heinrich Vossy Abstract Since the early 60th condensation methods have been known as a mean to economize the computation of selected groups of eigenvalues of large eigen- value problems. These methods choose from the degrees of freedom of the problem a small number of (master-) variables which appear to be repre- sentative. In a Gaussian elimination type procedure the rest of the variables (the slaves) is eliminated, leaving a similar but much smaller problem for the master variables only. Choosing the masters to contain all variables from the interface of a partition- ing of the problem into substructures leads to data structures and formulae which are well suited to be implemented on distributed memory MIMD par- allel computers. In this paper we develop such a condensation algorithm which we endow with the additional features of a projective re nement of the results, which greatly improve their quality, and rigorous error bounds for the re ned values. The algorithm together with a short statement of the solved problem is contained in Section 6. On the way to this algorithm, we review some recent development in condensation methods. 1 Introduction In the numerical analysis of structures as well as in any other scienti c issue to be tackled by discretization of originally continuous equations (using nite elements or nite di erences, e.g., cf. [1], [2]) very often the This paper appeared in H. Power and C.A. Brebbia (eds.), High Performance Computing 1, pp. 239 { 282, Elsevier Applied Science, London 1995 yHamburg University of Technology, Section of Mathematics, Kasernenstrasse 12, D{20173 Hamburg, FR Germany, fmackens, vossg @tu-harburg.d400.de 1
  • 2. unpleasant situation occurs that a sucient accurate representation of the desired data in the discrete model requires the use of prohibitively many degrees of freedom, such that a standard treatment of the resulting large set of discrete equations is far too expensive. For such situations several reduction techniques have been developed in di erent disciplines to incorporate speci c parts of the (global) good approximation behaviour of large size models into much smaller systems derived from the larger ones (cf. the forthcoming survey paper [17] of Ahmed K. Noor in Applied Mechanical Surveys on reduction methods). The aim of these reduction methods is not just to construct a model of smaller size since this could be easily done by using the discretization method with a large discretization stepsize parameter. Such a sort of reduction of size would have to be paid with an overall loss of accuracy. The very challenge of reduction approaches is to reduce the size of the discrete set while its approximation quality is kept (to a certain degree at least) with respect to some relatively small subset of the solution data, say for speci c functional values of the solution. This can in fact often be done by taking into consideration (parts of) the ne grained modelduring the design of the coarse grained reduction where even partial solutions of portions of the equations to be reduced may be invoked. In the study of structural vibrations the large discrete sets of equa- tions are large algebraic eigenvalue problems of the form Kx= Mx; (1) where the sti ness matrix K 2 IR(n;n) and the mass matrix M 2 IR(n;n) are real symmetric and positive de nite, x is the vector of modal dis- placements and is the square of the natural frequencies. To solve the eigenproblem (1) one has to determinea set of n linearly independent eigenvectors x1 ;:::;xn and corresponding eigenvalues 0 1 2 n; with Kxi = iMxi ; i = 1;:::;n: Even if the problem is well behaved its full solution is normally out of the scope of todays computing facilities. Dimensions n between 104 and 106 are not unrealistic. 2
  • 3. Fortunately, engineers are usually not interested in all of the eigen- values and corresponding modal vectors. They want to know some few (hundreds) of eigenvalues from the lower part of the spectrum, the eigen- values below or near a prespeci ed value or those eigenvaluesfrom a given interval. Therefore, several (iterative) methods have been developed to approximate just such eigenpairs. Among these are subspace iteration and Krylov subspace methods, cf. Saad [24] or Bathe [1]. The projection of the original problem to a low dimensional subspace | as is a partial step within the subspace iteration method | can be viewed as a reduction method. Given linear independent approximations ^ x1 ;:::;^ xm to some few (m n) of the eigenvectors of problem (1) (within the above cited methods these approximations are constructed and corrected in the course of the iteration ) and putting X := ^ x1 ;:::^ xm this system is replaced by the n-dimensional projected eigenvalue problem KXy = MXy; (2) with the projected sti ness and mass matrices KX := XTKX and MX := XTMX: (3) If the vectors ^ xi, i = 1;:::;m, are moderate approximations of the eigenvectorsxi, i = 1;:::;m, then the eigenvaluesi of (2) are reasonable approximations of 1;:::;m (cf. [2], [19], [24]). Approximations to the corresponding eigenvectors are found in the form ~ xi := Xyi with yi denoting the eigenvectors of problem (2). If X containes only one column, i.e. there is an approximation to one eigenvector x1 only, then (2) is a scalar equation, which can be elementarily solved for the eigenvalue approximation 1 := R(x1 ) := (x1)t Kx1 (x1)t Mx1 ; (4) called the Rayleigh quotient at x1. In order to apply reduction by projection one has to have preinforma- tion in form of the eigenvector approximations ^ x1 ;:::; ^ xm at hand (these can result from one of the above cited iterative methods or they may be 3
  • 4. eigenmodes of an already solved similar problem, e.g.). Actually, the i-th column of X does not have to be an approximation of an eigenvector of (1). One easily sees that replacing X in (3) by Z := XN; with N 2 IR(m;m) regular will not change the resulting eigenvalue approximations i and will pro- duce the same eigenvector approximations Zyi. To obtain good approximations to the rst eigenvalues by the pro- jection approach it suces that spanfXg approximates reasonably well to the span of the eigenvectors corresponding to the desired portion of the spectrum. Thus it makes sense to see the projection approach (2), (3) as a subspace projection method. Notice that if spanfXg contains an exact eigenvector xk = Xy;y 2 IRm and k is the corresponding eigenvalue, then the projection (3) will nd k as (at least) one of the eigenvalues i of the projected problemand the eigenvector will be rediscovered as Xyi with a corresponding eigen- vector of the projected system (at least as a memberof the corresponding eigenspace). Condensation methods for large eigenvalue problems are sub- space projection methods together with a speci c Gaussian elimination avoured approach to construct reasonable approximations of projection spaces spanfXg from scratch. To this end some (relatively few) components of the vector x are selected to be masters and to form the master part xm 2 IRm of x. The aim is then to construct an eigenproblem K0xm = M0xm (5) for these master-vectors and the eigenparameter such that the eigen- vectors of (5) are good approximations to the masterparts of selected eigenvectors of (1) with similarly good approximation behaviour for the accompanying eigenvalues. Notice that the masterparts of eigenvectors do not determine the eigenvectors uniquely, in general. If, e.g., the masterpart consists of a single component then a nonzero mastervector can be the masterpart of every eigenvector with nonvanishing masterpart. So, if we really nd a condensed problem (5) whose eigenpairs are the masterparts of eigenvec- tors of (1) together with their eigenvalues, can we reconstruct the original eigenvectors from these data? 4
  • 5. Actually we can, if one additional condition is ful lled. To see this we decompose eqn. (1) into block form Kmm Kms Ksm Kss # ( xm xs ) = Mmm Mms Msm Mss # ( xm xs ) (6) where xm 2 IRm containes the mastervariables, xs 2 IRs collects the remaining variables, the slaves, and where the permutation of x leading to the new order xm;xs of the variables has been applied likewise to the rows as to the columns of K and M. Then these matrices are still symmetric and positive de nite in their permuted form. Now we see that if the master part ^ xm is given together with the corresponding eigenvalue ^ then the slavepart ^ xs can be computed from the second row of (6) through the master-slave-extension ^ xs = P (^ )^ xm := ?(Kss ? ^ Mss)?1 (Ksm ? ^ Msm)^ xm (7) as long as the matrix (Kss ? ^ Mss) is regular. (8) Noticing that Kssxs = Mssxs (9) is the eigenvalue problem corresponding to the vibration of the slave- portion of the system with the master degrees of freedom restricted to be zero, the regularity-condition (8) can be expressed as the condition that ^ is not in the spectrum of the slave eigenproblem (9) or the slave- spectrum for short. While the knowledge of the eigenvalue corresponding to the master- part of an eigenvector allows the reconstruction of that vector through master-slave-extension (7), this formula is at the same time the key equa- tion to reduce the original full problem to a condensed form (5): If the eigenvalue ^ (6= slave eigenvalue) of the original system was known, then all corresponding eigenvectors with nonvanishing masterpart would be linear combinations of the columns of Im P (^ ) # = Im ?(Kss ? ^ Mss)?1(Ksm ? ^ Msm) # ; where the rst part of the j-th column is the j-th unit master-vector and the second part is its master-slave extension. Hence subspace projec- tion of the original problem onto the column space of this matrix would 5
  • 6. lead to a small problem for the masterparts xm which would contain ^ as a member of its spectrum and whose corresponding eigenvectors would reconstruct to eigenvectors of the original system by master-slave- extension. Thus this reduction would retain the approximation quality of the large system with respect to the eigeninformation connected with ^ . Of course an exact eigenvalue ^ of the original systemis not available. Hence one has to use suitable substitutes: The choice ^ = 0 leads to what is known as static condensation. In this case the master-slave-extension calculates the slaves to satisfy the static equilibriumequation Kssxs+Ksmxm = 0 ignoring the (unknown) inertia forces modelled by the -depending part of the second row of (6). This leads to a condensed system (5) with condensed matrices K0 := Kmm ?KmsK?1 ss Ksm; M0 := Mmm ?KmsK?1 ss Msm ?MmsK?1 ss Ksm +KmsK?1 ss MssK?1 ss Ksm; (10) where the projected matrix K0 is at the same time the matrix that results from Gaussian elimination of the slaves and is known as Schur complement of Kss in K (cf. Cottle [6]). The e ect of truncating the mass matrices here is certainly less severe for the lower frequencies and, in fact, the lower eigenvalues and (masterparts of the) corresponding eigenmodes are approximated well by this condensation. Choosing ^ = for some xed 0 is known as dynamic conden- sation. The master-slave-extension computes the slave values as if they were modes of the dynamicresponse xs exp(pit) to dynamicallydriving masters of the form xm exp(pit). It is expected that dynamic conden- sation will produce good approximations to eigenmodes with eigenvalues close to . Remember that will be reproduced as an eigenvalue of the condensed problem if it is an eigenvalue of the original problem (with nonvanishing masterpart of at least one accompanying eigenvector). For- mally, dynamic condensation at can be interpreted as static condensa- tion of the shifted problem (K ?M)x= Mx. With this knowledge the formulae for K0 and M0 are most comfortably derived from (10). Finally, letting ^ = we nd exact condensation. Dynamical con- densation is done (implicitly) at the (still unknown) solution . Thereby master-slave-extension will be correct at the unknown solution (if this is not a slave eigenvalue at the same time). Thus the idea is nearby that 6
  • 7. the exactly condensed eigenvalue problem, for which one computes the expression T ()xm = 0; (11) with T () = ?Kmm+Mmm+(Kms?Mms)(Kss?Mss)?1 (Ksm?Msm) should be equivalent to the original problem.The following identityholds for all not in the slave-spectrum: Im P T O Is ! Kmm ?Mmm Kms ?Mms Ksm ?Msm Kss ?Mss ! Im O P Is ! = T () O O Kss ?Mss ! (12) with P := P () denoting the prolongation matrix from (7). Thus an eigenvalue of the original equation which is not a slave eigenvalue is an eigenvalue of the exactly condensed problem and vice versa (since T () is only de ned for -values not in the slave-spectrum). Notice that the price to be paid for this nice property of the exactly condensed problem is its nonlinearity. The static condensation and the dynamic condensation are approximations of the exact condensation in that they are linearizations of (11) with respect to at = 0 or = , respectively. From this we rediscover that dynamic condensation at an eigenvalue will reproduce this value as a eigenvalue of the condensed problem. It is interesting to note, that the exact condensation can be derived as well by simple insertion of the master-slave-extension into the rst (master) row of (6). Conforming to this view the identity (12) can be in- terpreted as block LDLt-decomposition of the dynamical sti ness matrix K ?M. The focus of the present paper is to derive an ecient algorithm for the implementation of the static condensation approach endowed with some additional features to improve the eigenvalue approximations and to provide rigorous computable bounds of their errors. The latter fea- tures will be realized solely through the simple algorithmic ingredients projection (2), (3), master-slave-extension (7) and Rayleigh quotient (4). On the way to this result given in Section 6 the preliminary sections are devoted to the following issues. 7
  • 8. In Section 2 we report on strict error bounds for the eigenvalue ap- proximations from static condensation (5), (10). These can be derived with the aid of a generalization of the Rayleigh quotient (4) for nonlin- ear eigenvalue problems, the Rayleigh functional ([28], [27], [21]). Section 3 reports on master-slave-partitioning of the degrees of free- dom of a problem. We review several approaches to provide a master- slave-partitioning giving good approximations of the lower eigenvalues with a small number of masters. Most of these try to maximize the mini- mal slave eigenvalue ! by an adequate partitioning. This is in agreement with the error bounds of Section 2 which become smaller with increasing !. Our primary interest in Section 3 is to comment on the well known fact, that choosing the master variables from interfaces of substructures will split the problem into independent subproblems (for each substruc- ture) and into a coupling interface-problem. This can be used in parallel computing and we shall brie y discuss the out ow of such a substructur- ing on the organization of parallel computations. Section 4 is devoted to increase the approximation quality of the eigenvalue estimates from condensation by means of master-slave- extension together with Rayleigh quotient improvement. In Section 5 we review strict error bounds of Krylov-Bogoliubov and of Kato-Temple type. Section 6 contains the nal algorithm.Its application is demonstrated in Section 7 with some numerical examples. 2 A priori error bounds for static conden- sation In this section we consider the exactly condensed eigenvalue problem (11). We recall that this problem T()xm = 0; (13) with T() = ?Kmm+Mmm+(Kms?Mms)(Kss?Mss)?1 (Ksm?Msm) 8
  • 9. is a nonlinear matrix eigenvalue problem of the same dimension m as the statically condensed (linear) eigenvalue problem K0u = M0u: (14) Remember that (13) is equivalent to the original linear problem (1), if (1) and the slave eigenvalue problem Kss = !Mss (15) do not have eigenvalues in common. It is well known, that the eigenvalues 1 2 ::: m of the statically condensed problem (14) satisfy the minmax principle of Ritz and Poincar e j = min dimV =j max u2Vnf0g uTK0u uTM0u (16) where V denotes a subspace of IRm (cf. [24], e.g.). Actually, the eigenvalues in the lower part of the spectrum of the nonlinear problem (13) can also be characterized by a minmax principle with a Rayleigh functional which is the generalization of the Rayleigh quotient for nonlinear eigenvalue problems. In IRm the values of this functional can be compared with values of the Rayleigh quotient for the statically condensed system. This is the basis to obtain error bounds for eigenvalues of the latter problem using similar techniques as in the proof of comparison theorems for linear eigenvalue problems. To be more speci c we denote by 2 IR(s;s) and := diag[!i] 2 IR(s;s) the modal matrix and the spectral matrix of the slave eigenvalue problem (15), respectively, where is normalized such that tMss = I and tKss = : (17) Then T() can be rewritten (cf. Leung [14], Petersmann [20]) as T() = ?K0 + M0 + SD()St; (18) where K0 := Kmm ?KmsK?1 ss Ksm; M0 := Mmm ?KmsK?1 ss Msm ?MmsK?1 ss Ksm +KmsK?1 ss MssK?1 ss Ksm; S := Mms?Kms ?1; D() := diag h 2=(!i ?) i : 9
  • 10. Let ! := min i=1;:::;s !i denote the smallest eigenvalue of the slave eigenvalue problem (15) and let J be the open interval J := (0; !). For any xedvectoru 2 IRmnf0gwe consider thereal valued function f(;u) : J ! IR ; 7! f(;u) := utT()u; (19) i.e. f(;u) = ?utK0u + utM0u + s X i=1 2 !i ? t iMsmu ? 1 !i t iKsmu 2 : (20) Since M0 is positive de nite the mapping 7! f(;u) increases monotonically on J and the nonlinear equation f(;u) = 0 has at most one solution in J. Therefore f(;u) = 0 implicitly de nes a functional p : IRm D(p) ! J ; f(p(u);u) = 0; where the domain of de nition is given by D(p) := fu 2 IRm nf0g : f(;u) = 0 is solvable in Jg: For linear eigenvalue problems T()u := (K ?M)u = 0 the func- tional which is de ned by the equation utT()u = 0 is nothing else but the Rayleigh quotient. We therefore call p the Rayleigh functional of the nonlinear eigenvalue problem (13). Notice that in (20) the function f(;u) consists of the analogous function f0(;u) := ?utK0u + utM0u for the statically condensed problem plus an additive term, which is always nonnegative for from the interval J. Thus f0(;u) f(;u) for all in J. Hence the zero p(u) of f(;u) (if it exists) is less than or equal to the zero utK0u=utM0u of f0(;u) which is the Rayleigh quotient at u for the statically condensed eigenproblem. In accordance with the linear theory for every eigenvector u of (13) the Rayleigh functional p(u) is the corresponding eigenvalue and the eigenvectors are the stationary vectors of p. Moreover and more impor- tant, it is possible to characterize the lower part of the eigenvalues of (13) as minmax values of p. In view of the above comparison of p(u) 10
In view of the above comparison of $p(u)$ and the Rayleigh quotient, one can infer from such a result that in $J$ the eigenvalues of the statically condensed problem are upper bounds of the eigenvalues of the exactly condensed problem, which in turn are the eigenvalues of the original system (see, in addition, the remark after Theorem 2).

The precise statement of the minmax characterization is as follows and is a direct consequence of Theorem 2.1 and Theorem 2.9 in [28]:

Theorem 1
Let $\lambda_1 \le \lambda_2 \le \ldots \le \lambda_n$ be the eigenvalues of the linear eigenvalue problem (1) and choose $k \in \mathbb{N}$ such that $\lambda_k < \omega \le \lambda_{k+1}$. Let $H_j$ be the set of all $j$-dimensional subspaces of $\mathbb{R}^m$ and denote by

$$\tilde H_j := \{V \in H_j : V\setminus\{0\} \subset D(p)\}$$

the set of $j$-dimensional subspaces of $\mathbb{R}^m$ which are contained in the domain of definition of the Rayleigh functional $p$. Then $\tilde H_k \ne \emptyset$, and for $j = 1,\ldots,k$ the following minmax characterization holds:

$$\lambda_j = \min_{V \in \tilde H_j}\ \max_{u \in V\setminus\{0\}} p(u). \qquad (21)$$

The next theorem from [27] derives error estimates for the eigenvalues of the statically condensed system. The left-hand side inequality there is a direct consequence of the previously noted comparison of the functions $f_0(\cdot,u)$ and $f(\cdot,u)$. For the second inequality one has to make use of the special form $SD(\lambda)S^t$ of the nonlinearity of $T(\lambda)$ in (18).

Theorem 2
Let $\lambda_1 \le \lambda_2 \le \ldots \le \lambda_k$ be the eigenvalues of problem (1) contained in $J$ and let $\mu_1 \le \mu_2 \le \ldots \le \mu_m$ be the eigenvalues of the reduced problem (14). Then for $j = 1,\ldots,k$ one has

$$0 \le \frac{\mu_j - \lambda_j}{\mu_j} \le \frac{\lambda_j}{\omega - \lambda_j}. \qquad (22)$$

The error bound in (22) demonstrates that we can expect good approximations of the eigenvalues $\lambda_j$ by the eigenvalues $\mu_j$ of the condensed problem if the distance of $\lambda_j$ to the spectrum of the slave problem is sufficiently large.
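To make Theorem 2 concrete, the following sketch (again our illustration; the random SPD test pencil is an assumption) computes the statically condensed problem for a small pencil and verifies the two-sided bound (22) for all eigenvalues below $\omega$:

    # Numerical check of the a priori bound (22) on a random SPD pencil.
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(1)
    n, m = 30, 8
    A = rng.standard_normal((n, n)); K = A @ A.T + n * np.eye(n)
    B = rng.standard_normal((n, n)); M = B @ B.T + n * np.eye(n)
    Kmm, Kms, Ksm, Kss = K[:m, :m], K[:m, m:], K[m:, :m], K[m:, m:]
    Mmm, Mms, Msm, Mss = M[:m, :m], M[:m, m:], M[m:, :m], M[m:, m:]

    lam = eigh(K, M, eigvals_only=True)            # eigenvalues of (1)
    omega = eigh(Kss, Mss, eigvals_only=True)[0]   # minimal slave eigenvalue

    X = np.linalg.solve(Kss, Ksm)
    K0 = Kmm - Kms @ X
    Y = Mms @ X
    M0 = Mmm - (Y + Y.T - X.T @ Mss @ X)           # cf. (25)-(29) below
    mu = eigh(K0, M0, eigvals_only=True)           # condensed eigenvalues

    for j in range(m):
        if lam[j] < omega:                         # bound (22) applies
            rel = (mu[j] - lam[j]) / mu[j]
            assert -1e-12 <= rel <= lam[j] / (omega - lam[j]) + 1e-10
    print("bound (22) holds for all lambda_j below omega =", omega)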
For $\lambda_j \ll \omega$ we get from (22)

$$\frac{\lambda_j}{\omega - \lambda_j} = \sum_{n=1}^{\infty}\left(\frac{\lambda_j}{\omega}\right)^n \approx \frac{\lambda_j}{\omega}.$$

This is the asymptotic error bound which was proved by Thomas [26] using a quadratic approximation of $T(\lambda)$.

It should be noted at the end of this section that the relevance of the interval $J$ has been recognized in many earlier papers on the basis of similar and different observations. $J$ has been called the interval in which the Guyan reduction process is valid. This notion seems to result from a Neumann series derivation of the statically condensed problem from the exact condensation (see [30], e.g.). Therein the inverse matrix $(K_{ss} - \hat\lambda M_{ss})^{-1}$ is approximated by the first terms of its series expansion

$$(K_{ss} - \hat\lambda M_{ss})^{-1} = K_{ss}^{-1}\sum_{k=0}^{\infty}\left(\hat\lambda M_{ss}K_{ss}^{-1}\right)^k,$$

which, indeed, is valid for $\hat\lambda \in J$ and diverges for $\hat\lambda$ above its upper boundary $\omega$.

It has additionally been stated that $J$ is of importance because within $J$ the eigenvalues of the statically condensed problem are upper bounds of their corresponding exact values. We followed these arguments in the reasoning for the left-hand side inequality of (22). One should notice, however, that this inequality is as well a consequence of two simpler facts: first, the statically condensed problem is a projection of the original problem, and second, the eigenvalues of projections are always upper bounds for the corresponding original eigenvalues (cf. [24], e.g.).

3 Choice of masters and slaves

We will now discuss ways to partition the components of the vector $x$ into complementary slave and master parts $x_s$ and $x_m$, respectively. The size of $x_m$ determines the size of the condensed eigenproblem and hence the final work to be carried out for its eigenanalysis. Actually, there are two opposite approaches to finding a partitioning with a small number of masters but good approximation properties for the lower eigenvalues.
The first class of methods chooses an increasing set of slaves which, after their successive elimination, produces a sequence of condensed problems of decreasing size. The elimination is stopped when a master problem of manageable size has been reached. Within each step of the procedure precisely one slave is chosen and eliminated from the previous system. The new slave is found by use of a heuristic that tries to retain the modelling potential of the previous problem with respect to the lower eigenpairs (Henshell and Ong [12], Shah and Raymund [25]). The criterion to determine its index $j$ is to maximize the quotient of the corresponding diagonal elements $k_{jj}/m_{jj}$. Since this quotient is the minimal (in fact the only) eigenvalue of the one-dimensional slave system, the approximation qualities as measured by the error bounds of Theorem 2 are optimized for each elimination step (though Theorem 2 was not known to the authors of [12] and [25]). It is then hoped that stepwise optimization in each elimination step will give a good final condensed system (which may be seen, by the way, to be identical to the system which arises by the simultaneous elimination of all slave variables that are chosen in the course of the algorithm). Actually, there may arise difficulties with this approach: one can easily construct (nonacademic) problems where minor changes lead to totally different partitionings, which can produce instabilities. A sketch of this stepwise elimination is given below.

In contrast to collecting slaves, the second class of methods forms an increasing sequence of sets of masters with the aim of maximally raising the minimal eigenvalue $\omega$ of the attached slave eigenvalue problem step by step (Bouhaddi and Fillod [3], [4]). Thereby, as Theorem 2 tells us, the approximation potential of the condensed problem is increased. As is known to every guitar player, the minimum of the lowest eigenfrequencies of the two parts of a vibrating string which is held down by (only) one finger is highest if the finger is put at the node of the second mode of vibration. This is (in the case of a homogeneous guitar string) at the same time the middle of the string as well as the point of largest deflection of the first mode. Bouhaddi and Fillod use a generalization of this observation (for the case of discrete and higher dimensional eigenfunctions, which may well have no node lines within their discrete set of definition) to place the next master at a point where the first mode is large and the second (and some of the further ones) are comparably small.
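The stepwise elimination of the first class of methods can be sketched as follows (a hypothetical implementation of the heuristic of [12], [25] as described above, not the authors' original code): in every step the diagonal quotient selects the next slave, which is then condensed out by one Guyan step.

    # Henshell-Ong style stepwise slave elimination (sketch).
    import numpy as np

    def henshell_ong(K, M, n_masters):
        """Reduce (K, M) to n_masters DOF, eliminating one slave per step."""
        K, M = K.copy(), M.copy()
        idx = np.arange(K.shape[0])          # remaining (global) DOF numbers
        while K.shape[0] > n_masters:
            j = int(np.argmax(np.diag(K) / np.diag(M)))   # next slave
            keep = np.delete(np.arange(K.shape[0]), j)
            x = K[keep, j] / K[j, j]         # elimination coefficients
            # statically condense DOF j: Schur complement for K and the
            # congruent transform for M (cf. (25)-(27) with a single slave)
            Kr = K[np.ix_(keep, keep)] - np.outer(x, K[j, keep])
            Mr = (M[np.ix_(keep, keep)]
                  - np.outer(x, M[j, keep]) - np.outer(M[keep, j], x)
                  + M[j, j] * np.outer(x, x))
            K, M, idx = Kr, Mr, idx[keep]
        return K, M, idx                     # condensed pencil + master DOFs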
If within a slave problem with spectrum $\omega_1 \le \omega_2 \le \ldots \le \omega_s$ a number of $k \le s$ additional degrees of freedom are chosen to be masters (and hence are clamped down), the theoretically best result would be to increase $\omega$ from $\omega_1$ to $\omega_{k+1}$. Though the method of Bouhaddi and Fillod seems to work quite well, it certainly will not reach this optimum. This follows easily from the guitar string example again: having placed one finger at the middle of the string, the optimal position of two fingers (dividing the string into three equal portions) cannot be reached without first removing the first finger.

Actually, it is possible to sequentially impose conditions which gain the optimal increase of $\omega$ at every single step. Such conditions can for example be realized by clamping down the (generalized) components in the direction of the eigenvectors themselves. The necessity to compute the eigenvectors does not devalue this method of modal masters as compared to Bouhaddi's and Fillod's approach, since they need these vectors, too. More than this, they have to recompute the vectors after the introduction of a new master. Though they try to save computing time by approximately combining the new vectors from the old ones, the modal master method is better, since the clamping of a modal master does not change the following modes. Of course, this approach is not without its own difficulties: already the definition of the condensed eigenvalue problems with these masters poses some problems. We hope to report on such approaches in a subsequent paper.

For the rest of this section we shall concentrate on master-slave partitions which are defined by substructures of the problem. Such partitionings are not new in the study of condensation methods for eigenvalue problems. They have already been treated in Leung [15], Zehn [31], [32], and Bouhaddi and Fillod [3], [4], e.g., and of course substructuring is well known in static structural analysis. In the above cited paper [4] of Bouhaddi and Fillod the reported choice of masters has actually been performed for every substructure of a substructured problem.

Even though substructuring is thus well known, we shall briefly comment on its computational merits here, since we shall need the notation and the formulae later.
Suppose that the structure is decomposed into $r$ substructures and that the vibration problem is discretized (by finite elements or finite differences) in correspondence to the substructure decomposition (i.e. $k_{ij} = 0$ and $m_{ij} = 0$ whenever $i$ and $j$ denote indices of interior nodes of different substructures). We choose the degrees of freedom which are on the boundaries of the substructures and some of the interior degrees of freedom as masters and the remaining ones as slaves (cf. Section 7). If the variables are numbered in an appropriate way, the stiffness and mass matrices have the following block form:

$$K = \begin{bmatrix}
K_{mm} & K_{ms1} & K_{ms2} & \ldots & K_{msr}\\
K_{sm1} & K_{ss1} & O & \ldots & O\\
K_{sm2} & O & K_{ss2} & \ldots & O\\
\vdots & \vdots & & \ddots & \vdots\\
K_{smr} & O & O & \ldots & K_{ssr}
\end{bmatrix}, \qquad (23)$$

$$M = \begin{bmatrix}
M_{mm} & M_{ms1} & M_{ms2} & \ldots & M_{msr}\\
M_{sm1} & M_{ss1} & O & \ldots & O\\
M_{sm2} & O & M_{ss2} & \ldots & O\\
\vdots & \vdots & & \ddots & \vdots\\
M_{smr} & O & O & \ldots & M_{ssr}
\end{bmatrix}, \qquad (24)$$

where the coupling of the slave variables and the masters of the $j$-th substructure is given by the matrices $K_{smj} = K_{msj}^t$ and $M_{smj} = M_{msj}^t$, respectively. Note that only those columns of $K_{smj}$ and $M_{smj}$ which correspond to master variables of the $j$-th substructure can be different from the null vector.

From the block structure of the matrices $K$ and $M$ in (23) and (24) we obtain the reduced matrices

$$K_0 = K_{mm} - \sum_{j=1}^{r}K_{mmj}, \qquad M_0 = M_{mm} - \sum_{j=1}^{r}M_{mmj}, \qquad (25)$$

where

$$K_{mmj} := K_{msj}K_{ssj}^{-1}K_{smj} \qquad (26)$$

and

$$M_{mmj} := K_{msj}K_{ssj}^{-1}M_{smj} + M_{msj}K_{ssj}^{-1}K_{smj} - K_{msj}K_{ssj}^{-1}M_{ssj}K_{ssj}^{-1}K_{smj}. \qquad (27)$$

Obviously the matrices $K_{mmj}$ and $M_{mmj}$ can be evaluated substructurewise (and hence completely in parallel) in the following way: solve simultaneously the systems of linear equations

$$K_{ssj}X_j = K_{smj}, \qquad (28)$$
where only those columns of $K_{smj}$ have to be taken into account which belong to master variables of the $j$-th substructure, and determine

$$K_{mmj} = K_{msj}X_j, \qquad Y_j := M_{msj}X_j, \qquad M_{mmj} = Y_j + Y_j^t - X_j^tM_{ssj}X_j. \qquad (29)$$

The above formulae give an idea of how to compile the matrices $K_0$ and $M_0$ efficiently in parallel if the number of processors is equal to or less than the number $r$ of substructures. If there are many more processors, then several of them have to share the work connected with one substructure.

On the other hand, for small numbers of processors and for rather coarse discretizations, where the size of the condensed problem is approximately the size of the subproblems, one has to parallelize the solution of the condensed eigenproblem as well. As is shown by an example in Rothe and Voss [22], the total computing time can otherwise be enlarged considerably. A parallel algorithm to solve the condensed eigenproblem is given in Rothe and Voss [23].

It is worth noting that with the above substructuring the slave eigenvalue problem decouples into $r$ independent eigenvalue problems

$$K_{ssj}\phi_j = \omega M_{ssj}\phi_j, \qquad j = 1,\ldots,r. \qquad (30)$$

The minimal slave eigenvalue $\omega$ is the minimum of the minimal slave eigenvalues of the $r$ substructures,

$$\omega = \min_{j=1,\ldots,r}\omega_j.$$

Hence an enlargement of $\omega$ can again be achieved in parallel by raising the individual $\omega_j$, choosing additional masters from those substructures with small $\omega_j$ using the algorithm of Bouhaddi and Fillod, e.g. The corresponding rows and columns of the stiffness and mass matrices (23) and (24), respectively, are then transferred by simultaneous interchange of rows and columns from the substructure parts to the master rows and columns. A sketch of this substructurewise assembly is given below.
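The following compact sketch of formulas (25)-(30) is our illustration (the data layout with one dictionary per substructure is an assumption): each substructure contributes its terms to $K_0$ and $M_0$ independently of all others, and the minimal slave eigenvalue is obtained from the decoupled problems (30).

    # Substructure-wise (hence parallelizable) static condensation (sketch).
    import numpy as np
    from scipy.linalg import eigh

    def condense(Kmm, Mmm, subs):
        """subs: list of dicts with keys Kss, Mss, Ksm, Msm per substructure."""
        K0, M0, omegas = Kmm.copy(), Mmm.copy(), []
        for s in subs:                   # iterations are independent
            X = np.linalg.solve(s["Kss"], s["Ksm"])      # (28)
            Y = s["Msm"].T @ X                           # Y_j = M_msj X_j
            K0 -= s["Ksm"].T @ X                         # K_mmj, cf. (26), (29)
            M0 -= Y + Y.T - X.T @ s["Mss"] @ X           # M_mmj, cf. (27), (29)
            # smallest slave eigenvalue of (30); a full solve for brevity,
            # in practice inverse iteration would be used (cf. Section 6)
            omegas.append(eigh(s["Kss"], s["Mss"], eigvals_only=True)[0])
        return K0, M0, min(omegas)

In a distributed memory implementation each loop iteration would run on the processor holding the data of the corresponding substructure; only the small $m \times m$ contributions have to be communicated and summed.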
For the above we assumed that a substructuring of the structure is given from the beginning. For most problems of structural analysis, as well as for domain decomposition approaches in partial differential equations, it makes sense to assume this. If this is not the case, then the construction of a useful substructuring can be quite cumbersome. Several approaches have been described, for example, in Heath, Ng and Peyton [11]. It is interesting that some of these need the solution of eigenvalue problems with matrices of a structure similar to that of the given problem.

4 Projective improvement of eigenvalues

It is well known that the eigenvectors of problem (1) are the stationary points of the Rayleigh quotient

$$R(x) = \frac{x^TKx}{x^TMx}, \qquad (31)$$

and the corresponding eigenvalues are the critical values of $R(x)$ at the eigenvectors. This means that the derivative of $R$ with respect to $x$ at an eigenvector $x^i$ is zero. More precisely, from the Ritz-Poincaré characterization (16) one sees that the smallest and largest eigenvalues are the minimum and maximum of $R$, respectively, and the intermediate eigenvalues are the values of $R$ at its saddle points.

Since the graph of the Rayleigh quotient has a horizontal tangent plane at an eigenvector, it is reasonable to expect that evaluation of the quotient at a point not too far from an eigenvector leads to a reasonable approximation of the accompanying eigenvalue. In fact, an error of size $\varepsilon$ in the eigenvector approximation leads to an error of size $\varepsilon^2$ in the Rayleigh approximation of the corresponding eigenvalue.

Assume that we have found from the statically condensed problem (5), (10) a set of $m$ eigenpairs

$$(\mu_1,x_m^1),\ldots,(\mu_m,x_m^m).$$

(We shall call them the c-eigenvalues and c-eigenvectors, respectively, for short.) Then, if both $\mu_j$ and $x_m^j$ are not too bad approximations to an eigenvalue $\lambda_j$ and the master part of a corresponding eigenvector $x^j$, respectively, it may be expected that master-slave continuation (7) with these data leads to a not too bad approximation

$$x(\mu_j,x_m^j) := \begin{bmatrix} x_m^j \\ P(\mu_j)x_m^j \end{bmatrix} \qquad (32)$$
of the eigenvector $x^j$. Insertion of this approximation into the Rayleigh quotient may then be expected to deliver an acceptable approximation

$$\theta(\mu_j,x_m^j) := R(x(\mu_j,x_m^j)) = \frac{x(\mu_j,x_m^j)^tKx(\mu_j,x_m^j)}{x(\mu_j,x_m^j)^tMx(\mu_j,x_m^j)} \qquad (33)$$

to $\lambda_j$. Actually, this procedure works well, and a similar enhancement has already been suggested by Wright and Miles in [30].

Master-slave extension of $x_m^j$, using for $\hat\lambda$ in (7) the eigenvalue approximation $\mu_j$ instead of the value $\hat\lambda = 0$, increases the quality of the gained (full space) eigenvector approximation and hence the quality of the corresponding Rayleigh approximation of the eigenvalue. Knowing this, it is natural to iterate the procedure, i.e. to execute

$$\theta_j^{k+1} := \theta(\theta_j^k,x_m^j), \quad k \ge 0, \qquad \theta_j^0 := \mu_j. \qquad (34)$$

Alas, a first look at the errors of the resulting sequence of approximations is disappointing: there is no (relevant) further increase in the approximation quality after the first step.

However, the disappointment vanishes on analyzing the sequence and its limit more carefully. It then turns out that, for those pairs with c-eigenvalue below $\omega$, the limit is the Rayleigh functional value $p(x_m^j)$, which is approximated by the sequence $\{\theta_j^k\}_{k\in\mathbb{N}}$ with a quadratic order of convergence. To be more precise, one has the following lemma.

Lemma 3
For $u \in \mathbb{R}^m$, $u \ne 0$, let $x(\lambda,u)$ and $\theta(\lambda,u)$ be defined on $\mathbb{R}\setminus\{\omega_1,\ldots,\omega_s\}$ by (32) and (33). Furthermore, in consistence with (19), let

$$f(\lambda,u) := x(\lambda,u)^t(\lambda M - K)x(\lambda,u) \quad \text{for } \lambda \in \mathbb{R}\setminus\{\omega_1,\ldots,\omega_s\}.$$

Then the following assertions hold.

(i) $\bar\lambda \in \mathbb{R}\setminus\{\omega_1,\ldots,\omega_s\}$ is a fixed point of $\theta(\cdot,u)$ (i.e. $\bar\lambda = \theta(\bar\lambda,u)$) if and only if it is a zero of $f(\cdot,u)$ (i.e. $f(\bar\lambda,u) = 0$).

(ii) If $\bar\lambda \in \mathbb{R}\setminus\{\omega_1,\ldots,\omega_s\}$ is a fixed point of $\theta(\cdot,u)$, then

$$\left.\frac{\partial}{\partial\lambda}\theta(\lambda,u)\right|_{\lambda=\bar\lambda} = 0, \qquad (35)$$
such that the iteration

$$\lambda_{k+1} := \theta(\lambda_k,u) \qquad (36)$$

is locally convergent to $\bar\lambda$ with (at least) quadratic order of convergence.

(iii) The values $p(u)$ of the Rayleigh functional from Theorem 1 are zeroes of $f(\cdot,u)$ and can hence be approximated by use of (36) with local and quadratic convergence.

Proof: If $u \ne 0$ then $x(\lambda,u) \ne 0$ and hence $x(\lambda,u)^tMx(\lambda,u) \ne 0$. Thus

$$\begin{aligned}
\bar\lambda = \theta(\bar\lambda,u) &\iff \bar\lambda = \frac{x(\bar\lambda,u)^tKx(\bar\lambda,u)}{x(\bar\lambda,u)^tMx(\bar\lambda,u)}
\iff \frac{x(\bar\lambda,u)^t[K - \bar\lambda M]x(\bar\lambda,u)}{x(\bar\lambda,u)^tMx(\bar\lambda,u)} = 0\\
&\iff \frac{u^tT(\bar\lambda)u}{x(\bar\lambda,u)^tMx(\bar\lambda,u)} = 0
\iff \frac{f(\bar\lambda,u)}{x(\bar\lambda,u)^tMx(\bar\lambda,u)} = 0
\iff f(\bar\lambda,u) = 0,
\end{aligned}$$

which proves (i).

To see (35), differentiate the identity

$$x(\lambda,u)^tMx(\lambda,u)\,\theta(\lambda,u) = x(\lambda,u)^tKx(\lambda,u)$$

with respect to $\lambda$. This gives

$$2x_\lambda(\lambda,u)^tMx(\lambda,u)\,\theta(\lambda,u) + x(\lambda,u)^tMx(\lambda,u)\,\theta_\lambda(\lambda,u) = 2x_\lambda(\lambda,u)^tKx(\lambda,u).$$

Inserting $\lambda = \bar\lambda$ and using $\theta(\bar\lambda,u) = \bar\lambda$, we get

$$\theta_\lambda(\bar\lambda,u) = \frac{2}{x(\bar\lambda,u)^tMx(\bar\lambda,u)}\,x_\lambda(\bar\lambda,u)^t(K - \bar\lambda M)x(\bar\lambda,u). \qquad (37)$$

Since

$$(\bar\lambda M - K)x(\bar\lambda,u) = \begin{pmatrix} T(\bar\lambda)u \\ 0 \end{pmatrix}
\quad\text{and}\quad
x_\lambda(\bar\lambda,u) = \left.\frac{\partial}{\partial\lambda}\begin{pmatrix} u \\ P(\lambda)u \end{pmatrix}\right|_{\lambda=\bar\lambda} = \begin{pmatrix} 0 \\ P'(\bar\lambda)u \end{pmatrix},$$

one has

$$x_\lambda(\bar\lambda,u)^t(K - \bar\lambda M)x(\bar\lambda,u) = 0.$$

Hence eqn. (35) follows from eqn. (37). The statement about local and quadratic convergence is a consequence of $\theta(\cdot,u)$ being $C^1$ near $\bar\lambda$, eqn. (35), and Ostrowski's local convergence theory (cf. Ortega and Rheinboldt [18], e.g.). Part (iii) of the lemma is trivial.
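The iteration (36) is summarized in the following sketch (our illustration; the partitioning of the pencil into master and slave blocks is an assumption): each step performs the master-slave extension (32) with the current shift and evaluates the Rayleigh quotient (33).

    # Fixed point iteration (36) for one condensed eigenpair (sketch).
    import numpy as np

    def theta_iteration(K, M, m, u, mu, steps=5):
        """K, M: full pencil; m: number of masters; u: master eigenvector;
        mu: c-eigenvalue used as starting shift."""
        Ksm, Kss = K[m:, :m], K[m:, m:]
        Msm, Mss = M[m:, :m], M[m:, m:]
        lam = mu
        for _ in range(steps):
            # master-slave extension (7)/(32) with the current shift lam
            xs = -np.linalg.solve(Kss - lam * Mss, (Ksm - lam * Msm) @ u)
            x = np.concatenate([u, xs])
            lam = (x @ K @ x) / (x @ M @ x)   # Rayleigh quotient (33)
        return lam, x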
The last part of Lemma 3 explains why the iterative refinement (36) does not reduce the error of the values from eqn. (33) significantly: due to the quadratic convergence of the iteration to the corresponding Rayleigh functional value, the distance between the first iterate and the Rayleigh functional value is already below the distance between the Rayleigh functional value and the approximated eigenvalue of the original problem. Thus the value $\theta(\mu_j,x_m^j)$ normally behaves already like the Rayleigh functional. Hence most of the observations from Rothe and Voss [21] for enhancements of c-eigenvalues by the use of the Rayleigh functional carry over to these approximations. Among these is the empirical `1%-rule':

For those eigenpairs $(\mu_j,x_m^j)$ of the statically condensed eigenvalue problem with $\mu_j \in (0,\,0.5\,\omega)$ the improved values $\theta(\mu_j,x_m^j)$ have relative errors less than 1%.

Remarks: Some remarks are in order:

1. It should be noted that care has to be taken with the application of condensational approximation of eigenvalues and their Rayleigh quotient enhancement. Though the smallest slave eigenvalue $\omega$ is not explicitly needed within the computation of the values, one has to have an idea of its size for an interpretation of the computed approximations.

2. By the above `1%-rule' only the improvements of the c-eigenvalues from $(0,\,0.5\,\omega)$ will give 1%-accurate approximations to the smallest eigenvalues of the original problem.

3. Further, only the set of c-eigenvalues below $\omega$ (or their $\theta$-improvements) will approximate the set of lowest eigenvalues of the original problem. We encountered simple examples where all the condensed values $\mu_j$ were larger than $\omega$.
In these cases the iteration (34) converged to a fixed point of the function $\theta(\cdot,x_m^j)$. This very often turned out to be a moderately good approximation to a higher eigenvalue; none, however, converged to an approximation less than $\omega$. From the interlacing property of the eigenvalues of eigenproblems and their constrained versions (cf. [9], [19]) one knows that there is always at least one eigenvalue of the original problem which is less than or equal to $\omega$. Hence this eigenvalue can be missed by condensation if one accepts the (improved) c-eigenvalues as approximations to the lowest eigenvalues of the original problem without paying attention to the relative size of $\omega$.

4. For the Rayleigh functional one has (cf. [21])

$$p(x_m^j) \le \mu_j \quad \text{for all } \mu_j < \omega.$$

The same holds for the $\theta$-improvements:

$$\theta(\mu_j,x_m^j) \le \mu_j \quad \text{for all } \mu_j < \omega.$$

If the calculation of $\omega$ appears to be too expensive, then, the other way round, an indication for $\omega$ to be less than $\mu_k$ is the occurrence of the inequality

$$\mu_k < \theta(\mu_k,x_m^k). \qquad (38)$$

For the eigenvalues $\mu_j$, $j = 1,\ldots,m$, of the statically (as well as of the dynamically) condensed problem one knows that these are upper bounds of the corresponding eigenvalues $\lambda_j$, $j = 1,\ldots,m$, of the original problem (1):

$$\lambda_j \le \mu_j, \qquad j = 1,\ldots,m.$$

In a similar way as for the values $p(x_m^j)$ of the Rayleigh functional, this useful property is sometimes lost by $\theta$-improvement (for $j > 1$). It is, however, an easy undertaking to reinstall such inequalities. One only has to ensure that the approximations result as the eigenvalues of a projection of problem (1). Recalling from Section 1 that the $\theta$-improvement (33), as the Rayleigh quotient at $x(\mu_j,x_m^j)$, can be interpreted as the eigenvalue of the one-dimensional projection of (1) onto the space spanned by $x(\mu_j,x_m^j)$, it is natural to generalize this approach to the projection of problem (1) onto the space spanned by

$$x(\mu_1,x_m^1),\ldots,x(\mu_k,x_m^k), \qquad k \le m,$$

if approximations to the first $r \le k$ eigenvalues are desired. Notice that the resulting projected problem

$$(K_p - \nu M_p)x_p = 0 \qquad (39)$$

with

$$K_p = \left(x(\mu_i,x_m^i)^tKx(\mu_j,x_m^j)\right)_{i,j=1,\ldots,k}, \qquad M_p = \left(x(\mu_i,x_m^i)^tMx(\mu_j,x_m^j)\right)_{i,j=1,\ldots,k} \qquad (40)$$

is neither a static nor a dynamic condensation, since the $\hat\lambda$-values which are used for the extensions change with the masters to be prolongated. It is, however, a projection of the original problem, which we call the condensation-projection of problem (1). Due to its projection property its eigenvalues $\nu_1,\ldots,\nu_k$ are upper bounds of $\lambda_1,\ldots,\lambda_k$.

Notice further that the extra computational effort for this calculation is not very high. The diagonal entries of the matrices $K_p$ and $M_p$ are just the numerators and denominators of the Rayleigh quotients computed for the $\theta$-improvements. For their computation all the vectors $Kx(\mu_j,x_m^j)$, $Mx(\mu_j,x_m^j)$, $j = 1,\ldots,k$, have to be calculated anyway, and thus the setup of the rest of the matrices $K_p$ and $M_p$ requires just $k(k-1)$ extra inner products. Assuming that the eigenvalue analysis of a condensed problem of size $k$ requires an amount of work proportional to $k^3$, the additional eigenanalysis of (39) does not count.

In Section 6 we will incorporate this projective refinement of the $\mu_j$, $j = 1,\ldots,k$, into our final algorithm. We then expect the approximations to be upper bounds of the original values and to have 1%-accuracy. While the first expectation is known to be always true, the second one is up to now just a rule of thumb. Though it is a good one, one has to use rigorous error bounds if absolutely secure information is needed. One can use the a posteriori bounds of the following section.
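The construction of the condensation-projection (39), (40) can be sketched as follows (our illustration, with dense linear algebra for brevity; an actual implementation would exploit the substructure block form as in Section 6):

    # Condensation-projection (39), (40) from c-eigenpairs (sketch).
    import numpy as np
    from scipy.linalg import eigh

    def condensation_projection(K, M, m, masters, mus):
        """masters: (m, k) array of c-eigenvectors; mus: the k c-eigenvalues."""
        Ksm, Kss = K[m:, :m], K[m:, m:]
        Msm, Mss = M[m:, :m], M[m:, m:]
        cols = []
        for u, mu in zip(masters.T, mus):    # extension (32), one shift each
            xs = -np.linalg.solve(Kss - mu * Mss, (Ksm - mu * Msm) @ u)
            cols.append(np.concatenate([u, xs]))
        X = np.array(cols).T                 # n x k basis of projection space
        Kp, Mp = X.T @ K @ X, X.T @ M @ X    # projected matrices (40)
        nu, psi = eigh(Kp, Mp)               # projected problem (39)
        return nu, X @ psi                   # upper bounds nu_j + embeddings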
5 A posteriori error bounds

In this section we describe rigorous a posteriori error bounds which can be computed at negligible additional cost using the substructuring and the information from the condensation process. The final estimation procedure is a combination of generalized Krylov-Bogoliubov bounds, Kato-Temple bounds, the minmax characterization of eigenvalues, and a generalization of Sylvester's law of inertia by Wittrick and Williams [29].
Krylov-Bogoliubov and Kato-Temple bounds for eigenvalues of the generalized eigenvalue problem have already been considered in a paper of Matthies [16], where they are applied to bounding the errors of approximations obtained by subspace iteration or by the Lanczos method. Geradin [7] and Geradin and Carnoy [8] introduced these bounds for statically condensed eigenvalue problems.

A generalization of the Krylov-Bogoliubov bounds to clustered eigenvalues of the symmetric special eigenvalue problem

$$Ax = \lambda x, \qquad A \in \mathbb{R}^{(n,n)},\ A^T = A, \qquad (41)$$

due to Kahan can be found in [19], page 219, and reads as follows.

Theorem 4
Let $Q \in \mathbb{R}^{(n,m)}$ have orthonormal columns. Associated with $Q$ and the system matrix $A$ from the eigenproblem (41) are the projected matrix

$$H := Q^TAQ$$

and its residual matrix

$$R := AQ - QH.$$

Then there are $m$ of $A$'s eigenvalues $\{\lambda_{j'},\ j = 1,\ldots,m\}$ which can be put in one-to-one correspondence with the eigenvalues $\nu_j$, $j = 1,\ldots,m$, of $H$ in such a way that

$$|\nu_j - \lambda_{j'}| \le \|R\|_2, \qquad j = 1,\ldots,m.$$

We need a special case of this result, which we formulate as

Corollary 5
If the columns of $Q$ from Theorem 4 are eigenvector approximations from a projection of problem (41) onto $\mathrm{span}\{Q\}$, such that

$$H = Q^TAQ = \mathrm{diag}[\nu_1,\ldots,\nu_m]$$

and

$$R = (r^1,\ldots,r^m) \quad \text{with} \quad r^k := Aq^k - \nu_kq^k,\ k = 1,\ldots,m,$$

then

$$|\nu_j - \lambda_{j'}| \le \|R\|_2 \le \|R\|_F := \sqrt{\sum_{i=1}^{m}\|r^i\|_2^2}. \qquad (42)$$
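The following sketch (our illustration; the random symmetric test matrix is an assumption) demonstrates Corollary 5: it computes Ritz pairs of $A$ from a random orthonormal basis and checks that every Ritz value lies within the Frobenius norm of the block residual of some eigenvalue of $A$.

    # Numerical check of the residual bound (42).
    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 40, 5
    A = rng.standard_normal((n, n)); A = 0.5 * (A + A.T)
    Q, _ = np.linalg.qr(rng.standard_normal((n, m)))  # orthonormal columns
    H = Q.T @ A @ Q
    theta, W = np.linalg.eigh(H)       # Ritz values of A w.r.t. span{Q}
    Qr = Q @ W                         # Ritz vectors: projected H is diagonal
    R = A @ Qr - Qr * theta            # residuals r^k = A q^k - theta_k q^k
    bound = np.linalg.norm(R, "fro")   # ||R||_2 <= ||R||_F, cf. (42)

    lam = np.linalg.eigh(A)[0]
    # every Ritz value is within `bound` of some eigenvalue of A:
    print(all(np.min(np.abs(lam - t)) <= bound + 1e-12 for t in theta))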
A generalized eigenvalue problem

$$Ax = \lambda Bx, \qquad A,B \in \mathbb{R}^{(n,n)} \text{ symmetric},\ B \text{ positive definite}, \qquad (43)$$

may be transformed via

$$B^{-1/2}AB^{-1/2}B^{1/2}x = \lambda B^{1/2}x,$$

with $\tilde A := B^{-1/2}AB^{-1/2}$ and $y := B^{1/2}x$, to the equivalent special eigenvalue problem

$$\tilde Ay = \lambda y. \qquad (44)$$

Applying Corollary 5 to this problem one arrives at

Corollary 6
Let the columns of $Y \in \mathbb{R}^{(n,m)}$ be $B$-orthonormal projective approximations of eigenvectors of the generalized problem (43) with corresponding eigenvalue approximations $\nu_1,\ldots,\nu_m$, i.e. let

$$Y^TBY = I_m \quad \text{and} \quad Y^TAY = \mathrm{diag}[\nu_1,\ldots,\nu_m].$$

Then there are $m$ eigenvalues $\{\lambda_{j'},\ j = 1,\ldots,m\}$ of problem (43) which can be put in one-to-one correspondence with the $\nu_j$, $j = 1,\ldots,m$, in such a way that

$$|\nu_j - \lambda_{j'}| \le \sqrt{\sum_{i=1}^{m}\left\|B^{-1/2}(Ay^i - \nu_iBy^i)\right\|_2^2}.$$

We could apply the last corollary directly to our problem (1), putting $A := K$ and $B := M$; the special case $m = 1$ would then lead to the bounds given in [16]. In accordance with [7] and [8] we prefer, however, to apply the corollary to the equivalent problem

$$Mx = \frac{1}{\lambda}Kx. \qquad (45)$$

In the first case we would have to use the discrete differential operator $M^{-1}K$ in the course of the calculations. In the latter case this operator is replaced by the discrete integral operator $K^{-1}M$, which, in contrast to $M^{-1}K$, is a smoothing operator and therefore yields better numerical results for the modes corresponding to low frequencies. Moreover, $K^{-1}M$ can be applied at negligible cost using the information from the condensation process.
Having solved the eigenvalue problem (39) and having embedded the $M_p$-orthogonal eigenvectors of that system into the original space $\mathbb{R}^n$, one encounters at the end of the condensation-projection method from Section 4 the following situation: there are $m$ vectors $\tilde x^j$, $j = 1,\ldots,m$, which approximate eigenvectors of (1). These vectors are $M$- and $K$-orthogonal:

$$\begin{aligned}
\tilde X^TM\tilde X &= \mathrm{diag}[(\tilde x^1)^tM\tilde x^1,\ldots,(\tilde x^m)^tM\tilde x^m] =: \mathrm{diag}[m_1,\ldots,m_m],\\
\tilde X^TK\tilde X &= \mathrm{diag}[(\tilde x^1)^tK\tilde x^1,\ldots,(\tilde x^m)^tK\tilde x^m] =: \mathrm{diag}[k_1,\ldots,k_m].
\end{aligned} \qquad (46)$$

Furthermore, the condensation-projection approximations $\nu_j$ of the corresponding eigenvalues are the Rayleigh quotients of the $\tilde x^j$:

$$\nu_j = \frac{(\tilde x^j)^tK\tilde x^j}{(\tilde x^j)^tM\tilde x^j} = \frac{k_j}{m_j}, \qquad j = 1,\ldots,m.$$

Thus Corollary 6 applies with

$$A := M,\quad B := K,\quad y^i := k_i^{-1/2}\tilde x^i,\quad \nu_i \to \nu_i^{-1},\quad \lambda_i \to \lambda_i^{-1},$$

and gives

$$\left|\frac{1}{\nu_j} - \frac{1}{\lambda_{j'}}\right| \le \sqrt{\sum_{i\in I}\frac{1}{k_i}\left\|K^{-1/2}\left(M\tilde x^i - \nu_i^{-1}K\tilde x^i\right)\right\|_2^2}, \qquad j \in I, \qquad (47)$$

where $I$ is any subset of $\{1,\ldots,m\}$. Putting

$$\tau_i := \frac{(\tilde x^i)^tMK^{-1}M\tilde x^i}{(\tilde x^i)^tM\tilde x^i}, \qquad (48)$$

one finds that

$$\frac{1}{k_i}\left\|K^{-1/2}\left(M\tilde x^i - \nu_i^{-1}K\tilde x^i\right)\right\|_2^2 = \frac{\tau_i\nu_i - 1}{\nu_i^2}.$$

If we finally transform the bounds for the $\lambda_j^{-1}$ into bounds for the desired eigenvalues themselves, we arrive at the following lemma.
Lemma 7
Let $\tilde x^1,\ldots,\tilde x^m$ be the eigenvector approximations for the eigenvectors of problem (1) derived by the condensation-projection method and let $\nu_i = (\tilde x^i)^tK\tilde x^i/(\tilde x^i)^tM\tilde x^i$, $i = 1,\ldots,m$, be the corresponding eigenvalue approximations. Then for any subset $I$ of the index set $\{1,\ldots,m\}$ there are eigenvalues $\lambda_{j'}$, $j \in I$, of problem (1) that can be put in one-to-one correspondence with the $\nu_j$, $j \in I$, such that

$$\frac{\nu_j}{1 + \nu_jE(I)} \le \lambda_{j'} \le \frac{\nu_j}{1 - \nu_jE(I)}, \qquad j \in I, \qquad (49)$$

where

$$E(I) := \sqrt{\sum_{i\in I}(\tau_i\nu_i - 1)/\nu_i^2}$$

and $\tau_i$ is defined in (48).

Remark: For the last inequality in (49) one has to assume that $1 > \nu_jE(I)$. It is not necessary to discuss this condition at length, since we will replace this upper bound by a better one derived from minmax theory.

Let $\tilde x^1,\ldots,\tilde x^k$ be approximate eigenvectors which are obtained by the condensation-projection method introduced in Section 4 and let $\nu_j := R(\tilde x^j)$, $j = 1,\ldots,k$, be the eigenvalue approximations from problem (39). Since (39) is a projected problem of the original eigenvalue problem (1), one has

$$\lambda_j \le \nu_j, \qquad j = 1,\ldots,k. \qquad (50)$$

If the index set $I$ in Lemma 7 contains just one index, $I = \{j\}$, then the Krylov-Bogoliubov bound (49) yields that each interval

$$\left[\frac{\nu_j}{1 + \sqrt{\tau_j\nu_j - 1}},\ \frac{\nu_j}{1 - \sqrt{\tau_j\nu_j - 1}}\right] \qquad (51)$$

contains at least one eigenvalue of problem (1). If these intervals are disjoint and if exactly $k$ eigenvalues of (1) are contained in $(0,\nu_k]$, then there is exactly one eigenvalue in each of these intervals, and from (50) one gets that even each of the intervals

$$\left[\frac{\nu_j}{1 + \sqrt{\tau_j\nu_j - 1}},\ \nu_j\right] \qquad (52)$$

contains exactly one eigenvalue $\lambda_j$.
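For later reference, the quantities of Lemma 7 are easily coded (a sketch; the arrays nu and tau are assumed to hold the $\nu_i$ and $\tau_i$, with 0-based indexing):

    # Krylov-Bogoliubov intervals (49), (51), (52) (sketch).
    import numpy as np

    def E(I, nu, tau):
        """E(I) from Lemma 7 for an index set I."""
        return np.sqrt(sum((tau[i] * nu[i] - 1.0) / nu[i] ** 2 for i in I))

    def kb_interval(j, nu, tau, I=None):
        """Two-sided bound (49) for the eigenvalue paired with nu[j]."""
        e = E(I if I is not None else [j], nu, tau)
        lower = nu[j] / (1.0 + nu[j] * e)
        upper = nu[j] / (1.0 - nu[j] * e) if nu[j] * e < 1.0 else np.inf
        # the projection property (50) allows capping the upper end, cf. (52)
        return lower, min(upper, nu[j])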
The number of eigenvalues of eqn. (1) which are less than a given parameter $\tilde\lambda$ (eigenvalues counted by multiplicity) can be obtained by the following result of Wittrick and Williams [29]:

Theorem 8
Suppose that $\tilde\lambda$ is not an eigenvalue of the slave problem (9). Let $N(\tilde\lambda)$ and $N_0(\tilde\lambda)$ be the number of eigenvalues of eqn. (1) and of the slave eigenvalue problem (9), respectively, which are less than $\tilde\lambda$, and denote by $s(\tilde\lambda)$ the number of negative eigenvalues of the matrix $-T(\tilde\lambda)$ (note the minus sign, which accounts for the sign convention in (13)). Then

$$N(\tilde\lambda) = N_0(\tilde\lambda) + s(\tilde\lambda). \qquad (53)$$

The proof follows immediately from the fact that by eqn. (12) the matrices

$$K - \tilde\lambda M \qquad \text{and} \qquad \begin{bmatrix} -T(\tilde\lambda) & O\\ O & K_{ss} - \tilde\lambda M_{ss} \end{bmatrix}$$

are congruent and thus, by Sylvester's law of inertia (cf. [9]), have the same number of negative eigenvalues.

Remark: Notice that we are only interested in the case $\tilde\lambda < \omega$ (i.e. $N_0(\tilde\lambda) = 0$), and that the number of negative eigenvalues of $-T(\tilde\lambda)$ can be computed easily as the number of negative diagonal entries in the LR-decomposition of $-T(\tilde\lambda)$, which is obtained without pivoting. The major cost of the application of Theorem 8 therefore consists in the evaluation of $T(\tilde\lambda)$.
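A sketch of the resulting counting procedure (our illustration; an LDL^T factorization takes the place of the pivot-free LR-decomposition mentioned in the remark, and the dense formation of $T(\tilde\lambda)$ is for brevity only):

    # Eigenvalue count (53) via the inertia of -T(lambda~) (sketch).
    import numpy as np
    from scipy.linalg import ldl, eigh

    def eig_count(K, M, m, lam):
        """Number of eigenvalues of (1) below lam (lam no slave eigenvalue)."""
        Kss, Ksm, Kms = K[m:, m:], K[m:, :m], K[:m, m:]
        Mss, Msm, Mms = M[m:, m:], M[m:, :m], M[:m, m:]
        # exactly condensed matrix T(lam), cf. (13)
        W = np.linalg.solve(Kss - lam * Mss, Ksm - lam * Msm)
        T = -(K[:m, :m] - lam * M[:m, :m]) + (Kms - lam * Mms) @ W
        # N0: slave eigenvalues below lam (zero whenever lam < omega)
        N0 = int(np.sum(eigh(Kss, Mss, eigvals_only=True) < lam))
        _, D, _ = ldl(-T)                         # LDL^T of -T(lam)
        s = int(np.sum(np.linalg.eigvalsh(D) < 0.0))  # handles 2x2 blocks
        return N0 + s

A quick sanity check: for $\tilde\lambda$ below the smallest eigenvalue both blocks of the congruent matrix are positive definite, so the function returns 0, as it must.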
Now assume that we want to find approximations to the first $Q\ (\le m)$ eigenvalues of (1) together with rigorous bounds of their errors. If we ascertain with the aid of the previous remark that there are precisely $Q$ eigenvalues less than $\nu_Q$, and if the intervals (52) are disjoint, then each of these intervals for $j = 1,\ldots,Q$ contains exactly one of these eigenvalues. If these intervals are not disjoint, one has to use Lemma 7 in its full generality to form $Q$ possibly intersecting intervals

$$[\sigma_1,\nu_1],\ [\sigma_2,\nu_2],\ \ldots,\ [\sigma_Q,\nu_Q]$$

in a one-to-one correspondence with the eigenvalues

$$\lambda_1 \le \lambda_2 \le \ldots \le \lambda_Q$$

such that

$$\lambda_j \in [\sigma_j,\nu_j], \qquad j = 1,\ldots,Q.$$

To achieve this, one starts with the application of Lemma 7 to the pair $(\nu_Q,\tilde x^Q)$ with $I := \{Q\}$ to find the interval $[\nu_Q/(1 + \sqrt{\tau_Q\nu_Q - 1}),\ \nu_Q]$. If the lower bound of this interval is larger than $\nu_{Q-1}$, then this interval will be disjoint from all the intervals to come and will contain the eigenvalue $\lambda_Q$. If the interval contains $\nu_{Q-1}$, then Lemma 7 is reinvoked with the enlarged set $I := I \cup \{Q-1\}$ to find the two intervals $[\nu_{Q-1}/(1 + \nu_{Q-1}E(I)),\ \nu_{Q-1}]$ and $[\nu_Q/(1 + \nu_QE(I)),\ \nu_Q]$. If $\nu_{Q-2} < \nu_{Q-1}/(1 + \nu_{Q-1}E(I))$, then the union of these intervals is disjoint from all intervals to come, one has

$$\lambda_j \in \left[\frac{\nu_j}{1 + \nu_jE(I)},\ \nu_j\right], \qquad j = Q-1,\ Q,$$

and one can start the process again with $Q-2$ instead of $Q$, analyzing now the pair $(\nu_{Q-2},\tilde x^{Q-2})$ with the one-element index set $I := \{Q-2\}$ in Lemma 7. If this is not the case, then the index set $I$ is enlarged further. The computational details will be given in the next section.

The algorithm for error estimation presented there incorporates as an additional device the application of Kato-Temple type bounds (cf. [16], [7], [8]). Assume that (by the above procedure, e.g.) the following situation is given:

$$\lambda_j \le \nu_j < \sigma_{j+1} \le \lambda_{j+1}. \qquad (54)$$

Let $\tilde x^j$ be the eigenvector approximation associated with $\nu_j = R(\tilde x^j)$ and let

$$\tilde x^j = \sum_{k=1}^{n}\alpha_kx^k$$

be the representation of $\tilde x^j$ in terms of the $M$-orthogonal eigenvectors $x^k$, $k = 1,\ldots,n$. Using $(K - \sigma M)x^j = (\lambda_j - \sigma)Mx^j$, the positivity of the eigenvalues $\lambda_j$, and $K^{-1}Mx^j = \lambda_j^{-1}x^j$ for $j = 1,\ldots,n$, one easily sees that

$$\left[(K - \sigma_{j+1}M)\tilde x^j\right]^tK^{-1}\left[(K - \lambda_jM)\tilde x^j\right] = \sum_{k=1}^{n}\frac{(\lambda_k - \sigma_{j+1})(\lambda_k - \lambda_j)}{\lambda_k}\,\alpha_k^2\,(x^k)^tMx^k \ge 0. \qquad (55)$$

Evaluation of the left hand side and division by $(\tilde x^j)^tM\tilde x^j$ leads to

$$\nu_j - (\lambda_j + \sigma_{j+1}) + \lambda_j\sigma_{j+1}\tau_j \ge 0,$$

which in turn produces the (lower) Kato-Temple bound

$$\lambda_j \ge \frac{\sigma_{j+1} - \nu_j}{\sigma_{j+1}\tau_j - 1}. \qquad (56)$$

A similar upper Kato-Temple bound could likewise be computed if, in analogy to (54), a gap between $\nu_j$ and $\lambda_{j-1}$ could be specified. Since only the given estimate (56) will be applied, we do not go into the details of that estimate.
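In code the bound (56) is a one-liner (a sketch; the caller must guarantee the gap situation (54)):

    # Lower Kato-Temple bound (56) (sketch).
    def kato_temple_lower(nu_j, tau_j, sigma_next):
        """Requires nu_j < sigma_next, cf. (54)."""
        return (sigma_next - nu_j) / (sigma_next * tau_j - 1.0)

Note that the denominator is positive: $\tau_j\nu_j \ge 1$ holds by the Cauchy-Schwarz inequality applied to $(\tilde x^j)^tM\tilde x^j = (K^{-1/2}M\tilde x^j)^t(K^{1/2}\tilde x^j)$, and $\sigma_{j+1} > \nu_j$ by (54).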
6 A prototype condensation-projection algorithm

In this section we collect the results of the previous ones to form a condensation algorithm for generalized eigenvalue problems with projective refinement and a posteriori error estimation. In order to ease implementation we state the full problem a second time.

The problem is to determine a group of smallest eigenvalues of the generalized eigenvalue problem

$$(K - \lambda M)x = 0$$

with symmetric positive definite matrices $K,M \in \mathbb{R}^{(n,n)}$ by condensation. The group should contain approximations to all eigenvalues less than a prespecified bound $\bar\lambda > 0$ with relative errors of approximately one percent, or, if these eigenvalues cannot all be approximated to such a precision, approximations to those first eigenvalues for which such a precision can be reached.

Let the stiffness matrix $K$ and the mass matrix $M$ have the following block structure:

$$K = \begin{bmatrix}
K_{mm} & K_{ms1} & K_{ms2} & \ldots & K_{msr}\\
K_{sm1} & K_{ss1} & O & \ldots & O\\
K_{sm2} & O & K_{ss2} & \ldots & O\\
\vdots & \vdots & & \ddots & \vdots\\
K_{smr} & O & O & \ldots & K_{ssr}
\end{bmatrix} \qquad (57)$$

and

$$M = \begin{bmatrix}
M_{mm} & M_{ms1} & M_{ms2} & \ldots & M_{msr}\\
M_{sm1} & M_{ss1} & O & \ldots & O\\
M_{sm2} & O & M_{ss2} & \ldots & O\\
\vdots & \vdots & & \ddots & \vdots\\
M_{smr} & O & O & \ldots & M_{ssr}
\end{bmatrix}, \qquad (58)$$
respectively, where for the nonzero blocks

$$K_{mm},M_{mm} \in \mathbb{R}^{(m,m)}, \quad K_{ssj},M_{ssj} \in \mathbb{R}^{(s_j,s_j)}, \quad K_{smj},M_{smj} \in \mathbb{R}^{(s_j,m)}, \quad K_{msj},M_{msj} \in \mathbb{R}^{(m,s_j)},$$

$j = 1,\ldots,r$, with $n = m + (s_1 + \cdots + s_r)$.

This form of the matrices arises, e.g., by dividing the original problem into $r$ substructures with one coupling interface structure. Then $K_{ssj}$ and $M_{ssj}$ are the stiffness and mass matrices, respectively, of the $j$-th substructure, $K_{mm}$ and $M_{mm}$ are the corresponding matrices of the coupling interface system, the matrices $K_{smj}$, $M_{smj}$ collect the influence of the interface system on the $j$-th substructure, and $K_{msj}$, $M_{msj}$ describe the influence of this substructure on the coupling system.

The block structure of the matrices $K$ and $M$ is conserved if additionally interior degrees of freedom of the substructures are chosen as master variables. By a proper choice of interior masters the minimal slave eigenvalue is raised substantially, thus improving the eigenvalue and eigenvector approximations and enlarging that part of the spectrum which is approximated well (cf. Section 7).

We will write down the algorithm making explicit use of the structures (57) and (58) to show that most of the computations can be done in parallel.

CONDENSATION-PROJECTION ALGORITHM:

Input: The nonzero submatrices of $K$ and $M$ from (57) and (58). A threshold $\bar\lambda$ for the eigenvalues of the problem to be approximated.

Output: The number $q$ of eigenvalues from the lower part of the spectrum (equal to the number of eigenvalues below $\bar\lambda$, or less) that can (most probably) be approximated with a relative error less than 1%. Approximate eigenvalues $\nu_1 \le \ldots \le \nu_q$ and approximations $\tilde x^1,\ldots,\tilde x^q$ of corresponding eigenvectors. Optional lower and upper bounds $\sigma_j \le \lambda_j \le \nu_j$ for the first $q$ eigenvalues.

1. Static condensation of the problem.
Determine the LU-decompositions of the stiffness matrices $K_{ssj}$:

$$K_{ssj} =: L_jR_j, \qquad j = 1,\ldots,r. \qquad (59)$$
Using these decompositions, determine $X_j$ from

$$K_{ssj}X_j = K_{smj}, \qquad j = 1,\ldots,r.$$

Compute

$$K_{mmj} := K_{msj}X_j, \qquad Y_j := M_{msj}X_j, \qquad M_{mmj} := Y_j + Y_j^T - X_j^TM_{ssj}X_j, \qquad j = 1,\ldots,r,$$

and collect (cf. (25))

$$K_0 := K_{mm} - \sum_{j=1}^{r}K_{mmj}, \qquad M_0 := M_{mm} - \sum_{j=1}^{r}M_{mmj}.$$

2. Estimate the minimal slave eigenvalue.
For $j := 1$ to $r$ estimate the smallest eigenvalue $\omega_j$ of the eigenvalue problem

$$(K_{ssj} - \omega M_{ssj})z_j = 0.$$

(Apply, e.g., inverse iteration, making use of the LU-decomposition (59); a sketch is given below.) Put

$$\omega := \min_{j=1,\ldots,r}\omega_j$$

and let

$$\kappa := \begin{cases} \omega\bar\lambda/(\omega - \bar\lambda) & \text{if } \bar\lambda \le \omega/3,\\ \omega/2 & \text{otherwise.} \end{cases}$$

Remarks: (i) From the a priori bound in Theorem 2 one obtains that in the case $\bar\lambda \le \omega/3$ all eigenvalues less than $\bar\lambda$ have c-eigenvalues less than $\kappa$. So those c-eigenvalues will be used for projective enhancement. Otherwise we use all c-eigenvalues which can be improved to 1% accuracy by the 1%-rule, i.e. those c-eigenvalues below $\omega/2$.

(ii) If in the second case the maximal final eigenvalue approximation does not exceed $\bar\lambda$, a message to the user is given.
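A sketch of such an inverse iteration (our illustration; the convergence control is deliberately simple), reusing the factors (59):

    # Smallest slave eigenvalue by inverse iteration with stored LU factors.
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def min_slave_eigenvalue(Kss, Mss, iters=50, tol=1e-10):
        lu = lu_factor(Kss)                  # the factors from (59)
        z = np.ones(Kss.shape[0])
        omega = 0.0
        for _ in range(iters):
            w = lu_solve(lu, Mss @ z)        # one inverse-iteration step
            z_new = w / np.linalg.norm(w)
            omega_new = (z_new @ Kss @ z_new) / (z_new @ Mss @ z_new)
            if abs(omega_new - omega) <= tol * omega_new:
                return omega_new
            z, omega = z_new, omega_new
        return omega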
3. Determine an appropriate number of eigenvalues of the condensed problem.
Let $\mu_1 \le \ldots \le \mu_m$ denote the eigenvalues of the condensed problem

$$K_0u = \mu M_0u \qquad (60)$$

and put $\mu_0 := 0$ and $\mu_{m+1} := +\infty$. Let $q$ denote the nonnegative integer which satisfies

$$\mu_0 \le \ldots \le \mu_q \le \kappa < \mu_{q+1}.$$

If $q = 0$ then STOP. Otherwise put $q := \min\{q,\,m-1\}$ and $Q := q + \varrho$ with a small nonnegative integer $\varrho$ such that $Q \le m$. Compute the $Q$ smallest eigenvalues $\mu_1,\ldots,\mu_Q$ of (60) together with corresponding eigenvectors $u^j$, $j = 1,\ldots,Q$.

Remarks: (i) $\mu_1,\ldots,\mu_q$ will be those eigenvalues less than the prescribed bound for which, hopefully, approximations $\nu_1,\ldots,\nu_q$ with a relative error of about 1% and less can be computed by condensation. The additional approximations $\mu_{q+1},\ldots,\mu_Q$ are computed to increase the quality of the final error estimates, if such are desired. If one wants to save computing time, one can trust the 1%-rule, put $\varrho = 0$, and skip most of the error estimation subprocedure of item 8.

(ii) The determination of $q$ and $Q$ will probably have to be done simultaneously with the computation of the eigenpairs. One can think of subspace iteration, e.g., with increasing subspace dimensions.

4. Prolongate the eigenvectors of the condensation.
Compute

$$v_j^k := -(K_{ssj} - \mu_kM_{ssj})^{-1}(K_{smj} - \mu_kM_{smj})u^k, \qquad j = 1,\ldots,r,\quad k = 1,\ldots,Q.$$

Notice that the prolongation $x(\mu_k,u^k)$ of $u^k$ to the full space is

$$x(\mu_k,u^k) = \left[(u^k)^t,(v_1^k)^t,\ldots,(v_r^k)^t\right]^t,$$

wherein $v_j^k$ is the part of that vector associated with the $j$-th substructure.
5. Project problem (1) onto $\mathrm{span}\{x(\mu_j,u^j) : j = 1,\ldots,Q\}$.
Using the partition of the involved data, one gets for $L \in \{K,M\}$ the projected matrices

$$L_p := \left((u^i)^tL_{mm}u^j + \sum_{k=1}^{r}\left[(u^i)^tL_{msk}v_k^j + (v_k^i)^tL_{smk}u^j + (v_k^i)^tL_{ssk}v_k^j\right]\right)_{i,j=1}^{Q}.$$

6. Solve the projected problem

$$K_p\psi^j = \nu_jM_p\psi^j, \qquad j = 1,\ldots,Q. \qquad (61)$$

If $\nu_q < \bar\lambda$, print a message that possibly not all eigenvalues below $\bar\lambda$ are approximated by the computed $\nu_1,\ldots,\nu_q$.

7. Embed the eigenvectors $\psi^j$ into the full space.
For $j := 1$ to $Q$ compute

$$\tilde x^j := \sum_{k=1}^{Q}\left[(u^k)^t,(v_1^k)^t,\ldots,(v_r^k)^t\right]^t\psi_k^j, \qquad (62)$$

and let

$$\tilde x^j =: \left[(x_m^j)^t,(x_{s1}^j)^t,\ldots,(x_{sr}^j)^t\right]^t, \qquad j = 1,\ldots,Q.$$

The final approximate eigenpairs are now given by

$$(\nu_j,\tilde x^j), \qquad j = 1,\ldots,q.$$

Remark: The additional pairs $(\nu_j,\tilde x^j)$, $j = q+1,\ldots,Q$, are used within the error estimates.

8. Error estimation.
If one decides not to compute rigorous error bounds but to trust the 1%-rule instead in order to save computing time, one should at least perform the following first step of this procedure:

Calculate with Theorem 8 the number $N$ of eigenvalues less than $\nu_Q$. If $N$ is larger than $Q$: write a message saying that there are eigenvalues below $\nu_Q$ which are not approximated by the condensation method. STOP.
Detection of $N > Q$ is bad, since the calculated information is then incomplete. This bad case cannot be excluded if no special routine for choosing the masters is included: an eigenvector cannot be detected by condensation if, e.g., its master part vanishes identically. In this case the error estimation procedure will in general not give trustworthy results and should hence not be run. If $N = Q$, however, it will produce intervals $[\sigma_j,\nu_j]$, $j = 1,\ldots,Q$, such that definitively

$$\sigma_j \le \lambda_j \le \nu_j, \qquad j = 1,\ldots,Q.$$

For the estimation of the errors the quantities

$$\tau_j := \frac{(\tilde x^j)^tMK^{-1}M\tilde x^j}{(\tilde x^j)^tM\tilde x^j}$$

as defined in (48) will be needed. We start with a description of how to compute these quantities if the stiffness and mass matrices have the block form given in (57) and (58) and the approximate eigenvector $\tilde x$ is given in its partitioned form

$$\tilde x = (\tilde x_m^t,\tilde x_{s1}^t,\ldots,\tilde x_{sr}^t)^t,$$

where $\tilde x_m$ denotes the master portion of $\tilde x$ and $\tilde x_{sj}$ the slave portion corresponding to the $j$-th substructure.

Function TAU($\tilde x$): Let

$$v := M\tilde x = \begin{Bmatrix} M_{mm}\tilde x_m + \sum_{j=1}^{r}M_{msj}\tilde x_{sj}\\ \vdots\\ M_{smj}\tilde x_m + M_{ssj}\tilde x_{sj}\\ \vdots \end{Bmatrix}$$

and let $w := K^{-1}M\tilde x = K^{-1}v$, where $w = (w_m^t,w_{s1}^t,\ldots,w_{sr}^t)^t$. Then block Gaussian elimination yields

$$K_0w_m = v_m - \sum_{j=1}^{r}K_{msj}K_{ssj}^{-1}v_{sj}, \qquad K_{ssj}w_{sj} = v_{sj} - K_{smj}w_m, \qquad j = 1,\ldots,r,$$

where $K_0$ denotes the reduced matrix given in eqn. (25). Put

$$\tau = \frac{v^tw}{v^t\tilde x}.$$

Remark: Notice that in the condensation process systems of equations with the system matrices $K_0$ and $K_{ssj}$ have to be solved anyway, and therefore LR-decompositions of these matrices are at hand. Hence $\tau$ can be calculated at negligible cost.
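A direct transcription of the function TAU (a sketch; the substructure data layout is an assumption, and for brevity the factorizations of $K_0$ and the $K_{ssj}$ are recomputed instead of being reused):

    # Function TAU: tau = v^t w / v^t x~ with v = M x~, w = K^{-1} v (sketch).
    import numpy as np

    def tau_fun(Mmm, K0, subs, xm, xs):
        """subs[j]: dict with Kss, Mss, Ksm, Msm (Kms = Ksm^T, Mms = Msm^T);
        xm: master part, xs[j]: slave part of x~ on substructure j."""
        # v := M x~, blockwise
        vm = Mmm @ xm + sum(s["Msm"].T @ x for s, x in zip(subs, xs))
        vs = [s["Msm"] @ xm + s["Mss"] @ x for s, x in zip(subs, xs)]
        # w := K^{-1} v by block Gaussian elimination
        rhs = vm - sum(s["Ksm"].T @ np.linalg.solve(s["Kss"], v)
                       for s, v in zip(subs, vs))
        wm = np.linalg.solve(K0, rhs)
        ws = [np.linalg.solve(s["Kss"], v - s["Ksm"] @ wm)
              for s, v in zip(subs, vs)]
        num = vm @ wm + sum(v @ w for v, w in zip(vs, ws))
        den = vm @ xm + sum(v @ x for v, x in zip(vs, xs))
        return num / den                     # tau as in (48)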
With the function TAU() available we can now write down the error estimation procedure:

    z_0 := 0;  B := 0;
    for i := 1 to Q do z_i := 1/nu_i;
    for j := Q downto 0 do
    begin
      if (B < z_j or j = 0) then
      begin  { final bounds for the cluster j+1, ..., I }
        if j < Q then
        begin
          for i := j+1 to I-1 do sigma_i := nu_i/(1 + nu_i * W);
          sigma_I := max{sigma_I, nu_I/(1 + nu_I * W)};
        end;
        if j > 0 then
        begin  { start a new cluster }
          I := j;  tau := TAU(x~^j);
          R := (tau * nu_j - 1)/nu_j^2;  W := sqrt(R);  B := z_j + W;
        end;
        if j > 0 and j < Q then
        begin  { compute the Kato-Temple bound }
          sigma_j := (sigma_{j+1} - nu_j)/(sigma_{j+1} * tau - 1);
          B := min{B, 1/sigma_j};
        end;
      end
      else
      begin  { update the Kahan-Krylov-Bogoliubov bounds }
        tau := TAU(x~^j);
        R := R + (tau * nu_j - 1)/nu_j^2;  W := sqrt(R);  B := z_j + W;
      end;
    end;
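For readers who prefer it, the same procedure reads as follows in Python (a sketch mirroring the pseudocode above, with 0-based indexing; the arrays nu and tau are assumed to hold the $\nu_j$ and $\tau_j$):

    # Lower bounds sigma_j by clusterwise Kahan/Krylov-Bogoliubov bounds,
    # sharpened with a Kato-Temple bound below each closed cluster (sketch).
    import numpy as np

    def lower_bounds(nu, tau):
        Q = len(nu)
        sigma = np.zeros(Q)
        z = 1.0 / nu                        # work in reciprocal space
        B, R, W, top = 0.0, 0.0, 0.0, None  # top = largest index of cluster
        for j in range(Q - 1, -2, -1):      # j = -1 plays the role of j = 0
            if j < 0 or B < z[j]:
                if top is not None:         # finalize cluster j+1 ... top
                    for i in range(j + 1, top):
                        sigma[i] = nu[i] / (1.0 + nu[i] * W)
                    sigma[top] = max(sigma[top], nu[top] / (1.0 + nu[top] * W))
                if j >= 0:                  # start a new cluster at j
                    top = j
                    R = (tau[j] * nu[j] - 1.0) / nu[j] ** 2
                    W = np.sqrt(R); B = z[j] + W
                    if j < Q - 1:           # Kato-Temple bound below cluster
                        sigma[j] = (sigma[j + 1] - nu[j]) / (sigma[j + 1] * tau[j] - 1.0)
                        B = min(B, 1.0 / sigma[j])
            else:                           # extend the current cluster
                R += (tau[j] * nu[j] - 1.0) / nu[j] ** 2
                W = np.sqrt(R); B = z[j] + W
        return sigma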
Fig. 1. Substructuring of the tapered cantilever beam

7 Numerical examples

7.1 Tapered cantilever beam

We consider the transversal vibrations of a tapered cantilever beam of length 1 with cross-sectional area $A(x) := A_0(1 - 0.5x)^2$, $0 \le x \le 1$. The problem is described by the eigenvalue problem

$$\left((1 - 0.5x)^4y''\right)'' = \lambda(1 - 0.5x)^2y, \qquad y(0) = y'(0) = y''(1) = y'''(1) = 0, \qquad (63)$$

where $\lambda = \omega^2\varrho A_0/(EI_0)$; here $A_0$ and $I_0$ are the area of the cross section and the moment of inertia at $x = 0$, respectively, $\varrho$ is the mass per unit volume, $E$ is the modulus of elasticity, and $\omega$ denotes the natural circular frequencies of the beam.

We discretized eqn. (63) by finite elements with cubic Hermite splines (beam elements), divided the beam into 6 substructures of identical length, and subdivided each substructure into 5 elements of identical length. Thus problem (1) has dimension $n = 60$ and is condensed to dimension $m = 12$.
The 6 substructure eigenproblems (15) are of dimension $s/r = 8$ each, and the minimal slave eigenvalue is $\omega \approx 190200$. By the 1%-rule, 6 eigenvalues can be obtained with an error less than 1%.

Table 1 contains the eigenvalue approximations $\nu_j$, $j = 1,\ldots,6$, from the condensation-projection method and their relative errors. We added the relative errors of the eigenvalue approximations that are obtained by projection onto the 12-dimensional space spanned by the prolongations of all eigenvectors of the condensed problem. The improvement of the approximations obtained by using the higher dimension is insignificant. Hence, differently from subspace iteration, one does not have to use higher dimensional subspaces to obtain satisfactory eigenvalue approximations (for subspace iteration it is recommended to use subspaces of dimension min(2k, k+8) to obtain good approximations of the k smallest eigenvalues, cf. Bathe [1]). A similar behaviour was observed in all examples that we treated.

Tab. 1. Tapered cantilever beam; dimensions 6 and 12

j | $\nu_j$              | $(\nu_j-\lambda_j)/\lambda_j$ (dim = 6) | $(\nu_j-\lambda_j)/\lambda_j$ (dim = 12)
1 | 2.13920157650317E+01 | 3.52E-13 | 9.17E-14
2 | 3.82109582756324E+02 | 1.38E-09 | 6.55E-10
3 | 2.35992737167125E+03 | 2.51E-07 | 2.34E-07
4 | 8.42988817779263E+03 | 8.58E-06 | 7.78E-06
5 | 2.23209130540396E+04 | 8.56E-05 | 4.05E-05
6 | 4.91603912747807E+04 | 3.39E-03 | 2.83E-03

Table 2 contains the lower bounds $\sigma_j$ of the smallest 6 eigenvalues obtained from the algorithm of Section 6 (which are all Kato-Temple bounds) using the condensed-projected problem of dimension 7, as well as the relative distance $(\nu_j - \sigma_j)/\nu_j$ of these bounds to the upper bounds $\nu_j$. The last column contains the relative distance of the bounds if the problem is projected onto a 12-dimensional space, demonstrating that the gain in accuracy is not significant. Moreover, a comparison with Table 1 demonstrates that the Kato-Temple bounds are realistic.

Tab. 2. Tapered cantilever beam; lower bounds
j | $\sigma_j$           | $(\nu_j-\sigma_j)/\nu_j$ (dim = 6) | $(\nu_j-\sigma_j)/\nu_j$ (dim = 12)
1 | 2.13920157650236E+01 | 3.76E-13 | 1.05E-13
2 | 3.82109582126386E+02 | 1.65E-09 | 7.81E-10
3 | 2.35992655024818E+03 | 3.47E-07 | 3.25E-07
4 | 8.42977535282817E+03 | 1.28E-05 | 1.24E-05
5 | 2.23177154291946E+04 | 1.29E-04 | 7.14E-04
6 | 4.87866533772251E+04 | 7.35E-03 | 5.76E-03

7.2 Clamped plate; only interface masters

The free vibrations of a uniform thin elastic plate covering the region $\Omega := [0,2]\times[0,2]$ and clamped at its boundary $\partial\Omega$ are governed by the eigenvalue problem

$$\Delta^2u = \lambda u \ \text{ in } \Omega, \qquad u = \frac{\partial u}{\partial n} = 0 \ \text{ on } \partial\Omega, \qquad (64)$$

where $\lambda = \omega^2\varrho h/D$; here $\varrho$ is the mass density of the plate material, $h$ the thickness and $D$ the flexural rigidity of the plate, and $\omega$ denotes the circular frequency of a free vibration.

The region $\Omega$ is divided into $r = 9$ square substructures of edge length $2/3$ (cf. Fig. 2). Each substructure is discretized by 16 square Bogner-Fox-Schmit elements (node variables $u$, $u_x$, $u_y$, $u_{xy}$), and the master variables are chosen to be the degrees of freedom on the boundaries of the substructures. Thus problem (1) is of dimension $n = 484$, and after condensation we retain $m = 160$ master variables. Each substructure has $s/r = 36$ slave variables, and the minimal slave eigenvalue is $\omega \approx 6581.9$.
Fig. 2. Substructuring of the clamped plate (corners of the region at (0,0), (2,0), (0,2), (2,2); filled nodes denote master nodes, open nodes denote slave nodes)

10 eigenvalues of the condensed problem are contained in $(0,\,0.5\,\omega)$. Tab. 3 contains the approximate eigenvalues $\nu_j$, their relative errors, and their relative distances to the lower bounds $\sigma_j$ from Section 6, where the dimension of the projected problem is 16.

Tab. 3. Clamped plate; only interface masters

j  | $\nu_j$     | $(\nu_j-\lambda_j)/\lambda_j$ | $(\nu_j-\sigma_j)/\nu_j$ | $(\nu_j-\bar\sigma_j)/\nu_j$
1  | 80.938897   | 8.70E-08 | 4.25E-07 | 1.14E-07
2  | 336.753546  | 6.14E-06 | 3.51E-03 | 1.14E-05
3  | 336.753546  | 6.14E-06 | 1.14E-05 | 1.14E-05
4  | 732.281575  | 1.20E-04 | 4.03E-04 | 4.03E-04
5  | 1083.377527 | 2.80E-04 | 1.61E-02 | 1.61E-02
6  | 1095.397716 | 1.87E-03 | 5.69E-03 | 5.11E-03
7  | 1703.859234 | 4.83E-04 | 7.77E-02 | 8.93E-03
8  | 1708.859234 | 4.83E-04 | 9.68E-03 | 8.93E-03
9  | 2805.335439 | 1.03E-02 | 1.31E-01 | 3.93E-02
10 | 2805.335439 | 1.03E-02 | 3.93E-02 | 3.93E-02

The algorithm from Section 6 yielding lower bounds detects the double eigenvalues $\lambda_2 = \lambda_3$, $\lambda_7 = \lambda_8$ and $\lambda_9 = \lambda_{10}$ as clusters of two eigenvalues.
The relative distances $(\nu_j - \sigma_j)/\nu_j$ of the upper and lower bounds are much better for $j = 3, 8, 10$ than those for $j = 2, 7, 9$, respectively, since in these cases the Kato-Temple bounds apply. If it is known in advance that $\lambda_2 = \lambda_3$, $\lambda_7 = \lambda_8$ and $\lambda_9 = \lambda_{10}$ are double eigenvalues, then the Kato-Temple bounds can be computed for these pairs and can be used in the calculation of the Kato-Temple bounds of $\lambda_1$ and $\lambda_6$. The relative distances of these bounds $\bar\sigma_j$ are contained in the last column of Table 3. Again the Kato-Temple bounds are realistic, with the exception of the fifth eigenvalue, which is quite close to the sixth one.

7.3 Clamped plate; interface and interior masters

Again we consider the clamped plate of Example 7.2 with the same discretization as before. As master variables we choose the degrees of freedom on the boundaries of the substructures and additionally the displacement in the center of each substructure, raising the dimension of the condensed problem to $m = 169$ and the minimal slave eigenvalue to $\omega \approx 27747$.

In this case 22 eigenvalues of the condensed problem are contained in the interval $(0,\,0.5\,\omega)$. Tab. 4 contains the approximate eigenvalues $\nu_j$, their relative errors, and their relative distances to the lower Kato-Temple bounds $\sigma_j$, which were obtained taking advantage of the knowledge that the eigenvalues $\lambda_2$, $\lambda_7$, $\lambda_9$, $\lambda_{14}$ and $\lambda_{18}$ are double eigenvalues. The dimension of the projected problem is 23.

Tab. 4. Clamped plate; interface and interior masters
j     | $\nu_j$     | $(\nu_j-\lambda_j)/\lambda_j$ | $(\nu_j-\sigma_j)/\nu_j$
1     | 80.938890   | 4.93E-09 | 6.48E-09
2/3   | 336.751582  | 3.03E-07 | 5.62E-07
4     | 732.195843  | 2.66E-06 | 8.16E-06
5     | 1083.091322 | 1.60E-05 | 1.67E-03
6     | 1093.373316 | 1.66E-05 | 4.58E-05
7/8   | 1703.134131 | 5.76E-05 | 1.51E-04
9/10  | 2777.391326 | 2.39E-04 | 2.85E-03
11    | 3030.929515 | 5.27E-04 | 3.30E-03
12    | 3674.593069 | 4.38E-04 | 2.04E-02
13    | 3704.839141 | 5.37E-04 | 1.79E-03
14/15 | 5515.901661 | 2.88E-03 | 5.23E-02
16    | 6002.292169 | 1.01E-03 | 3.04E-02
17    | 6016.217086 | 1.66E-03 | 1.15E-02
18/19 | 7303.330418 | 2.53E-03 | 2.57E-02
20    | 8720.547076 | 9.14E-03 | 9.01E-02
21    | 9773.861784 | 9.26E-03 | 9.40E-02
22    | 9821.369621 | 8.47E-03 | 8.90E-02

References

[1] Bathe, K. J., 'Finite Element Procedures in Engineering Analysis', Prentice-Hall, Englewood Cliffs, 1982

[2] Bathe, K. J. and Wilson, E. L., 'Numerical Methods in Finite Element Analysis', Prentice-Hall, Englewood Cliffs, 1976

[3] Bouhaddi, N. and Fillod, R., 'Substructuring Using a Linearized Dynamic Condensation Method', Computers & Structures 45, pp. 679-683, 1992

[4] Bouhaddi, N. and Fillod, R., 'A Method for Selecting Master DOF in Dynamic Substructuring Using the Guyan Condensation Method', Computers & Structures 45, pp. 941-946, 1992

[5] Chen, S.-H. and Pan, H. H., 'Guyan Reduction', Comm. Appl. Numer. Meth. 4, pp. 549-556, 1988
[6] Cottle, R. W., 'Manifestations of the Schur Complement', Lin. Alg. Appl. 8, pp. 189-211, 1974

[7] Geradin, M., 'Error Bounds for Eigenvalue Analysis by Elimination of Variables', J. of Sound and Vibration 19, pp. 111-132, 1971

[8] Geradin, M. and Carnoy, E., 'On the Practical Use of Eigenvalue Bracketing in Finite Element Applications to Vibration and Stability Problems', in EUROMECH 112, Hungarian Academy of Sciences, Budapest, 1979, pp. 151-172

[9] Golub, G. H. and van Loan, C. F., 'Matrix Computations', 2nd edition, The Johns Hopkins University Press, Baltimore, 1989

[10] Guyan, R. J., 'Reduction of Stiffness and Mass Matrices', AIAA J. 3, p. 380, 1965

[11] Heath, M., Ng, E. and Peyton, B., 'Parallel Algorithms for Sparse Linear Systems', SIAM Review 33, pp. 420-460, 1991

[12] Henshell, R. D. and Ong, J. H., 'Automatic Masters for Eigenvalue Economization', Earthquake Engineering and Structural Dynamics 3, pp. 375-383, 1975

[13] Irons, B., 'Structural Eigenvalue Problems: Elimination of Unwanted Variables', AIAA J. 3, pp. 961-962, 1965

[14] Leung, Y. T., 'An Accurate Method of Dynamic Condensation in Structural Analysis', Internat. J. Numer. Meth. Engrg. 12, pp. 1705-1715, 1978

[15] Leung, Y. T., 'An Accurate Method of Dynamic Substructuring with Simplified Computation', Internat. J. Numer. Meth. Engrg. 14, pp. 1241-1256, 1979

[16] Matthies, H. G., 'Computable Error Bounds for the Generalized Symmetric Eigenvalue Problem', Comm. Appl. Numer. Meth. 1, pp. 33-38, 1985

[17] Noor, A. K., 'Recent Advances and Applications of Reduction Methods', to appear in Applied Mechanics Reviews

[18] Ortega, J. M. and Rheinboldt, W. C., 'Iterative Solution of Nonlinear Equations in Several Variables', Academic Press, New York, 1970
[19] Parlett, B. N., 'The Symmetric Eigenvalue Problem', Prentice-Hall, Englewood Cliffs, N.J., 1980

[20] Petersmann, N., 'Substrukturtechnik und Kondensation bei der Schwingungsanalyse', Fortschrittberichte VDI, Reihe 11: Schwingungstechnik, Nr. 76, VDI Verlag, Düsseldorf, 1986

[21] Rothe, K. and Voss, H., 'Improving Condensation Methods for Eigenvalue Problems via Rayleigh Functional', Comp. Meth. Appl. Mech. Engrg. 111, pp. 169-183, 1994

[22] Rothe, K. and Voss, H., 'Improved Dynamic Substructuring on Distributed Memory MIMD-Computers', in: Application of Supercomputers in Engineering III (Ed. Brebbia, C. A. and Power, H.), Computational Mechanics Publications, pp. 339-352, Elsevier, London, 1993

[23] Rothe, K. and Voss, H., 'A Fully Parallel Condensation Method for Generalized Eigenvalue Problems on Distributed Memory Computers', submitted to Parallel Computing

[24] Saad, Y., 'Numerical Methods for Large Eigenvalue Problems', Manchester University Press, Manchester, 1992

[25] Shah, V. N. and Raymund, M., 'Analytic Selection of Masters for the Reduced Eigenvalue Problem', Internat. J. Numer. Meth. Engrg. 18, pp. 89-98, 1982

[26] Thomas, D. L., 'Errors in Natural Frequency Calculations Using Eigenvalue Economization', Internat. J. Numer. Meth. Engrg. 18, pp. 1521-1527, 1982

[27] Voss, H., 'An Error Bound for Eigenvalue Analysis by Nodal Condensation', in: Numerical Treatment of Eigenvalue Problems 3 (Ed. Albrecht, J., Collatz, L. and Velte, W.), Internat. Series Numer. Math. 69, pp. 205-214, Birkhäuser, Stuttgart, 1983

[28] Voss, H. and Werner, B., 'A Minimax Principle for Nonlinear Eigenvalue Problems with Applications to Nonoverdamped Systems', Math. Meth. Appl. Sci. 4, pp. 415-422, 1982

[29] Wittrick, W. H. and Williams, F. W., 'A General Algorithm for Computing Natural Frequencies of Elastic Structures', Quart. J. Mech. Appl. Math. 24, pp. 263-284, 1971
[30] Wright, G. C. and Miles, G. A., 'An Economical Method for Determining the Smallest Eigenvalues of Large Linear Systems', Internat. J. Numer. Meth. Engrg. 3, pp. 25-33, 1971

[31] Zehn, M., 'Substruktur-/Superelementtechnik für die Eigenschwingungsberechnung dreidimensionaler Modelle', Technische Mechanik 4, pp. 56-63, 1983

[32] Zehn, M., 'Dynamische FEM-Strukturanalyse mit Substrukturtechnik', Technische Mechanik 9, pp. 245-253, 1988