Comparison of Radial Basis
Function Algorithms for
Convection Diffusion Equations
Artemis Nika
Keble College
University of Oxford
A thesis submitted for the degree of
Master of Science
2014
I dedicate this project to my wonderful parents who have always been by
my side.
Acknowledgements
There are several people who helped me throughout this project, and I would like to take a moment to thank them.
First and foremost, I would like to thank my family. They have been with me through the entirety of this project and have always supported me in the best way they could.
A special thank you goes to my supervisor, Dr. Kathryn Gillow, who has always provided her support and guidance, not only throughout the project but during the whole of this master's course.
Lastly, I want to thank all of my closest friends, who have always been there for me. Special thanks to Pavlos Georgiou, who has always stood by me and had the patience to proofread most of my reports.
Abstract
In this project, we implement three radial basis function (RBF) methods for solving convection-diffusion equations, aiming to compare them in terms of ease of implementation, accuracy, stability and efficiency. The methods we consider are collocation, a Galerkin formulation and generalised interpolation. Each method is implemented twice: once using one of Wendland's compactly supported RBFs and once using the Gaussian RBF. We perform numerical experiments in order to find how the scaling parameter δ affects the accuracy and conditioning of each method, whether convergence can be achieved by increasing the number of points N, and what the advantages and disadvantages of each RBF are.
We find that the Gaussian RBF allows us to obtain high accuracy but at the same time makes the method used unstable, and hence the choice of δ difficult. The method which displays the highest accuracy when using the Gaussian is generalised interpolation, except for the stiff 2D model problem, where collocation is superior. The compactly supported Wendland RBFs provide greater stability but reduced accuracy; the method which produces the most accurate results when used with them is collocation. Concerning the Galerkin method, we found it to be unstable and extremely inefficient because of the numerical integrations involved.
Finally, we give some possible extensions of the project. Specifically, we discuss possible ways to predict good values for the scaling parameter δ, an alternative to the uniform point distribution for stiff problems in one space dimension, and the ideas behind multiscale versions of the methods.
Contents

1 Introduction
  1.1 Aim of Thesis
  1.2 Radial Basis Functions
  1.3 Model Problems
  1.4 Measuring the Error
  1.5 Thesis Structure

2 Collocation
  2.1 Method Description
  2.2 One Dimension
    2.2.1 Scaling Parameter δ
    2.2.2 Varying N
    2.2.3 Distribution of Eigenvalues
  2.3 Two Dimensions
    2.3.1 Scaling Parameter δ
    2.3.2 Varying N
    2.3.3 Distribution of Eigenvalues
  2.4 Chapter Summary

3 Galerkin Formulation
  3.1 Galerkin Method for Robin Problems
  3.2 Galerkin Method for the Dirichlet Problem
  3.3 One Dimension
    3.3.1 Effect of δ
    3.3.2 Increasing the number of points N
    3.3.3 Eigenvalues
  3.4 Two Dimensions
    3.4.1 Effect of δ
    3.4.2 Increasing the number of points N
    3.4.3 Eigenvalues
  3.5 Chapter Summary

4 Generalised Interpolation
  4.1 Method Description
  4.2 Stability and Accuracy
  4.3 One Dimension
    4.3.1 Effect of δ
    4.3.2 Varying N
    4.3.3 Eigenvalues
  4.4 Two Dimensions
    4.4.1 Effect of δ
    4.4.2 Varying N
    4.4.3 Eigenvalues
  4.5 Chapter Summary

5 Method Comparison
  5.1 Ease of Implementation
  5.2 Accuracy and Stability
    5.2.1 One Dimension
    5.2.2 Two Dimensions
  5.3 Efficiency
  5.4 Conclusion

6 Further Work
  6.1 Extension: Choice of the Scaling Parameter
  6.2 Extension: Point Distribution
  6.3 Extension: Multilevel Algorithms

A Collocation
  A.1 1D
  A.2 2D

B Galerkin Formulation
  B.1 1D
  B.2 2D

C Generalised Interpolation
  C.1 1D
  C.2 2D

D Comparison
  D.1 Accuracy Comparison
    D.1.1 1D
    D.1.2 2D
  D.2 Conditioning Comparison
    D.2.1 1D
    D.2.2 2D
  D.3 Efficiency Comparison
    D.3.1 1D
    D.3.2 2D

E Further Work
  E.1 Choice of Scaling Parameter
  E.2 Point Distribution
    E.2.1 Collocation
    E.2.2 Galerkin Formulation
    E.2.3 Generalised Interpolation

F MATLAB code
  F.1 Collocation
    F.1.1 Coll_1D.m
    F.1.2 Coll_2D.m
  F.2 Galerkin Formulation
    F.2.1 Gal_1D.m
    F.2.2 Gal_2D.m
  F.3 Generalised Interpolation
    F.3.1 GenInter_1D.m
    F.3.2 GenInter_2D.m

Bibliography
List of Figures

1.1 Numerical solutions obtained using finite element methods.
1.2 Radial basis functions, with centre x_j = 4/9, for different values of δ.
1.3 Exact solutions for one dimensional model problem.
1.4 Exact solutions for two dimensional model problem.
2.1 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 64, using the Wendland(1D) RBF. The minimum error is 5.6 × 10^{-3}, for δ = 5 with condition number 2.27 × 10^{10}.
2.2 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 64, using the Gaussian RBF. The minimum error is 6.7 × 10^{-4}, for δ = 0.12 with condition number 4.22 × 10^{17}.
2.3 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 16, using the Wendland(1D) RBF. The minimum error is 1.7 × 10^{-4}, for δ = 5 with condition number 4.81 × 10^{5}.
2.4 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 16, using the Gaussian RBF. The minimum error is 2.45 × 10^{-7}, for δ = 0.7 with condition number 3.31 × 10^{17}.
2.5 Log of the error versus N for ε = 0.01.
2.6 Eigenvalue plots for ε = 0.01 and N = 64.
2.7 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Wendland(2D) RBF. The minimum error is 1.5 × 10^{-2}, for δ = 5 with condition number 8.01 × 10^{11}.
2.8 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Gaussian RBF. The minimum error is 2.9 × 10^{-2}, for δ = 0.07 with condition number 1.09 × 10^{9}.
2.9 Eigenvalue plots for ε = 0.5 and N = 256.
2.10 Numerical solutions for 1D model problem for ε = 0.01 and N = 64.
2.11 Numerical solutions and pointwise error for 2D model problem for ε = 0.01 and N = 1024.
3.1 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 64, using the Wendland(1D) RBF. The minimum error is 5.6 × 10^{-3}, for δ = 0.19 with condition number 3.64 × 10^{10}.
3.2 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 64, using the Gaussian RBF. The minimum error is 1.1 × 10^{-3}, for δ = 0.05 with condition number 2.62 × 10^{17}.
3.3 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 16, using the Wendland(1D) RBF. The minimum error is 3.0 × 10^{-5}, for δ = 4.1 with condition number 4.43 × 10^{16}.
3.4 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 16, using the Gaussian RBF. The minimum error is 5.61 × 10^{-6}, for δ = 0.36 with condition number 3.54 × 10^{17}.
3.5 Log of L2 norm of the error versus N for ε = 0.01.
3.6 Eigenvalue plots for ε = 0.01 and N = 64.
3.7 Eigenvalue plots for ε = 0.5 and N = 32.
3.8 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Wendland(2D) RBF. The minimum error is 2.95 × 10^{-1}, for δ = 0.1 with condition number 5.32 × 10^{19}.
3.9 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Gaussian RBF. The minimum error is 1.42, for δ = 0.1 with condition number 9.81 × 10^{19}.
3.10 Eigenvalue distribution for ε = 0.01 using the Wendland(2D) RBF with N = 64.
3.11 Eigenvalue distribution for ε = 0.01 using the Gaussian RBF with N = 64.
3.12 Numerical solutions for 1D model problem for ε = 0.01 and N = 64.
3.13 Numerical solutions and pointwise error for 2D model problem for ε = 0.01 and N = 1024.
4.1 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 64, using the Wendland(1D) RBF. The minimum error is 2.51 × 10^{-1}, for δ = 5 with condition number 2.85 × 10^{9}.
4.2 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 64, using the Gaussian RBF. The minimum error is 8.40 × 10^{-4}, for δ = 0.09 with condition number 4.97 × 10^{17}.
4.3 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 16, using the Wendland(1D) RBF. The minimum error is 1.14 × 10^{-4}, for δ = 5 with condition number 597.
4.4 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 16, using the Gaussian RBF. The minimum error is 1.36 × 10^{-8}, for δ = 0.86 with condition number 5.15 × 10^{17}.
4.5 Log of the error versus N for ε = 0.01.
4.6 Log of the error versus N for ε = 0.5, using the Wendland(1D) with δ = 15h^{1-2/σ}.
4.7 Eigenvalue distributions for ε = 0.01.
4.8 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Wendland(2D) RBF. The minimum error is 2.27 × 10^{-1}, for δ = 5 with condition number 3.48 × 10^{12}.
4.9 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Gaussian RBF. The minimum error is 1.45 × 10^{-1}, for δ = 0.25 with condition number 4.63 × 10^{22}.
4.10 Log of the error versus N for ε = 0.5.
4.11 Eigenvalue distributions for ε = 0.01 with N = 256.
4.12 Numerical solutions for 1D model problem for ε = 0.01 and N = 64.
4.13 Numerical solutions and pointwise error for 2D model problem for ε = 0.01 and N = 1024.
5.1 Execution times of methods for the 2D model problem for fixed ε and δ.
6.1 Log of exact and predicted errors versus δ with N = 64 and ε = 0.01. RBF method used is Collocation.
6.2 Shishkin mesh for ε = 0.01 and ε = 0.5 using N = 27 points.
A.1 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 64, using the Wendland(2D) RBF. Minimum error is 0.002637, for δ = 5 with condition number 2.78 × 10^{7}.
A.2 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 64, using the Gaussian RBF. Minimum error is 0.000091, for δ = 1.62 with condition number 4.95 × 10^{18}.
B.1 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 64, using the Wendland(2D) RBF. Minimum error is 0.003078, for δ = 0.1 with condition number 7.21 × 10^{17}.
B.2 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 64, using the Gaussian RBF. Minimum error is 0.006890, for δ = 1 with condition number 6.23 × 10^{11}.
C.1 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 64, using the Wendland(2D) RBF. Minimum error is 0.006173, for δ = 5 with condition number 7.08 × 10^{7}.
C.2 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 64, using the Gaussian RBF. Minimum error is 0.000042, for δ = 1.65 with condition number 7.99 × 10^{17}.
Chapter 1
Introduction
Radial Basis Function (RBF) methods trace their origins back to Hardy’s multi-
quadric (MQ) method [6]. This method was developed by Hardy as a way to obtain a
continuous function that would accurately describe the morphology of a geographical
surface. The motivation came from the fact that the methods already existing at the time, like Fourier and polynomial series approximations, provided neither acceptable accuracy nor efficiency [6], [24]. Moreover, obtaining the desired result as a continuous function meant that analytical techniques from calculus and geometry could be utilised to provide useful results, e.g. the height of the highest hill, unobstructed lines of sight, volumes of earth and others [6].
In mathematical terms, the problem Hardy was trying to solve can be stated as: given a set of distinct points X = {x_1, ..., x_n} ⊂ R^d with corresponding values {f_1, ..., f_n}, find a continuous function s(x) that satisfies the given value at each point, i.e., s(x_i) = f_i for i = 1, ..., n. Hardy [6], following a trial and error approach, constructed the approximate solution by taking a linear combination of the multiquadric functions

$$q_j(x) = \sqrt{c^2 + \|x - x_j\|_2^2}, \qquad j = 1, \dots, n, \qquad (1.1)$$

where each of these functions is centred about one of the points in our set. The problem therefore reduces to finding the coefficients C_j such that

$$s(x_i) = \sum_{j=1}^{n} C_j q_j(x_i) = f_i, \qquad i = 1, \dots, n, \qquad (1.2)$$
which involves nothing more than solving a system of linear equations. It has been
proved that the system of linear equations resulting from the MQ method is always
nonsingular, see for example, Micchelli [12].
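As an illustration of how little machinery (1.2) requires, the following MATLAB sketch (illustrative only, with made-up 1D data; it is not code from the thesis) assembles the multiquadric system and solves it:

```matlab
% Multiquadric interpolation (1.1)-(1.2) for scattered 1D data.
x = [0; 0.2; 0.5; 0.7; 1];                % distinct interpolation points x_i
f = sin(2*pi*x);                          % corresponding values f_i
c = 0.3;                                  % multiquadric shape constant
Q = sqrt(c^2 + (x - x').^2);              % Q(i,j) = q_j(x_i)
C = Q \ f;                                % coefficients C_j from (1.2)
s = @(xx) sqrt(c^2 + (xx - x').^2) * C;   % continuous interpolant s(x)
s(0.35)                                   % evaluate anywhere
```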
The MQ method, which has found applications in areas other than topography,
has also been used in order to numerically approximate the solution to partial dif-
ferential equations. In fact, Kansa [9] found, after performing a series of numerical
experiments, that the MQ method in many cases outperformed finite difference meth-
ods, providing a more accurate solution while using a smaller number of points and
without having to create a mesh.
After Micchelli [12] provided the conditions under which the resulting linear system of Hardy's method is nonsingular, it was realised that functions other than (1.1) could also be used with the method. The common characteristic of these functions is that their value depends only on the distance from a chosen centre, i.e., they are radially symmetric with respect to that centre. Such functions are now widely known as radial basis functions (RBFs).
1.1 Aim of Thesis
The motivation for looking into radial basis function methods for numerically solv-
ing partial differential equations (PDEs) comes from the fact that standard methods,
like finite differences or finite elements, can easily become computationally expensive.
This is a direct result of these methods' need for a mesh, and of the usually large number of points required to obtain acceptable accuracy. The need for a very small stepsize is most evident when the solution to be approximated has stiff regions. For example, in Figure 1.1 we see the finite element solution on a uniform mesh to both a 1D and a 2D convection-dominated diffusion problem, where we get oscillations in the boundary layer. Moreover, the methods become increasingly complex as we consider problems in higher dimensions.

Figure 1.1: Numerical solutions obtained using finite element methods.
Perhaps the greatest advantage of RBF methods is that, in contrast to finite elements and finite differences, they offer mesh-free approximation. Equally important is the ease with which these methods generalise to higher dimensions: an RBF's value depends only on the Euclidean distance between a point and the centre of the RBF, and therefore, in most cases, the same function can be used to solve problems in any dimension.
These two main characteristics of RBFs captured the attention of mathematicians. Since Hardy's MQ method was first introduced, several algorithms utilising different RBFs have been developed, each with its own merits and faults.
An ideal algorithm would combine ease of implementation with high accuracy and efficiency. The aim of this thesis is the implementation and comparison, in terms of accuracy and conditioning of the resulting linear systems, of three different radial basis function algorithms for solving convection-diffusion equations in one and two dimensions. The methods we consider are collocation [8], [9], a Galerkin formulation method [1], [3], [11], [21] and generalised interpolation [4]. We will implement each of these methods using two different types of RBFs: an infinitely differentiable one and two compactly supported functions.
1.2 Radial Basis Functions
As mentioned in the previous section, throughout this project we will make use of
different RBFs when implementing our methods. We consider the infinitely differen-
tiable Gaussian function and two of Wendland’s [19] compactly supported functions
for one and two dimensions. These functions are available in Table 1.1.
Type of RBF      φ(r), r ≥ 0                      Smoothness
Gaussian         exp(-r^2)                        C^∞
Wendland (1D)    (1 - r)_+^5 (8r^2 + 5r + 1)      C^4
Wendland (2D)    (1 - r)_+^6 (35r^2 + 18r + 3)    C^4

Table 1.1: Table of radial basis functions.
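To make the definitions concrete, the functions of Table 1.1 can be written as one-line MATLAB handles of the scaled distance r; this is an illustrative sketch rather than code from Appendix F, with max(1 - r, 0) implementing the cut-off (1 - r)_+.

```matlab
% The three RBFs of Table 1.1 as functions of r = ||x - x_j||_2/delta.
gaussian   = @(r) exp(-r.^2);
wendland1d = @(r) max(1-r,0).^5 .* (8*r.^2 + 5*r + 1);
wendland2d = @(r) max(1-r,0).^6 .* (35*r.^2 + 18*r + 3);

% Example: one curve in the spirit of Figure 1.2 (centre 4/9, delta = 0.2).
delta = 0.2; xj = 4/9;
x = linspace(0,1,201);
plot(x, wendland1d(abs(x - xj)/delta));
```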
We take as the input r for our RBF the Euclidean distance of a point from the centre of the function, scaled by a scaling parameter δ ∈ R, that is,

$$r = \frac{\|x - x_j\|_2}{\delta}, \qquad j = 1, \dots, n. \qquad (1.3)$$
Figure 1.2: Radial basis functions, with centre x_j = 4/9, for different values of δ. (a) Gaussian; (b) Wendland (1D); (c) Wendland (2D).
The parameter δ affects the shape of the RBF, as well as the accuracy of the method.
Observing Figure 1.2 we can see that as we increase the scaling parameter δ the
Gaussian RBF flattens out while the support radius of the two Wendland functions
increases. We expect that increasing δ will result in larger condition numbers for our
systems, especially for the Gaussian function which is not compactly supported.
A slight advantage of the Gaussian RBF is that it can be used for approximations in any dimension, whereas a different compactly supported Wendland function is needed for each dimension. The reason the same Wendland function should not be used in every dimension is that the positive definiteness of compactly supported functions, such as the Wendland RBFs, depends on the dimension of the space we are working in, as stated in [19]. A positive definite function results in a positive definite linear system for (1.2), which in turn implies that we can always find a unique solution, as all its eigenvalues are positive.
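This positivity is easy to observe numerically. The following check (illustrative, not from the thesis) builds the interpolation matrix for the Wendland(1D) function and inspects its smallest eigenvalue:

```matlab
% The interpolation matrix A(i,j) = phi(|x_i - x_j|/delta) for the
% Wendland(1D) RBF is symmetric positive definite for distinct 1D points.
phi = @(r) max(1-r,0).^5 .* (8*r.^2 + 5*r + 1);
x = linspace(0,1,20)';
delta = 0.3;
A = phi(abs(x - x')/delta);
min(eig(A))        % strictly positive, so the system is uniquely solvable
```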
1.3 Model Problems
In this section we give the model problems on which we will test the different RBF algorithms. Our 1D convection-diffusion equation with Dirichlet boundary conditions
is given by

$$-\varepsilon u'' + u' = 0, \quad 0 < x < 1, \qquad u(0) = 1, \quad u(1) = 0, \qquad (1.4)$$

where ε > 0 is the diffusion coefficient. The exact solution for this problem is

$$u(x) = \frac{1 - \exp(-(1 - x)/\varepsilon)}{1 - \exp(-1/\varepsilon)}. \qquad (1.5)$$

Figure 1.3: Exact solutions for the one dimensional model problem. (a) ε = 0.01; (b) ε = 0.5.
We will also consider a two dimensional problem, again with Dirichlet boundary conditions. It is given by

$$-\varepsilon \nabla^2 u + (1, 2) \cdot \nabla u = 0, \quad (x, y) \in \Omega,$$
$$u(1, y) = 0, \quad u(x, 1) = 0, \qquad x, y \in [0, 1],$$
$$u(0, y) = \frac{1 - \exp(-2(1 - y)/\varepsilon)}{1 - \exp(-2/\varepsilon)}, \qquad y \in [0, 1],$$
$$u(x, 0) = \frac{1 - \exp(-(1 - x)/\varepsilon)}{1 - \exp(-1/\varepsilon)}, \qquad x \in [0, 1], \qquad (1.6)$$

where again ε > 0 is the diffusion coefficient and Ω = (0, 1)^2. The exact solution of the problem is

$$u(x, y) = \frac{1 - \exp(-(1 - x)/\varepsilon)}{1 - \exp(-1/\varepsilon)} \cdot \frac{1 - \exp(-2(1 - y)/\varepsilon)}{1 - \exp(-2/\varepsilon)}. \qquad (1.7)$$
In both cases, varying ε affects how stiff the solution is and hence how easy it is to construct an approximation to it: for small values of ε the solution becomes stiff, whereas for values close to 1 the gradient of the solution is not as steep. We will perform our numerical experiments for two values of ε, namely ε = 0.01, for which the solutions are stiff, and ε = 0.5, for which the solutions are better behaved, see Figures 1.3 and 1.4.
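For reference (a direct transcription of the formulas above, useful when measuring errors later), the exact solutions translate into MATLAB function handles as:

```matlab
% Exact solutions (1.5) and (1.7); ep is the diffusion coefficient.
u1 = @(x,ep)   (1 - exp(-(1-x)/ep)) ./ (1 - exp(-1/ep));
u2 = @(x,y,ep) u1(x,ep) .* (1 - exp(-2*(1-y)/ep)) ./ (1 - exp(-2/ep));
```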
Figure 1.4: Exact solutions for the two dimensional model problem. (a) ε = 0.01; (b) ε = 0.5.
1.4 Measuring the Error
In order to be able to compare the methods effectively we need to quantify the error
between the exact and numerical solution. Since the solutions to our model problems
are known, measuring the error is relatively simple.
We will make use of the L2 norm of the error, where for the one dimensional case we have

$$\|u - s\|_{L^2} = \left( \sum_{i=2}^{m} \int_{x_{i-1}}^{x_i} (u - s)^2 \, dx \right)^{1/2} \approx \frac{1}{\sqrt{m - 1}}\, \|u - s\|_2, \qquad (1.8)$$

where we have used the trapezium rule with m points in order to evaluate the integral for the L2 norm, and u and s are vectors of the exact and numerical solutions at each of the m points. A similar expression can be obtained for two dimensional problems. Note that, since the numerical solutions resulting from RBF algorithms are continuous, the same number m of evaluation points can be used for the trapezium rule even for approximations that used different numbers of nodes.
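As a sketch (illustrative, not the thesis code), the right-hand side of (1.8) is a one-liner once both solutions have been sampled on a fine uniform grid:

```matlab
% Discrete L2 error (1.8) on m uniform evaluation points in [0,1].
m  = 1001;
x  = linspace(0,1,m)';
ep = 0.01;
u  = (1 - exp(-(1-x)/ep)) ./ (1 - exp(-1/ep));   % exact solution (1.5)
s  = u + 1e-3*sin(pi*x);                         % stand-in numerical solution
errL2 = norm(u - s)/sqrt(m - 1);                 % approximates ||u - s||_{L2}
```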
1.5 Thesis Structure
The structure for the rest of this project is as follows. Chapters 2 to 4 contain the description and implementation details, as well as some background theory where relevant, for each of the three RBF methods. In each of these chapters we investigate the effect of choosing different RBFs to generate our basis functions, and also the effect of varying δ. We specifically look into the accuracy and stability of each method for either choice of ε, i.e., for stiff and non-stiff problems. The data gathered for each method can be found in the corresponding appendix. In Chapter 5, we make a general comparison between the methods, focusing on the ease of implementation, the accuracy, the stability and also the efficiency of each one of them. Again, the tables containing all relevant data in detail can be found in the appropriate appendix. Finally, in Chapter 6 we present the reader with ideas about possible extensions of this project, as well as some further work. The MATLAB files containing the implementation of the methods can be found in Appendix F.
Chapter 2
Collocation
2.1 Method Description
Collocation, which was first introduced by Kansa [8], [9], is perhaps the most straightforward of the three methods to be discussed. In order to demonstrate how collocation works, let us first consider the following general convection-diffusion equation,

$$Lu = -\varepsilon \nabla^2 u + b \cdot \nabla u + cu = f \quad \text{in } \Omega \subseteq \mathbb{R}^d, \qquad u = g_D \quad \text{on } \partial\Omega, \qquad (2.1)$$

where ε > 0 is the diffusion coefficient and b ∈ R^d.
The first step of this method is to consider a set of basis functions

$$\Phi_j(x) = \phi\left( \frac{\|x - x_j\|_2}{\delta} \right), \qquad j = 1, \dots, N, \qquad (2.2)$$

where N is the total number of points we are using and j indicates which node we are centring around. Note that we are using Φ_j as a function with a vector argument and φ as a function with a scalar argument. The points used are known as collocation points. We then form our approximate solution by taking a linear combination of our basis functions, that is,

$$s(x) = \sum_{j=1}^{N} C_j \Phi_j(x). \qquad (2.3)$$

Substituting s(x) back into our equation and boundary conditions and evaluating at each of our N points gives

$$\sum_{j=1}^{N} C_j \left( -\varepsilon \nabla^2 \Phi_j(x_i) + b \cdot \nabla \Phi_j(x_i) + c\,\Phi_j(x_i) \right) = f(x_i), \qquad i = 1, \dots, N^*,$$
$$\sum_{j=1}^{N} C_j \, \Phi_j(x_i) = g_D(x_i), \qquad i = N^* + 1, \dots, N, \qquad (2.4)$$
where points 1 to N^* are located within the domain and points N^* + 1 to N lie on the boundary. For comparison purposes, we will only consider uniform point distributions for all methods in this project. The system of equations (2.4) can be written as a matrix equation of the form

$$AC = F, \qquad (2.5)$$

where

$$A = \begin{pmatrix} L\Phi \\ \Phi \end{pmatrix}, \qquad C = \begin{pmatrix} C_1 \\ \vdots \\ C_N \end{pmatrix}, \qquad F = \begin{pmatrix} f \\ g_D \end{pmatrix}, \qquad (2.6)$$

with the block LΦ holding the entries LΦ_j(x_i) at the interior points and the block Φ holding the entries Φ_j(x_i) at the boundary points.
The matrix A has dimensions N × N and F, C are N dimensional vectors. The method is also known as unsymmetric collocation [7] because of the non-symmetric collocation matrix A; note that A is not symmetric even if the PDE is self-adjoint. In order to obtain the unknown coefficients C_j we must solve Equation (2.5) for C. It is therefore important that the collocation matrix is nonsingular for this method to work.
In the case of a simple interpolation problem, see (1.2), this approach may sometimes yield singular matrices for specific RBFs [7], like thin plate splines, for which φ(r) = r^2 log(r). This can be fixed by adding an extra polynomial to (2.3) and imposing additional constraints on the coefficients C_j in order to eliminate the additional degrees of freedom. However, this remedy has been shown not to carry over to elliptic problems: Hon and Schaback [7] managed to construct examples where the collocation matrix becomes singular, whether or not the appropriate extra polynomial is added.
In particular, Hon and Schaback [7] managed to find, relatively easily, cases where using the Gaussian RBF results in a singular collocation matrix. Numerical experiments were performed with Wendland functions as well, but a singular collocation matrix was not found; even so, Hon and Schaback do not believe that using the compactly supported Wendland functions will always result in a nonsingular system of equations. This suggests that we should probably expect better conditioning when using the Wendland RBFs compared to the Gaussian. We also note that we will not be considering any additional polynomial terms in our approximate solution because, as stated in [10] and [2], they do not offer any significant benefits with regards to the accuracy of the method.
In the following sections of this chapter we will apply the method to our model problems in one and two dimensions, for different values of ε, using the Wendland and Gaussian RBFs.
          Gaussian                 Wendland (1D)                      Wendland (2D)
φ(r)      exp(-r^2)                (1 - r)_+^5 (8r^2 + 5r + 1)        (1 - r)_+^6 (35r^2 + 18r + 3)
φ'(r)     -2r exp(-r^2)            (1 - r)_+^4 (-56r^2 - 14r)         (1 - r)_+^5 (-280r^2 - 56r)
φ''(r)    (-2 + 4r^2) exp(-r^2)    (1 - r)_+^3 (336r^2 - 42r - 14)    (1 - r)_+^4 (1960r^2 - 224r - 56)

Table 2.1: Derivatives of RBFs.
The aim is to look into which of the two RBFs performs better in terms of conditioning of the collocation matrix and accuracy of the solution. Also of interest is how the value of the scaling parameter δ affects the method.
2.2 One Dimension
Having explained how the collocation method works, we will now apply it to our 1D
problem (1.4). Prior to coding the method in MATLAB, we must first calculate the
first and second derivatives of Φ_j with respect to x, that is,

$$\frac{d\Phi_j}{dx} = \begin{cases} \dfrac{1}{\delta}\dfrac{d\phi}{dr}, & x > x_j, \\ -\dfrac{1}{\delta}\dfrac{d\phi}{dr}, & x < x_j, \end{cases} \qquad \frac{d^2\Phi_j}{dx^2} = \frac{1}{\delta^2}\frac{d^2\phi}{dr^2}, \qquad (2.7)$$
where the derivatives of φ with respect to r can be found in Table 2.1. The translation
of the collocation method to a computer program is relatively simple, which is one of
the reasons why this method became popular.
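To illustrate, the following sketch (a simplified stand-in for the code of Appendix F, Section F.1, not the thesis code itself) assembles and solves the collocation system (2.4)-(2.6) for the 1D problem (1.4) with the Gaussian RBF, using (2.7) and the derivatives of Table 2.1:

```matlab
% Unsymmetric collocation for -ep*u'' + u' = 0, u(0) = 1, u(1) = 0.
N = 64; ep = 0.5; delta = 0.7;
x = linspace(0,1,N)';                          % uniform collocation points
D = x - x';                                    % signed distances x_i - x_j
R = abs(D)/delta;                              % scaled distances r
Phi   = exp(-R.^2);                            % Gaussian RBF values
dPhi  = sign(D)/delta .* (-2*R.*exp(-R.^2));   % dPhi_j/dx, via (2.7)
d2Phi = (4*R.^2 - 2).*exp(-R.^2)/delta^2;      % d2Phi_j/dx2, via (2.7)
A = -ep*d2Phi + dPhi;                          % interior rows: apply L
A([1 N],:) = Phi([1 N],:);                     % boundary rows: enforce u = g_D
F = zeros(N,1); F(1) = 1;                      % f = 0 inside; u(0)=1, u(1)=0
C = A \ F;                                     % coefficients C_j
s = @(xx) exp(-((xx - x')/delta).^2) * C;      % continuous approximation (2.3)
```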
2.2.1 Scaling Parameter δ
The scaling parameter δ is known to affect the accuracy of RBF based methods. Figures 2.1 and 2.2 show how the error and condition number are affected when we vary δ, for the Wendland(1D) and Gaussian RBFs respectively, with ε = 0.01. The Wendland(1D) RBF seems to provide us with exponential convergence as we increase δ. However, as the error decreases, the condition number of the method increases. This implies that as we increase δ the method becomes unstable due to the ill-conditioning of the collocation matrix. Something similar has also been observed for plain interpolation problems, where, as mentioned in Schaback [15], 'either one goes for a small error and gets a bad sensitivity, or one wants a stable algorithm and has to take a comparably larger error'. As far as the Wendland(1D) case of the method is concerned, for ε = 0.01, we observe that after some point increasing δ any further does not have a significant impact on the accuracy while it still affects the condition number. This suggests that it might be worth sacrificing a bit of accuracy for a more stable method. Changing the number of points does not affect the behaviour observed in Figure 2.1.
(a) Log of L2 norm of the error ver-
sus δ.
(b) Log of condition number of the
collocation matrix versus δ.
Figure 2.1: Log of the error and condition number versus the scaling parameter δ
for = 0.01 and N = 64, using the Wendland(1D) RBF. The minimum error is
5.6 × 10−3
, for δ = 5 with condition number 2.27 × 1010
.
Figure 2.2: Log of the error (a) and condition number of the collocation matrix (b) versus the scaling parameter δ for ε = 0.01 and N = 64, using the Gaussian RBF. The minimum error is 6.7 × 10^{-4}, for δ = 0.12 with condition number 4.22 × 10^{17}.
The Gaussian RBF, for ε = 0.01, causes more erratic behaviour of the error and the condition number. In contrast with the Wendland(1D) RBF, the range of δ values we can use is very limited and the condition number is generally a lot larger. This might be seen as a disadvantage of the Gaussian for this type of problem; however, for particular values of δ we do get a better approximation compared to Wendland(1D). We note here that even though the Wendland(1D) might outperform the Gaussian for some values of N, see Tables A.1 and A.2, we can usually obtain a more accurate solution using the Gaussian RBF, at the cost, though, of an unstable method. Again, even if we change the number of points used we still obtain similar plots.
Figure 2.3: Log of the error (a) and condition number of the collocation matrix (b) versus the scaling parameter δ for ε = 0.5 and N = 16, using the Wendland(1D) RBF. The minimum error is 1.7 × 10^{-4}, for δ = 5 with condition number 4.81 × 10^{5}.
Figure 2.4: Log of the error (a) and condition number of the collocation matrix (b) versus the scaling parameter δ for ε = 0.5 and N = 16, using the Gaussian RBF. The minimum error is 2.45 × 10^{-7}, for δ = 0.7 with condition number 3.31 × 10^{17}.
Now, for the case ε = 0.5, we can obtain a very good approximation using a significantly smaller number of points than we used for ε = 0.01, see Tables A.3 and A.4. The Wendland(1D) still provides us with exponential convergence, see Figure 2.3, but this time the line does not flatten out as quickly as in the ε = 0.01 case. We can improve the accuracy, with the side effect of also increasing the condition number, by considering an even larger value of δ; for example, for δ = 20 the error reduces to 8.73 × 10^{-5}. However, increasing δ means we also increase the support radius of our RBF, which in a way defeats the purpose of using a compactly supported function. The Gaussian, on the other hand, allows us to use a slightly bigger range of δ values this time, see Figure 2.4, and outperforms the Wendland(1D) for values of N at least up to 128, see Tables A.3 and A.4.
Figure 2.5: Log of the error versus N for ε = 0.01. (a) Gaussian RBF, with δ = 5h: the minimum error is 2.0 × 10^{-6}, for N = 512 with condition number 1.1 × 10^{18}. (b) Wendland RBF, with δ = 0.25: the minimum error is 2.5 × 10^{-5}, for N = 512 with condition number 1.1 × 10^{8}.
We can use a bigger range of δ values because we are using fewer points, which means that the condition number starts off somewhat smaller.
2.2.2 Varying N
We have seen in the previous section that choosing an appropriate δ can lead to a more accurate solution without increasing the operation count of the method, with the Wendland(1D) RBF requiring larger δ values than the Gaussian. Another way to improve the accuracy is to use more points, even though in the case of the Gaussian this is not always beneficial, see Table A.4.
Keeping δ fixed while increasing the number of points does not lead to a convergent method for the Gaussian, because the behaviour of the error as we vary δ is erratic, as we have seen before. The Wendland(1D), however, will cause the method to converge as we increase N for most values of δ, even though it still may not achieve the higher level of accuracy of the Gaussian. For ε = 0.01 we can get convergence out of the Gaussian by setting δ = ch, where c is a constant, for some values of c, see Figure 2.5. After trying out a few values for c we found that c = 5 produces satisfactory results, but since convergence is not guaranteed for every c this is not an ideal setting. The same choice of δ can be used with the Wendland(1D) for large values of c; however, numerical experiments suggest that it is more beneficial to keep δ fixed rather than have it proportional to the meshsize h. Setting δ = ch for the Wendland RBFs is similar to what happens in finite element methods, where the support radius of each basis function is proportional to the meshsize. It is sometimes called the stationary setting, as the bandwidth of the matrix A stays fixed as the meshsize decreases.
Figure 2.6: Eigenvalue plots for ε = 0.01 and N = 64. (a) Gaussian with δ = 0.25; (b) Wendland(1D) with δ = 0.25.
For the case ε = 0.5, choosing δ = ch does not lead to convergence for either of the RBFs. However, we can obtain convergence of the method, for the Wendland(1D) only, if we keep δ fixed while increasing N.
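A convergence study of this kind reduces to a simple driver loop. The sketch below is hypothetical: the wrapper solve_coll_1d, assumed to run a collocation solve and return the L2 error, is introduced purely for illustration and is not the interface of the Coll_1D.m code in Appendix F.

```matlab
% Hypothetical driver for the stationary setting delta = c*h, ep = 0.01.
c = 5; Ns = [16 32 64 128 256 512];
errs = zeros(size(Ns));
for k = 1:numel(Ns)
    h = 1/(Ns(k) - 1);                          % uniform meshsize
    errs(k) = solve_coll_1d(Ns(k), c*h, 0.01);  % assumed: returns ||u - s||_{L2}
end
loglog(Ns, errs, 'o-'); xlabel('N'); ylabel('L^2 error');
```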
Therefore, while the Gaussian in many cases yields a better approximation, the
fact that the range of δ values we can consider is very limited, coupled with the ill-
conditioning of the resulting linear systems, is a significant disadvantage. Equally
unsatisfactory is the fact that it is difficult to obtain a convergent scheme when we
use the Gaussian.
2.2.3 Distribution of Eigenvalues
For most combinations of N and δ the method will produce a collocation matrix with complex eigenvalues for both RBFs, see Figure 2.6. The only exception we have found is for ε = 0.5 using a small number of points, where we only have real eigenvalues, see Tables A.3 and A.4. In general, when using the Gaussian, some of the eigenvalues are clustered around 0 and some have a large modulus. The Wendland(1D) usually produces eigenvalues which are better distributed, hence the smaller condition numbers. Looking at Figure 2.6, at first it appears that there are fewer eigenvalues when the Gaussian is used; in reality, the majority of the eigenvalues are extremely close to zero and hence cannot be seen without zooming in multiple times. This phenomenon is not as pronounced in the Wendland(1D) case, and this is what we mean by saying it produces better distributed eigenvalues.
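For reference, plots like Figure 2.6 show the spectrum of the collocation matrix; a minimal sketch, assuming A is the N × N collocation matrix assembled as in the earlier sketch:

```matlab
ev = eig(A);                        % full spectrum of the collocation matrix
plot(real(ev), imag(ev), 'x');
xlabel('Re(\lambda)'); ylabel('Im(\lambda)');
```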
Figure 2.7: Log of the error (a) and condition number of the collocation matrix (b) versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Wendland(2D) RBF. The minimum error is 1.5 × 10^{-2}, for δ = 5 with condition number 8.01 × 10^{11}.
2.3 Two Dimensions
As for the one dimensional case, before proceeding to the coding stage, we must first calculate the derivatives required for problem (1.6), that is,

$$\frac{\partial \Phi_j}{\partial x} = \frac{x - x_j}{\delta^2 r}\frac{d\phi}{dr}, \qquad \frac{\partial \Phi_j}{\partial y} = \frac{y - y_j}{\delta^2 r}\frac{d\phi}{dr}, \qquad (2.8)$$

and

$$\frac{\partial^2 \Phi_j}{\partial x^2} = \frac{1}{\delta^2 r}\frac{d\phi}{dr} + \frac{(x - x_j)^2}{\delta^4 r}\left( -\frac{1}{r^2}\frac{d\phi}{dr} + \frac{1}{r}\frac{d^2\phi}{dr^2} \right),$$
$$\frac{\partial^2 \Phi_j}{\partial y^2} = \frac{1}{\delta^2 r}\frac{d\phi}{dr} + \frac{(y - y_j)^2}{\delta^4 r}\left( -\frac{1}{r^2}\frac{d\phi}{dr} + \frac{1}{r}\frac{d^2\phi}{dr^2} \right), \qquad (2.9)$$

where r is given by (1.3), x = (x, y), and the derivatives of φ with respect to r can be found in Table 2.1. We need to be careful to simplify the expressions for the derivatives before implementing them in MATLAB in order to avoid divisions by zero; after some algebra we can write the Laplacian of Φ_j as

$$\nabla^2 \Phi_j = \frac{1}{\delta^2 r}\frac{d\phi}{dr} + \frac{1}{\delta^2}\frac{d^2\phi}{dr^2}, \qquad (2.10)$$

where the 1/r in the first term can always be cancelled with the factor of r in dφ/dr.
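As a concrete instance (illustrative, for the Gaussian RBF), the simplification in (2.10) removes the explicit division by r, since dφ/dr = -2r exp(-r^2) carries its own factor of r:

```matlab
% Laplacian (2.10) for the Gaussian: phi'(r)/r = -2*exp(-r^2), so the
% expression below is safe to evaluate even at r = 0.
lapPhi = @(r,delta) (-2*exp(-r.^2) + (4*r.^2 - 2).*exp(-r.^2))/delta^2;
% equivalently: (4*r.^2 - 4).*exp(-r.^2)/delta^2
```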
2.3.1 Scaling Parameter δ
As we are working in two dimensions we naturally need a significantly greater number of points, especially when ε = 0.01, as the solution becomes stiff.
Figure 2.8: Log of the error (a) and condition number of the collocation matrix (b) versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Gaussian RBF. The minimum error is 2.9 × 10^{-2}, for δ = 0.07 with condition number 1.09 × 10^{9}.
In order to get acceptable accuracy we need to use at least 1024 points, in which case the Wendland(2D) is the better choice as far as accuracy is concerned, see Figures 2.7 and 2.8. The Gaussian surprisingly has a smaller condition number here; this is because a very small value of δ was chosen. Had we picked a slightly larger δ, the conditioning would be much worse, as the behaviour of the condition number over a wider range of δ values is similar to the 1D case, i.e., it increases very rapidly close to δ = 0.2. Experimenting with other values of N, we can conclude that for the case ε = 0.01 the Wendland(2D) is the better choice, see Tables A.5 and A.6, not only because of the better accuracy but also because it allows more values of δ to be used.
If we take ε = 0.5, we find that the collocation method produces more accurate results when the Gaussian is used, see Tables A.7 and A.8. It is also worth noticing how well the method performs for smoother solutions, even if just a few points are used.
In general, for cases like ε = 0.01, collocation requires a large number of points, which implies larger condition numbers. The Gaussian is likely to make the method unstable very quickly, and for that reason one might prefer a compactly supported Wendland function with a not too large value of the scaling parameter δ.
2.3.2 Varying N
As in the 1D case, we are interested in whether we can achieve convergence by increasing the number of points we use. The results we have obtained for our two dimensional problem are broadly similar to those for the 1D case.
Figure 2.9: Eigenvalue plots for ε = 0.5 and N = 256. (a) Gaussian with δ = 0.3; (b) Wendland(2D) with δ = 0.3.
If we make δ proportional to the stepsize h, i.e. δ = ch, we do not always obtain a convergent scheme for either of our functions. Moreover, if we keep δ fixed, we find it very challenging to make the method converge when using the Gaussian. The Wendland function, on the other hand, will result in a convergent method if δ is kept fixed.
2.3.3 Distribution of Eigenvalues
We find again that the majority of the time the method produces complex eigenvalues. We can get real eigenvalues for a small number of points and a small δ, but those cases do not produce acceptable solutions. As we observed for the 1D case, if we use the Gaussian RBF more eigenvalues tend to have modulus close to 0, while the same is not true if we use the Wendland(2D) RBF. This is quite obvious in Figure 2.9, where the eigenvalues obtained when using the Gaussian appear far fewer in number than the ones we get when the Wendland(2D) is used; the majority of them are clustered extremely close to zero.
2.4 Chapter Summary
In this chapter, we have implemented the collocation method, using two different types of RBFs, for our model problems in one and two dimensions. We have seen that the Gaussian RBF provides better accuracy for the right choice of δ, except for the 2D problem with ε = 0.01, where the Wendland(2D) is clearly better. Also, the Gaussian makes the method unstable due to ill-conditioning, and hence it is very difficult to obtain convergence, which is not the case for the Wendland RBFs. Figures 2.10 and 2.11 show numerical solutions obtained through collocation for the case ε = 0.01, for the 1D and 2D cases respectively. Perhaps a drawback of the method when used for solving PDEs is the fact that there is no theoretical background for it, and also the fact that there is a small possibility that the collocation matrix A will be singular.
Figure 2.10: Numerical solutions for the 1D model problem for ε = 0.01 and N = 64. (a) Solution obtained using the Gaussian RBF with δ = 0.12; (b) solution obtained using the Wendland(1D) RBF with δ = 0.05.
Figure 2.11: Numerical solutions and pointwise error for the 2D model problem for ε = 0.01 and N = 1024. (a) Solution obtained using the Gaussian RBF with δ = 0.1; (b) solution obtained using the Wendland(2D) RBF with δ = 5.
Chapter 3
Galerkin Formulation
3.1 Galerkin Method for Robin Problems
The second method we consider is based on a Galerkin formulation of the problem. It is somewhat similar to a finite element method, with the distinct difference that we do not need to perform computationally expensive mesh generation. Of course, the basis functions used in our case are radially symmetric.
Wendland [21] looked into a Galerkin based RBF method for second order PDEs with Robin boundary conditions, for an open bounded domain Ω having a C^1 boundary, that is,

$$-\sum_{i,j=1}^{d} \frac{\partial}{\partial x_i}\left( a_{ij} \frac{\partial u}{\partial x_j} \right)(x) + c(x)u(x) = f(x), \qquad x \in \Omega,$$
$$\sum_{i,j=1}^{d} a_{ij}(x) \frac{\partial u(x)}{\partial x_j} \nu_i(x) + h(x)u(x) = g(x), \qquad x \in \partial\Omega, \qquad (3.1)$$

where a_{ij}, c ∈ L^∞(Ω) for i, j = 1, ..., d, f ∈ L^2(Ω), a_{ij}, h ∈ L^∞(∂Ω), g ∈ L^2(∂Ω) and ν is the outward unit normal vector to ∂Ω. The entries a_{ij}(x) satisfy the following ellipticity condition: there exists a constant γ > 0 such that for all x ∈ Ω and all α ∈ R^d,

$$\gamma \sum_{j=1}^{d} \alpha_j^2 \le \sum_{i,j=1}^{d} a_{ij}(x)\,\alpha_i \alpha_j. \qquad (3.2)$$
As stated in [21], under the additional assumption that the functions c and h are both non-negative and one or both of them are 'uniformly bounded away from zero on a subset of nonzero measure of Ω for c or ∂Ω for h, we obtain a strictly coercive and continuous bilinear functional'

$$a(u, v) = \int_\Omega \left( \sum_{i,j=1}^{d} a_{ij} \frac{\partial u}{\partial x_j}\frac{\partial v}{\partial x_i} + cuv \right) dx + \int_{\partial\Omega} huv \, dS \qquad \text{on } V \times V, \qquad (3.3)$$
where V = H^1(Ω). Combining a(u, v) with the continuous linear functional

$$l(v) = \int_\Omega f v \, dx + \int_{\partial\Omega} g v \, dS, \qquad (3.4)$$

we obtain the weak formulation of the problem:

find u ∈ V such that a(u, v) = l(v) holds for all v ∈ V, (3.5)

which, by the Lax-Milgram theorem, is always uniquely solvable.
The next step is to consider a finite dimensional subspace V_N ⊂ V spanned by our RBFs, that is,

$$V_N := \text{span}\{\Phi_j(x),\ j = 1, \dots, N\}, \qquad (3.6)$$

for a set of pairwise distinct points X = {x_1, ..., x_N} ⊂ Ω, and to search for an approximation s ∈ V_N to u that satisfies a(s, v) = l(v) for all v ∈ V_N. It is mentioned in [21] that it is preferable to use compactly supported functions, such as the Wendland RBFs, in order to obtain some sparsity in the resulting matrix. Wendland provided theoretical bounds for such settings. We give below a special case, with m = 0, of Theorem 5.3 proved in [21].
Theorem 3.1.1. Assume u ∈ H^k(Ω) and that Φ is such that its Fourier transform satisfies Φ̂(ω) ∼ (1 + ‖ω‖_2)^{-2σ} with σ ≥ k > d/2. Then there exists a function s ∈ V_N such that

$$\|u - s\|_{L^2(\Omega)} \le C \hat{h}^k \|u\|_{H^k(\Omega)},$$

for sufficiently small ĥ = sup_{x∈Ω} min_{1≤j≤N} ‖x − x_j‖_2.
The Wendland RBFs satisfy the requirements of the theorem with σ = 3 and σ = 3.5 for the 1D and 2D versions respectively. Also, ĥ corresponds to the data density, or mesh norm, which as stated in [4] 'measures the radius of the largest data-free hole contained in Ω'.
3.2 Galerkin Method for the Dirichlet Problem
In our case, both model problems have Dirichlet boundary conditions. This poses a difficulty for Galerkin methods, as they cannot simply be used with RBFs, as mentioned in [1]. Since the boundary conditions need to be satisfied by the space in which the solution u lies [1], [3], a problem arises because RBFs, even compactly supported ones, do not in general satisfy these boundary conditions. Therefore the error bound given in Theorem 3.1.1 is most likely not valid in our case.
Different methods have been developed to tackle problems with Dirichlet boundary
conditions, including approximating the Dirichlet problem by an appropriate Robin
problem [1], using a Lagrange multiplier approach [3] and others [11]. We will follow a
quite different approach by adding additional terms to our weak formulation in order
to impose the boundary conditions.
The weak formulation of the general convection-diffusion problem (2.1) is to find u ∈ H^1_E(Ω) such that

$$\int_\Omega \varepsilon \nabla u \cdot \nabla v + (b \cdot \nabla u)\,v + cuv \, d\Omega - \int_{\partial\Omega} \varepsilon v \frac{\partial u}{\partial \nu} \, dS = \int_\Omega f v \, d\Omega \qquad (3.7)$$

for all v ∈ H^1_{E_0}(Ω), where H^1_E(Ω) = {ω ∈ H^1(Ω) : ω = g_D on ∂Ω} and H^1_{E_0}(Ω) = {ω ∈ H^1(Ω) : ω = 0 on ∂Ω}. Note here that the boundary term vanishes since v ∈ H^1_{E_0}(Ω). However, as we have pointed out earlier, our RBFs span V_N, which is a subspace of H^1(Ω), and therefore cannot be used in this setting. In order to impose the boundary conditions we consider the following relation,

$$\int_{\partial\Omega} \left( \theta \varepsilon \frac{\partial v}{\partial \nu} + \kappa v \right) u \, dS = \int_{\partial\Omega} \left( \theta \varepsilon \frac{\partial v}{\partial \nu} + \kappa v \right) g_D \, dS, \qquad (3.8)$$

where θ ∈ [−1, 1] and κ = c/h, which we incorporate into (3.7). Note that here h is the meshsize, which is constant as we are using a uniform distribution.
Our new formulation of the problem is:

find u ∈ H^1(Ω) such that a(u, v) = l(v) holds for all v ∈ H^1(Ω), (3.9)

where

$$a(u, v) = \int_\Omega \varepsilon \nabla u \cdot \nabla v + (b \cdot \nabla u)\,v + cuv \, d\Omega - \int_{\partial\Omega} \varepsilon v \frac{\partial u}{\partial \nu} \, dS + \int_{\partial\Omega} \left( \theta \varepsilon \frac{\partial v}{\partial \nu} + \kappa v \right) u \, dS,$$
$$l(v) = \int_\Omega f v \, d\Omega + \int_{\partial\Omega} \left( \theta \varepsilon \frac{\partial v}{\partial \nu} + \kappa v \right) g_D \, dS. \qquad (3.10)$$
The idea of imposing the boundary conditions in a weak sense, as we have done here, has been discussed in [16] for finite element methods. After a few numerical experiments we found that choosing θ = −1 and κ = 5/h produces more accurate results most of the time. Setting θ = −1 means that the symmetric part of the elliptic operator corresponds to a symmetric bilinear form.
We construct the approximate solution s ∈ V_N by taking a linear combination of our basis functions, that is,

$$s(x) = \sum_{j=1}^{N} C_j \Phi_j(x), \qquad (3.11)$$

and we take v to be each of our basis functions in turn, i.e., v = Φ_i(x) for i = 1, ..., N. The problem once again reduces to a matrix equation,

$$AC = F, \qquad (3.12)$$

where A_{ij} = a(Φ_j, Φ_i) and F_i = l(Φ_i). Now all that is left is to solve Equation (3.12) in order to obtain the coefficients C_j.
A disadvantage of this method is the fact that we have to employ numerical integration in order to compute the entries of the matrix A. This is computationally expensive and time consuming, especially for problems in more than one dimension. After trying different MATLAB routines for numerical integration, as well as some we implemented from scratch, like the trapezium and Simpson's rules, we found the quad2d function for double integrals and the integral function for one dimensional integrals to be the fastest. However, as the execution time is still very poor, especially when we increase N, we had to use parallel for loops in the implementation of the algorithm in order to improve running times, see Appendix F, Section F.2. The improvement this gives depends on how powerful the CPU of the computer is.
3.3 One Dimension
We will now apply the Galerkin formulation method to our 1D problem. The matrix A and right-hand-side vector F are given by

$$A_{ij} = \int_0^1 \varepsilon \Phi_j' \Phi_i' + \Phi_j' \Phi_i \, dx + \left( \theta \varepsilon \Phi_i'(1)\Phi_j(1) - \varepsilon \Phi_j'(1)\Phi_i(1) \right) - \left( \theta \varepsilon \Phi_i'(0)\Phi_j(0) - \varepsilon \Phi_j'(0)\Phi_i(0) \right) + \kappa \Phi_j(1)\Phi_i(1) + \kappa \Phi_j(0)\Phi_i(0),$$
$$F_i = -\theta \varepsilon \Phi_i'(0) + \kappa \Phi_i(0). \qquad (3.13)$$

The expressions for the derivatives of Φ_j with respect to x are given by (2.7), and the derivatives of φ with respect to r can be found in Table 2.1 for each RBF.
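As an illustration (a minimal sketch for the Gaussian RBF; the actual implementation is in Appendix F, Section F.2), the interior part of a single entry of A in (3.13) can be computed with MATLAB's integral:

```matlab
% Interior (integral) part of one entry A(i,j) of (3.13), for Gaussian
% basis function Phi_j and test function Phi_i centred at xj and xi.
ep = 0.01; delta = 0.05; xi = 0.25; xj = 0.5;
Phi  = @(x,xc) exp(-((x - xc)/delta).^2);
dPhi = @(x,xc) -2*(x - xc)/delta^2 .* exp(-((x - xc)/delta).^2);
Aij = integral(@(x) ep*dPhi(x,xj).*dPhi(x,xi) + dPhi(x,xj).*Phi(x,xi), 0, 1);
```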
3.3.1 Effect of δ
In this section we investigate how the scaling parameter δ affects the accuracy of the method, as well as which of the two RBFs gives better results for the one dimensional model problem. Let us first look at the case ε = 0.01, for which the solution is stiff. Using the Wendland(1D) RBF results in a method whose accuracy is sensitive to the choice of the scaling parameter δ, as can be seen in Figure 3.1. This phenomenon is weaker if fewer points are used, but the accuracy in those cases is not desirable.
Figure 3.1: Log of the error (a) and condition number (b) versus the scaling parameter δ for ε = 0.01 and N = 64, using the Wendland(1D) RBF. The minimum error is 5.6 × 10^{-3}, for δ = 0.19 with condition number 3.64 × 10^{10}.
Figure 3.2: Log of the error (a) and condition number (b) versus the scaling parameter δ for ε = 0.01 and N = 64, using the Gaussian RBF. The minimum error is 1.1 × 10^{-3}, for δ = 0.05 with condition number 2.62 × 10^{17}.
The condition number of the resulting matrix also depends on the choice of δ; most of the time an increase in the scaling parameter results in an increase of the condition number. It turns out that mostly small values of δ produce better results, see Table B.1. If we use the Gaussian RBF we find that we can improve the accuracy of the method, see Table B.2. The behaviour of the error as we vary δ seems unstable, but in this case we can find a range of δ values for which the error does not change dramatically, and this can be observed for other values of N. The condition number does not really increase as δ increases, but for the values of the scaling parameter that give good accuracy for N = 64 the system is more ill-conditioned.
Figure 3.3: Log of the error (a) and condition number (b) versus the scaling parameter δ for ε = 0.5 and N = 16, using the Wendland(1D) RBF. The minimum error is 3.0 × 10^{-5}, for δ = 4.1 with condition number 4.43 × 10^{16}.
Figure 3.4: Log of the error (a) and condition number (b) versus the scaling parameter δ for ε = 0.5 and N = 16, using the Gaussian RBF. The minimum error is 5.61 × 10^{-6}, for δ = 0.36 with condition number 3.54 × 10^{17}.
However, this is not the case for every choice of N, making it hard to predict which values of δ will produce an accurate method. We observe that using the Gaussian RBF will most times result in worse conditioning than the Wendland(1D) RBF; the reason is that the Wendland(1D) can produce sparse linear systems for the method, in contrast to the Gaussian.
We now move on to the smoother case of ε = 0.5. The first observation we make is that the method requires far fewer points in order to produce a good approximation, as expected. The Wendland(1D) RBF this time allows us to use a greater range of δ values, although this no longer holds if the number of points is large. Also, there comes a point where increasing the parameter any further has negative effects on the accuracy and stability of the method, see Figure 3.3.
Figure 3.5: Log of L2 norm of the error versus N for ε = 0.01. (a) Wendland(1D) with δ = 0.2; (b) Gaussian with δ = 0.25.
Nevertheless, if a small number of points is used, the choice of δ is not as tricky as when ε = 0.01. As for the Gaussian, even though the choice of δ is a bit more difficult, see Figure 3.4, the accuracy of the method is clearly better, if somewhat more unstable, for most values of N, see Tables B.3 and B.4.
3.3.2 Increasing the number of points N
It should be possible to achieve convergence of the method by increasing the number of points used. We remind the reader that we are always using uniformly distributed points. As can be observed from the tables in Appendix B, values of δ that work well with a specific number of points do not necessarily result in an accurate scheme if we change N. This is an issue if one wishes to obtain convergence by increasing the number of points used. After a few numerical experiments we find that keeping δ fixed while increasing N does not guarantee convergence for either RBF, see Figure 3.5. Also, for compactly supported functions such as the Wendland(1D) RBF, using an increasingly smaller stepsize h while keeping the support fixed eliminates the advantage of having sparse matrices [20]. The fact that we cannot obtain convergence by increasing N confirms that Theorem 3.1.1 does not hold in our setting.
The other choice would be to take δ proportional to h. This choice would potentially preserve the sparsity of the resulting linear systems if a compactly supported function is used. However, numerical experiments suggest that convergence is unattainable in this setting too. This was also observed in [20], where the Helmholtz equation is considered.
Figure 3.6: Eigenvalue plots for ε = 0.01 and N = 64. (a) Wendland(1D) with δ = 0.19; (b) Wendland(1D) with δ = 0.19, zoomed in.
Figure 3.7: Eigenvalue plots for ε = 0.5 and N = 32. (a) Wendland(1D) with δ = 0.1; (b) Wendland(1D) with δ = 0.4.
3.3.3 Eigenvalues
For the case ε = 0.01, both RBFs produce mainly complex eigenvalues. Figure 3.6 shows the distribution of eigenvalues when the Wendland(1D) RBF is used. If instead we had chosen the Gaussian RBF, in addition to having a dense matrix, the eigenvalue distribution, although qualitatively similar to Figure 3.6, consists mainly of eigenvalues whose real part is very close to zero. This is expected, as we have already seen that using the Gaussian usually results in ill conditioned matrices.
For the case ε = 0.5, when using the Wendland(1D) RBF we get better conditioning when we use small enough δ values that give a sparse linear system. This can also be deduced from the distribution of the eigenvalues. In Figure 3.7 we observe that for a larger δ the majority of eigenvalues are situated close to 0. The Gaussian RBF still produces eigenvalues whose modulus tends to zero as we increase δ, at a rate faster than that of the Wendland(1D), as expected.
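Such eigenvalue plots are straightforward to produce; a minimal MATLAB sketch, assuming the matrix A of the method has already been assembled in the workspace, is the following.

% Minimal sketch: scatter the spectrum of an assembled matrix A in the
% complex plane (A is assumed to already exist in the workspace).
lam = eig(full(A));                  % full() in case A is stored as sparse
plot(real(lam), imag(lam), 'x');
xlabel('Re(\lambda)'); ylabel('Im(\lambda)');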
Figure 3.8: Log of the error (a) and condition number (b) versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Wendland(2D) RBF. The minimum error is 2.95 × 10⁻¹, for δ = 0.1 with condition number 5.32 × 10¹⁹.
3.4 Two Dimensions
Applying the Galerkin formulation method to problems of higher dimensions becomes
increasingly complex as we have to perform the appropriate numerical integrations.
This has an impact on the speed of the method, which quickly becomes unattractive to use because of the slow execution times. After applying the method to our 2D model problem we obtain the matrix formulation of the problem; the expressions for the entries of the matrix A and the vector F are considerably longer and more involved than those we had for the 1D problem, which is why we have chosen to omit them.
3.4.1 Effect of δ
Varying δ for the 2D case, we observe a slightly peculiar behaviour of the error and condition number of the method. For both values of ε we are considering and both RBFs, the condition number seems to take an essentially constant value for all values of δ except δ = 1, for which it rapidly decreases, see for example Figures 3.8 and 3.9. The behaviour of the error is similar, with the difference that for δ = 1 it sometimes increases and sometimes decreases, see Figure B.2.
Figure 3.9: Log of the error (a) and condition number (b) versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Gaussian RBF. The minimum error is 1.42, for δ = 0.1 with condition number 9.81 × 10¹⁹.

Now in terms of accuracy, for both ε = 0.01 and ε = 0.5, using the Wendland(2D) RBF provides us with a more accurate solution, see Tables B.5 to B.8. Concerning the conditioning of the method, for the same value of δ the Wendland(2D) produces a better conditioned linear system. However, for the choice of δ that produces the
most accurate result for each RBF, the Gaussian may sometimes result in a smaller
condition number than the Wendland(2D).
3.4.2 Increasing the number of points N
We saw that we were not able to make the Galerkin method converge by increasing
the number of points for the 1D model problem. After looking at the error and
condition number plots from Section 3.4.1 one may suspect that this will also be the
case for our 2D problem.
As predicted, we found obtaining a convergent method to be difficult. Keeping δ fixed while increasing N not only eliminates the advantage of having compactly supported functions when using the Wendland(2D), but also does not produce a convergent scheme for either of our two RBFs, for either choice of ε. We have also considered taking δ to be proportional to the stepsize h, again with no success. This seems to be a disadvantage of the method, as if the choice of δ is not appropriate, increasing N not only rapidly increases the computational time, but might also have negative effects on the accuracy of the method.
3.4.3 Eigenvalues
The first observation we make with regard to the eigenvalue distribution for the 2D problem is that it is the same for all values of δ unequal to 1, for both RBFs. This is something we expected after looking into the condition number plots of Section 3.4.1. We note that the method usually produces complex eigenvalues, of which, for δ = 1, the majority have real and imaginary parts that are extremely close to zero.
Comparing the distribution of eigenvalues produced by our two RBFs, we find that for the same value of δ the Gaussian will result in eigenvalues closer to zero, and usually a real one with a very large modulus. In contrast, the Wendland(2D), whilst still giving complex eigenvalues, produces them a bit further away from 0, and the real eigenvalue with the largest modulus is smaller in comparison, see Figures 3.10 and 3.11. Our numerical experiments suggest that for δ = 1 the conditioning when using the compactly supported RBF is better than when using the Gaussian, while the same is not always true for other values of δ. We note that for ε = 0.5, i.e., for smoother solutions, the conditioning is generally better.
Figure 3.10: Eigenvalue distribution for ε = 0.01 using the Wendland(2D) RBF with N = 64; (a) δ = 0.1, (b) δ = 1.

Figure 3.11: Eigenvalue distribution for ε = 0.01 using the Gaussian RBF with N = 64; (a) δ = 0.1, (b) δ = 1.
3.5 Chapter Summary
After implementing the Galerkin method for our model problems, the obvious conclusion is that it is impractical to use because of the slow execution times and also the fact that it seems to be impossible to obtain a convergent scheme. We have observed that for the 1D case using the Gaussian results in a more accurate scheme but with worse conditioning, whereas for the 2D case the Wendland(2D) is without a doubt a better choice. Figures 3.12 and 3.13 show the numerical solution to the 1D and 2D model problems for ε = 0.01, where for the 2D case it is obvious that neither of the RBFs produces an acceptable solution. Employing the Wendland RBFs results in sparse matrices for small enough δ and usually results in better conditioning for the method.
Figure 3.12: Numerical solutions for the 1D model problem for ε = 0.01 and N = 64; (a) using the Gaussian RBF with δ = 0.05, (b) using the Wendland(1D) RBF with δ = 0.19.
Figure 3.13: Numerical solutions and pointwise error for the 2D model problem for ε = 0.01 and N = 1024; (a) using the Gaussian RBF with δ = 0.1, (b) using the Wendland(2D) RBF with δ = 0.1.
Chapter 4
Generalised Interpolation
The final method we will discuss is called generalised interpolation. This method is
unique in the sense that it results in symmetric collocation matrices which are also
positive definite for constant coefficient PDEs when positive definite RBFs are used
[22].
Wendland in [22] has managed to provide bounds for the smallest eigenvalue of this
method for general boundary value problems, while in [4] a bound on the condition
number is provided for strictly elliptic PDEs with Dirichlet boundary conditions, i.e.,
for
Lu = f    in Ω,
u = g_D   on ∂Ω,                                                        (4.1)

where

Lu(x) = Σ_{i,j=1}^d a_{ij}(x) ∂²u(x)/∂x_i∂x_j + Σ_{i=1}^d b_i(x) ∂u(x)/∂x_i + c(x)u(x),    (4.2)
and the coefficients a_{ij}(x) satisfy the ellipticity condition (3.2). These results, however, hold only for a specific type of RBF. As the setting discussed in [4] is closer to our model problems, we will focus on it.
4.1 Method Description
Before moving on to the convergence and stability results, in this section we provide the reader with a description of the method. To start off, we define the functionals λ_i as follows:

λ_i(u) = Lu(x_i)   if x_i ∈ Ω,
λ_i(u) = u(x_i)    if x_i ∈ ∂Ω.                                         (4.3)
For generalised interpolation, we construct our approximation s to the exact solution u in a slightly different way than in the previous two methods. This time the approximation is given by

s(x) = Σ_{j=1}^N C_j λ_j^χ φ(‖x − χ‖₂/δ),                               (4.4)
where this time our RBF depends on two variables x and χ. Note that we have
applied the functional λj to the function φ with respect to the new argument χ. In
this chapter we redefine Φ as a function of two vector arguments, that is,
Φ(x, χ) = φ(r),   where r = ‖x − χ‖₂/δ.

As always, we are using N uniformly distributed points in order
to simplify the comparison process. We then substitute our expression for s(x) back
into the PDE and boundary conditions or equivalently apply the functional λi to s
with respect to x, that is,
λ_i^x(s) = f_i,   i = 1, ..., N,                                        (4.5)

where

f_i = f(x_i)     if x_i ∈ Ω,
f_i = g_D(x_i)   if x_i ∈ ∂Ω.                                           (4.6)
This can be rewritten as a matrix equation
AC = F, (4.7)
where the entries of the symmetric collocation matrix A are given by
A_{i,j} = λ_i^x λ_j^χ φ(‖x − χ‖₂/δ),                                    (4.8)
and the entries of the right-hand-side vector F are given by Fi = fi. Solving the
matrix Equation (4.7) will provide us with the unknown coefficients Cj and hence the
approximate solution s(x).
4.2 Stability and Accuracy
We first need to give the definition of the fractional Sobolev space W_2^σ(R^d). As stated in [4], 'we describe functions in the fractional Sobolev space W_2^σ(R^d) as those square integrable functions that are finite in the norm

‖f‖²_{W_2^σ(R^d)} = ∫_{R^d} |f̂(ω)|² (1 + ‖ω‖₂²)^σ dω,                   (4.9)
where f̂ is the usual Fourier transform'. We note here that σ can be either an integer or a fraction. Also needed is the definition of a reproducing kernel to a Hilbert space, which is given below as found in [4].
Definition 4.2.1. (Reproducing Kernel)
Suppose H ⊂ C(Ω) denotes a real Hilbert space of continuous functions f : Ω → R;
then Φ : Ω × Ω → R is said to be a reproducing kernel for H if
• Φ(·, χ) ∈ H for all χ ∈ Ω
• f(χ) = (f, Φ(·, χ))H for all f ∈ H and all χ ∈ Ω.
Both of our RBFs are reproducing kernels to a Hilbert space, also known as the native space of the RBF [5]. We now move on to the main stability result proven in [4]. It is important here to notice that the result only holds for the Wendland RBFs, which are compactly supported reproducing kernels. As we will also see from numerical experiments in the following sections, this result does not apply to the Gaussian RBF.
Theorem 4.2.1.
Suppose W_2^σ(R^d) with σ > d/2 + 2 has a compactly supported reproducing kernel, Φ, which has a Fourier transform satisfying

c₁(1 + ‖ω‖₂²)^{−σ} ≤ Φ̂(ω) ≤ c₂(1 + ‖ω‖₂²)^{−σ}.

Let 0 < δ ≤ 1. Suppose L is a linear, strictly elliptic, bounded, second order differential operator. Then for sufficiently small δ the condition number of the collocation matrix A can be bounded by

cond(A) ≤ Cδ^{−4} (1 + 2δ/h)^d (δ/h)^{2σ−d},

with a constant C independent of h and δ.
The bound on the condition number of A is derived from the bounds

λ_min ≥ C₁ (h/δ)^{2σ−d},
λ_max ≤ C₂ δ^{−4} (1 + 2δ/h)^d,                                         (4.10)
where λ_max and λ_min are respectively the maximum and minimum eigenvalues of A, and C₁, C₂ > 0 are independent of h and δ. It is proved in [19] that the Wendland(1D) and Wendland(2D) RBFs satisfy the conditions of Theorem 4.2.1 with σ = 3 and σ = 3.5 respectively.
Wendland and Farrell in [4] have also provided a bound on the L2 norm of the error. We give it below as Theorem 4.2.2; it is stated and proven as Lemma 4.4 in [4].
            Gaussian                      Wendland(1D)                      Wendland(2D)
φ'''(r)     −4r(2r² − 3)e^{−r²}           (1 − r)²₊(−1680r² + 840r)         (1 − r)³₊(5040r − 11760r²)
φ^(iv)(r)   (12 − 48r² + 16r⁴)e^{−r²}     (1 − r)₊(6720r² − 5880r + 840)    1680(1 − r)²₊(5r − 3)(7r − 1)

Table 4.1: Derivatives of RBFs.
Theorem 4.2.2. (L2-error)
Assume δ ∈ (0, 1]. Let u ∈ H^σ(Ω) be the solution of (4.1). Let the domain Ω have a C^{k,s} boundary for s ∈ (0, 1] such that σ = k + s and k := ⌊σ⌋ > 2 + d/2. Then the error between the solution u and its generalised interpolation approximation s can be bounded in the L2 norm by

‖u − s‖_{L2(Ω)} ≤ Cδ^{−σ} h^{σ−2} ‖u‖_{H^σ(Ω)},

where h = sup_{x∈Ω} min_{1≤j≤N} ‖x − x_j‖₂.
In the original version of the theorem, h is the maximum between the mesh norm
of the points in Ω and the mesh norm of the points on the boundary ∂Ω. In our case
we have uniformly distributed points so h can be taken to simply be the meshsize.
The definition of a C^{k,s} boundary is given in [23], Definition 2.7.
In our numerical experiments we will consider three cases for δ: the stationary setting δ = ch, the nonstationary setting δ = ch^{1−2/σ}, and also keeping δ fixed. We note that the nonstationary setting will only be considered for the Wendland RBFs.
4.3 One Dimension
Prior to coding the method for our 1D model problem in MATLAB, we need to calculate the appropriate derivatives. Let G_j = λ_j^χ φ(|x − χ|/δ) for j = 1, ..., N, where

G_j(x) = [−ε ∂²Φ/∂χ² + ∂Φ/∂χ]_{χ=x_j},   j = 2, ..., N − 1,
G_j(x) = φ(|x − x_j|/δ),                  j = 1, N,                     (4.11)

and

∂Φ/∂χ = −(1/δ) dφ/dr   if x > χ,
∂Φ/∂χ =  (1/δ) dφ/dr   if x < χ,
∂²Φ/∂χ² = (1/δ²) d²φ/dr².                                               (4.12)
Figure 4.1: Log of the error (a) and condition number (b) versus the scaling parameter δ for ε = 0.01 and N = 64, using the Wendland(1D) RBF. The minimum error is 2.51 × 10⁻¹, for δ = 5 with condition number 2.85 × 10⁹.

We can then write s(x) = Σ_{j=1}^N C_j G_j(x), which we essentially substitute back into the PDE and boundary conditions. We therefore need to calculate up to the fourth derivative with respect to r of our RBFs; these are given in Tables 2.1 and 4.1. We also need the first and second derivatives of G_j with respect to x, which are given by
G_j'(x) = (∂r/∂x) [ −(ε/δ²) d³φ_j/dr³ + (∂r/∂χ)|_{χ=x_j} d²φ_j/dr² ],
G_j''(x) = −(ε/δ⁴) d⁴φ_j/dr⁴ + (1/δ²) (∂r/∂χ)|_{χ=x_j} d³φ_j/dr³,       (4.13)

where

r = |x − χ|/δ.
We can now move on to developing our code in MATLAB and experimenting.
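As an illustration of the assembly this leads to, the following is a minimal MATLAB sketch and not the code used for our experiments. It assumes the 1D model operator Lu = −εu'' + u', function handles phi0, phi1, phi2 and phi4 for the RBF and its derivatives with respect to r (Tables 2.1 and 4.1), and hypothetical handles f and gD for the interior and boundary data; note that when L is applied in both arguments the odd-order φ''' terms of (4.13) cancel, which is also what makes A symmetric.

% A sketch of the 1D generalised interpolation assembly under the stated
% assumptions; phi0..phi4, f and gD are hypothetical function handles.
N = 64; ep = 0.01; delta = 5;
x = linspace(0, 1, N)';
A = zeros(N); F = zeros(N, 1);
for i = 1:N
    for j = 1:N
        r  = abs(x(i) - x(j)) / delta;
        s  = sign(x(i) - x(j));        % dr/dx = s/delta, dr/dchi = -s/delta
        bi = (i == 1 || i == N);       % boundary functional in x?
        bj = (j == 1 || j == N);       % boundary functional in chi?
        if bi && bj                    % u(x_i) paired with u(x_j)
            A(i,j) = phi0(r);
        elseif bi                      % u in x, L applied in chi, cf. (4.11)
            A(i,j) = -ep/delta^2*phi2(r) - s/delta*phi1(r);
        elseif bj                      % L applied in x, u in chi
            A(i,j) = -ep/delta^2*phi2(r) + s/delta*phi1(r);
        else                           % L in both arguments: the odd-order
            A(i,j) = ep^2/delta^4*phi4(r) ...   % phi''' terms of (4.13) cancel
                     - phi2(r)/delta^2;
        end
    end
end
F(2:N-1) = f(x(2:N-1)); F([1 N]) = gD(x([1 N]));
C = A \ F;                             % coefficients C_j in (4.4)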
4.3.1 Effect of δ
As we have seen for the previous two methods, the choice of δ plays an important role in how accurate and how ill-conditioned our method is. We start off by considering the case for which the exact solution to our model problem is stiff, i.e., ε = 0.01.
Observing Figures 4.1 and 4.2, the first thing that we notice is that the Wendland(1D) RBF allows the use of a wider range of δ values. However, using the Gaussian allows us to achieve a far more accurate approximation to the solution, even though by doing so we jeopardise the stability of the method. The Wendland(1D) seems to offer exponential convergence as we increase δ. If we consider more values of δ we can slightly improve the accuracy, but we also increase the condition number. For example, for δ = 20 and N = 64 the error is 0.2455 with the condition number shooting up to 2.04 × 10¹¹.
Figure 4.2: Log of the error (a) and condition number (b) versus the scaling parameter δ for ε = 0.01 and N = 64, using the Gaussian RBF. The minimum error is 8.40 × 10⁻⁴, for δ = 0.09 with condition number 4.97 × 10¹⁷.

Figure 4.3: Log of the error (a) and condition number (b) versus the scaling parameter δ for ε = 0.5 and N = 16, using the Wendland(1D) RBF. The minimum error is 1.14 × 10⁻⁴, for δ = 5 with condition number 597.

We feel that it might not be worth sacrificing stability for such a
small improvement in accuracy. The Gaussian, on the other hand, produces accurate solutions for only a few δ values. This has the disadvantage that it might be hard to choose a 'good' δ value. In general, the behaviour of the error versus δ is similar for other values of N for the Wendland(1D), whereas for the Gaussian we observe a less erratic behaviour for smaller values of N.
Now let us have a look at the smoother case, ε = 0.5. Observing Figure 4.3 we see that the error reduces as we increase δ, while the condition number seems to
decrease initially only to start increasing again later. If we produce the same plots
considering a larger range of δ values, for example up to δ = 50, the resulting plots are almost identical to those of Figure 4.1.

Figure 4.4: Log of the error (a) and condition number (b) versus the scaling parameter δ for ε = 0.5 and N = 16, using the Gaussian RBF. The minimum error is 1.36 × 10⁻⁸, for δ = 0.86 with condition number 5.15 × 10¹⁷.

The Gaussian again provides a much better
approximation, see Figure 4.4, but the difference in the conditioning of the method
is enormous. We note here that, even though the behaviour of the error is again erratic, almost all of the δ values considered here will produce acceptable accuracy, which is almost always better than that produced by the Wendland(1D). Once more, the decision on which of the RBFs is better depends on whether accuracy or stability is more important to the user.
4.3.2 Varying N
The question we want to answer in this section is whether convergence can be obtained by increasing N, and what δ should be in order to achieve it. Let us first consider the case ε = 0.01.

Figure 4.5: Log of the error versus N for ε = 0.01; (a) Gaussian with δ = 5h, (b) Wendland(1D) with δ = 0.25.

Our numerical experiments have shown that the stationary setting,
δ = ch, does not lead to a convergent scheme when the Wendland(1D) is used, while for the nonstationary setting δ = ch^{1−2/σ}, or keeping the scaling parameter fixed, we obtain convergence, see Figure 4.5(b). This agrees with Theorem 4.2.2. How accurate the scheme is depends on the value of c for the nonstationary setting, or on what fixed δ we choose. We observed that larger values of c, or in general of δ, produce more accurate approximations but also increased condition numbers. Now, if we use the Gaussian, the method does not converge if we keep δ fixed. Also, when the Gaussian is used with the stationary setting, the method seems to initially converge but then starts to diverge, see Figure 4.5(a). However, even in this situation, where the accuracy might not necessarily improve as we increase N, using the Gaussian leads to more accurate results.
For ε = 0.5, we have similar results. Again, it seems hard to make the method converge when the Gaussian is used, while for the Wendland(1D) it converges if we fix the scaling parameter or use the nonstationary setting, see Figure 4.6. Once more we get better results for large values of c, e.g. c = 20, 25 and so on. We note here that higher accuracy can be achieved for the case ε = 0.5 in comparison to ε = 0.01. The Wendland(1D) seems to have an advantage over the Gaussian, as for the right choice of δ the method converges as we increase N.
Figure 4.6: Log of the error versus N for ε = 0.5, using the Wendland(1D) with δ = 15h^{1−2/σ}.
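A sketch of how such a convergence study can be set up in MATLAB is given below; solve1D is a hypothetical helper that assembles the system for given N and δ, as sketched in Section 4.3, solves it and returns the L2 norm of the error.

% A sketch of the convergence study in the nonstationary setting; solve1D
% is a hypothetical routine returning the L2 error for given N and delta.
sigma = 3; c = 15;                    % Wendland(1D) satisfies sigma = 3
for N = [16 32 64 128 256 512]
    h     = 1/(N - 1);                % uniform meshsize on [0,1]
    delta = c * h^(1 - 2/sigma);      % nonstationary setting
    fprintf('N = %4d, delta = %6.3f, error = %.3e\n', ...
            N, delta, solve1D(N, delta));
end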
4.3.3 Eigenvalues
We are interested in the distribution of the eigenvalues of the collocation matrix
A. More importantly we want to check whether the bounds for the maximum and
minimum eigenvalue given by (4.10) hold in our case.
In all our numerical experiments, for both choices of ε which we are considering, using the Wendland(1D) produced strictly positive real eigenvalues, see Tables C.1 and C.3.

Figure 4.7: Eigenvalue distributions for ε = 0.01; (a) Gaussian with δ = 0.25, (b) Wendland(1D) with δ = 0.25.

The Gaussian, however, for up to a certain number of points yielded real
eigenvalues, some of which were negative and extremely close to zero. Increasing N further caused complex eigenvalues with very small imaginary parts to appear, see
Tables C.2 and C.4. This fact confirms that using the Wendland(1D) function produces a positive definite collocation matrix. However, our numerical findings suggest
this is not the case if we use the Gaussian RBF, see Figure 4.7. Since the colloca-
tion matrix A is always symmetric this implies that the eigenvalues should always
be real. Therefore the imaginary parts must be due to rounding errors either in the
computation of the matrix entries or in the computation of the eigenvalues. In fact
the matrix A should be positive definite [4], so the negative eigenvalues must also be
an artifact. Nevertheless, the smallest Gaussian eigenvalues are significantly smaller
than the smallest Wendland ones.
Now, as far as the bounds given by (4.10) are concerned, we found that they were satisfied when the Wendland(1D) was used, for all the different values of N we considered and δ ∈ (0, 1], for both values of ε. Of course there is no reason to check these bounds for the Gaussian, as they only apply to compactly supported RBFs.
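Since the constants C₁ and C₂ in (4.10) are unknown, such a check can only be carried out up to constants. A minimal MATLAB sketch is given below, assuming a hypothetical routine assembleGI(N, delta) that wraps the assembly sketched in Section 4.3; the ratios printed should stay bounded away from zero and from infinity respectively.

% Sketch: monitor the eigenvalue bounds (4.10) up to the unknown constants;
% assembleGI is a hypothetical assembly routine for the Wendland(1D) method.
sigma = 3; d = 1; delta = 0.5;
for N = [16 32 64 128 256]
    h    = 1/(N - 1);
    lam  = eig(assembleGI(N, delta));   % real and positive for Wendland(1D)
    rmin = min(lam) / (h/delta)^(2*sigma - d);          % should stay >= C1
    rmax = max(lam) / (delta^(-4)*(1 + 2*delta/h)^d);   % should stay <= C2
    fprintf('N = %4d: min ratio %.3e, max ratio %.3e\n', N, rmin, rmax);
end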
4.4 Two Dimensions
Generalised interpolation requires us to calculate up to the fourth derivative of our RBFs, therefore it can only be used with RBFs that belong to at least C⁴. Another drawback that becomes obvious when the method is used in higher dimensions is the need to essentially apply the differential operator twice, which complicates the process significantly even in just two dimensions.
For our 2D model problem, let us write s(x) = Σ_{j=1}^N C_j G_j(x), where G_j(x) = λ_j^χ φ(‖x − χ‖₂/δ) for j = 1, ..., N. That is,

G_j(x) = [−ε ∇²_χ Φ + (1, 2) · ∇_χ Φ]_{χ=x_j},   j = 1, ..., N*,
G_j(x) = φ(‖x − x_j‖₂/δ),                        j = N* + 1, ..., N,    (4.14)
where points 1 to N* are located within the domain, points N* + 1 to N lie on the boundary, and x = (x1, y1), χ = (x2, y2). The notation ∇_χ simply means take the gradient with respect to the argument χ. The next step is to substitute our expression for s back into our PDE and boundary conditions, or equivalently apply the functional λ_i, this time with respect to x. After long and tedious calculations we end up with the following expressions:
end up with the following expressions,
λiGj(x) = − 2
Gj(x) + (1, 2) · Gj(x) x=xi
=
2
δ4
d4
φ
dr4
+
2 2
δ4r
d3
φ
dr3
−
5
δ2r
dφ
dr
−
1
δ4
2
+ δ2
r2
+ 4(x2 − x1)(y2 − y1) + 3(y2 − y1)2
×
1
r2
d2
φ
dr2
−
1
r3
dφ
dr2
x=xi
(4.15)
for i = 1, ..., N*, and λ_i G_j(x) = G_j(x_i) for i = N* + 1, ..., N. Expression (4.15) is used for the entries (i, j) of the matrix A for which both x_i and x_j are points lying within our domain. We notice that we have terms that involve divisions by powers of
is used for the entries (i, j) of matrix A for which both xi and xj are points lying
within our domain. We notice that we have terms that have divisions by powers of
r. This cannot be left as it is, as our code will not work due to divisions by zero in
the diagonal entries of our matrix. We therefore need to simplify these further and
the only way to do so is to actually substitute the appropriate expression for φ and
its derivatives and hope that we will be able to cancel out the problematic terms.
This process involves a significant amount of algebra and it turns out to be hard and
confusing to do by hand, especially for the Wendland(2D) RBF whose expression is
a bit longer. For this reason at this point we had to make use of MAPLE in order
to obtain the appropriate expressions for the entries of the collocation matrix A. We
found that we could cancel out all divisions by powers of r.
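The same cancellations can also be checked with MATLAB's Symbolic Math Toolbox rather than MAPLE. The short sketch below is only an illustration: it uses the C⁴ Wendland-type polynomial φ(r) = (1 − r)⁶(35r² + 18r + 3) on its support as a stand-in for the actual RBFs of Table 2.1, and verifies that the combination (1/r²)φ'' − (1/r³)φ' appearing in (4.15) has a removable singularity at r = 0.

% Sketch: check symbolically that the divisions by powers of r cancel out.
% The polynomial below is an illustrative C^4 Wendland-type function on
% 0 <= r <= 1; the RBFs actually used in the project are those of Table 2.1.
syms r real
phi  = (1 - r)^6 * (35*r^2 + 18*r + 3);
d1   = diff(phi, r);
d2   = diff(phi, r, 2);
term = simplify(d2/r^2 - d1/r^3);   % the problematic factor in (4.15)
limit(term, r, 0)                   % finite, so no division by zero remains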
4.4.1 Effect of δ
Once more, the observations we make about the effect of δ for the 2D model problem are similar to those we had for the 1D model problem. Making use of the Gaussian RBF generally produces more accurate results for a well chosen scaling parameter.

Figure 4.8: Log of the error (a) and condition number (b) versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Wendland(2D) RBF. The minimum error is 2.27 × 10⁻¹, for δ = 5 with condition number 3.48 × 10¹².

Figure 4.9: Log of the error (a) and condition number (b) versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Gaussian RBF. The minimum error is 1.45 × 10⁻¹, for δ = 0.25 with condition number 4.63 × 10²².

However, the optimal value for δ is usually situated in parts of the graph
that look unstable, see Figure 4.9. This phenomenon weakens as we decrease the number of points, especially for ε = 0.5. In contrast, for the Wendland(2D) the
behaviour of the error does not become erratic as we vary δ. Generally increasing the
value of the scaling parameter seems to reduce the error while increasing the condition
number of the collocation matrix, see Figure 4.8.
4.4.2 Varying N
Investigating convergence for the 2D model problem leads us to similar observations to those made for the 1D model problem. Again we found that a convergent scheme could only be obtained for ε = 0.01 if the compactly supported Wendland(2D) RBF was used with a fixed δ or with δ = ch^{1−2/σ}, where σ = 3.5. As before, increasing c seems to help us achieve better accuracy. Once more, our findings agree with Theorem 4.2.2.

Figure 4.10: Log of the error versus N for ε = 0.5; (a) Gaussian with δ = 10h, (b) Wendland(2D) with δ = 10h^{1−2/σ}.
In our numerical experiments for ε = 0.5, however, we found that the Gaussian could produce a convergent scheme for δ = ch, providing us with better accuracy than the Wendland(2D) RBF, see Figure 4.10. Of course, the increased accuracy that the Gaussian RBF provides comes together with an enormous condition number.
4.4.3 Eigenvalues
Experimenting with various values of N, we see that, as for the 1D case, using the Wendland(2D) produces a collocation matrix with only real and positive eigenvalues for both ε = 0.01 and ε = 0.5. Furthermore, we found that these eigenvalues satisfied the bounds given by (4.10) for δ ∈ (0, 1].
Now if we use the Gaussian RBF, in contrast to the 1D case, we only found real eigenvalues in all our numerical experiments. However, the majority of times some of them were negative, albeit with a very small modulus. The negative eigenvalue of largest modulus in Figure 4.11(a) is −1.74 × 10⁻¹³. Now, since the matrix is again theoretically positive definite, we believe that the negative eigenvalues appear due to rounding errors, as for the 1D model problem. This confirms, however, that bounds similar to (4.10) cannot be obtained when we use translates of the Gaussian RBF as our basis functions, as we had also deduced for the 1D case.
Figure 4.11: Eigenvalue distributions for ε = 0.01 with N = 256; (a) Gaussian with δ = 0.5, (b) Wendland(2D) with δ = 0.5.
4.5 Chapter Summary
It is clear that generalised interpolation is a stable method, whose eigenvalues can be bounded, when used with the compactly supported Wendland RBFs. However, if one chooses to use the Gaussian function, the method produces far more accurate solutions, even though it becomes unstable, making the choice of δ difficult. Hence, it is hard to choose δ in a way that causes the method to converge as we increase N for the Gaussian, whereas this can easily be done for the Wendland RBFs. Figures 4.12 and 4.13 show numerical solutions obtained through generalised interpolation for ε = 0.01 for both model problems.
Figure 4.12: Numerical solutions for the 1D model problem for ε = 0.01 and N = 64; (a) using the Gaussian RBF with δ = 0.09, (b) using the Wendland(1D) RBF with δ = 5.
Figure 4.13: Numerical solutions and pointwise error for the 2D model problem for ε = 0.01 and N = 1024; (a) using the Gaussian RBF with δ = 0.25, (b) using the Wendland(2D) RBF with δ = 5.
Chapter 5
Method Comparison
In the previous three chapters we have presented three different methods that use
RBFs in order to solve PDEs. We have talked through their implementation in
general and also more specifically for our model problems in one and two dimensions.
We implemented each of the three methods using two different types of RBFs, the
compactly supported Wendland functions and the Gaussian. In each chapter we have
looked into the accuracy and stability of the methods for the different RBFs. We
have also investigated the effect of the scaling parameter δ on the accuracy of the
method.
In this chapter we aim to compare the three methods. As mentioned at the
beginning of this project a desirable algorithm would combine ease of implementation,
accuracy, stability and efficiency. We therefore want to conclude whether one of
these methods satisfies our requirements better than the other two, when used on
convection-diffusion equations.
5.1 Ease of Implementation
A method that is easy to implement is immediately a lot more attractive to a possible user. When implementing the methods, we found that the collocation algorithm was the easiest to implement. Its advantage over the Galerkin method was the fact that we did not need to concern ourselves with weak formulations and, more importantly, numerical integrations. It was also simpler than generalised interpolation, as we did not have to consider any functionals and it involved less differentiation. The simplicity of collocation was most evident when we considered the 2D problem. The Galerkin formulation and generalised interpolation methods become increasingly complex as we consider problems in higher dimensions, because of the integrations and multiple differentiations respectively.
5.2 Accuracy and Stability
Two very important aspects of a method are its accuracy and its stability. In this
section we will compare the methods on the quality of the solutions they provide and
on their stability, which has to do with the condition number of the matrix A and
how sensitive the method is to the choice of δ. This section is based on the tables of
Appendix D, Sections D.1 and D.2.
5.2.1 One Dimension
We will first look at how well the methods perform when the solution is stiff, i.e., we take ε = 0.01. Suppose our RBF of choice is the Wendland(1D). Comparing the accuracy of the methods for different numbers of points, we find that collocation provides the best results, followed closely by the Galerkin formulation method. Generalised interpolation proves useless in this setting, as it does not provide us with satisfactory results compared to the other two methods. If we look at the stability of the methods, the best conditioning is given by generalised interpolation and the worst by the Galerkin formulation. Recall that the Galerkin method was very sensitive to the choice of δ and convergence by increasing N could not be obtained for any setting of δ, in contrast with the other two methods.
Suppose now that we use translates of the Gaussian RBF as our basis functions.
The first observation is that all three methods are sensitive to the choice of δ and in-
creasing N does not provide a convergent scheme for any of the settings for δ which we
have considered, i.e., δ = ch and δ = c. The accuracy however is generally better than
when we used the Wendland(1D) RBF. The most accurate method is now generalised
interpolation followed by collocation. The Galerkin method, even though it provides
acceptable accuracy, still stays in last place. In terms of conditioning, all three meth-
ods have large condition numbers, but the worst conditioning is given by the Galerkin
method. Generalised interpolation and collocation have similar conditioning.
Now let us consider the case ε = 0.5 when the Wendland(1D) is used. Each of the
methods we have considered produced acceptable results, however for most choices
of N, generalised interpolation was outperformed by the other two methods. The
Galerkin method produced better accuracy than collocation for up to N = 32 and
after that, the roles were reversed. In terms of conditioning, the Galerkin formulation
was by far the worst of the three and the only one for which convergence could not
be obtained by increasing N for any setting of δ, while generalised interpolation was
by far the best.
Using the Gaussian RBF makes all the methods unstable, i.e., we have large condition numbers and hence sensitivity to the choice of δ. The situation is similar to that for ε = 0.01, with generalised interpolation giving the highest accuracy and the Galerkin method giving the lowest. Once more, using the Gaussian RBF with the right δ results in higher accuracy for all three methods, when compared to the Wendland(1D) RBF.
5.2.2 Two Dimensions
Let us now compare the performance of the methods for our two dimensional problem. We start off again with the stiff case, ε = 0.01, using the Wendland(2D) RBF. As for the corresponding case for the 1D problem, the Galerkin method results in the worst accuracy and conditioning. Generalised interpolation, even though slightly better, still does not produce any useful solutions. Collocation gives the best accuracy and conditioning, making it clearly better than the other two methods and the only one that actually results in approximate solutions that look like the exact ones.
Now, employing the Gaussian RBF for ε = 0.01 not only does not greatly improve the accuracy, but in some cases we find that it actually makes it worse. This fact, combined with the higher instability of the algorithms, does not make the Gaussian an attractive choice. Among the three methods, the best accuracy and conditioning is provided by collocation. The other two methods are both unusable, with the Galerkin one having the lower accuracy of the two.
Next, we look into the smooth case, ε = 0.5. Suppose that we choose basis functions based on the Wendland(2D). It turns out that collocation is once more the method that gives the most accurate approximations to our problem. The worst method in this case is the Galerkin formulation, which results in the lowest accuracy and the highest condition number. It is the only algorithm for which we cannot get convergence for any of the choices of δ we have considered. Collocation and generalised interpolation converge as we increase N if δ = c or δ = ch^{1−2/σ}.
Finally, the last case we will discuss is the performance of the methods for ε = 0.5 when we use the Gaussian RBF. We find that generalised interpolation is clearly better in terms of accuracy. While both of the other two methods produce acceptable results, collocation is clearly better than the Galerkin formulation. In terms of conditioning, generalised interpolation suffers from high condition numbers, as does collocation. The Galerkin formulation, on the other hand, gives lower condition numbers. Interestingly enough, it seems that convergence by increasing N can be obtained only for generalised interpolation with δ = ch.
Figure 5.1: Execution times of the methods for the 2D model problem for fixed ε and δ; (a) Gaussian, (b) Wendland(2D).
5.3 Efficiency
It is also of interest to see how fast our methods run in MATLAB. For this reason we have performed numerical experiments where we run each method ten times, for different numbers of points and fixed ε and δ, and measure each execution time using MATLAB's tic, toc commands. The average execution times for each method, for both the 1D and 2D model problems, are presented, in seconds, in the tables of Appendix D.
Figure 5.1 summarises the results for the 2D case, where it is clear that the Galerkin approach is by far the worst method of the three in terms of efficiency, due to the numerical integration taking place in each loop. We remind the reader that we had used parallel for loops, parfor in MATLAB, in the implementation of the Galerkin formulation method in order to speed it up. Therefore the execution times here are for when two loops are performed simultaneously. No parallel for loops were used in the implementation of the other two methods. Collocation and generalised interpolation have similar execution times, with the latter being slightly slower.
Similar observations can be made for the 1D versions of the methods, where again the Galerkin formulation method is clearly impractical because of the very slow execution times. As far as the other two methods are concerned, their difference in speed is extremely small, especially for smaller values of N, see Tables D.17 and D.18.
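The measurements follow the simple pattern sketched below; solveCollocation, solveGalerkin and solveGI are hypothetical handles standing in for our three implementations.

% Sketch of the timing protocol: average ten runs per method with tic/toc.
% The solver handles are hypothetical stand-ins; N, ep and delta stay fixed.
N = 1024; ep = 0.5; delta = 1; runs = 10;
solvers = {@solveCollocation, @solveGalerkin, @solveGI};
t = zeros(1, numel(solvers));
for k = 1:numel(solvers)
    for m = 1:runs
        tic; solvers{k}(N, ep, delta); t(k) = t(k) + toc;
    end
end
t = t / runs     % average execution time in seconds for each method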
5.4 Conclusion
It is very clear at this point that the worst method in terms of accuracy, stability, ease of implementation and efficiency is the Galerkin formulation method. Another
thing that can be added to the list of disadvantages of this method is the difficulty
of applying it to problems with Dirichlet boundary conditions.
Now in terms of accuracy, we have seen that collocation performs better than generalised interpolation in all cases when the Wendland RBFs are used. The opposite is true when we make use of the Gaussian RBF, which provides better accuracy at the cost of instability.
The choice therefore seems to depend on what one regards as more important,
stability or accuracy. If the highest accuracy is required, then the Gaussian RBF
should be chosen and since all methods are more or less unstable for this choice of
RBF, we might as well choose the one that gives the most accurate solution, i.e.,
generalised interpolation. However we note here that for the stiff version of the
2D model problem, collocation was the only method that could produce acceptable
accuracy for both RBFs.
Now, if one is willing to sacrifice a bit of accuracy in order to have stability and
therefore make the choice of δ easier, the Wendland RBFs should be chosen. The
most accurate method when the Wendland RBFs are used is collocation.
Concluding, our personal choice would be the collocation method used with the Wendland RBFs, especially for stiff problems. It is the most efficient and the easiest to implement of the three, and it also provides us with convergence as we increase the number of points. Its only drawback is that the collocation matrix A might not always be invertible; however, as mentioned in [7], these cases are rare. The lack of theory for collocation, as opposed to the other two approaches, might also be seen as a disadvantage, even though in the case of the Galerkin method the theory is not really applicable.
Chapter 6
Further Work
6.1 Extension: Choice of the Scaling Parameter
We have observed throughout this project that the choice of the scaling parameter
plays an important role in the accuracy of all three methods. We were able to find
the optimal values of δ because the exact solution to our model problems was known.
However this will not be the case when the algorithms are used in practice. It would
therefore be very useful if there was a way to predict which value of δ should be used.
Mongillo in [13] investigates the use of ‘predictor functions’ whose behaviour is similar
to that of the interpolation error, in the case of collocation used for plain interpolation
problems. The optimal parameter δ is chosen by minimizing these predictor functions.
Uddin in [18] has tried one of these 'predictor functions', more specifically 'Leave One Out Cross Validation' (LOOCV), in the context of collocation applied to time-dependent PDEs. As also mentioned in [13], for LOOCV we use N − 1 points, out of the total of N points, in order to compute the solution. We then test the error at the point we have left out. That is, we have
LOOCV(δ) = Σ_{i=1}^N |s_i(x_i) − f_i|²,                                 (6.1)
where s_i is the approximate solution computed from N − 1 points by leaving out the i-th point, and f_i is the exact value at the point x_i. This is repeated N times, leaving out a different point every time. As one might imagine, this procedure increases the computational complexity of the algorithm, as we have to solve N systems of equations. This drawback can be eliminated using Rippa's algorithm ([14], via [13]), which is also the version of the LOOCV algorithm used in [18]. What Rippa essentially showed was that it is not necessary to solve N linear systems, as we have
s_i(x_i) − f_i = C_i / (A⁻¹)_{ii}.                                      (6.2)
Figure 6.1: Log of exact and predicted errors versus δ with N = 64 and ε = 0.01, for the collocation method; (a) Gaussian, (b) Wendland(1D).
As stated in [13], using Rippa's algorithm reduces the computational complexity to O(N³), which is the complexity of solving a single dense linear system of equations, rather than the O(N⁴) needed to solve N such systems.
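A minimal sketch of Rippa's shortcut for a single candidate value of δ is given below, assuming the collocation matrix A and the right-hand side F for that δ have already been assembled.

% Sketch of LOOCV via Rippa's formula (6.2): one dense inverse replaces
% the N leave-one-out solves; A and F are assumed assembled for this delta.
invA  = inv(A);              % we need the diagonal of A^{-1} explicitly
C     = invA * F;            % coefficients computed from all N points
e     = C ./ diag(invA);     % e_i = C_i / (A^{-1})_{ii}, cf. (6.2)
LOOCV = sum(abs(e).^2)       % the predictor (6.1), minimised over delta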
Uddin’s numerical experiments have shown that Rippa’s algorithm is not very
useful when used for time-dependent PDEs. Even though in some cases the results
were fairly good, in general the numerical experiments have shown that it does not
always predict good values for δ. Nevertheless, looking into how well this algorithm,
and more generally ‘predictor functions’, perform for the methods we have used in
this project is a topic that requires further work and study.
Here, we have implemented Rippa’s algorithm for all three methods using both
RBFs. Detailed results can be found in Appendix E for the 1D model problem.
Figure 6.1 shows the behaviour of the exact error and the error predictor function
obtained through LOOCV. We can see that the predictor function mimics the error
behaviour best when the Gaussian RBF is used. However our numerical experiments
seem to agree with Uddin's findings for time-dependent PDEs, i.e., even though in
some cases this algorithm gives good results, see Table E.3, in general it is not a
very reliable way to predict δ. As already mentioned in this section, further work in
finding trustworthy methods for predicting δ is required.
6.2 Extension: Point Distribution
Another interesting extension to this project would be to investigate how the methods perform when non-uniform point distributions are used. We have seen that a uniform distribution is perfectly adequate when the solution to be approximated is smooth, i.e., in our case for ε = 0.5. However, for stiff problems, such as that for ε = 0.01, we saw that a much greater number of points was required in order to obtain acceptable levels of accuracy. This fact provides the motivation to look into other types of meshes.
In this section, we will briefly look at the Shishkin mesh [17] and how it affects
the accuracy of the methods when applied to our 1D model problem. A Shishkin
mesh requires an odd number of points N, or equivalently an even number of mesh
spacings M = N − 1. Then, for an equation of the form
− u + bu = 0,
with constant b > 0, a Shishkin mesh consists of M/2 equal mesh spacings in the
interval [0, 1 − σ] and M/2 equal mesh spacings in the interval [1 − σ, 1], where
σ = min(1/2, 2ε log(N)/b).                                              (6.3)
For our model problem we have b = 1. We note here that when σ = 0.5 the Shishkin mesh reduces to a uniform distribution of the points, see Figure 6.2. This is the case when ε = 0.5, therefore we will consider the case ε = 0.01. A question that naturally arises at this point is whether the value of δ should be the same for all our basis functions. In order to answer this question we will perform our experiments trying out different values of δ in each of the two differently scaled sections of the interval [0, 1].
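The mesh itself is straightforward to construct; the following is a minimal MATLAB sketch for our setting (b = 1), with the two-δ bookkeeping omitted.

% Sketch: build a Shishkin mesh on [0,1] for -ep*u'' + b*u' = 0 with b = 1.
N = 27; M = N - 1; b = 1; ep = 0.01;
sig = min(0.5, 2*ep*log(N)/b);            % transition parameter, cf. (6.3)
x = [linspace(0, 1 - sig, M/2 + 1), ...   % M/2 coarse spacings on [0, 1-sig]
     linspace(1 - sig, 1, M/2 + 1)];      % M/2 fine spacings on [1-sig, 1]
x = unique(x);                            % the knot 1-sig appears twice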
Carrying out our numerical experiments requires us to adjust our code in order to
admit two values for the scaling parameter δ instead of one, i.e., we now have δ = δ1
in the first part of the interval and δ = δ2 in the second part. We then repeatedly
solved our model problem each time using different combinations for our δ values,
for both RBFs and all three methods. We then picked out, for each method and
(a) Shiskin mesh for = 0.5 is just a
uniform distribution.
(b) Shishkin mesh for = 0.01 splits
the interval in two.
Figure 6.2: Shishkin mesh for = 0.01 and = 0.5 using N = 27 points.
52
RBF, the two δ values, amongst those considered, for which the approximate solution
displayed the lowest error. Detailed results form these experiments can be found in
Appendix E, Section E.2.
We found that using the Shishkin mesh in almost all cases considerably reduces
the error. Exceptions are for the collocation method with both RBFs and generalised
interpolation for the Gaussian, where for N = 9 and N = 27 the error actually
increases. However we feel that this might be avoided if we consider more δ values
from the same interval. We observe that for the Galerkin formulation method, using
the Shishkin mesh leads to reductions in the error as great as 99%, for example for the Gaussian RBF with N = 27, which gives the smallest error produced by any combination of method and RBF for this many points. The most accurate solution, however, is
produced by collocation using the Wendland(1D) RBF for N = 243. It is also worth
noting that in most cases we have δ2 < δ1. Moreover, we observe that in general
the Shishkin mesh with the two different values for δ produces more ill-conditioned
systems than a uniform distribution does.
It is clear that a non-uniform distribution of points might be beneficial for stiff problems; however, it does complicate RBF methods, especially with respect to how
δ should be chosen. It would be interesting to look into how different distributions
affect the accuracy and also the stability of each of the methods, and whether some
of those methods work better for particular meshes. Without any doubt, alternative
meshes and specifically the Shishkin mesh are worthy of further research.
6.3 Extension: Multilevel Algorithms
Finally, a possible extension would be to look into how well the multilevel versions of
the three methods perform. In general, a multilevel algorithm consists of solving the
same problem on each level and only changing the right hand side. As mentioned in
[20], ‘on each level the residual of the previous level is interpolated’.
The multilevel version of generalised interpolation is investigated in [4] as a way to overcome the problem of ill-conditioning of the collocation matrix A. This version of the method is known to converge for δ = ch^{1−2/σ}. A multilevel algorithm, based on a Galerkin formulation method, has also been proposed for the solution of PDEs with Neumann boundary conditions in [20]. In that paper, the multilevel algorithm is investigated as a solution to the problem of obtaining a convergent scheme while at the same time not eliminating the sparsity of the matrix A. The multilevel versions have been investigated theoretically in [4] and [20].
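To fix ideas, the generic structure of such an algorithm is sketched below; pointsOnLevel, applyL (which applies the differential operator to the current approximant) and solveLevel are hypothetical helpers, and details such as boundary data are omitted.

% A sketch of the generic multilevel structure only: all three helper
% routines below are hypothetical, and f denotes the right-hand-side data.
levels = 4;
s = @(x) zeros(size(x));           % start from the zero approximation
for l = 1:levels
    xl  = pointsOnLevel(l);        % progressively finer point sets
    res = f(xl) - applyL(s, xl);   % residual of the previous level
    sl  = solveLevel(xl, res);     % same problem, new right-hand side
    s   = @(x) s(x) + sl(x);       % add the correction on this level
end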
A potential extension would be to formulate the multilevel version for the col-
location method and also implement the remaining two methods in this setting for
our convection-diffusion PDEs. We would then move on to compare the conditioning
and accuracy of each of the algorithms. Also, it is worth comparing them with the
initial version of the methods in order to determine whether the extra effort and time
required for these algorithms is well spent.
Appendix A
Collocation
The δ values chosen for each number of points are the best, in terms of accuracy of
the method, within the range of values considered. We in no way claim that these
are the absolute best values to be used.
A.1 1D
N δ Error cond(A) Eigenvalues¹
4 0.8 0.40634 7.1789 0
8 0.6 0.19869 236.232 0
16 5 0.062222 60258581.0221 0
32 5 0.022785 1386351862.49 0
64 5 0.0056014 22687684893.9963 0
128 5 0.00065556 271760927278.709 0
256 5 0.00012573 2646791243817.722 0
512 3.2 1.7216e-05 5102837511132.485 0
Table A.1: Results using Wendland function for ε = 0.01.
¹ For the Eigenvalues column: 0 indicates complex eigenvalues and 1 only real eigenvalues.
N δ Error cond(A) Eigenvalues
4 0.34 0.40923 8.9518 0
8 0.15 0.20734 36.3858 0
16 0.47 0.049507 1.863193727017158e+16 0
32 0.08 0.021768 131194.0161 0
64 0.12 0.00066631 4.217959126339261e+17 0
128 0.06 6.3811e-05 1.675720930657156e+18 0
256 0.02 2.392e-05 6.086050711230822e+17 0
512 0.02 2.1085e-05 4.318159163807684e+18 0
Table A.2: Results using Gaussian function for ε = 0.01.
N δ Error cond(A) Eigenvalues
4 5 0.012211 684.8798 1
8 5 0.0014008 24962.042 1
16 5 0.00017005 480852.7521 0
32 5 2.0912e-05 7641571.0764 0
64 5 2.5846e-06 118572387.009 0
128 5 3.1776e-07 1854779891.047 0
256 5 3.976e-08 29289234459.2367 0
512 5 5.2946e-09 465347995430.5385 0
Table A.3: Results using Wendland function for ε = 0.5.
N δ Error cond(A) Eigenvalues
4 5 0.0074665 1061857.5419 1
8 3 3.4202e-06 742939487097616.3 1
16 0.67 9.3575e-08 4.524462381605034e+16 0
32 0.33 2.4145e-08 2.759966451309931e+17 0
64 0.3 2.2839e-07 1.089502194484428e+18 0
128 0.15 9.1153e-08 3.875515968623898e+18 0
256 0.07 5.2388e-08 3.878070971754523e+18 0
512 0.19 1.151e-07 7.525978178568281e+19 0
Table A.4: Results using Gaussian function for ε = 0.5.
A.2 2D
N δ Error cond(A) Eigenvalues
16 5 0.508194 186938.025475 0
64 0.3 0.301323 50.314541 0
256 0.6 0.094124 325107.765529 0
1024 5 0.015602 800617423818.607 0
4096 5 0.003259 91297640853704.6 0
Table A.5: Results using Wendland function for ε = 0.01.
N δ Error cond(A) Eigenvalues
16 0.6 0.52026 5692.0091 0
64 0.1 0.30343 46.3017 0
256 0.1 0.14144 58512.3481 0
1024 0.07 0.029366 1085482055.205480 0
4096 0.05 0.003157 6.13886162265298e+19 0
Table A.6: Results using Gaussian function for ε = 0.01.
N δ Error cond(A) Eigenvalues
16 5 0.024465 58605.2944 1
64 5 0.0026371 27764490.1825 0
256 5 0.0002832 6485369347.7429 0
1024 5 2.6301e-05 1033308657219.152 0
4096 5 2.4048e-06 144800011412563.3 0
Table A.7: Results using Wendland function for ε = 0.5.
N δ Error cond(A) Eigenvalues
16 5 0.019235 82551618520355.1 1
64 1.62 9.1e-05 4.95066936940742e+18 0
256 0.49 1e-06 1.83830041750604e+19 0
1024 0.29 1e-06 3.00422330230600e+20 0
4096 0.25 6e-06 2.84007149271890e+22 0
Table A.8: Results using Gaussian function for ε = 0.5.
Figure A.1: Log of the error (a) and condition number (b) versus the scaling parameter δ for ε = 0.5 and N = 64, using the Wendland(2D) RBF. Minimum error is 0.002637, for δ = 5 with condition number 27764490.18245.
Figure A.2: Log of the error (a) and condition number (b) versus the scaling parameter δ for ε = 0.5 and N = 64, using the Gaussian RBF. Minimum error is 0.000091, for δ = 1.62 with condition number 4.95066936940742e+18.
Appendix B
Galerkin Formulation
The δ values in the following tables give the best accuracy within the range we have
considered. As the Galerkin Formulation method has painfully slow execution times,
we had to restrict the range of δ values we have considered as well as the number of
points.
B.1 1D
N δ Error cond(A) Eigenvalues
4 0.9 0.433397 58.1637 0
8 0.25 0.288309 170.291246 0
16 2 0.073294 339691646549598.9 0
32 1.46 0.022290 7.89359235137908e+16 0
64 0.19 0.005615 36392621426.390694 0
128 0.09 0.001345 23212406003.945152 0
256 0.06 0.000874 70484873188531.266 0
Table B.1: Results using Wendland function for ε = 0.01.
N δ Error cond(A) Eigenvalues
4 0.35 0.43574 62.6952 0
8 1.6 0.18091 4.328950475744867e+16 1
16 0.26 0.038523 5.265356321020146e+17 0
32 0.1 0.0066638 1.361314063016573e+18 0
64 0.05 0.0010972 2.619578357454881e+17 0
128 0.03 0.00020555 2.19561102223129e+18 0
256 0.01 8.5308e-05 1014938382486065 0
Table B.2: Results using Gaussian function for ε = 0.01.
N δ Error cond(A) Eigenvalues
4 5 0.0050045 242574753.3912 1
8 5 0.00029727 35655713527269.57 1
16 4.1 3.0000e-05 4.42760613410415e+16 1
32 2.03 6.5529e-06 2.105717664442266e+17 1
64 1.71 8.0281e-06 1.597721924465754e+18 0
128 1.67 9.1199e-06 2.170646574057813e+19 0
256 1.61 6.8245e-06 5.923205956564853e+19 0
Table B.3: Results using Wendland function for ε = 0.5.
N δ Error cond(A) Eigenvalues
4 4.8 0.0020629 276392112925696.7 1
8 0.7 2.901e-05 2111846305320057 1
16 0.36 5.6092e-06 3.538336270104092e+17 0
32 0.26 2.0748e-06 4.658050252898005e+19 0
64 0.13 2.2661e-06 4.473572009763266e+19 0
128 0.05 2.5265e-06 1.301690708433364e+19 0
256 0.1 2.5583e-06 4.695635430601746e+20 0
Table B.4: Results using Gaussian function for ε = 0.5.
B.2 2D
N δ Error cond(A) Eigenvalues
16 1 0.46142 1188.8894 0
64 1 0.4336 686336430.5341 0
256 0.1 0.40509 4.924631211639372e+19 0
1024 0.1 0.29478 5.322726475826274e+19 0
Table B.5: Results using Wendland function for ε = 0.01.
N δ Error cond(A) Eigenvalues
16 1 0.48025 5136276.6079 0
64 0.1 0.92714 9700973739763108 0
256 1 1.1924 4346555597682143 0
1024 0.1 1.4182 9.811440348332992e+19 0
Table B.6: Results using Gaussian function for ε = 0.01.
N δ Error cond(A) Eigenvalues
16 0.1 0.014283 249105911903.8025 1
64 0.1 0.0008122 7.207746060094888e+17 0
256 0.1 0.0021552 7.312308496581809e+18 0
1024 1 0.002126 9760447139440.068 0
Table B.7: Results using Wendland function for ε = 0.5.
N δ Error cond(A) Eigenvalues
16 1 0.019492 2079281.5879 1
64 1 0.0068897 623220457232.2056 0
256 1 0.00652 402103070535078.6 0
1024 1 0.0097393 3.742233733577658e+16 0
Table B.8: Results using Gaussian function for ε = 0.5.
Figure B.1: Log of the error (a) and condition number (b) versus the scaling parameter δ for ε = 0.5 and N = 64, using the Wendland(2D) RBF. Minimum error is 0.003078, for δ = 0.1 with condition number 720774606009488770.
Figure B.2: Log of the error (a) and condition number (b) versus the scaling parameter δ for ε = 0.5 and N = 64, using the Gaussian RBF. Minimum error is 0.006890, for δ = 1 with condition number 623220457232.205570.
Appendix C
Generalised Interpolation
C.1 1D
N δ Error cond(A) Eigenvalues
4 1.8 0.45536 112.9914 1
8 5 0.36434 158126.1561 1
16 5 0.25498 8863191.6872 1
32 5 0.22782 233303888.1452 1
64 5 0.25101 2848330301.4329 1
128 5 0.21819 21239886014.0627 1
256 5 0.13402 106507240858.5696 1
512 5 0.05704 365354941172.672 1
Table C.1: Results using Wendland function for ε = 0.01.
N δ Error cond(A) Eigenvalues
4 0.55 0.45047 68.8652 1
8 0.28 0.34876 1193.5552 1
16 0.56 0.03694 1.702112482244674e+16 1
32 0.16 0.010234 1.326559500827952e+17 1
64 0.09 0.000840 1.000396558251471e+18 0
128 0.04 3.2671e-05 4.153978485650736e+17 0
256 0.02 2.139e-05 7.905967417209493e+17 0
512 0.02 2.1576e-07 1.533557481596383e+18 0
Table C.2: Results using Gaussian function for ε = 0.01.
N δ Error cond(A) Eigenvalues
4 5 0.023871 51.9623 1
8 5 0.0051338 204.8982 1
16 5 0.0011399 597.2818 1
32 5 0.00026767 2383.4702 1
64 5 6.4983e-05 9897.304 1
128 5 1.6025e-05 40304.9445 1
256 5 3.9805e-06 162640.6925 1
512 5 9.9199e-07 653394.4112 1
Table C.3: Results using Wendland function for ε = 0.5.
N δ Error cond(A) Eigenvalues
4 5 0.0074469 60144.4006 1
8 4.6 1.6556e-06 1595685885209409 1
16 0.86 1.3606e-08 5.145780341841712e+17 1
32 0.36 3.5396e-09 1.924714104632663e+17 0
64 0.36 6.3564e-09 2.551981270282159e+18 0
128 0.1 1.121e-09 5.736698929414776e+18 0
256 0.08 4.9564e-09 2.821489837620749e+18 0
512 0.23 1.5405e-08 4.762049908883697e+19 0
Table C.4: Results using Gaussian function for ε = 0.5.
C.2 2D
N δ Error cond(A) Eigenvalues
16 5 0.48945 66049.23 1
64 1.1 0.43249 194217.6804 1
256 0.9 0.3511 21864049.0801 1
1024 5 0.22714 3483390037138.676 1
4096 5 0.18793 504276784046839.3 1
Table C.5: Results using Wendland function for ε = 0.01.
N δ Error cond(A) Eigenvalues
16 0.44 0.51916 890.5127 1
64 0.22 0.46686 47120.3821 1
256 0.61 0.24758 1.465149099790438e+21 1
1024 0.25 0.14459 4.631545773847626e+22 1
4096 0.04 0.044236 445868376077.6423 1
Table C.6: Results using Gaussian function for ε = 0.01.
N δ Error cond(A) Eigenvalues
16 5 0.032098 51058.8644 1
64 5 0.0061729 70823500.5084 1
256 5 0.001049 22627183199.4702 1
1024 5 0.00015061 4098731439680.468 1
4096 5 1.992e-05 609077055765232 1
Table C.7: Results using Wendland function for ε = 0.5.
N δ Error cond(A) Eigenvalues
16 2.15 0.008364 361092141.253364 1
64 1.65 0.000042 1.49584049996374e+18 1
256 0.69 3.3031e-07 1.342322394773982e+20 1
1024 0.31 3.3604e-08 4.603221314210624e+21 1
4096 0.13 3.3061e-08 1.01980084765424e+23 1
Table C.8: Results using Gaussian function for ε = 0.5.
Figure C.1: Log of the error (a) and condition number (b) versus the scaling parameter δ for ε = 0.5 and N = 64, using the Wendland(2D) RBF. Minimum error is 0.006173, for δ = 5 with condition number 70823500.508384.
Figure C.2: Log of the error (a) and condition number (b) versus the scaling parameter δ for ε = 0.5 and N = 64, using the Gaussian RBF. Minimum error is 0.000042, for δ = 1.65 with condition number 798519455837603580.
Appendix D
Comparison
D.1 Accuracy Comparison
The numbers (1), (2), (3) are there to order the methods in terms of accuracy, namely,
(1) indicates the lowest error and (3) indicates the highest. We remind the reader
that we had to restrict the number of points we considered for the Galerkin method
because of the really slow execution times. This is the reason behind the missing
results in the tables.
D.1.1 1D
N Collocation Galerkin Formulation Generalised Interpolation
4 (1) 0.40634 (2) 0.433397 (3) 0.45536
8 (1) 0.19869 (2) 0.288309 (3) 0.36434
16 (1) 0.06222 (2) 0.073294 (3) 0.25498
32 (2) 0.022785 (1) 0.022290 (3) 0.22782
64 (1) 0.0056014 (2) 0.005615 (3) 0.25101
128 (1) 0.00065556 (2) 0.001345 (3) 0.21810
256 (1) 0.00012573 (2) 0.000874 (3) 0.13402
512 (1) 1.7216e-05 - (2) 0.05704
Table D.1: Results using Wendland function for ε = 0.01.
N Collocation Galerkin Formulation Generalised Interpolation
4 (1) 0.40923 (2) 0.43574 (3) 0.45047
8 (2) 0.20734 (1) 0.18091 (3) 0.34876
16 (3) 0.049507 (2) 0.038523 (1) 0.03694
32 (3) 0.021768 (1) 0.0066638 (2) 0.010234
64 (1) 0.00066631 (3) 0.0010972 (2) 0.00071748
128 (2) 6.3811e-05 (3) 0.00020555 (1) 3.2671e-05
256 (2) 2.392e-05 (3) 8.5308e-05 (1) 2.139e-05
512 (2) 2.1085e-05 - (1) 2.1576e-07
Table D.2: Results using the Gaussian function for ε = 0.01.
N Collocation Galerkin Formulation Generalised Interpolation
4 (2) 0.012211 (1) 0.005 (3) 0.023871
8 (2) 0.0014008 (1) 0.00029727 (3) 0.0051338
16 (2) 0.00017005 (1) 3.0000e-05 (3) 0.0011399
32 (1) 2.0912e-05 (1) 6.5529e-06 (3) 0.00026767
64 (1) 2.5846e-06 (2) 8.0281e-06 (3) 6.4983e-05
128 (1) 3.1776e-07 (2) 9.1199e-06 (3) 1.6025e-05
256 (1) 3.976e-08 (2) 6.8245e-06 (3) 3.9805e-06
512 (1) 5.2946e-09 - (2) 9.9199e-07
Table D.3: Results using the Wendland(1D) function for ε = 0.5.
N Collocation Galerkin Formulation Generalised Interpolation
4 (3) 0.0074665 (1) 0.0020629 (2) 0.0074469
8 (2) 3.4202e-06 (3) 2.901e-05 (1) 1.655e-06
16 (2) 9.3575e-08 (3) 5.6092e-06 (1) 1.3606e-08
32 (2) 2.4145e-08 (3) 2.0748e-06 (1) 3.5396e-09
64 (2) 2.2839e-07 (3) 2.2661e-06 (1) 6.3564e-09
128 (2) 9.1153e-08 (3) 2.5265e-06 (2) 1.121e-08
256 (2) 5.2388e-08 (3) 2.5583e-06 (1) 4.9564e-09
512 (2) 1.151e-07 - (1) 1.5405e-08
Table D.4: Results using the Gaussian function for ε = 0.5.
D.1.2 2D
N Collocation Galerkin Formulation Generalised Interpolation
16 (3) 0.508194 (1) 0.46142 (2) 0.48945
64 (1) 0.301323 (3) 0.4336 (2) 0.43249
256 (1) 0.094124 (3) 0.40509 (2) 0.3511
1024 (1) 0.015602 (3) 0.29478 (2) 0.22714
4096 (1) 0.003259 - (2) 0.18793
Table D.5: Results using the Wendland(2D) function for ε = 0.01.
N Collocation Galerkin Formulation Generalised Interpolation
16 (3) 0.52026 (1) 0.48025 (2) 0.51916
64 (1) 0.30343 (3) 0.92714 (2) 0.46689
256 (1) 0.14144 (3) 1.1924 (2) 0.24758
1024 (1) 0.029366 (3) 1.4182 (2) 0.14459
4096 (1) 0.003157 - (2) 0.044236
Table D.6: Results using the Gaussian function for ε = 0.01.
N Collocation Galerkin Formulation Generalised Interpolation
16 (2) 0.024465 (1) 0.014283 (3) 0.032098
64 (2) 0.0026371 (1) 0.0008122 (3) 0.0061729
256 (1) 0.0002832 (3) 0.0021552 (2) 0.001049
1024 (1) 2.6301e-05 (3) 0.002120 (2) 0.00015061
4096 (1) 2.4048e-06 - (2) 1.992e-05
Table D.7: Results using the Wendland(2D) function for ε = 0.5.
N Collocation Galerkin Formulation Generalised Interpolation
16 (2) 0.019235 (3) 0.19492 (1) 0.008364
64 (2) 9.1000e-05 (3) 0.0068897 (1) 4.2e-05
256 (2) 1.0000e-06 (3) 0.00652 (1) 3.3031e-07
1024 (2) 1.0000e-06 (3) 0.0097393 (1) 3.3604e-08
4096 (2) 6.0000e-06 - (1) 3.3061e-08
Table D.8: Results using the Gaussian function for ε = 0.5.
D.2 Conditioning Comparison
D.2.1 1D
N Collocation Galerkin Formulation Generalised Interpolation
4 7.1789 58.1637 112.9914
8 236.232 170.291246 158126.1561
16 60258581.0221 339691646549598.9 8863191.6872
32 1386351862.49 7.89359235137908e+16 233303888.1452
64 22687684893.9963 36392621426.390694 2848330301.4329
128 271760927278.709 23212406003.945152 21239886014.0627
256 2646791243817.722 70484873188531.266 106507240858.5696
512 5102837511132.485 - 365354941172.672
Table D.9: Condition numbers using the Wendland(1D) function for ε = 0.01.
N Collocation Galerkin Formulation Generalised Interpolation
4 8.9518 62.6952 68.8652
8 36.3858 4.328950475744867e+16 1193.5552
16 1.863193727017158e+16 5.265356321020146e+17 1.702112482244674e+16
32 131194.0161 1.361314063016573e+18 1.326559500827952e+17
64 4.217959126339261e+17 2.619578357454881e+17 1.000396558251471e+18
128 1.675720930657156e+18 2.19561102223129e+18 4.153978485650736e+17
256 6.086050711230822e+17 1014938382486065 7.905967417209493e+17
512 4.318159163807684e+18 - 1.533557481596383e+18
Table D.10: Condition numbers using the Gaussian function for ε = 0.01.
N Collocation Galerkin Formulation Generalised Interpolation
4 684.8798 242574753.3912 51.9623
8 24962.042 35655713527269.57 204.8982
16 480852.7521 4.42760613410415e+16 597.2818
32 7641571.0764 2.105717664442266e+17 2383.4702
64 118572387.009 1.597721924465754e+18 9897.304
128 1854779891.047 2.170646574057813e+19 40304.9445
256 29289234459.2367 5.923205956564853e+19 162640.6925
512 465347995430.5385 - 653394.4112
Table D.11: Condition numbers using the Wendland(1D) function for ε = 0.5.
N Collocation Galerkin Formulation Generalised Interpolation
4 1061857.5419 276392112925696.7 60144.4006
8 742939487097616.3 2111846305320057 1595685885209409
16 4.524462381605034e+16 3.538336270104092e+17 5.145780341841712e+17
32 2.759966451309931e+17 4.658050252898005e+19 1.924714104632663e+17
64 1.089502194484428e+18 4.473572009763266e+19 2.551981270282159e+18
128 3.875515968623898e+18 1.301690708433364e+19 5.736698929414776e+18
256 3.878070971754523e+18 4.695635430601746e+20 2.821489837620749e+18
512 7.525978178568281e+19 - 4.762049908883697e+19
Table D.12: Condition numbers using the Gaussian function for ε = 0.5.
D.2.2 2D
N Collocation Galerkin Formulation Generalised Interpolation
16 186938.025475 1188.8894 66049.23
64 50.314541 686336430.5341 194217.6804
256 325107.765529 4.924631211639372e+19 21864049.0801
1024 800617423818.607 5.322726475826274e+19 3483390037138.676
4096 91297640853704.6 - 504276784046839.3
Table D.13: Condition numbers using the Wendland(2D) function for ε = 0.01.
N Collocation Galerkin Formulation Generalised Interpolation
16 5692.0091 5136276.6079 890.5127
64 46.3017 9700973739763108 47120.3821
256 58512.3481 4346555597682143 1.465149099790438e+21
1024 1085482055.205480 9.811440348332992e+19 4.631545773847626e+22
4096 6.13886162265298e+19 - 445868376077.6423
Table D.14: Condition numbers using the Gaussian function for ε = 0.01.
N Collocation Galerkin Formulation Generalised Interpolation
16 58605.2944 249105911903.8025 51058.8644
64 27764490.1825 7.207746060094888e+17 70823500.5084
256 6485369347.7429 7.312308496581809e+18 22627183199.4702
1024 1033308657219.152 9760447139440.068 4098731439680.468
4096 144800011412563.3 - 609077055765232
Table D.15: Condition numbers using the Wendland(2D) function for ε = 0.5.
N Collocation Galerkin Formulation Generalised Interpolation
16 82551618520355.1 2079281.5879 361092141.253364
64 4.95066936940742e+18 623220457232.2056 1.49584049996374e+18
256 1.83830041750604e+19 402103070535078.6 1.342322394773982e+20
1024 3.00422330230600e+20 3.742233733577658e+16 4.603221314210624e+21
4096 2.84007149271890e+22 - 1.01980084765424e+23
Table D.16: Condition numbers using the Gaussian function for ε = 0.5.
D.3 Efficiency Comparison
D.3.1 1D
N Collocation Galerkin Formulation Generalised Interpolation
4 0.3289 0.4004 0.29874
8 0.36008 0.53399 0.30214
16 0.37228 0.79611 0.32572
32 0.41707 1.8339 0.39505
64 0.4835 5.8412 0.53788
128 0.73317 18.802 0.87016
256 1.4056 71.4824 2.2811
Table D.17: Execution times (in seconds) of the methods when the Wendland(1D) function is used.
N Collocation Galerkin Formulation Generalised Interpolation
4 0.31229 0.41018 0.26693
8 0.31422 0.46013 0.28149
16 0.33549 0.63725 0.33069
32 0.42061 1.251 0.44263
64 0.54576 3.5872 0.76113
128 0.98028 11.9279 1.6383
256 2.4739 43.1757 5.4173
Table D.18: Execution times (in seconds) of the methods when the Gaussian function is used.
D.3.2 2D
N Collocation Galerkin Formulation Generalised Interpolation
16 0.41514 4.4828 0.56719
64 0.79762 87.0723 1.6729
256 4.1495 1082.24 7.5874
1024 38.1636 20364.0093 45.0266
Table D.19: Execution times (in seconds) of the methods when the Wendland(2D) function is used.
N Collocation Galerkin Formulation Generalised Interpolation
16 0.43681 1.5006 0.67272
64 1.2195 16.2408 3.0231
256 8.1501 241.4667 15.8797
1024 77.5698 3811.1843 94.0172
Table D.20: Execution times (in seconds) of the methods when the Gaussian function is used.
Appendix E
Further Work
E.1 Choice of Scaling Parameter
ε = 0.01 δ Error Predicted δ Error
Collocation 5 0.0026214 0.1 0.017358
Galerkin 0.2 0.0086031 2.2 2.7256
Gen. Interpolation 5 0.24861 0.1 0.86234
Table E.1: Results for the actual δ for which the error is minimised and for the predicted δ with the corresponding error. RBF used is the Wendland(1D) with N = 64 for ε = 0.01.
ε = 0.01 δ Error Predicted δ Error
Collocation 0.1 0.00046022 0.07 0.00096631
Galerkin 0.05 0.00098121 0.14 0.018369
Gen. Interpolation 0.09 0.0003916 0.1 0.0012091
Table E.2: Results for the actual δ for which the error is minimised and for the predicted δ with the corresponding error. RBF used is the Gaussian with N = 64 for ε = 0.01.
ε = 0.5 δ Error Predicted δ Error
Collocation 5 1.9913e-05 5 1.9913e-05
Galerkin 1.9 8.2733e-06 2.6 1.236e-05
Gen. Interpolation 5 0.00026325 5 0.00026325
Table E.3: Results for the actual δ for which the error is minimised and for the predicted δ with the corresponding error. RBF used is the Wendland(1D) with N = 32 for ε = 0.5.
ε = 0.5 δ Error Predicted δ Error
Collocation 0.31 3.9171e-08 0.46 1.0208e-06
Galerkin 0.17 2.3076e-06 0.11 1.8138e-05
Gen. Interpolation 0.43 4.3992e-09 0.49 2.7605e-08
Table E.4: Results for the actual δ for which the error is minimised and for the predicted δ with the corresponding error. RBF used is the Gaussian with N = 32 for ε = 0.5.
E.2 Point Distribution
This section contains results from all three methods, for both RBFs, when used with a Shishkin mesh to solve the 1D problem with ε = 0.01. Note that there are two values of δ here: δ1 is the value of δ used in the interval [0, 1 − σ], and δ2 is the value used in the interval (1 − σ, 1]. The U.Error column gives the error of the approximate solution when a uniform distribution of points is used, for the value of δ which minimises it, and the U.cond(A) column shows the condition number of the corresponding matrix.
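As an illustration of how such a point distribution can be generated, the following MATLAB sketch builds a Shishkin mesh for the 1D problem. The transition point σ = min(1/2, 2ε ln N) is the standard choice for this problem class (see [17]); the exact value of σ used in the experiments below is not repeated here, so that formula should be read as an assumption.

epsilon = 0.01;
N = 81;                              % one of the values used in Tables E.5-E.10
sigma = min(0.5, 2*epsilon*log(N));  % assumed transition point
xL = linspace(0, 1-sigma, (N+1)/2);  % coarse points on [0, 1-sigma], paired with delta1
xR = linspace(1-sigma, 1, (N+1)/2);  % fine points inside the layer, paired with delta2
x = [xL, xR(2:end)];                 % N points in total; the point 1-sigma is shared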
E.2.1 Collocation
N δ1 δ2 Error cond(A) U.Error U.cond(A)
9 0.4 0.1 0.39562 8.38e+02 0.30291 1.91e+03
27 3.8 3.8 0.0017414 7.97e+10 0.029909 6.64e+08
81 3.7 3.7 3.0637e-05 1.15e+12 0.00295 5.44e+10
243 0.9 1.4 1.9548e-06 4.35e+14 0.00014385 1.78e+12
Table E.5: Results for Collocation using the Wendland(1D) RBF for ε = 0.01.
N δ1 δ2 Error cond(A) U.Error U.cond(A)
9 0.14 0.02 0.40512 2.39e+02 0.25087 30.0
27 0.62 0.02 0.039876 3.61e+17 0.029754 6.49e+17
81 0.26 0.04 4.8402e-05 2.96e+18 0.00027538 4.58e+17
243 0.12 0.02 4.9847e-06 2.04e+19 5.9445e-06 1.86e+18
Table E.6: Results for Collocation using the Gaussian RBF for ε = 0.01.
E.2.2 Galerkin Formulation
N δ1 δ2 Error cond(A) U.Error U.cond(A)
9 2 1.4 0.012246 1.12e+17 0.28796 3.25e+02
27 3.3 0.1 0.0050187 1.62e+17 0.033607 3.28e+16
81 0.2 0.1 0.0023519 5.60e+12 0.0033483 1.76e+12
243 0.1 0.1 0.0010913 5.61e+14 0.001579 7.20e+13
Table E.7: Results for Galerkin Formulation using the Wendland(1D) RBF for ε = 0.01.
N δ1 δ2 Error cond(A) U.Error U.cond(A)
9 0.26 0.42 0.055148 2.77e+16 0.11467 2.04e+16
27 0.28 0.02 9.0069e-05 1.60e+17 0.012188 3.53e+18
81 0.08 0.02 4.6448e-05 5.89e+19 0.0011801 4.04e+19
243 0.04 0.02 6.6215e-05 2.49e+20 0.0002221 5.92e+18
Table E.8: Results for Galerkin Formulation using the Gaussian RBF for ε = 0.01.
E.2.3 Generalised Interpolation
N δ1 δ2 Error cond(A) U.Error U.cond(A)
9 5 0.1 0.27934 9.16e+06 0.34495 3.17e+05
27 2.7 1.2 0.012462 3.12e+06 0.22412 1.13e+08
81 3.9 1.3 0.00070319 1.57e+08 0.24776 5.90e+09
243 3.7 1.3 7.4493e-05 9.60e+08 0.14088 9.57e+10
Table E.9: Results for Generalised Interpolation using the Wendland(1D) RBF for ε = 0.01.
N δ1 δ2 Error cond(A) U.Error U.cond(A)
9 0.3 0.04 0.2901 1.08e+04 0.087719 5.88e+06
27 0.84 0.02 0.039681 8.39e+18 0.016475 1.93e+12
81 0.32 0.04 5.477e-05 2.00e+20 0.00039222 1.13e+17
243 0.08 0.04 3.1187e-06 2.31e+19 9.456e-06 2.52e+17
Table E.10: Results for Generalised Interpolation using the Gaussian RBF for ε = 0.01.
Appendix F
MATLAB code
F.1 Collocation
F.1.1 Coll_1D.m

function [X, A, Us] = Coll_1D(epsilon, N, choice, delta)
% Calculates the solution to the ODE
%   -epsilon*u'' + u' = 0,  epsilon > 0,  for 0 < x < 1,
% with boundary conditions u(0) = 1, u(1) = 0.
% Method: Collocation
% Radial basis function of the form phi_j = Phi(r)
% where r = |x - x_j|/delta.
% Artemis Nika 2014

%% Discretization in x-direction:

% Equispaced points
x = linspace(0, 1, N);

%% Radial basis function and derivatives (wrt r)

if choice == 1
    % this radial basis function has compact support for 0 <= r < 1.
    Phi = @(r) heaviside(1-r) * (1-r)^5 * (8*r^2 + 5*r + 1);
    % heaviside(0) = 0.5, but here it is not a problem as for r = 1
    % the rest of Phi is 0.
    dPhi = @(r) heaviside(1-r) * (1-r)^4 * (-56*r^2 - 14*r);       % 1st deriv.
    ddPhi = @(r) heaviside(1-r) * (1-r)^3 * (336*r^2 - 42*r - 14); % 2nd deriv.
elseif choice == 2
    % no compact support
    Phi = @(r) exp(-r^2);
    dPhi = @(r) -2*r*exp(-r^2);
    ddPhi = @(r) (-2 + 4*r^2) * exp(-r^2);
else
    error('Choice of radial basis not correct. Choices: 1 = compact support, 2 = no compact support')
end

%% Create matrix equation
A = zeros(N, N);
% for 0 < x < 1
for i = 2:N-1
    for j = 1:N
        r = abs(x(i) - x(j)) / delta; % x_j -> phi_j(x_i)
        % first derivative of phi_j alternates sign because of the abs value
        if x(i) > x(j)
            dphi = (1/delta) * dPhi(r);
        else
            dphi = -(1/delta) * dPhi(r);
        end
        ddphi = (1/delta^2) * ddPhi(r);
        % Calculate entries of matrix
        A(i, j) = -epsilon*ddphi + dphi;
    end
end

% First and last rows of matrix A - boundary conditions
for j = 1:N
    % for x_i = 0 = x_1:
    A(1, j) = Phi(abs(0 - x(j)) / delta); % x_1 = 0 -> phi_j(0)
    % for x_i = 1 = x_N:
    A(N, j) = Phi(abs(1 - x(j)) / delta);
end

% RHS of matrix equation
b = zeros(N, 1);
b(1) = 1; % because of u(0) = 1

%% Solve matrix equation to obtain coefficient vector c
c = A\b;

%% Add up coefficients to obtain solution vector Us
Nsol = 100;
Us = zeros(Nsol, 1);
X = linspace(0, 1, Nsol);

for i = 1:Nsol
    dummy = 0;
    for j = 1:N
        dummy = dummy + c(j)*Phi(abs(X(i) - x(j)) / delta);
    end
    Us(i) = dummy;
end

%% Exact solution
uExact = (1 - exp((X-1)/epsilon)) / (1 - exp(-1/epsilon));

%% Error
Error = zeros(Nsol, 1);
for i = 1:Nsol
    Error(i) = abs(Us(i) - uExact(i));
end

%% Plots
h = 1/(N-1); % mesh spacing, used in the plot title
figure(1)

subplot(2,1,1)
plot(X, Us, 'r', 'LineWidth', 2);
axis([0, 1, 0, 1.5]);
str = sprintf('epsilon= %f, delta= %f, h= %f', epsilon, delta, h);
title(str);
xlabel('x'); ylabel('Numerical Solution');

subplot(2,1,2)
plot(X, Error, 'k-*')
xlabel('x'); ylabel('Error');

figure(2)

hold on
plot(X, Us, 'r-*', 'LineWidth', 2);
plot(X, uExact, 'LineWidth', 2);
axis([0, 1, 0, 1.5]);
xlabel('x'); ylabel('u');
legend('Numerical Solution', 'Exact Solution')
title(str);
hold off

end
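A hypothetical driver for this routine (not part of the thesis code) might look as follows. The parameter values are illustrative only, and the root-mean-square error computed here is merely a proxy for the L2 error norm used in the tables (defined in Section 1.4).

% Solve the 1D model problem with the Gaussian RBF (choice = 2).
[X, A, Us] = Coll_1D(0.5, 16, 2, 0.7);              % epsilon, N, choice, delta
uExact = (1 - exp((X-1)/0.5)) / (1 - exp(-1/0.5));  % exact solution on the plot grid
fprintf('error = %g, cond(A) = %g\n', norm(Us' - uExact)/sqrt(numel(X)), cond(A));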
F.1.2 Coll_2D.m

function [xsol, ysol, A, U] = Coll_2D(epsilon, N1, N2, delta, choice)
% Calculates the solution to the PDE
%   -epsilon*(u_xx + u_yy) + (1,2).grad(u) = 0,  (x,y) in (0,1)^2,
% with boundary conditions
%   u(1,y) = u(x,1) = 0,
%   u(0,y) = (1-exp(-2*(1-y)/epsilon))/(1-exp(-2/epsilon)),
%   u(x,0) = (1-exp(-(1-x)/epsilon))/(1-exp(-1/epsilon)).
% Method: Collocation
% Radial basis function of the form phi_j = Phi(r)
% where r = |x - x_j|/delta.
% Artemis Nika 2014

%% Discretization in x and y direction
x = linspace(0, 1, N1);
y = linspace(0, 1, N2);
N = N1*N2; % total number of nodes

%% Radial basis function and derivatives (wrt r)

if choice == 1
    % radial basis with compact support
    Phi = @(r) heaviside(1-r) * (1-r)^6 * (35*r^2 + 18*r + 3);
    dPhi = @(r) heaviside(1-r) * (1-r)^5 * (-280*r - 56); % first derivative divided by r
    ddPhi = @(r) heaviside(1-r) * (1-r)^4 * (1960*r^2 - 224*r - 56);
elseif choice == 2
    % radial basis with no compact support
    Phi = @(r) exp(-r^2);
    dPhi = @(r) -2*exp(-r^2); % first derivative divided by r
    ddPhi = @(r) (-2 + 4*r^2) * exp(-r^2);
else
    error('Choice of radial basis not correct. Choices: 1 = compact support, 2 = no compact support')
end

%% Create matrix equation and obtain coefficient vector

% Create an Nx2 matrix containing all the nodes
X = zeros(N, 2);
k = 1;
for i = 1:N2
    for j = 1:N1
        X(k, 1) = x(j);
        X(k, 2) = y(i);
        k = k + 1;
    end
end

% Create matrix A
A = zeros(N, N);
for i = 1:N
    for j = 1:N
        r = norm(X(i,:) - X(j,:)) / delta; % --> phi_j

        if (X(i,1)==1 || X(i,2)==1 || X(i,1)==0 || X(i,2)==0) % on the boundary
            A(i, j) = Phi(r);
        else % for internal nodes
            dphi_x = ((X(i,1)-X(j,1))/(delta^2)) * dPhi(r); % we multiplied by r and divided dPhi by r
            dphi_y = ((X(i,2)-X(j,2))/(delta^2)) * dPhi(r);
            grad = [dphi_x, dphi_y]';
            % dphi_xx = (1/(delta^2) - (X(i,1)-X(j,1))^2/(delta^4*r^2))*dPhi(r) + ((X(i,1)-X(j,1))^2/(delta^4*r^2))*ddPhi(r);
            % dphi_yy = (1/(delta^2) - (X(i,2)-X(j,2))^2/(delta^4*r^2))*dPhi(r) + ((X(i,2)-X(j,2))^2/(delta^4*r^2))*ddPhi(r);
            laplacian = (1/delta^2)*dPhi(r) + (1/delta^2)*ddPhi(r);
            % laplacian = dphi_xx + dphi_yy -> this way we can avoid dividing
            % by r, which is zero for diagonal entries
            A(i, j) = -epsilon*laplacian + [1, 2]*grad;
        end
    end
end

% Create RHS vector b
b = zeros(N, 1);
for i = 1:N
    if (X(i,1) == 0)      % for x = 0
        b(i) = (1 - exp(-2*(1-X(i,2))/epsilon)) / (1 - exp(-2/epsilon));
    elseif (X(i,2) == 0)  % for y = 0
        b(i) = (1 - exp(-(1-X(i,1))/epsilon)) / (1 - exp(-1/epsilon));
    else                  % all other nodes
        b(i) = 0;
    end
end

% Solve to get coefficient vector
c = A\b;

%% Add up coefficients to obtain solution vector Us
N1sol = 30;
N2sol = 30;

xsol = linspace(0, 1, N1sol);
ysol = linspace(0, 1, N2sol);
Nsol = N1sol*N2sol;

Xsol = zeros(Nsol, 2); % evaluation points
k = 1;
for i = 1:N2sol
    for j = 1:N1sol
        Xsol(k, 1) = xsol(j);
        Xsol(k, 2) = ysol(i);
        k = k + 1;
    end
end

Us = zeros(Nsol, 1);
for i = 1:Nsol
    dummy = 0;
    for j = 1:N
        dummy = dummy + c(j)*Phi(norm(Xsol(i,:) - X(j,:)) / delta);
    end
    Us(i) = dummy;
end

%% Arrange solution in a matrix to plot
k = 1;
U = zeros(N2sol, N1sol);
for i = 1:N2sol
    for j = 1:N1sol
        U(i, j) = Us(k); % in first row y = y1, second row y = y2, etc.
        k = k + 1;
    end
end

%% Plot numerical and exact solution

figure(1)
% numerical solution
subplot(2,1,1)
surf(xsol, ysol, U);
view([40, 65]); % viewpoint specification
axis([0 1 0 1 0 1])
xlabel('x'); ylabel('y'); zlabel('Unumer');

subplot(2,1,2)
% exact solution
Z1 = (1 - exp(-(1-xsol)/epsilon)) / (1 - exp(-1/epsilon));
Z2 = (1 - exp(-2*(1-ysol)/epsilon)) / (1 - exp(-2/epsilon));
Uexact = Z2' * Z1;
surf(xsol, ysol, Uexact);
view([40, 65]);
axis([0 1 0 1 0 1])
xlabel('x'); ylabel('y'); zlabel('Uexact');

%% Error
figure(2)

subplot(2,1,1)
surf(xsol, ysol, U);
view([40, 65]); % viewpoint specification
axis([0 1 0 1 0 1])
xlabel('x'); ylabel('y'); zlabel('Unumer');

subplot(2,1,2)
Error = abs(Uexact - U);
surf(xsol, ysol, Error)
view([40, 65]);
xlabel('x'); ylabel('y'); zlabel('Error');

end
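A corresponding hypothetical call for the 2D routine is sketched below; the parameter values are illustrative only. Note that, unlike the other routines in this appendix, Coll_2D takes delta before choice in its argument list.

% Solve the 2D model problem on a 16 x 16 grid (N = 256 centres) with the
% Wendland(2D) RBF (choice = 1).
[xsol, ysol, A, U] = Coll_2D(0.01, 16, 16, 0.5, 1); % epsilon, N1, N2, delta, choice
fprintf('cond(A) = %g\n', cond(A));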
F.2 Galerkin Formulation
F.2.1 Gal_1D.m

function [X, A, Us] = Gal_1D(epsilon, N, choice, delta)
% Calculates the solution to the ODE
%   -epsilon*u'' + u' = 0,  epsilon > 0,  for 0 < x < 1,
% with Dirichlet boundary conditions
%   u(0) = 1,
%   u(1) = 0.
% Method: Galerkin Formulation
% Radial basis function of the form phi_j = Phi(r)
% where r = |x - x_j|/delta.
% Artemis Nika 2014

%% Discretization in x-direction:

% Equispaced points
x = linspace(0, 1, N);

% Parameters
theta = -1;
sigma = 5/(1/N);
alpha = 1;

%% Create matrix equation
A = zeros(N, N); % initialize matrix
F = zeros(N, 1); % initialize RHS of matrix equation

if matlabpool('size') == 0 % checking to see if my pool is already open
    matlabpool open
end

parfor i = 1:N % execute loop iterations in parallel
    [a, b] = calculate(i, theta, sigma, alpha, epsilon, delta, choice, N, x);
    A(i, :) = a;
    F(i) = b;
end

% Solve equation to obtain coefficients
c = A\F;

%% Add up coefficients to obtain solution vector Us
Nsol = 100;
Us = zeros(Nsol, 1);
X = linspace(0, 1, Nsol);

if (choice == 1)
    Phi = @(r) heaviside(1-r) * (1-r)^5 * (8*r^2 + 5*r + 1);
else
    Phi = @(r) exp(-r^2);
end

h = 1/(N-1);
for i = 1:Nsol
    dummy = 0;
    for j = 1:N
        dummy = dummy + c(j)*Phi(abs(X(i) - x(j)) / delta);
    end
    Us(i) = dummy;
end

%% Exact solution
uExact = (1 - exp((X-1)/epsilon)) / (1 - exp(-1/epsilon));

%% Error
Error = zeros(Nsol, 1);
for i = 1:Nsol
    Error(i) = abs(Us(i) - uExact(i));
end

%% Plots
figure(1)

subplot(2,1,1)
plot(X, Us, 'r', 'LineWidth', 2);
axis([0, 1, 0, 1.5]);
str = sprintf('epsilon= %f, delta= %f, stepsize h= %f', epsilon, delta, h);
title(str);
xlabel('x'); ylabel('Numerical Solution');

subplot(2,1,2)
plot(X, Error, 'k-*')
xlabel('x'); ylabel('Error');

figure(2)

hold on
plot(X, Us, 'r-*', 'LineWidth', 2);
plot(X, uExact, 'LineWidth', 2);
axis([0, 1, 0, 1.5]);
xlabel('x'); ylabel('u');
legend('Numerical Solution', 'Exact Solution')
title(str);
hold off

end

% Definition of function calculate()
function [a, b] = calculate(i, theta, sigma, alpha, epsilon, delta, choice, N, x)
if (choice == 1)
    phi_i  = @(X) heaviside(1 - abs(X-x(i))/delta) .* (1 - abs(X-x(i))/delta).^5 ...
                  .* (8*(abs(X-x(i))/delta).^2 + 5*(abs(X-x(i))/delta) + 1);
    dphi_i = @(X) heaviside(1 - abs(X-x(i))/delta) .* (1 - abs(X-x(i))/delta).^4 ...
                  .* (-56*(abs(X-x(i))/delta).^2 - 14*(abs(X-x(i))/delta));
else
    phi_i  = @(X) exp(-(abs(X-x(i))/delta).^2);
    dphi_i = @(X) -2*(abs(X-x(i))/delta) .* exp(-(abs(X-x(i))/delta).^2);
end
a = zeros(1, N);
for j = 1:N
    if (choice == 1)
        % Wendland function - compact support
        phi_j  = @(X) heaviside(1 - abs(X-x(j))/delta) .* (1 - abs(X-x(j))/delta).^5 ...
                      .* (8*(abs(X-x(j))/delta).^2 + 5*(abs(X-x(j))/delta) + 1);
        dphi_j = @(X) heaviside(1 - abs(X-x(j))/delta) .* (1 - abs(X-x(j))/delta).^4 ...
                      .* (-56*(abs(X-x(j))/delta).^2 - 14*(abs(X-x(j))/delta));
    else
        % Gaussian function
        phi_j  = @(X) exp(-(abs(X-x(j))/delta).^2);
        dphi_j = @(X) -2*(abs(X-x(j))/delta) .* exp(-(abs(X-x(j))/delta).^2);
    end

    Function1 = @(X) (epsilon*(sign(X-x(j))/delta)).*dphi_j(X).*(sign(X-x(i))/delta).*dphi_i(X) ...
                     + (sign(X-x(j))/delta).*dphi_j(X).*phi_i(X);
    term1 = epsilon*(theta*phi_j(1)*(sign(1-x(i))/delta)*dphi_i(1) - (sign(1-x(j))/delta)*dphi_j(1)*phi_i(1));
    term2 = epsilon*(theta*phi_j(0)*(sign(0-x(i))/delta)*dphi_i(0) - (sign(0-x(j))/delta)*dphi_j(0)*phi_i(0));
    term3 = sigma*(phi_i(1)*phi_j(1) + phi_i(0)*phi_j(0));
    term_extra1 = -phi_j(0)*phi_i(0);
    a(j) = integral(Function1, 0, 1) + term1 - term2 + term3 - alpha*term_extra1;
end

term_extra2 = -1*phi_i(0);
b = -epsilon*theta*(sign(0-x(i))/delta)*dphi_i(0) + sigma*phi_i(0) - alpha*term_extra2;

end
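The listing opens a worker pool with matlabpool, which has since been removed from MATLAB. A minimal equivalent guard for newer releases, assuming the Parallel Computing Toolbox is installed, would be:

% Replaces the matlabpool check above on current MATLAB versions.
if isempty(gcp('nocreate'))  % gcp returns the current pool, or [] if none exists
    parpool;                 % start a pool with the default profile
end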
F.2.2 Gal_2D.m

function [xsol, ysol, A, U] = Gal_2D(epsilon, N1, N2, choice, delta)
% Calculates the solution to the PDE
%   -epsilon*(u_xx + u_yy) + (1,2).grad(u) = 0,  (x,y) in (0,1)^2,
% with boundary conditions
%   u(1,y) = u(x,1) = 0,
%   u(0,y) = (1-exp(-2*(1-y)/epsilon))/(1-exp(-2/epsilon)),
%   u(x,0) = (1-exp(-(1-x)/epsilon))/(1-exp(-1/epsilon)),
% using radial basis functions.
% Method: Galerkin formulation
% Artemis Nika 2014

%% Discretization in x and y direction
x = linspace(0, 1, N1);
y = linspace(0, 1, N2);
N = N1*N2; % total number of nodes

%% Parameters
theta = -1;
sigma = 5/(1/N1);
alpha = 0;

%% Create matrix equation
A = zeros(N, N); % initialize matrix
F = zeros(N, 1); % initialize RHS of matrix equation

% Create an Nx2 matrix containing all the nodes
X = zeros(N, 2);
k = 1;
for i = 1:N2
    for j = 1:N1
        X(k, 1) = x(j);
        X(k, 2) = y(i);
        k = k + 1;
    end
end

if matlabpool('size') == 0 % checking to see if my pool is already open
    matlabpool open
end

parfor i = 1:N % execute loop iterations in parallel
    [a, b] = calculate(i, theta, sigma, alpha, epsilon, delta, choice, N, X);
    A(i, :) = a;
    F(i) = b;
end

% Solve to get coefficient vector
c = A\F;

%% Add up coefficients to obtain solution vector Us
N1sol = 30;
N2sol = 30;

xsol = linspace(0, 1, N1sol);
ysol = linspace(0, 1, N2sol);
Nsol = N1sol*N2sol;

Xsol = zeros(Nsol, 2); % evaluation points
k = 1;
for i = 1:N2sol
    for j = 1:N1sol
        Xsol(k, 1) = xsol(j);
        Xsol(k, 2) = ysol(i);
        k = k + 1;
    end
end

if (choice == 1)
    Phi = @(r) heaviside(1-r) * (1-r)^6 * (35*r^2 + 18*r + 3);
else
    Phi = @(r) exp(-r^2);
end

Us = zeros(Nsol, 1);
for i = 1:Nsol
    dummy = 0;
    for j = 1:N
        dummy = dummy + c(j)*Phi(norm(Xsol(i,:) - X(j,:)) / delta);
    end
    Us(i) = dummy;
end

%% Arrange solution in a matrix to plot
k = 1;
U = zeros(N2sol, N1sol);
for i = 1:N2sol
    for j = 1:N1sol
        U(i, j) = Us(k); % in first row y = y1, second row y = y2, etc.
        k = k + 1;
    end
end

%% Plot numerical and exact solution

figure(1)
% numerical solution
subplot(2,1,1)
surf(xsol, ysol, U);
view([40, 65]); % viewpoint specification
axis([0 1 0 1 0 1])
xlabel('x'); ylabel('y'); zlabel('Unumer');

subplot(2,1,2)
% exact solution
Z1 = (1 - exp(-(1-xsol)/epsilon)) / (1 - exp(-1/epsilon));
Z2 = (1 - exp(-2*(1-ysol)/epsilon)) / (1 - exp(-2/epsilon));
Uexact = Z2' * Z1;
surf(xsol, ysol, Uexact);
view([40, 65]);
axis([0 1 0 1 0 1])
xlabel('x'); ylabel('y'); zlabel('Uexact');

%% Error
figure(2)

subplot(2,1,1)
surf(xsol, ysol, U);
view([40, 65]); % viewpoint specification
axis([0 1 0 1 0 1])
xlabel('x'); ylabel('y'); zlabel('Unumer');

subplot(2,1,2)
Error = abs(Uexact - U);
surf(xsol, ysol, Error)
view([40, 65]);
xlabel('x'); ylabel('y'); zlabel('Error');

end

% Definition of function calculate()
function [a, b] = calculate(i, theta, sigma, alpha, epsilon, delta, choice, N, X)
r_i = @(x1, y1) sqrt((x1-X(i,1)).^2 + (y1-X(i,2)).^2) / delta; % norm() does not work with the quadrature functions
if (choice == 1)
    phi_i     = @(x1, y1) heaviside(1-r_i(x1,y1)) .* (1-r_i(x1,y1)).^6 .* (35*r_i(x1,y1).^2 + 18*r_i(x1,y1) + 3);
    dphi_i_dx = @(x1, y1) heaviside(1-r_i(x1,y1)) .* ((x1-X(i,1))/(delta^2)) .* (1-r_i(x1,y1)).^5 .* (-280*r_i(x1,y1) - 56);
    dphi_i_dy = @(x1, y1) heaviside(1-r_i(x1,y1)) .* ((y1-X(i,2))/(delta^2)) .* (1-r_i(x1,y1)).^5 .* (-280*r_i(x1,y1) - 56);
else
    phi_i     = @(x1, y1) exp(-r_i(x1,y1).^2);
    dphi_i_dx = @(x1, y1) -2*((x1-X(i,1))/(delta^2)) .* exp(-r_i(x1,y1).^2);
    dphi_i_dy = @(x1, y1) -2*((y1-X(i,2))/(delta^2)) .* exp(-r_i(x1,y1).^2);
end
a = zeros(1, N);
for j = 1:N
    r_j = @(x1, y1) sqrt((x1-X(j,1)).^2 + (y1-X(j,2)).^2) / delta;

    if (choice == 1)
        phi_j     = @(x1, y1) heaviside(1-r_j(x1,y1)) .* (1-r_j(x1,y1)).^6 .* (35*r_j(x1,y1).^2 + 18*r_j(x1,y1) + 3);
        dphi_j_dx = @(x1, y1) heaviside(1-r_j(x1,y1)) .* ((x1-X(j,1))/(delta^2)) .* (1-r_j(x1,y1)).^5 .* (-280*r_j(x1,y1) - 56);
        dphi_j_dy = @(x1, y1) heaviside(1-r_j(x1,y1)) .* ((y1-X(j,2))/(delta^2)) .* (1-r_j(x1,y1)).^5 .* (-280*r_j(x1,y1) - 56);
    else
        phi_j     = @(x1, y1) exp(-r_j(x1,y1).^2);
        dphi_j_dx = @(x1, y1) -2*((x1-X(j,1))/(delta^2)) .* exp(-r_j(x1,y1).^2);
        dphi_j_dy = @(x1, y1) -2*((y1-X(j,2))/(delta^2)) .* exp(-r_j(x1,y1).^2);
    end

    term1 = @(x1, y1) epsilon*(dphi_j_dx(x1,y1).*dphi_i_dx(x1,y1) + dphi_j_dy(x1,y1).*dphi_i_dy(x1,y1)) ...
                      + (dphi_j_dx(x1,y1) + 2*dphi_j_dy(x1,y1)).*phi_i(x1,y1);
    term2 = @(y1) epsilon*(-phi_i(0,y1).*dphi_j_dx(0,y1) + theta*phi_j(0,y1).*dphi_i_dx(0,y1)) - sigma*phi_j(0,y1).*phi_i(0,y1);
    term3 = @(y1) epsilon*(phi_i(1,y1).*dphi_j_dx(1,y1) - theta*phi_j(1,y1).*dphi_i_dx(1,y1)) - sigma*phi_j(1,y1).*phi_i(1,y1);
    term4 = @(x1) epsilon*(-phi_i(x1,0).*dphi_j_dy(x1,0) + theta*phi_j(x1,0).*dphi_i_dy(x1,0)) - sigma*phi_j(x1,0).*phi_i(x1,0);
    term5 = @(x1) epsilon*(phi_i(x1,1).*dphi_j_dy(x1,1) - theta*phi_j(x1,1).*dphi_i_dy(x1,1)) - sigma*phi_j(x1,1).*phi_i(x1,1);

    term_extra1 = @(y1) -phi_j(0,y1).*phi_i(0,y1);
    term_extra2 = @(x1) -2*phi_j(x1,0).*phi_i(x1,0);

    yterm = @(y1) term2(y1) + term3(y1) - alpha*term_extra1(y1);
    xterm = @(x1) term4(x1) + term5(x1) - alpha*term_extra2(x1);
    % quad2d is faster than integral2; integral is faster than quad
    a(j) = quad2d(term1, 0, 1, 0, 1) - integral(yterm, 0, 1) - integral(xterm, 0, 1);
end

Fterm_extra1 = @(y1) -phi_i(0,y1).*(1-exp(-2*(1-y1)/epsilon))./(1-exp(-2/epsilon));
Fterm_extra2 = @(x1) -2*phi_i(x1,0).*(1-exp(-(1-x1)/epsilon))./(1-exp(-1/epsilon));
Fterm1 = @(y1) (-epsilon*theta.*dphi_i_dx(0,y1) + sigma*phi_i(0,y1)).*(1-exp(-2*(1-y1)/epsilon))./(1-exp(-2/epsilon));
Fterm2 = @(x1) (-epsilon*theta.*dphi_i_dy(x1,0) + sigma*phi_i(x1,0)).*(1-exp(-(1-x1)/epsilon))./(1-exp(-1/epsilon));

b = integral(Fterm1, 0, 1) + integral(Fterm2, 0, 1) - alpha*(integral(Fterm_extra1, 0, 1) + integral(Fterm_extra2, 0, 1));

end
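For orientation, the assembly cost of this routine can be estimated directly from the code: each of the N^2 entries of A requires one two-dimensional (quad2d) and two one-dimensional (integral) adaptive quadratures, so the assembly work grows like N^2. This is consistent with Table D.19, where quadrupling N from 256 to 1024 increases the execution time by a factor of 20364.0093/1082.24 ≈ 19, close to the factor of 16 predicted by the N^2 entry count; the remainder is attributable to the O(N^3) linear solve and the growing quadrature cost per entry.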
F.3 Generalised Interpolation
F.3.1 GenInter_1D.m

function [X, A, Us] = GenInter_1D(epsilon, N, choice, delta)
% Calculates the solution to the ODE
%   -epsilon*u'' + u' = 0,  epsilon > 0,  for 0 < x < 1,
% with boundary conditions u(0) = 1, u(1) = 0.
% Method: Generalised interpolation
% Radial basis function of the form phi(x,y) = Phi(r)
% where r = |x - y|/delta.
% Artemis Nika 2014

%% Discretization in x-direction:

x = linspace(0, 1, N);

%% Radial basis function and derivatives (wrt r)

if choice == 1
    % this radial basis function has compact support for 0 <= r < 1.
    Phi = @(r) heaviside(1-r) * (1-r)^5 * (8*r^2 + 5*r + 1);
    dPhi = @(r) heaviside(1-r) * (1-r)^4 * (-56*r^2 - 14*r);           % 1st der.
    ddPhi = @(r) heaviside(1-r) * (1-r)^3 * (336*r^2 - 42*r - 14);     % 2nd der.
    dddPhi = @(r) heaviside(1-r) * (1-r)^2 * (-1680*r^2 + 840*r);      % 3rd der.
    ddddPhi = @(r) heaviside(1-r) * (1-r) * (6720*r^2 - 5880*r + 840); % 4th der.
elseif choice == 2
    % no compact support
    Phi = @(r) exp(-r^2);
    dPhi = @(r) -2*r*exp(-r^2);
    ddPhi = @(r) (-2 + 4*r^2) * exp(-r^2);
    dddPhi = @(r) (12*r - 8*r^3) * exp(-r^2);
    ddddPhi = @(r) (12 - 48*r^2 + 16*r^4) * exp(-r^2);
else
    error('Choice of radial basis not correct. Choices: 1 = Wendland, 2 = Gaussian')
end

%% Construct matrix equation

A = zeros(N, N);
% for 0 < x < 1
for i = 2:N-1
    for j = 1:N
        r = abs(x(i) - x(j)) / delta; % x_j -> phi_j(x_i)
        % alternate sign because of the abs value
        if x(i) > x(j)
            dr_dy = -1/delta;
            dr_dx = 1/delta;
        else
            dr_dy = 1/delta;
            dr_dx = -1/delta;
        end

        % Calculate entries of matrix
        if (j == 1 || j == N)
            A(i, j) = -(epsilon/delta^2)*ddPhi(r) + dr_dx*dPhi(r);
        else
            dF_dx = dr_dx * (-(epsilon/delta^2)*dddPhi(r) + dr_dy*ddPhi(r));
            ddF_dxx = (1/delta^2) * ((-epsilon/delta^2)*ddddPhi(r) + dr_dy*dddPhi(r));
            A(i, j) = -epsilon*ddF_dxx + dF_dx;
        end
    end
end

% First and last rows of matrix A - boundary conditions

A(1, 1) = Phi(0);  A(1, N) = Phi(abs(x(1) - x(N)) / delta);
A(N, 1) = Phi(abs(x(N) - x(1)) / delta);  A(N, N) = Phi(0);

for j = 2:N-1
    % for x_i = 0 = x_1:
    A(1, j) = -(epsilon/delta^2)*ddPhi(abs(0 - x(j))/delta) + (1/delta)*dPhi(abs(0 - x(j))/delta); % x_1 = 0 -> phi_j(0)
    % for x_i = 1 = x_N:
    A(N, j) = -(epsilon/delta^2)*ddPhi(abs(x(N) - x(j))/delta) + (-1/delta)*dPhi(abs(x(N) - x(j))/delta);
end

% RHS of matrix equation

b = zeros(N, 1);
b(1) = 1; % because of u(0) = 1

%% Solve matrix equation to obtain coefficient vector c

c = A\b;

%% Add up coefficients to obtain solution vector Us

Nsol = 100;
X = linspace(0, 1, Nsol);
Us = zeros(Nsol, 1);
for i = 1:Nsol
    dummy = 0;
    for j = 1:N
        r = abs(X(i) - x(j)) / delta;
        if X(i) > x(j)
            dr_dy = -1/delta;
        else
            dr_dy = 1/delta;
        end
        if (j == 1 || j == N)
            mult = Phi(r);
        else
            mult = -(epsilon/delta^2)*ddPhi(r) + dr_dy*dPhi(r);
        end
        dummy = dummy + c(j)*mult;
    end
    Us(i) = dummy;
end

%% Exact solution

uExact = (1 - exp((X-1)/epsilon)) / (1 - exp(-1/epsilon));

%% Error

Error = zeros(Nsol, 1);
for i = 1:Nsol
    Error(i) = abs(Us(i) - uExact(i));
end

%% Plots

figure(1)

subplot(2,1,1)
plot(X, Us, 'r-', 'LineWidth', 2);
axis([0, 1, 0, 1.5]);
str = sprintf('epsilon= %f, delta= %f, points N= %d', epsilon, delta, N);
title(str);
xlabel('x'); ylabel('u_s');

subplot(2,1,2)
plot(X, Error, 'k-*')
xlabel('x'); ylabel('Error');

figure(2)

hold on
title(str);
plot(X, Us, '-*r');
plot(X, uExact);
axis([0, 1, 0, 1.5]);
xlabel('x'); ylabel('u');
legend('Numerical Solution', 'Exact Solution')
hold off

end
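A sketch (not part of the thesis code) of the δ sweeps behind the "Effect of δ" experiments: scan δ over a range, recording the error and condition number at each value, then report the minimiser. The sweep range and step, and the root-mean-square error proxy, are assumptions.

epsilon = 0.5; N = 32; choice = 1;  % Wendland(1D) RBF
deltas = 0.1:0.1:5;
errs = zeros(size(deltas)); conds = zeros(size(deltas));
for k = 1:numel(deltas)
    [X, A, Us] = GenInter_1D(epsilon, N, choice, deltas(k));
    uExact = (1 - exp((X-1)/epsilon)) / (1 - exp(-1/epsilon));
    errs(k) = norm(Us' - uExact) / sqrt(numel(X));
    conds(k) = cond(A);
end
[minErr, kmin] = min(errs);
fprintf('minimum error %g at delta = %g (cond %g)\n', minErr, deltas(kmin), conds(kmin));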
F.3.2 GenInter_2D.m

function [xsol, ysol, A, U] = GenInter_2D(epsilon, N1, N2, choice, delta)
% Calculates the solution to the PDE
%   -epsilon*(u_xx + u_yy) + (1,2).grad(u) = 0,  (x,y) in (0,1)^2,
% with boundary conditions
%   u(1,y) = u(x,1) = 0,
%   u(0,y) = (1-exp(-2*(1-y)/epsilon))/(1-exp(-2/epsilon)),
%   u(x,0) = (1-exp(-(1-x)/epsilon))/(1-exp(-1/epsilon)),
% using radial basis functions.
% Method: Generalised Interpolation
% Artemis Nika 2014

%% Discretization in x and y direction
x = linspace(0, 1, N1);
y = linspace(0, 1, N2);
N = N1*N2; % total number of nodes

%% Radial basis function and derivatives (wrt r)

if choice == 1
    % this radial basis function has compact support for 0 <= r < 1.
    Phi = @(r) heaviside(1-r) * (1-r)^6 * (35*r^2 + 18*r + 3);
    dPhi = @(r) heaviside(1-r) * (1-r)^5 * (-280*r - 56);            % 1st der. - divided by r
    ddPhi = @(r) heaviside(1-r) * (1-r)^4 * (1960*r^2 - 224*r - 56); % 2nd der.
    % dddPhi = @(r) heaviside(1-r) * (1-r)^3 * (7*r - 3) * 1680;     % 3rd der. - divided by r
    % ddddPhi = @(r) heaviside(1-r) * (1-r)^2 * 1680 * (5*r - 3) * (7*r - 1); % 4th der.
elseif choice == 2
    % no compact support
    Phi = @(r) exp(-r^2);
    dPhi = @(r) -2*exp(-r^2); % divided by r
    ddPhi = @(r) (-2 + 4*r^2) * exp(-r^2);
    % dddPhi = @(r) (12 - 8*r^2) * exp(-r^2); % divided by r
    % ddddPhi = @(r) (12 - 48*r^2 + 16*r^4) * exp(-r^2);
else
    error('Choice of radial basis not correct. Choices: 1 = Wendland, 2 = Gaussian')
end

%% Create matrix equation and obtain coefficient vector

% Create an Nx2 matrix containing all the nodes
X = zeros(N, 2);
k = 1;
for i = 1:N2
    for j = 1:N1
        X(k, 1) = x(j);
        X(k, 2) = y(i);
        k = k + 1;
    end
end

% Create matrix A
A = zeros(N, N);
for i = 1:N
    for j = 1:N
        r = norm(X(i,:) - X(j,:)) / delta; % --> phi_j
        % x_i on the boundary
        if (X(i,1)==1 || X(i,2)==1 || X(i,1)==0 || X(i,2)==0)
            if (X(j,1)==1 || X(j,2)==1 || X(j,1)==0 || X(j,2)==0)
                % x_j on the boundary
                A(i, j) = Phi(r);
            else
                % x_j not on the boundary
                LaplacianPhi = 1/(delta^2)*dPhi(r) + 1/(delta^2)*ddPhi(r);
                dphi_dx2 = (X(j,1)-X(i,1))/(delta^2) * dPhi(r);
                dphi_dy2 = (X(j,2)-X(i,2))/(delta^2) * dPhi(r);
                GradPhi = [dphi_dx2, dphi_dy2];
                A(i, j) = -epsilon*LaplacianPhi + [1, 2]*GradPhi';
            end
        else
            if (X(j,1)==1 || X(j,2)==1 || X(j,1)==0 || X(j,2)==0)
                % x_j on the boundary
                LaplacianPhi = 1/(delta^2)*dPhi(r) + 1/(delta^2)*ddPhi(r);
                dphi_dx2 = (X(i,1)-X(j,1))/(delta^2) * dPhi(r);
                dphi_dy2 = (X(i,2)-X(j,2))/(delta^2) * dPhi(r);
                GradPhi = [dphi_dx2, dphi_dy2];
                A(i, j) = -epsilon*LaplacianPhi + [1, 2]*GradPhi';
            else
                % x_i and x_j inside the domain
                if choice == 1
                    term1 = 6720*(4*r - 1)*(3*r - 2)*(r - 1)^2 * heaviside(1-r);
                    term2 = 1680*(1 - r)^4 * heaviside(1-r);
                else
                    term1 = 16*(2 - 4*r^2 + r^4) * exp(-r^2);
                    term2 = 4*exp(-r^2);
                end
                term3 = -5/(delta^2) * dPhi(r);
                term4 = r^2*delta^2 + 4*(X(j,1)-X(i,1))*(X(j,2)-X(i,2)) + 3*(X(j,2)-X(i,2))^2;
                A(i, j) = (epsilon^2/delta^4)*term1 - term4/(delta^4)*term2 + term3;
            end
        end
    end
end

% Create RHS vector b
b = zeros(N, 1);
for i = 1:N
    if (X(i,1) == 0)      % for x = 0
        b(i) = (1 - exp(-2*(1-X(i,2))/epsilon)) / (1 - exp(-2/epsilon));
    elseif (X(i,2) == 0)  % for y = 0
        b(i) = (1 - exp(-(1-X(i,1))/epsilon)) / (1 - exp(-1/epsilon));
    else                  % all other nodes
        b(i) = 0;
    end
end

% Solve to get coefficient vector
c = A\b;

%% Add up coefficients to obtain solution vector Us
N1sol = 30;
N2sol = 30;

xsol = linspace(0, 1, N1sol);
ysol = linspace(0, 1, N2sol);
Nsol = N1sol*N2sol;

Xsol = zeros(Nsol, 2); % evaluation points
k = 1;
for i = 1:N2sol
    for j = 1:N1sol
        Xsol(k, 1) = xsol(j);
        Xsol(k, 2) = ysol(i);
        k = k + 1;
    end
end

Us = zeros(Nsol, 1);
for i = 1:Nsol
    dummy = 0;
    for j = 1:N
        r = norm(Xsol(i,:) - X(j,:)) / delta;
        if (X(j,1)==1 || X(j,2)==1 || X(j,1)==0 || X(j,2)==0)
            % for x_j on the boundary
            dummy = dummy + c(j)*Phi(r);
        else
            % for x_j not on the boundary
            LaplacianPhi = 1/(delta^2)*dPhi(r) + 1/(delta^2)*ddPhi(r);
            dphi_dx2 = (X(j,1)-Xsol(i,1))/(delta^2) * dPhi(r);
            dphi_dy2 = (X(j,2)-Xsol(i,2))/(delta^2) * dPhi(r);
            GradPhi = [dphi_dx2, dphi_dy2];
            dummy = dummy + c(j)*(-epsilon*LaplacianPhi + [1, 2]*GradPhi');
        end
    end
    Us(i) = dummy;
end

%% Arrange solution in a matrix to plot
k = 1;
U = zeros(N2sol, N1sol);
for i = 1:N2sol
    for j = 1:N1sol
        U(i, j) = Us(k); % in first row y = y1, second row y = y2, etc.
        k = k + 1;
    end
end

%% Plot numerical and exact solution

figure(1)
% numerical solution
subplot(2,1,1)
surf(xsol, ysol, U);
view([40, 65]); % viewpoint specification
axis([0 1 0 1 0 1])
xlabel('x'); ylabel('y'); zlabel('Unumer');

subplot(2,1,2)
% exact solution
Z1 = (1 - exp(-(1-xsol)/epsilon)) / (1 - exp(-1/epsilon));
Z2 = (1 - exp(-2*(1-ysol)/epsilon)) / (1 - exp(-2/epsilon));
Uexact = Z2' * Z1;
surf(xsol, ysol, Uexact);
view([40, 65]);
axis([0 1 0 1 0 1])
xlabel('x'); ylabel('y'); zlabel('Uexact');

%% Error
figure(2)

subplot(2,1,1)
surf(xsol, ysol, U);
view([40, 65]); % viewpoint specification
axis([0 1 0 1 0 1])
xlabel('x'); ylabel('y'); zlabel('Unumer');

subplot(2,1,2)
Error = abs(Uexact - U);
surf(xsol, ysol, Error)
view([40, 65]);
xlabel('x'); ylabel('y'); zlabel('Error');

end
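The execution times in Tables D.17-D.20 could be gathered with a harness along the following lines; tic/toc and the parameter values shown are assumptions, as the thesis does not list its timing code.

% Time GenInter_2D for the grid sizes of Table D.20 (N = 16, 64, 256, 1024).
for n = [4 8 16 32]                  % N1 = N2 = n, so N = n^2
    tic;
    GenInter_2D(0.01, n, n, 2, 0.1); % epsilon, N1, N2, choice (2 = Gaussian), delta
    fprintf('N = %4d: %8.4f seconds\n', n^2, toc);
end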
Bibliography
[1] Zhijie Cai. Best estimates of RBF-based meshless Galerkin methods for Dirichlet problem. Applied Mathematics and Computation, 215:2149–2153, 2009.
[2] Jichun Li, Alexander H.-D. Cheng and Ching-Shyang Chen. A comparison of efficiency and error convergence of multiquadric collocation method and finite element method. Engineering Analysis with Boundary Elements, 27:251–257, 2003.
[3] Yong Duan and Yong-Ji Tan. A meshless Galerkin method for Dirichlet problems using radial basis functions. Journal of Computational and Applied Mathematics, 196:394–401, 2006.
[4] Patricio Farrell and Holger Wendland. RBF multiscale collocation for second order elliptic boundary value problems. SIAM Journal on Numerical Analysis, 51(4):2403–2425, August 2013.
[5] Gregory E. Fasshauer. Meshfree Approximation Methods with MATLAB. World Scientific Publishers, 2007.
[6] Rolland L. Hardy. Multiquadric equations of topography and other irregular surfaces. Journal of Geophysical Research, 76(8):1905–1915, March 1971.
[7] Y. C. Hon and R. Schaback. On unsymmetric collocation by radial basis functions. Applied Mathematics and Computation, 119:177–186, 2001.
[8] E. J. Kansa. Multiquadrics – a scattered data approximation scheme with applications to computational fluid dynamics – I: Surface approximations and partial derivative estimates. Computers & Mathematics with Applications, 19(8/9):127–145, 1990.
[9] E. J. Kansa. Multiquadrics – a scattered data approximation scheme with applications to computational fluid dynamics – II: Solutions to parabolic, hyperbolic and elliptic partial differential equations. Computers & Mathematics with Applications, 19(8/9):147–161, 1990.
[10] Elisabeth Larsson and Bengt Fornberg. A numerical study of some radial basis function based solution methods for elliptic PDEs. Computers & Mathematics with Applications, 46(5):891–902, 2003.
[11] N. Mai-Duy and T. Tran-Cong. An integrated-RBF technique based on Galerkin formulation for elliptic differential equations. Engineering Analysis with Boundary Elements, 33(2):191–199, 2009.
[12] Charles A. Micchelli. Interpolation of scattered data: Distance matrices and conditionally positive definite functions. Constructive Approximation, 2:11–22, 1986.
[13] Michael Mongillo. Choosing basis functions and shape parameters for radial basis function methods. SIAM Undergraduate Research Online, 2011.
[14] Shmuel Rippa. An algorithm for selecting a good value for the parameter c in radial basis function interpolation. Advances in Computational Mathematics, 11:193–210, 1999.
[15] Robert Schaback. Error estimates and condition numbers for radial basis function interpolation. Advances in Computational Mathematics, 3(3):251–264, 1995.
[16] K. Harriman, P. Houston, B. Senior and E. Süli. hp-version discontinuous Galerkin methods with interior penalty for partial differential equations with nonnegative characteristic form. Contemporary Mathematics, 330:89–120, 2003.
[17] H.-G. Roos, M. Stynes and L. Tobiska. Robust Numerical Methods for Singularly Perturbed Differential Equations. Springer Series in Computational Mathematics, vol. 24, 2008.
[18] Marjan Uddin. On the selection of a good shape parameter in solving time-dependent partial differential equations using RBF approximation method. Applied Mathematical Modelling, 38:135–144, 2014.
[19] Holger Wendland. Error estimates for interpolation by compactly supported radial basis functions of minimal degree. Journal of Approximation Theory, 93(2):258–272, 1998.
[20] Holger Wendland. Numerical solution of variational problems by radial basis functions. Approximation Theory IX, 2:361–368, 1998.
[21] Holger Wendland. Meshless Galerkin methods using radial basis functions. Mathematics of Computation, 68(228):1521–1531, March 1999.
[22] Holger Wendland. On the stability of meshless symmetric collocation for boundary value problems. BIT Numerical Mathematics, 47:455–468, March 2007.
[23] J. Wloka. Partial Differential Equations. Cambridge University Press, 1987.
[24] Grady B. Wright. Radial Basis Function Interpolation: Numerical and Analytical Developments. PhD thesis, University of Colorado, 2003.

Thesis_Main

  • 1.
    Comparison of RadialBasis Function Algorithms for Convection Diffusion Equations Artemis Nika Keble College University of Oxford A thesis submitted for the degree of Master of Science 2014
  • 2.
    I dedicate thisproject to my wonderful parents who have always been by my side.
  • 3.
    Acknowledgements There are severalpeople that helped me throughout the project and I would like to take a moment to thank them. First and foremost I would like to thank my family. They have been with me through the entirety of this project and have always supported me the best way they could. A special thank you goes to my supervisor Dr. Kathryn Gillow who has al- ways provided her support and guidance, not only throughout the project, but during the whole duration of this masters course. Lastly I want to thank all of my closest friends that have always been there for me. Special thanks to Pavlos Georgiou who has always stood by me and had the patience to proof read most of my reports.
  • 4.
    Abstract In this project,we implement three radial basis function methods for solving convection-diffusion equations, aiming to compare them in terms of ease of implementation, accuracy, stability and efficiency. Namely, the methods we consider are collocation, Galerkin formulation and generalised interpolation. Each of the methods is implemented twice using one of Wendland’s compactly supported RBFs and the Gaussian RBF. We per- form numerical experiments in order to find how the scaling parameter δ affects their accuracy and conditioning, if convergence can be achieved by increasing the number of points N, and also what are the advantages and disadvantages of using each of the RBFs. We find that the Gaussian RBF helps us obtain high accuracy but at the same time makes the method used unstable, and hence the choice of δ dif- ficult. The method which displays higher accuracy when using the Gaus- sian is generalised interpolation, except when used for the stiff 2D model problem where collocation is superior. The compactly supported Wend- land RBFs provide greater stability but reduced accuracy. The method which results in the most accurate results when used with them is the col- location method. Concerning the Galerkin method, we found that it was unstable and extremely inefficient because of the numerical integrations that take place. Finally we give some possible extensions of the project. Specifically we discuss possible ways to predict good values for the scaling parameter δ, an alternative point distribution to the uniform one for stiff problems in one space dimension and we also briefly describe the ideas behind multiscale versions of the methods.
  • 5.
    Contents 1 Introduction 1 1.1Aim of Thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.2 Radial Basis Functions . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.3 Model Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.4 Measuring the Error . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 1.5 Thesis Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2 Collocation 8 2.1 Method Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 2.2 One Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 2.2.1 Scaling Parameter δ . . . . . . . . . . . . . . . . . . . . . . . 10 2.2.2 Varying N . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 2.2.3 Distribution of Eigenvalues . . . . . . . . . . . . . . . . . . . . 14 2.3 Two Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 2.3.1 Scaling Parameter δ . . . . . . . . . . . . . . . . . . . . . . . 15 2.3.2 Varying N . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 2.3.3 Distribution of Eigenvalues . . . . . . . . . . . . . . . . . . . . 17 2.4 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 3 Galerkin Formulation 19 3.1 Galerkin Method for Robin Problems . . . . . . . . . . . . . . . . . . 19 3.2 Galerkin Method for the Dirichlet Problem . . . . . . . . . . . . . . . 20 3.3 One Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 3.3.1 Effect of δ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 3.3.2 Increasing the number of points N . . . . . . . . . . . . . . . 25 3.3.3 Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 3.4 Two Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 3.4.1 Effect of δ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 i
  • 6.
    3.4.2 Increasing thenumber of points N . . . . . . . . . . . . . . . 28 3.4.3 Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 3.5 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 4 Generalised Interpolation 31 4.1 Method Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 4.2 Stability and Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . 32 4.3 One Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 4.3.1 Effect of δ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 4.3.2 Varying N . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 4.3.3 Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 4.4 Two Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 4.4.1 Effect of δ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 4.4.2 Varying N . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 4.4.3 Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 4.5 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 5 Method Comparison 45 5.1 Ease of Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . 45 5.2 Accuracy and Stability . . . . . . . . . . . . . . . . . . . . . . . . . . 46 5.2.1 One Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . 46 5.2.2 Two Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . 47 5.3 Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 5.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 6 Further Work 50 6.1 Extension: Choice of the Scaling Parameter . . . . . . . . . . . . . . 50 6.2 Extension: Point Distribution . . . . . . . . . . . . . . . . . . . . . . 51 6.3 Extension: Multilevel Algorithms . . . . . . . . . . . . . . . . . . . . 53 A Collocation 55 A.1 1D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 A.2 2D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 B Galerkin Formulation 59 B.1 1D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 B.2 2D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60 ii
  • 7.
    C Generalised Interpolation63 C.1 1D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 C.2 2D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 D Comparison 67 D.1 Accuracy Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . 67 D.1.1 1D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 D.1.2 2D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 D.2 Conditioning Comparison . . . . . . . . . . . . . . . . . . . . . . . . 70 D.2.1 1D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 D.2.2 2D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 D.3 Efficiency Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . 72 D.3.1 1D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72 D.3.2 2D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 E Further Work 74 E.1 Choice of Scaling Parameter . . . . . . . . . . . . . . . . . . . . . . . 74 E.2 Point Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75 E.2.1 Collocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75 E.2.2 Galerkin Formulation . . . . . . . . . . . . . . . . . . . . . . . 76 E.2.3 Generalised Interpolation . . . . . . . . . . . . . . . . . . . . . 76 F MATLAB code 77 F.1 Collocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 F.1.1 Coll 1D.m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 F.1.2 Coll 2D.m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79 F.2 Galerkin Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . 81 F.2.1 Gal 1D.m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81 F.2.2 Gal 2D.m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84 F.3 Generalised Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . 87 F.3.1 GenInter 1D.m . . . . . . . . . . . . . . . . . . . . . . . . . . 87 F.3.2 GenInter 2D.m . . . . . . . . . . . . . . . . . . . . . . . . . . 89 Bibliography 92 iii
List of Figures

1.1 Numerical solutions obtained using finite element methods. . . . 2
1.2 Radial basis functions, with centre x_j = 4/9, for different values of δ. . . . 4
1.3 Exact solutions for one dimensional model problem. . . . 5
1.4 Exact solutions for two dimensional model problem. . . . 6
2.1 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 64, using the Wendland(1D) RBF. The minimum error is 5.6 × 10^−3, for δ = 5 with condition number 2.27 × 10^10. . . . 11
2.2 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 64, using the Gaussian RBF. The minimum error is 6.7 × 10^−4, for δ = 0.12 with condition number 4.22 × 10^17. . . . 11
2.3 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 16, using the Wendland(1D) RBF. The minimum error is 1.7 × 10^−4, for δ = 5 with condition number 4.81 × 10^5. . . . 12
2.4 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 16, using the Gaussian RBF. The minimum error is 2.45 × 10^−7, for δ = 0.7 with condition number 3.31 × 10^17. . . . 12
2.5 Log of the error versus N for ε = 0.01. . . . 13
2.6 Eigenvalue plots for ε = 0.01 and N = 64. . . . 14
2.7 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Wendland(2D) RBF. The minimum error is 1.5 × 10^−2, for δ = 5 with condition number 8.01 × 10^11. . . . 15
2.8 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Gaussian RBF. The minimum error is 2.9 × 10^−2, for δ = 0.07 with condition number 1.09 × 10^9. . . . 16
2.9 Eigenvalue plots for ε = 0.5 and N = 256. . . . 17
2.10 Numerical solutions for 1D model problem for ε = 0.01 and N = 64. . . . 18
2.11 Numerical solutions and pointwise error for 2D model problem for ε = 0.01 and N = 1024. . . . 18
3.1 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 64, using the Wendland(1D) RBF. The minimum error is 5.6 × 10^−3, for δ = 0.19 with condition number 3.64 × 10^10. . . . 23
3.2 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 64, using the Gaussian RBF. The minimum error is 1.1 × 10^−3, for δ = 0.05 with condition number 2.62 × 10^17. . . . 23
3.3 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 16, using the Wendland(1D) RBF. The minimum error is 3.0 × 10^−5, for δ = 4.1 with condition number 4.43 × 10^16. . . . 24
3.4 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 16, using the Gaussian RBF. The minimum error is 5.61 × 10^−6, for δ = 0.36 with condition number 3.54 × 10^17. . . . 24
3.5 Log of L2 norm of the error versus N for ε = 0.01. . . . 25
3.6 Eigenvalue plots for ε = 0.01 and N = 64. . . . 26
3.7 Eigenvalue plots for ε = 0.5 and N = 32. . . . 26
3.8 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Wendland(2D) RBF. The minimum error is 2.95 × 10^−1, for δ = 0.1 with condition number 5.32 × 10^19. . . . 27
3.9 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Gaussian RBF. The minimum error is 1.42, for δ = 0.1 with condition number 9.81 × 10^19. . . . 28
3.10 Eigenvalue distribution for ε = 0.01 using the Wendland(2D) RBF with N = 64. . . . 29
3.11 Eigenvalue distribution for ε = 0.01 using the Gaussian RBF with N = 64. . . . 29
3.12 Numerical solutions for 1D model problem for ε = 0.01 and N = 64. . . . 30
3.13 Numerical solutions and pointwise error for 2D model problem for ε = 0.01 and N = 1024. . . . 30
4.1 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 64, using the Wendland(1D) RBF. The minimum error is 2.51 × 10^−1, for δ = 5 with condition number 2.85 × 10^9. . . . 35
4.2 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 64, using the Gaussian RBF. The minimum error is 8.40 × 10^−4, for δ = 0.09 with condition number 4.97 × 10^17. . . . 36
4.3 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 16, using the Wendland(1D) RBF. The minimum error is 1.14 × 10^−4, for δ = 5 with condition number 597. . . . 36
4.4 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 16, using the Gaussian RBF. The minimum error is 1.36 × 10^−8, for δ = 0.86 with condition number 5.15 × 10^17. . . . 37
4.5 Log of the error versus N for ε = 0.01. . . . 37
4.6 Log of the error versus N for ε = 0.5, using the Wendland(1D) with δ = 15h^(1−2/σ). . . . 38
4.7 Eigenvalue distributions for ε = 0.01. . . . 39
4.8 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Wendland(2D) RBF. The minimum error is 2.27 × 10^−1, for δ = 5 with condition number 3.48 × 10^12. . . . 41
4.9 Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Gaussian RBF. The minimum error is 1.45 × 10^−1, for δ = 0.25 with condition number 4.63 × 10^22. . . . 41
4.10 Log of the error versus N for ε = 0.5. . . . 42
4.11 Eigenvalue distributions for ε = 0.01 with N = 256. . . . 43
4.12 Numerical solutions for 1D model problem for ε = 0.01 and N = 64. . . . 43
4.13 Numerical solutions and pointwise error for 2D model problem for ε = 0.01 and N = 1024. . . . 44
5.1 Execution times of methods for the 2D model problem for fixed ε and δ. . . . 48
6.1 Log of exact and predicted errors versus δ with N = 64 and ε = 0.01. RBF method used is collocation. . . . 51
6.2 Shishkin mesh for ε = 0.01 and ε = 0.5 using N = 27 points. . . . 52
A.1 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 64, using the Wendland(2D) RBF. The minimum error is 2.64 × 10^−3, for δ = 5 with condition number 2.78 × 10^7. . . . 58
A.2 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 64, using the Gaussian RBF. The minimum error is 9.1 × 10^−5, for δ = 1.62 with condition number 4.95 × 10^18. . . . 58
B.1 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 64, using the Wendland(2D) RBF. The minimum error is 3.08 × 10^−3, for δ = 0.1 with condition number 7.21 × 10^17. . . . 61
B.2 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 64, using the Gaussian RBF. The minimum error is 6.89 × 10^−3, for δ = 1 with condition number 6.23 × 10^11. . . . 62
C.1 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 64, using the Wendland(2D) RBF. The minimum error is 6.17 × 10^−3, for δ = 5 with condition number 7.08 × 10^7. . . . 65
C.2 Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 64, using the Gaussian RBF. The minimum error is 4.2 × 10^−5, for δ = 1.65 with condition number 7.99 × 10^17. . . . 66
Chapter 1

Introduction

Radial Basis Function (RBF) methods trace their origins back to Hardy's multiquadric (MQ) method [6]. This method was developed by Hardy as a way to obtain a continuous function that would accurately describe the morphology of a geographical surface. The motivation came from the fact that the methods existing at the time, like Fourier and polynomial series approximations, provided neither acceptable accuracy nor efficiency [6], [24]. Moreover, obtaining the desired result as a continuous function meant that analytical techniques from calculus and geometry could be utilised to provide useful results, e.g. the height of the highest hill, unobstructed lines of sight, volumes of earth and others [6].

In mathematical terms, the problem Hardy was trying to solve can be stated as: given a set of unique points X = {x_1, ..., x_n} ⊂ R^d with corresponding values {f_1, ..., f_n}, find a continuous function s(x) that satisfies the given value at each point, i.e., s(x_i) = f_i for i = 1, ..., n. Hardy [6], following a trial and error approach, constructed the approximate solution by taking a linear combination of the multiquadric functions

    q_j(x) = √(c² + ‖x − x_j‖₂²),   j = 1, ..., n,    (1.1)

where each of these functions is centred about one of the unique points in our set. The problem therefore reduces to finding the coefficients C_j such that

    s(x_i) = Σ_{j=1}^{n} C_j q_j(x_i) = f_i,   i = 1, ..., n,    (1.2)

which involves nothing more than solving a system of linear equations. It has been proved that the system of linear equations resulting from the MQ method is always nonsingular, see, for example, Micchelli [12].
The MQ method, which has found applications in areas other than topography, has also been used to numerically approximate the solution of partial differential equations. In fact, Kansa [9] found, after performing a series of numerical experiments, that the MQ method in many cases outperformed finite difference methods, providing a more accurate solution while using a smaller number of points and without having to create a mesh.

After Micchelli [12] provided the conditions under which the resulting linear system of Hardy's method is nonsingular, it was realised that other functions, as well as (1.1), could also be used with the method. The common characteristic of those functions is that their value only depends on the distance from a chosen centre, i.e., they are radially symmetric with respect to that centre. These functions are now widely known as radial basis functions (RBFs).

1.1 Aim of Thesis

The motivation for looking into radial basis function methods for numerically solving partial differential equations (PDEs) comes from the fact that standard methods, like finite differences or finite elements, can easily become computationally expensive. This is a direct result of the requirement of these methods for creating a mesh, and also of the need for a usually large number of points in order to obtain acceptable accuracy. The necessity of using a really small stepsize is most evident when the solution to be approximated has stiff regions. For example, in Figure 1.1 we see the finite element solution on a uniform mesh to both a 1D and a 2D convection dominated diffusion problem, where we get oscillations in the boundary layer.

Figure 1.1: Numerical solutions obtained using finite element methods.
Moreover, the methods become increasingly complex as we take into consideration problems in higher dimensions.

Perhaps the greatest advantage of RBF methods is that, in contrast to finite elements and finite differences, they offer mesh free approximation. Highly important is also the ease with which these methods can be generalised to higher dimensions. We only have to consider the Euclidean norm of the distance between a point and the centre of the RBF in order to find its value and therefore, the majority of the time, the same function can be used to solve problems in any dimension. These two main characteristics of RBFs captured the attention of mathematicians. Since the time Hardy's MQ method was first introduced, several algorithms utilising different RBFs have been developed, each with its own merits and faults. An ideal algorithm would combine ease of implementation with high accuracy and efficiency.

The aim of this thesis is the implementation and comparison, in terms of accuracy and conditioning of the resulting linear system, of three different radial basis function algorithms for solving convection-diffusion equations in one and two dimensions. The methods we consider are collocation [8], [9], a Galerkin formulation method [1], [3], [11], [21] and generalised interpolation [4]. We will implement each of these methods using two different types of RBFs, an infinitely differentiable one and two compactly supported functions.

1.2 Radial Basis Functions

As mentioned in the previous section, throughout this project we will make use of different RBFs when implementing our methods. We consider the infinitely differentiable Gaussian function and two of Wendland's [19] compactly supported functions for one and two dimensions. These functions are given in Table 1.1.

    Type of RBF     φ(r), r ≥ 0                       C^k
    Gaussian        exp(−r²)                          C^∞
    Wendland (1D)   (1 − r)₊⁵ (8r² + 5r + 1)          C⁴
    Wendland (2D)   (1 − r)₊⁶ (35r² + 18r + 3)        C⁴

    Table 1.1: Table of radial basis functions.

We take as the input r for our RBF the Euclidean distance of a point from the centre of the function, scaled by a scaling parameter δ ∈ R, that is,

    r = ‖x − x_j‖₂ / δ,   j = 1, ..., n.    (1.3)
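For concreteness, the RBFs of Table 1.1 and the scaling (1.3) translate directly into MATLAB anonymous functions. The following is a minimal sketch; the handle names are ours, not those of the code in Appendix F:

```matlab
% The three RBFs of Table 1.1 as anonymous functions of r >= 0.
% The max(1-r,0) factor implements the truncated power (1-r)_+ that
% gives the two Wendland functions their compact support.
gauss  = @(r) exp(-r.^2);
wend1d = @(r) max(1-r,0).^5 .* (8*r.^2 + 5*r + 1);
wend2d = @(r) max(1-r,0).^6 .* (35*r.^2 + 18*r + 3);

% Scaled argument (1.3): rows of X are points, xj is a single centre.
scaledr = @(X, xj, delta) sqrt(sum((X - xj).^2, 2)) / delta;
```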
Figure 1.2: Radial basis functions, with centre x_j = 4/9, for different values of δ: (a) Gaussian, (b) Wendland (1D), (c) Wendland (2D).

The parameter δ affects the shape of the RBF, as well as the accuracy of the method. Observing Figure 1.2 we can see that as we increase the scaling parameter δ the Gaussian RBF flattens out while the support radius of the two Wendland functions increases. We expect that increasing δ will result in larger condition numbers for our systems, especially for the Gaussian function, which is not compactly supported.

A slight advantage of the Gaussian RBF is the fact that it can be used for approximations in any dimension, while we need to use a different compactly supported Wendland function for each dimension. The reason why the same Wendland function should not be used in every dimension is that the positive definiteness of compactly supported functions, such as the Wendland RBFs, depends on the dimension we are working in, as stated in [19]. A positive definite function results in a positive definite linear system for (1.2), which in turn implies that we can always find a unique solution, as all its eigenvalues are positive.

1.3 Model Problems

In this section we give the model problems on which we will test the different RBF algorithms.
Figure 1.3: Exact solutions for one dimensional model problem: (a) ε = 0.01, (b) ε = 0.5.

Our 1D convection diffusion equation with Dirichlet boundary conditions is given by

    −ε u″ + u′ = 0,   0 < x < 1,
    u(0) = 1,   u(1) = 0,    (1.4)

where ε > 0 is the diffusion coefficient. The exact solution for this problem is

    u(x) = (1 − exp(−(1 − x)/ε)) / (1 − exp(−1/ε)).    (1.5)

We will also consider a two dimensional problem, again with Dirichlet boundary conditions. It is given by

    −ε ∇²u + (1, 2) · ∇u = 0,   (x, y) ∈ Ω,
    u(1, y) = 0,   y ∈ [0, 1],
    u(x, 1) = 0,   x ∈ [0, 1],
    u(0, y) = (1 − exp(−2(1 − y)/ε)) / (1 − exp(−2/ε)),   y ∈ [0, 1],
    u(x, 0) = (1 − exp(−(1 − x)/ε)) / (1 − exp(−1/ε)),   x ∈ [0, 1],    (1.6)

where again ε > 0 is the diffusion coefficient and Ω = (0, 1)². The exact solution of the problem is

    u(x, y) = [(1 − exp(−(1 − x)/ε)) / (1 − exp(−1/ε))] · [(1 − exp(−2(1 − y)/ε)) / (1 − exp(−2/ε))].    (1.7)

In both cases, varying ε affects how stiff the solution is and hence how easy it is to construct an approximation for it, i.e., for small values of ε our solution becomes stiff, whereas for values close to 1 we have a solution whose gradient is not as steep. We will perform our numerical experiments for two values of ε, namely ε = 0.01, for which our solutions are stiff, and ε = 0.5, for which the solutions are more well behaved, see Figures 1.3 and 1.4.
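Both exact solutions can be written down directly in MATLAB; a small sketch, used when measuring errors below (the handle names are ours):

```matlab
% Exact solution (1.5) of the 1D model problem; ep is the diffusion
% coefficient epsilon.
u1d = @(x, ep) (1 - exp(-(1-x)/ep)) ./ (1 - exp(-1/ep));

% Exact solution (1.7) of the 2D model problem: the product of two
% one-dimensional boundary-layer profiles.
u2d = @(x, y, ep) (1 - exp(-(1-x)/ep)) .* (1 - exp(-2*(1-y)/ep)) ...
                ./ ((1 - exp(-1/ep)) * (1 - exp(-2/ep)));
```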
Figure 1.4: Exact solutions for two dimensional model problem: (a) ε = 0.01, (b) ε = 0.5.

1.4 Measuring the Error

In order to be able to compare the methods effectively we need to quantify the error between the exact and numerical solution. Since the solutions to our model problems are known, measuring the error is relatively simple. We will make use of the L2 norm of the error, where for the one dimensional case we have

    ‖u − s‖_{L2} = [ Σ_{i=2}^{m} ∫_{x_{i−1}}^{x_i} (u − s)² dx ]^{1/2} ≈ (1/√(m − 1)) ‖u − s‖₂,    (1.8)

where we have used the trapezium rule with m points in order to evaluate the integral for the L2 norm. Also, u and s are vectors of the exact and numerical solutions at each of the m points. A similar expression can be obtained for two dimensional problems. Note that, since the numerical solutions resulting from RBF algorithms are continuous, we can use the same number of points m for the trapezium rule even for approximations that used different numbers of nodes.
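In MATLAB, (1.8) is a one-liner; a sketch, assuming u and s hold the exact and numerical values at the same m uniformly spaced evaluation points:

```matlab
% Discrete L2 error (1.8): on a uniform grid of m points the trapezium
% rule reduces (up to the endpoint weights) to a scaled vector 2-norm.
l2err = @(u, s) norm(u - s) / sqrt(length(u) - 1);
```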
1.5 Thesis Structure

The structure for the rest of this project is as follows. Chapters 2 to 4 contain the description and implementation details, as well as some background theory where relevant, for each of the three RBF methods. In each of these chapters we investigate the effect of choosing different RBFs to generate our basis functions, and also the effect of varying δ. We will specifically look into the accuracy and stability of each method for either choice of ε, i.e., for stiff and non-stiff problems. The data gathered from each method can be found in the corresponding appendix. In Chapter 5, we make a general comparison between the methods, focusing on the ease of implementation, the accuracy, the stability and also the efficiency of each one of them. Again, the tables containing all relevant data in detail can be found in the appropriate appendix. Finally, in Chapter 6 we present the reader with ideas about possible extensions of this project, as well as some further work. The MATLAB files containing the implementation of the methods can be found in Appendix F.
Chapter 2

Collocation

2.1 Method Description

Collocation, which was first introduced by Kansa [8], [9], is perhaps the most straightforward method amongst the three to be discussed. In order to demonstrate how collocation works, let us first consider the following general convection-diffusion equation,

    Lu = −ε ∇²u + b · ∇u + cu = f   in Ω ⊆ R^d,
    u = g_D   on ∂Ω,    (2.1)

where ε > 0 is the diffusion coefficient and b ∈ R^d. The first step of this method is to consider a set of basis functions

    Φ_j(x) = φ( ‖x − x_j‖₂ / δ ),   j = 1, ..., N,    (2.2)

where N is the total number of points we are using and j indicates which node we are centering around. Note that we are using Φ_j as a function with a vector argument and φ as a function with a scalar argument. The set of points used are known as collocation points. We then form our approximate solution by taking a linear combination of our basis functions, that is,

    s(x) = Σ_{j=1}^{N} C_j Φ_j(x).    (2.3)

Substituting s(x) back into our equation and boundary conditions and evaluating at each of our N points gives

    Σ_{j=1}^{N} C_j ( −ε ∇²Φ_j(x_i) + b · ∇Φ_j(x_i) + c Φ_j(x_i) ) = f(x_i),   i = 1, ..., N*,
    Σ_{j=1}^{N} C_j Φ_j(x_i) = g_D(x_i),   i = N* + 1, ..., N,    (2.4)
where points 1 to N* are located within the domain and points N* + 1 to N lie on the boundary. We will only consider uniform point distributions for all methods in this project, for comparison purposes. The system of equations (2.4) can be written as a matrix equation of the form

    AC = F,    (2.5)

where

    A = ( LΦ ; Φ ),   C = (C_1, ..., C_N)^T,   F = ( f ; g_D ),    (2.6)

i.e., the first N* rows of A hold the values LΦ_j(x_i) and the remaining rows the values Φ_j(x_i). The matrix A has dimensions N × N and F, C are N dimensional vectors. The method is also known as unsymmetric collocation [7] because of the non-symmetric collocation matrix A. Note that the matrix A is not symmetric even if the PDE is self-adjoint. In order to acquire the unknown coefficients C_j we must solve Equation (2.5) for C. It is therefore important that the collocation matrix is nonsingular for this method to work.

In the case of a simple interpolation problem, see (1.2), this method may sometimes yield singular collocation matrices for specific RBFs [7], like thin plate splines, for which φ(r) = r² log(r). This can be fixed by adding an extra polynomial to (2.3) and imposing additional constraints on the coefficients C_j in order to eliminate the additional degrees of freedom. However, it has been proven that this fix does not carry over to elliptic problems: Hon and Schaback [7] have managed to construct examples where the collocation matrix becomes singular, whether or not the appropriate extra polynomial is added.

In particular, Hon and Schaback [7] managed to find, relatively easily, cases where using the Gaussian RBF results in a singular collocation matrix. Numerical experiments were performed with Wendland functions as well, but a singular collocation matrix was not found. However, Hon and Schaback do not believe that using the compactly supported Wendland functions will always result in a nonsingular system of equations. This does suggest, nevertheless, that we should probably expect better conditioning when using the Wendland RBFs, compared to the Gaussian. We also note that we will not be considering any additional polynomial terms in our approximate solution because, as stated in [10] and [2], they do not offer any significant benefits with regards to the accuracy of the method.

In the following sections of this chapter we will apply the method to our model problems in one and two dimensions, for different values of ε, using the Wendland and Gaussian RBFs.
             Gaussian                 Wendland (1D)                    Wendland (2D)
    φ(r)     exp(−r²)                 (1 − r)₊⁵ (8r² + 5r + 1)         (1 − r)₊⁶ (35r² + 18r + 3)
    φ′(r)    −2r exp(−r²)             (1 − r)₊⁴ (−56r² − 14r)          (1 − r)₊⁵ (−280r² − 56r)
    φ″(r)    (−2 + 4r²) exp(−r²)      (1 − r)₊³ (336r² − 42r − 14)     (1 − r)₊⁴ (1960r² − 224r − 56)

    Table 2.1: Derivatives of RBFs.

The aim is to look into which of the two RBFs performs better in terms of conditioning of the collocation matrix and accuracy of the solution. Also of interest is how the value of the scaling parameter δ affects the method.

2.2 One Dimension

Having explained how the collocation method works, we will now apply it to our 1D problem (1.4). Prior to coding the method in MATLAB, we must first calculate the first and second derivatives of φ with respect to x, that is,

    dΦ_j/dx = (1/δ) dφ/dr if x > x_j,   −(1/δ) dφ/dr if x < x_j,
    d²Φ_j/dx² = (1/δ²) d²φ/dr²,    (2.7)

where the derivatives of φ with respect to r can be found in Table 2.1. The translation of the collocation method to a computer program is relatively simple, which is one of the reasons why this method became popular.
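To illustrate, the following is a much-simplified sketch of the idea behind Coll_1D.m in Appendix F: it assembles and solves (2.5) for the 1D model problem (1.4) with the Gaussian RBF, using (2.7) and Table 2.1, with the parameter values of Figure 2.2. The variable names are ours:

```matlab
N = 64;  ep = 0.01;  delta = 0.12;        % points, epsilon, scaling parameter
x = linspace(0, 1, N)';                   % uniform collocation points

phi   = @(r) exp(-r.^2);                  % Gaussian RBF (Table 2.1)
dphi  = @(r) -2*r .* exp(-r.^2);
d2phi = @(r) (4*r.^2 - 2) .* exp(-r.^2);

R = abs(x - x') / delta;                  % scaled distances r_ij
S = sign(x - x');                         % sign(x_i - x_j), as in (2.7)

% Interior rows: L Phi_j(x_i) = -ep Phi_j''(x_i) + Phi_j'(x_i);
% rows 1 and N are the boundary points x = 0 and x = 1.
A = -ep/delta^2 * d2phi(R) + S/delta .* dphi(R);
A([1 N], :) = phi(R([1 N], :));

F = zeros(N, 1);  F(1) = 1;               % f = 0 inside, u(0) = 1, u(1) = 0
C = A \ F;                                % solve (2.5) for the coefficients

s = @(xe) phi(abs(xe - x') / delta) * C;  % evaluate (2.3) at column vector xe
```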
2.2.1 Scaling Parameter δ

The scaling parameter δ is known to affect the accuracy of RBF based methods. Figures 2.1 and 2.2 show how the error and condition number are affected when we vary δ, for the Wendland(1D) and Gaussian RBFs respectively, where ε = 0.01. The Wendland(1D) RBF seems to provide us with exponential convergence as we increase δ. However, as the error decreases, the condition number of the method increases. This implies that as we increase δ the method becomes unstable due to the ill-conditioning of the collocation matrix. Something similar was also observed for plain interpolation problems, where, as mentioned in Schaback [15], 'either one goes for a small error and gets a bad sensitivity, or one wants a stable algorithm and has to take a comparably larger error'. As far as the Wendland(1D) case of the method is concerned, for ε = 0.01, we observe that after some point increasing δ any further does not have a significant impact on the accuracy while it still affects the condition number. This suggests that it might be worth sacrificing a bit of accuracy for a more stable method. Changing the number of points does not affect the behaviour observed in Figure 2.1.

Figure 2.1: Log of the L2 norm of the error (a) and of the condition number of the collocation matrix (b) versus the scaling parameter δ for ε = 0.01 and N = 64, using the Wendland(1D) RBF. The minimum error is 5.6 × 10^−3, for δ = 5 with condition number 2.27 × 10^10.

Figure 2.2: Log of the L2 norm of the error (a) and of the condition number of the collocation matrix (b) versus the scaling parameter δ for ε = 0.01 and N = 64, using the Gaussian RBF. The minimum error is 6.7 × 10^−4, for δ = 0.12 with condition number 4.22 × 10^17.

The Gaussian RBF, for ε = 0.01, causes a more erratic behaviour of the error and the condition number. In contrast with the Wendland(1D) RBF, the range of δ values we can use is very limited and the condition number is generally a lot larger. This might be seen as a disadvantage of the Gaussian for this type of problem; however, for particular values of δ we do get a better approximation compared to Wendland(1D). We note here that even though Wendland might outperform the Gaussian for some values of N, see Tables A.1 and A.2, we can usually obtain a more accurate solution using the Gaussian RBF, at the cost, though, of an unstable method. Again, even if we change the number of points used we still obtain similar plots.
Figure 2.3: Log of the L2 norm of the error (a) and of the condition number of the collocation matrix (b) versus the scaling parameter δ for ε = 0.5 and N = 16, using the Wendland(1D) RBF. The minimum error is 1.7 × 10^−4, for δ = 5 with condition number 4.81 × 10^5.

Figure 2.4: Log of the L2 norm of the error (a) and of the condition number of the collocation matrix (b) versus the scaling parameter δ for ε = 0.5 and N = 16, using the Gaussian RBF. The minimum error is 2.45 × 10^−7, for δ = 0.7 with condition number 3.31 × 10^17.

Now, for the case ε = 0.5, we can obtain a very good approximation using a significantly smaller number of points than we used for ε = 0.01, see Tables A.3 and A.4. The Wendland(1D) still provides us with exponential convergence, see Figure 2.3, but this time the line does not flatten out as quickly as in the ε = 0.01 case. We can improve the accuracy, with the side effect of also increasing the condition number, by considering an even larger value of δ. For example, for δ = 20 the error reduces to 8.73 × 10^−5. However, increasing δ means we also increase the support radius of our RBF, which in a way defeats the purpose of using a compactly supported function. The Gaussian, on the other hand, allows us to use a slightly bigger range of δ values this time, see Figure 2.4, and outperforms the Wendland(1D) for values of N at least up to 128, see Tables A.3 and A.4. We can use this bigger range of δ values because we are using fewer points, which means that the condition number is somewhat smaller to begin with.
Figure 2.5: Log of the error versus N for ε = 0.01. (a) Gaussian RBF, where δ = 5h; the minimum error is 2.0 × 10^−6, for N = 512 with condition number 1.1 × 10^18. (b) Wendland RBF, where δ = 0.25; the minimum error is 2.5 × 10^−5, for N = 512 with condition number 1.1 × 10^8.

2.2.2 Varying N

We have seen in the previous section that choosing the appropriate δ can lead to a more accurate solution without increasing the operation count of the method, with the Wendland(1D) RBF requiring larger δ values compared to the Gaussian. Another way to improve the accuracy is to use more points, even though in the case of the Gaussian this is not always beneficial, see Table A.4. Now, keeping δ fixed while increasing the number of points does not lead to a convergent method for the Gaussian because, as we have seen before, the behaviour of the error as we vary δ is erratic. The Wendland(1D), however, will cause the method to converge as we increase N for most values of δ, even though it still may not achieve the higher level of accuracy of the Gaussian.

For ε = 0.01 we can get convergence out of the Gaussian by setting δ = ch, where c is a constant, for some values of c, see Figure 2.5. After trying out a few values for c we found that c = 5 produces satisfactory results, but since convergence is not guaranteed for every c this is not an ideal setting. The same choice for δ can be used with the Wendland(1D) for large values of c; however, numerical experiments suggest that it is more beneficial to keep δ fixed rather than having it proportional to the meshsize h. Setting δ = ch for the Wendland RBFs is similar to what happens in finite element methods, where the support radius of the function is proportional to the meshsize. It is sometimes called the stationary setting, as the bandwidth of the matrix A stays fixed as the meshsize decreases.
Figure 2.6: Eigenvalue plots for ε = 0.01 and N = 64. (a) Gaussian with δ = 0.25. (b) Wendland(1D) with δ = 0.25.

For the case ε = 0.5, choosing δ = ch does not lead to convergence for either of the RBFs. However, we can obtain convergence, for the Wendland(1D) only, if we keep δ fixed while increasing N.

Therefore, while the Gaussian in many cases yields a better approximation, the fact that the range of δ values we can consider is very limited, coupled with the ill-conditioning of the resulting linear systems, is a significant disadvantage. Equally unsatisfactory is the fact that it is difficult to obtain a convergent scheme when we use the Gaussian.

2.2.3 Distribution of Eigenvalues

For most combinations of N and δ the method produces a collocation matrix with complex eigenvalues, for both RBFs, see Figure 2.6. The only exception we have found is for ε = 0.5 using a small number of points, where we only have real eigenvalues, see Tables A.3 and A.4. In general, when using the Gaussian, some of the eigenvalues are clustered around 0 and some have a large modulus. The Wendland usually produces eigenvalues which are better distributed, hence the smaller condition numbers. If one looks at Figure 2.6, at first it appears as though there are fewer eigenvalues when the Gaussian is used. In reality the majority of the eigenvalues are extremely close to zero, and hence they cannot be seen without zooming in multiple times. This phenomenon is not as obvious in the Wendland(1D) case, and this is what we mean by saying it produces better distributed eigenvalues.
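Plots such as Figure 2.6 come straight from the collocation matrix; a minimal sketch, assuming A has been assembled as in the sketch of Section 2.2:

```matlab
% Eigenvalues of the collocation matrix A in the complex plane,
% as in Figure 2.6, together with the condition number.
lam = eig(A);
plot(real(lam), imag(lam), 'x');
xlabel('Re(\lambda)'); ylabel('Im(\lambda)');
fprintf('condition number: %.3e\n', cond(A));
```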
Figure 2.7: Log of the L2 norm of the error (a) and of the condition number of the collocation matrix (b) versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Wendland(2D) RBF. The minimum error is 1.5 × 10^−2, for δ = 5 with condition number 8.01 × 10^11.

2.3 Two Dimensions

As for the one dimensional case, before proceeding to the coding stage we must first calculate the derivatives required for problem (1.6), that is,

    ∂Φ_j/∂x = ((x − x_j)/(δ²r)) dφ/dr,   ∂Φ_j/∂y = ((y − y_j)/(δ²r)) dφ/dr    (2.8)

and

    ∂²Φ_j/∂x² = (1/(δ²r)) dφ/dr + ((x − x_j)²/(δ⁴r)) ( −(1/r²) dφ/dr + (1/r) d²φ/dr² ),
    ∂²Φ_j/∂y² = (1/(δ²r)) dφ/dr + ((y − y_j)²/(δ⁴r)) ( −(1/r²) dφ/dr + (1/r) d²φ/dr² ),    (2.9)

where r is given by (1.3), x = (x, y), and the derivatives of φ with respect to r can be found in Table 2.1. We need to be careful to simplify the expressions for the derivatives before implementing them in MATLAB in order to avoid divisions by zero. Therefore, after some algebra, we can write the Laplacian of Φ_j as

    ∇²Φ_j = (1/(δ²r)) dφ/dr + (1/δ²) d²φ/dr²,    (2.10)

where the 1/r in the first term can always be cancelled with the r factor of dφ/dr.
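In code, the cancellation is conveniently done by defining a handle for (1/r) dφ/dr directly, so that r = 0 never triggers a division by zero; a sketch for the Wendland(2D) RBF (handle names are ours):

```matlab
% Wendland(2D) derivatives from Table 2.1, with the factor r of dphi/dr
% divided out analytically: dphi/dr = r * dphi_over_r(r).
dphi_over_r = @(r) max(1-r,0).^5 .* (-280*r - 56);
d2phi       = @(r) max(1-r,0).^4 .* (1960*r.^2 - 224*r - 56);

% Laplacian (2.10), safe at r = 0 since the 1/r has been cancelled.
lapPhi = @(r, delta) (dphi_over_r(r) + d2phi(r)) / delta^2;
```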
Figure 2.8: Log of the L2 norm of the error (a) and of the condition number of the collocation matrix (b) versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Gaussian RBF. The minimum error is 2.9 × 10^−2, for δ = 0.07 with condition number 1.09 × 10^9.

2.3.1 Scaling Parameter δ

As we are working in two dimensions, we naturally need a significantly greater number of points, especially when ε = 0.01, as the solution becomes stiff. In order to get acceptable accuracy we need to use at least 1024 points, in which case the Wendland(2D) is the better choice as far as accuracy is concerned, see Figures 2.7 and 2.8. The Gaussian surprisingly has a smaller condition number. This is because a very small value of δ was chosen; had we picked a slightly larger δ, the conditioning would be much worse, as the behaviour of the condition number over a wider range of δ values is similar to the 1D case, i.e., it increases very rapidly close to δ = 0.2. Experimenting with other values of N, we can conclude that for the case ε = 0.01 the Wendland(2D) is the better choice, see Tables A.5 and A.6, not only because of the better accuracy but also for allowing more values of δ to be used.

If we take ε = 0.5, we find that the collocation method produces more accurate results when the Gaussian is used, see Tables A.7 and A.8. It is also worth noticing how well the method performs for smoother solutions, even if just a few points are used. In general, for cases like ε = 0.01, collocation requires a large number of points, which implies larger condition numbers. The Gaussian is likely to make the method unstable very quickly, and for that reason one might want to use a compactly supported Wendland function with a not too large value for the scaling parameter δ.

2.3.2 Varying N

As in the 1D case, we are interested in whether we can achieve convergence by increasing the number of points we use. The results we have obtained for our two dimensional problem are somewhat similar to those for the 1D case.
Figure 2.9: Eigenvalue plots for ε = 0.5 and N = 256. (a) Gaussian with δ = 0.3. (b) Wendland(2D) with δ = 0.3.

If we make δ proportional to the stepsize h, i.e. δ = ch, we do not always obtain a convergent scheme for either of our functions. Moreover, even if we keep δ fixed, we find making the method converge with the Gaussian very challenging. The Wendland function, on the other hand, will result in a convergent method if δ is kept fixed.

2.3.3 Distribution of Eigenvalues

We find again that the majority of the time the method produces complex eigenvalues. We can get real eigenvalues for a small number of points and a small δ, but those cases do not produce acceptable solutions. As we observed for the 1D case, if we use the Gaussian RBF more eigenvalues tend to have modulus close to 0, while the same is not true if we use the Wendland(2D) RBF. This is quite obvious in Figure 2.9, where the eigenvalues obtained when using the Gaussian appear far fewer in number than the ones we get when the Wendland(2D) is used. This happens because the majority of them are clustered extremely close to zero.

2.4 Chapter Summary

In this chapter we have implemented the collocation method, using two different types of RBFs, for our model problems in one and two dimensions. We have seen that the Gaussian RBF provides better accuracy for the right choice of δ, except for the 2D problem with ε = 0.01, where the Wendland(2D) is clearly better. Also, the Gaussian makes the method unstable due to the ill-conditioning, and hence it is very difficult to obtain convergence, which is not the case for the Wendland RBFs.
Figures 2.10 and 2.11 show numerical solutions obtained through collocation for the case ε = 0.01, for the 1D and 2D cases respectively. Perhaps a drawback of the method when used for solving PDEs is the fact that there is no theoretical background for it, and also the fact that there is a small possibility that the collocation matrix A will be singular.

Figure 2.10: Numerical solutions for 1D model problem for ε = 0.01 and N = 64. (a) Solution obtained using the Gaussian RBF with δ = 0.12. (b) Solution obtained using the Wendland(1D) RBF with δ = 0.05.

Figure 2.11: Numerical solutions and pointwise error for 2D model problem for ε = 0.01 and N = 1024. (a) Solution obtained using the Gaussian RBF with δ = 0.1. (b) Solution obtained using the Wendland(2D) RBF with δ = 5.
Chapter 3

Galerkin Formulation

3.1 Galerkin Method for Robin Problems

The second method we consider is based on a Galerkin formulation of the problem. It is somewhat similar to a finite element method, with the distinct difference that we do not need to perform a computationally expensive mesh generation. Of course, the basis functions used in our case are radially symmetric.

Wendland [21] looked into a Galerkin based RBF method for second order PDEs with Robin boundary conditions, for an open bounded domain Ω having a C¹ boundary, that is,

    −Σ_{i,j=1}^{d} ∂/∂x_i ( a_ij ∂u/∂x_j )(x) + c(x) u(x) = f(x),   x ∈ Ω,
    Σ_{i,j=1}^{d} a_ij(x) (∂u(x)/∂x_j) ν_i(x) + h(x) u(x) = g(x),   x ∈ ∂Ω,    (3.1)

where a_ij, c ∈ L^∞(Ω), i, j = 1, ..., d, f ∈ L²(Ω), a_ij, h ∈ L^∞(∂Ω), g ∈ L²(∂Ω), and ν is the outward unit normal vector to ∂Ω. The entries a_ij(x) satisfy the following ellipticity condition: there exists a constant γ > 0 such that for all x ∈ Ω and all α ∈ R^d

    γ Σ_{j=1}^{d} α_j² ≤ Σ_{i,j=1}^{d} a_ij(x) α_i α_j.    (3.2)

As stated in [21], under the additional assumption that the functions c and h are both non-negative and one or both of them are 'uniformly bounded away from zero on a subset of nonzero measure of Ω for c or ∂Ω for h, we obtain a strictly coercive and continuous bilinear functional'

    a(u, v) = ∫_Ω ( Σ_{i,j=1}^{d} a_ij (∂u/∂x_j)(∂v/∂x_i) + c u v ) dx + ∫_{∂Ω} h u v dS,   u, v ∈ V,    (3.3)
where V = H¹(Ω). Combining a(u, v) with the continuous linear functional l(v), v ∈ V, where

    l(v) = ∫_Ω f v dx + ∫_{∂Ω} g v dS,    (3.4)

we obtain the weak formulation of the problem:

    find u ∈ V such that a(u, v) = l(v) holds for all v ∈ V,    (3.5)

which by the Lax-Milgram theorem is always uniquely solvable. The next step is to consider a finite dimensional subspace V_N ⊂ V spanned by our RBFs, that is,

    V_N := span{ Φ_j(x), j = 1, ..., N }    (3.6)

for a set of pairwise distinct points X = {x_1, ..., x_N} ⊂ Ω, and search for an approximation s ∈ V_N to u that satisfies a(s, v) = l(v) for all v ∈ V_N. It is mentioned in [21] that it is preferable to use compactly supported functions, such as the Wendland RBFs, in order to obtain some sparsity in the resulting matrix. Wendland provided the theoretical bounds for such settings. We give below a special case of Theorem 5.3, with m = 0, proved in [21].

Theorem 3.1.1. Assume u ∈ H^k(Ω) and Φ is such that its Fourier transform satisfies Φ̂(ω) ∼ (1 + ‖ω‖₂)^{−2σ} with σ ≥ k > d/2. Then there exists a function s ∈ V_N such that

    ‖u − s‖_{L²(Ω)} ≤ C ĥ^k ‖u‖_{H^k(Ω)},

for sufficiently small ĥ = sup_{x∈Ω} min_{1≤j≤N} ‖x − x_j‖₂.

The Wendland RBFs satisfy the requirements of the theorem with σ = 3 and σ = 3.5, for the 1D and 2D versions respectively. Also, ĥ corresponds to the data density or mesh norm, which as stated in [4] 'measures the radius of the largest data-free hole contained in Ω'.

3.2 Galerkin Method for the Dirichlet Problem

In our case, both model problems have Dirichlet boundary conditions. This poses a difficulty for Galerkin methods, as they cannot simply be used with RBFs, as mentioned in [1]. Since the boundary conditions need to be satisfied by the space in which the solution u lies [1], [3], a problem arises because RBFs, even compactly supported ones, do not satisfy these boundary conditions in general. Therefore the error bound given in Theorem 3.1.1 is, most likely, not valid in our case.
Different methods have been developed to tackle problems with Dirichlet boundary conditions, including approximating the Dirichlet problem by an appropriate Robin problem [1], using a Lagrange multiplier approach [3], and others [11]. We will follow a quite different approach, by adding additional terms to our weak formulation in order to impose the boundary conditions.

The weak formulation of the general convection-diffusion problem (2.1) is to find u ∈ H¹_E(Ω) such that

    ∫_Ω ( ε ∇u · ∇v + (b · ∇u) v + c u v ) dΩ − ∫_{∂Ω} ε v (∂u/∂ν) dS = ∫_Ω f v dΩ    (3.7)

for all v ∈ H¹_{E0}(Ω), where H¹_E(Ω) = {ω | ω ∈ H¹(Ω), ω = g_D on ∂Ω} and H¹_{E0}(Ω) = {ω | ω ∈ H¹(Ω), ω = 0 on ∂Ω}. Note here that the boundary terms vanish since v ∈ H¹_{E0}(Ω). However, as we have pointed out earlier, our RBFs span V_N, which is a subspace of H¹(Ω), and therefore cannot be used in this setting. In order to impose the boundary conditions we consider the following relation,

    ∫_{∂Ω} ( θ ε ∂v/∂ν + κ v ) u dS = ∫_{∂Ω} ( θ ε ∂v/∂ν + κ v ) g_D dS,    (3.8)

where θ ∈ [−1, 1] and κ = c/h, which we incorporate in (3.7). Note that here h is the meshsize, which is constant as we are using a uniform distribution. Our new formulation of the problem is: find u ∈ H¹(Ω) such that

    a(u, v) = l(v) holds for all v ∈ H¹(Ω),    (3.9)

where

    a(u, v) = ∫_Ω ( ε ∇u · ∇v + (b · ∇u) v + c u v ) dΩ − ∫_{∂Ω} ε v (∂u/∂ν) dS + ∫_{∂Ω} ( θ ε ∂v/∂ν + κ v ) u dS,
    l(v) = ∫_Ω f v dΩ + ∫_{∂Ω} ( θ ε ∂v/∂ν + κ v ) g_D dS.    (3.10)

The idea of imposing the boundary conditions in a weak sense, as we have done here, has been discussed in [16] for finite element methods. After a few numerical experiments we have found that choosing θ = −1 and κ = 5/h produces more accurate results the majority of the time. Setting θ = −1 means that the symmetric part of the elliptic operator will correspond to a symmetric bilinear form. We construct the approximate solution s ∈ V_N by taking a linear combination of our basis functions, that is,

    s(x) = Σ_{j=1}^{N} C_j Φ_j(x),    (3.11)
and we take v to be each of our basis functions in turn, i.e., v = Φ_i(x) for i = 1, ..., N. The problem once again reduces to a matrix equation

    AC = F,    (3.12)

where A_ij = a(Φ_j, Φ_i) and F_i = l(Φ_i). Now all that is left is to solve Equation (3.12) in order to obtain the coefficients C_j.

A disadvantage of this method is the fact that we have to employ numerical integration in order to compute the entries of the matrix A. This is computationally expensive and time consuming, especially for problems in more than one dimension. After trying different MATLAB methods for numerical integration, as well as some we implemented from scratch, like the trapezium and Simpson's rules, we found the quad2d method for double integration and the integral method for one dimensional integrals to be the fastest. However, as the execution time is still very poor, especially when we increase N, we had to use parallel for loops in the implementation of the algorithm in order to improve running times, see Appendix F, Section F.2. The improvement this gives depends, of course, on how powerful the CPU of the computer is.

3.3 One Dimension

We will now apply the Galerkin formulation method to our 1D problem. The matrix A and right-hand-side vector F are given by

    A_ij = ∫₀¹ ( ε Φ′_j Φ′_i + Φ′_j Φ_i ) dx + ε( θ Φ_j(1) Φ′_i(1) − Φ′_j(1) Φ_i(1) ) − ε( θ Φ_j(0) Φ′_i(0) − Φ′_j(0) Φ_i(0) ) + κ Φ_j(1) Φ_i(1) + κ Φ_j(0) Φ_i(0),
    F_i = −θ ε Φ′_i(0) + κ Φ_i(0).    (3.13)

The expressions for the derivatives of Φ_j with respect to x are given by (2.7), and the derivatives of φ with respect to r can be found in Table 2.1 for each RBF.
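To illustrate where the cost comes from, the entries (3.13) can be assembled entry by entry with MATLAB's integral; the following is a much-simplified sketch of the loop at the heart of Gal_1D.m, assuming the basis functions Phi{j} and their derivatives dPhi{j} have been set up beforehand as cell arrays of vectorised handles (all names are ours):

```matlab
% Entry-by-entry assembly of (3.13); ep, theta, kappa as in the text.
A = zeros(N);
for i = 1:N
    for j = 1:N
        integrand = @(x) ep*dPhi{j}(x).*dPhi{i}(x) + dPhi{j}(x).*Phi{i}(x);
        A(i,j) = integral(integrand, 0, 1) ...               % N^2 quadratures
            + ep*(theta*Phi{j}(1)*dPhi{i}(1) - dPhi{j}(1)*Phi{i}(1)) ...
            - ep*(theta*Phi{j}(0)*dPhi{i}(0) - dPhi{j}(0)*Phi{i}(0)) ...
            + kappa*(Phi{j}(1)*Phi{i}(1) + Phi{j}(0)*Phi{i}(0));
    end
end
F = zeros(N, 1);
for i = 1:N
    F(i) = -theta*ep*dPhi{i}(0) + kappa*Phi{i}(0);
end
C = A \ F;
```

The N² calls to integral are exactly the bottleneck described above; the outer loop can be turned into a parfor loop, which is the parallelisation referred to in the text.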
3.3.1 Effect of δ

In this section we investigate how the scaling parameter δ affects the accuracy of the method, as well as which of the two RBFs gives better results for the one dimensional model problem. Let us first take a look at the case ε = 0.01, for which the solution is stiff. Using the Wendland(1D) RBF results in a method whose accuracy is sensitive to the choice of the scaling parameter δ, as can be seen in Figure 3.1. This phenomenon is weaker if fewer points are used; however, the accuracy in those cases is not desirable.

Figure 3.1: Log of the L2 norm of the error (a) and of the condition number of the collocation matrix (b) versus the scaling parameter δ for ε = 0.01 and N = 64, using the Wendland(1D) RBF. The minimum error is 5.6 × 10^−3, for δ = 0.19 with condition number 3.64 × 10^10.

Figure 3.2: Log of the L2 norm of the error (a) and of the condition number of the collocation matrix (b) versus the scaling parameter δ for ε = 0.01 and N = 64, using the Gaussian RBF. The minimum error is 1.1 × 10^−3, for δ = 0.05 with condition number 2.62 × 10^17.

The condition number of the resulting matrix also depends on the choice of δ, where most times an increase in the scaling parameter will result in an increase of the condition number. It turns out that mostly small values of δ produce better results, see Table B.1. If we use the Gaussian RBF we find that we can improve the accuracy of the method, see Table B.2. The behaviour of the error as we vary δ seems unstable, but in this case we can find a range of δ values for which the error does not change dramatically, and this can be observed for other values of N. The condition number does not really increase as we increase δ, but for the values of the scaling parameter for which we have good accuracy, for N = 64, the system is more ill conditioned. However, this is not the case for every choice of N, making it hard to predict which values of δ will produce an accurate method.
Figure 3.3: Log of the L2 norm of the error (a) and of the condition number of the collocation matrix (b) versus the scaling parameter δ for ε = 0.5 and N = 16, using the Wendland(1D) RBF. The minimum error is 3.0 × 10^−5, for δ = 4.1 with condition number 4.43 × 10^16.

Figure 3.4: Log of the L2 norm of the error (a) and of the condition number of the collocation matrix (b) versus the scaling parameter δ for ε = 0.5 and N = 16, using the Gaussian RBF. The minimum error is 5.61 × 10^−6, for δ = 0.36 with condition number 3.54 × 10^17.

We observe that using the Gaussian RBF will most times result in worse conditioning than the Wendland RBF. The reason is that the Wendland(1D) can produce sparse linear systems for the method, in contrast to the Gaussian.

We now move on to the smoother case of ε = 0.5. The first observation we make is that the method requires far fewer points in order to produce a good approximation, as expected. The Wendland(1D) RBF this time allows us to use a greater range of δ values; however, this is no longer true if the number of points is large. Also, there comes a point where increasing the parameter any further has negative effects on the accuracy and stability of the method, see Figure 3.3. Nevertheless, if a small number of points is used, the choice of δ is not as tricky as when ε = 0.01.
Figure 3.5: Log of L2 norm of the error versus N for ε = 0.01. (a) Wendland(1D) with δ = 0.2. (b) Gaussian with δ = 0.25.

Now for the Gaussian, even though the choice of δ is a bit more difficult, see Figure 3.4, the accuracy of the method is clearly better, even if more unstable, for most values of N, see Tables B.3 and B.4.

3.3.2 Increasing the number of points N

It should be possible to achieve convergence of the method by increasing the number of points used. We remind the reader that we are always using uniformly distributed points. As can be observed from the tables in Appendix B, values of δ that work well with a specific number of points do not necessarily result in an accurate scheme if we change N. This is an issue if one wishes to obtain convergence by increasing the number of points used. After a few numerical experiments we realise that keeping δ fixed while increasing N does not guarantee convergence for either RBF, see Figure 3.5. Also, for compactly supported functions such as the Wendland(1D) RBF, using an increasingly smaller stepsize h while keeping the support fixed eliminates the advantage of having sparse matrices [20]. The fact that we cannot obtain convergence by increasing N confirms that Theorem 3.1.1 does not hold in our setting. The other choice we have would be to take δ proportional to h. This choice would potentially keep the sparsity of the resulting linear systems if a compactly supported function is used. However, numerical experiments suggest that convergence is unattainable in this setting too. This was also observed in [20], where the Helmholtz equation is considered.
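The experiments behind Figure 3.5 follow a simple pattern; a sketch, where solve_gal_1d stands for a hypothetical wrapper around the assembly above that returns the numerical solution at the evaluation grid:

```matlab
% Fixed-delta refinement test behind plots like Figure 3.5: double N and
% record the discrete L2 error (1.8). solve_gal_1d is a hypothetical
% wrapper around the Galerkin assembly, returning s at the points xe.
delta = 0.2;  ep = 0.01;  m = 1001;
xe = linspace(0, 1, m)';
ue = (1 - exp(-(1-xe)/ep)) / (1 - exp(-1/ep));   % exact solution (1.5)
for N = [16 32 64 128 256]
    se  = solve_gal_1d(N, ep, delta, xe);        % hypothetical solver
    err = norm(ue - se) / sqrt(m - 1);
    fprintf('N = %4d   L2 error = %.3e\n', N, err);
end
```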
Figure 3.6: Eigenvalue plots for ε = 0.01 and N = 64. (a) Wendland with δ = 0.19. (b) Wendland with δ = 0.19, zoomed in.

Figure 3.7: Eigenvalue plots for ε = 0.5 and N = 32. (a) Wendland with δ = 0.1. (b) Wendland with δ = 0.4.

3.3.3 Eigenvalues

For the case ε = 0.01, both RBFs produce mainly complex eigenvalues. Figure 3.6 shows the distribution of eigenvalues when the Wendland(1D) RBF is used. If instead we had chosen the Gaussian RBF, then in addition to having a dense matrix, the eigenvalue distribution, even though schematically similar to Figure 3.6, consists mainly of eigenvalues whose real part is really close to zero. This is expected, as we have already seen that using the Gaussian usually results in ill conditioned matrices.

For the case ε = 0.5, when using the Wendland(1D) RBF we get better conditioning when we use small enough δ values that give a sparse linear system. This can also be deduced from the distribution of the eigenvalues. In Figure 3.7 we observe that for a bigger δ the majority of eigenvalues are situated close to 0. The Gaussian RBF still produces eigenvalues whose modulus tends to zero as we increase δ, at a rate faster than that of the Wendland(1D), as expected.
Figure 3.8: Log of the L2 norm of the error (a) and of the condition number of the collocation matrix (b) versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Wendland(2D) RBF. The minimum error is 2.95 × 10^−1, for δ = 0.1 with condition number 5.32 × 10^19.

3.4 Two Dimensions

Applying the Galerkin formulation method to problems in higher dimensions becomes increasingly complex, as we have to perform the appropriate numerical integrations. This has an impact on the speed of the method, which quickly becomes more and more unattractive to use because of the slow execution times. After applying the method to our 2D model problem we obtain the matrix formulation of the problem, where the expressions for the entries of the matrix A and vector F are considerably longer and messier than those we had for the 1D problem, which is why we omit them.

3.4.1 Effect of δ

Varying δ for the 2D case, we observe a slightly peculiar behaviour of the error and condition number of the method. For both values of ε we are considering, and both RBFs, the condition number always seems to have a constant value for all values of δ except δ = 1, for which it rapidly decreases, see for example Figures 3.8 and 3.9. The behaviour of the error is similar, with the difference that for δ = 1 it sometimes increases and sometimes decreases, see Figure B.2. Now in terms of accuracy, for both ε = 0.01 and ε = 0.5, using the Wendland(2D) RBF provides us with a more accurate solution, see Tables B.5 to B.8. Concerning the conditioning of the method, for the same value of δ the Wendland(2D) produces a better conditioned linear system.
Figure 3.9: Log of the L2 norm of the error (a) and of the condition number of the collocation matrix (b) versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Gaussian RBF. The minimum error is 1.42, for δ = 0.1 with condition number 9.81 × 10^19.

However, for the choice of δ that produces the most accurate result for each RBF, the Gaussian may sometimes result in a smaller condition number than the Wendland(2D).

3.4.2 Increasing the number of points N

We saw that we were not able to make the Galerkin method converge by increasing the number of points for the 1D model problem. After looking at the error and condition number plots of Section 3.4.1, one may suspect that this will also be the case for our 2D problem. As predicted, we found obtaining a convergent method to be difficult. Keeping δ fixed while increasing N not only eliminates the advantage of having compactly supported functions when using the Wendland(2D), but also fails to produce a convergent scheme for either of our two RBFs, for either choice of ε. We have also considered taking δ proportional to the stepsize h, again with no success. This seems to be a disadvantage of the method: if the choice of δ is not appropriate, increasing N not only rapidly increases the computational time, but might also have negative effects on the accuracy of the method.
3.4.3 Eigenvalues

The first observation we make with regards to the eigenvalue distribution for the 2D problem is that it is the same for all values of δ not equal to 1, for both RBFs. This is something we expected after looking at the condition number plots of Section 3.4.1. We note that the majority of the time the method produces complex eigenvalues, of which, for δ = 1, the majority have real and imaginary parts that are extremely close to zero. Comparing the distributions of eigenvalues produced by our two RBFs, we find that for the same value of δ the Gaussian results in eigenvalues closer to zero, and usually one real eigenvalue with a very large modulus. In contrast, when employing the Wendland(2D), whilst we still get complex eigenvalues, they are usually situated a bit further away from 0 and the real eigenvalue with the largest modulus is smaller in comparison, see Figures 3.10 and 3.11. Our numerical experiments suggest that for δ = 1 the conditioning when using the compactly supported RBF is better than when using the Gaussian, while the same is not always true for other values of δ. We note that for ε = 0.5, i.e., for smoother solutions, the conditioning is generally better.

Figure 3.10: Eigenvalue distribution for ε = 0.01 using the Wendland(2D) RBF with N = 64: (a) δ = 0.1, (b) δ = 1.

Figure 3.11: Eigenvalue distribution for ε = 0.01 using the Gaussian RBF with N = 64: (a) δ = 0.1, (b) δ = 1.
3.5 Chapter Summary

After implementing the Galerkin method for our model problems, the obvious conclusion is that it is impractical to use, because of the slow execution times and also because it seems impossible to obtain a convergent scheme. We have observed that for the 1D case using the Gaussian results in a more accurate scheme, but with worse conditioning, whereas for the 2D case the Wendland(2D) is without a doubt the better choice. Figures 3.12 and 3.13 show the numerical solutions to the 1D and 2D model problems for ε = 0.01, where for the 2D case it is obvious that neither of the RBFs produces an acceptable solution. Employing the Wendland RBFs results in sparse matrices for small enough δ, and usually in better conditioning for the method.

Figure 3.12: Numerical solutions for 1D model problem for ε = 0.01 and N = 64. (a) Solution obtained using the Gaussian RBF with δ = 0.05. (b) Solution obtained using the Wendland(1D) RBF with δ = 0.19.

Figure 3.13: Numerical solutions and pointwise error for 2D model problem for ε = 0.01 and N = 1024. (a) Solution obtained using the Gaussian RBF with δ = 0.1. (b) Solution obtained using the Wendland(2D) RBF with δ = 0.1.
Chapter 4

Generalised Interpolation

The final method we will discuss is called generalised interpolation. This method is unique in the sense that it results in symmetric collocation matrices, which are also positive definite for constant coefficient PDEs when positive definite RBFs are used [22]. Wendland in [22] has managed to provide bounds for the smallest eigenvalue of this method for general boundary value problems, while in [4] a bound on the condition number is provided for strictly elliptic PDEs with Dirichlet boundary conditions, i.e., for

    Lu = f   in Ω,
    u = g_D   on ∂Ω,    (4.1)

where

    Lu(x) = Σ_{i,j=1}^{d} a_ij(x) ∂²u(x)/∂x_i ∂x_j + Σ_{i=1}^{d} b_i(x) ∂u(x)/∂x_i + c(x) u(x),    (4.2)

and the coefficients a_ij(x) satisfy the ellipticity condition (3.2). These results, however, hold only for a specific type of RBFs. As the setting discussed in [4] is closer to our model problems, we will focus on it.

4.1 Method Description

Before moving on to the convergence and stability results, in this section we provide the reader with a description of the method. To start off, we define the functionals λ_i as follows:

    λ_i(u) = Lu(x_i) if x_i ∈ Ω,
    λ_i(u) = u(x_i) if x_i ∈ ∂Ω.    (4.3)

For generalised interpolation, we construct our approximation s to the solution u in a slightly different way than in the previous two methods.
The approximation s to the exact solution u is this time given by

    s(x) = Σ_{j=1}^{N} C_j λ_j^χ φ( ‖x − χ‖₂ / δ ),    (4.4)

where this time our RBF depends on two variables, x and χ. Note that we have applied the functional λ_j to the function φ with respect to the new argument χ. In this chapter we redefine Φ as a function of two vector arguments, that is, Φ(x, χ) = φ(r), where r = ‖x − χ‖₂ / δ. As always, we are using N uniformly distributed points in order to simplify the comparison process. We then substitute our expression for s(x) back into the PDE and boundary conditions, or equivalently apply the functional λ_i to s with respect to x, that is,

    λ_i^x(s) = f_i,   i = 1, ..., N,    (4.5)

where

    f_i = f(x_i) if x_i ∈ Ω,
    f_i = g_D(x_i) if x_i ∈ ∂Ω.    (4.6)

This can be rewritten as a matrix equation

    AC = F,    (4.7)

where the entries of the symmetric collocation matrix A are given by

    A_ij = λ_i^x λ_j^χ φ( ‖x − χ‖₂ / δ ),    (4.8)

and the entries of the right-hand-side vector F are given by F_i = f_i. Solving the matrix Equation (4.7) provides us with the unknown coefficients C_j and hence the approximate solution s(x).

4.2 Stability and Accuracy

We first need to give the definition of the fractional Sobolev space W₂^σ(R^d). As stated in [4], 'we describe functions in the fractional Sobolev space W₂^σ(R^d) as those square integrable functions that are finite in the norm

    ‖f‖²_{W₂^σ(R^d)} = ∫_{R^d} |f̂(ω)|² (1 + ‖ω‖₂²)^σ dω,    (4.9)

where f̂ is the usual Fourier transform'. We note here that σ can be either an integer or a fraction. Also needed is the definition of a reproducing kernel of a Hilbert space, which is given below as found in [4].
Definition 4.2.1 (Reproducing kernel). Suppose H ⊂ C(Ω) denotes a real Hilbert space of continuous functions f : Ω → ℝ; then Φ : Ω × Ω → ℝ is said to be a reproducing kernel for H if

• Φ(·, χ) ∈ H for all χ ∈ Ω;
• f(χ) = (f, Φ(·, χ))_H for all f ∈ H and all χ ∈ Ω.

Both of our RBFs are reproducing kernels of a Hilbert space, also known as the native space of the RBF [5]. We now move on to the main stability result proven in [4]. It is important to notice that the result only holds for the Wendland RBFs, which are compactly supported reproducing kernels. As we will also see from numerical experiments in the following sections, this result does not apply to the Gaussian RBF.

Theorem 4.2.1. Suppose W^σ_2(ℝ^d) with σ > d/2 + 2 has a compactly supported reproducing kernel Φ whose Fourier transform satisfies

\[ c_1 \bigl(1 + \|\omega\|_2^2\bigr)^{-\sigma} \le \hat{\Phi}(\omega) \le c_2 \bigl(1 + \|\omega\|_2^2\bigr)^{-\sigma}. \]

Let 0 < δ ≤ 1. Suppose L is a linear, strictly elliptic, bounded, second order differential operator. Then for sufficiently small δ the condition number of the collocation matrix A can be bounded by

\[ \operatorname{cond}(A) \le C \delta^{-4} \left(1 + \frac{2\delta}{h}\right)^{d} \left(\frac{\delta}{h}\right)^{2\sigma-d} \]

with a constant C independent of h and δ.

The bound on the condition number of A is derived from the bounds

\[ \lambda_{\min} \ge C_1 \left(\frac{h}{\delta}\right)^{2\sigma-d}, \qquad \lambda_{\max} \le C_2 \, \delta^{-4} \left(1 + \frac{2\delta}{h}\right)^{d}, \tag{4.10} \]

where λ_max and λ_min are respectively the maximum and minimum eigenvalues of A, and C₁, C₂ > 0 are independent of h and δ.
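To see this explicitly, note that A is symmetric positive definite in this setting, so its condition number is simply the ratio of its extreme eigenvalues; dividing the two bounds in (4.10) gives the stated estimate with C = C₂/C₁:

\[ \operatorname{cond}(A) = \frac{\lambda_{\max}}{\lambda_{\min}} \le \frac{C_2 \, \delta^{-4} \bigl(1 + 2\delta/h\bigr)^{d}}{C_1 \bigl(h/\delta\bigr)^{2\sigma-d}} = \frac{C_2}{C_1} \, \delta^{-4} \left(1 + \frac{2\delta}{h}\right)^{d} \left(\frac{\delta}{h}\right)^{2\sigma-d}. \]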
It is proved in [19] that the Wendland(1D) and Wendland(2D) RBFs satisfy the conditions of Theorem 4.2.1 with σ = 3 and σ = 3.5, respectively. Wendland and Farrell in [4] have also provided a bound on the L₂ norm of the error; we give their Lemma 4.4 below as Theorem 4.2.2.

Theorem 4.2.2 (L₂ error). Assume δ ∈ (0, 1]. Let u ∈ H^σ(Ω) be the solution of (4.1). Let the domain Ω have a C^{k,s} boundary for s ∈ (0, 1] such that σ = k + s and σ > 2 + d/2. Then the error between the solution u and its generalised interpolation approximation s can be bounded in the L₂ norm by

\[ \|u - s\|_{L_2(\Omega)} \le C \, \delta^{-\sigma} h^{\sigma-2} \|u\|_{H^\sigma(\Omega)}, \]

where h = \sup_{x \in \Omega} \min_{1 \le j \le N} \|x - x_j\|_2.

In the original version of the theorem, h is the maximum of the mesh norm of the points in Ω and the mesh norm of the points on the boundary ∂Ω. In our case we have uniformly distributed points, so h can be taken to simply be the mesh size. The definition of a C^{k,s} boundary is given in [23], Definition 2.7. In our numerical experiments we will consider three cases for δ: the stationary setting δ = ch, the nonstationary setting δ = ch^{1−2/σ}, and keeping δ fixed. We note that the nonstationary setting will only be considered for the Wendland RBFs.

          Gaussian                       Wendland(1D)                        Wendland(2D)
φ′′′(r)   −4r(2r² − 3)e^{−r²}            (1 − r)²₊ (−1680r² + 840r)          (1 − r)³₊ (5040r − 11760r²)
φ⁗(r)     (12 − 48r² + 16r⁴)e^{−r²}      (1 − r)₊ (6720r² − 5880r + 840)     1680(1 − r)²₊ (5r − 3)(7r − 1)

Table 4.1: Third and fourth derivatives of the RBFs.

4.3 One Dimension

Prior to coding the method for our 1D model problem in MATLAB, we need to calculate the appropriate derivatives. Let G_j = λ_j^χ φ(|x − χ|/δ) for j = 1, …, N, where

\[ G_j(x) = \left( -\varepsilon \frac{\partial^2 \Phi}{\partial \chi^2} + \frac{\partial \Phi}{\partial \chi} \right)\bigg|_{\chi = x_j}, \quad j = 2, \ldots, N-1, \qquad G_j(x) = \phi\!\left(\frac{|x - x_j|}{\delta}\right), \quad j = 1, N, \tag{4.11} \]

and

\[ \frac{\partial \Phi}{\partial \chi} = \begin{cases} -\dfrac{1}{\delta} \dfrac{d\phi}{dr} & \text{if } x > \chi, \\[4pt] \dfrac{1}{\delta} \dfrac{d\phi}{dr} & \text{if } x < \chi, \end{cases} \qquad \frac{\partial^2 \Phi}{\partial \chi^2} = \frac{1}{\delta^2} \frac{d^2\phi}{dr^2}. \tag{4.12} \]

We can then write s(x) = \sum_{j=1}^{N} C_j G_j(x), which we essentially substitute back into the PDE and boundary conditions. We therefore need to calculate up to the fourth derivative of our RBFs with respect to r; these are given in Tables 2.1 and 4.1. We also need the first and second derivatives of G_j with respect to x, which are given by

\[ G_j'(x) = \frac{\partial r}{\partial x} \left( -\frac{\varepsilon}{\delta^2} \frac{d^3\phi_j}{dr^3} + \frac{\partial r}{\partial \chi}\bigg|_{\chi = x_j} \frac{d^2\phi_j}{dr^2} \right), \qquad G_j''(x) = -\frac{\varepsilon}{\delta^4} \frac{d^4\phi_j}{dr^4} + \frac{1}{\delta^2} \frac{\partial r}{\partial \chi}\bigg|_{\chi = x_j} \frac{d^3\phi_j}{dr^3}, \tag{4.13} \]

where r = |x − χ|/δ. We can now move on to developing our code in MATLAB and experimenting.
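Before doing so, it may help to see how compact the matrix entries become once the functional is applied in both arguments. The sketch below is our own illustration, not code from the thesis, for the 1D problem −εu″ + u′ = 0 with the Gaussian RBF; it assumes that N, delta, epsilon and the vector x of uniformly distributed points (with x(1) and x(N) on the boundary) are already defined. Applying L in both x and χ makes the sign-dependent convection terms cancel in the interior–interior entries, which is consistent with A being symmetric and can be checked by hand against (4.13).

    % Hedged sketch: symmetric generalised-interpolation matrix, 1D Gaussian.
    % The derivatives of phi are taken from Tables 2.1 and 4.1.
    dphi  = @(r) -2*r.*exp(-r.^2);                     % phi'
    ddphi = @(r) (4*r.^2 - 2).*exp(-r.^2);             % phi''
    d4phi = @(r) (16*r.^4 - 48*r.^2 + 12).*exp(-r.^2); % phi''''

    A = zeros(N, N);
    for i = 1:N
        for j = 1:N
            r = abs(x(i) - x(j))/delta;
            s = sign(x(i) - x(j));
            bi = (i == 1) || (i == N);   % is x(i) a boundary point?
            bj = (j == 1) || (j == N);   % is x(j) a boundary point?
            if ~bi && ~bj    % L applied in both arguments
                A(i,j) = (epsilon^2/delta^4)*d4phi(r) - (1/delta^2)*ddphi(r);
            elseif ~bi && bj % L applied in x only
                A(i,j) = -(epsilon/delta^2)*ddphi(r) + (s/delta)*dphi(r);
            elseif bi && ~bj % lambda_j applied in chi only
                A(i,j) = -(epsilon/delta^2)*ddphi(r) - (s/delta)*dphi(r);
            else             % plain point evaluation on the boundary
                A(i,j) = exp(-r^2);
            end
        end
    end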
Figure 4.1: Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 64, using the Wendland(1D) RBF: (a) log of the L₂ norm of the error versus δ; (b) log of the condition number of the collocation matrix versus δ. The minimum error is 2.51 × 10⁻¹, for δ = 5, with condition number 2.85 × 10⁹.

4.3.1 Effect of δ

As we have seen for the previous two methods, the choice of δ plays an important role in how accurate and how ill-conditioned our method is. We start by considering the case for which the exact solution to our model problem is stiff, i.e., for ε = 0.01. Observing Figures 4.1 and 4.2, the first thing we notice is that the Wendland(1D) RBF allows the use of a wider range of δ values. However, using the Gaussian allows us to achieve a far more accurate approximation to the solution, even though by doing so we jeopardise the stability of the method. The Wendland(1D) seems to offer exponential convergence as we increase δ. If we consider more values of δ we can slightly improve the accuracy, but we also increase the condition number: for example, for δ = 20 and N = 64 the error is 0.2455, with the condition number shooting up to 2.04 × 10¹¹.
Figure 4.2: Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 64, using the Gaussian RBF: (a) log of the L₂ norm of the error versus δ; (b) log of the condition number of the collocation matrix versus δ. The minimum error is 8.40 × 10⁻⁴, for δ = 0.09, with condition number 4.97 × 10¹⁷.

Figure 4.3: Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 16, using the Wendland(1D) RBF: (a) log of the L₂ norm of the error versus δ; (b) log of the condition number of the collocation matrix versus δ. The minimum error is 1.14 × 10⁻⁴, for δ = 5, with condition number 597.

We feel that it might not be worth sacrificing stability for such a small improvement in accuracy. The Gaussian, on the other hand, produces accurate solutions for only a few δ values, which has the disadvantage that it might be hard to choose a ‘good’ δ value. In general, the behaviour of the error versus δ is similar for other values of N for the Wendland(1D), whereas for the Gaussian we observe less erratic behaviour for smaller values of N.

Now let us have a look at the smoother case, ε = 0.5. Observing Figure 4.3 we see that the error reduces as we increase δ, while the condition number seems to decrease initially, only to start increasing again later. If we produce the same plots considering a larger range of δ values, for example up to δ = 50, the resulting plots are almost identical to those of Figure 4.1.
Figure 4.4: Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 16, using the Gaussian RBF: (a) log of the L₂ norm of the error versus δ; (b) log of the condition number of the collocation matrix versus δ. The minimum error is 1.36 × 10⁻⁸, for δ = 0.86, with condition number 5.15 × 10¹⁷.

The Gaussian again provides a much better approximation, see Figure 4.4, but the difference in the conditioning of the method is enormous. We note here that, even though the behaviour of the error is again erratic, almost all of the δ values considered will produce acceptable accuracy, which is almost always better than that produced by the Wendland(1D). Once more, the decision on which of the RBFs is better depends on whether accuracy or stability is more important to the user.

4.3.2 Varying N

The question we want to answer in this section is whether convergence can be obtained by increasing N, and what δ should be in order to achieve it. Let us first consider the case ε = 0.01. Our numerical experiments have shown that the stationary setting δ = ch does not lead to a convergent scheme when the Wendland(1D) is used, while for the nonstationary setting δ = ch^{1−2/σ}, or keeping the scaling parameter fixed, we obtain convergence, see Figure 4.5(b). This agrees with Theorem 4.2.2.

Figure 4.5: Log of the error versus N for ε = 0.01: (a) Gaussian with δ = 5h; (b) Wendland(1D) with δ = 0.25.
How accurate the scheme is depends on the value of c for the nonstationary setting, or on the fixed δ we choose. We observed that larger values of c, or of δ in general, produce more accurate approximations but also increased condition numbers. If we use the Gaussian instead, the method does not converge if we keep δ fixed. Also, when the Gaussian is used with the stationary setting, the method seems to initially converge but then starts to diverge, see Figure 4.5(a). However, even in this situation, where the accuracy might not necessarily improve as we increase N, using the Gaussian leads to more accurate results.

For ε = 0.5 we have similar results. Again, it seems hard to make the method converge when the Gaussian is used, while for the Wendland(1D) it converges if we fix the scaling parameter or use the nonstationary setting, see Figure 4.6. Once more we get better results for large values of c, e.g. c = 20, 25 and so on. We note here that higher accuracy can be achieved for the case ε = 0.5 than for ε = 0.01. The Wendland(1D) seems to have an advantage over the Gaussian, as for the right choice of δ the method converges as we increase N.

Figure 4.6: Log of the error versus N for ε = 0.5, using the Wendland(1D) with δ = 15h^{1−2/σ}.

4.3.3 Eigenvalues

We are interested in the distribution of the eigenvalues of the collocation matrix A. More importantly, we want to check whether the bounds for the maximum and minimum eigenvalues given by (4.10) hold in our case.

In all our numerical experiments, for both choices of ε we are considering, using the Wendland(1D) produced strictly positive real eigenvalues, see Tables C.1 and C.3.
Figure 4.7: Eigenvalue distributions for ε = 0.01: (a) Gaussian with δ = 0.25; (b) Wendland(1D) with δ = 0.25.

The Gaussian, however, for up to a certain number of points yielded real eigenvalues, some of which were negative and extremely close to zero; increasing N further caused complex eigenvalues with very small imaginary parts to appear, see Tables C.2 and C.4. This confirms that using the Wendland(1D) function produces a positive definite collocation matrix, while our numerical findings suggest this is not the case if we use the Gaussian RBF, see Figure 4.7. Since the collocation matrix A is always symmetric, its eigenvalues should always be real; the imaginary parts must therefore be due to rounding errors, either in the computation of the matrix entries or in the computation of the eigenvalues. In fact, the matrix A should be positive definite [4], so the negative eigenvalues must also be an artifact. Nevertheless, the smallest Gaussian eigenvalues are significantly smaller than the smallest Wendland ones.

As far as the bounds given by (4.10) are concerned, we found that they were satisfied when the Wendland(1D) was used, for all the different values of N we considered and δ ∈ (0, 1], for both values of ε. Of course, there is no reason to check these bounds for the Gaussian, as they only apply to compactly supported RBFs.

4.4 Two Dimensions

Generalised interpolation requires us to calculate up to the fourth derivative of our RBFs, so it can only be used with RBFs that belong at least to C⁴. Another drawback, which becomes obvious when the method is used in higher dimensions, is the need to essentially apply the PDE twice, which complicates the process significantly even in just two dimensions.
For our 2D model problem, let us write s(x) = \sum_{j=1}^{N} C_j G_j(x), where G_j(x) = λ_j^χ φ(‖x − χ‖₂/δ) for j = 1, …, N. That is,

\[ G_j(x) = \bigl( -\varepsilon \nabla^2_{\chi} \Phi + (1, 2) \cdot \nabla_{\chi} \Phi \bigr)\big|_{\chi = x_j}, \quad j = 1, \ldots, N^*, \qquad G_j(x) = \phi\!\left(\frac{\|x - x_j\|_2}{\delta}\right), \quad j = N^*+1, \ldots, N, \tag{4.14} \]

where points 1 to N* are located within the domain, points N* + 1 to N lie on the boundary, and x = (x₁, y₁), χ = (x₂, y₂). The notation ∇_χ simply means take the gradient with respect to the argument χ. The next step is to substitute our expression for s back into the PDE and boundary conditions or, equivalently, apply the functional λ_i, this time with respect to x. After long and tedious calculations we end up with the following expression:

\[
\begin{aligned}
\lambda_i G_j(x) &= \bigl( -\varepsilon \nabla^2 G_j(x) + (1, 2) \cdot \nabla G_j(x) \bigr)\big|_{x = x_i} \\
&= \biggl[ \frac{\varepsilon^2}{\delta^4} \frac{d^4\phi}{dr^4} + \frac{2\varepsilon^2}{\delta^4 r} \frac{d^3\phi}{dr^3} - \frac{5}{\delta^2 r} \frac{d\phi}{dr} \\
&\qquad - \frac{1}{\delta^4} \Bigl( \varepsilon^2 + \delta^2 r^2 + 4(x_2 - x_1)(y_2 - y_1) + 3(y_2 - y_1)^2 \Bigr) \left( \frac{1}{r^2} \frac{d^2\phi}{dr^2} - \frac{1}{r^3} \frac{d\phi}{dr} \right) \biggr]_{x = x_i}
\end{aligned}
\tag{4.15}
\]

for i = 1, …, N*, and λ_i G_j(x) = G_j(x_i) for i = N* + 1, …, N. Expression (4.15) gives the entries (i, j) of the matrix A for which both x_i and x_j lie within our domain. We notice that we have terms involving divisions by powers of r. This cannot be left as it is, since our code would fail due to divisions by zero in the diagonal entries of the matrix. We therefore need to simplify these expressions further, and the only way to do so is to substitute the appropriate expression for φ and its derivatives and hope that the problematic terms cancel. This process involves a significant amount of algebra and turns out to be hard and confusing to do by hand, especially for the Wendland(2D) RBF, whose expression is somewhat longer. For this reason we made use of MAPLE at this point in order to obtain the appropriate expressions for the entries of the collocation matrix A. We found that we could cancel out all divisions by powers of r.
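An alternative to the hand algebra, sketched below on our own initiative, is to let MATLAB's Symbolic Math Toolbox play the role MAPLE played here: build Φ symbolically, apply the operator in each argument in turn, and let simplify carry out the cancellations. The Gaussian case is shown; for the Wendland functions one would define r through a square root and rely on simplify to remove the powers of 1/r, as described above.

    % Hedged sketch, assuming the Symbolic Math Toolbox is available.
    syms x1 y1 x2 y2 delta epsilon real
    r2  = (x1 - x2)^2 + (y1 - y2)^2;       % squared distance; no sqrt needed here
    Phi = exp(-r2/delta^2);                % Gaussian RBF Phi(x, chi)

    % The operator L = -epsilon*Laplacian + (1,2).grad, in each argument.
    Lchi = @(f) -epsilon*(diff(f, x2, 2) + diff(f, y2, 2)) + diff(f, x2) + 2*diff(f, y2);
    Lx   = @(f) -epsilon*(diff(f, x1, 2) + diff(f, y1, 2)) + diff(f, x1) + 2*diff(f, y1);

    entry = simplify(Lx(Lchi(Phi)));       % interior-interior entry, cf. (4.15)
    Aij = matlabFunction(entry, 'Vars', [x1 y1 x2 y2 delta epsilon]);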
4.4.1 Effect of δ

Once more, the observations we make about the effect of δ for the 2D model problem are similar to those made for the 1D model problem. Making use of the Gaussian RBF generally produces more accurate results for a well-chosen scaling parameter. However, the optimal value of δ is usually situated in parts of the graph that look unstable, see Figure 4.9. This phenomenon weakens as we decrease the number of points, especially for ε = 0.5. In contrast, for the Wendland(2D) the behaviour of the error does not become erratic as we vary δ. Generally, increasing the value of the scaling parameter seems to reduce the error while increasing the condition number of the collocation matrix, see Figure 4.8.

Figure 4.8: Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Wendland(2D) RBF: (a) log of the L₂ norm of the error versus δ; (b) log of the condition number of the collocation matrix versus δ. The minimum error is 2.27 × 10⁻¹, for δ = 5, with condition number 3.48 × 10¹².

Figure 4.9: Log of the error and condition number versus the scaling parameter δ for ε = 0.01 and N = 1024, using the Gaussian RBF: (a) log of the L₂ norm of the error versus δ; (b) log of the condition number of the collocation matrix versus δ. The minimum error is 1.45 × 10⁻¹, for δ = 0.25, with condition number 4.63 × 10²².

4.4.2 Varying N

Investigating convergence for the 2D model problem leads us to observations similar to those made for the 1D model problem. Again, we found that a convergent scheme
could only be obtained for ε = 0.01 if the compactly supported Wendland(2D) RBF was used with a fixed δ or with δ = ch^{1−2/σ}, where σ = 3.5. As before, increasing c seems to help us achieve better accuracy. Once more, our findings agree with Theorem 4.2.2. In our numerical experiments for ε = 0.5, however, we found that the Gaussian could produce a convergent scheme for δ = ch, providing us with better accuracy than the Wendland(2D) RBF, see Figure 4.10. Of course, this increased accuracy that the Gaussian RBF provides goes together with an enormous condition number.

Figure 4.10: Log of the error versus N for ε = 0.5: (a) Gaussian with δ = 10h; (b) Wendland(2D) with δ = 10h^{1−2/σ}.

4.4.3 Eigenvalues

Experimenting with various values of N, we see that, as in the 1D case, using the Wendland(2D) produces a collocation matrix with only real and positive eigenvalues, for both ε = 0.01 and ε = 0.5. Furthermore, we found that these eigenvalues satisfied the bounds given by (4.10) for δ ∈ (0, 1]. If we use the Gaussian RBF, in contrast to the 1D case, we found only real eigenvalues in all our numerical experiments. However, the majority of the time some of them were negative, albeit with a very small modulus; the largest-in-modulus negative eigenvalue in Figure 4.11(a) is −1.74 × 10⁻¹³. Since the matrix is again theoretically positive definite, we believe that the negative eigenvalues appear due to rounding errors, as for the 1D model problem. This confirms, however, that bounds similar to (4.10) cannot be obtained when we use translates of the Gaussian RBF as our basis functions, as we had also deduced for the 1D case.
    (a) Gaussian withδ = 0.5. (b) Wendland(2D) with δ = 0.5. Figure 4.11: Eigenvalue distributions for = 0.01 with N = 256. 4.5 Chapter Summary It is clear that generalised interpolation is a stable method, whose eigenvalues can be bounded, when used with the compactly supported Wendland RBFs. However if one chooses to use the Gaussian function, the method produces by far more accurate solutions even though the method becomes unstable, making the choice of δ difficult. Hence, it is hard to choose δ in a way that causes the method to converge as we increase N for the Gaussian, whereas this can be easily done for the Wendland RBFs. Figures 4.12 and 4.13 show numerical solutions obtained through generalised interpolation for for both model problems. (a) Solution obtained using the Gaussian RBF with δ = 0.09. (b) Solution obtained using the Wendland(1D) RBF with δ = 5. Figure 4.12: Numerical solutions for 1D model problem for = 0.01 and N = 64. 43
Figure 4.13: Numerical solutions and pointwise error for the 2D model problem for ε = 0.01 and N = 1024: (a) solution obtained using the Gaussian RBF with δ = 0.25; (b) solution obtained using the Wendland(2D) RBF with δ = 5.
Chapter 5

Method Comparison

In the previous three chapters we presented three different methods that use RBFs to solve PDEs. We talked through their implementation in general and also more specifically for our model problems in one and two dimensions. We implemented each of the three methods using two different types of RBFs: the compactly supported Wendland functions and the Gaussian. In each chapter we looked into the accuracy and stability of the methods for the different RBFs, and we also investigated the effect of the scaling parameter δ on the accuracy of the method.

In this chapter we aim to compare the three methods. As mentioned at the beginning of this project, a desirable algorithm would combine ease of implementation, accuracy, stability and efficiency. We therefore want to conclude whether one of these methods satisfies our requirements better than the other two when used on convection-diffusion equations.

5.1 Ease of Implementation

A method that is easy to implement is immediately a lot more attractive to a potential user. When implementing the methods, we found that the collocation algorithm was the easiest. Its advantage over the Galerkin method was that we did not need to concern ourselves with weak formulations and, more importantly, numerical integrations. It was also simpler than generalised interpolation, as we did not have to consider any functionals and it involved less differentiation. The simplicity of collocation was most evident when we considered the 2D problem. The Galerkin formulation and generalised interpolation methods become increasingly complex for problems in higher dimensions, because of the integrations and the repeated differentiations respectively.
5.2 Accuracy and Stability

Two very important aspects of a method are its accuracy and its stability. In this section we compare the methods on the quality of the solutions they provide and on their stability, which concerns the condition number of the matrix A and how sensitive the method is to the choice of δ. This section is based on the tables of Appendix D, Sections D.1 and D.2.

5.2.1 One Dimension

We first look at how well the methods perform when the solution is stiff, i.e., we take ε = 0.01. Suppose our RBF of choice is the Wendland(1D). Comparing the accuracy of the methods for different numbers of points, we find that collocation provides the best results, followed closely by the Galerkin formulation method. Generalised interpolation proves useless in this setting, as it does not provide satisfactory results compared to the other two methods. If we look at the stability of the methods, the best conditioning is given by generalised interpolation and the worst by the Galerkin formulation. Recall that the Galerkin method was very sensitive to the choice of δ, and convergence by increasing N could not be obtained for any setting of δ, in contrast with the other two methods.

Suppose now that we use translates of the Gaussian RBF as our basis functions. The first observation is that all three methods are sensitive to the choice of δ, and increasing N does not provide a convergent scheme for either of the settings for δ which we have considered, i.e., δ = ch and δ = c. The accuracy, however, is generally better than when we used the Wendland(1D) RBF. The most accurate method is now generalised interpolation, followed by collocation. The Galerkin method, even though it provides acceptable accuracy, still stays in last place. In terms of conditioning, all three methods have large condition numbers, but the worst conditioning is given by the Galerkin method; generalised interpolation and collocation have similar conditioning.

Now let us consider the case ε = 0.5 when the Wendland(1D) is used. Each of the methods produced acceptable results; however, for most choices of N, generalised interpolation was outperformed by the other two methods. The Galerkin method produced better accuracy than collocation for up to N = 32, after which the roles were reversed. In terms of conditioning, the Galerkin formulation was by far the worst of the three and the only one for which convergence could not be obtained by increasing N for any setting of δ, while generalised interpolation was by far the best.
Using the Gaussian RBF makes all the methods unstable, i.e., we have large condition numbers and hence sensitivity to the choice of δ. The situation is similar to that for ε = 0.01, with generalised interpolation giving the highest accuracy and the Galerkin method the lowest. Once more, using the Gaussian RBF with the right δ results in higher accuracy for all three methods when compared to the Wendland(1D) RBF.

5.2.2 Two Dimensions

Let us now compare the performance of the methods for our two-dimensional problem. We start off again with the stiff case, ε = 0.01, using the Wendland(2D) RBF. As for the corresponding case for the 1D problem, the Galerkin method results in the worst accuracy and conditioning. Generalised interpolation, even though slightly better, still does not produce any useful solutions. Collocation gives the best accuracy and conditioning, making it clearly better than the other two methods and the only one that actually results in approximate solutions that resemble the exact ones.

Employing the Gaussian RBF for ε = 0.01 not only fails to greatly improve the accuracy but in some cases actually makes it worse. This fact, combined with the higher instability of the algorithms, does not make the Gaussian an attractive choice. Among the three methods, the best accuracy and conditioning are provided by collocation; the other two methods are both unusable, with the Galerkin one having the lower accuracy of the two.

Next, we look into the smooth case, ε = 0.5. Suppose that we choose basis functions based on the Wendland(2D). It turns out that collocation is once more the method that gives the most accurate approximations to our problem. The worst method in this case is the Galerkin formulation, which results in the lowest accuracy and the highest condition number; it is the only algorithm for which we cannot get convergence for any of the choices of δ we have considered. Collocation and generalised interpolation converge as we increase N if δ = c or δ = ch^{1−2/σ}.

Finally, the last case we discuss is the performance of the methods for ε = 0.5 when we use the Gaussian RBF. We find that generalised interpolation is clearly better in terms of accuracy. While both of the other two methods produce acceptable results, collocation is clearly better than the Galerkin formulation. In terms of conditioning, generalised interpolation suffers from high condition numbers, as does collocation; the Galerkin formulation, on the other hand, gives lower condition numbers. Interestingly enough, it seems that convergence by increasing N can only be obtained for generalised interpolation with δ = ch.
Figure 5.1: Execution times of the methods for the 2D model problem for fixed ε and δ: (a) Gaussian; (b) Wendland(2D).

5.3 Efficiency

It is also of interest to see how fast our methods run in MATLAB. For this reason we have performed numerical experiments in which we run each method ten times, for different numbers of points and fixed ε and δ, and measure each execution time using MATLAB's tic and toc commands. The average execution times for each method, for both the 1D and 2D model problems, are presented, in seconds, in the tables of Appendix D. Figure 5.1 summarises the results for the 2D case, where it is clear that the Galerkin approach is by far the worst of the three methods in terms of efficiency; this is due to the numerical integration taking place in each loop. We remind the reader that we used parallel for loops (parfor in MATLAB) in the implementation of the Galerkin formulation method in order to speed it up, so the execution times reported here are for two loops performed simultaneously. No parallel for loops were used in the implementation of the other two methods. Collocation and generalised interpolation have similar execution times, with the latter being slightly slower.

Similar observations can be made for the 1D versions of the methods, where again the Galerkin formulation method is clearly impractical because of its very slow execution times. As far as the other two methods are concerned, their difference in speed is extremely small, especially for smaller values of N, see Tables D.17 and D.18.
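For reference, the timing loop we have in mind is nothing more elaborate than the sketch below; solver stands for a handle to one full run of a method, for example @() Coll_1D(epsilon, N, choice, delta) from Appendix F, and the averaging over ten runs mirrors the experiments just described.

    % Hedged sketch of the timing experiment using tic and toc.
    runs = 10;
    t = zeros(runs, 1);
    for k = 1:runs
        tic;
        solver();        % one full solve via the chosen method
        t(k) = toc;
    end
    avg_time = mean(t);  % average execution time, as reported in Appendix D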
5.4 Conclusion

It is very clear at this point that the worst method in terms of accuracy, stability, ease of implementation and efficiency is the Galerkin formulation method. A further entry on its list of disadvantages is the difficulty of applying it to problems with Dirichlet boundary conditions.

In terms of accuracy, we have seen that collocation performs better than generalised interpolation in all cases when the Wendland RBFs are used. The opposite is true when we make use of the Gaussian RBF, which provides better accuracy at the cost of instability. The choice therefore seems to depend on what one regards as more important, stability or accuracy. If the highest accuracy is required, then the Gaussian RBF should be chosen, and since all methods are more or less unstable for this choice of RBF, we might as well choose the one that gives the most accurate solution, i.e., generalised interpolation. We note, however, that for the stiff version of the 2D model problem, collocation was the only method that could produce acceptable accuracy for both RBFs. If one is willing to sacrifice a little accuracy in order to have stability, and therefore make the choice of δ easier, the Wendland RBFs should be chosen; the most accurate method with the Wendland RBFs is collocation.

Concluding, our personal choice would be the collocation method used with the Wendland RBFs, especially for stiff problems. It is the most efficient and the easiest to implement of the three, and it also provides convergence as we increase the number of points. Its only drawback is that the collocation matrix A might not always be invertible; however, as mentioned in [7], such cases are rare. The lack of theory for collocation, as opposed to the other two approaches, might also be seen as a disadvantage, even though in the case of the Galerkin method the theory is not really applicable.
Chapter 6

Further Work

6.1 Extension: Choice of the Scaling Parameter

We have observed throughout this project that the choice of the scaling parameter plays an important role in the accuracy of all three methods. We were able to find the optimal values of δ because the exact solutions to our model problems were known; this will not be the case when the algorithms are used in practice. It would therefore be very useful if there were a way to predict which value of δ should be used.

Mongillo in [13] investigates the use of ‘predictor functions’, whose behaviour is similar to that of the interpolation error, in the case of collocation used for plain interpolation problems. The optimal parameter δ is chosen by minimising these predictor functions. Uddin in [18] has tried one of these predictor functions, specifically ‘Leave One Out Cross Validation’ (LOOCV), in the context of collocation applied to time-dependent PDEs. As also mentioned in [13], for LOOCV we use N − 1 points, out of the total of N points, to compute the solution, and we then test the error at the point we have left out. That is, we have

\[ \mathrm{LOOCV}(\delta) = \sum_{i=1}^{N} |s_i(x_i) - f_i|^2, \tag{6.1} \]

where s_i is the approximate solution computed from N − 1 points by leaving out the ith point, and f_i is the exact value at the point x_i. This is repeated N times, leaving out a different point every time. As one might imagine, this procedure increases the computational complexity of the algorithm, as we have to solve N systems of equations. This drawback can be eliminated using Rippa's algorithm, [14] through [13], which is also the version of the LOOCV algorithm used in [18]. What Rippa essentially showed is that it is not necessary to solve N linear systems, as we have

\[ s_i(x_i) - f_i = \frac{C_i}{A^{-1}_{ii}}. \tag{6.2} \]
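In code, (6.2) amounts to one solve and one matrix inverse per trial value of δ; the sketch below is our own illustration (the function name is ours) of the predictor one would minimise over a grid of δ values, assuming the matrix A and right-hand side F have already been assembled for the given δ.

    % Hedged sketch of Rippa's LOOCV predictor (6.2).
    function cost = loocv_cost(A, F)
        c = A \ F;             % coefficients C using all N points
        invA = inv(A);         % only the diagonal of A^{-1} is needed
        e = c ./ diag(invA);   % e(i) = s_i(x_i) - f_i, by (6.2)
        cost = sum(abs(e).^2); % the LOOCV sum (6.1)
    end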
Figure 6.1: Log of the exact and predicted errors versus δ with N = 64 and ε = 0.01, for the collocation method: (a) Gaussian; (b) Wendland(1D).

As stated in [13], using Rippa's algorithm reduces the computational complexity to O(N³), rather than the O(N⁴) needed to solve N dense linear systems of equations. Uddin's numerical experiments have shown that Rippa's algorithm is not very useful when used for time-dependent PDEs: even though in some cases the results were fairly good, in general the experiments showed that it does not always predict good values for δ. Nevertheless, looking into how well this algorithm, and ‘predictor functions’ more generally, perform for the methods we have used in this project is a topic that requires further work and study. Here, we have implemented Rippa's algorithm for all three methods using both RBFs; detailed results for the 1D model problem can be found in Appendix E. Figure 6.1 shows the behaviour of the exact error and of the error predictor function obtained through LOOCV. We can see that the predictor function mimics the error behaviour best when the Gaussian RBF is used. However, our numerical experiments seem to agree with Uddin's findings for time-dependent PDEs: even though in some cases the algorithm gives good results, see Table E.3, in general it is not a very reliable way to predict δ. As already mentioned, further work on finding trustworthy methods for predicting δ is required.

6.2 Extension: Point Distribution

Another interesting extension to this project would be to investigate how the methods perform when non-uniform point distributions are used. We have seen that a uniform distribution is perfectly adequate when the solution to be approximated is smooth, i.e., in our case for ε = 0.5. However, for stiff problems, such as for ε = 0.01, we saw that
a much greater number of points was required in order to obtain acceptable levels of accuracy. This provides the motivation to look into other types of meshes. In this section we briefly look at the Shishkin mesh [17] and how it affects the accuracy of the methods when applied to our 1D model problem.

A Shishkin mesh requires an odd number of points N or, equivalently, an even number of mesh spacings M = N − 1. Then, for an equation of the form −εu″ + bu′ = 0 with constant b > 0, a Shishkin mesh consists of M/2 equal mesh spacings in the interval [0, 1 − σ] and M/2 equal mesh spacings in the interval [1 − σ, 1], where

\[ \sigma = \min\bigl(1/2, \; 2\varepsilon \log(N)/b\bigr). \tag{6.3} \]

For our model problem we have b = 1. We note that when σ = 0.5 the Shishkin mesh reduces to a uniform distribution of the points, see Figure 6.2. This is the case when ε = 0.5; we therefore consider only the case ε = 0.01.

Figure 6.2: Shishkin mesh for ε = 0.01 and ε = 0.5 using N = 27 points: (a) the Shishkin mesh for ε = 0.5 is just a uniform distribution; (b) the Shishkin mesh for ε = 0.01 splits the interval in two.
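Constructing such a mesh takes only a few lines of MATLAB; the sketch below is our own helper (the function name is ours), piecing together the two uniformly spaced halves around the transition point (6.3).

    % Hedged sketch: Shishkin mesh on [0,1] for -epsilon*u'' + b*u' = 0.
    function x = shishkin_mesh(N, epsilon, b)
        M = N - 1;                                % even number of mesh spacings
        sigma = min(0.5, 2*epsilon*log(N)/b);     % transition point (6.3)
        left  = linspace(0, 1 - sigma, M/2 + 1);  % M/2 coarse spacings
        right = linspace(1 - sigma, 1, M/2 + 1);  % M/2 fine spacings
        x = [left, right(2:end)];                 % N points in total
    end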
A question that naturally arises at this point is whether the value of δ should be the same for all our basis functions. In order to answer this question we perform our experiments trying different values of δ in each of the two differently scaled sections of the interval [0, 1]. Carrying out these experiments requires adjusting our code to admit two values of the scaling parameter instead of one, i.e., we now have δ = δ₁ in the first part of the interval and δ = δ₂ in the second. We then repeatedly solved our model problem, each time using a different combination of δ values, for both RBFs and all three methods, and picked out, for each method and RBF, the two δ values, amongst those considered, for which the approximate solution displayed the lowest error. Detailed results from these experiments can be found in Appendix E, Section E.2.

We found that using the Shishkin mesh in almost all cases considerably reduces the error. Exceptions are the collocation method with both RBFs and generalised interpolation with the Gaussian, where for N = 9 and N = 27 the error actually increases; we feel that this might be avoided if we considered more δ values from the same interval. We observe that for the Galerkin formulation method, using the Shishkin mesh leads to reductions in the error as great as 99%, for example for the Gaussian RBF with N = 27, which gives the smallest error produced by any combination of method and RBF for this many points. The most accurate solution overall, however, is produced by collocation using the Wendland(1D) RBF for N = 243. It is also worth noting that in most cases we have δ₂ < δ₁. Moreover, we observe that in general the Shishkin mesh with the two different values of δ produces more ill-conditioned systems than a uniform distribution does.

It is clear that a non-uniform distribution of points can be beneficial for stiff problems; however, it does complicate RBF methods, especially with respect to how δ should be chosen. It would be interesting to look into how different distributions affect the accuracy and the stability of each of the methods, and whether some of the methods work better for particular meshes. Without any doubt, alternative meshes, and specifically the Shishkin mesh, are worthy of further research.

6.3 Extension: Multilevel Algorithms

Finally, a possible extension would be to look into how well the multilevel versions of the three methods perform. In general, a multilevel algorithm consists of solving the same problem on each level, changing only the right-hand side; as mentioned in [20], ‘on each level the residual of the previous level is interpolated’. The multilevel version of generalised interpolation is investigated in [4] as a way to overcome the problem of ill-conditioning of the collocation matrix A; this version of the method is known to converge for δ = ch^{1−2/σ}. A multilevel algorithm based on a Galerkin formulation has also been proposed for the solution of PDEs with Neumann boundary conditions in [20]. In that paper, the multilevel algorithm is investigated as a way of obtaining a convergent scheme while at the same time not eliminating the sparsity of the matrix A. The multilevel versions have been investigated theoretically in [4] and [20].
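To make the residual-correction idea concrete, a generic multilevel loop might look like the sketch below; assemble, residual and add_correction are hypothetical helpers standing in for whichever of the three methods is being used, X{1}, …, X{L} are point sets ordered from coarse to fine, and c, sigma and the mesh norms h(level) are taken as given.

    % Hedged sketch of the multilevel idea: solve on each level, with the
    % residual of the previous levels as the new right-hand side.
    s = @(x) zeros(size(x, 1), 1);                 % approximation, initially zero
    for level = 1:L
        delta = c * h(level)^(1 - 2/sigma);        % nonstationary scaling
        F = residual(s, X{level});                 % data minus lambda_i(s)
        A = assemble(X{level}, delta);             % method-dependent matrix
        C = A \ F;
        s = add_correction(s, C, X{level}, delta); % add this level's term to s
    end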
A potential extension would be to formulate the multilevel version of the collocation method and to implement the other two methods in this setting for our convection-diffusion PDEs. We would then move on to compare the conditioning and accuracy of each of the algorithms. It would also be worth comparing them with the original versions of the methods, in order to determine whether the extra effort and time these algorithms require is well spent.
Appendix A

Collocation

The δ values chosen for each number of points are the best, in terms of the accuracy of the method, within the range of values considered. We in no way claim that these are the absolute best values to be used.

A.1 1D

N     δ     Error        cond(A)               Eigenvalues¹
4     0.8   0.40634      7.1789                0
8     0.6   0.19869      236.232               0
16    5     0.062222     60258581.0221         0
32    5     0.022785     1386351862.49         0
64    5     0.0056014    22687684893.9963      0
128   5     0.00065556   271760927278.709      0
256   5     0.00012573   2646791243817.722     0
512   3.2   1.7216e-05   5102837511132.485     0

Table A.1: Results using the Wendland function for ε = 0.01.

¹ For the Eigenvalues column: 0 indicates complex eigenvalues and 1 only real eigenvalues.
N     δ      Error        cond(A)                    Eigenvalues
4     0.34   0.40923      8.9518                     0
8     0.15   0.20734      36.3858                    0
16    0.47   0.049507     1.863193727017158e+16      0
32    0.08   0.021768     131194.0161                0
64    0.12   0.00066631   4.217959126339261e+17      0
128   0.06   6.3811e-05   1.675720930657156e+18      0
256   0.02   2.392e-05    6.086050711230822e+17      0
512   0.02   2.1085e-05   4.318159163807684e+18      0

Table A.2: Results using the Gaussian function for ε = 0.01.

N     δ     Error        cond(A)               Eigenvalues
4     5     0.012211     684.8798              1
8     5     0.0014008    24962.042             1
16    5     0.00017005   480852.7521           0
32    5     2.0912e-05   7641571.0764          0
64    5     2.5846e-06   118572387.009         0
128   5     3.1776e-07   1854779891.047        0
256   5     3.976e-08    29289234459.2367      0
512   5     5.2946e-09   465347995430.5385     0

Table A.3: Results using the Wendland function for ε = 0.5.

N     δ      Error        cond(A)                    Eigenvalues
4     5      0.0074665    1061857.5419               1
8     3      3.4202e-06   742939487097616.3          1
16    0.67   9.3575e-08   4.524462381605034e+16      0
32    0.33   2.4145e-08   2.759966451309931e+17      0
64    0.3    2.2839e-07   1.089502194484428e+18      0
128   0.15   9.1153e-08   3.875515968623898e+18      0
256   0.07   5.2388e-08   3.878070971754523e+18      0
512   0.19   1.151e-07    7.525978178568281e+19      0

Table A.4: Results using the Gaussian function for ε = 0.5.
A.2 2D

N      δ     Error      cond(A)              Eigenvalues
16     5     0.508194   186938.025475        0
64     0.3   0.301323   50.314541            0
256    0.6   0.094124   325107.765529        0
1024   5     0.015602   800617423818.607     0
4096   5     0.003259   91297640853704.6     0

Table A.5: Results using the Wendland function for ε = 0.01.

N      δ      Error      cond(A)                   Eigenvalues
16     0.6    0.52026    5692.0091                 0
64     0.1    0.30343    46.3017                   0
256    0.1    0.14144    58512.3481                0
1024   0.07   0.029366   1085482055.205480         0
4096   0.05   0.003157   6.13886162265298e+19      0

Table A.6: Results using the Gaussian function for ε = 0.01.

N      δ     Error        cond(A)               Eigenvalues
16     5     0.024465     58605.2944            1
64     5     0.0026371    27764490.1825         0
256    5     0.0002832    6485369347.7429       0
1024   5     2.6301e-05   1033308657219.152     0
4096   5     2.4048e-06   144800011412563.3     0

Table A.7: Results using the Wendland function for ε = 0.5.

N      δ      Error      cond(A)                   Eigenvalues
16     5      0.019235   82551618520355.1          1
64     1.62   9.1e-05    4.95066936940742e+18      0
256    0.49   1e-06      1.83830041750604e+19      0
1024   0.29   1e-06      3.00422330230600e+20      0
4096   0.25   6e-06      2.84007149271890e+22      0

Table A.8: Results using the Gaussian function for ε = 0.5.
Figure A.1: Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 64, using the Wendland(2D) RBF: (a) log of the L₂ norm of the error versus δ; (b) log of the condition number of the collocation matrix versus δ. The minimum error is 0.002637, for δ = 5, with condition number 27764490.18245.

Figure A.2: Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 64, using the Gaussian RBF: (a) log of the L₂ norm of the error versus δ; (b) log of the condition number of the collocation matrix versus δ. The minimum error is 0.000091, for δ = 1.62, with condition number 4.95066936940742e+18.
Appendix B

Galerkin Formulation

The δ values in the following tables give the best accuracy within the range we have considered. As the Galerkin formulation method has painfully slow execution times, we had to restrict the range of δ values considered, as well as the number of points.

B.1 1D

N     δ      Error      cond(A)                    Eigenvalues
4     0.9    0.433397   58.1637                    0
8     0.25   0.288309   170.291246                 0
16    2      0.073294   339691646549598.9          0
32    1.46   0.022290   7.89359235137908e+16       0
64    0.19   0.005615   36392621426.390694         0
128   0.09   0.001345   23212406003.945152         0
256   0.06   0.000874   70484873188531.266         0

Table B.1: Results using the Wendland function for ε = 0.01.

N     δ      Error       cond(A)                    Eigenvalues
4     0.35   0.43574     62.6952                    0
8     1.6    0.18091     4.328950475744867e+16      1
16    0.26   0.038523    5.265356321020146e+17      0
32    0.1    0.0066638   1.361314063016573e+18      0
64    0.05   0.0010972   2.619578357454881e+17      0
128   0.03   0.00020555  2.19561102223129e+18       0
256   0.01   8.5308e-05  1014938382486065           0

Table B.2: Results using the Gaussian function for ε = 0.01.
N     δ      Error        cond(A)                    Eigenvalues
4     5      0.0050045    242574753.3912             1
8     5      0.00029727   35655713527269.57          1
16    4.1    3.0000e-05   4.42760613410415e+16       1
32    2.03   6.5529e-06   2.105717664442266e+17      1
64    1.71   8.0281e-06   1.597721924465754e+18      0
128   1.67   9.1199e-06   2.170646574057813e+19      0
256   1.61   6.8245e-06   5.923205956564853e+19      0

Table B.3: Results using the Wendland function for ε = 0.5.

N     δ      Error        cond(A)                    Eigenvalues
4     4.8    0.0020629    276392112925696.7          1
8     0.7    2.901e-05    2111846305320057           1
16    0.36   5.6092e-06   3.538336270104092e+17      0
32    0.26   2.0748e-06   4.658050252898005e+19      0
64    0.13   2.2661e-06   4.473572009763266e+19      0
128   0.05   2.5265e-06   1.301690708433364e+19      0
256   0.1    2.5583e-06   4.695635430601746e+20      0

Table B.4: Results using the Gaussian function for ε = 0.5.

B.2 2D

N      δ     Error     cond(A)                    Eigenvalues
16     1     0.46142   1188.8894                  0
64     1     0.4336    686336430.5341             0
256    0.1   0.40509   4.924631211639372e+19      0
1024   0.1   0.29478   5.322726475826274e+19      0

Table B.5: Results using the Wendland function for ε = 0.01.

N      δ     Error     cond(A)                    Eigenvalues
16     1     0.48025   5136276.6079               0
64     0.1   0.92714   9700973739763108           0
256    1     1.1924    4346555597682143           0
1024   0.1   1.4182    9.811440348332992e+19      0

Table B.6: Results using the Gaussian function for ε = 0.01.
N      δ     Error       cond(A)                    Eigenvalues
16     0.1   0.014283    249105911903.8025          1
64     0.1   0.0008122   7.207746060094888e+17      0
256    0.1   0.0021552   7.312308496581809e+18      0
1024   1     0.002126    9760447139440.068          0

Table B.7: Results using the Wendland function for ε = 0.5.

N      δ   Error       cond(A)                   Eigenvalues
16     1   0.019492    2079281.5879              1
64     1   0.0068897   623220457232.2056         0
256    1   0.00652     402103070535078.6         0
1024   1   0.0097393   3.742233733577658e+16     0

Table B.8: Results using the Gaussian function for ε = 0.5.

Figure B.1: Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 64, using the Wendland(2D) RBF: (a) log of the L₂ norm of the error versus δ; (b) log of the condition number of the collocation matrix versus δ. The minimum error is 0.003078, for δ = 0.1, with condition number 720774606009488770.
Figure B.2: Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 64, using the Gaussian RBF: (a) log of the L₂ norm of the error versus δ; (b) log of the condition number of the collocation matrix versus δ. The minimum error is 0.006890, for δ = 1, with condition number 623220457232.205570.
Appendix C

Generalised Interpolation

C.1 1D

N     δ     Error     cond(A)               Eigenvalues
4     1.8   0.45536   112.9914              1
8     5     0.36434   158126.1561           1
16    5     0.25498   8863191.6872          1
32    5     0.22782   233303888.1452        1
64    5     0.25101   2848330301.4329       1
128   5     0.21819   21239886014.0627      1
256   5     0.13402   106507240858.5696     1
512   5     0.05704   365354941172.672      1

Table C.1: Results using the Wendland function for ε = 0.01.

N     δ      Error        cond(A)                    Eigenvalues
4     0.55   0.45047      68.8652                    1
8     0.28   0.34876      1193.5552                  1
16    0.56   0.03694      1.702112482244674e+16      1
32    0.16   0.010234     1.326559500827952e+17      1
64    0.09   0.000840     1.000396558251471e+18      0
128   0.04   3.2671e-05   4.153978485650736e+17      0
256   0.02   2.139e-05    7.905967417209493e+17      0
512   0.02   2.1576e-07   1.533557481596383e+18      0

Table C.2: Results using the Gaussian function for ε = 0.01.
N     δ   Error        cond(A)       Eigenvalues
4     5   0.023871     51.9623       1
8     5   0.0051338    204.8982      1
16    5   0.0011399    597.2818      1
32    5   0.00026767   2383.4702     1
64    5   6.4983e-05   9897.304      1
128   5   1.6025e-05   40304.9445    1
256   5   3.9805e-06   162640.6925   1
512   5   9.9199e-07   653394.4112   1

Table C.3: Results using the Wendland function for ε = 0.5.

N     δ      Error        cond(A)                    Eigenvalues
4     5      0.0074469    60144.4006                 1
8     4.6    1.6556e-06   1595685885209409           1
16    0.86   1.3606e-08   5.145780341841712e+17      1
32    0.36   3.5396e-09   1.924714104632663e+17      0
64    0.36   6.3564e-09   2.551981270282159e+18      0
128   0.1    1.121e-09    5.736698929414776e+18      0
256   0.08   4.9564e-09   2.821489837620749e+18      0
512   0.23   1.5405e-08   4.762049908883697e+19      0

Table C.4: Results using the Gaussian function for ε = 0.5.

C.2 2D

N      δ     Error     cond(A)               Eigenvalues
16     5     0.48945   66049.23              1
64     1.1   0.43249   194217.6804           1
256    0.9   0.3511    21864049.0801         1
1024   5     0.22714   3483390037138.676     1
4096   5     0.18793   504276784046839.3     1

Table C.5: Results using the Wendland function for ε = 0.01.

N      δ      Error      cond(A)                    Eigenvalues
16     0.44   0.51916    890.5127                   1
64     0.22   0.46686    47120.3821                 1
256    0.61   0.24758    1.465149099790438e+21      1
1024   0.25   0.14459    4.631545773847626e+22      1
4096   0.04   0.044236   445868376077.6423          1

Table C.6: Results using the Gaussian function for ε = 0.01.
N      δ   Error        cond(A)              Eigenvalues
16     5   0.032098     51058.8644           1
64     5   0.0061729    70823500.5084        1
256    5   0.001049     22627183199.4702     1
1024   5   0.00015061   4098731439680.468    1
4096   5   1.992e-05    609077055765232      1

Table C.7: Results using the Wendland function for ε = 0.5.

N      δ      Error        cond(A)                    Eigenvalues
16     2.15   0.008364     361092141.253364           1
64     1.65   0.000042     1.49584049996374e+18       1
256    0.69   3.3031e-07   1.342322394773982e+20      1
1024   0.31   3.3604e-08   4.603221314210624e+21      1
4096   0.13   3.3061e-08   1.01980084765424e+23       1

Table C.8: Results using the Gaussian function for ε = 0.5.

Figure C.1: Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 64, using the Wendland(2D) RBF: (a) log of the L₂ norm of the error versus δ; (b) log of the condition number of the collocation matrix versus δ. The minimum error is 0.006173, for δ = 5, with condition number 70823500.508384.
Figure C.2: Log of the error and condition number versus the scaling parameter δ for ε = 0.5 and N = 64, using the Gaussian RBF: (a) log of the L₂ norm of the error versus δ; (b) log of the condition number of the collocation matrix versus δ. The minimum error is 0.000042, for δ = 1.65, with condition number 798519455837603580.
Appendix D

Comparison

D.1 Accuracy Comparison

The numbers (1), (2), (3) order the methods in terms of accuracy: (1) indicates the lowest error and (3) the highest. We remind the reader that we had to restrict the number of points considered for the Galerkin method because of its very slow execution times; this is the reason for the missing entries in the tables.

D.1.1 1D

N     Collocation       Galerkin Formulation   Generalised Interpolation
4     (1) 0.40634       (2) 0.433397           (3) 0.45536
8     (1) 0.19869       (2) 0.288309           (3) 0.36434
16    (1) 0.06222       (2) 0.073294           (3) 0.25498
32    (2) 0.022785      (1) 0.022290           (3) 0.22782
64    (1) 0.0056014     (2) 0.005615           (3) 0.25101
128   (1) 0.00065556    (2) 0.001345           (3) 0.21810
256   (1) 0.00012573    (2) 0.000874           (3) 0.13402
512   (1) 1.7216e-05    -                      (2) 0.05704

Table D.1: Results using the Wendland function for ε = 0.01.
N     Collocation       Galerkin Formulation   Generalised Interpolation
4     (1) 0.40923       (2) 0.43574            (3) 0.45047
8     (2) 0.20734       (1) 0.18091            (3) 0.34876
16    (3) 0.049507      (2) 0.038523           (1) 0.03694
32    (3) 0.021768      (1) 0.0066638          (2) 0.010234
64    (1) 0.00066631    (3) 0.0010972          (2) 0.00071748
128   (2) 6.3811e-05    (3) 0.00020555         (1) 3.2671e-05
256   (2) 2.392e-05     (3) 8.5308e-05         (1) 2.139e-05
512   (2) 2.1085e-05    -                      (1) 2.1576e-07

Table D.2: Results using the Gaussian function for ε = 0.01.

N     Collocation       Galerkin Formulation   Generalised Interpolation
4     (2) 0.012211      (1) 0.005              (3) 0.023871
8     (2) 0.0014008     (1) 0.00029727         (3) 0.0051338
16    (2) 0.00017005    (1) 3.0000e-05         (3) 0.0011399
32    (2) 2.0912e-05    (1) 6.5529e-06         (3) 0.00026767
64    (1) 2.5846e-06    (2) 8.0281e-06         (3) 6.4983e-05
128   (1) 3.1776e-07    (2) 9.1199e-06         (3) 1.6025e-05
256   (1) 3.976e-08     (3) 6.8245e-06         (2) 3.9805e-06
512   (1) 5.2946e-09    -                      (2) 9.9199e-07

Table D.3: Results using the Wendland(1D) function for ε = 0.5.

N     Collocation       Galerkin Formulation   Generalised Interpolation
4     (3) 0.0074665     (1) 0.0020629          (2) 0.0074469
8     (2) 3.4202e-06    (3) 2.901e-05          (1) 1.655e-06
16    (2) 9.3575e-08    (3) 5.6092e-06         (1) 1.3606e-08
32    (2) 2.4145e-08    (3) 2.0748e-06         (1) 3.5396e-09
64    (2) 2.2839e-07    (3) 2.2661e-06         (1) 6.3564e-09
128   (2) 9.1153e-08    (3) 2.5265e-06         (1) 1.121e-08
256   (2) 5.2388e-08    (3) 2.5583e-06         (1) 4.9564e-09
512   (2) 1.151e-07     -                      (1) 1.5405e-08

Table D.4: Results using the Gaussian function for ε = 0.5.
D.1.2 2D

N      Collocation     Galerkin Formulation   Generalised Interpolation
16     (3) 0.508194    (1) 0.46142            (2) 0.48945
64     (1) 0.301323    (3) 0.4336             (2) 0.43249
256    (1) 0.094124    (3) 0.40509            (2) 0.3511
1024   (1) 0.015602    (3) 0.29478            (2) 0.22714
4096   (1) 0.003259    -                      (2) 0.18793

Table D.5: Results using the Wendland(2D) function for ε = 0.01.

N      Collocation     Galerkin Formulation   Generalised Interpolation
16     (3) 0.52026     (1) 0.48025            (2) 0.51916
64     (1) 0.30343     (3) 0.92714            (2) 0.46689
256    (1) 0.14144     (3) 1.1924             (2) 0.24758
1024   (1) 0.029366    (3) 1.4182             (2) 0.14459
4096   (1) 0.003157    -                      (2) 0.044236

Table D.6: Results using the Gaussian function for ε = 0.01.

N      Collocation       Galerkin Formulation   Generalised Interpolation
16     (2) 0.024465      (1) 0.014283           (3) 0.032098
64     (2) 0.0026371     (1) 0.0008122          (3) 0.0061729
256    (1) 0.0002832     (3) 0.0021552          (2) 0.001049
1024   (1) 2.6301e-05    (3) 0.002120           (2) 0.00015061
4096   (1) 2.4048e-06    -                      (2) 1.992e-05

Table D.7: Results using the Wendland(2D) function for ε = 0.5.

N      Collocation       Galerkin Formulation   Generalised Interpolation
16     (2) 0.019235      (3) 0.019492           (1) 0.008364
64     (2) 9.1000e-05    (3) 0.0068897          (1) 4.2e-05
256    (2) 1.0000e-06    (3) 0.00652            (1) 3.3031e-07
1024   (2) 1.0000e-06    (3) 0.0097393          (1) 3.3604e-08
4096   (2) 6.0000e-06    -                      (1) 3.3061e-08

Table D.8: Results using the Gaussian function for ε = 0.5.
D.2 Conditioning Comparison

D.2.1 1D

N     Collocation              Galerkin Formulation        Generalised Interpolation
4     7.1789                   58.1637                     112.9914
8     236.232                  170.291246                  158126.1561
16    60258581.0221            339691646549598.9           8863191.6872
32    1386351862.49            7.89359235137908e+16        233303888.1452
64    22687684893.9963         36392621426.390694          2848330301.4329
128   271760927278.709         23212406003.945152          21239886014.0627
256   2646791243817.722        70484873188531.266          106507240858.5696
512   5102837511132.485        -                           365354941172.672

Table D.9: Condition numbers using the Wendland(1D) function for ε = 0.01.

N     Collocation                 Galerkin Formulation        Generalised Interpolation
4     8.9518                      62.6952                     68.8652
8     36.3858                     4.328950475744867e+16       1193.5552
16    1.863193727017158e+16       5.265356321020146e+17       1.702112482244674e+16
32    131194.0161                 1.361314063016573e+18       1.326559500827952e+17
64    4.217959126339261e+17       2.619578357454881e+17       1.000396558251471e+18
128   1.675720930657156e+18       2.19561102223129e+18        4.153978485650736e+17
256   6.086050711230822e+17       1014938382486065            7.905967417209493e+17
512   4.318159163807684e+18       -                           1.533557481596383e+18

Table D.10: Condition numbers using the Gaussian function for ε = 0.01.

N     Collocation            Galerkin Formulation        Generalised Interpolation
4     684.8798               242574753.3912              51.9623
8     24962.042              35655713527269.57           204.8982
16    480852.7521            4.42760613410415e+16        597.2818
32    7641571.0764           2.105717664442266e+17       2383.4702
64    118572387.009          1.597721924465754e+18       9897.304
128   1854779891.047         2.170646574057813e+19       40304.9445
256   29289234459.2367       5.923205956564853e+19       162640.6925
512   465347995430.5385      -                           653394.4112

Table D.11: Condition numbers using the Wendland(1D) function for ε = 0.5.
N     Collocation                 Galerkin Formulation        Generalised Interpolation
4     1061857.5419                276392112925696.7           60144.4006
8     742939487097616.3           2111846305320057            1595685885209409
16    4.524462381605034e+16       3.538336270104092e+17       5.145780341841712e+17
32    2.759966451309931e+17       4.658050252898005e+19       1.924714104632663e+17
64    1.089502194484428e+18       4.473572009763266e+19       2.551981270282159e+18
128   3.875515968623898e+18       1.301690708433364e+19       5.736698929414776e+18
256   3.878070971754523e+18       4.695635430601746e+20       2.821489837620749e+18
512   7.525978178568281e+19       -                           4.762049908883697e+19

Table D.12: Condition numbers using the Gaussian function for ε = 0.5.

D.2.2 2D

N      Collocation            Galerkin Formulation        Generalised Interpolation
16     186938.025475          1188.8894                   66049.23
64     50.314541              686336430.5341              194217.6804
256    325107.765529          4.924631211639372e+19       21864049.0801
1024   800617423818.607       5.322726475826274e+19       3483390037138.676
4096   91297640853704.6       -                           504276784046839.3

Table D.13: Condition numbers using the Wendland(2D) function for ε = 0.01.

N      Collocation                Galerkin Formulation        Generalised Interpolation
16     5692.0091                  5136276.6079                890.5127
64     46.3017                    9700973739763108            47120.3821
256    58512.3481                 4346555597682143            1.465149099790438e+21
1024   1085482055.205480          9.811440348332992e+19       4.631545773847626e+22
4096   6.13886162265298e+19       -                           445868376077.6423

Table D.14: Condition numbers using the Gaussian function for ε = 0.01.

N      Collocation            Galerkin Formulation        Generalised Interpolation
16     58605.2944             249105911903.8025           51058.8644
64     27764490.1825          7.207746060094888e+17       70823500.5084
256    6485369347.7429        7.312308496581809e+18       22627183199.4702
1024   1033308657219.152      9760447139440.068           4098731439680.468
4096   144800011412563.3      -                           609077055765232

Table D.15: Condition numbers using the Wendland(2D) function for ε = 0.5.
N      Collocation                Galerkin Formulation        Generalised Interpolation
16     82551618520355.1           2079281.5879                361092141.253364
64     4.95066936940742e+18       623220457232.2056           1.49584049996374e+18
256    1.83830041750604e+19       402103070535078.6           1.342322394773982e+20
1024   3.00422330230600e+20       3.742233733577658e+16       4.603221314210624e+21
4096   2.84007149271890e+22       -                           1.01980084765424e+23

Table D.16: Condition numbers using the Gaussian function for ε = 0.5.

D.3 Efficiency Comparison

D.3.1 1D

N     Collocation   Galerkin Formulation   Generalised Interpolation
4     0.3289        0.4004                 0.29874
8     0.36008       0.53399                0.30214
16    0.37228       0.79611                0.32572
32    0.41707       1.8339                 0.39505
64    0.4835        5.8412                 0.53788
128   0.73317       18.802                 0.87016
256   1.4056        71.4824                2.2811

Table D.17: Execution times (in seconds) of the methods when the Wendland(1D) function is used.

N     Collocation   Galerkin Formulation   Generalised Interpolation
4     0.31229       0.41018                0.26693
8     0.31422       0.46013                0.28149
16    0.33549       0.63725                0.33069
32    0.42061       1.251                  0.44263
64    0.54576       3.5872                 0.76113
128   0.98028       11.9279                1.6383
256   2.4739        43.1757                5.4173

Table D.18: Execution times (in seconds) of the methods when the Gaussian function is used.
D.3.2 2D

N      Collocation   Galerkin Formulation   Generalised Interpolation
16     0.41514       4.4828                 0.56719
64     0.79762       87.0723                1.6729
256    4.1495        1082.24                7.5874
1024   38.1636       20364.0093             45.0266

Table D.19: Execution times (in seconds) of the methods when the Wendland(2D) function is used.

N      Collocation   Galerkin Formulation   Generalised Interpolation
16     0.43681       1.5006                 0.67272
64     1.2195        16.2408                3.0231
256    8.1501        241.4667               15.8797
1024   77.5698       3811.1843              94.0172

Table D.20: Execution times (in seconds) of the methods when the Gaussian function is used.
Appendix E

Further Work

E.1 Choice of Scaling Parameter

(ε = 0.01)           δ     Error       Predicted δ   Error
Collocation          5     0.0026214   0.1           0.017358
Galerkin             0.2   0.0086031   2.2           2.7256
Gen. Interpolation   5     0.24861     0.1           0.86234

Table E.1: Results for the actual δ for which the error is minimised and for the predicted δ with the corresponding error. The RBF used is the Wendland(1D) with N = 64 for ε = 0.01.

(ε = 0.01)           δ      Error        Predicted δ   Error
Collocation          0.1    0.00046022   0.07          0.00096631
Galerkin             0.05   0.00098121   0.14          0.018369
Gen. Interpolation   0.09   0.0003916    0.1           0.0012091

Table E.2: Results for the actual δ for which the error is minimised and for the predicted δ with the corresponding error. The RBF used is the Gaussian with N = 64 for ε = 0.01.

(ε = 0.5)            δ     Error        Predicted δ   Error
Collocation          5     1.9913e-05   5             1.9913e-05
Galerkin             1.9   8.2733e-06   2.6           1.236e-05
Gen. Interpolation   5     0.00026325   5             0.00026325

Table E.3: Results for the actual δ for which the error is minimised and for the predicted δ with the corresponding error. The RBF used is the Wendland(1D) with N = 32 for ε = 0.5.
(ε = 0.5)            δ      Error        Predicted δ   Error
Collocation          0.31   3.9171e-08   0.46          1.0208e-06
Galerkin             0.17   2.3076e-06   0.11          1.8138e-05
Gen. Interpolation   0.43   4.3992e-09   0.49          2.7605e-08

Table E.4: Results for the actual δ for which the error is minimised and for the predicted δ with the corresponding error. The RBF used is the Gaussian with N = 32 for ε = 0.5.

E.2 Point Distribution

This section contains results from all three methods, for both RBFs, when used with a Shishkin mesh to solve the 1D problem with ε = 0.01. Note that we have two values of δ here: δ₁ corresponds to the value of δ used in the interval [0, 1 − σ] and δ₂ to the value used in the interval (1 − σ, 1]. The U.Error column gives the error of the approximate solution when a uniform distribution of points is used, for the value of δ which minimises it; the U.cond(A) column shows the condition number of the corresponding matrix.

E.2.1 Collocation

N     δ1    δ2    Error        cond(A)    U.Error      U.cond(A)
9     0.4   0.1   0.39562      8.38e+02   0.30291      1.91e+03
27    3.8   3.8   0.0017414    7.97e+10   0.029909     6.64e+08
81    3.7   3.7   3.0637e-05   1.15e+12   0.00295      5.44e+10
243   0.9   1.4   1.9548e-06   4.35e+14   0.00014385   1.78e+12

Table E.5: Results for collocation using the Wendland(1D) RBF for ε = 0.01.

N     δ1     δ2     Error        cond(A)    U.Error      U.cond(A)
9     0.14   0.02   0.40512      2.39e+02   0.25087      30.0
27    0.62   0.02   0.039876     3.61e+17   0.029754     6.49e+17
81    0.26   0.04   4.8402e-05   2.96e+18   0.00027538   4.58e+17
243   0.12   0.02   4.9847e-06   2.04e+19   5.9445e-06   1.86e+18

Table E.6: Results for collocation using the Gaussian RBF for ε = 0.01.
E.2.2 Galerkin Formulation

N     δ1    δ2    Error       cond(A)    U.Error     U.cond(A)
9     2     1.4   0.012246    1.12e+17   0.28796     3.25e+02
27    3.3   0.1   0.0050187   1.62e+17   0.033607    3.28e+16
81    0.2   0.1   0.0023519   5.60e+12   0.0033483   1.76e+12
243   0.1   0.1   0.0010913   5.61e+14   0.001579    7.20e+13

Table E.7: Results for Galerkin Formulation using the Wendland(1D) RBF for ε = 0.01.

N     δ1     δ2     Error        cond(A)    U.Error     U.cond(A)
9     0.26   0.42   0.055148     2.77e+16   0.11467     2.04e+16
27    0.28   0.02   9.0069e-05   1.60e+17   0.012188    3.53e+18
81    0.08   0.02   4.6448e-05   5.89e+19   0.0011801   4.04e+19
243   0.04   0.02   6.6215e-05   2.49e+20   0.0002221   5.92e+18

Table E.8: Results for Galerkin Formulation using the Gaussian RBF for ε = 0.01.

E.2.3 Generalised Interpolation

N     δ1    δ2    Error        cond(A)    U.Error   U.cond(A)
9     5     0.1   0.27934      9.16e+06   0.34495   3.17e+05
27    2.7   1.2   0.012462     3.12e+06   0.22412   1.13e+08
81    3.9   1.3   0.00070319   1.57e+08   0.24776   5.90e+09
243   3.7   1.3   7.4493e-05   9.60e+08   0.14088   9.57e+10

Table E.9: Results for Generalised Interpolation using the Wendland(1D) RBF for ε = 0.01.

N     δ1     δ2     Error        cond(A)    U.Error      U.cond(A)
9     0.3    0.04   0.2901       1.08e+04   0.087719     5.88e+06
27    0.84   0.02   0.039681     8.39e+18   0.016475     1.93e+12
81    0.32   0.04   5.477e-05    2.00e+20   0.00039222   1.13e+17
243   0.08   0.04   3.1187e-06   2.31e+19   9.456e-06    2.52e+17

Table E.10: Results for Generalised Interpolation using the Gaussian RBF for ε = 0.01.
Appendix F

MATLAB code

F.1 Collocation

F.1.1 Coll_1D.m

function [X, A, Us] = Coll_1D(epsilon, N, choice, delta)
% Calculates the solution to the ODE
%   -epsilon*u'' + u' = 0,  epsilon > 0,  for 0 < x < 1,
% with boundary conditions u(0) = 1, u(1) = 0.
% Method: Collocation
% Radial basis function of the form phi_j = Phi(r)
% where r = |x - x_j|/delta.
% Artemis Nika 2014

%% Discretization in x-direction:

% Equispaced points
x = linspace(0, 1, N);

%% Radial basis function and derivatives (with respect to r)

if choice == 1
    % this radial basis function has compact support for 0 <= r < 1.
    Phi = @(r) heaviside(1-r) * (1-r)^5 * (8*r^2 + 5*r + 1);
    % heaviside(0) = 0.5 but here it is not a problem as for r = 1
    % the rest of Phi is 0.
    dPhi = @(r) heaviside(1-r) * (1-r)^4 * (-56*r^2 - 14*r);         % 1st deriv.
    ddPhi = @(r) heaviside(1-r) * (1-r)^3 * (336*r^2 - 42*r - 14);   % 2nd deriv.
elseif choice == 2
    % no compact support
    Phi = @(r) exp(-r^2);
    dPhi = @(r) -2*r*exp(-r^2);
    ddPhi = @(r) (-2 + 4*r^2) * exp(-r^2);
else
    error('Choice of radial basis not correct. Choices: 1 = compact support, 2 = no compact support')
end

%% Create Matrix Equation
A = zeros(N, N);
% for 0 < x < 1
for i = 2:N-1
    for j = 1:N
        r = abs(x(i) - x(j)) / delta;   % x_j -> phi_j(x_i)
        % first derivative of phi_j alternates sign because of the abs value
        if x(i) > x(j)
            dphi = (1/delta) * dPhi(r);
        else
            dphi = -(1/delta) * dPhi(r);
        end
        ddphi = (1/delta^2) * ddPhi(r);
        % Calculate entries of matrix
        A(i,j) = -epsilon*ddphi + dphi;
    end
end

% First and last rows of matrix A - boundary conditions
for j = 1:N
    % for x_i = 0 = x_1;
    A(1,j) = Phi(abs(0 - x(j)) / delta);   % x_1 = 0 -> phi_j(0)
    % for x_i = 1 = x_N;
    A(N,j) = Phi(abs(1 - x(j)) / delta);
end

% RHS of Matrix Equation
b = zeros(N, 1);
b(1) = 1;   % because of u(0) = 1

%% Solve Matrix Equation to obtain coefficient vector c
c = A\b;

%% Add up coefficients to obtain solution vector Us
Nsol = 100;
Us = zeros(Nsol, 1);
X = linspace(0, 1, Nsol);

for i = 1:Nsol
    dummy = 0;
    for j = 1:N
        dummy = dummy + c(j)*Phi(abs(X(i) - x(j)) / delta);
    end
    Us(i) = dummy;
end

%% Exact Solution

uExact = (1 - exp((X-1)/epsilon)) / (1 - exp(-1/epsilon));

%% Error
Error = zeros(Nsol, 1);
for i = 1:Nsol
    Error(i) = abs(Us(i) - uExact(i));
end

%% Plots
h = 1/(N-1);   % mesh spacing, used in the plot title
figure(1)

subplot(2,1,1)
plot(X, Us, 'r', 'LineWidth', 2);
axis([0, 1, 0, 1.5]);
str = sprintf('epsilon= %f, delta= %f, h= %f', epsilon, delta, h);
title(str);
xlabel('x'); ylabel('Numerical Solution');

subplot(2,1,2)
plot(X, Error, 'k-*')
xlabel('x'); ylabel('Error');

figure(2)

hold on
plot(X, Us, 'r-*', 'LineWidth', 2);
plot(X, uExact, 'LineWidth', 2);
axis([0, 1, 0, 1.5]);
xlabel('x'); ylabel('u');
legend('Numerical Solution', 'Exact Solution')
title(str);
hold off

end
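A call of the following form runs this solver; the parameter values are illustrative only and do not correspond to any particular table. Note that heaviside, used by the Wendland basis, ships with the Symbolic Math Toolbox.

% Illustrative call: epsilon = 0.5, N = 32 equispaced points,
% Wendland(1D) RBF (choice = 1), scaling parameter delta = 5.
[X, A, Us] = Coll_1D(0.5, 32, 1, 5);
fprintf('cond(A) = %g\n', cond(A));   % condition numbers as tabulated in Appendix D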
F.1.2 Coll_2D.m

function [xsol, ysol, A, U] = Coll_2D(epsilon, N1, N2, delta, choice)
% Calculates the solution to the PDE
%   -epsilon*(u_xx + u_yy) + (1,2)*grad(u) = 0,  (x,y) in (0,1)^2,
% with boundary conditions
%   u(1,y) = u(x,1) = 0
%   u(0,y) = (1-exp(-2*(1-y)/epsilon))/(1-exp(-2/epsilon))
%   u(x,0) = (1-exp(-(1-x)/epsilon))/(1-exp(-1/epsilon)).
% Method: Collocation
% Radial basis function of the form phi_j = Phi(r)
% where r = |x - x_j|/delta.
% Artemis Nika 2014

%% Discretization in x and y direction
x = linspace(0, 1, N1);
y = linspace(0, 1, N2);
N = N1*N2;   % total number of nodes

%% Radial basis function and derivatives (with respect to r)

if choice == 1
    % radial basis with compact support
    Phi = @(r) heaviside(1-r) * (1-r)^6 * (35*r^2 + 18*r + 3);
    dPhi = @(r) heaviside(1-r) * (1-r)^5 * (-280*r - 56);   % first derivative divided by r
    ddPhi = @(r) heaviside(1-r) * (1-r)^4 * (1960*r^2 - 224*r - 56);
elseif choice == 2
    % radial basis with no compact support
    Phi = @(r) exp(-r^2);
    dPhi = @(r) -2*exp(-r^2);   % first derivative divided by r
    ddPhi = @(r) (-2 + 4*r^2) * exp(-r^2);
else
    error('Choice of radial basis not correct. Choices: 1 = compact support, 2 = no compact support')
end

%% Create matrix equation and obtain coefficient vector

% Create an Nx2 matrix containing all the nodes
X = zeros(N, 2);
k = 1;
for i = 1:N2
    for j = 1:N1
        X(k,1) = x(j);
        X(k,2) = y(i);
        k = k + 1;
    end
end

% Create matrix A
A = zeros(N, N);
for i = 1:N
    for j = 1:N
        r = norm(X(i,:) - X(j,:)) / delta;   % --> phi_j
        if (X(i,1)==1 || X(i,2)==1 || X(i,1)==0 || X(i,2)==0)   % on the boundary
            A(i,j) = Phi(r);
        else   % for internal nodes
            dphi_x = ((X(i,1) - X(j,1))/(delta^2)) * dPhi(r);   % we multiplied by r and divided dPhi by r
            dphi_y = ((X(i,2) - X(j,2))/(delta^2)) * dPhi(r);
            grad = [dphi_x, dphi_y]';
            % dphi_xx = (1/(delta^2) - (X(i,1)-X(j,1))^2/(delta^4 * r^2))*dPhi(r) + ((X(i,1)-X(j,1))^2/(delta^4 * r^2))*ddPhi(r);
            % dphi_yy = (1/(delta^2) - (X(i,2)-X(j,2))^2/(delta^4 * r^2))*dPhi(r) + ((X(i,2)-X(j,2))^2/(delta^4 * r^2))*ddPhi(r);
            laplacian = (1/delta^2)*dPhi(r) + (1/delta^2)*ddPhi(r);
            % laplacian = dphi_xx + dphi_yy -> this way we can avoid dividing by r
            % which is zero for diagonal entries
            A(i,j) = -epsilon*laplacian + [1, 2]*grad;
        end
    end
end

% create RHS vector b
b = zeros(N, 1);
for i = 1:N
    if (X(i,1) == 0)       % for x = 0
        b(i) = (1 - exp(-2*(1-X(i,2))/epsilon)) / (1 - exp(-2/epsilon));
    elseif (X(i,2) == 0)   % for y = 0
        b(i) = (1 - exp(-(1-X(i,1))/epsilon)) / (1 - exp(-1/epsilon));
    else                   % all other nodes
        b(i) = 0;
    end
end

% Solve to get coefficient vector
c = A\b;

%% Add up coefficients to obtain solution vector Us
N1sol = 30;
N2sol = 30;

xsol = linspace(0, 1, N1sol);
ysol = linspace(0, 1, N2sol);
Nsol = N1sol*N2sol;

Xsol = zeros(Nsol, 2);   % Nsol evaluation points
k = 1;
for i = 1:N2sol
    for j = 1:N1sol
        Xsol(k,1) = xsol(j);
        Xsol(k,2) = ysol(i);
        k = k + 1;
    end
end

Us = zeros(Nsol, 1);
for i = 1:Nsol
    dummy = 0;
    for j = 1:N
        dummy = dummy + c(j)*Phi(norm(Xsol(i,:) - X(j,:)) / delta);
    end
    Us(i) = dummy;
end

%% Arrange solution in a matrix to plot
k = 1;
U = zeros(N2sol, N1sol);
for i = 1:N2sol
    for j = 1:N1sol
        U(i,j) = Us(k);   % in first row y = y1, second row y = y2, etc.
        k = k + 1;
    end
end

%% plot numerical and exact solution

figure(1)
% numerical solution
subplot(2,1,1)
surf(xsol, ysol, U);
view([40, 65]);   % viewpoint specification
axis([0 1 0 1 0 1])
xlabel('x'); ylabel('y'); zlabel('Unumer');

subplot(2,1,2)
% exact solution
Z1 = (1 - exp(-(1-xsol)/epsilon)) / (1 - exp(-1/epsilon));
Z2 = (1 - exp(-2*(1-ysol)/epsilon)) / (1 - exp(-2/epsilon));
Uexact = Z2'*Z1;
surf(xsol, ysol, Uexact);
view([40, 65]);
axis([0 1 0 1 0 1])
xlabel('x'); ylabel('y'); zlabel('Uexact');

%% Error
figure(2)

subplot(2,1,1)
surf(xsol, ysol, U);
view([40, 65]);   % viewpoint specification
axis([0 1 0 1 0 1])
xlabel('x'); ylabel('y'); zlabel('Unumer');

subplot(2,1,2)
Error = abs(Uexact - U);
surf(xsol, ysol, Error)
view([40, 65]);
xlabel('x'); ylabel('y'); zlabel('Error');

end
F.2 Galerkin Formulation

F.2.1 Gal_1D.m

function [X, A, Us] = Gal_1D(epsilon, N, choice, delta)
% Calculates the solution to the ODE
%   -epsilon*u'' + u' = 0,  epsilon > 0,  for 0 < x < 1,
% with Dirichlet boundary conditions
%   u(0) = 1,
%   u(1) = 0.
% Method: Galerkin Formulation
% Radial basis function of the form phi_j = Phi(r)
% where r = |x - x_j|/delta.
% Artemis Nika 2014

%% Discretization in x-direction:

% Equispaced points
x = linspace(0, 1, N);

% Parameters
theta = -1;
sigma = 5/(1/N);
alpha = 1;

%% Create Matrix Equation
A = zeros(N, N);   % initialize matrix
F = zeros(N, 1);   % initialize RHS of matrix equation

if matlabpool('size') == 0   % checking to see if my pool is already open
    matlabpool open
end

parfor i = 1:N   % execute loop iterations in parallel
    [a, b] = calculate(i, theta, sigma, alpha, epsilon, delta, choice, N, x);
    A(i,:) = a;
    F(i) = b;
end

% Solve equation to obtain coefficients
c = A\F;

%% Add up coefficients to obtain solution vector Us
Nsol = 100;
Us = zeros(Nsol, 1);
X = linspace(0, 1, Nsol);

if (choice == 1)
    Phi = @(r) heaviside(1-r) * (1-r)^5 * (8*r^2 + 5*r + 1);
else
    Phi = @(r) exp(-r^2);
end

h = 1/(N-1);
for i = 1:Nsol
    dummy = 0;
    for j = 1:N
        dummy = dummy + c(j)*Phi(abs(X(i) - x(j)) / delta);
    end
    Us(i) = dummy;
end

%% Exact Solution

uExact = (1 - exp((X-1)/epsilon)) / (1 - exp(-1/epsilon));

%% Error
Error = zeros(Nsol, 1);
for i = 1:Nsol
    Error(i) = abs(Us(i) - uExact(i));
end

%% Plots
figure(1)

subplot(2,1,1)
plot(X, Us, 'r', 'LineWidth', 2);
axis([0, 1, 0, 1.5]);
str = sprintf('epsilon= %f, delta= %f, stepsize h= %f', epsilon, delta, h);
title(str);
xlabel('x'); ylabel('Numerical Solution');

subplot(2,1,2)
plot(X, Error, 'k-*')
xlabel('x'); ylabel('Error');

figure(2)

hold on
plot(X, Us, 'r-*', 'LineWidth', 2);
plot(X, uExact, 'LineWidth', 2);
axis([0, 1, 0, 1.5]);
xlabel('x'); ylabel('u');
legend('Numerical Solution', 'Exact Solution')
title(str);
hold off

end

% Definition of function calculate()
function [a, b] = calculate(i, theta, sigma, alpha, epsilon, delta, choice, N, x)
if (choice == 1)
    phi_i  = @(X) heaviside(1 - abs(X-x(i))/delta) .* (1 - abs(X-x(i))/delta).^5 .* (8*(abs(X-x(i))/delta).^2 + 5*(abs(X-x(i))/delta) + 1);
    dphi_i = @(X) heaviside(1 - abs(X-x(i))/delta) .* (1 - abs(X-x(i))/delta).^4 .* (-56*(abs(X-x(i))/delta).^2 - 14*(abs(X-x(i))/delta));
else
    phi_i  = @(X) exp(-(abs(X-x(i))/delta).^2);
    dphi_i = @(X) -2*(abs(X-x(i))/delta) .* exp(-(abs(X-x(i))/delta).^2);
end
a = zeros(1, N);
for j = 1:N
    if (choice == 1)
        % Wendland function - compact support
        phi_j  = @(X) heaviside(1 - abs(X-x(j))/delta) .* (1 - abs(X-x(j))/delta).^5 .* (8*(abs(X-x(j))/delta).^2 + 5*(abs(X-x(j))/delta) + 1);
        dphi_j = @(X) heaviside(1 - abs(X-x(j))/delta) .* (1 - abs(X-x(j))/delta).^4 .* (-56*(abs(X-x(j))/delta).^2 - 14*(abs(X-x(j))/delta));
    else
        % Gaussian function
        phi_j  = @(X) exp(-(abs(X-x(j))/delta).^2);
        dphi_j = @(X) -2*(abs(X-x(j))/delta) .* exp(-(abs(X-x(j))/delta).^2);
    end

    Function1 = @(X) (epsilon*(sign(X-x(j))/delta)) .* dphi_j(X) .* (sign(X-x(i))/delta) .* dphi_i(X) + (sign(X-x(j))/delta) .* dphi_j(X) .* phi_i(X);
    term1 = epsilon*(theta*phi_j(1)*(sign(1-x(i))/delta)*dphi_i(1) - (sign(1-x(j))/delta)*dphi_j(1)*phi_i(1));
    term2 = epsilon*(theta*phi_j(0)*(sign(0-x(i))/delta)*dphi_i(0) - (sign(0-x(j))/delta)*dphi_j(0)*phi_i(0));
    term3 = sigma*(phi_i(1)*phi_j(1) + phi_i(0)*phi_j(0));
    term_extra1 = -phi_j(0) .* phi_i(0);
    a(j) = integral(Function1, 0, 1) + term1 - term2 + term3 - alpha*term_extra1;
end

term_extra2 = -1*phi_i(0);
b = -epsilon*theta*(sign(0-x(i))/delta)*dphi_i(0) + sigma*phi_i(0) - alpha*term_extra2;

end
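An illustrative call follows; the function opens a matlabpool on first use, so the Parallel Computing Toolbox is required (matlabpool was superseded by parpool in later MATLAB releases). The value δ = 1.9 echoes the Galerkin row of Table E.3 but is otherwise arbitrary.

% Illustrative call: epsilon = 0.5, N = 32, Wendland(1D) RBF, delta = 1.9.
[X, A, Us] = Gal_1D(0.5, 32, 1, 1.9);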
F.2.2 Gal_2D.m

function [xsol, ysol, A, U] = Gal_2D(epsilon, N1, N2, choice, delta)
% Calculates the solution to the PDE
%   -epsilon*(u_xx + u_yy) + (1,2)*grad(u) = 0,  (x,y) in (0,1)^2,
% with boundary conditions
%   u(1,y) = u(x,1) = 0
%   u(0,y) = (1-exp(-2*(1-y)/epsilon))/(1-exp(-2/epsilon))
%   u(x,0) = (1-exp(-(1-x)/epsilon))/(1-exp(-1/epsilon)),
% using radial basis functions
% Method: Galerkin formulation
% Artemis Nika 2014

%% Discretization in x and y direction
x = linspace(0, 1, N1);
y = linspace(0, 1, N2);
N = N1*N2;   % total number of nodes

%% Parameters
theta = -1;
sigma = 5/(1/N1);
alpha = 0;

%% Create Matrix Equation
A = zeros(N, N);   % initialize matrix
F = zeros(N, 1);   % initialize RHS of matrix equation

% Create an Nx2 matrix containing all the nodes
X = zeros(N, 2);
k = 1;
for i = 1:N2
    for j = 1:N1
        X(k,1) = x(j);
        X(k,2) = y(i);
        k = k + 1;
    end
end

if matlabpool('size') == 0   % checking to see if my pool is already open
    matlabpool open
end

parfor i = 1:N   % execute loop iterations in parallel
    [a, b] = calculate(i, theta, sigma, alpha, epsilon, delta, choice, N, X);
    A(i,:) = a;
    F(i) = b;
end

% Solve to get coefficient vector
c = A\F;

%% Add up coefficients to obtain solution vector Us
N1sol = 30;
N2sol = 30;

xsol = linspace(0, 1, N1sol);
ysol = linspace(0, 1, N2sol);
Nsol = N1sol*N2sol;

Xsol = zeros(Nsol, 2);   % Nsol evaluation points
k = 1;
for i = 1:N2sol
    for j = 1:N1sol
        Xsol(k,1) = xsol(j);
        Xsol(k,2) = ysol(i);
        k = k + 1;
    end
end

if (choice == 1)
    Phi = @(r) heaviside(1-r) * (1-r)^6 * (35*r^2 + 18*r + 3);
else
    Phi = @(r) exp(-r^2);
end

Us = zeros(Nsol, 1);
for i = 1:Nsol
    dummy = 0;
    for j = 1:N
        dummy = dummy + c(j)*Phi(norm(Xsol(i,:) - X(j,:)) / delta);
    end
    Us(i) = dummy;
end

%% Arrange solution in a matrix to plot
k = 1;
U = zeros(N2sol, N1sol);
for i = 1:N2sol
    for j = 1:N1sol
        U(i,j) = Us(k);   % in first row y = y1, second row y = y2, etc.
        k = k + 1;
    end
end

%% plot numerical and exact solution

figure(1)
% numerical solution
subplot(2,1,1)
surf(xsol, ysol, U);
view([40, 65]);   % viewpoint specification
axis([0 1 0 1 0 1])
xlabel('x'); ylabel('y'); zlabel('Unumer');

subplot(2,1,2)
% exact solution
Z1 = (1 - exp(-(1-xsol)/epsilon)) / (1 - exp(-1/epsilon));
Z2 = (1 - exp(-2*(1-ysol)/epsilon)) / (1 - exp(-2/epsilon));
Uexact = Z2'*Z1;
surf(xsol, ysol, Uexact);
view([40, 65]);
axis([0 1 0 1 0 1])
xlabel('x'); ylabel('y'); zlabel('Uexact');

%% Error
figure(2)

subplot(2,1,1)
surf(xsol, ysol, U);
view([40, 65]);   % viewpoint specification
axis([0 1 0 1 0 1])
xlabel('x'); ylabel('y'); zlabel('Unumer');

subplot(2,1,2)
Error = abs(Uexact - U);
surf(xsol, ysol, Error)
view([40, 65]);
xlabel('x'); ylabel('y'); zlabel('Error');

end

% Definition of function calculate()
function [a, b] = calculate(i, theta, sigma, alpha, epsilon, delta, choice, N, X)
r_i = @(x1, y1) (sqrt((x1-X(i,1)).^2 + (y1-X(i,2)).^2) / delta);   % norm() does not work with quad function
if (choice == 1)
    phi_i     = @(x1, y1) heaviside(1-r_i(x1,y1)) .* (1-r_i(x1,y1)).^6 .* (35*r_i(x1,y1).^2 + 18*r_i(x1,y1) + 3);
    dphi_i_dx = @(x1, y1) heaviside(1-r_i(x1,y1)) .* ((x1-X(i,1))/(delta^2)) .* (1-r_i(x1,y1)).^5 .* (-280*r_i(x1,y1) - 56);
    dphi_i_dy = @(x1, y1) heaviside(1-r_i(x1,y1)) .* ((y1-X(i,2))/(delta^2)) .* (1-r_i(x1,y1)).^5 .* (-280*r_i(x1,y1) - 56);
else
    phi_i     = @(x1, y1) exp(-r_i(x1,y1).^2);
    dphi_i_dx = @(x1, y1) -2*((x1-X(i,1))/(delta^2)) .* exp(-r_i(x1,y1).^2);
    dphi_i_dy = @(x1, y1) -2*((y1-X(i,2))/(delta^2)) .* exp(-r_i(x1,y1).^2);
end
a = zeros(1, N);
for j = 1:N
    r_j = @(x1, y1) (sqrt((x1-X(j,1)).^2 + (y1-X(j,2)).^2) / delta);

    if (choice == 1)
        phi_j     = @(x1, y1) heaviside(1-r_j(x1,y1)) .* (1-r_j(x1,y1)).^6 .* (35*r_j(x1,y1).^2 + 18*r_j(x1,y1) + 3);
        dphi_j_dx = @(x1, y1) heaviside(1-r_j(x1,y1)) .* ((x1-X(j,1))/(delta^2)) .* (1-r_j(x1,y1)).^5 .* (-280*r_j(x1,y1) - 56);
        dphi_j_dy = @(x1, y1) heaviside(1-r_j(x1,y1)) .* ((y1-X(j,2))/(delta^2)) .* (1-r_j(x1,y1)).^5 .* (-280*r_j(x1,y1) - 56);
    else
        phi_j     = @(x1, y1) exp(-(sqrt((x1-X(j,1)).^2 + (y1-X(j,2)).^2)/delta).^2);
        dphi_j_dx = @(x1, y1) -2*((x1-X(j,1))/(delta^2)) .* exp(-r_j(x1,y1).^2);
        dphi_j_dy = @(x1, y1) -2*((y1-X(j,2))/(delta^2)) .* exp(-r_j(x1,y1).^2);
    end

    term1 = @(x1, y1) epsilon*(dphi_j_dx(x1,y1).*dphi_i_dx(x1,y1) + dphi_j_dy(x1,y1).*dphi_i_dy(x1,y1)) + (dphi_j_dx(x1,y1) + 2*dphi_j_dy(x1,y1)).*phi_i(x1,y1);
    term2 = @(y1) epsilon*(-phi_i(0,y1).*dphi_j_dx(0,y1) + theta*phi_j(0,y1).*dphi_i_dx(0,y1)) - sigma*phi_j(0,y1).*phi_i(0,y1);
    term3 = @(y1) epsilon*(phi_i(1,y1).*dphi_j_dx(1,y1) - theta*phi_j(1,y1).*dphi_i_dx(1,y1)) - sigma*phi_j(1,y1).*phi_i(1,y1);
    term4 = @(x1) epsilon*(-phi_i(x1,0).*dphi_j_dy(x1,0) + theta*phi_j(x1,0).*dphi_i_dy(x1,0)) - sigma*phi_j(x1,0).*phi_i(x1,0);
    term5 = @(x1) epsilon*(phi_i(x1,1).*dphi_j_dy(x1,1) - theta*phi_j(x1,1).*dphi_i_dy(x1,1)) - sigma*phi_j(x1,1).*phi_i(x1,1);

    term_extra1 = @(y1) -phi_j(0,y1).*phi_i(0,y1);
    term_extra2 = @(x1) -2*phi_j(x1,0).*phi_i(x1,0);

    yterm = @(y1) term2(y1) + term3(y1) - alpha*term_extra1(y1);
    xterm = @(x1) term4(x1) + term5(x1) - alpha*term_extra2(x1);
    % quad2d is faster than integral2, integral is faster than quad
    a(j) = quad2d(term1, 0, 1, 0, 1) - integral(yterm, 0, 1) - integral(xterm, 0, 1);
end

Fterm_extra1 = @(y1) -phi_i(0,y1).*(1 - exp(-2*(1-y1)/epsilon))./(1 - exp(-2/epsilon));
Fterm_extra2 = @(x1) -2*phi_i(x1,0).*(1 - exp(-(1-x1)/epsilon))./(1 - exp(-1/epsilon));
Fterm1 = @(y1) (-epsilon*theta.*dphi_i_dx(0,y1) + sigma*phi_i(0,y1)).*(1 - exp(-2*(1-y1)/epsilon))./(1 - exp(-2/epsilon));
Fterm2 = @(x1) (-epsilon*theta.*dphi_i_dy(x1,0) + sigma*phi_i(x1,0)).*(1 - exp(-(1-x1)/epsilon))./(1 - exp(-1/epsilon));

b = integral(Fterm1, 0, 1) + integral(Fterm2, 0, 1) - alpha*(integral(Fterm_extra1, 0, 1) + integral(Fterm_extra2, 0, 1));

end
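An illustrative call follows. Because every matrix entry requires numerical quadrature, even small grids are slow (cf. Tables D.19 and D.20), so the grid here is kept deliberately coarse; the parameter values are our own.

% Illustrative call on an 8 x 8 grid, i.e. N = 64 nodes, Gaussian RBF.
[xsol, ysol, A, U] = Gal_2D(0.5, 8, 8, 2, 0.17);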
F.3 Generalised Interpolation

F.3.1 GenInter_1D.m

function [X, A, Us] = GenInter_1D(epsilon, N, choice, delta)
% Calculates the solution to the ODE
%   -epsilon*u'' + u' = 0,  epsilon > 0,  for 0 < x < 1,
% with boundary conditions u(0) = 1, u(1) = 0.
% Method: Generalised interpolation
% Radial basis function of the form phi(x,y) = Phi(r)
% where r = |x - y|/delta.
% Artemis Nika 2014

%% Discretization in x-direction:

x = linspace(0, 1, N);

%% Radial basis function and derivatives (with respect to r)

if choice == 1
    % this radial basis function has compact support for 0 <= r < 1.
    Phi = @(r) heaviside(1-r) * (1-r)^5 * (8*r^2 + 5*r + 1);
    dPhi = @(r) heaviside(1-r) * (1-r)^4 * (-56*r^2 - 14*r);              % 1st der.
    ddPhi = @(r) heaviside(1-r) * (1-r)^3 * (336*r^2 - 42*r - 14);        % 2nd der.
    dddPhi = @(r) heaviside(1-r) * (1-r)^2 * (-1680*r^2 + 840*r);         % 3rd der.
    ddddPhi = @(r) heaviside(1-r) * (1-r) * (6720*r^2 - 5880*r + 840);    % 4th der.
elseif choice == 2
    % no compact support
    Phi = @(r) exp(-r^2);
    dPhi = @(r) -2*r*exp(-r^2);
    ddPhi = @(r) (-2 + 4*r^2) * exp(-r^2);
    dddPhi = @(r) (12*r - 8*r^3) * exp(-r^2);
    ddddPhi = @(r) (12 - 48*r^2 + 16*r^4) * exp(-r^2);
else
    error('Choice of radial basis not correct. Choices: 1 = Wendland, 2 = Gaussian')
end

%% Construct matrix equation

A = zeros(N, N);
% for 0 < x < 1
for i = 2:N-1
    for j = 1:N
        r = abs(x(i) - x(j)) / delta;   % x_j -> phi_j(x_i)
        % alternate sign because of the abs value
        if x(i) > x(j)
            dr_dy = -1/delta;
            dr_dx = 1/delta;
        else
            dr_dy = 1/delta;
            dr_dx = -1/delta;
        end

        % Calculate entries of matrix
        if (j == 1 || j == N)
            A(i,j) = -(epsilon/delta^2)*ddPhi(r) + dr_dx*dPhi(r);
        else
            dF_dx = dr_dx * (-(epsilon/delta^2)*dddPhi(r) + dr_dy*ddPhi(r));
            ddF_dxx = (1/delta^2) * ((-epsilon/delta^2)*ddddPhi(r) + dr_dy*dddPhi(r));
            A(i,j) = -epsilon*ddF_dxx + dF_dx;
        end
    end
end

% First and last rows of matrix A - boundary conditions

A(1,1) = Phi(0);                       A(1,N) = Phi(abs(x(1) - x(N))/delta);
A(N,1) = Phi(abs(x(N) - x(1))/delta);  A(N,N) = Phi(0);

for j = 2:N-1
    % for x_i = 0 = x_1;
    A(1,j) = -(epsilon/delta^2)*ddPhi(abs(0 - x(j))/delta) + (1/delta)*dPhi(abs(0 - x(j))/delta);   % x_1 = 0 -> phi_j(0)
    % for x_i = 1 = x_N;
    A(N,j) = -(epsilon/delta^2)*ddPhi(abs(x(N) - x(j))/delta) + (-1/delta)*dPhi(abs(x(N) - x(j))/delta);
end

% RHS of Matrix Equation

b = zeros(N, 1);
b(1) = 1;   % because of u(0) = 1

%% Solve Matrix Equation to obtain coefficient vector c

c = A\b;

%% Add up coefficients to obtain solution vector Us

Nsol = 100;
X = linspace(0, 1, Nsol);
Us = zeros(Nsol, 1);
for i = 1:Nsol
    dummy = 0;
    for j = 1:N
        r = abs(X(i) - x(j)) / delta;
        if X(i) > x(j)
            dr_dy = -1/delta;
        else
            dr_dy = 1/delta;
        end
        if (j == 1 || j == N)
            mult = Phi(r);
        else
            mult = -(epsilon/delta^2)*ddPhi(r) + dr_dy*dPhi(r);
        end
        dummy = dummy + c(j)*mult;
    end
    Us(i) = dummy;
end

%% Exact Solution

uExact = (1 - exp((X-1)/epsilon)) / (1 - exp(-1/epsilon));

%% Error

Error = zeros(Nsol, 1);
for i = 1:Nsol
    Error(i) = abs(Us(i) - uExact(i));
end

%% Plots

figure(1)

subplot(2,1,1)
plot(X, Us, 'r-', 'LineWidth', 2);
axis([0, 1, 0, 1.5]);
str = sprintf('epsilon= %f, delta= %f, points N= %f', epsilon, delta, N);
title(str);
xlabel('x'); ylabel('u_s');

subplot(2,1,2)
plot(X, Error, 'k-*')
xlabel('x'); ylabel('Error');

figure(2)

hold on
title(str);
plot(X, Us, '-*r');
plot(X, uExact);
axis([0, 1, 0, 1.5]);
xlabel('x'); ylabel('u');
legend('Numerical Solution', 'Exact Solution')
hold off

end
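An illustrative call follows; δ = 0.43 echoes the Gen. Interpolation row of Table E.4 but is otherwise arbitrary.

% Illustrative call: epsilon = 0.5, N = 32, Gaussian RBF, delta = 0.43.
[X, A, Us] = GenInter_1D(0.5, 32, 2, 0.43);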
F.3.2 GenInter_2D.m

function [xsol, ysol, A, U] = GenInter_2D(epsilon, N1, N2, choice, delta)
% Calculates the solution to the PDE
%   -epsilon*(u_xx + u_yy) + (1,2)*grad(u) = 0,  (x,y) in (0,1)^2,
% with boundary conditions
%   u(1,y) = u(x,1) = 0
%   u(0,y) = (1-exp(-2*(1-y)/epsilon))/(1-exp(-2/epsilon))
%   u(x,0) = (1-exp(-(1-x)/epsilon))/(1-exp(-1/epsilon)),
% using radial basis functions
% Method: Generalised Interpolation
% Artemis Nika 2014

%% Discretization in x and y direction
x = linspace(0, 1, N1);
y = linspace(0, 1, N2);
N = N1*N2;   % total number of nodes

%% Radial basis function and derivatives (with respect to r)

if choice == 1
    % this radial basis function has compact support for 0 <= r < 1.
    Phi = @(r) heaviside(1-r) * (1-r)^6 * (35*r^2 + 18*r + 3);
    dPhi = @(r) heaviside(1-r) * (1-r)^5 * (-280*r - 56);              % 1st der. - divided by r
    ddPhi = @(r) heaviside(1-r) * (1-r)^4 * (1960*r^2 - 224*r - 56);   % 2nd der.
    % dddPhi = @(r) heaviside(1-r) * (1-r)^3 * (7*r - 3)*1680;         % 3rd der. - divided by r
    % ddddPhi = @(r) heaviside(1-r) * (1-r)^2 * 1680 * (5*r - 3) * (7*r - 1);   % 4th der.
elseif choice == 2
    % no compact support
    Phi = @(r) exp(-r^2);
    dPhi = @(r) -2*exp(-r^2);   % divided by r
    ddPhi = @(r) (-2 + 4*r^2) * exp(-r^2);
    % dddPhi = @(r) (12 - 8*r^2) * exp(-r^2);   % divided by r
    % ddddPhi = @(r) (12 - 48*r^2 + 16*r^4) * exp(-r^2);
else
    error('Choice of radial basis not correct. Choices: 1 = Wendland, 2 = Gaussian')
end

%% Create matrix equation and obtain coefficient vector

% Create an Nx2 matrix containing all the nodes
X = zeros(N, 2);
k = 1;
for i = 1:N2
    for j = 1:N1
        X(k,1) = x(j);
        X(k,2) = y(i);
        k = k + 1;
    end
end

% Create matrix A
A = zeros(N, N);
for i = 1:N
    for j = 1:N
        r = norm(X(i,:) - X(j,:)) / delta;   % --> phi_j
        % on the boundary
        if (X(i,1)==1 || X(i,2)==1 || X(i,1)==0 || X(i,2)==0)
            if (X(j,1)==1 || X(j,2)==1 || X(j,1)==0 || X(j,2)==0)
                % x_j on the boundary
                A(i,j) = Phi(r);
            else
                % for x_j not on the boundary
                LaplacianPhi = 1/(delta^2)*dPhi(r) + 1/(delta^2)*ddPhi(r);
                dphi_dx2 = (X(j,1) - X(i,1))/(delta^2) * dPhi(r);
                dphi_dy2 = (X(j,2) - X(i,2))/(delta^2) * dPhi(r);
                GradPhi = [dphi_dx2, dphi_dy2];
                A(i,j) = -epsilon*LaplacianPhi + [1, 2]*GradPhi';
            end
        else
            if (X(j,1)==1 || X(j,2)==1 || X(j,1)==0 || X(j,2)==0)
                % for x_j on the boundary
                LaplacianPhi = 1/(delta^2)*dPhi(r) + 1/(delta^2)*ddPhi(r);
                dphi_dx2 = (X(i,1) - X(j,1))/(delta^2) * dPhi(r);
                dphi_dy2 = (X(i,2) - X(j,2))/(delta^2) * dPhi(r);
                GradPhi = [dphi_dx2, dphi_dy2];
                A(i,j) = -epsilon*LaplacianPhi + [1, 2]*GradPhi';
            else
                % x_i and x_j inside the domain
                if choice == 1
                    term1 = 6720*(4*r - 1)*(3*r - 2)*(r - 1)^2 * heaviside(1-r);
                    term2 = 1680*(1 - r)^4 * heaviside(1-r);
                else
                    term1 = 16*(2 - 4*r^2 + r^4) * exp(-r^2);
                    term2 = 4*exp(-r^2);
                end
                term3 = -5/(delta^2) * (dPhi(r));
                term4 = r^2*delta^2 + 4*(X(j,1) - X(i,1))*(X(j,2) - X(i,2)) + 3*(X(j,2) - X(i,2))^2;
                A(i,j) = (epsilon^2/delta^4)*term1 - term4/(delta^4)*term2 + term3;
            end
        end
    end
end

% create RHS vector b
b = zeros(N, 1);
for i = 1:N
    if (X(i,1) == 0)       % for x = 0
        b(i) = (1 - exp(-2*(1-X(i,2))/epsilon)) / (1 - exp(-2/epsilon));
    elseif (X(i,2) == 0)   % for y = 0
        b(i) = (1 - exp(-(1-X(i,1))/epsilon)) / (1 - exp(-1/epsilon));
    else                   % all other nodes
        b(i) = 0;
    end
end

% Solve to get coefficient vector
c = A\b;

%% Add up coefficients to obtain solution vector Us
N1sol = 30;
N2sol = 30;

xsol = linspace(0, 1, N1sol);
ysol = linspace(0, 1, N2sol);
Nsol = N1sol*N2sol;

Xsol = zeros(Nsol, 2);   % Nsol evaluation points
k = 1;
for i = 1:N2sol
    for j = 1:N1sol
        Xsol(k,1) = xsol(j);
        Xsol(k,2) = ysol(i);
        k = k + 1;
    end
end

Us = zeros(Nsol, 1);
for i = 1:Nsol
    dummy = 0;
    for j = 1:N
        r = norm(Xsol(i,:) - X(j,:)) / delta;
        if (X(j,1)==1 || X(j,2)==1 || X(j,1)==0 || X(j,2)==0)
            % for x_j on the boundary
            dummy = dummy + c(j)*Phi(r);
        else
            % for x_j not on the boundary
            LaplacianPhi = 1/(delta^2)*dPhi(r) + 1/(delta^2)*ddPhi(r);
            dphi_dx2 = (X(j,1) - Xsol(i,1))/(delta^2) * dPhi(r);
            dphi_dy2 = (X(j,2) - Xsol(i,2))/(delta^2) * dPhi(r);
            GradPhi = [dphi_dx2, dphi_dy2];
            dummy = dummy + c(j)*(-epsilon*LaplacianPhi + [1, 2]*GradPhi');
        end
    end
    Us(i) = dummy;
end

%% Arrange solution in a matrix to plot
k = 1;
U = zeros(N2sol, N1sol);
for i = 1:N2sol
    for j = 1:N1sol
        U(i,j) = Us(k);   % in first row y = y1, second row y = y2, etc.
        k = k + 1;
    end
end

%% plot numerical and exact solution

figure(1)
% numerical solution
subplot(2,1,1)
surf(xsol, ysol, U);
view([40, 65]);   % viewpoint specification
axis([0 1 0 1 0 1])
xlabel('x'); ylabel('y'); zlabel('Unumer');

subplot(2,1,2)
% exact solution
Z1 = (1 - exp(-(1-xsol)/epsilon)) / (1 - exp(-1/epsilon));
Z2 = (1 - exp(-2*(1-ysol)/epsilon)) / (1 - exp(-2/epsilon));
Uexact = Z2'*Z1;
surf(xsol, ysol, Uexact);
view([40, 65]);
axis([0 1 0 1 0 1])
xlabel('x'); ylabel('y'); zlabel('Uexact');

%% Error
figure(2)

subplot(2,1,1)
surf(xsol, ysol, U);
view([40, 65]);   % viewpoint specification
axis([0 1 0 1 0 1])
xlabel('x'); ylabel('y'); zlabel('Unumer');

subplot(2,1,2)
Error = abs(Uexact - U);
surf(xsol, ysol, Error)
view([40, 65]);
xlabel('x'); ylabel('y'); zlabel('Error');

end
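An illustrative call follows; the parameter values are our own.

% Illustrative call on a 16 x 16 grid, i.e. N = 256 nodes, Wendland(2D) RBF.
[xsol, ysol, A, U] = GenInter_2D(0.5, 16, 16, 1, 5);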
Bibliography

[1] Zhijie Cai. Best estimates of RBF-based meshless Galerkin methods for Dirichlet problem. Applied Mathematics and Computation, 215:2149–2153, 2009.

[2] Jichun Li, Alexander H.-D. Cheng and Ching-Shyang Chen. A comparison of efficiency and error convergence of multiquadric collocation method and finite element method. Engineering Analysis with Boundary Elements, 27:251–257, 2003.

[3] Yong Duan and Yong-Ji Tan. A meshless Galerkin method for Dirichlet problems using radial basis functions. Journal of Computational and Applied Mathematics, 196:394–401, 2006.

[4] Patricio Farrell and Holger Wendland. RBF multiscale collocation for second order elliptic boundary value problems. SIAM Journal on Numerical Analysis, 51(4):2403–2425, August 2013.

[5] Gregory E. Fasshauer. Meshfree Approximation Methods with MATLAB. World Scientific Publishers, 2007.

[6] Rolland L. Hardy. Multiquadric equations of topography and other irregular surfaces. Journal of Geophysical Research, 76(8):1905–1915, March 1971.

[7] Y. C. Hon and R. Schaback. On unsymmetric collocation by radial basis functions. Applied Mathematics and Computation, 119:177–186, 2001.

[8] E. J. Kansa. Multiquadrics – a scattered data approximation scheme with applications to computational fluid dynamics – I: Surface approximations and partial derivative estimates. Computers & Mathematics with Applications, 19(8/9):127–145, 1990.

[9] E. J. Kansa. Multiquadrics – a scattered data approximation scheme with applications to computational fluid dynamics – II: Solutions to parabolic, hyperbolic and elliptic partial differential equations. Computers & Mathematics with Applications, 19(8/9):147–161, 1990.

[10] Elisabeth Larsson and Bengt Fornberg. A numerical study of some radial basis function based solution methods for elliptic PDEs. Computers & Mathematics with Applications, 46(5):891–902, 2003.

[11] N. Mai-Duy and T. Tran-Cong. An integrated-RBF technique based on Galerkin formulation for elliptic differential equations. Engineering Analysis with Boundary Elements, 33(2):191–199, 2009.

[12] Charles A. Micchelli. Interpolation of scattered data: Distance matrices and conditionally positive definite functions. Constructive Approximation, 2:11–22, 1986.

[13] Michael Mongillo. Choosing basis functions and shape parameters for radial basis function methods. SIAM Undergraduate Research Online, 2011.

[14] Shmuel Rippa. An algorithm for selecting a good value for the parameter c in radial basis function interpolation. Advances in Computational Mathematics, 11:193–210, 1999.

[15] Robert Schaback. Error estimates and condition numbers for radial basis function interpolation. Advances in Computational Mathematics, 3(3):251–264, 1995.

[16] K. Harriman, P. Houston, B. Senior and E. Süli. hp-version discontinuous Galerkin methods with interior penalty for partial differential equations with nonnegative characteristic form. Contemporary Mathematics, 330:89–120, 2003.

[17] H.-G. Roos, M. Stynes and L. Tobiska. Robust Numerical Methods for Singularly Perturbed Differential Equations. Springer Series in Computational Mathematics, 24, 2008.

[18] Marjan Uddin. On the selection of a good shape parameter in solving time-dependent partial differential equations using RBF approximation method. Applied Mathematical Modelling, 38:135–144, 2014.

[19] Holger Wendland. Error estimates for interpolation by compactly supported radial basis functions of minimal degree. Journal of Approximation Theory, 93(2):258–272, 1998.

[20] Holger Wendland. Numerical solution of variational problems by radial basis functions. Approximation Theory IX, 2:361–368, 1998.

[21] Holger Wendland. Meshless Galerkin methods using radial basis functions. Mathematics of Computation, 68(228):1521–1531, March 1999.

[22] Holger Wendland. On the stability of meshless symmetric collocation for boundary value problems. BIT Numerical Mathematics, 47:455–468, March 2007.

[23] J. Wloka. Partial Differential Equations. Cambridge University Press, 1987.

[24] Grady B. Wright. Radial Basis Function Interpolation: Numerical and Analytical Developments. PhD thesis, University of Colorado, 2003.