Comparative Analysis of Different Numerical Methods of Solving First Order Di...ijtsrd
A mathematical equation which relates a function with its derivatives is known as a differential equation. It is a well-known and popular field of mathematics because of its vast range of applications to real-world problems such as mechanical vibrations, the theory of heat transfer, electric circuits, etc. In this paper, we compare some numerical methods of solving differential equations with the classical method and examine their accuracy, which will help readers understand the importance and usefulness of these methods. Chandrajeet Singh Yadav "Comparative Analysis of Different Numerical Methods of Solving First Order Differential Equation" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-2 | Issue-4, June 2018, URL: http://www.ijtsrd.com/papers/ijtsrd13045.pdf Paper URL: http://www.ijtsrd.com/mathemetics/applied-mathematics/13045/comparative-analysis-of-different-numerical-methods-of-solving-first-order-differential-equation/chandrajeet-singh-yadav
Course: Intro to Computer Science (Malmö Högskola):
An overview of computability and complexity (for non-mathematicians): the definition of algorithm, Turing machines, lambda calculus, and concepts of complexity.
Comparative Analysis of Different Numerical Methods for the Solution of Initi...YogeshIJTSRD
A mathematical equation which involves a function and its derivatives is called a differential equation. We consider a real-life situation, form a mathematical model from it, solve that model using some mathematical concepts, and interpret the solution. It is a well-known and popular concept in mathematics because of its massive application to real-world problems. Differential equations are one of the most important mathematical tools used in modeling problems in Physics, Biology, Economics, Chemistry, Engineering and the medical sciences. A differential equation can describe many situations, viz. exponential growth and decay, the population growth of species, and the change in investment return over time. We can solve differential equations using classical as well as numerical methods. In this paper we compare numerical methods of solving initial-value first order ordinary differential equations, namely the Euler method, Improved Euler method and Runge–Kutta method, and their accuracy levels. We use Scilab software to obtain direct solutions for these methods. Vibahvari Tukaram Dhokrat "Comparative Analysis of Different Numerical Methods for the Solution of Initial Value Problems in First Order Ordinary Differential Equations" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-5, August 2021, URL: https://www.ijtsrd.com/papers/ijtsrd45066.pdf Paper URL: https://www.ijtsrd.com/mathemetics/applied-mathematics/45066/comparative-analysis-of-different-numerical-methods-for-the-solution-of-initial-value-problems-in-first-order-ordinary-differential-equations/vibahvari-tukaram-dhokrat
Objective: The main target of this project is to study the Baby-Step Giant-Step algorithm and propose an approach for improving the algorithm for solving the Elliptic Curve Discrete Logarithm Problem.
The Solution of Maximal Flow Problems Using the Method Of Fuzzy Linear Progra...theijes
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Asynchronous Stochastic Optimization, New Analysis and AlgorithmsFabian Pedregosa
As datasets continue to increase in size and multi-core computer architectures are developed, asynchronous parallel optimization algorithms become more and more essential to the field of Machine Learning. In this talk I will describe two of our recent contributions to this topic. First, we highlight an important technical issue present in a large fraction of the recent convergence proofs for asynchronous parallel optimization algorithms and propose a new framework that resolves it [1]. Second, we propose a novel asynchronous variant of SAGA, a stochastic method that combines the low cost per iteration of SGD with the fast convergence rates of gradient descent [2]
[1] Leblond, R., Pedregosa, F., & Lacoste-Julien, S. (2018). Improved asynchronous parallel optimization analysis for stochastic incremental methods. arXiv:1801.03749, https://arxiv.org/pdf/1801.03749.pdf
[2] Pedregosa, F., Leblond, R., & Lacoste-Julien, S. (2017). Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Optimization. In Advances in Neural Information Processing Systems, http://papers.nips.cc/paper/6611-breaking-the-nonsmooth-barrier-a-scalable-parallel-method-for-composite-optimization.pdf
Finding the Extreme Values with some Application of Derivativesijtsrd
There are many different rules in mathematics. Among them, we address finding the extreme values for optimization problems that arise in practical life using derivatives. The derivative is the exact rate at which one quantity changes with respect to another; with it, we can compute the profit and loss of a process run by a company or a system. A variety of optimization problems are solved using derivatives. Here, derivatives are used to find the extreme values of functions, to determine and analyze the shape of graphs, and to find numerically where a function equals zero. Kyi Sint | Kay Thi Win "Finding the Extreme Values with some Application of Derivatives" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-6, October 2019, URL: https://www.ijtsrd.com/papers/ijtsrd29347.pdf Paper URL: https://www.ijtsrd.com/mathemetics/other/29347/finding-the-extreme-values-with-some-application-of-derivatives/kyi-sint
A Class of Continuous Implicit Seventh-eight method for solving y’ = f(x, y) ...AI Publications
In this article, we develop a continuous implicit seventh-eighth method, using interpolation and collocation of the approximate solution, for the solution of y' = f(x, y) with a constant step-size. The method uses a power series as the approximate solution in its derivation. The independent solution was then derived by adopting a block integrator. The properties of the method were investigated and it was found to be zero-stable, consistent and convergent. The integrator was tested on numerical examples ranging from a linear problem and the Prothero–Robinson oscillatory problem to a growth model and an SIR model. The results show that the computed solution is close to the exact solution and that the absolute errors compare favourably with existing methods.
Similar to Scientific Computations and Differential Equations (20)
General linear methods for ordinary differential equationsYuri Vyatkin
John Butcher
The University of Auckland, New Zealand
Acknowledgements:
Will Wright, Zdzisław Jackiewicz, Helmut Podhaisky
Huatulco, Oaxaca
July 23-28, 2006
Salas, V. (2024) "John of St. Thomas (Poinsot) on the Science of Sacred Theol...Studia Poinsotiana
I Introduction
II Subalternation and Theology
III Theology and Dogmatic Declarations
IV The Mixed Principles of Theology
V Virtual Revelation: The Unity of Theology
VI Theology as a Natural Science
VII Theology’s Certitude
VIII Conclusion
Notes
Bibliography
All the contents are fully attributable to the author, Doctor Victor Salas. Should you wish to get this text republished, get in touch with the author or the editorial committee of the Studia Poinsotiana. Insofar as possible, we will be happy to broker your contact.
Toxic effects of heavy metals : Lead and Arsenicsanjana502982
Heavy metals are naturally occurring metallic chemical elements that have relatively high density and are toxic at even low concentrations. All toxic metals are termed heavy metals irrespective of their atomic mass and density, e.g. arsenic, lead, mercury, cadmium, thallium, chromium, etc.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk, Journées Nationales du GDR GPL 2024
Seminar of U.V. Spectroscopy by SAMIR PANDA
Spectroscopy is a branch of science dealing with the study of the interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption spectroscopy or reflectance spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that can measure the amount of light absorbed by the analyte.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich on features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and quality to enable complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization. 
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...Sérgio Sacani
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4–0.9 µm) and novel JWST images with 14 filters spanning 0.8–5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at >2.3 µm to construct an ultradeep image, reaching as deep as ≈31.4 AB mag in the stack and 30.3–31.0 AB mag (5σ, r = 0.1″ circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5–15. These objects show compact half-light radii of R_1/2 ∼ 50–200 pc, stellar masses of M⋆ ∼ 10^7–10^8 M⊙, and star-formation rates of SFR ∼ 0.1–1 M⊙ yr^−1. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward-modeling approach to infer the properties of the evolving luminosity function, without binning in redshift or luminosity, that marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ∼2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for evolution of the dark matter halo mass function.
Scientific Computations and Differential Equations
Scientific Computation and Differential Equations
John Butcher
The University of Auckland, New Zealand
Annual Fundamental Sciences Seminar
June 2006
Institut Kajian Sains Fundamental Ibnu Sina
Scientific Computation and Differential Equations – p. 1/36
Overview
This year marks the fiftieth anniversary of the first computer in the Southern Hemisphere – the SILLIAC computer at the University of Sydney.
I had the privilege of being present when this computer did its first calculation, and my own first program ran on this computer soon afterwards.
This computer was built for one reason only – to solve scientific problems.
Scientific problems have always been the driving force in the development of faster and faster computers.
Within Scientific Computation, the approximate solution of differential equations has always been an area of special challenge.
Scientific Computation and Differential Equations – p. 2/36
Differential equations usually cannot be solved analytically, so numerical methods are necessary.
Two main types of numerical methods exist: linear multistep methods and Runge–Kutta methods.
These traditional methods are special cases within the larger class of “General Linear Methods”.
Today we will look briefly at the history of numerical methods for differential equations.
We will then look at some particular questions concerning the theory of general linear methods.
We will also look at some aspects of their practical implementation.
Scientific Computation and Differential Equations – p. 3/36
Contents
A short history of numerical differential equations
Linear multistep methods
Runge–Kutta methods
General linear methods
Examples of general linear methods
Order of GLMs
Methods with the IRK stability property
Implementation questions for IRKS methods
Scientific Computation and Differential Equations – p. 4/36
A short history of numerical ODEs
We will make use of three standard types of initial value problems:
y′(x) = f(x, y(x)), y(x0) = y0 ∈ R,    (1)
y′(x) = f(x, y(x)), y(x0) = y0 ∈ R^N,  (2)
y′(x) = f(y(x)),    y(x0) = y0 ∈ R^N.  (3)
Problem (1) is used in traditional descriptions of numerical methods, but in applications we need to use either (2) or (3).
These are actually equivalent, and we will often use (3) instead of (2) because of its simplicity.
Scientific Computation and Differential Equations – p. 5/36
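The equivalence of (2) and (3) can be made concrete: a non-autonomous system y′ = f(x, y) becomes autonomous by appending the independent variable x as an extra solution component whose derivative is 1. A minimal sketch of this reduction (the function names are illustrative, not from the talk):

```python
import numpy as np

def autonomize(f):
    """Turn f(x, y) for y' = f(x, y) into g(z) for z' = g(z),
    where z = (y, x): the last component carries x and has derivative 1."""
    def g(z):
        y, x = z[:-1], z[-1]
        return np.append(f(x, y), 1.0)
    return g

# Example: the scalar non-autonomous problem y' = x + y.
f = lambda x, y: x + y
g = autonomize(f)
z0 = np.array([1.0, 0.0])   # y(x0) = 1 with x0 = 0 appended
print(g(z0))                # derivative of (y, x) at the initial point: [1. 1.]
```

Any solver written for the autonomous form (3) can then be applied unchanged to problems posed in form (2).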
24. The Euler method
Euler proposed a simple numerical scheme in
approximately 1770; this can be used for a system of first
order equations.
Scientific Computation and Differential Equations – p. 6/36
25. The Euler method
Euler proposed a simple numerical scheme in approximately 1770; this can be used for a system of first-order equations. The idea is to treat the solution as though it had a constant derivative in each time step.
[Figure: starting from y0 at x0, the numerical solution advances through x1, x2, x3, x4 along line segments with slopes f(y0), f(y1), f(y2), f(y3).]
y1 = y0 + h f(y0)
y2 = y1 + h f(y1)
y3 = y2 + h f(y2)
y4 = y3 + h f(y3)
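The recurrence above is straightforward to put into code. A minimal sketch (the test problem y′ = −y and the step size are illustrative choices, not from the talk):

```python
import math

def euler(f, y0, x0, h, n):
    """Advance y' = f(y) from y0 at x0 by n Euler steps of size h."""
    ys = [y0]
    for _ in range(n):
        ys.append(ys[-1] + h * f(ys[-1]))  # y_{k+1} = y_k + h f(y_k)
    return ys

# Example: y' = -y, y(0) = 1, exact solution e^{-x}.
ys = euler(lambda y: -y, 1.0, 0.0, 0.1, 10)
print(abs(ys[-1] - math.exp(-1.0)))  # first order: error shrinks roughly like h
```

Halving h (and doubling n) roughly halves the error at x = 1, consistent with the method having order 1.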
34. More modern methods attempt to improve on the Euler method by
1. Using more past history
– Linear multistep methods
2. Doing more complicated calculations in each step
– Runge–Kutta methods
3. Doing both of these
– General linear methods
41. Some important dates
1883  Adams & Bashforth       Linear multistep methods
1895  Runge                   Runge–Kutta method
1901  Kutta
1925  Nyström                 Special methods for second order
1926  Moulton                 Adams–Moulton method
1952  Curtiss & Hirschfelder  Stiff problems
42. Linear multistep methods
We will write the differential equation in autonomous form
    y′(x) = f(y(x)), y(x0) = y0,
and the aim, for the moment, will be to calculate approximations to y(xi), where
    xi = x0 + hi, i = 1, 2, 3, . . . ,
and h is the “stepsize”.
Linear multistep methods base the approximation to y(xn) on a linear combination of approximations to y(xn−i) and approximations to y′(xn−i), i = 1, 2, . . . , k.
45. Write yi as the approximation to y(xi) and fi as the approximation to y′(xi) = f(y(xi)).
A linear multistep method can be written as
    yn = Σ_{i=1}^{k} αi yn−i + h Σ_{i=0}^{k} βi fn−i
This is a 1-stage, 2k-value method.
1 stage? One evaluation of f per step.
2k values? This many quantities are passed between steps.
β0 = 0: explicit. β0 ≠ 0: implicit.
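As a concrete instance of this general form, here is a sketch of an explicit 2-step method, the Adams–Bashforth method with α = (1, 0) and β = (0, 3/2, −1/2); the test problem and the exact starting values are illustrative assumptions:

```python
import math

def lmm_step(ys, fs, h, alpha, beta, f):
    """One step of an explicit linear multistep method (beta[0] == 0).

    ys and fs hold the k most recent values and derivatives, newest
    first: ys[0] = y_{n-1}, ..., ys[k-1] = y_{n-k}.
    """
    k = len(alpha)
    yn = (sum(alpha[i] * ys[i] for i in range(k))
          + h * sum(beta[i + 1] * fs[i] for i in range(k)))
    return yn, f(yn)

# 2-step Adams-Bashforth: alpha = (1, 0), beta = (0, 3/2, -1/2).
f = lambda y: -y
h = 0.1
ys = [math.exp(-h), 1.0]            # exact starting values y1, y0
fs = [f(y) for y in ys]
for _ in range(9):                  # advance from x = 0.1 to x = 1.0
    yn, fn = lmm_step(ys, fs, h, (1.0, 0.0), (0.0, 1.5, -0.5), f)
    ys, fs = [yn, ys[0]], [fn, fs[0]]
print(abs(ys[0] - math.exp(-1.0)))  # order 2: error shrinks like h^2
```

The method is explicit because β0 = 0: the new value yn is computed from stored quantities only, with one evaluation of f per step.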
52. Runge–Kutta methods
A Runge–Kutta method computes yn in terms of a single input yn−1 and s stages Y1, Y2, . . . , Ys, where
    Yi = yn−1 + h Σ_{j=1}^{s} aij f(Yj), i = 1, 2, . . . , s,
    yn = yn−1 + h Σ_{i=1}^{s} bi f(Yi).
This is an s-stage, 1-value method.
It is natural to ask if there are useful methods which are multistage (as for Runge–Kutta methods) and multivalue (as for linear multistep methods).
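When the matrix (aij) is strictly lower triangular the stage equations can be solved by forward substitution, giving an explicit method. A sketch of such a stepper, using the classical fourth-order tableau as an illustrative choice (it is not the θ-family discussed on the next slides):

```python
import math

def rk_step(f, y, h, A, b):
    """One step of an explicit Runge-Kutta method (A strictly lower triangular)."""
    s = len(b)
    Y = []
    for i in range(s):
        # Y_i = y_{n-1} + h * sum_j a_ij f(Y_j); explicit: only j < i contribute
        Y.append(y + h * sum(A[i][j] * f(Y[j]) for j in range(i)))
    return y + h * sum(b[i] * f(Y[i]) for i in range(s))

# The classical 4-stage, order-4 tableau.
A = [[0, 0, 0, 0],
     [0.5, 0, 0, 0],
     [0, 0.5, 0, 0],
     [0, 0, 1, 0]]
b = [1/6, 1/3, 1/3, 1/6]

y, h = 1.0, 0.1
for _ in range(10):                 # y' = -y on [0, 1]
    y = rk_step(lambda u: -u, y, h, A, b)
print(abs(y - math.exp(-1.0)))      # error ~ h^4
```

Four stages means four evaluations of f per step, but only the single value yn is passed between steps.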
57. In other words, we ask if there is any value in completing this diagram:
[Diagram: the Euler method sits at the base, generalised in one direction to Runge–Kutta methods and in another to linear multistep methods; “General Linear Methods” completes the square above both.]
60. General linear methods
We will consider methods characterised by an (s + r) × (s + r) partitioned matrix of the form
    [ A  U ]
    [ B  V ]
where A is s × s, U is s × r, B is r × s and V is r × r.
The r values input to step n will be denoted by y_i^{[n−1]}, i = 1, 2, . . . , r, with corresponding output values y_i^{[n]}, and the stage values by Yi, i = 1, 2, . . . , s.
The stage derivatives will be denoted by Fi = f(Yi).
63. The formulae for computing the stages (and simultaneously the stage derivatives) are:
    Yi = h Σ_{j=1}^{s} aij Fj + Σ_{j=1}^{r} uij y_j^{[n−1]}, Fi = f(Yi),
for i = 1, 2, . . . , s.
To compute the output values, use the formula
    y_i^{[n]} = h Σ_{j=1}^{s} bij Fj + Σ_{j=1}^{r} vij y_j^{[n−1]}, i = 1, 2, . . . , r.
66. For convenience, write
    y^{[n−1]} = (y_1^{[n−1]}, y_2^{[n−1]}, . . . , y_r^{[n−1]})^T,  y^{[n]} = (y_1^{[n]}, y_2^{[n]}, . . . , y_r^{[n]})^T,
    Y = (Y1, Y2, . . . , Ys)^T,  F = (F1, F2, . . . , Fs)^T,
so that we can write the calculations in a step more simply as
    [ Y      ]   [ A  U ] [ hF        ]
    [ y^{[n]} ] = [ B  V ] [ y^{[n−1]} ].
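The matrix form above translates directly into code. A sketch for the explicit case (A strictly lower triangular, so the stages can be computed in order); the sanity check that the Euler method is the trivial 1-stage, 1-value general linear method is an illustrative choice:

```python
def glm_step(f, y_in, h, A, U, B, V):
    """One step of an explicit general linear method.

    y_in: list of r input approximations; returns the r output values.
    A must be strictly lower triangular (forward substitution for stages).
    """
    s, r = len(A), len(y_in)
    Y, F = [], []
    for i in range(s):
        Yi = (sum(h * A[i][j] * F[j] for j in range(i))
              + sum(U[i][j] * y_in[j] for j in range(r)))
        Y.append(Yi)
        F.append(f(Yi))               # stage derivative F_i = f(Y_i)
    return [sum(h * B[i][j] * F[j] for j in range(s))
            + sum(V[i][j] * y_in[j] for j in range(r)) for i in range(r)]

# Sanity check: Euler is the GLM with A = [0], U = [1], B = [1], V = [1].
y = [1.0]
for _ in range(10):
    y = glm_step(lambda u: -u, y, 0.1, [[0.0]], [[1.0]], [[1.0]], [[1.0]])
print(y[0])  # identical to 10 Euler steps: (1 - 0.1)**10
```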
67. Examples of general linear methods
We will look at five examples:
A Runge–Kutta method
A “re-use” method
An Almost Runge–Kutta method
An Adams–Bashforth/Adams–Moulton method
A modified linear multistep method
73. A Runge–Kutta method
One of the famous families of fourth order methods of Kutta, written as a general linear method, is
    [ A  U ]   [ 0              0        0    0   | 1 ]
    [ B  V ] = [ θ              0        0    0   | 1 ]
               [ 1/2 − 1/(8θ)   1/(8θ)   0    0   | 1 ]
               [ 1/(2θ) − 1    −1/(2θ)   2    0   | 1 ]
               [ 1/6            0        2/3  1/6 | 1 ]
In a step from xn−1 to xn = xn−1 + h, the stages give approximations at
    xn−1, xn−1 + θh, xn−1 + (1/2)h and xn−1 + h.
We will look at the special case θ = −1/2.
76. In the special case θ = −1/2,
    [ A  U ]   [  0     0    0    0   | 1 ]
    [ B  V ] = [ −1/2   0    0    0   | 1 ]
               [  3/4  −1/4  0    0   | 1 ]
               [ −2     1    2    0   | 1 ]
               [  1/6   0    2/3  1/6 | 1 ]
Because the derivative at
    xn−1 + θh = xn−1 − (1/2)h = xn−2 + (1/2)h
was evaluated in the previous step, we can try re-using this value.
This will save one function evaluation.
79. A ‘re-use’ method
This gives the re-use method
    [ A  U ]   [ 0     0    0   | 1   0   ]
    [ B  V ] = [ 3/4   0    0   | 1  −1/4 ]
               [ −2    2    0   | 1   1   ]
               [ 1/6   2/3  1/6 | 1   0   ]
               [ 0     1    0   | 0   0   ]
Why should this method not be preferred to a standard Runge–Kutta method?
There are at least two reasons:
Stepsize change is complicated and difficult
The stability region is smaller
83. To overcome these difficulties, we can do several things:
Restore the missing stage,
Move the first derivative calculation to the end of the previous step,
Use a linear combination of the derivatives computed in the previous step (instead of just one of these),
Re-organize the data passed between steps.
We then get methods like the following:
92. The good things about this “Almost Runge–Kutta method” are:
It has the same stability region as for a genuine Runge–Kutta method
Unlike standard Runge–Kutta methods, the stage order is 2.
This means that the stage values are computed to the same accuracy as an order 2 Runge–Kutta method.
Although it is a multi-value method, both starting the method and changing stepsize are essentially cost-free operations.
97. An Adams–Bashforth/Adams–Moulton method
It is usual practice to combine Adams–Bashforth and Adams–Moulton methods as a predictor–corrector pair.
For example, the ‘PECE’ method of order 3 computes a predictor y*_n and a corrector yn by the formulae
    y*_n = yn−1 + h[ (23/12) f(yn−1) − (4/3) f(yn−2) + (5/12) f(yn−3) ],
    yn = yn−1 + h[ (5/12) f(y*_n) + (2/3) f(yn−1) − (1/12) f(yn−2) ].
It might be asked: Is it possible to obtain improved order by using values of yn−2, yn−3 in the formulae?
The answer is that not much can be gained because we are limited by the famous ‘Dahlquist barrier’.
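The ‘PECE’ name describes the order of operations: Predict, Evaluate, Correct, Evaluate. A sketch of one step of the pair above (the test problem and exact starting values are illustrative assumptions):

```python
import math

def pece_step(f, ys, fs, h):
    """One PECE step of the order-3 Adams-Bashforth/Adams-Moulton pair.

    ys = [y_{n-1}, y_{n-2}, y_{n-3}]; fs holds the matching f values.
    """
    # Predict with the 3-step Adams-Bashforth formula:
    y_star = ys[0] + h * (23/12 * fs[0] - 4/3 * fs[1] + 5/12 * fs[2])
    # Evaluate f(y*), Correct with the Adams-Moulton formula, Evaluate f(y_n):
    yn = ys[0] + h * (5/12 * f(y_star) + 2/3 * fs[0] - 1/12 * fs[1])
    return yn, f(yn)

f = lambda y: -y
h = 0.1
ys = [math.exp(-2 * h), math.exp(-h), 1.0]  # exact starting values
fs = [f(y) for y in ys]
for _ in range(8):                          # advance from x = 0.2 to x = 1.0
    yn, fn = pece_step(f, ys, fs, h)
    ys, fs = [yn] + ys[:2], [fn] + fs[:2]
print(abs(ys[0] - math.exp(-1.0)))          # order 3 accuracy
```

Each step costs two evaluations of f; the implicit Adams–Moulton formula is never actually solved, since the predictor supplies the argument for its unknown term.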
102. A modified linear multistep method
But what if we allow off-step points?
We can get order 5 if we allow for two predictors, the first giving an approximation to y(xn − (1/2)h).
This new method, with predictors at the off-step point and also at the end of the step, is
    y*_{n−1/2} = yn−2 + h[ (9/8) f(yn−1) + (3/8) f(yn−2) ],
    y*_n = (28/5) yn−1 − (23/5) yn−2 + h[ (32/15) f(y*_{n−1/2}) − 4 f(yn−1) − (26/15) f(yn−2) ],
    yn = (32/31) yn−1 − (1/31) yn−2 + h[ (64/93) f(y*_{n−1/2}) + (5/31) f(y*_n) + (4/31) f(yn−1) − (1/93) f(yn−2) ].
107. Order of general linear methods
Classical methods are all built on a plan where we know in advance what we are trying to approximate.
For an abstract general linear method, the interpretation of the input and output quantities is quite general.
We want to understand order in a similarly general way.
The key ideas are:
Use a general starting method to represent the input to a step.
Require the output to be similarly related to the starting method applied one time-step later.
112. The input to a step is an approximation to some vector of quantities related to the exact solution at xn−1.
When the step has been completed, the vectors comprising the output are approximations to the same quantities, but now related to xn.
If the input is exactly what it is supposed to approximate, then the “local truncation error” is defined as the error in the output after a single step.
If this can be estimated in terms of h^{p+1}, then the method has order p.
We will refer to the calculation which produces y^{[n−1]} from y(xn−1) as a “starting method”.
117. Let S denote the “starting method”, that is, a mapping from R^N to R^{rN}, and let F : R^{rN} → R^N denote a corresponding finishing method, such that F ∘ S = id.
The order of accuracy of a multivalue method is defined in terms of the diagram
[Diagram: a square with the exact flow E across the top, the starting method S down both sides and the method M across the bottom; the two paths S ∘ E and M ∘ S agree to within O(h^{p+1}) (h = stepsize).]
123. By duplicating this diagram over many steps, global error estimates can be found.
[Diagram: the one-step squares chained together — E repeated across the top, S down the sides, M repeated along the bottom — ending with the finishing method F; the accumulated errors give a global error of O(h^p).]
128. Methods with the IRK Stability property
An important attribute of a numerical method is its “stability matrix” M(z), defined by
    M(z) = V + zB(I − zA)^{−1} U.
This represents the behaviour of the method in the case of linear problems.
That is, for the problem y′(x) = qy(x), we have
    y^{[n]} = M(z) y^{[n−1]}, where z = hq.
In the special case of a Runge–Kutta method, M(z) is a scalar, R(z).
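This formula is direct to evaluate numerically. A sketch, assuming numpy, applied to the 3-stage, 2-value ‘re-use’ method shown earlier (its tableau, as read from the talk, is restated inline):

```python
import numpy as np

def stability_matrix(A, U, B, V, z):
    """M(z) = V + z B (I - z A)^{-1} U for a general linear method."""
    A, U, B, V = map(np.asarray, (A, U, B, V))
    s = A.shape[0]
    return V + z * B @ np.linalg.solve(np.eye(s) - z * A, U)

# The 3-stage, 2-value 're-use' method:
A = [[0, 0, 0], [3/4, 0, 0], [-2, 2, 0]]
U = [[1, 0], [1, -1/4], [1, 1]]
B = [[1/6, 2/3, 1/6], [0, 1, 0]]
V = [[1.0, 0.0], [0.0, 0.0]]
w = np.linalg.eigvals(stability_matrix(A, U, B, V, 0.1))
print(w)  # the dominant eigenvalue approximates e^z for small z
```

At z = 0 the stability matrix reduces to V; for small z one eigenvalue tracks e^z while the other, parasitic, eigenvalue stays small but nonzero, which is why this method's stability region differs from that of a Runge–Kutta method.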
132. To solve “stiff” problems, we want to use A-stable methods or, even better, L-stable methods.
In the case of Runge–Kutta methods the meanings of these are:
For an A-stable method,
    |R(z)| ≤ 1, if Re z ≤ 0.
An L-stable method is A-stable and, in addition, R(∞) = 0.
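As a concrete illustration (the implicit Euler method is not discussed on this slide; it is the standard minimal example of an L-stable method, with stability function R(z) = 1/(1 − z)):

```python
def R_implicit_euler(z):
    """Stability function of the implicit (backward) Euler method."""
    return 1 / (1 - z)

# A-stability: |R(z)| <= 1 whenever Re z <= 0; by the maximum principle it
# suffices to sample the imaginary axis, where |R(iy)| = 1/sqrt(1 + y^2).
for y in (0.1, 1.0, 10.0, 100.0):
    assert abs(R_implicit_euler(1j * y)) <= 1.0
# L-stability: in addition, R(z) -> 0 as z -> -infinity.
print(abs(R_implicit_euler(-1e6)))  # ~ 1e-6
```

By contrast, an explicit Runge–Kutta method has a polynomial R(z), so |R(z)| grows without bound in the left half-plane and the method can be neither A- nor L-stable.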
136. A general linear method is said to have “Runge–Kutta stability” if the stability matrix M(z) for the method has characteristic polynomial of the form
    det(wI − M(z)) = w^{r−1} (w − R(z)).
This means that the method has exactly the same stability region as a Runge–Kutta method whose stability function is R(z).
138. Do methods with RK stability exist?
Yes, it is even possible to construct them with rational operations by imposing a condition known as “Inherent RK stability”.
Methods exist for both stiff and non-stiff problems for arbitrary orders, and the only question is how to select the best methods from the large families that are available.
We will give just two examples.
144. Implementation questions for IRKS methods
Many implementation questions are similar to those for
traditional methods but there are some new challenges.
We want variable order and stepsize and it is even a
realistic aim to change between stiff and non-stiff
methods automatically.
Because of the variable order and stepsize aims, we wish
to be able to do the following:
Estimate the local truncation error of the current step
Estimate the local truncation error of an alternative
method of higher order
Change the stepsize with little cost and with little
impact on stability
Scientific Computation and Differential Equations – p. 35/36
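The standard way the error estimate feeds into stepsize selection (a textbook formula, not the IRKS-specific machinery of the talk) is to use the fact that the local truncation error of an order-p method scales like h^(p+1). A minimal sketch, with hypothetical parameter names:

```python
# Sketch of conventional stepsize control: given a local error estimate
# err for a step of size h taken with a method of order p, propose the
# next stepsize so the predicted error matches the tolerance tol.
# The safety factor and the clamping bounds are typical illustrative
# values, not ones prescribed by the talk.
def next_stepsize(h: float, err: float, tol: float, p: int,
                  safety: float = 0.9,
                  fac_min: float = 0.2, fac_max: float = 5.0) -> float:
    # err ~ C * h**(p+1), so rescale h by (tol/err)**(1/(p+1)),
    # clamped to avoid abrupt stepsize changes
    factor = safety * (tol / err) ** (1.0 / (p + 1))
    return h * min(fac_max, max(fac_min, factor))

# Error above tolerance: the step is shrunk
print(next_stepsize(h=0.1, err=1e-4, tol=1e-6, p=4))
# Error well below tolerance: the step is grown
print(next_stepsize(h=0.1, err=1e-8, tol=1e-6, p=4))
```

The clamping bounds matter for the stability concern raised on the slide: large jumps in stepsize are exactly what a variable-stepsize implementation must avoid.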
149. We believe we have solutions to all these problems and
that we can construct methods of quite high orders which
will work well and competitively.
I would like to name, with gratitude and appreciation, my
principal collaborators in this project:
Zdzisław Jackiewicz Arizona State University, Phoenix AZ
Helmut Podhaisky Martin Luther Universität, Halle
Will Wright La Trobe University, Melbourne
I also express my thanks to other colleagues who are
closely associated with this project, especially:
Robert Chan, Allison Heard, Shirley Huang,
Nicolette Rattenbury, Gustaf Söderlind, Angela Tsai.
Scientific Computation and Differential Equations – p. 36/36